How Do I Determine My Usage on Bluemix?

There are a lot of reasons that I publish a blog.  One of them is so that I can answer a question once, and then never have to answer it again.  Recently I have been getting questions from our customers about their usage of services on Bluemix.  It’s pretty easy to check, and I’ll show you how.

First you’ll need to log into your Bluemix account.  Once in there, click on your user profile, which is in the upper right-hand corner of your browser.

[Screenshot: the Bluemix account menu]

Once you click on this, you will see a menu on the right-hand side of your browser.  Click on the Account option in this pane, and you should now see your account details in the main browser screen.

[Screenshot: Bluemix account details]

Now you can see a breakdown of the usage in your Bluemix account.  The main screen defaults to showing usage details, and it looks like this:

[Screenshot: the usage details screen]

As you scroll down you can see your usage broken down by any number of different factors.  You can see things broken out by organization, or you can scroll down and see the charges being incurred for the various services that you are using on Bluemix.  This should show you what your usage has been over a two-month timeframe, and give you some idea of what your costs will be.

If you look at the navigation pane on the left of your browser, you will notice an option called Notifications.  The screen looks like this:

[Screenshot: the notifications screen]

Here you can set thresholds for when you will be notified of usage.  Bluemix will send you notifications when you reach 80%, 90%, and 100% of the thresholds that you enter on this screen.  You can set spending alerts based on overall account spending, spending on runtime resources, spending on containers, spending on all services, or even set alerts for specific services.

So now you know how to monitor your Bluemix usage, and you also know how to set alerts for your Bluemix spending.  Now get out there and start creating your software solutions in the cloud!

Help for Managing Your Watson Bluemix Retrieve and Rank Instance

Every once in a while I see something that has the potential to save people time and aggravation.  When I see these things, I like to blog about them to help other members of the Watson cognitive community.  Yesterday I saw one of those things.

OK – I actually saw it a couple of weeks ago, while it was being tested, but it just became available to the public yesterday on the Cognitive Catalyst.  This is a sub-project for Watson under the larger IBM Open project.  If you navigate to the bottom of the IBM Open project page, you will see a variety of resources available to developers using Watson – from a variety of SDKs, to the Cognitive Catalyst project.

For those of you who are not aware, the Cognitive Catalyst site has a bunch of “open source” tools and utilities that developers can use to help them manage their Watson services deployed on Bluemix.  All of the code for the Cognitive Catalyst projects is available via the Cognitive Catalyst GitHub project, and we encourage anyone to contribute to the projects, or to submit projects of their own.  There are some really good tools and utilities out there already.

The latest addition to these tools is the ignite-electron tool.  The author of this tool, Andrew Ayres, is a frequent contributor to the Cognitive Catalyst site.  The ignite-electron tooling provides a UI that simplifies the creation and management of Retrieve & Rank (R&R) components (Solr clusters, configurations, collections, and rankers), while also providing an easy-to-use interface for the creation of ground truth.  This eliminates the need to use a bunch of curl commands (which you probably have in a text file and cut/paste into terminal windows), and provides a visualization of the service to reduce confusion.  The tool also eliminates the inevitable errors that come from copying and pasting cluster, ranker, and document IDs.
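If you have not managed R&R by hand, the curl commands in question look roughly like this.  This is a hedged sketch – the username, password, and URL here are placeholders, so check the Retrieve & Rank service documentation for the exact endpoints:

# List your existing Solr clusters
curl -u "<username>:<password>" \
  "https://gateway.watsonplatform.net/retrieve-and-rank/api/v1/solr_clusters"

# Create a new cluster - the cluster ID returned in the response then has
# to be copied into every later configuration, collection, and query command
curl -X POST -u "<username>:<password>" \
  "https://gateway.watsonplatform.net/retrieve-and-rank/api/v1/solr_clusters"

Every one of those copied IDs is a chance for a typo, which is exactly the failure mode that the tool removes.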

That alone would be pretty valuable, but the tool will also help reduce the amount of time required for a subject matter expert (SME) to create ground truth.  Currently an SME would need to repeatedly go through the process of making a curl command to query a collection, adding the question to a CSV file, copying document IDs into the CSV, and adding relevance scores.  With the ignite-electron tool the SME can provide a list of questions, and the tool will step through those questions, allowing the SME to simply assign relevance scores.  All of the POST requests, and the creation of the CSV file, are handled automatically by the tool.
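For the curious, the ground truth that comes out of this process is just a CSV that pairs each question with document IDs and relevance scores.  Here is a hypothetical sample (the document IDs and relevance values are made up for illustration, they are not from a real collection):

"How do I reset my password?",doc_0041,4,doc_0187,1,doc_0202,0
"What ports does the server use?",doc_0007,3,doc_0114,2

Building a file like this by hand, one curl query at a time, is exactly the tedium that the tool automates away.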

This is one of those simple utilities and tools that should be in the toolset of any developer or subject matter expert who uses the IBM Watson Retrieve & Rank capability.  It will make the management, maintenance, and training of your R&R instance a lot easier.

What Does Watson Really Do?

Moving to a new role is always exciting, but always tough.  There is a learning curve, new technology, new terminology, new business processes, and new people to meet and work with.  Part of starting as the Manager of the Watson architects was addressing the learning curve around the technology and the terminology.  Since I know that any customer or partner looking to use Watson will need to go through a similar exercise, I thought that it might make sense to capture some of the better sources of information that helped me.

Getting Started

Once I started my new job, the first question that I had to answer was, “What exactly is Watson?  What does it do?”  I had to have an answer to this question, because my wife, kids, parents, and friends were asking me, and I felt dumb when I couldn’t answer this simple question.  One of the better answers that I have found is an 8-minute video called IBM Watson: How it Works.  I’m not usually big on videos, and I really don’t like marketing fluff (I’m a Tech weenie, I have always been a Tech weenie), but this one is short and to the point.  There are shorter snippets of this information on the SmarterPlanet – What is Watson page that you can look at as well.

The video introduces some key terminology that is important to understand.  It spends the first couple of minutes introducing cognitive computing, and how this is fundamentally different from more traditional computing.  If you’re not a technical person, this is pretty easy to understand.  This piece is important for more traditional programmers to understand, because it dispels the notion that this is just a turbo-charged expert system.  It is actually more than that.

It also talks about the differences between cognitive computing and simple search capabilities.  The section on natural language processing is really interesting.  It should also help you begin to understand some of the concepts that are important to cognitive computing (like data curation, corpus, machine learning, training, and question/answer pairs).

I also found an interesting article, No UI is the New UI, which covers some interesting ground on what the author calls artificial intelligence (and what I call cognitive computing).  The processing of natural language, the conversion of speech-to-text, and the conversion of text-to-speech are all Watson services that you can access on the Bluemix dashboard.  Bluemix is the IBM Cloud platform that allows you to select particular services (or APIs), and combine them to create custom Cloud applications.  The Watson services and capabilities are all cloud-enabled and cloud-ready for you, and you can see a list of them in the Bluemix catalog.  Not all of them are intuitively obvious at first (“What is the Alchemy API?”), but if you click on the tiles for each service, you will get a synopsis of what that service offers, and you can then drill down into more information on the service.

Cognitive computing is rapidly maturing and is beginning to be seen in the products and services that you use today.  Understanding the practical capabilities and limitations of cognitive computing will be critical in the future.  Because of this, I plan on following up on some of the services themselves, and the implications of using these services, in future blog posts.

Evolution, Change, and Success

I am now transitioning to a new role, and leaving my old job behind. People always seem to take this job change thing with either:

  • Malicious glee – basically laughing and telling everyone that they have left behind “so long suckers!”, or
  • False modesty – with a carefully worded message telling everyone (and if I didn’t mention you I am so VERY sorry) what an honor it was to work with them, and that we should all get together for lunch sometime.

I will try to do neither (although I am sure my close friends will be able to provide examples of both of the above behaviors), and instead I will attempt to focus on what is in front of me.  For me, it is an exciting time, a time when I am eager to look ahead to new challenges and new experiences.  I think the thing to do when changing jobs/roles is to look back and see how you’ve done, and think about what makes people successful.  I was reminded of this by a recent LinkedIn post by Patrick Leddin, which outlined the Five Invaluable Behaviors of Top Performers.  I have read similar posts on performance from other writers, but Leddin’s post made me reflect on the role that I had played as leader of the Jazz Jumpstart, Emerging Technologies, and DevOps/Cloud teams.  His five behaviors read like this:

  • Deliver Results; Don’t Just Pleasantly Accomplish Activities
  • Solve Problems; Don’t Just Point Them Out
  • Learn New Stuff; Don’t Just Be Comfortable
  • Experience the Customer’s World; Don’t Just Observe It
  • Provide Value That Is Not Easily Replaced; Don’t Just Do the Job

These all seem like very simple things, and they are.  The hard part is maintaining a focus on these areas as you deal with the day-to-day challenges that you and your team face.  The really hard part is putting this into practice as you go about doing the non-glamorous parts of your job.

So why write about this in the first place?  I do it as a way to remind myself to focus on these important areas as I move into my new role, and as a way to help the teams that I leave behind realize their value, and to remind them to focus on these things even when I am not around.  I hope one of my new team members references this blog post in the future, while trying to convince me to change some decision I have made that doesn’t hold to these principles.

This will be the final post in this series, and I have started rebranding this blog.  That means new pictures (like my dog Jack) and new areas of focus.  I will still be here at the same old address, and you can still come here to find my past articles on the economics of software development, Agile development, the Jazz tools, DevOps, and Cloud technologies.  But from now on you will read blog posts about new technologies, like IBM Watson, and my observations on how to lead a successful team in helping to launch new technologies.

Using Bluemix DevOps Services to Support Multiple Deployments

In my earlier blog post on using Bluemix to deploy a simple Cloud Calendar on Bluemix, I added a follow-up a day or so later, An Easier Cloud Calendar, that discussed using the Deploy to Bluemix button.  Well, my post has gotten some responses from within IBM, with various teams wanting to use this simple cloud-based calendar to provide a widget for a team calendar capability on their RTC dashboards.  So now I have a few different teams that would like me to deploy this simple solution for them.

Well, I started out just showing people how they could do this themselves on Bluemix, because it really is quite easy.  However, some of the people asking for this are not technical types; they’re business types or (worse) management types.  They are afraid or unable to do this for themselves.  I’ll address the fear of cloud in a different blog post in the future.  The main thing here is that I ended up using the same code (or similar code) to deploy about four different types of calendars for four different groups within IBM.

How do I manage to do this without driving myself crazy?  I use the DevOps services available to me in Bluemix, and I configured a delivery pipeline that allows me to build, test, and then deploy four different variants of the same basic application.  The project was already out there in the Bluemix-hosted Git repository; what I needed to do was make some small changes to how I deploy, and some small changes for each flavor (or variant) of my application.

Overview

The original application was a Node.js app, and was pretty simple.  For details on it, view my original blog post called How About a Generic Calendar in the Cloud?.  To deploy this project, I went into my Test Calendar project and clicked on the Build & Deploy button.  This brought me to the Build and Deploy pipeline area.  There I saw a simple build already set up for my Node.js project.  This simple build executes every time I push new changes to the master branch of my Git repository.  That’s perfect: I now have a continuous build (not to be confused with continuous integration) environment set up for my project.

Simple Deploy

Next I need to add a new stage to deploy my test configuration.  This stage takes the outputs of the build stage as inputs, and deploys them out to my test calendar environment.  This “test environment” is a simple application/MongoDB pair that I have out in the Bluemix staging environment.  Since I have multiple instances that I want to deploy, under different names, in different Bluemix environments/spaces, I will need different manifest.yml files.  The manifest.yml file indicates where the application gets deployed, and the services that it will use.

So I decide to create multiple manifest.yml files, with a naming convention that indicates which deployment each belongs to.  I have a manifest_TestCalendar.yml file with the settings for my TestCalendar (which is my test configuration), and a manifest_ToxCalendar.yml file with the settings for my team calendar (which is my production configuration).  I actually have five of these, but I’ll keep it simple and just highlight these two for the purposes of explaining things here.  My manifest_TestCalendar.yml file looks like this:

applications:
- disk_quota: 512M
  services:
  - mongolab-TestCalendar
  host: TestCalendar
  name: TestCalendar
  random-route: true
  path: .
  domain: stage1.mybluemix.net
  instances: 1
  memory: 512M
declared-services:
  mongolab-TestCalendar:
    label: mongodb
    plan: 100

and my manifest_ToxCalendar.yml file looks the same, except for the host line (which specifies “ToxCalendar”), the name line (which specifies “ToxCalendar”), and the declared services (which name a different MongoDB instance).  Note that the name of the MongoDB service instance MUST match the name of the service as shown in Bluemix.  You’ll need to go and create that service first, before you try using this to spin up new instances of your application.  Also note that the domain here points at an internal IBM instance of Bluemix; if you do this on the public Bluemix, you’ll use mybluemix.net as the domain.
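One side note: if you have not created the MongoDB service instance yet, you can do that in the Bluemix catalog, or from the Cloud Foundry command line.  Here is a minimal sketch using the label and plan from the manifest above (the instance name is just my example; it must match the name in your declared-services section):

# Create the service instance that manifest_TestCalendar.yml expects
cf create-service mongodb 100 mongolab-TestCalendar

You would do the same, with a different instance name, for each variant that you plan to deploy.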

Configuring the Test Stage

Back to our deployment pipeline.  When I look at the pipeline, I decide to leave the first deployment stage set to deploy automatically, by leaving the stage trigger set to “Run jobs when the previous stage is completed”.  This means that this stage will run whenever the build stage successfully completes.  Since I want a continuous integration environment, the target of this deploy should be my test instance.  What I will have isn’t really a true continuous integration environment, as I don’t have any automated tests being run as part of the deployment, but you can see how you could easily modify the stage to support this.

[Screenshot: the Deploy to Test stage]

So we’ve decided to do an automated deploy of my test instance on any build.  Click on the gear icon in the stage, and choose Configure Stage.  You will now see a dialog box where you can configure your DevOps pipeline stage.  In this dialog, check the Input tab to make sure that the stage is set to run automatically, by making sure that the Run jobs when previous stage is completed option is selected.  Then click on the Jobs tab and make sure that you have a Deploy job selected (if you don’t, then go and create one).  The Deployer Type should be set to Cloud Foundry, and the Target (where this should be deployed) should be your Bluemix environment.  Once you select the target, your organization and space drop-down selections should get populated with the organizations that you are a part of, and the spaces available in those organizations (for more on organizations and spaces in Bluemix/Cloud Foundry, read Managing your account).  Also enter your Application Name in the field provided, and make sure that this application name matches the name of the application in your Bluemix console, as well as the application name that you used in the manifest file.  In this case, it is the name of the application that we used in the manifest_TestCalendar.yml file, which was “TestCalendar”.

Finally, you will see a grey box with the deploy script.  In order to make sure that we use the manifest_TestCalendar.yml file to deploy this, we have to copy it over the existing manifest.yml file.  It is a simple Unix/Linux copy command, and your resulting deploy script should now look like this:

#!/bin/bash
# Move proper manifest file into place
cp manifest_TestCalendar.yml manifest.yml
# push code
cf push "${CF_APP}"
# view logs
#cf logs "${CF_APP}" --recent

The result of this is that we copy the version of the manifest file that we want into place, and then CloudFoundry just does the rest.  Go ahead and make a simple code change (just update a comment, or the README file), commit and push it (if you’re using Git), and watch as the application gets automatically built, and then the test calendar gets deployed.
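For example, assuming your Git remote is already set up, a change as trivial as this is enough to exercise the whole pipeline:

# Any push to the master branch kicks off the build stage,
# which in turn triggers the automated deploy to the test instance
git commit -am "Touch the README to exercise the pipeline"
git push origin master

Within a few minutes you should see the build stage complete, followed by the deploy to the test environment.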

Configuring the Production Stage

Now if we want to deploy our production instance using the same mechanism, the process is simple.  We just click on the gear icon in the stage, and choose Clone Stage.  This creates a new stage, just like our stage to deploy our test instance.  Click on the gear icon for this new stage.  You will now see a dialog box where you can configure your cloned DevOps pipeline stage.  In this dialog, check the Input tab to make sure that the stage is NOT set to run automatically, by making sure that the Run jobs only when this stage is run manually option is selected.  Then click on the Jobs tab and make sure that you have a Deploy job selected.  The Deployer Type and the Target (where this should be deployed) should remain the same.  Once you select the target, your organization and space drop-down selections should get populated with the organizations that you are a part of, and the spaces available in those organizations.  If you want to deploy your production instance to a different organization and space, change the settings for these fields.  Enter your Application Name (for the production application) in the field provided, and make sure that this application name matches the name of the application in your Bluemix console, as well as the application name that you used in the manifest file.  In the case of my production instance, it is the name of the application that we used in the manifest_ToxCalendar.yml file, which was “ToxCalendar”.

Finally, you will see a grey box with the deploy script.  In order to make sure that we use the manifest_ToxCalendar.yml file to deploy this, we have to copy it over the existing manifest.yml file.  We’ll need to modify the cloned deploy script to reflect this change:

#!/bin/bash
# Move proper manifest file into place
cp manifest_ToxCalendar.yml manifest.yml
# push code
cf push "${CF_APP}"
# view logs
#cf logs "${CF_APP}" --recent

The result of this is that for the production stage, we copy the version of the manifest file that we want (manifest_ToxCalendar.yml) into place for use by Cloud Foundry.  Since this stage has been configured as a manual stage, you will only deploy a new version of your production calendar when you manually drag a new build to the stage.

What does it mean?

[Screenshot: the completed deployment pipeline]

Now we have a deployment pipeline built that allows us to do automated builds and deployments of changes to our calendar application to the test environment.  It should look something like the screenshot above.

Once we are satisfied with our changes, we can then just drag the appropriate build from the App Build stage, and drop it on the stage where we deploy to production (Deploy to Bluemix Staging – Production in the screenshot).  This will start the deployment process for the production instance of our calendar application.

What about different variants of our calendar application?

Now we can use this same technique of having multiple file variants to support different deployed “flavors” or variants of our calendar application.  I do the same thing with the code in the index.html file of the calendar application.  This file is in the public subdirectory.  I can create two variants of this file, and save them in different files (say index_ToxCalendar.html and index_TestCalendar.html).  The change to the index_TestCalendar.html file is the addition of a single line of code, which will display a heading on the calendar.  The code snippet looks like this:

<body onload="init();">
    <div id="scheduler_here" class="dhx_cal_container" style='width:100%; height:100%;'>
    <center><b>TEST CALENDAR</b></center>
        <div class="dhx_cal_navline">

The single added line just puts a title line (“TEST CALENDAR”) in the application.  For other instances, I can make similar changes to similar files.  To get the deployed application to use the correct version of the index.html file that I want, I need to make one more modification to the deploy script.  It should now look like this:

#!/bin/bash
# Move proper manifest file into place
cp manifest_TestCalendar.yml manifest.yml
# Move proper index.html file into place
cp public/index_TestCalendar.html public/index.html
# push code
cf push "${CF_APP}"
# view logs
#cf logs "${CF_APP}" --recent

Now we have a specific variant for our test calendar, deployed to the test environment.  You would then do similar changes and file variants for all of the other deployed variants of this that you want to deploy and support.
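As an aside, once you get past a handful of variants, you may not want to maintain a separate cloned deploy script for each stage.  One possible refinement, sketched below, is to drive the file selection from a stage property.  CALENDAR_VARIANT here is a hypothetical environment property that you would define yourself in each stage’s configuration; it is not something Bluemix provides for you:

#!/bin/bash
# CALENDAR_VARIANT is a hypothetical stage property (e.g. "TestCalendar"
# or "ToxCalendar") - set it per stage so one script serves every variant
cp "manifest_${CALENDAR_VARIANT}.yml" manifest.yml
cp "public/index_${CALENDAR_VARIANT}.html" public/index.html
# push code
cf push "${CF_APP}"

With that in place, each cloned stage differs only in its property value, not in its script.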

Basics for Upgrading ANY Software

Recently I became involved with a customer upgrading UrbanCode Deploy.  I want to share my experiences, some lessons learned, and some IT basics.

Our customer was looking to upgrade their UrbanCode Deploy to the latest version.  Doing this meant that they had to upgrade in steps, as outlined in the IBM Technote on Upgrading UrbanCode Deploy.  The customer understood this, and they began to work with the IBM support team on getting their upgrade done.  They had a simple plan which outlined the high level steps involved, and they had received a couple of patches to support issues specific to their environment.  I became involved when they were one week away from doing the upgrade on their production systems.

As I became more involved, I became increasingly alarmed at what I was seeing.  The migration plan was too simple – it had no step-by-step guidance on which values to select for configuration options, installation paths, or even server and environment names.  This left far too much room for error, and made the upgrade process prone to failure when executed in the production environment.  That leads us to our first general IT lesson:

General Lesson #1 – Any upgrade plan must be able to be executed by someone OTHER than the tool administrator or the author of the plan

One other factor that concerned me was that I saw no detailed “rollback plan”.  Part of the upgrade plan HAS to include what staff should do if the production upgrade goes bad.  A failure could be due to power outages, lack of resources, or some other unforeseen circumstance.  You need to have detailed instructions (see General Lesson #1 above) on how to restore the existing environment if the upgrade fails.  Nobody likes to plan for this, but if the upgrade does fail for some reason, people will be under pressure and tired.  They need easy-to-understand and easy-to-execute instructions on how to restore the production environment.  This is our addition to the first lesson:

General Lesson #1a – Any upgrade plan without a “rollback” section instructing how to restore production to its pre-upgrade configuration is not complete
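What does a rollback section look like in practice?  At a minimum, it starts with a verified backup taken immediately before the upgrade.  The sketch below is a rough outline only; the paths, the service command, and the database tooling are all assumptions, so substitute the ones that match your installation:

#!/bin/bash
# Hypothetical pre-upgrade backup for an UrbanCode Deploy server
STAMP=$(date +%Y%m%d-%H%M)
# 1. Stop the server so the backup is consistent (the command varies by platform)
/etc/init.d/ucd-server stop
# 2. Back up the database (this example assumes PostgreSQL and a database named ucd)
pg_dump ucd > /backups/ucd-db-${STAMP}.sql
# 3. Back up the server installation directory and its configuration
tar -czf /backups/ucd-server-${STAMP}.tar.gz /opt/ibm-ucd/server

The rollback instructions are then the mirror image: stop the new server, restore the database dump and the installation directory, and restart – written out step by step, so that a tired person in the middle of the night can follow them.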

One of the other areas where I had concerns was that the customer was planning on moving ahead, even though they were receiving delivery of post-migration scripts the day before the planned upgrade of their production environment.  They kept insisting that they had tested everything in their staging environment, but I knew that they would not be able to adequately test the post-migration scripts (more on those later) prior to upgrading production.  They had tested things extensively in their staging environment, but they had only tested individual parts of the upgrade process.  This leads us to our second general IT lesson:

General Lesson #2 – NEVER upgrade software on production systems unless you have done a FULL upgrade scenario in your staging environment

I can hear the grumbling already.  “We can’t push this out, it’s our only upgrade window for the next month.”  I understand that this can be frustrating, and can seem like being over-prepared at times, but this is a lesson drilled into people in our business by traumatic experiences of the past.  A decision to proceed with an upgrade of a production environment should only be made when you have prepared enough that the risk of ANYTHING going wrong is negligible.  Our systems are critical to the organizations that employ us, and treating them with a cavalier attitude only puts our profession on trial.  The decision to upgrade without having done a full upgrade in a staging environment has been called “IT malpractice” by some of my friends in the industry.  It’s a great term, and one I plan to use in the future.  The basic question is this: “Is the potential pain of a botched upgrade worse than the pain of delaying the upgrade?”  If you haven’t covered the lessons spelled out above, assume that your upgrade will NOT be successful.

The customer was also upgrading from version 4.x to the latest version of UrbanCode Deploy.  This meant some changes to the architecture of the product, which have a direct impact on end users.  The first of these is a change to the way that security is handled by UrbanCode.  You really need to be aware of your security settings and the changes that may occur as part of the upgrade.  If you are unfamiliar with the UrbanCode security model, then review it and make sure that you have a clear understanding of how roles, permissions, and teams impact the ability of end users to deploy software in your environment.

UrbanCode Lesson #1 – Understand the UrbanCode Security Model, and know how it is impacted by the upgrade

Another thing that happens when upgrading to UrbanCode Deploy 6.x is that your resources move from being in a flat list to a tree structure.  This allows you to organize your resources into common groups, and find them much more easily in the UI (i.e., no 2000-item pull-down menus).  During the upgrade to 6.x, the UrbanCode Deploy resources will be migrated into a “flat” tree structure, with everything at a single level.  This has an adverse effect on performance for tool administrators, as page loads become slow if you have a large number of resources.

In order to address this, and as a way to better organize your resources, you should refactor your resources into the tree structure provided.  There is a simple script that IBM can share with you that will refactor resources based on the name of the resource.  You can read Boris Kuschel’s blog on how he deployed this on Bluemix.  Essentially the script just breaks up the resources alphabetically.  You’ll probably want to alter the script to refactor your resources based on some other criteria, but the code is all there.

UrbanCode Lesson #2 – Understand the changes to the UrbanCode Resource Model, and know how it impacts your upgrade

Also keep in mind that some of this refactoring could potentially impact your procedures, depending on how you reference those resources.

Summary

Upgrading any software in your production environments is a risk.  We often think of upgrades as being “simple”, but a tool upgrade is often dictated by a foundational change in a product.  These foundational changes will often impact how the product is supported and how it operates, and this may have an impact on your environment.  ALWAYS follow standard IT best practices and TEST upgrades in legitimate testing environments.  Make sure that you have a script (especially for manual steps) for the upgrade that has been fully run through without issues in your testing environment, prior to attempting to upgrade your production environments.  I hate seeing my customers go through painful situations that could have been easily avoided with some risk management and planning.

An Easier Cloud Calendar

Timing is… everything.  About four hours after I published my last post, How About a Generic Calendar in the Cloud?, I saw a post from one of my team members, Sean Wilbur, called Streamlining Your Bluemix Project for One Button Sharing.  It was a great post, and once I followed the directions that Sean outlined, I was able to add a simple little “Deploy to Bluemix” button to my project.

So now, if you would like to get a copy of my generic calendar project to play with for yourself, it is really easy.  Just make sure that you have a Bluemix account, and that you have a linked DevOps Services account.  Then navigate to my project in DevOps Services (it’s the dtoczala|ULLCloudCalendar project).  Once there, look at the README.md file displayed on the project page, and find the “Deploy to Bluemix” button.  It looks like this:

[Image: the Deploy to Bluemix button]

Just press that button and you will get a project created in DevOps Services that is a fork of my original project.  The automatic creation of the project will throw an error during the deployment, but you will be able to easily fix this.  The error is due to a problem in the manifest.yml file: we are currently unable to create and bind services for our application through the manifest (see Sean’s question on this in the forum).  You can easily fix this by doing three things:

  1. In your DevOps Services console, configure your Deploy stage – In your newly created project, press the Build & Deploy button, and then configure the deploy stage.  You will add a deploy job to the deploy configuration that will do a “cf push” of your application.  Then try executing it.  It will still fail (because our MongoDB service is not present), but it will create a new Bluemix application for you.  This is your version of the ULL Cloud Calendar app.
  2. In your Bluemix console, add and bind the MongoDB service – This is straightforward.  Just add the MongoDB service and make sure to bind it to your new application (you can also do this from the command line, as sketched after this list).  When you add the service, Bluemix will ask if you would like to restage your application.  Answer yes, and wait for the application to be deployed again.
  3. In your Bluemix console, click on the link for your new app – Just click on the link for the route to your new application.
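If you are more comfortable in a terminal, steps 2 and 3 can also be done with the Cloud Foundry CLI.  Here is a sketch with placeholder names; substitute your own application and service instance names:

# Create and bind the MongoDB service, then restage the application
cf create-service mongodb 100 mongolab-ULLCloudCalendar
cf bind-service ULLCloudCalendar mongolab-ULLCloudCalendar
cf restage ULLCloudCalendar

Either way, once the application restages cleanly, the route link in the Bluemix console will take you to your running calendar.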

Now once that little issue with the manifest.yml is cleared up, you will be able to share your Bluemix applications with the press of a button.  Bringing up applications and capabilities in the cloud is getting easier and easier!


