IBM Cloud Content, Charges, and Delivery for Developer Teams

Note: I updated this a day or two after the original posting, as people sent me some good links to other resources that I wanted to share.

I have been getting this question constantly for the past month, and I have to do a presentation on it for one of my customers, so I figured that it is probably a good topic to share with a wider audience.  I am going to talk about how IBM Cloud customers can organize, manage, and use the IBM Cloud to develop applications and services, which they then can deliver to their customers.

First the Basics

First we need to cover the basics.  I have discussed the basic organization of an IBM Cloud account in my earlier post, “Bluemix and Watson – Getting Started Right” (note that the IBM Cloud used to be called “Bluemix”).  In that article, I show you the basic organization of an IBM Cloud Account, its Organizations, and the Spaces underneath those Organizations.  Most of our customers will organize their Accounts/Organizations/Spaces along the lines shown in Figure 1.

Figure 1 – A Typical Enterprise Account Structure

Note that right now there is no support for the concept of an Enterprise Account (or a parent of multiple IBM Cloud accounts), but when that capability DOES become available, I would see it being used as shown in the diagram above.  Now let’s look at what happens when you begin a project.

Launching a Project

When launching a project, you need to determine a few different things.  The most important piece is to figure out what KIND of a project you have.  I will divide projects into 4 major categories for the purposes of this conversation, and they are:

  • Internal Projects – projects that are done by your software development teams, and provide systems/applications for your organization.  This includes internal POCs, and other “exploratory” and “innovation” work.
  • Product Projects – projects that are done by your software development teams, and provide systems/applications that you market and sell as a product.  These products/services are then exposed or delivered to your customers.
  • Hosted Projects – projects that are done by your software development teams, and provide systems/applications that you host and maintain for a single customer.  This may also include products where you host unique copies (instances) for different customers.  Think of your favorite SaaS product.
  • Turnkey Projects – projects that are done by your software development teams, and provide systems/applications that you finish development on, and then deliver to your customer.

These project types are all going to require slightly different deployment and work environments.  The setup for each is based on the type of project, and on the way that you handle a couple of basic limitations that you need to be aware of.

The first limitation we will call the Billing Endpoint limitation.  It’s pretty simple: the bill for your Cloud services goes to the account owner – and nobody else.  So you need to be aware of the charges to any given account, how you will handle those charges (which single entity will pay for them), and how you will pass those charges along to your internal and external customers.

The second limitation is the Resource Portability limitation.  This one is pretty simple too.  You cannot “move” or “relocate” a service from one organization/space to a different organization/space in the IBM Cloud.  In order to move something from one environment to another, you need to recreate that service in the new environment in the same way that you created it in the original environment.  This forces us to be disciplined in our software development – and brings us to our next section.

Importance of DevOps Tooling

The resource portability limitation means that we need to be able to recreate any cloud resource instance in some type of automated manner, in any environment we choose.  This demands a solid change management strategy, and solid DevOps tooling that can create the services and applications in your various IBM Cloud environments.
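
For example, a first pass at that automation might be nothing more than a shell script built on the same Cloud Foundry CLI commands that you will see in the deploy scripts later on this blog.  This is just a sketch – the org, space, service, and application names are placeholders, and it assumes that you have already logged in with the CLI:

#!/bin/bash
# Recreate an application and its supporting service in a target environment.
# All of the names here are hypothetical placeholders.
TARGET_ORG="MyOrg"
TARGET_SPACE="dev"

# Point the CLI at the organization/space we want to (re)build
cf target -o "${TARGET_ORG}" -s "${TARGET_SPACE}"

# Recreate the service instance exactly as it exists in the source environment
cf create-service mongodb 100 mongolab-MyApp

# Recreate the application itself from the code in source control
cf push MyApp -p . -m 512M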

One way to do this is to use the DevOps Toolchains that are available on the IBM Cloud.  These toolchains are easy to use.  You can customize them to use tools that are “Cloud native” on the IBM Cloud, or you can use your own tools and processes.

A healthy Cloud development and deployment environment is strongly dependent on the DevOps environment that supports it.  Tools, standards, and automation can help development teams follow better engineering practices, and can help you deliver projects and products more quickly.  If you’re unfamiliar with DevOps, I suggest you Google it and read some of the material from Gene Kim, Sanjeev Sharma, Mik Kersten or Eric Minick.

So keep in mind that setting up a DevOps framework and some administrative automation for your Cloud should be one of the first things that you do.  Investments supporting the automation of projects will pay huge dividends, and allow your teams to easily launch, execute, and retire projects in your Cloud environment.

Handling Projects

So now that I have convinced you that you need to invest some time and effort building up your DevOps capabilities on the Cloud, let’s get back to the main question of this blog post: “How do I organize projects and content, and handle the financial aspects for these projects?”

Handling Internal Projects

Internal projects are organized in the same way that I discuss in my earlier post, “Bluemix and Watson – Getting Started Right”.  The account level is a subscription owned by the Enterprise, projects are run as Organizations in the Account, and the Spaces under those Organizations represent the various environments supported by a project (like development, test, QA, staging, production, etc.).

Figure 2 – Internal Project Organization

This project is going to be developed internally, and it will be deployed internally, so we only need to separate the “billing” from an internal perspective.  Since we can see billing at the organization and space levels (see Administering Your IBM Cloud Account – A Script to Help), it should be relatively simple to determine any chargebacks that you might want to do with your internal costs.
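
If you want to script that visibility, the IBM Cloud CLI has billing commands that can feed a chargeback process.  Here is a minimal sketch – it assumes that your CLI version includes the billing commands shown (check bx billing to see what your version supports), and the org name is made up:

#!/bin/bash
# Pull usage data so it can be fed into an internal chargeback spreadsheet.
# "MyInternalOrg" is a placeholder organization name.
bx billing account-usage
bx billing org-usage MyInternalOrg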

You’ll use the DevOps capabilities we discussed earlier to quickly establish the automation of continuous integration/continuous delivery (CI/CD) capabilities.  Teams can do development and can set up pipelines to deliver their project to staging environments, where operations teams can test the deployment and deliver to production environments.  This environment is straightforward and simple because we don’t need to worry about billing issues, and we don’t need to worry about visibility or ownership issues.  Things get more interesting with our other project types.

Handling Product Projects and Hosted Projects

Product and Hosted projects are organized in the same way, even though they are slightly different types of situations.  In these cases I recommend the use of a second IBM Cloud account.  The Enterprise account that is already established (as described in the section on Internal Projects) should continue to be used for the development of this project.  This project is going to be developed internally, and we will track costs internally, so we only need to separate the “billing” from an internal perspective.  Since we can see billing at the organization and space levels, it should be relatively simple to determine any chargebacks that you need to do for this project.

Figure 3 – Hosted Project and Product Project Organization

You will still have development and test spaces for your project, but you will NOT have production, pre-production or staging areas.  You will use your DevOps capabilities to deliver your project to a different account/organization/space.

In the case of a product that you are hosting for general usage, you will deploy to a specific IBM Cloud account that has been created for the express purpose of hosting production versions of your product (or for the deployment of Kubernetes clusters that are running your product).  Your operations team will have access to this account, and these production environments, ensuring a separation of duties for your deployed products.

In the case of a product that you are hosting for usage by a specific customer, you will deploy to a specific IBM Cloud account that has been created for the express purpose of hosting production applications for that customer. Your operations team will have access to this account, ensuring a well defined set of duties for your hosted products.  This approach also allows you to easily collect any billing information for your customer.

Handling Turnkey Projects

Turnkey projects are organized almost exactly the same way as a hosted project, with two simple differences.  Just like a hosted project, you will need to create a new IBM Cloud account for the work being delivered.

Figure 4 – Turnkey Project Organization

The first big difference is that you are going to either have your customer own the new IBM Cloud account from its creation, or transfer ownership of the account to your customer.  Make sure that you are clear about who owns (and pays for) the various environments, and the timing of any account reassignment.

The second difference is that the new account may have more than just production spaces – since your customer will need development and test spaces to be able to do maintenance and further development of the application or system being delivered.

Things To Remember

Now that we’ve covered how to organize content and environments for project delivery, it’s time to remind you about some of the key details that will help make sure that your IBM Cloud development efforts go as smoothly as possible.

  • Make sure that you have a solid DevOps strategy.  This is key to being able to deliver project assets to specific environments.
  • Make sure that you have solid naming conventions and Administrative procedures.  These should be automated as much as possible (just like your DevOps support for development).  For some guidance on setting up roles and DevOps pipelines, check out some of these best practices for organizing users, teams and applications.
  • Know how you will set up your project – since this will have an impact on the contracts and costing that you have for your IBM Cloud hosted project.


Administering Your IBM Cloud Account – A script to help

Note: This post has been edited and updated multiple times, and the most recent and accurate copy of this post can be found on the IBM developerWorks website, in a blog post titled, Administering Your IBM Cloud Account – A script to help.

As many of you know, if I have to do something more than two or three times, I tend to put in some effort to script it.  I know a lot of what I can do on the command line with the IBM Cloud, but I don’t always remember the exact syntax for all of those bx command line commands.  I also like to have something that I can call from the command line, so I can just script up common administrative scenarios.

Other Options

There are some options that already exist out there.  I wasn’t aware of some of them, and none of them allow for scripting access.  One of the best that I have seen is the interactive application discussed in the blog post on Real-Time Billing Insights From Your IBM Cloud Account, written by Maria Borbones Garcia.  Her Billing Insights app is already deployed out on Bluemix.  It’s nice – I suggest you go and try it out.  She also points you to her mybilling project on GitHub, which means that you can download and deploy this app for yourself (and even contribute to the project).  Another project that I have seen is the My Console project, which will show a different view of your IBM Cloud account.

Why Create a Script?

This all came home to me this past week as I began to administer a series of accounts associated with a Beta effort at IBM (which I’ll probably expand upon once the closed beta is complete).  I have 20 different IBM Cloud accounts, and I need to manage the billing, users, and policies for each of these accounts.  I can do it all from the console, but that can take time, and I can make mistakes.  I also often get questions from our customers like, “How do I track what my users are using, and what our current bill is?”.  So that led me to begin writing a Python script that would allow you to quickly and easily do these types of things.

So I began to develop the IBM_Cloud_Admin tool, which you can see the code for in its GitHub repository.  Go ahead and download a copy of it from GitHub.  This is a simple Python script, and it just executes a bunch of IBM Cloud CLI commands for you.  If you go through a session and then look at your logfile, you can see all the specific command line commands issued, and see the resulting output from those commands.  This allows you to do things in this tool, and then quickly look in the log file and strip out the commands that YOU need for your own scripts.
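
The tool itself is Python, but the log-everything pattern that it uses is easy to show in a few lines of shell.  This is a sketch of the pattern, not code from the tool – and it assumes that your CLI version supports reading an API Key file with the @ prefix:

#!/bin/bash
# Run a CLI command, recording both the command and its output in a session log.
LOGFILE="IBM_Cloud_Admin.output.log"

run_logged () {
  echo ">>> $*" >> "${LOGFILE}"      # record the exact command issued
  "$@" 2>&1 | tee -a "${LOGFILE}"    # record (and display) the response
}

run_logged bx login --apikey @apiKey.json
run_logged bx account orgs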

How To Use The Script

To run the script, you can just type in:

python IBM_Cloud_Admin.py -t apiKey.json

The script has a few different modes it can run in.

  • If you use the -t flag, it will use an API Key file, which you can get from your IBM Cloud account, to log into the IBM Cloud.  This is the way that I like to use it.
  • If you don’t use the -t flag, you’ll need to supply a username and password for your IBM Cloud account using the -u and -p flags.
  • If you use the -b flag (for billing information), then you will run in batch mode.  This will get billing information for the account being logged into, and then quit.  You can use this mode in a script, since it does not require any user input (see the example after this list).
  • If you don’t use the -b flag (for billing information), then you will run in interactive mode.  This will display menus on the command line that you can choose from.
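
For example, here is how you might drive the batch mode from a nightly cron job or wrapper script.  The API Key file names here are made up:

python IBM_Cloud_Admin.py -t acct1_apiKey.json -b
python IBM_Cloud_Admin.py -t acct2_apiKey.json -b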

The Output Files

There are a number of output files from this tool.  There is the IBM_Cloud_Admin.output.log file, which contains a log of your session and will show you the IBM Cloud command line commands issued by the tool, and the responses returned.  This is a good way to get familiar with the IBM Cloud command line commands, so you can use them in custom scripts for your own use. 

You may also see files with names like MyProj_billing_summary.csv and MyProj_billing_by_org.json.  These are billing reports that you generated from the tool.  Here is a list of the reports, and what they contain.

  • MyProj_billing_summary.csv – this CSV file contains billing summary data for your account for the current month.
  • MyProj_billing_summary.json – this JSON file contains billing summary data for your account for the current month.  It shows the raw JSON output from the IBM Cloud CLI.
  • MyProj_billing_by_org.csv – this CSV file contains billing details data for your account, split out by org and space, for the current month.
  • MyProj_billing_by_org.json – this JSON file contains billing details data for your account, split out by org and space, for the current month.  It shows the raw JSON output from the IBM Cloud CLI.
  • MyProj_annual_billing_summary.csv – this CSV file contains billing summary data for your account for the past year.
  • MyProj_annual_billing_summary.json – this JSON file contains billing summary data for your account for the past year.  It shows the raw JSON output from the IBM Cloud CLI.
  • MyProj_annual_billing_by_org.csv – this CSV file contains billing details data for your account, split out by org and space, for the past year.
  • MyProj_annual_billing_by_org.json – this JSON file contains billing details data for your account, split out by org and space, for the past year.  It shows the raw JSON output from the IBM Cloud CLI.

Use the JSON output files as inputs to any further processing that you might want to do with your IBM Cloud usage data.  The CSV files can be used as inputs to spreadsheets and pivot tables that will show you details on usage from an account perspective, as well as from an organization and space perspective.
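
As a quick example of that post-processing, a tool like jq can pull values out of the JSON reports.  The field path below is made up for illustration – inspect your own file first (jq . MyProj_billing_by_org.json) to see the real structure that the IBM Cloud CLI returned for your account:

# Hypothetical example - the ".resources[].cost" path is illustrative only;
# the actual structure depends on the raw JSON from the IBM Cloud CLI.
jq '.resources[].cost' MyProj_billing_by_org.json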

Getting Your API Key File

I’ve mentioned the API Key file a couple of times here.  If you are not familiar with what an API Key file is, then you’ll want to read this section.  An API Key is a small text file containing some JSON based information which, when used with the IBM Cloud command line tool, will allow anyone to log into the IBM Cloud environment as a particular user, without having to supply a password.  The API Key file is your combined username/password.  Because of this, do NOT share API Key files with others, and you should rotate your API Key files periodically, just in case a keyfile has become compromised.

Getting an API Key on IBM Cloud is really easy.

  • Log into the IBM Cloud, and navigate to your account settings in the upper right hand corner of the IBM Cloud in your web browser. Select Manage > Security > Platform API Keys.
  • Click on the blue Create button.
  • In the resulting dialog, select a name for your API Key (something that will tell you which IBM Cloud account the key is associated with), give a short description, and hit the blue Create button.
  • You should now see a page indicating that your API Key has been successfully created. If not, then start over again from the beginning. If you have successfully created an API Key, download it to your machine, and store it somewhere secure.

Note: A quick note on API Keys.  For security reasons, I suggest that you periodically destroy API Keys and re-create them (commonly called rotating your API keys or access tokens).  That way, if someone has gained access to your data through one of your API keys, they will lose that access.
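
If you script your administration, the rotation can be scripted too.  Here is a sketch – it assumes that your CLI version provides the iam api-key commands shown, and the key name is a placeholder:

#!/bin/bash
# Rotate an API Key: remove the old key, then create a replacement.
# "MyAdminKey" is a placeholder name.
bx iam api-key-delete MyAdminKey
bx iam api-key-create MyAdminKey -d "Admin key for my account" --file apiKey.json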

Other Tasks

Do you have other administrative tasks that you would like to see the tool handle?  Find a bug?  Want to help improve the tool by building a nice interface for it?  Just contact me through the GitHub repository, join the project, and add issues for problems, bugs, and enhancement requests.

A Final Thought

This script is a quick hacked together Python script – nothing more and nothing less.  The code isn’t pretty, and there are better ways to do some of the things that I have done here – but I was focused on getting something working quickly, and not on efficiency or Python coding best practices.  I would not expect anyone to base their entire IBM Cloud administration on this tool – but it’s handy to use if you need something quick, and cannot remember those IBM Cloud command line commands.

Deploying Production Cloud Applications – A Readiness Checklist

I just had a conversation today with my VP (Rob Sauerwalt – check him out on Twitter – time to do some shameless kissing up to my management team) about a recent internal communication that we both saw.  It was someone looking for a “readiness checklist” for the deployment of an application on the IBM Cloud.  Rob and I both agreed that this seems pretty simple, and we came up with a quick checklist of things to consider.

Now this list is not specific to the IBM Cloud; it’s pretty generic.  It’s just a quick checklist of things that you will want to make sure that you have considered BEFORE you deploy that cloud based application into a production environment.  I am an Agile believer, so I would suggest that you address these checklist items in the SPIRIT of what they are trying to do, and that you do what makes sense.  This means that each one of these areas does not need to be represented by some 59 page piece of documentation.  What you want to do is provide enough information so that the poor guy who takes your job after you get promoted is able to be effective, and can understand and maintain the application or system.

If you have suggestions about other things that should be on this list, please drop me a line and let me know.  I would love to add them to the list, and make this generic deployment readiness checklist even better.

Production Readiness Checklist

The Basics

⊗ Name and General Description of the Application – this includes the purpose of the application and the number of users that are anticipated to use the application.  Also have an idea of the types of users.  Is it for the general public?  Only for certain roles within our organization?  Is it only for your customers?  Do this in two to three paragraphs – anything more is adding complexity.

⊗ Description of Needed Software/Hardware/Cloud Resources – a list of the needed software packages, and the cloud resources needed to run the application.  Do you use third party utilities or libraries?  Do you run on Cloud Foundry buildpacks?  Virtual machines?  Do you use Cloud services for database resources?  Often a high level architectural diagram is useful to help other people understand the system at a high level.  This should be done AS you build – so you can simplify things.  Are your developers using different libraries to accomplish the same thing?  Get them to standardize.  Reduce your dependencies, reduce your complexity, and you improve your software quality.

DevOps Considerations

⊗ Operating Systems and Patching Requirements – do you have specific OS requirements?  Do you require a particular framework to run properly (like .NET, Eclipse, or a particular Cloud Foundry buildpack)?  What OS versions have you tested and validated this application with – and do all of your components need to be on the same OS version?  This becomes important when fixes get deployed to containers, virtual machines get upgraded, and maintenance activities are done.

⊗ Installation and Configuration Guidelines – you should be deploying your application in some automated manner.  So your deployment and promotion scripts should be the only guide that you need… except when they aren’t.  Take the time and DOCUMENT those scripts – explain WHAT you are doing and WHY, so your application can easily be reconfigured or deployed in different ways in the future.
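
As a small illustration of the difference this makes, here is a minimal sketch of a deploy script with the WHAT and the WHY captured right in the script, in the same style as the Cloud Foundry deploy scripts that appear elsewhere on this blog:

#!/bin/bash
# WHAT: pushes the application described by manifest.yml to the currently
#       targeted org/space.
# WHY:  we always push from source control rather than patching the running
#       app, so every environment stays reproducible from the repository.
cf push "${CF_APP}"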

⊗ Back-up, Data Retention and Data Archiving Policies – let your operations people know what data needs to be archived and retained.  How often do systems need to be backed up?  How will services be restored in the event of a crash?  Explain WHERE and HOW data needs to be retained.  Explain what your DEVELOPMENT teams need to review on a periodic basis.  This can be the biggest headache for development teams, because these are often scenarios that they have not considered.  Backup plans are not sufficient; they need to be executed at least once before you go into production – so you are sure that they are valid and that they work.

⊗ Monitoring and Systems Management – This includes runbooks – what do we need to do while the application is running?  Do we need to take the logs off of the system every day and archive them?  Or do we just let logs and error reports build up until the system crashes?  Should I monitor memory and heap usage on a daily basis?  Should I be monitoring CPU load?  Who do I notify if I see a problem, and what is a “problem”?  (CPU at 50%?  CPU running at 20% with spikes to 100%?)  How will this application normally be supported?  You may not have complete information and definitions of “problems” when you begin, but define what you can and acknowledge that things will change as time goes on.

⊗ Incident Management – This details how you react to application incidents.  These could be bugs, outages, or both.  In the case of an outage, who needs to be called, and what actions should they take to collect needed data and to get the application back up and running?  What logs are needed, and what kind of data will aid in debugging issues?  Who is responsible for application uptime TODAY (getting things back on track and running), and who is responsible for application uptime TOMORROW (who needs to find root cause, fix bugs, make design changes if needed, etc.)?

⊗ Service Level Documentation – This is the “contract” between you and your customers.  How often will your application be down for maintenance?  If your application is down, how long before it comes back up?  Are there any billing or legal ramifications from a loss of service?  Do your customers get refunds – or cash back – when your Cloud application is unavailable?

⊗ Extra Credit – DevOps pipeline – you need to have an automated pipeline for the deployment of code changes into well defined development, test, and production environments.  You need to have a solid set of policies and procedures for the initiation and automation of these deployments.  Who has authority to deliver to test environments?  Production environments?

Software Architecture Considerations

⊗ Key Support & Maintenance Items – the team that built this thing knows where the weak spots are – share that knowledge!  Where does the team know that “tech debt” exists – and how is that impacting your application?  This information will help the teams maintaining and upgrading your application.  They will be able to do this with knowledge about how the application works, and why certain architectural choices were made.

⊗ Security Plan – Everyone is worried about the security of their applications and data on the cloud.  You need to be sensitive to this when deploying cloud based applications.  Your stakeholders and users will want to know that you have considered security, and that you are protecting their data from being exposed, stolen, or used without their knowledge/consent.

⊗ Application Design – This should include some high level description of your use case, a simple flowchart and dependencies.  Give enough detail so someone can easily get started in maintaining your application code, but not so much detail that you waste time and ultimately end up with documentation that does not match the code.

Is That Everything?

That’s not everything, but it is a good minimal list of things that you should have considered and/or documented.  Most applications need some sort of a support plan – who handles incoming problem tickets from customers?  Do you have a support process for your end users?  In your own environments and business context, you may have other things that need to be added to this list.  Do you need to check for compliance with some standard or regulation?  What are your policies for using Open Source software?

So this list is not meant to be exhaustive – but it is designed to make you think, and to help you ensure higher quality when deploying your Cloud applications.

Happy Holidays for 2017

With the end of the year quickly approaching, it is a great time to look back on the past year, and to look forward in anticipation for what is coming in 2018.

2017 was an interesting year.  I saw an explosion in the development of chatbots of various different types.  Some were very simple, others used both Watson Conversation and the Watson Discovery service to provide a deeper user experience – with an ability to answer both short tail and long tail questions.  I saw a huge uptick in interest in basic Cloud approaches, and a lot of interest in containers and Kubernetes.  I expect that both of these trends will continue into 2018.

In 2018 I expect to see the IBM Cloud mature and expand in its ability to quickly provide development, test and production computing environments for our customers.  I also expect that more people will become interested in hybrid cloud approaches, and will want to understand some best practices for managing these complex environments.  I am also excited about some of the excellent cognitive projects that I have seen which could soon be in production for our customers.  I also expect that some of our more advanced customers will be looking at how cognitive technologies can be incorporated into their current DevOps processes, and how these processes can be expanded into the cloud.

I hope that your 2017 was a good one, and I hope that you have a happy and safe holiday season.

Using Bluemix DevOps Services to Support Multiple Deployments

In my earlier blog post on using Bluemix to deploy a simple Cloud Calendar on Bluemix, I added something the next day or so that discussed using the Deploy to Bluemix button for an even Easier Cloud Calendar.  Well my post has gotten some responses from within IBM, with various different teams wanting to use this simple cloud based calendar to provide a simple widget that they can use to provide a team calendar capability on their RTC dashboards.  So now I have a few different teams that would like me to deploy this simple solution for them.

Well I started out just showing people how they could do this themselves on Bluemix, because it really is quite easy.  However, some of the people asking for this are not technical types; they’re business types or (worse) management types.  They are afraid or unable to do this for themselves.  I’ll address the fear of cloud in a different blog post in the future.  The main thing here is that I ended up using the same code (or similar code) to deploy about 4 different types of calendars for 4 different groups within IBM.

How do I manage to do this without driving myself crazy?  I use the DevOps services available to me in Bluemix, and I configured a delivery pipeline that allows me to build, test, and then deploy four different variants of the same basic application.  The project was already out there in the Bluemix hosted Git repository; what I needed to do was make some small changes to how I deploy, and some small changes for each flavor (or variant) of my application.

Overview

The original application was a Node.js app, and was pretty simple.  For details on it, view my original blog post called How About a Generic Calendar in the Cloud?.  Now to deploy this project, I went into my Test Calendar project and clicked on the Build & Deploy button.  This brought me to the Build and Deploy pipeline area.  I then looked and saw a simple build already set up for my Node.js project.  This simple build executes every time I push new changes to the master branch of my Git repository.  That’s perfect, I now have a continuous build (not to be confused with Continuous Integration) environment set up for my project.

Simple Deploy

Next I need to add a new stage to my project.  So I add a new stage to deploy my test configuration.  This stage takes the outputs of the build stage as inputs, and deploys these out to my test calendar environment.  This “test environment” is a simple application/MongoDB pair that I have out in the Bluemix staging environment.  Now since I have multiple instances that I want to deploy, under different names, in different Bluemix environments/spaces, I will need different manifest.yml files.  The manifest.yml file indicates where the application gets deployed, and the services that it will use.

So I decide to create multiple manifest.yml files, with a naming convention that indicates which deployment they belong to.  So I have a manifest_TestCalendar.yml file with the settings for my TestCalendar (which is my test configuration), and a manifest_ToxCalendar.yml file with the settings for my team calendar (which is my production configuration).  I actually have five of these, but I’ll keep it simple and just highlight the two for the purposes of explaining things here.  So my manifest_TestCalendar.yml file looks like this:

declared-services:
  mongolab-TestCalendar:
    label: mongodb
    plan: 100
applications:
- name: TestCalendar
  host: TestCalendar
  path: .
  domain: stage1.mybluemix.net
  instances: 1
  memory: 512M
  disk_quota: 512M
  random-route: true
  services:
  - mongolab-TestCalendar

and my manifest_ToxCalendar.yml file looks the same, except for the host line (which specifies “ToxCalendar”), the name line (which specifies “ToxCalendar”), and the declared services (which name a different MongoDB instance).  Note that the name of the MongoDB service instance MUST match the name of the service as shown in Bluemix.  You’ll need to go and create that service first, before you try using this to spin up new instances of your application.  Also note that the route here is pointing at an internal IBM instance of Bluemix; if you do this in the public Bluemix, you’ll use mybluemix.net as the domain.
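
Creating that service ahead of time is a single CLI command, matching the label, plan, and instance name from the manifest above:

# Create the MongoDB instance named in the manifest before the first push
cf create-service mongodb 100 mongolab-TestCalendar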

Configuring the Test Stage

Back to our deployment pipeline.  When I look at the pipeline, I decide to leave the first stage of deployment set to automatically deploy by leaving the stage trigger set to “Run jobs when the previous stage is completed”.  This means that this stage will run when the build stage successfully completes.  Now since I want to have a continuous integration environment, the target of this deploy should be my test instance.  What I will have isn’t really a true continuous integration environment, as I don’t have any automated tests being run as part of the deployment, but you can see how you can easily modify the stage to support this.

So we’ve decided to do an automated deploy of my test instance on any build.  Go click on the gear icon in the stage, and choose Configure Stage.  You will now see a dialog box where you can configure your DevOps pipeline stage.  In this dialog, check the Input tab to make sure that the stage is set to be run automatically, by making sure that the Run jobs when previous stage is completed option is selected.  Then click on the Jobs tab and make sure that you have a Deploy job selected (if you don’t, then go and create one).  The Deployer Type should be set to Cloud Foundry, and the Target (where this should be deployed) should be your Bluemix environment.  Once you select the target, your organization and space drop-down selections should get populated with organizations that you are a part of, and the spaces available in those organizations (for more on organizations and spaces in Bluemix/CloudFoundry, read Managing your account).  Also enter your Application Name in the field provided, and make sure that this application name matches the name of the application in your Bluemix console, as well as the application name that you used in the manifest.yml file.  In this case, it is the name of the application that we used in the manifest_TestCalendar.yml file, which was “TestCalendar”.

Finally you will see a grey box with the deploy script.  Now in order to make sure that we use the manifest_TestCalendar.yml file to deploy this, we have to copy this over the existing manifest.yml file.  It is a simple Unix/Linux copy command, and your resulting deploy script should now look like this:

#!/bin/bash
# Move proper manifest file into place
cp manifest_TestCalendar.yml manifest.yml
# push code
cf push "${CF_APP}"
# view logs
#cf logs "${CF_APP}" --recent

The result of this is that we copy the version of the manifest file that we want into place, and then CloudFoundry just does the rest.  Go ahead and make a simple code change (just update a comment, or the README file), commit and push it (if you’re using Git), and watch as the application gets automatically built, and then the test calendar gets deployed.

Configuring the Production Stage

Now if we want to deploy using the same mechanism for our production instance, the process is simple.  We just click on the gear icon in the stage, and choose Clone Stage.  This creates a new stage, just like our stage to deploy our test instance.  Click on the gear icon for this new stage.  You will now see a dialog box where you can configure your cloned DevOps pipeline stage.  In this dialog, check the Input tab to make sure that the stage is NOT set to be run automatically, by making sure that the Run jobs only when this stage is run manually option is selected.  Then click on the Jobs tab and make sure that you have a Deploy job selected.  The Deployer Type and the Target (where this should be deployed) should remain the same.  Once you select the target, your organization and space drop-down selections should get populated with organizations that you are a part of, and the spaces available in those organizations.  If you want to deploy your production instance to a different organization and space, you will change the settings for these fields.  Enter your Application Name (for the Production application) in the field provided, and make sure that this application name matches the name of the application in your Bluemix console, as well as the application name that you used in the manifest.yml file.  In the case of my production instance, it is the name of the application that we used in the manifest_ToxCalendar.yml file, which was “ToxCalendar”.

Finally you will see a grey box with the deploy script.  Now in order to make sure that we use the manifest_ToxCalendar.yml file to deploy this, we have to copy this over the existing manifest.yml file.  We’ll need to modify the cloned deploy script to reflect this change:

#!/bin/bash
# Move proper manifest file into place
cp manifest_ToxCalendar.yml manifest.yml
# push code
cf push "${CF_APP}"
# view logs
#cf logs "${CF_APP}" --recent

The result of this is that for the production stage, we copy the version of the manifest file that we want (manifest_ToxCalendar.yml) into place for use by CloudFoundry.  Since this has been configured as a manual stage, you now will only deploy a new version of your production calendar when you manually drag a new build to the stage.

What does it mean?

Now we have a deployment pipeline built that allows us to do automated builds and deployments of changes to our calendar application to the test environment.  It should look something like the picture on the right.

Once we are satisfied with our changes, we can then just drag the appropriate build from the App Build stage, and drop it on the stage where we deploy to production (Deploy to Bluemix Staging – Production in the picture).  This will start the deployment process for the production instance of our calendar application.

What about different variants of our calendar application?

Now we can use this same technique of having multiple file variants to support different deployed “flavors” or variants of our calendar application.  I do the same thing with the code in the index.html file of the calendar application.  This file is in the public subdirectory.  I can create two variants of this file, and save them in different files (say index_ToxCalendar.html and index_TestCalendar.html).  The change to the index_TestCalendar.html file is the addition of a single line of code, which will display a heading on the calendar.  The code snippet looks like this:

<body onload="init();">
    <div id="scheduler_here" class="dhx_cal_container" style='width:100%; height:100%;'>
    <center><b>TEST CALENDAR</b></center>
        <div class="dhx_cal_navline">

The single added line just puts a title line (“TEST CALENDAR”) in the application.  For other instances, I can make similar changes to similar files.  To get the deployed application to use the correct version of the index.html file that I want, I need to make one more modification to the deploy script.  It should now look like this:

#!/bin/bash
# Move proper manifest file into place
cp manifest_TestCalendar.yml manifest.yml
# Move proper index.html file into place
cp public/index_TestCalendar.html public/index.html
# push code
cf push "${CF_APP}"
# view logs
#cf logs "${CF_APP}" --recent

Now we have a specific variant for our test calendar, deployed to the test environment.  You would then do similar changes and file variants for all of the other deployed variants of this that you want to deploy and support.
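
If the number of variants keeps growing, the copy pattern can be generalized into one parameterized script instead of a hand-edited deploy script per stage.  Here is a sketch, assuming each stage passes its variant name as an argument (the deploy.sh name and usage are hypothetical):

#!/bin/bash
# Generalized deploy script: select the manifest and index.html by variant.
# Hypothetical usage: deploy.sh TestCalendar
VARIANT="$1"
# Move the proper manifest and index.html files into place
cp "manifest_${VARIANT}.yml" manifest.yml
cp "public/index_${VARIANT}.html" public/index.html
# push code
cf push "${CF_APP}"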

An Easier Cloud Calendar

Timing is……………… everything.  About 4 hours after I did my last post on How About a Generic Calendar in the Cloud?, I saw a post from one of my team members.  It was a post from Sean Wilbur called, Streamlining Your Bluemix Project for One Button Sharing.  It was a great post, and once I followed the directions that Sean outlined, I was able to add a simple little “Deploy to Bluemix” button on my project.

So now if you would like to get a copy of my Generic Calendar project to play with for yourself, it is really easy.  Just make sure that you have a Bluemix account, and that you have a linked DevOps Services account.  Then just navigate to my project in DevOps Services (it’s the dtoczala|ULLCloudCalendar project).  Once there, you can look at the README.md file displayed there, and look for the “Deploy to Bluemix” button.  It looks like this:

The Deploy to Bluemix button

Just press that button and you will get a project created in the DevOps services that is a fork of my original project.  The automatic creation of the project will throw an error during the deployment, but you will be able to easily fix this.  The error is due to a problem in the manifest.yml file, we are currently unable to create and bind services for our application through the manifest (see Sean’s question on this in the forum).  You can easily fix this by doing three things:

  1. In your DevOps services console, configure your Deploy stage – In your newly created project, press the Build & Deploy button, and then configure the deploy stage.  You will add a deploy job to the deploy configuration that will do a “cf push” of your application.  Then try executing it.  It will still fail (because our MongoDB service is not present), but it will create a new Bluemix application for you.  This is your version of the ULL Cloud Calendar app.
  2. In your Bluemix console, add and bind the MongoDB service – This is straightforward.  Just add the MongoDB service and make sure to bind it to your new application.  When you add the service, Bluemix will ask if you would like to restage your application.  Answer yes, and wait for the application to be deployed again.
  3. In your Bluemix console, click on the link for your new app – just click on the link for the route to your new application. 

Now once that little issue with the manifest.yml is cleared up, you will be able to share your Bluemix applications with the press of a button.  Bringing up applications and capabilities in the cloud is getting easier and easier!

Using Scrum Methods with IBM DevOps Services

Note: I co-authored this with Patchanee Petprayoon, so you can also find this on her WordPress blog, Ppatchanee.

This article is in response to a number of frequently asked questions about sprint planning and scrum methods, as well as some links to other resources we have created or found to answer those requests.  Before we dive deeper into the topic, let’s take a quick look at what type of scrum work items are provided in IBM DevOps Services. If you have never used any scrum methods before, this can be overwhelming at first.  That’s OK, just read through this and try to digest as much as you can.  If you are familiar with Scrum and Agile methodologies, just skim through all of the explanations to get to the meat on how to do things with IDS.

We call out some sections of this whole process in this indented green text.  These represent best practices, or areas where you can dive more deeply into how IDS works. 

You might also note that a lot of the links for terminology in this article lead to Wikipedia references.  It’s not because we love Wikipedia; it’s because too many of the good articles out there have a lot of product references (like this one does – with IDS).  I went looking for some content that was tool agnostic – and Wikipedia is often the best bet for that type of material.

Before We Begin

If you haven’t already registered for IBM DevOps Services (IDS) and Bluemix, you should register now.  It will help you follow along with our examples and will allow you to try a few things for yourself.  You can use IDS alone, use Bluemix without IDS, or use them together.  You may have used the Scrum Process with Rational Team Concert in the past.  If you did, then a lot of this will look quite familiar.

Getting Started with IBM DevOps Services (IDS)

Once you have registered with IDS, you will want to create a project.  To do this, go out to your IDS dashboard.  Click on Create a Project.  You will see a new project dialog come up.  Give your project a name (like “Sample Scrum Project”), something that will identify it.  Then select where your source code repository will be.  For the purposes of this article, it doesn’t really matter, since we’ll just be looking at some common Scrum methods, and will not be dealing with code.  Then make sure that you have a checkmark in the box for “Add features for Scrum development”.  If you are an adventurer, and want to play around with some code, and then deploy it to Bluemix, you might also want to check the box for “Make this a Bluemix project”.  Finally, hit the “Create” button in the lower right, which will create your new scrum project.

So now we have a Scrum project created, and you can see it in the browser.  Now we want to set it up correctly.

In the future, we’ll want to add other members of our team to this project.  To do this, just click on the link for “Members” on the left hand nav bar, and then click on the “Invite Members” icon to add members of your team to the project.

Right now, we just want to learn how to use IDS in conjunction with some common Scrum practices.  One of the first things you’ll need to do is to decide on the length of each of your iterations.  In Scrum, these are called Sprints.  Each Sprint represents a period of time where the development team will develop and test code, and commits to have an executable application at the conclusion of the sprint.  That means that you have a running application at the end of each sprint.  It may not expose all of the functionality that has been developed, and some functionality may not be present, but it will execute.

Most scrum teams run with 2 week sprints.  To set up your sprints, start on the overview screen, and press the button labelled “Track & Plan”.  Once on this screen, select “Sprint Planning” from the nav bar on the left.  Since you have just started your project, you don’t have any sprints right now.  So let’s create some.  Click on the “Add Sprints” button in the Sprint Planning window, and add 4 new sprints, and make them 2 weeks in length.

If you click on “Edit Sprints” at this point, you can see how you can now go and add new sprints, rename your sprints, and change the dates on your sprints.  When working with IDS, we suggest naming your Sprints the same as the end date of the sprint.  That way your stakeholders who are unfamiliar with Agile can look and see when their user stories will be implemented at a glance.  They really don’t care about sprints or Agile development, they want to know when they can expect things.

Defining Your Initial Work

Now you have the basic project setup done.  It’s now time to start thinking about doing actual work.  The first thing to understand is the different types of work items available to you to capture your work.

  • Story (sometimes called User Story) – a description of some functionality needed, told from the perspective of the end user of the application.  Some people find this equivalent to a use-case.
  • Task – tasks are single user tasks used to accomplish some well defined and well scoped effort.  Stories are typically broken down into multiple tasks, representing the work needed to accomplish the implementation of a Story.
  • Epic – epics are stories that are too large to be contained within a single sprint.  Typically an Epic can be broken down into two or more Stories.
  • Defect – defects are bugs, problems with the application.  Many Scrum teams will treat them similar to how they treat stories, by decomposing them into the tasks needed to fix a particular defect.
  • Impediment – these are the risks, issues, and environmental factors that stop work from being accomplished.  Typical impediments may include things like waiting for properly signed SSL certificates, waiting on hardware availability, lack of stakeholder availability, and other similar situations.
  • Adoption Item – adoption items indicate when changes made by one team need to be accepted (and integrated) by another team.  An upgrade to a new database version is an example of this.
  • Retrospective – retrospectives are held at the end of each sprint, and they allow the team to reflect on what is working well, and what isn’t working well.  It is an attempt to keep the team focused on continuously improving their effectiveness.  Retrospective work items capture the notes and observations made by the team during a retrospective.
  • Track Build Item – used to track builds, and relate them to the functionality that they will deliver.  Not a lot of teams use these.

So at this point you should begin creating user stories, epics, impediments, and adoption items for all of the work that you are aware of.  Just click on the “+” in the backlog box to begin creating your first work items.  Once you click on it, just type in a brief headline or general description of the work.  Don’t worry, you will be able to fill in more detail later.  Click on the small icons below the text, to change the work item type, add a more detailed description, and so on.  As you add work items, they will begin to display in your team backlog.

The edit sprints button, some text for a first work item, and the icons to change the work item attributes

Backlogs are a critical part of scrum development.  Your backlog contains all of the work that your team thinks that it may need to do, but that the team has not yet committed to doing.  If you haven’t scheduled it, and assigned the work to a particular sprint, then it should be in the backlog.  Mature products may have backlogs with 100 or more items in them.  We’ll discuss how to manage your backlog in the next section.

After you have created your initial work items, take a look at your backlog.  It should show all of your work items.  Click on any individual work item, and you will see the details about that work item.  Go back to your sprint planning view, and you will see the work items displayed along with some summary information.

When looking at the details of a work item, it is important to note the fields in the work item.  All of them are self-explanatory, but they do impact how your team does its work.  Often lists that are displayed (like your backlog) will have higher priority items at the top.  The discussion field is used to discuss work items, and to provide a running status on the work items.

When you want to ask someone something in a discussion, click on the link icon with a person, and you will get a dialog where you can choose the specific person that you want to identify.  Now once you save your changes, that person will get an email notifying them that you have asked them a question, along with a link to the workitem.

If you have people always asking about the status of some work, just go to the links tab on a detailed work item, and add that person as a subscriber.  You can add yourself or other people as subscribers to a work item.  Whenever a work item is modified, all subscribers get an email that notifies them of the modifications made to the work item, as well as a link to the work item.  We end up using much less email, and having far fewer status meetings, because by using subscribers we are able to keep everyone informed on the progress of critical work items.

Backlog Management

Let’s face it: even though we try our best to estimate things, we’re often very wrong.  We’re not wrong because we are dumb, we’re wrong because we don’t have enough information.  So when using Scrum, we look at each story on our backlog, and we estimate the size/complexity of the story using story points.  Teams will use various methods for estimating story points (some use Planning Poker).  The way that we size the story is very easy and straightforward.  If you think that the user story can take a short amount of time to complete, then you give it a low number.  If you think it’s going to take a lot of effort to get the user story done, then you give it a high number.  As time goes on, you will get a better feel for estimating work in this way.  You can do all of this work while in that same sprint planning view, capturing your decisions and discussions in the work items as you go along.

There is heated debate about assigning story points to user stories.  Some people think that it is a measure of risk and complexity, some think it is a measure of difficulty, some think of it as a measure of effort.  Do what works for you.  I like Michael Tang’s quick view of story points, but you can find other viewpoints from Mike Cohn and others.  Just spend 30 minutes Googling this and educate yourself.

Once you have estimated the relative sizes of your stories, you are now ready to do some backlog grooming.  In the left hand nav bar, you will notice a link called “Backlog”.  Click on that link, and you should see a nice list of all of your stories.

Backlog Grooming Example

Now we can begin grooming this backlog.  Drag the most important story, the most critical one, to the top of the list.  Now drag the second most important story, and drop it below the first story.  Keep doing this until you have a ranked list of your user stories.

Clicking on a user story will bring up details about it, which you can use when discussing them.  Be careful when dragging and dropping user stories.  If you drop a user story on top of another user story, it will make that user story a child of the user story it was dropped on.  Sometimes you want to do this.  If you didn’t want to do this, then click on the small plus sign “+” on the left of the user story, which will display its children.  Then click the small “x” on the left of the child user story, to remove it from being a child.  If you drop it in-between two user stories, it will get ranked between them.

This is your ranked backlog.  This is a critical step for your team, because this indicates the relative priorities of the work that you have been asked to do.  When we get into sprint planning in the next section, you will see how we look at the top of the backlog when planning our next sprint.

Backlog grooming is an important and ongoing activity.  Some scrum teams will do backlog grooming once per sprint, others will do it monthly, and still others will try to do it on an ongoing basis.  The thing to remember is that stories on the backlog need to be reviewed periodically, to ensure that the estimates are accurate, and that the business and development environment have not changed and impacted their priority in relation to other stories on the backlog.

Sprint Planning

Now we get to the key piece of software delivery planning, Sprint Planning.  In your IDS browser, in the left hand nav bar, click on the link called “Sprint Planning”.  When you do this, on the left hand side you will see the ranked list of your backlog items, with the highest ranked items on top (I told you that backlog grooming was important!).  On the right hand side you will see an empty list, which represents your first sprint.  Now at this point, you would begin to drag user stories and work items from the backlog (on the left), to your sprint (on the right).  Go ahead and try it.  Move the top ranked user story from the backlog to your sprint.

When you do this, you will notice some changes.  Now in the right hand list, click on the link for “Team Progress”.  What should you see?  Well you should see that the story is now scheduled to be done during the first sprint.  You’ll also notice that the story points assigned to this story show up in the team progress (you now show as having completed 0 out of x story points).  Observant users will also see that the ranked list has updated, and that the next item on the backlog now has a ranking of 1.

Continue to assign stories to your first sprint for what your team can GUARANTEE that they will finish in the timeframe of the first sprint.  That means COMPLETELY FUNCTIONAL software that satisfies the user stories.  Do not over-promise – this is your team’s commitment to complete the work in the timeframe of the first sprint.  Keep in mind that you’ll also have to do testing, and that some bugs may pop up and need to be done as part of the work of this sprint.

The amount of story points that a team can normally complete within a single sprint is called the Team Velocity.  You need to have a gut feel for how much your team can do in your first few sprints, until you have enough history of doing Agile development to understand the velocity of your teams.  Once you have this history, it is easy to assign work to sprints up to three or four sprints in the future, because you have an understanding of how your teams perform.

There is a pile of articles and blogs on Sprint Planning.  What we have seen work the best is when teams commit to doing about 60% of their capacity in a sprint.  This leaves them time for unanticipated surprises, bugs, and other things. It also allows them to build confidence and meet their commitments.
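To make that arithmetic concrete, here is a small sketch.  The sprint history and capacity numbers are invented, and the 60% factor is just the rule of thumb above:

```python
# Illustrative numbers only: story points actually completed in recent sprints.
completed_points = [21, 18, 24, 19]
velocity = sum(completed_points) / len(completed_points)
print(f"Observed velocity: {velocity:.1f} points/sprint")

# A brand-new team has no history, so start from a gut-feel capacity estimate
# and commit only ~60% of it, per the rule of thumb above.
estimated_capacity = 30
initial_commitment = round(estimated_capacity * 0.60)
print(f"First-sprint commitment: ~{initial_commitment} points")
```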

If a team notices that they are halfway through their sprint, and that they are almost done, they can go back to the backlog (or even a future sprint) and pick one or more appropriately sized stories to complete early, in the remaining time that they have in this sprint.  Nobody has ever complained about getting functionality early.

Sprint planning is only half done at this point.  You now have a set of user stories that you are committing to implement in your sprint.  But HOW are you going to do it?  You’re going to have the entire team take those user stories and break them down into the discrete tasks needed to implement them.  Look at your sprint backlog now, and on the top story click on the icon for “Open child task breakdown” (you’ll see that label when you hover over the icon).

Sprint planning – creating the tasks for your user story

Once you click on the icon, a new screen appears.  This is your area to enter all of the child tasks for the story that you are breaking down.  Click on the plus sign to create a new task work item, which will be a child of the user story that you selected.

Starting your work item entry – use the icon to designate this as a task

As you enter some text to define your new task, you’ll also notice some icons below the text area.  Use the icon on the far left to set the work item type.  In our case, we want to make this a task.  Notice that this adds some additional “shorthand” to your task description.  Try the other icons, and become familiar with the attributes that you can change on your work item.

At this point you should be getting used to how work items work within a sprint.  You could easily decompose a large story into smaller stories, making them children of the larger story.  Don’t go crazy with parent-child relationships; use them where it makes sense.  It can be effective to have everyone looking at the same screen during these planning meetings, so the whole team is aware of the tradeoffs discussed, and the whole team has input on the estimates and work breakdown.

Use the icons to set attributes for your work item

Work with your team to define all of the tasks needed for the completion of the user story.  When you are done, you should have a list of tasks in the Open column.  Now click on the small down arrows at the bottom of the first task in the list, and you will see the detail information for that task.  You and your team should now estimate how much effort each task will take, and further clarify things if you need to.

Adding estimates to your tasks

Go through all of the tasks for this story, and then do the same exercise with your team for the remainder of the stories in your sprint.

At this point you are almost done with your Sprint planning.  You now need to go back and review what you have done, and check your estimated work against the calendar.  Navigate back to your original sprint planning page, and then click on the link for Team Progress.

Checking your sprint estimates

You will see a total of the estimates of all of your tasks in hours, the number of work items assigned to the sprint, and a total of the story points.  These are all represented as the second number (0/n) in those progress indicators.  Make sure that you have not overcommitted the sprint in terms of hours (a two-week sprint means 80 hours of work for each person on the team) or in terms of story points.  Also make sure that you have not undercommitted; if you have, look for additional stories on the backlog that you can assign to this sprint.  You’ll spend the remainder of your time in sprint planning making adjustments.
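The capacity math is worth spelling out.  Here is a rough sketch; the team size and task estimates are invented, and the undercommit threshold is my own assumption (the ~60% guideline from earlier), not a hard rule:

```python
# Sanity-check the sprint plan against the calendar. All numbers are illustrative.
team_size = 4
sprint_length_weeks = 2
capacity_hours = team_size * sprint_length_weeks * 40  # 80 hours/person for two weeks

task_estimates_hours = [24, 16, 32, 40, 12, 36, 20, 24]  # from the task breakdowns
planned_hours = sum(task_estimates_hours)                # 204 hours

if planned_hours > capacity_hours:
    print(f"Overcommitted: {planned_hours}h planned vs {capacity_hours}h available")
elif planned_hours < 0.6 * capacity_hours:
    print(f"Undercommitted: {planned_hours}h of {capacity_hours}h; pull more stories")
else:
    print(f"Looks reasonable: {planned_hours}h of {capacity_hours}h available")
```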

In a lot of the literature on Sprint Planning, you will see discussion of a Sprint Backlog (which is what you are building here) and a Product Backlog (which we just call a Backlog here).  The terminology can be confusing, but in the end the important thing is that you commit to the right amount of work for your sprint, and that you correctly define that work for your team.

When do we start doing work?

All of that effort, and we have not written a single line of code.  Ugh!  This can be tough for people eager to dive in and start prototyping and coding.  But planning is important – it helps us focus on the things that will bring value to our stakeholders and to the business.  There is nothing worse than working long and hard on something that never gets used, or that has no value.

The daily scrum

So now that everything is planned, we can kick off our first sprint.  Most Agile teams will have a daily standup meeting.  The daily standup (or scrum) is a chance for each member of the team to briefly talk about what they are doing, what they plan to do, and any impediments or obstacles to achieving those goals.  These are sometimes referred to as The Three Questions.  It isn’t a status meeting; it’s a chance to share technical news, and team members are encouraged to offer information that can help the rest of the team.  The daily standup can be run in a number of different ways, and there are a lot of mistakes that you can make in how you conduct these meetings.

Most people suggest running these daily standup meetings in front of the scrum taskboard.  You can do this in IDS by using the team work view.  Go to the original sprint planning view, and look at the nav bar on the left.  In the nav bar there is an entry for “Team’s Work”; click on it and you will see the team view of the work in your sprint.

Scrum task board – with viewing options highlighted

By choosing the viewing option icons at the top right of the view, you can look at your work as a list, as a table, or in lanes.  Most mature scrum teams will choose to view things in lanes.  In the lane view, you will notice a lane for Open work items (which have not been started) and one for In Progress work items (which are currently being worked on).  If you look closely, there is a horizontal slider at the bottom of this view; scroll to the right and you will see a lane for completed (Resolved) items as well.  Make minor updates to the taskboard as you go, as long as it doesn’t disrupt the meeting.  You can drag and drop work items between the lanes, and you can update their attributes if needed.
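Conceptually, the lane view is just the sprint’s flat list of work items grouped by state.  A tiny sketch, with invented task names:

```python
# Work items and their current states; the tasks are invented for illustration.
items = [
    ("Write login form markup", "In Progress"),
    ("Add password-reset API call", "Open"),
    ("Unit-test login validation", "Open"),
    ("Set up test database", "Resolved"),
]

# Group the flat list into lanes, like the taskboard's lane view.
lanes = {state: [task for task, s in items if s == state]
         for state in ("Open", "In Progress", "Resolved")}

for state, tasks in lanes.items():
    print(f"{state}: {', '.join(tasks) or '(empty)'}")
```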

During the standup meeting, members should be notifying others of their progress, what they plan to do next, and any risks or impediments to their progress.  This not only makes every team member accountable to the team (who is going to stand up and say, “Today I am going to finalize my fantasy football lineup, and watch YouTube videos of kittens”?), it also helps team members collaborate.  They can offer each other suggestions and shortcuts, and help each other avoid known technical issues.  You want to foster a collaborative team environment, not a structured status reporting environment (filled with Scrum zombies).

Ending the sprint

As you get closer to the end of the sprint, most of your work items should be in the Resolved column of your scrum board.  You should be seeing steady progress throughout the sprint.  Your sprint should end with a demonstration of running software, showing the new functionality that has been implemented.  Once the stakeholders sign off on the implementation, your sprint is almost done.

At this point you should bring your team together and have a sprint retrospective.  Create one more work item for your sprint (you should actually do this at the beginning of the sprint, so people don’t forget it at the end).  The type for the new work item should be “Retrospective”.  During the retrospective, the team should reflect on what they did well, what they struggled with, and what they can improve.  All of this information should be captured in the notes section of the Retrospective work item.  Once the retrospective is over, the Scrum Master may want to create some new work items to reflect things that the team can do in future sprints to improve overall team performance.

Often these retrospective observations will end up creating stories that do not serve an end user need, but instead serve a development team need.  Some people refer to this as technical debt, though there is some argument about how best to characterize this work.  Examples of stories coming from retrospectives include refactoring portions of the code, developing automated testing scripts, implementing build and delivery automation capabilities, and other related work.  Often these “tech debt” stories are the ones that are pulled from the backlog when a team finishes their planned sprint work early and is able to take on additional work within a sprint.

Wrapping Up

At this point we’ve done a quick overview of how scrum teams can use IDS to organize and track their sprint planning and execution, groom their backlogs, and plan in a transparent manner, so that all stakeholders and team members have a shared understanding of the goals and the progress of the scrum team during a sprint.

In our next article, we’ll touch on some more intermediate topics.  We’ll talk about how to use dashboards effectively with IDS, good ways to work with code in IDS, and some small tips and tricks that can help your teams use IDS to deliver end user results that will make your customers smile.