Administering Your IBM Cloud Account – A script to help

Note: This post has been edited and updated multiple times, and the most recent and accurate copy of this post can be found on the IBM developerWorks website, in a blog post titled, Administering Your IBM Cloud Account – A script to help.

As many of you know, if I have to do something more than two or three times, I tend to put in some effort to script it.  I know that I can do a lot from the command line with the IBM Cloud, but I don’t always remember the exact syntax for all of those bx command line commands.  I also like to have something that I can call from the command line, so I can script up common administrative scenarios.

Other Options

Some options already exist out there – I wasn’t aware of some of them, and none of them allow for scripted access.  One of the best that I have seen is the interactive application discussed in the blog post on Real-Time Billing Insights From Your IBM Cloud Account, written by Maria Borbones Garcia.  Her Billing Insights app is already deployed on Bluemix.  It’s nice – I suggest you go and try it out.  She also points you to her mybilling project on GitHub, which means that you can download and deploy the app for yourself (and even contribute to the project).  Another project that I have seen is the My Console project, which shows a different view of your IBM Cloud account.

Why Create a Script?

This all came home to me this past week as I began to administer a series of accounts associated with a Beta effort at IBM (which I’ll probably expand upon once the closed beta is complete).  I have 20 different IBM Cloud accounts, and I need to manage the billing, users, and policies for each of these accounts.  I can do it all from the console, but that takes time, and I can make mistakes.  I also often get questions from our customers like, “How do I track what my users are using, and what our current bill is?”.  That led me to begin writing a Python script that would allow you to quickly and easily do these types of things.

So I began to develop the IBM_Cloud_Admin tool, which you can see the code for in its GitHub repository.  Go ahead and download a copy of it from GitHub.  This is a simple Python script, and it just executes a bunch of IBM Cloud CLI commands for you.  If you go through a session and then look at your logfile, you can see all the specific command line commands issued, and see the resulting output from those commands.  This allows you to do things in this tool, and then quickly look in the log file and strip out the commands that YOU need for your own scripts.

How To Use The Script

To run the script, you can just type in:

python IBM_Cloud_Admin.py -t apiKey.json

The script has a few different modes it can run in; some typical invocations are shown after this list.

  • If you use the -t flag, it will use an API Key file, which you can get from your IBM Cloud account, to log into the IBM Cloud.  This is the way that I like to use it.
  • If you don’t use the -t flag, you’ll need to supply a username and password for your IBM Cloud account using the -u and -p flags.
  • If you use the -b flag (for billing information), then you will run in batch mode.  This will get billing information for the account being logged into, and then quit.  You can use this mode in a script, since it does not require any user input.
  • If you don’t use the -b flag (for billing information), then you will run in interactive mode.  This will display menus on the command line that you can choose from.
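
Putting those flags together, here are some typical invocations.  The user name, password, and key file names here are just placeholders – substitute your own:

# Interactive mode, logging in with an API key file (my preferred method):
python IBM_Cloud_Admin.py -t apiKey.json

# Interactive mode, logging in with a username and password:
python IBM_Cloud_Admin.py -u somebody@example.com -p MyPassword

# Batch mode – log in with an API key, get billing information, and quit.
# No user input is needed, so you can call this from other scripts or a cron job:
python IBM_Cloud_Admin.py -t apiKey.json -b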

The Output Files

There are a number of output files from this tool.  There is the IBM_Cloud_Admin.output.log file, which contains a log of your session and will show you the IBM Cloud command line commands issued by the tool, and the responses returned.  This is a good way to get familiar with the IBM Cloud command line commands, so you can use them in custom scripts for your own use. 
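
Since every command and response is logged, it is easy to pull the raw commands back out of a session.  A one-liner like this works, assuming (as I believe is the case) that the bx commands appear verbatim in the log:

# List the IBM Cloud CLI commands from your last session:
grep "bx " IBM_Cloud_Admin.output.log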

You may also see files with names like MyProj_billing_summary.csv and MyProj_billing_by_org.json.  These are billing reports that you generated from the tool.  Here is a list of the reports, and what they contain.

  • MyProj_billing_summary.csv – this CSV file contains billing summary data for your account for the current month.
  • MyProj_billing_summary.json – this JSON file contains billing summary data for your account for the current month.  It shows the raw JSON output from the IBM Cloud CLI.
  • MyProj_billing_by_org.csv – this CSV file contains billing details for your account, split out by org and space, for the current month.
  • MyProj_billing_by_org.json – this JSON file contains billing details for your account, split out by org and space, for the current month.  It shows the raw JSON output from the IBM Cloud CLI.
  • MyProj_annual_billing_summary.csv – this CSV file contains billing summary data for your account for the past year.
  • MyProj_annual_billing_summary.json – this JSON file contains billing summary data for your account for the past year.  It shows the raw JSON output from the IBM Cloud CLI.
  • MyProj_annual_billing_by_org.csv – this CSV file contains billing details for your account, split out by org and space, for the past year.
  • MyProj_annual_billing_by_org.json – this JSON file contains billing details for your account, split out by org and space, for the past year.  It shows the raw JSON output from the IBM Cloud CLI.

Use the JSON output files as inputs to any further processing that you want to do on your IBM Cloud usage data.  The CSV files can be used as inputs to spreadsheets and pivot tables, which can show you details on usage from an account perspective, as well as from an organization and space perspective.
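
For example, if you run the tool against each of your accounts, a short shell script can combine the resulting summary CSV files into a single input for a pivot table.  This is just a sketch – it assumes that each CSV file begins with a single header row, and you may need to adjust the wildcard to match only the files that you want:

#!/bin/bash
# Combine per-account billing summary CSV files into one spreadsheet input.
# Assumes each CSV file starts with a single header row.
head -n 1 MyProj_billing_summary.csv > combined_summary.csv
for f in *_billing_summary.csv; do
  tail -n +2 "$f" >> combined_summary.csv   # skip each file's header row
done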

Getting Your API Key File

I’ve mentioned the API key file a couple of times here.  If you are not familiar with what an API Key file is, then you’ll want to read this section.  An API Key file is a small text file containing some JSON-based information which, when used with the IBM Cloud command line tool, allows anyone to log into the IBM Cloud environment as a particular user, without having to supply a password.  The API Key file is effectively your combined username/password.  Because of this, do NOT share API key files with others, and you should rotate your API keys periodically, just in case a key file has become compromised.

Getting an API Key on IBM Cloud is really easy.

  • Log into the IBM Cloud, and navigate to your account settings in the upper right hand corner of the IBM Cloud in your web browser. Select Manage > Security > Platform API Keys.
  • Click on the blue Create button.
  • In the resulting dialog, select a name for your API Key (something that will tell you which IBM Cloud account the key is associated with), give a short description, and hit the blue Create button.
  • You should now see a page indicating that your API Key has been successfully created. If not, then start over again from the beginning. If you have successfully created an API Key, download it to your machine, and store it somewhere secure.

Note: A quick note on API Keys.  For security reasons, I suggest that you periodically destroy API Keys and re-create them (commonly called rotating your API keys or access tokens).  That way, if someone has gained access to your data by having one of your API keys, they will lose that access.
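
You can even script the rotation itself with the bx CLI.  The sketch below is just an outline – the key and file names are made up, and you should check the CLI help for bx iam api-key-create for the exact syntax in your version of the CLI:

#!/bin/bash
# Log in with the current key, create a replacement, then delete the old key.
# Key names and file names here are placeholders.
bx login --apikey @apiKey.json
bx iam api-key-create MyAccountKey-new -d "Admin key for this account" --file newApiKey.json
bx iam api-key-delete MyAccountKey-old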

Other Tasks

Do you have other administrative tasks that you would like to see the tool handle?  Find a bug?  Want to help improve the tool by building a nice interface for it?  Just contact me through the GitHub repository, join the project, and add issues for problems, bugs, and enhancement requests.

A Final Thought

This script is a quick, hacked-together Python script – nothing more and nothing less.  The code isn’t pretty, and there are better ways to do some of the things that I have done here – but I was focused on getting something working quickly, not on efficiency or Python coding best practices.  I would not expect anyone to base their entire IBM Cloud administration on this tool – but it’s handy to use if you need something quick, and cannot remember those IBM Cloud command line commands.


Deploying Production Cloud Applications – A Readiness Checklist

I just had a conversation today with my VP (Rob Sauerwalt – check him out on Twitter – time to do some shameless kissing up to my management team) about a recent internal communication that we both saw.  It was someone looking for a “readiness checklist” for the deployment of an application on the IBM Cloud.  Rob and I both agreed that this seems pretty simple, and we came up with a quick checklist of things to consider.

Now this list is not specific to the IBM Cloud; it’s pretty generic.  It’s just a quick checklist of things that you will want to make sure that you have considered, BEFORE you deploy that cloud based application into a production environment.  I am an Agile believer, so I would suggest that you address these checklist items in the SPIRIT of what they are trying to do, and that you do what makes sense.  This means that each one of these areas does not need to be a 59 page piece of documentation.  What you want to do is provide enough information so that the poor guy who takes your job after you get promoted is able to be effective, and can understand and maintain the application or system.

If you have suggestions about other things that should be on this list, please drop me a line and let me know.  I would love to add them to the list, and make this generic deployment readiness checklist even better.

Production Readiness Checklist

The Basics

⊗ Name and General Description of the Application – this includes the purpose of the application and the number of users that are anticipated to use the application.  Also have an idea of the types of users.  Is it for the general public?  Only for certain roles within our organization?  Is it only for your customers?  Do this in two to three paragraphs – anything more is adding complexity.

⊗ Description of Needed Software/Hardware/Cloud Resources – a list of the needed software packages, and the cloud resources needed to run the application.  Do you use third party utilities or libraries?  Do you run on Cloud Foundry buildpacks?  Virtual machines?  Do you use Cloud services for database resources?  Often an architectural diagram is useful to help other people understand the system at a high level.  This should be done AS you build – so you can simplify things.  Are your developers using different libraries to accomplish the same thing?  Get them to standardize.  Reduce your dependencies, reduce your complexity, and you improve your software quality.

DevOps Considerations

⊗ Operating Systems and Patching Requirements – do you have specific OS requirements?  Do you require a particular framework to run properly (like .NET, Eclipse, or a particular Cloud Foundry buildpack)?  What OS versions have you tested and validated this application with – and do all of your components need to be on the same OS version?  This becomes important when fixes get deployed to containers, virtual machines get upgraded, and maintenance activities are done.

⊗ Installation and Configuration Guidelines – you should be deploying your application in some automated manner.  So your deployment and promotion scripts should be the only guide that you need… except when they aren’t.  Take the time and DOCUMENT those scripts – explain WHAT you are doing and WHY, so your application can easily be reconfigured or deployed in different ways in the future.

⊗ Back-up, Data Retention and Data Archiving Policies – let your operations people know what data needs to be archived and retained.  How often do systems need to be backed up?  How will services be restored in the event of a crash?  Explain WHERE and HOW data needs to be retained.  Explain what your DEVELOPMENT teams need to review on a periodic basis.  This can be the biggest headache for development teams, because these are often scenarios that they have not considered.  Backup plans are not sufficient; they need to be executed at least once before you go into production – so you are sure that they are valid and that they work.

⊗ Monitoring and Systems Management – This includes runbooks – what do we need to do while the application is running?  Do we need to take the logs off of the system every day and archive them?  Or do we just let logs and error reports build up until the system crashes?  Should I monitor memory and heap usage on a daily basis?  Should I be monitoring CPU load?  Who do I notify if I see a problem, and what is a “problem”?  (CPU at 50%?  CPU running at 20% with spikes to 100%?)  How will this application normally be supported?  You may not have complete information and a full definition of “problems” when you begin, but define what you can and acknowledge that things will change as time goes on.

⊗ Incident Management – This details how you react to application incidents.  These could be bugs, outages, or both.  In the case of an outage, who needs to be called, and what actions should they take to collect needed data and to get the application back up and running?  What logs are needed, and what kind of data will aid in debugging issues?  Who is responsible for application uptime TODAY (getting things back on track and running), and who is responsible for application uptime TOMORROW (finding root cause, fixing bugs, making design changes if needed, etc.)?

⊗ Service Level Documentation – This is the “contract” between you and your customers.  How often will your application be down for maintenance?  If your application is down, how long before it comes back up?  Are there any billing or legal ramifications from a loss of service?  Do your customers get refunds – or cash back – when your Cloud application is unavailable?

⊗ Extra Credit – DevOps pipeline – you need to have an automated pipeline for the deployment of code changes into well defined development, test, and production environments.  You need to have a solid set of policies and procedures for the initiation and automation of these deployments.  Who has authority to deliver to test environments?  Production environments?

Software Architecture Considerations

⊗ Key Support & Maintenance Items – the team that built this thing knows where the weak spots are – share that knowledge!  Where does the team know that “tech debt” exists – and how is that impacting your application?  This information will help the teams maintaining and upgrading your application.  They will be able to do this with knowledge about how the application works, and why certain architectural choices were made.

⊗ Security Plan – Everyone is worried about the security of their applications and data on the cloud.  You need to be sensitive to this when deploying cloud based applications.  Your stakeholders and users will want to know that you have considered security, and that you are protecting their data from being exposed, stolen, or used without their knowledge/consent.

⊗ Application Design – This should include some high level description of your use case, a simple flowchart and dependencies.  Give enough detail so someone can easily get started in maintaining your application code, but not so much detail that you waste time and ultimately end up with documentation that does not match the code.

Is That Everything?

That’s not everything, but it is a good minimal list of things that you should have considered and/or documented.  Most applications need some sort of a support plan – who handles incoming problem tickets from customers?  Do you have a support process for your end users?  In your own environments and business context, you may have other things that need to be added to this list.  Do you need to check for compliance with some standard or regulation?  What are your policies for using Open Source software?

So this list is not meant to be exhaustive – but it is designed to make you think, and to help you ensure higher quality when deploying your Cloud applications.

Happy Holidays for 2017

With the end of the year quickly approaching, it is a great time to look back on the past year, and to look forward in anticipation for what is coming in 2018.

2017 was an interesting year.  I saw an explosion in the development of chatbots of various different types.  Some were very simple, others used both Watson Conversation and the Watson Discovery service to provide a deeper user experience – with an ability to answer both short tail and long tail questions.  I saw a huge uptick in interest in basic Cloud approaches, and a lot of interest in containers and Kubernetes.  I expect that both of these trends will continue into 2018.

In 2018 I expect to see the IBM Cloud mature and expand in its ability to quickly provide development, test, and production computing environments for our customers.  I also expect that more people will become interested in hybrid cloud approaches, and will want to understand some best practices for managing these complex environments.  I am also excited about some of the excellent cognitive projects that I have seen, which could soon be in production for our customers.  Finally, I expect that some of our more advanced customers will be looking at how cognitive technologies can be incorporated into their current DevOps processes, and how these processes can be expanded into the cloud.

I hope that your 2017 was a good one, and I hope that you have a happy and safe holiday season.

Hurray for IBM Cloud!! Um, where did my stuff go?

I just went through an issue with a customer, and it’s a somewhat common issue, so I figured that I would do a quick blog post on it.

IBM recently decided to rebrand our cloud from what we commonly referred to as Bluemix; we now refer to it as the IBM Cloud.  You may have noticed the changes to the UI, and some new capabilities (like resource groups!).

Some of these changes have caused some of our customers to “lose” access to some of their data on Bluemix the IBM Cloud (see, even we struggle with the changes in names).  These customers claim that they cannot see some of the organizations, spaces, and services that they used to have.  DON’T PANIC!  Your work has not been lost.  What has happened is that as IBM has collapsed things down to a single IBM Cloud user (where you may have previously had a SoftLayer user and a Bluemix user), you now have access to two different accounts from your IBM Cloud web interface.

Fixing the Issue

So just go and look at your profile in the IBM Cloud UI.  It is the little person icon in the upper right hand corner of your browser.

Now click on the little symbol under Account, and you will notice that you now have access to two different accounts.  Some of your artifacts will be in one account, and others will be in the second account.  You can switch context here in the UI so you can see what is in each account.  Presto!!!  Mystery solved, and now you can go back to being insanely productive working out on the IBM Cloud.

Monitoring Bluemix Usage and Spending

Note: This post is also published on developerWorks, as Monitoring Bluemix usage and spending.  Please refer to that article to catch any updates.

I have been spending the summer working with a number of different Bluemix and Watson customers, and one question seems to come up quite frequently.  It has a lot of variations, but it all boils down to this:

“How much of the Bluemix and Watson services am I using, and how can I monitor this?”

This is pretty simple to do, and you can even automate it yourself.  So it’s worthy of a quick blog post.  First let’s start with the interactive monitoring of your usage.

Checking Bluemix Usage

First you’ll need to log into the Bluemix platform, using your IBM ID.  When the main screen comes up, you’ll see the account options in the upper right of your browser.  Click on “Manage”, and your options will look like this:

If you then select “Billing and Usage”, and then select “Billing”, you will be taken to a screen that will show the current status of your Bluemix subscription (if you have one).  It will show how much you have already consumed, as well as how much of your subscription remains.  It should look similar to this:

You can scroll down through this report to see more details.  You can use this same method and select “Usage” instead of “Billing” to see your current month’s usage, and the specific usage of any of the available Bluemix services.  There are other things that you may be interested in as well.  Check out the Bluemix Docs on Viewing your Usage for more information.

Automating the Process

You can also see usage (although not billing) information by using the Bluemix CLI (Command Line Interface).  The two commands that you will be most interested in are “bx billing account-usage” and “bx billing orgs-usage-summary”.  A small GitHub project with a command line tool which will dump your account information (using those commands) is called bmxusagetracking.  Go out there and grab the code – and then modify it to suit your own needs.  The script is simple – it should take no more than 5 minutes to grab it and understand what it is doing and how it is doing it.
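
If you just want the raw data from your own script, the heart of it is only a few lines.  Here is a minimal sketch – the API key file name is a placeholder, and the exact flags may vary with your CLI version:

#!/bin/bash
# Log in non-interactively, then dump usage data for the account and its orgs.
bx login --apikey @apiKey.json
bx billing account-usage > account_usage.txt
bx billing orgs-usage-summary > orgs_usage_summary.txt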

I am also looking at creating a Python version of this in the same project area – since I know that some of you would much rather do this in Python – so you can manipulate the returned data and make it more useful.  I invite anyone who wants to contribute to the project and improve it, to do so.

Using Bluemix DevOps Services to Support Multiple Deployments

In my earlier blog post on using Bluemix to deploy a simple Cloud Calendar, I added a follow-up a day or so later that discussed using the Deploy to Bluemix button for an even Easier Cloud Calendar.  Well, my post has gotten some responses from within IBM, with various teams wanting to use this simple cloud based calendar as a widget that provides a team calendar capability on their RTC dashboards.  So now I have a few different teams that would like me to deploy this solution for them.

Well, I started out just showing people how they could do this themselves on Bluemix, because it really is quite easy.  However, some of the people asking for this are not technical types; they’re business types or (worse) management types.  They are afraid or unable to do this for themselves.  I’ll address the fear of cloud in a different blog post in the future.  The main thing here is that I ended up using the same code (or similar code) to deploy about 4 different types of calendars for 4 different groups within IBM.

How do I manage to do this without driving myself crazy?  I use the DevOps services available to me in Bluemix, and I configured a delivery pipeline that allows me to build, test, and then deploy four different variants of the same basic application.  The project was already out there in the Bluemix hosted Git repository; what I needed to do was make some small changes to how I deploy, and some small changes for each flavor (or variant) of my application.

Overview

The original application was a Node.js app, and was pretty simple.  For details on it, view my original blog post called How About a Generic Calendar in the Cloud?.  To deploy this project, I went into my Test Calendar project and clicked on the Build & Deploy button.  This brought me to the Build and Deploy pipeline area.  There I saw a simple build already set up for my Node.js project.  This simple build executes every time I push new changes to the master branch of my Git repository.  That’s perfect – I now have a continuous build (not to be confused with Continuous Integration) environment set up for my project.

Simple Deploy

Next I need to add a new stage to my project.  So I add a new stage to deploy my test configuration.  This stage takes the outputs of the build stage as inputs, and deploys these out to my test calendar environment.  This “test environment” is a simple application/MongoDB pair that I have out in the Bluemix staging environment.  Now since I have multiple instances that I want to deploy, under different names, in different Bluemix environments/spaces, I will need different manifest.yml files.  The manifest.yml file indicates where the application gets deployed, and the services that it will use.

So I decide to create multiple manifest.yml files, with a naming convention that indicates which deployment they belong to.  So I have a manifest_TestCalendar.yml file with the settings for my TestCalendar (which is my test configuration), and a manifest_ToxCalendar.yml file with the settings for my team calendar (which is my production configuration).  I actually have five of these, but I’ll keep it simple and just highlight the two for the purposes of explaining things here.  So my manifest_TestCalendar.yml file looks like this:

applications:
- name: TestCalendar
  host: TestCalendar
  disk_quota: 512M
  memory: 512M
  instances: 1
  path: .
  domain: stage1.mybluemix.net
  random-route: true
  services:
  - mongolab-TestCalendar
declared-services:
  mongolab-TestCalendar:
    label: mongodb
    plan: 100

and my manifest_ToxCalendar.yml file looks the same, except for the host line (which specifies “ToxCalendar”), the name line (which specifies “ToxCalendar”), and the declared services (which name a different MongoDB instance).  Note that the name of the MongoDB service instance MUST match the name of the service as shown in Bluemix.  You’ll need to go and create that service first, before you try using this to spin up new instances of your application.  Also note that the domain here points at an internal IBM instance of Bluemix; if you do this in the public Bluemix, you’ll use mybluemix.net as the domain.

Configuring the Test Stage

Back to our deployment pipeline.  When I look at the pipeline, I decide to leave the first stage of deployment set to automatically deploy by leaving the stage trigger set to “Run jobs when the previous stage is completed”.  This means that this stage will run when the build stage successfully completes.  Now since I want to have a continuous integration environment, the target of this deploy should be my test instance.  What I will have isn’t really a true continuous integration environment, as I don’t have any automated tests being run as part of the deployment, but you can see how you can easily modify the stage to support this.

So we’ve decided to do an automated deploy of my test instance on any build.  Go click on the gear icon in the stage, and choose Configure Stage.  You will now see a dialog box where you can configure your DevOps pipeline stage.  In this dialog, check the Input tab to make sure that the stage is set to be run automatically, by making sure that the Run jobs when previous stage is completed option is selected.  Then click on the Jobs tab and make sure that you have a Deploy job selected (if you don’t, then go and create one).  The Deployer Type should be set to Cloud Foundry, and the Target (where this should be deployed) should be your Bluemix environment.  Once you select the target, your organization and space drop-down selections should get populated with the organizations that you are a part of, and the spaces available in those organizations (for more on organizations and spaces in Bluemix/CloudFoundry, read Managing your account).  Also enter your Application Name in the field provided, and make sure that this application name matches the name of the application in your Bluemix console, as well as the application name that you used in the manifest.yml file.  In this case, it is the name of the application that we used in the manifest_TestCalendar.yml file, which was “TestCalendar”.

Finally you will see a grey box with the deploy script.  Now in order to make sure that we use the manifest_TestCalendar.yml file to deploy this, we have to copy this over the existing manifest.yml file.  It is a simple Unix/Linux copy command, and your resulting deploy script should now look like this:

#!/bin/bash
# Move proper manifest file into place
cp manifest_TestCalendar.yml manifest.yml
# push code
cf push "${CF_APP}"
# view logs
#cf logs "${CF_APP}" --recent

The result of this is that we copy the version of the manifest file that we want into place, and then CloudFoundry just does the rest.  Go ahead and make a simple code change (just update a comment, or the README file), commit and push it (if you’re using Git), and watch as the application gets automatically built, and then the test calendar gets deployed.
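
In other words, the entire test deployment now hangs off of an ordinary Git push, something like:

# Any push to the master branch kicks off the build stage, which in turn
# triggers the automated deploy to the test environment:
git commit -am "Trivial change to exercise the pipeline"
git push origin master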

Configuring the Production Stage

Now if we want to deploy using the same mechanism for our production instance, the process is simple.  We just click on the gear icon in the stage, and choose Clone Stage.  This creates a new stage, just like our stage to deploy our test instance.  Click on the gear icon for this new stage.  You will now see a dialog box where you can configure your cloned DevOps pipeline stage.  In this dialog, check the Input tab to make sure that the stage is NOT set to be run automatically, by making sure that the Run jobs only when this stage is run manually option is selected.  Then click on the Jobs tab and make sure that you have a Deploy job selected.  The Deployer Type and the Target (where this should be deployed) should remain the same.  Once you select the target, your organization and space drop-down selections should get populated with the organizations that you are a part of, and the spaces available in those organizations.  If you want to deploy your production instance to a different organization and space, change the settings for these fields.  Enter your Application Name (for the production application) in the field provided, and make sure that this application name matches the name of the application in your Bluemix console, as well as the application name that you used in the manifest file.  In the case of my production instance, it is the name of the application that we used in the manifest_ToxCalendar.yml file, which was “ToxCalendar”.

Finally you will see a grey box with the deploy script.  Now in order to make sure that we use the manifest_ToxCalendar.yml file to deploy this, we have to copy this over the existing manifest.yml file.  We’ll need to modify the cloned deploy script to reflect this change:

#!/bin/bash
# Move proper manifest file into place
cp manifest_ToxCalendar.yml manifest.yml
# push code
cf push "${CF_APP}"
# view logs
#cf logs "${CF_APP}" --recent

The result of this is that for the production stage, we copy the version of the manifest file that we want (manifest_ToxCalendar.yml) into place for use by CloudFoundry.  Since this has been configured as a manual stage, you now will only deploy a new version of your production calendar when you manually drag a new build to the stage.

What does it mean?

Now we have a deployment pipeline built that allows us to do automated builds and deployments of changes to our calendar application in the test environment.  It should look something like the picture on the right.

Once we are satisfied with our changes, we can then just drag the appropriate build from the App Build stage, and drop it on the stage where we deploy to production (Deploy to Bluemix Staging – Production in the picture).  This will start the deployment process for the production instance of our calendar application.

What about different variants of our calendar application?

Now we can use this same technique of having multiple file variants to support different deployed “flavors” or variants of our calendar application.  I do the same thing with the front end code in the index.html file of the calendar application.  This file is in the public subdirectory.  I can create two variants of this file, and save them in different files (say index_ToxCalendar.html and index_TestCalendar.html).  The change to the index_TestCalendar.html file is the addition of a single line of code, which will display a heading on the calendar.  The code snippet looks like this:

<body onload="init();">
    <div id="scheduler_here" class="dhx_cal_container" style='width:100%; height:100%;'>
    <center><b>TEST CALENDAR</b></center>
        <div class="dhx_cal_navline">

The single added line just puts a title line (“TEST CALENDAR”) in the application.  For other instances, I can make similar changes to similar files.  To get the deployed application to use the correct version of the index.html file, I need to make one more modification to the deploy script.  It should now look like this:

#!/bin/bash
# Move proper manifest file into place
cp manifest_TestCalendar.yml manifest.yml
# Move proper index.html file into place
cp public/index_TestCalendar.html public/index.html
# push code
cf push "${CF_APP}"
# view logs
#cf logs "${CF_APP}" --recent

Now we have a specific variant for our test calendar, deployed to the test environment.  You would then do similar changes and file variants for all of the other deployed variants of this that you want to deploy and support.

An Easier Cloud Calendar

Timing is… everything.  About 4 hours after I did my last post on How About a Generic Calendar in the Cloud?, I saw a post from one of my team members, Sean Wilbur, called Streamlining Your Bluemix Project for One Button Sharing.  It was a great post, and once I followed the directions that Sean outlined, I was able to add a simple little “Deploy to Bluemix” button to my project.

So now if you would like to get a copy of my Generic Calendar project to play with for yourself, it is really easy.  Just make sure that you have a Bluemix account, and that you have a linked DevOps Services account.  Then navigate to my project in DevOps Services (it’s the dtoczala|ULLCloudCalendar project).  Once there, look at the README.md file and find the “Deploy to Bluemix” button.  It looks like this:

[Image – the Deploy to Bluemix button]

Just press that button and you will get a project created in DevOps Services that is a fork of my original project.  The automatic creation of the project will throw an error during the deployment, but you will be able to easily fix this.  The error is due to a problem in the manifest.yml file: we are currently unable to create and bind services for our application through the manifest (see Sean’s question on this in the forum).  You can easily fix this by doing three things:

  1. In your DevOps services console, configure your Deploy stage – In your newly created project, press the Build & Deploy button, and then configure the deploy stage.  You will add a deploy job to the deploy configuration that will do a “cf push” of your application.  Then try executing it.  It will still fail (because our MongoDB service is not present), but it will create a new Bluemix application for you.  This is your version of the ULL Cloud Calendar app.
  2. In your Bluemix console, add and bind the MongoDB service – This is straightforward.  Just add the MongoDB service and make sure to bind it to your new application (or do it from the command line, as shown in the sketch after this list).  When you add the service, Bluemix will ask if you would like to restage your application.  Answer yes, and wait for the application to be deployed again.
  3. In your Bluemix console, click on the link for your new app – just click on the link for the route to your new application. 
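
If you prefer the command line for step 2, the equivalent Cloud Foundry commands look something like the sketch below.  The service label and plan (mongodb, 100) come from the manifest that we saw earlier; the application and service instance names are placeholders, so use the names from your own deployment:

#!/bin/bash
# Create the MongoDB service instance, bind it to the app, and restage it.
# "ULLCloudCalendar" and "mongolab-ULLCloudCalendar" are placeholder names.
cf create-service mongodb 100 mongolab-ULLCloudCalendar
cf bind-service ULLCloudCalendar mongolab-ULLCloudCalendar
cf restage ULLCloudCalendar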

Now once that little issue with the manifest.yml is cleared up, you will be able to share your Bluemix applications with the press of a button.  Bringing up applications and capabilities in the cloud is getting easier and easier!