Help for Managing Your Watson Bluemix Retrieve and Rank Instance

Every once in a while I see something that has the potential to save people time and aggravation.  When I see these things, I like to blog about them to help other members of the Watson cognitive community.  Yesterday I saw one of those things.

OK – I actually saw it a couple of weeks ago, while it was being tested, but it just became available to the public yesterday on the Cognitive Catalyst.  This is a sub-project for Watson under the larger IBM Open project.  If you navigate to the bottom of the IBM Open project page, you will see a variety of resources available to developers using Watson – from the various Watson SDKs, to the Cognitive Catalyst project.

For those of you who are not aware, the Cognitive Catalyst site hosts a bunch of open source tools and utilities that developers can use to help them manage their Watson services deployed on Bluemix.  All of the code for the Cognitive Catalyst projects is available via the Cognitive Catalyst GitHub project, and we encourage anyone to contribute to the projects, or to submit projects of their own.  There are some really good tools and utilities out there already.

The latest addition to these tools is the ignite-electron tool.  The author of this tool, Andrew Ayres, is a frequent contributor to the Cognitive Catalyst site.  The ignite-electron tooling provides a UI which simplifies the creation and management of Retrieve & Rank (R&R) components (Solr clusters, configurations, collections, and rankers) while also providing an easy to use interface for the creation of ground truth.  This eliminates the need to use a bunch of curl commands (which you probably have in a text file and cut/paste into terminal windows), and provides a visualization of the service to reduce confusion.  The tool also eliminates the inevitable errors that come from copying and pasting cluster, ranker, and document IDs.
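For context, here is the flavor of curl commands the tool replaces (the gateway URL follows the documented R&R API pattern, but treat the exact paths as an approximation; the credentials, cluster ID, and collection name are placeholders you would substitute):

```shell
# List the Solr clusters in your R&R service instance
curl -u "$USERNAME:$PASSWORD" \
  "https://gateway.watsonplatform.net/retrieve-and-rank/api/v1/solr_clusters"

# Query a collection in one of those clusters
curl -u "$USERNAME:$PASSWORD" \
  "https://gateway.watsonplatform.net/retrieve-and-rank/api/v1/solr_clusters/$CLUSTER_ID/solr/$COLLECTION_NAME/select?q=watson&wt=json"
```

Multiply that by every cluster, configuration, collection, and ranker operation, and the appeal of a UI becomes obvious.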

That alone would be pretty valuable, but the tool will also help reduce the amount of time required for a subject matter expert (SME) to create ground truth.  Currently an SME would need to repeatedly go through the process of issuing a curl command to query a collection, adding the question to a CSV file, copying document IDs into the CSV, and adding relevance scores.  With the ignite-electron tool the SME can provide a list of questions, and the tool will step through those questions, allowing the SME to simply assign relevance scores.  All of the POST requests, and the creation of the CSV file, are handled automatically by the tool.
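For reference, the ground truth that R&R training consumes boils down to a CSV in which each row pairs a question with document IDs and relevance scores – roughly like this (the document IDs and the 0–4 relevance scale here are illustrative):

```csv
"What is the maximum coverage amount?",doc_4023,4,doc_1187,1,doc_2250,0
"How do I file a claim?",doc_3301,3,doc_4023,0
```

Assembling rows like these by hand, one curl query at a time, is exactly the tedium the tool takes away from the SME.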

This is one of those simple utilities and tools that should be in the toolset of any developer or subject matter expert that uses the IBM Watson Retrieve & Rank capability.  It will make the management, maintenance, and training of your R&R instance a lot easier.

What Does Watson Really Do?

Moving to a new role is always exciting, but always tough.  There is a learning curve: new technology, new terminology, new business processes, and new people to meet and work with.  Part of starting as the Manager of the Watson Architects was addressing the learning curve around the technology and the terminology.  Since I know that any customer or partner looking to use Watson will need to go through a similar exercise, I thought that it might make sense to capture some of the better sources of information that helped me.

Getting Started

Once I started my new job, the first question that I had to answer was, “What exactly is Watson?  What does it do?”  I had to have an answer to this question, because my wife, kids, parents, and friends were asking me, and I felt dumb when I couldn’t answer this simple question.  One of the better answers that I have found is an 8-minute video called IBM Watson: How it Works.  I’m not usually big on videos, and I really don’t like marketing fluff (I’m a Tech weenie, I have always been a Tech weenie), but this one is short and to the point.  There are shorter snippets of this information on the SmarterPlanet – What is Watson page that you can look at as well.

The video introduces some key terminology that is important to understand.  It spends the first couple of minutes introducing cognitive computing, and how this is fundamentally different from more traditional computing.  If you’re not a technical person, this is pretty easy to understand.  This piece is important for more traditional programmers to understand, because it dispels the notion that this is just a turbo-charged expert system.  It is actually more than that.

It also talks about the differences between cognitive computing and simple search capabilities.  The section on natural language processing is really interesting.  It should also help you begin to understand some of the concepts that are important to cognitive computing (like data curation, corpus, machine learning, training, and question/answer pairs).

I also found an interesting article on No UI is the New UI, which covers some interesting ground on what the author calls artificial intelligence (which I call cognitive computing).  The processing of natural language, the conversion of speech-to-text, and the conversion of text-to-speech are all Watson services that you can access on the Bluemix dashboard.  Bluemix is the IBM Cloud platform that allows you to select particular services (or APIs), and combine them to create custom Cloud applications.  The Watson services and capabilities are all cloud enabled and cloud ready for you, and you can see a list of them in the Bluemix catalog.  Not all of them are intuitively obvious at first (“What is the Alchemy API?”), but if you click on the tiles for each service, you will get a synopsis of what that service offers, and you can then drill down into more information on the service.

Cognitive computing is rapidly maturing and is beginning to be seen in the products and services that you use today.  Understanding the practical capabilities and limitations of cognitive computing will be critical in the future.  Because of this, I plan on following up on some of the services themselves, and the implications of using these services, in future blog posts.

Evolution, Change, and Success

I am now transitioning to a new role, and leaving my old job behind. People always seem to take this job change thing with either:

  • Malicious glee – basically laughing and telling everyone that they have left behind “so long suckers!”, or
  • False modesty – with a carefully worded message telling everyone (and if I didn’t mention you I am so VERY sorry) what an honor it was to work with them, and that we should all get together for lunch sometime.

I will try to do neither (although I am sure my close friends will be able to provide examples of both of the above behaviors), and instead I will attempt to focus on what is in front of me.  For me, it is an exciting time, a time when I am eager to look ahead to new challenges and new experiences.  I think the thing to do when changing jobs/roles is to look back and see how you’ve done, and think about what makes people successful.  I was reminded of this by a recent LinkedIn post by Patrick Leddin, which outlined the Five Invaluable Behaviors of Top Performers.  I have read similar posts from other writers on performance, but his post made me reflect on the role that I had played as leader of the Jazz Jumpstart, Emerging Technologies, and DevOps/Cloud teams.  His five behaviors read like this:

  • Deliver Results; Don’t Just Pleasantly Accomplish Activities
  • Solve Problems; Don’t Just Point Them Out
  • Learn New Stuff; Don’t Just Be Comfortable
  • Experience the Customer’s World; Don’t Just Observe It
  • Provide Value That Is Not Easily Replaced; Don’t Just Do the Job

These all seem like very simple things, and they are.  The hard part is maintaining a focus on these areas as you deal with the day-to-day challenges that you and your team face.  The really hard part is putting this into practice as you go about doing the non-glamorous parts of your job.

So why write about this in the first place?  I do it as a way to remind myself to focus on these important areas as I move into my new role, and as a way to help the teams that I leave behind realize their value – and to remind them that they need to focus on these things even when I am not around.  I hope one of my new team members references this blog post in the future, while trying to convince me to change some decision I have made which doesn’t hold to these principles.

This will be the final post in this series, and I have started rebranding this blog.  That means new pictures (like my dog Jack) and new areas of focus.  I will still be here at the same old address, and you can still come here to find my past articles on the economics of software development, Agile development, the Jazz tools, DevOps, and Cloud technologies.  But from now on you will read blog posts about new technologies, like IBM Watson, and my observations on how to lead a successful team in launching new technologies.

Using Bluemix DevOps Services to Support Multiple Deployments

In my earlier blog post on using Bluemix to deploy a simple Cloud Calendar on Bluemix, I added something the next day or so that discussed using the Deploy to Bluemix button for an even Easier Cloud Calendar.  Well, my post got some responses from within IBM, with various teams wanting to use this simple cloud based calendar as a widget providing a team calendar capability on their RTC dashboards.  So now I have a few different teams that would like me to deploy this simple solution for them.

Well, I started out just showing people how they could do this themselves on Bluemix, because it really is quite easy.  However, some of the people asking for this are not technical types – they’re business types or (worse) management types.  They are afraid or unable to do this for themselves.  I’ll address the fear of cloud in a different blog post in the future.  The main thing here is that I ended up using the same (or similar) code to deploy about four different types of calendars for four different groups within IBM.

How do I manage to do this without driving myself crazy?  I use the DevOps services available to me in Bluemix, and I configured a delivery pipeline that allows me to build, test, and then deploy four different variants of the same basic application.  The project was already out there in the Bluemix hosted Git repository; all I needed to do was make some small changes to how I deploy, and some small changes for each flavor (or variant) of my application.
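The heart of those small per-deployment changes is a simple file-copy convention: each stage copies its variant’s file over the file name the tools expect before pushing.  The pattern can be sketched (and tried locally) like this – the temp directory stands in for the project checkout, and the manifest file names match the ones used later in this post:

```shell
#!/bin/bash
# Demonstrate the per-variant file selection used by each pipeline stage.
set -e

# Stand-in for the project checkout, with one manifest file per variant
workdir=$(mktemp -d)
cd "$workdir"
echo "name: TestCalendar" > manifest_TestCalendar.yml
echo "name: ToxCalendar" > manifest_ToxCalendar.yml

# Each stage picks its variant and copies it over the file cf expects
VARIANT=${1:-TestCalendar}
cp "manifest_${VARIANT}.yml" manifest.yml
cat manifest.yml
```

One script, parameterized by variant name, can then serve every stage in the pipeline.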

Overview

The original application was a Node.js app, and was pretty simple.  For details on it, view my original blog post called How About a Generic Calendar in the Cloud?.  To deploy this project, I went into my Test Calendar project and clicked on the Build & Deploy button.  This brought me to the Build and Deploy pipeline area.  There I saw a simple build already set up for my Node.js project.  This simple build executes every time I push new changes to the master branch of my Git repository.  That’s perfect – I now have a continuous build (not to be confused with Continuous Integration) environment set up for my project.

Simple Deploy

Next I need to add a new stage to my project.  So I add a new stage to deploy my test configuration.  This stage takes the outputs of the build stage as inputs, and deploys these out to my test calendar environment.  This “test environment” is a simple application/MongoDB pair that I have out in the Bluemix staging environment.  Now since I have multiple instances that I want to deploy, under different names, in different Bluemix environments/spaces, I will need different manifest.yml files.  The manifest.yml file indicates where the application gets deployed, and the services that it will use.

So I decide to create multiple manifest.yml files, with a naming convention that indicates which deployment they belong to.  So I have a manifest_TestCalendar.yml file with the settings for my TestCalendar (which is my test configuration), and a manifest_ToxCalendar.yml file with the settings for my team calendar (which is my production configuration).  I actually have five of these, but I’ll keep it simple and just highlight the two for the purposes of explaining things here.  So my manifest_TestCalendar.yml file looks like this:

declared-services:
  mongolab-TestCalendar:
    label: mongodb
    plan: 100
applications:
- name: TestCalendar
  host: TestCalendar
  memory: 512M
  disk_quota: 512M
  instances: 1
  path: .
  domain: stage1.mybluemix.net
  random-route: true
  services:
  - mongolab-TestCalendar

and my manifest_ToxCalendar.yml file looks the same, except for the host line (which specifies “ToxCalendar”), the name line (which specifies “ToxCalendar”), and the declared services (which name a different MongoDB instance).  Note that the name of the Mongo DB service instance MUST match the name of the service as shown in Bluemix.  You’ll need to go and create that service first, before you try using this to spin up new instances of your application.  Also note that the route here is pointing at an internal IBM instance of Bluemix, if you do this in the public Bluemix, you’ll use mybluemix.net as the domain.
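For comparison, a manifest_ToxCalendar.yml assembled per that description would look something like this (mongolab-ToxCalendar stands for whatever you named the second MongoDB service instance in Bluemix):

```yaml
declared-services:
  mongolab-ToxCalendar:
    label: mongodb
    plan: 100
applications:
- name: ToxCalendar
  host: ToxCalendar
  memory: 512M
  disk_quota: 512M
  instances: 1
  path: .
  domain: stage1.mybluemix.net
  random-route: true
  services:
  - mongolab-ToxCalendar
```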

Configuring the Test Stage

Back to our deployment pipeline.  When I look at the pipeline, I decide to leave the first stage of deployment set to automatically deploy by leaving the stage trigger set to “Run jobs when the previous stage is completed”.  This means that this stage will run when the build stage successfully completes.  Now since I want to have a continuous integration environment, the target of this deploy should be my test instance.  What I will have isn’t really a true continuous integration environment, as I don’t have any automated tests being run as part of the deployment, but you can see how you can easily modify the stage to support this.

So we’ve decided to do an automated deploy of my test instance on any build.  Go click on the gear icon in the stage, and choose Configure Stage.  You will now see a dialog box where you can configure your DevOps pipeline stage.  In this dialog, check the Input tab to make sure that the stage is set to be run automatically, by making sure that the Run jobs when previous stage is completed option is selected.  Then click on the Jobs tab and make sure that you have a Deploy job selected (if you don’t, then go and create one).  The Deployer Type should be set to Cloud Foundry, and the Target (where this should be deployed) should be your Bluemix environment.  Once you select the target, your organization and space drop-down selections should get populated with the organizations that you are a part of, and the spaces available in those organizations (for more on organizations and spaces in Bluemix/CloudFoundry, read Managing your account).  Also enter your Application Name in the field provided, and make sure that this application name matches the name of the application in your Bluemix console, as well as the application name that you used in the manifest.yml file.  In this case, it is the name of the application that we used in the manifest_TestCalendar.yml file, which was “TestCalendar”.

Finally you will see a grey box with the deploy script.  Now in order to make sure that we use the manifest_TestCalendar.yml file to deploy this, we have to copy this over the existing manifest.yml file.  It is a simple Unix/Linux copy command, and your resulting deploy script should now look like this:

#!/bin/bash
# Move proper manifest file into place
cp manifest_TestCalendar.yml manifest.yml
# push code
cf push "${CF_APP}"
# view logs
#cf logs "${CF_APP}" --recent

The result of this is that we copy the version of the manifest file that we want into place, and then CloudFoundry just does the rest.  Go ahead and make a simple code change (just update a comment, or the README file), commit and push it (if you’re using Git), and watch as the application gets automatically built, and then the test calendar gets deployed.
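If you have not driven the pipeline from Git before, the trigger is nothing more than an ordinary commit and push.  Sketched here against a throwaway local repository (in practice you would push to the Bluemix-hosted Git remote for your project; the identity settings are placeholders):

```shell
#!/bin/bash
# Any pushed commit to master kicks off the build stage; the change itself can be trivial.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "dev@example.com"   # placeholder identity
git config user.name "Dev"
echo "# Test Calendar" > README.md
git add README.md
git commit -qm "Touch README to trigger a build"
# Against the real remote you would follow with:  git push origin master
git rev-list --count HEAD
```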

Configuring the Production Stage

Now if we want to deploy using the same mechanism for our production instance, the process is simple.  We just click on the gear icon in the stage, and choose Clone Stage.  This creates a new stage, just like our stage to deploy our test instance.  Click on the gear icon for this new stage.  You will now see a dialog box where you can configure your cloned DevOps pipeline stage.  In this dialog, check the Input tab to make sure that the stage is NOT set to be run automatically, by making sure that the Run jobs only when this stage is run manually option is selected.  Then click on the Jobs tab and make sure that you have a Deploy job selected.  The Deployer Type and the Target (where this should be deployed) should remain the same.  Once you select the target, your organization and space drop-down selections should get populated with the organizations that you are a part of, and the spaces available in those organizations.  If you want to deploy your production instance to a different organization and space, change the settings for these fields.  Enter your Application Name (for the production application) in the field provided, and make sure that this application name matches the name of the application in your Bluemix console, as well as the application name that you used in the manifest.yml file.  In the case of my production instance, it is the name of the application that we used in the manifest_ToxCalendar.yml file, which was “ToxCalendar”.

Finally you will see a grey box with the deploy script.  Now in order to make sure that we use the manifest_ToxCalendar.yml file to deploy this, we have to copy this over the existing manifest.yml file.  We’ll need to modify the cloned deploy script to reflect this change:

#!/bin/bash
# Move proper manifest file into place
cp manifest_ToxCalendar.yml manifest.yml
# push code
cf push "${CF_APP}"
# view logs
#cf logs "${CF_APP}" --recent

The result of this is that for the production stage, we copy the version of the manifest file that we want (manifest_ToxCalendar.yml) into place for use by CloudFoundry.  Since this has been configured as a manual stage, you now will only deploy a new version of your production calendar when you manually drag a new build to the stage.

What does it mean?

Now we have a deployment pipeline built that allows us to do automated builds and deployments of changes to our calendar application to the test environment.  It should look something like the picture on the right.

Once we are satisfied with our changes, we can then just drag the appropriate build from the App Build stage, and drop it on the stage where we deploy to production (Deploy to Bluemix Staging – Production in the picture).  This will start the deployment process for the production instance of our calendar application.

What about different variants of our calendar application?

Now we can use this same technique of having multiple file variants to support different deployed “flavors” or variants of our calendar application.  I do the same thing with the code in the index.html file of the calendar application.  This file is in the public subdirectory.  I can create two variants of this file, and save them in different files (say index_ToxCalendar.html and index_TestCalendar.html).  The change to the index_TestCalendar.html file is the addition of a single line of code, which will display a heading on the calendar.  The code snippet looks like this:

<body onload="init();">
    <div id="scheduler_here" class="dhx_cal_container" style='width:100%; height:100%;'>
    <center><b>TEST CALENDAR</b></center>
        <div class="dhx_cal_navline">

The single added line just puts a title line (“TEST CALENDAR”) in the application.  For other instances, I can make similar changes to similar files.  To get the deployed application to use the correct version of the index.html file that I want, I need to make one more modification to the deploy script.  It should now look like this:

#!/bin/bash
# Move proper manifest file into place
cp manifest_TestCalendar.yml manifest.yml
# Move proper index.html file into place
cp public/index_TestCalendar.html public/index.html
# push code
cf push "${CF_APP}"
# view logs
#cf logs "${CF_APP}" --recent

Now we have a specific variant for our test calendar, deployed to the test environment.  You would then do similar changes and file variants for all of the other deployed variants of this that you want to deploy and support.

Basics for Upgrading ANY Software

Recently I became involved with a customer upgrading UrbanCode Deploy.  I want to share my experiences, some lessons learned, and some IT basics.

Our customer was looking to upgrade their UrbanCode Deploy to the latest version.  Doing this meant that they had to upgrade in steps, as outlined in the IBM Technote on Upgrading UrbanCode Deploy.  The customer understood this, and they began to work with the IBM support team on getting their upgrade done.  They had a simple plan which outlined the high level steps involved, and they had received a couple of patches to support issues specific to their environment.  I became involved when they were one week away from doing the upgrade on their production systems.

As I became more involved, I became increasingly alarmed at what I was seeing.  The migration plan was too simple – it had no step-by-step guidance on which values to select for configuration options, installation paths, or even server and environment names.  This left far too much room for interpretation, and made the upgrade prone to errors when executed in the production environment.  That leads us to our first general IT lesson:

General Lesson #1 – Any upgrade plan must be able to be executed by someone OTHER than the tool administrator or the author of the plan

One other factor that concerned me was that I saw no detailed “rollback plan”.  Part of the upgrade plan HAS to include what staff should do if the production upgrade goes bad.  An upgrade can fail due to power outages, lack of resources, or some other unforeseen circumstance.  You need to have detailed instructions (see General Lesson #1 above) on how to restore the existing environment if the upgrade fails.  Nobody likes to do this, but if the upgrade does fail for some reason, people will be under pressure and tired.  They need easy to understand and easy to execute instructions on how to restore the production environment.  This is our addition to the first lesson:

General Lesson #1a – Any upgrade plan without a “rollback” section instructing how to restore production to its pre-upgrade configuration is not complete
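A rollback section starts with a known-good capture of the current state.  As a minimal sketch – every path, the stop command, and the database dump here are assumptions to adapt for your site, not the official UrbanCode Deploy procedure, and stand-in temp paths are used so the sketch runs as-is:

```shell
#!/bin/bash
# Hypothetical pre-upgrade backup sketch for a tool like UrbanCode Deploy.
set -e
UCD_HOME=${UCD_HOME:-$(mktemp -d)/ibm-ucd/server}   # your real install dir, e.g. /opt/ibm-ucd/server
mkdir -p "$UCD_HOME/bin"
echo "demo" > "$UCD_HOME/conf.properties"           # stand-in for real server files

BACKUP_DIR=$(mktemp -d)/ucd-$(date +%Y%m%d-%H%M%S)
mkdir -p "$BACKUP_DIR"

# 1. Stop the server so files and database are consistent (command is site-specific):
#      "$UCD_HOME/bin/server" stop
# 2. Archive the server installation directory
tar czf "$BACKUP_DIR/server-files.tgz" -C "$(dirname "$UCD_HOME")" "$(basename "$UCD_HOME")"
# 3. Dump the database (site-specific; e.g. for MySQL):
#      mysqldump ucd_db > "$BACKUP_DIR/ucd_db.sql"

ls "$BACKUP_DIR"
```

The matching restore instructions (untar the archive, reload the dump, restart) belong in the same document, written so that someone other than the author can follow them.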

One of the other areas where I had concerns was that the customer was planning on moving ahead, even though they were receiving delivery of post-migration scripts the day before the planned upgrade of their production environment.  They kept insisting that they had tested everything in their staging environment, but I knew that they would not be able to adequately test the post-migration scripts (more on those later) prior to upgrading production.  They had tested things extensively in their staging environment, but they had only tested individual parts of the upgrade process.  This leads us to our second general IT lesson, namely:

General Lesson #2 – NEVER upgrade software on production systems unless you have done a FULL upgrade scenario in your staging environment

I can hear the grumbling already.  “We can’t push this out, it’s our only upgrade window for the next month”.  I understand that this can be frustrating, and can seem like being over-prepared at times, but this is a lesson drilled into people in our business based on traumatic experiences of the past.  A decision to proceed with an upgrade of a production environment should only be made when you have prepared enough that the risk of ANYTHING going wrong is negligible.  Our systems are critical to the organizations that employ us; treating them with a cavalier attitude only puts our profession on trial.  The decision to upgrade without having done a full upgrade in a staging environment has been called “IT Malpractice” by some of my friends in the industry.  It’s a great term, and one I plan to use in the future.  The basic question is this: “Is the potential pain of a botched upgrade worse than delaying the upgrade?”.  If you haven’t covered the lessons spelled out above, assume that your upgrade will NOT be successful.

The customer was also upgrading from version 4.x to the latest version of UrbanCode Deploy.  This meant some changes to the architecture of the product which has a direct impact on the end users.  The first of these is a change to the way that security is handled by UrbanCode.  You really need to be aware of your security settings and the changes that may occur as part of the upgrade.  If you are unfamiliar with the UrbanCode Security Model, then review it and make sure that you have a clear understanding of how roles, permissions, and teams impact the ability of end users to deploy software in your environment.

UrbanCode Lesson #1 – Understand the UrbanCode Security Model, and know how it is impacted by the upgrade

Another thing that happens when upgrading to UrbanCode Deploy 6.x is that your resources move from being in a flat list to a tree structure.  This allows you to organize your resources into common groups, and find them much more easily in the UI (i.e. no 2000-item pull down menus).  During the upgrade to 6.x, however, the UrbanCode Deploy resources are simply migrated into a single “flat” level of the new tree structure.  This has an adverse effect on performance for tool administrators, as page loads become slow if you have a large number of resources.

In order to address this, and as a way to better organize your resources, you should refactor your resources into the tree structure provided.  There is a simple script that IBM can share with you that will refactor resources based on the name of the resource.  You can read Boris Kuschel’s blog on how he deployed this on Bluemix.  Essentially you just have a script that breaks up the resources alphabetically.  You’ll probably want to alter the script to refactor your resources based on some other criteria, but the code is all there.

UrbanCode Lesson #2 – Understand the changes to the UrbanCode Resource Model, and know how it impacts your upgrade

Also keep in mind that some of this refactoring could potentially impact your procedures, depending on how you reference those resources.

Summary

Upgrading any software in your production environments is a risk.  We often think of upgrades as being “simple”, but a tool upgrade is often dictated by a foundational change in a product.  These foundational changes will often impact how the product is supported and how it operates, and this may have an impact on your environment.  ALWAYS follow standard IT best practices and TEST upgrades in legitimate testing environments.  Make sure that you have a script (especially for manual steps) for the upgrade that has been fully run through without issues in your testing environment prior to attempting to upgrade your production environments.  I hate seeing my customers going through painful situations that could have been easily avoided with some risk management and planning.

An Easier Cloud Calendar

Timing is… everything.  About 4 hours after I did my last post on How About a Generic Calendar in the Cloud?, I saw a post from one of my team members, Sean Wilbur, called Streamlining Your Bluemix Project for One Button Sharing.  It was a great post, and once I followed the directions that Sean outlined, I was able to add a simple little “Deploy to Bluemix” button to my project.

So now if you would like to get a copy of my Generic Calendar project to play with for yourself, it is really easy.  Just make sure that you have a Bluemix account, and that you have a linked DevOps Services account.  Then just navigate to my project in DevOps Services (it’s the dtoczala|ULLCloudCalendar project).  Once there, you can look at the README.md file displayed there, and look for the “Deploy to Bluemix” button.  It looks like this:


The Deploy to Bluemix button

Just press that button and you will get a project created in the DevOps services that is a fork of my original project.  The automatic creation of the project will throw an error during the deployment, but you will be able to easily fix this.  The error is due to a problem in the manifest.yml file, we are currently unable to create and bind services for our application through the manifest (see Sean’s question on this in the forum).  You can easily fix this by doing three things:

  1. In your DevOps services console, configure your Deploy stage – In your newly created project, press the build and deploy button, and then configure the deploy stage.  You will add a job to the deploy configuration, a deploy job, that will do a “cf push” of your application.  Then try executing it.  It will still fail (because our MongoDB service is not present), but it will create a new Bluemix application for you.  This is your version of the ULL Cloud Calendar app.
  2. In your Bluemix console, add and bind the MongoDB service – This is straightforward.  Just add the MongoDB service and make sure to bind it to your new application.  When you add the service, Bluemix will ask if you would like to restage your application.  Answer yes, and wait for the application to be deployed again.
  3. In your Bluemix console, click on the link for your new app – just click on the link for the route to your new application. 

Now once that little issue with the manifest.yml is cleared up, you will be able to share your Bluemix applications with the press of a button.  Bringing up applications and capabilities in the cloud is getting easier and easier!

How about a Generic Calendar in the Cloud?

Working with my team is often fun and rewarding.  I learn a lot from the people I work with, and I get the chance to try and learn new technologies all of the time, in an effort to solve real business problems.  One of our most recent challenges was having a “team calendar” that we could all update.  We wanted some lightweight way to coordinate our activities, and to keep on top of vacations and travel plans.

We wanted something that would work within the External Content widget of RTC, because we wanted to expose this calendar on our RTC dashboard.  The dashboard is where we track our work, watch our progress, and document our key measures, so it seemed to be a logical place for the calendar to live.  RTC doesn’t have any kind of native calendar ability, and it is something we miss.  I considered just using a Google Calendar, but we dismissed that because often our calendar entries will contain sensitive information.  So we wanted something that could be done within the IBM firewall.  That led me down the path of creating a simple calendar application using Bluemix.  IBM has a small internal implementation of Bluemix behind our firewall, so our simple privacy and security needs could be met by this.

I decided on a simple implementation using Node.js (which recently announced its own Node.js Foundation).  I thought about using Cloudant for the underlying datastore, but in the end I decided on MongoDB, because I didn’t want this to be an “IBM solution”.

Keep in mind that this is a lightweight solution: it uses the Sandbox plan for the MongoDB service, and the code is not expected to be robust enough for 50 or 100 people to use.  It’s meant as a nice sample project, and one that could be useful to a small team.  It’s not going to replace your enterprise calendar solution.  It also uses the dhtmlxScheduler component, which has its own licensing concerns.  dhtmlxScheduler Standard Edition is available under the GNU GPLv2, so be aware of the implications of this.

An example of the calendar application

Would you like to see how to deploy it for yourself?  Then read on…

Deploying the Generic Calendar on Bluemix

You’ll need a Bluemix account with all of the usual capabilities.  You’ll first want to grab the code for this from my GitHub project called ULLCloudCalendar, and save it somewhere on your laptop/workstation.  Just hit the button to download a zip of the contents of the project.  I developed this on a Linux box, so hopefully the character sets and encoding don’t screw you up too much.  Once you have a copy of the code, you’ll want to log in to Bluemix.  Once there, you will create a new application using the SDK for Node.js runtime.  Give the app a good name (like “AcmeCalendar”), and wait for Bluemix to create your skeleton app for you.

Once Bluemix is done, you should see your new application on the Bluemix console.  Now you will want to go and click on the box to “Add a Service or API”.  Scroll through the list of services until you come to the “MongoLab” service.  Click on the icon for this service, and on the next screen, create a new instance of the MongoLab service (which is a cloud hosted MongoDB).  Make sure to give it a name that corresponds to the name of your application (like “MongoLab-AcmeCalendar”).  Also make sure that it is being set up in the correct space, and that it will be bound to the correct application (in my case, the “AcmeCalendar” application).  When you have checked everything, press the “Create” button to create your service.

At this point, you will be ready for that code that you copied earlier.  Have the code in a directory by itself.  Make sure that you have downloaded the Cloud Foundry (CF) command line interface, and have installed it on your computer.  Open up a command line interface, and navigate to the top level of the new directory where your code lives.  This directory contains the manifest.yml, app.js, and package.json files.  Now we’ll log in to our Cloud Foundry instance, target the right environment, and push all of this code up to the cloud.

  • Point the CF command line at the Bluemix Cloud Foundry API endpoint.
cf api https://api.ng.bluemix.net
  • Log in to your account space on the Cloud Foundry/Bluemix instance.  Use your Bluemix ID for the organization (-o) and user (-u) parameters, and the name of your space on Bluemix for the space (-s) parameter.  After you do this, you will be prompted for your password.
cf login -u dtoczala@us.ibm.com -o dtoczala@us.ibm.com -s 'dev'
  • Now you will want to modify the manifest.yml file to make sure that Bluemix can find your new project.  Edit the manifest.yml file and change all of the “ULLCloudCalendar” entries to the name of your application (AcmeCalendar in this example).
host: AcmeCalendar
name: AcmeCalendar
  • Now modify the package.json file to reflect the new name of your application as well.  Edit this file and change the line with the name to your new application name.
"name": "AcmeCalendar",
  • Finally, you can now push all of your code up to the Bluemix infrastructure.  Use the name of your application (which is AcmeCalendar in my example).
cf push AcmeCalendar
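For reference, after the edits above, a minimal manifest.yml for this example might look like the following.  This is a sketch, not the exact file from the project; the memory and instances values are illustrative assumptions.

```yaml
# Hypothetical manifest.yml after renaming ULLCloudCalendar to AcmeCalendar.
# Only the host and name entries need to change for the rename; the other
# values shown here are assumed defaults, not taken from the project.
applications:
- host: AcmeCalendar
  name: AcmeCalendar
  memory: 256M
  instances: 1
  path: .
```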

Keep in mind that there are other ways to do this (using an Eclipse plugin is one of them), so do a little research and find the method that works best for you.  Once you do the cf push of your code, you will see Bluemix/Cloud Foundry do its work.  At some point it will tell you that your application has been deployed in the cloud, and that it is running.

Accessing Your Calendar

On the Bluemix console for your application, you will see a link to your new calendar application.  It will be something like https://AcmeCalendar.mybluemix.net (depending on your application name and the route that you have chosen).  You can change the route, but that is a technique for a future blog post (it’s not that hard, I just don’t want to get into it here).  Clicking on that link will take you to the website where you can access your new calendar.  Play around with it.  Double clicking on a day will open a dialog box for adding a new event.  Double clicking on an event will allow you to change or delete the event.  It’s pretty simple.

Adding this to your RTC dashboard is pretty simple too.  Just create a new tab on your RTC dashboard.  Click on the caret next to the tab name and be sure to set the tab up to display widgets in a single column (otherwise the calendar becomes too small to be useful).  Now click on the “Add Widget” button on the upper right of the dashboard.  Select the “External Content” widget.  Once the widget is displayed on the dashboard, click on the small pencil in the widget menu bar.  You will now enter:

  • The external URL (which is the web address of your new app, something like “https://AcmeCalendar.mybluemix.net”)
  • The height of the widget (try 550 pixels for starters, adjust it as you need to)
  • The refresh rate (go with a simple 5 minutes, or 300 seconds)

Once you do this, you should see your calendar application right on your RTC dashboard.  You can even navigate through the calendar and add/change/delete events.

How Does it Work?

If you’ve read this far, you have enough knowledge to be able to deploy a simple cloud based calendar for your team.  If you want to get into the code, and possibly change and enhance this calendar app, then read this section.

The calendar has three big pieces that control everything.  The first piece is the dhtmlxScheduler component itself.  The code for this component (which controls the calendar look and feel, and drives functionality) is in the public folder.  I didn’t touch this stuff, but if you want to try messing around with the CSS files to change the look and feel of things, be my guest.

The next big piece is the code that controls the rendering of the HTML page.  This is in the index.html file in the public folder.  There are two important pieces of code in this file.  The first is the script.  The script makes some configuration settings for the calendar, sets up the basic colors used, and defines the look of the dialog box used to add/modify/delete events.  It then initializes the calendar component with these settings.  At the end of the script, the default date/time format is specified, your existing data is loaded, and a data processor is set up to handle the interactive user requests.  The second is the body of the HTML page, which sets up the display of the calendar itself and initializes things.  I didn’t fool around with this section.
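As a rough sketch of the kind of configuration that script applies (this is illustrative, not the actual index.html contents), the settings can be pictured like this.  A plain stand-in object is used here in place of the real `scheduler` global that dhtmlxScheduler provides, so the settings can be read on their own:

```javascript
// Hypothetical sketch of the configuration step in the index.html script.
// In the real page, `scheduler` is the global object that dhtmlxScheduler
// provides; here a plain stand-in object is used for illustration.
function configureScheduler(scheduler) {
  scheduler.config.start_on_monday = false;     // weeks start on Sunday by default
  scheduler.config.xml_date = "%Y-%m-%d %H:%i"; // date/time format used for load/save
  return scheduler;
}

// Stand-in for the dhtmlx global, for illustration only.
var scheduler = { config: {} };
configureScheduler(scheduler);
```

In the real script, these configuration settings are followed by the calendar initialization, the data load, and the data processor setup.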

The final piece is the app itself, in the top level directory in the file app.js.  This file handles the storage and retrieval of data from the MongoDB, and does some data checking and data manipulation to help format things appropriately.

The script starts out by setting a bunch of global variables and reading the various VCAP settings provided by the Bluemix environment.  This allows the application to connect to the MongoDB that is bound to this application with the correct credentials, and it also provides some other important runtime information.  You will notice that there are references to SSO and certs in the code, but these have not been tested.
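To make the VCAP reading concrete, here is a minimal sketch of how credentials for a bound MongoLab service can be pulled out of VCAP_SERVICES.  The service key name (mongolab) and the credentials.uri field match what Bluemix typically provides for the MongoLab binding, but treat the exact shape as an assumption; the sample payload below is fabricated.

```javascript
// Hypothetical sketch: extract the MongoDB connection URI from a
// VCAP_SERVICES JSON string.  The "mongolab" key and the credentials.uri
// field are assumptions based on a typical Bluemix MongoLab binding.
function getMongoUri(vcapJson) {
  var services = JSON.parse(vcapJson);
  var mongo = services.mongolab; // array of bound MongoLab instances
  if (!mongo || mongo.length === 0) {
    return null; // not running on Bluemix, or service not bound
  }
  return mongo[0].credentials.uri;
}

// Fabricated example payload, for illustration only.
var sample = JSON.stringify({
  mongolab: [{ credentials: { uri: "mongodb://user:pass@host:27017/acme" } }]
});
console.log(getMongoUri(sample)); // the connection string handed to the driver
```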

Once this initial code is complete, you can see where we connect to the MongoDB.  Following this is a section of code that is NOT tested (and not used) that deals with the SSO and Passport functionality.  This all ends with the section on custom authentication middleware.

Finally we get to the Application routes.

  • The code for the /init route is simple: it just adds a single event on New Year’s Day to get you started.
  • The get code for the /data route supplies the calendar object with all of its events from the MongoDB.  It retrieves ALL of the events in the datastore, builds a JSON object with these events, and provides them as a stream of JSON data to the calendar object in a response.  Be careful with the formatting of your dates in the JSON response; an invalid date can cause problems.
  • The post code for the /data route processes the creation/modification/deletion of events by the calendar object.  A user who changes something in the calendar will post the change to the /data route.  This section handles the request, and processes it accordingly.
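As an illustration of the get /data shape (a sketch under my own naming, not the actual app.js code), the route essentially maps stored MongoDB documents into the event fields the scheduler understands:

```javascript
// Hypothetical helper mirroring what the GET /data route does: convert
// stored event documents into the JSON event shape dhtmlxScheduler expects.
// The field names (id, text, start_date, end_date) are the scheduler's
// standard event fields; the date strings must match the configured format.
function toSchedulerEvents(rows) {
  return rows.map(function (row) {
    return {
      id: String(row._id),        // MongoDB _id becomes the event id
      text: row.text || "",       // event description shown in the calendar
      start_date: row.start_date, // e.g. "2016-01-01 00:00" -- invalid dates cause problems
      end_date: row.end_date
    };
  });
}

// Fabricated example document, for illustration only.
var events = toSchedulerEvents([
  { _id: 1, text: "Team vacation", start_date: "2016-01-01 00:00", end_date: "2016-01-02 00:00" }
]);
console.log(JSON.stringify(events));
```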

Finally at the end of the app.js file, we start the app.

What’s Next?

There are things you can do to change how this calendar works, and to expand or change its functionality.  I’ll cover a couple of the simple things here.

I Hate The Colors

You hate the colors that I picked?  So change them yourself.  There are two sections that deal with the colors in the calendar interface.  In the index.html file, there is a variable called colorpicker.  You can change the names of the colors by changing the label property of the array entries, or the colors themselves by changing the key property of the array entries.  This key property defines the RGB mix of colors.  I used the HTML color picker to get these values.  You can add even more colors by adding more entries to the array.
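For illustration, a colorpicker array in that shape might look like this.  The grey and indigo values match the rgb values used in the app.js color-checking code; the orange entry is my own example of an added color, not something the project ships with:

```javascript
// Hypothetical colorpicker array in the shape described above: label is the
// name shown to the user, key is the rgb value stored with the event.
var colorpicker = [
  { key: "rgb(204,204,204)", label: "Grey" },
  { key: "rgb(77,77,184)",   label: "Indigo" },
  { key: "rgb(255,128,0)",   label: "Orange" } // an added example entry
];
console.log(colorpicker.length);
```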

These key values (the “rgb(x,y,z)” entries) are stored with the events in the database.  If you look at the code in the app.js file, look at the get /data section of code.  In here you see a section of code where we check the color property of the returned event data.  This represents the color of the event box.  Based on this, the if statement will either assign a grey box and black text (if no color information is provided), or the proper color and black text.  The one exception is an indigo box, which gets white text because it shows up better.  Kind of hard to explain – easier to see in the code.

            if (!data[i].color)
                {
                color = 'rgb(204,204,204)';  // grey block
                textcolor = 'rgb(0,0,0)';    // black text
                }
            else
                {
                color = data[i].color;
                textcolor = 'rgb(0,0,0)';    // black text
                if (color == 'rgb(77,77,184)')  // if block is indigo
                    textcolor = 'rgb(255,255,255)';    // go with white text
                }

What About Repeating Calendar Entries?

The dhtmlxscheduler will support repeating entries.  To implement this, check out this entry in their documentation on implementing repeating entries.  In fact, take a look at their documentation overall.  I found the sections on custom event colors, Lightbox manipulations (Lightbox is the user entry popup), and the code associated with the coloring events example to be quite helpful.

Weeks start on Mondays, not Sundays

This one is an easy change in the index.html file.  Just find this line of code:

scheduler.config.start_on_monday = false;

and change it to:

scheduler.config.start_on_monday = true;

Summary

I wanted a calendar that was stand-alone, that could be displayed in an RTC widget, and that I could easily deploy in a Bluemix environment.  I hope this guide has shown you how easy this is to do, and allows you to add this calendar ability to your RTC environment.  If you have comments or issues, please comment and I will do my best to answer your questions.


These are my PERSONAL views and observations

The postings on this site are my own and don't necessarily represent IBM's position, strategies or opinions. Anyone is free to use, copy, distribute, modify or sell the source code and other materials directly linked from dtoczala.wordpress.com and is provided "as is" without warranties. I am not responsible for any harm or damage caused to your computer, software or anything else caused by the material.
