This week I have been involved in a series of discussions about upgrading the Jazz products from their 2.x versions to the 3.x versions.  As we discussed some of the challenges that our customers have faced, and the impact that an improved upgrade process would have, we realized we had no good way to assess that impact across ALL of our customers.  We know the customers we have sold products to, but we don't know how big their Jazz repositories are, how many licenses have been deployed, or how the products have been deployed.  So we began to discuss how best to ask our customers for this basic information.

After a few minutes of discussion, it occurred to me that we were thinking about this all wrong.  Why should we constantly be calling our customers and asking them about their Jazz deployments?  The Jazz servers capture a number of statistics about themselves and the repositories that hold their artifacts.  You can see these in the administrative reports available on your Jazz servers.  If we could somehow harvest this information from the customers who have deployed the Jazz solution, we could be far more proactive about the advice and guidance that we give them.

My idea is to have a small task that runs each time the data warehouse jobs run (typically nightly).  With each run, a small email is written containing some basic, high-level information, which is then sent back to the Jazz team.  The email is plain text, so customers can easily monitor (and scan) the information being sent out.  Each server would have a switch to configure whether the emails are sent at all (so customers could elect not to participate), and a setting to configure where the email is sent (for those customers who want to review the contents before forwarding them to the Jazz team).  The basic information to collect would be the following:

  • License usage – average, and peak for each license type
  • Repository sizes
  • Number of application servers, type, and locations (e.g., a customer may have one JTS, one RTC, and two RQM applications deployed across different data centers)
  • Type of application server used (Tomcat or WAS), and type of database used (DB2, Oracle, SQL Server)
  • Basic REST response times (as a rough measure of performance)
  • Potentially a series of simple performance metrics
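To make the idea concrete, here is a minimal sketch of what that nightly task might look like.  This is purely illustrative: the function and setting names (`format_report`, `nightly_report`, `REPORT_ENABLED`, `REPORT_TO`) are my own invention, not part of any actual Jazz product API, and the metric names are placeholders.

```python
# Hypothetical sketch of the nightly reporting task described above.
# All names here are illustrative assumptions, not real Jazz APIs.

REPORT_ENABLED = True              # per-server opt-in switch
REPORT_TO = "jazz-reports@example.com"  # configurable recipient (could be a
                                        # local reviewer instead of the vendor)

def format_report(stats):
    """Render the collected metrics as a plain-text email body,
    so administrators can easily scan what is being sent out."""
    lines = ["Jazz deployment report", "----------------------"]
    for name, value in sorted(stats.items()):
        lines.append(f"{name}: {value}")
    return "\n".join(lines)

def nightly_report(stats, send):
    """Run after the data warehouse jobs finish.
    Sends nothing at all if the server has opted out."""
    if not REPORT_ENABLED:
        return None
    body = format_report(stats)
    send(REPORT_TO, body)   # e.g. hand off to the server's mail transport
    return body
```

For example, a run with a couple of placeholder metrics would produce a short, human-readable body that a customer could inspect before it ever leaves their network:

```python
stats = {"repository size (GB)": 42, "peak Developer licenses": 35}
body = nightly_report(stats, send=lambda to, text: None)
```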

I would then propose that we make this information readily available, with customer names obscured, so our customers would be able to see what other Jazz deployments look like.  Are there other customers using the full set of CLM tools?  How many other customers are using two RTC applications sharing the same JTS?  Are our repositories considered large?  In addition, we would be able to monitor customer situations and offer predictive help.  We might also find that customers using the full set of CLM tools have RTC repositories that grow at an average rate of 20% per year, while standalone RTC repositories only grow at a rate of 15% per year.

So now I have a question for the people who read this blog.  Does this seem like a good idea for you and your organization?  Does this seem like a vendor that is looking to work with its customers to monitor its deployed products and improve them, or an intrusive vendor seeking to grab and gather information for its own use?  Would you turn on this capability?  Would you find value in seeing these metrics?  Are there things that I should include in my list of collected data?  Things I should take out?

I am looking for feedback on this idea.  If I get enough support for this, I will try to promote it so that it is included in a future version of the Jazz products.  So don’t be shy – let me know what you think.