Open Source and Vendor Tools – What makes sense?

It’s been a long time (Jazz CLM and Open Source in Sept. 2011 and Jazz Compare with Open Source in Feb 2009) since I posted on the subject of open source tools and vendor supplied tools.  Since most people don’t have the patience to look that far back in my blog’s history, and some things have changed over the interim, it’s time for a new post.  For anyone new to this blog, and in the interests of full disclosure, I work helping enable customers with tools and technologies from IBM/Rational (a vendor).  Hopefully you will find this a useful and objective discussion of the relative merits of these approaches.

The Simple Case for Open Source

So let’s look at an example of the typical argument that most people trot out in support of open source, “Open source is free, the support is much better than what you get with vendor supplied tools, and you avoid vendor lock-in”.  Um…. yeah.  Let’s take a closer look at this statement, and see where it falls apart.

Nothing is Free

Open source tools are not free.  People make this false assumption because there are no licensing costs associated with open source tools.  However, you still have the costs associated with provisioning hardware for them, setting them up, and administering and maintaining them.  To be fair, you also have these costs when using vendor supplied tools, however vendor supplied tools tend to focus on administrative concerns, and thus tend to require a little less effort.  Open source tools are getting better in this respect, but there is a whole industry of companies that have a business model based on the administration and upkeep of these types of environments.  In some cases, an approach with one of these companies may be a more cost effective solution for you.

The point to be made here is that you need to evaluate these things with ALL of these costs in mind.  Think to yourself, do I want to maintain these tools and development environments myself, or would I prefer to pay someone else to do it for me?

Nothing is Easy

Open source tools will often make broad and sweeping claims for support.  They point to Git or Maven as examples of open source tools with great user-to-user support in the forums and self-help sections dedicated to this purpose.  I really admire this type of support, because it can be quite good.  I find the various Linux forums quite useful when debugging my own Linux issues.  I try to get our support areas and forums to adhere to this type of standard (but to do so with a little bit of class and respect – sometimes those forums can be a little rough).  The thing to keep in mind is that while these are examples of good user support through forums, this only holds true for open source tools that have reached a certain longevity, and a critical mass of users and administrators.  Some open source tools do not have this level of maturity or this size of user base.  Those tools may grow to achieve that level of support in the future, but it all depends on the popularity of the tools over time.

When evaluating support, pay attention to how long it takes for the forum to provide answers to complex issues.  Don’t forget, it “counts” even if the original submitter ends up posting the solution.  That is an example of good behavior in these environments, as administrators share solutions with others who might have a similar issue at a later date.  Vendors typically have teams dedicated to addressing issues as soon as you alert them.  Forums tend to be less time critical.

Transparency is Good (or is it?)

Some people claim that, “since I can see the source code, I can be sure of what is going on”.  They have a point.  It also means that anyone who wants to exploit weaknesses in the tool code can easily do so.  The other thing to keep in mind is that many vendors are shipping source code with their tools now anyhow.  Vendors would LOVE it if you made a suggestion to improve their products (“Hey look!  Free developers!”).  And modifying open source is not as easy as some would have you believe.  Often you will need to maintain your modifications to the open source code base, and then port and test these changes with each new release.  Wouldn’t it be easier to just have someone else maintain these fixes for you?

This argument is about code transparency, and should not be confused with business transparency (which I am a BIG proponent for).  It’s not bad or good, it’s just something else that you need to be aware of.  The fact that you can see (and modify) the code has implications that can impact the overall strategic value of your solution.  Be aware of those implications.

The Simple Case for Vendor Supplied Tools

Now let’s look at the reasons that people give for going with vendor supplied software development tools, “Vendor tools have better functionality, they do the code maintenance for us, and they deal with incorporating new technology.  We’ll always have support!”.  Um…. OK.  Let’s take a closer look at this statement and see where it falls apart.

The Cost of Functionality

Vendors love to claim that their tools are “best of breed”.  Often the functionality that can be found in vendor tools is specialized and more advanced than what you will find in an open source tool.  Vendors will provide functionality and usage patterns that will support certain industry standards or regulatory standards.  Often with open source tools, the administrator has to create these types of mechanisms as either a “plugin” or some “bolt on” addition to the open source tool.  This is one of the general strengths of the vendor supplied tools.  As a consumer of these tools you need to evaluate the relative worth of this additional functionality.  If you are doing a simple software development project, with a small co-located team, then you may never even use these capabilities.  Why pay for something that you will never take advantage of?  It’s like buying a sports car in a city where you cannot drive faster than 30 mph.  You are paying a premium for performance that you will never utilize.

However, if you need that additional functionality, then the availability of this specialized capability makes the licensing costs associated with vendor supplied software extremely attractive.  Why should you come up with a custom solution for this issue when somebody has already done all of the thinking, development, and testing for you?  You are not in the software tools business, so why should you dedicate time or effort to developing software development tools?  You don’t collect rainwater on the roof; you pay for (and take advantage of) infrastructure and work that has already been done by an organization that specializes in delivering water to your office.  Water isn’t that complicated.  The costs associated with having someone else worry about it, so you can focus on where you bring value, are seen as an acceptable trade off.

Incorporation of New Technology

Vendors will claim that they will provide the “latest and greatest” technology in their products.  For many vendor products this may be true.  Successful vendor products are like the support of successful open source products in this respect.  For vendor tools, once a critical-sized user base has been established, there is enough revenue to justify the constant upgrading and incorporation of new technology into the tool.  Vendors have entire teams dedicated to improving their tools, funded with maintenance contracts.  However, if a vendor tool does not have enough revenue associated with it, often the vendor will leave these tools on “life support”, with a skeleton staff tasked with maintaining the tools, but with no real innovation going on.  Just because something is a vendor tool does not ensure that it will be supported for a long time, or that it will continue to incorporate new technologies and innovate.  Have your eyes open, and know what you are getting into.

The Reality

Neither of these approaches seems to address all of the concerns and risks involved with software development tools by itself.  This leads some to ask the question, “Why use software development tools at all?”.  If you think that you don’t need tools, then spend some time trying to develop a real product without them.  I have, and it’s not very pretty.

What About Integration?

The actual difference between vendor tool functionality and support, and the functionality and support provided by open source tools is not as vast as many vendors would like you to believe.  There is one notable exception to this.  That is the area of integration.

Integration should be a vital part of your strategy when deciding on tools to support your software development environment.  You should be seeking to break down barriers between business analysts, developers, testers, and IT administrators.  That is the whole idea behind the DevOps movement, which uses automation to help break down these barriers and improve communications.

Vendor tools tend to focus on integrations, and will usually have a supported API for integrations to their tools.  Open source also makes use of APIs, but the stability of these APIs isn’t the same as it is with vendor supplied tools.  With open source tools, the focus is commonly on providing extension points for the consumer to write the integrations themselves.  With vendor supplied tools, these integration points exist, but vendors will often make the implementation of these integrations available with a simple configuration change.  It is often how vendors will differentiate themselves from their open source alternatives.

Integrations can be tricky, and they often come with unadvertised consequences.  Is your tool going to integrate with some other tool in your environment?  If the other tool becomes unavailable, will your tool be able to work without a loss of functionality?  Will performance suffer?  Often the best integrations introduce a high degree of inter-dependency between the tools, which can provide a nice user experience, but makes your overall environment tougher to upgrade and maintain.

Probably the most important integration to consider is the one that you don’t know about yet.  You have other tools and automations that you may incorporate in the future.  Does the tool that you are considering provide an easy interface for sharing its information, and for linking its data to other (at this time unknown) repositories and systems?  You can never really know what the future holds, so it’s important that your tools adhere to your philosophy with respect to sharing data and integrations.  If you plan on using a centralized data warehouse for this, will the new tool be able to populate that data warehouse with the necessary information?  If you like the concept of linked data (which I really like), do the new tools support the idea of leaving data in place and creating linkages between data in different repositories?

The approach that you choose will be up to you, just make sure that your tools are consistent in applying this approach.

What Your Strategy Should Be

So what is a software development organization supposed to do?  Are there no good answers?

The More That Things Change….

…. the more they stay the same.  In my earlier blog post on this subject (Jazz CLM and Open Source) I argued for treating your software development tools like an investment portfolio.  I see no reason to change my thinking.  Many things have changed in the interim, and there are different factors like cloud computing, DevOps, and emerging standards for data sharing (like OSLC).

I still think that treating your investment in tools like a portfolio is important.  Don’t invest in a single vendor’s tools, or in an all open source tool environment.  Either one of these approaches has the potential to leave you vulnerable in different ways.  Financial investors spread their investments around to limit risk, and you should think the same way about your tools.  It is more important to have a software development environment philosophy.  Have some general standards that you consider to be important, and choose the tools that provide the best fit for your philosophy and environment.  Don’t fall into the trap of having a single tools vendor, or insisting on an all open source solution.  A mixture of these will probably be the path that provides the best fit for you, and imposes the least amount of risk.

For example, I would insist on adherence to open standards.  I think that having open standards allows you to more easily transition into and out of new technologies, since huge interfaces do not need to be written (or re-written) to accommodate the changing of one tool in the environment.  I would also insist on easy integrations via REST based technologies.  That allows you to create even simple integrations between tools using hyperlinks between RESTful objects.  Some people insist on more full featured integrations, but I typically find that the more “full featured” an integration is, the more “brittle” it is.  It makes it more difficult to upgrade tools, or introduce new tools into the environment.

I would also insist that the tools selected be able to easily scale to fit the size of my organization.  If you only have 100 developers, then your tools should scale to support those users and more, without large amounts of support staff needed.  Larger organizations will have more significant support requirements.  Keep in mind that security plays a role here.  Some cloud offerings will scale “forever”, but may not support the security requirements that your business requires.  I would look for tools that provide easy and intuitive user interfaces.  Nothing is worse than sitting in front of some tool or application, knowing that it can do everything you could ever want, and having no clue about how to do it.  Finally, I would look at the relative costs of the tools.  This includes licensing, support, hardware needed, and the administrative burden.

Your organization and industry may have some other things that need to be part of your philosophy.  Some organizations would put support of software development process on their list.  Others would consider support for particular regulatory standards.  The key here is to have a consistent philosophy for what you consider important in your tools.  You should then look at your current tools, and any potential tools, through this philosophical lens.


So you’ve read all of this; now what should you take away?  You should have a philosophy regarding the tools that support your software development infrastructure.  Have 4 or 5 key capabilities that you value in your environment, and be consistent.  For me, I would consider the following:

  • Don’t put all of your investment into a single vendor, or solely into open source.  Diversify to reduce your risk.
  • Insist on adherence to open standards
  • Look for integrations that allow you to link DATA, not link different tool applications
  • Look for scalability to fit the needs of your organization, while keeping security in mind
  • Look at overall cost – not just the cost of the licenses, but the cost of the hardware needed, the cost of administration, and the cost of support

and finally……

  • Don’t just follow what I say, decide on the key things that are important to your organization.  Choose 6 or fewer things that are critical, make that part of your philosophy on software development tools, and focus on having all of your tools support that philosophy

Other Things To Read

I provide some links below to some of the better arguments for/against both open source and vendor supplied software.  I do NOT necessarily agree with the arguments in these articles/blogs, I only supply them to you as a way for you to be informed.  I “get” both arguments, and I agree with some of the points raised on both sides of this.  You should understand both sides of the argument, and decide what works best for your individual situation.  Life isn’t black and white; you ignore the shades of grey at your own peril.  Some of these posts border on fanaticism, but they represent well written arguments and observations on some of these points.

Why Open Source Misses the Point of Free Software - a convincing argument from Richard Stallman, but one that begins to attach too much ethical importance to the development of software.

The Problems of Open Source – a convincing argument from Dr. Tarver that rips Mr. Stallman, but one that tends to discount the quality of some open source products.

Open Source Sucks – one person’s view on the problems of open source….

Open Source Rocks – ….the same person’s view on why open source is great.

and the Business of Open Source – a real-world look at how open source is used in producing vendor products, and the impacts of open source on those products.

Using Python to Build a Jazz Widget – Part 1

I have been spending the past couple of weeks wondering about what I would write about next.  As a manager, I don’t get a lot of time to dig into the deep technical issues like I used to, and I miss that.  So I had some spare time on my hands, and I wanted to do something different that would help me learn something, and also be useful.  I am picking up Python, so I figured that something useful in Python would be good.

As always, the code in this post is derived from example source code as well as the blog posts I mention.  The usage of code from that example source code is governed by this license.  Therefore this code is governed by this license, which basically means you can use it for internal usage, but not sell it.  Remember that this code comes with the usual lack of promise or guarantee.  I did all of this against a v4.0.2 RTC instance.  So read on, and feel free to try this out for yourself.

The Problem and the Approach

One of my big issues as a manager is getting the story of my team across to our stakeholders.  I have widgets which show the state of our significant stories and epics, but my stakeholders are not all technical types, and sometimes they need something a bit more graphic.  The presentation needs to have some color, and some interesting graphics.  I didn’t want to fool around with BIRT, because I might have some custom calculations, and some custom icons that I want to use.  I also don’t want to have to upgrade the report when my tools get upgraded.  So I thought that I could create a Python script that would use OSLC to get the information that I need.  I can then take this information, and create a small HTML page with the information and links that I want to show.  Using the new External Content widget (see my earlier post on Management by Results), I can then display this content on a dashboard.  If I set up a cron job on my machine to run every six hours, I will not put an undue burden on the Jazz infrastructure, and I will give my stakeholders a nice dashboard view of what they want to see.
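The plan above can be sketched end to end in a few lines of Python. Everything here is a placeholder of my own invention: the fetch step stands in for the OSLC calls I work out later in this series, and the page layout is just an illustration of the kind of output the External Content widget would display.

```python
# End-to-end skeleton of the OSLC -> HTML -> dashboard plan.
# Every name here is a placeholder, not a real API.

def fetch_story_status():
    # Placeholder: the real version would pull the significant stories
    # and epics from the Jazz repository over OSLC.
    return [("Story 42", "In Progress"), ("Epic 7", "Done")]

def render_page(rows):
    # Build the small HTML page that the External Content widget displays.
    items = "".join("<li>{0}: {1}</li>".format(name, state) for name, state in rows)
    return "<html><body><h2>Team Status</h2><ul>" + items + "</ul></body></html>"

def refresh_dashboard_content(path="status.html"):
    # This is the function a cron entry would run every six hours, e.g.:
    #   0 */6 * * * /usr/bin/python /home/me/refresh_status.py
    with open(path, "w") as f:
        f.write(render_page(fetch_story_status()))
```

The six-hour cron cadence is the piece that keeps the load on the Jazz server low while the dashboards stay reasonably fresh.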

So this solves my reporting issue, allows me to learn Python, allows me to get deeper into OSLC, and should meet the needs of my stakeholders.  Time to get to work!

Figuring it Out

So since I am a Python newbie, I decided to do what I always do when starting with something new.  Do a web search for using Python with OSLC, and try to borrow as much code as possible from others who may have done this before.  Why reinvent the wheel?  I did see some good blogs on using Python and OSLC with RQM, easy authenticating with Python, as well as some OSLC with Python posts on the Jazz forums.

My First Sample Program

The blog posts were a good place to start, since everything begins with authentication.  So the first thing I did was to download and install the requests library by Kenneth Reitz that was mentioned in the easy authentication with Python blog post.  Since I am running Ubuntu, I just needed to make sure that I had the Python pip module installed.  Then a quick “sudo pip install requests”, and BLAM, the whole thing was all set.  My Python environment was ready; now to try a quick sample program.

So I just want to get into the repository, authenticate, and see if I can get ONE single resource out of the repository.  No luck.  I keep having authentication issues.  So the next thing that I go and look at is the first blog that I listed, using Python and OSLC with RQM, and use the JazzClient class defined there.  I really like how simple it makes the main program read.  Still no luck.  As I look through the responses that I am getting back, I notice that I am hitting https://jazzserver:9443/ccm, and when I go and look at the Jazz repository the base URI is https://jazzserver:9443/rtc.  Looks like I have been hitting the wrong spot!!  Once I change that to the correct address, and add a second round of storing the appropriate cookies as described in a Jazz forum post, everything begins to work.
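For anyone who wants to see the shape of that login dance, here is a minimal sketch. The host name, credentials, and exact URLs below are what worked against my server; treat them as assumptions to verify against your own Jazz setup. The functions take a requests.Session-style object so that the authentication cookies persist between calls, and verify=False is only appropriate for self-signed test certificates.

```python
# Sketch of the Jazz form-based login, pieced together from the blog
# posts and forum threads mentioned above. The session argument is a
# requests.Session (or anything with the same get/post methods), which
# keeps the authentication cookies between calls.

def jazz_login(session, base_uri, username, password):
    # Touch an authenticated URL first so the server hands out its cookies...
    session.get(base_uri + "/authenticated/identity", verify=False)
    # ...then post the credentials to the j_security_check form target.
    return session.post(
        base_uri + "/authenticated/j_security_check",
        data={"j_username": username, "j_password": password},
        verify=False,
    )

def get_work_item(session, base_uri, number):
    # Ask for an OSLC (RDF/XML) representation of a single work item.
    return session.get(
        base_uri + "/oslc/workitems/{0}".format(number),
        headers={"Accept": "application/rdf+xml", "OSLC-Core-Version": "2.0"},
        verify=False,
    )

# Usage against a real server (placeholders for host and credentials):
#   import requests
#   s = requests.Session()
#   jazz_login(s, "https://jazzserver:9443/rtc", "myuser", "mypassword")
#   print(get_work_item(s, "https://jazzserver:9443/rtc", 42).status_code)
```

Note the base URI ends in /rtc here, matching my repository; yours may be /ccm or something else entirely, which is exactly the mistake I made above.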

Now most of this code is “borrowed” from the blogs that I referenced earlier, but you can find the code out on JazzHub.  You can find my project on JazzHub, under the title “Python Jazz Client”.  Check out the and files.  You can use them, but you will need to change the user, password and base URI in the main module.  Your base URI will probably look like “”.

This Python Jazz Client project on JazzHub is interesting, as it is the first real “development” that I have ever done on JazzHub.  I made it a public project, so if you are interested in this type of work, or you have suggestions for expanding this JazzClient class, please join the project and pitch in.  I can honestly use all of the help that I can get.


So there it is, a bunch of Python code that helped me get more comfortable with Python, utilized OSLC, and gets through authentication and can pull information on individual work items.  It was so easy that even a slow-witted manager like me was able to do it.  Next I will begin to build on these basic capabilities, to pull specific information from work items, and then use this to create some simple HTML presentations of the data.

Management by Results – With an Assist from Jazz

Earlier today one of the people who works on my team (Sudhakar Frederick, or Freddy) sent me an email challenging me to do a blog post on how I manage our technical team.  He is going through his yearly review, and he found that he was able to pull his information together in record time.  I don’t know if it is worthy of a blog post, but Freddy is a pretty smart guy.  I learned a long time ago that it pays to listen to his opinions.  I’m not sure if he wanted me to highlight another effective usage model for RTC and Jazz, or if he felt that I should highlight the management style and philosophy, so I’ll try to highlight both things.


I am a pretty technical guy, and I like being a technical guy.  However, at some point in time each one of us technical types runs into the horror of being recognized for our work, and rewarded with a position as a manager.  For some folks the thought of it is so unpleasant that they turn the job down.  I didn’t, and for the past three years I have been a manager.

IBM/Rational has a performance rating system that we go through every year.  Each year people are evaluated on the results that they have achieved over the previous 12 months.  Each person needs to briefly describe how their actions contributed to some type of measured results.  Now the results being looked for differ depending on your job.  So the relative importance of my results is viewed in the context of my current job.

So each year our technical people are asked to put together a short narrative on their significant accomplishments over the past year.  The key here is that we want a focus on measurable results.  Management will then go over these narratives, and be able to give people feedback on where their results rank within the organization, and then give the technical people feedback on what they can do to improve.

How We Ran Jumpstart – and Run Emerging Technologies

When the Jumpstart team formed, we decided that the best way to learn everything about Jazz was to actually use the tools ourselves.  So while RTC might not be a perfect fit for a team focused on customer enablement and education, it can do a pretty good job.  We designed our own work item types, with one representing our customer engagements, and we also use the existing Agile work items for epics, stories and tasks.

We record information in the work items, using the discussion area to keep notes, observations, and to share the progress that we make.  We record what we have done, what we learn, and current status of our various tasks and customer engagements.  By having someone indicated as an owner of each work item, we force ourselves to be accountable for moving things forward.  We also added an additional owners field which we use to indicate efforts where a small team of people are collaborating on something.  My team runs with the motto of, “if it isn’t on my dashboard, then it doesn’t exist”.

When we help customers, or provide some sort of enablement and training session, we record these in a specific work item type.  We also use epics, stories, and tasks, to help define and track the work that we do that might not be related to a specific customer.  So if we know that we want to create a “Getting Started Guide”, we will create an epic for this.  Then each chapter of the guide will be a story that is a child of this epic.  In this way we can divide the work across the team, while everyone is able to see what the other guide authors are working on.

Because we collect notes and observations on ALL of our work, the Jazz instance becomes something of a knowledge base for my team.  We often get questions from either our IBM field teams or our customers that we know we have seen before.  We just cannot remember the answer.  When this happens, we can go back through our old work items and search for the situation that we dimly remember from before.  Even if we have not noted the solution to the issue in the work item, we can see other people who were involved, and this often allows us to get input from other people who DO remember how to address that particular situation.

We also use the subscribers field in ALL of our work items, and add the names of team members (or managers) that we want to be aware of what is happening with any particular piece of work.  One of our other techniques is to use @username in the discussion areas of our work items.  When someone is called out in the subscribers area, or in the discussion through the use of their username, they will get an email alerting them to changes in the work item.  It’s a simple technique that we use to help us get the right people involved in any work that we are doing.

This is all predicated on people updating their work items, and in people keeping their notes in the discussion area of the work items.  As long as we stay disciplined about this, then we all reap the benefits of this approach.

How We Share Our Information

We use dashboards, and I refuse to use slides.  We have multiple Jazz instances within our organization, and I need to be able to share information with all of these stakeholders.  Some of this information is direct from the widgets that I use on the dashboard that my team uses.  Because we use special work items for our customer engagements, we are able to easily create queries and widgets that quickly inform people who we are working with, and what we are working on.  In these instances, we need to establish friend relationships between the different Jazz instances.

We use a common convention for the Summary field of our work items: the customer name followed by WHAT we are helping them with.  This allows me to use Work Item widgets coupled with queries on open work, to provide a quick list of the things that my team is doing.

How do I filter out just the open content?  I look at the state of the work item, and check to see that it is unresolved.  To see not just what is open, but what people have been working on, I use a different query.  In this Active work query, I look at the open work items, and then I add an additional condition that the Modified Date is after some period of time (I use 14 days, so I see what we have been working on in the past two weeks).  This allows the team to see what has really been actively worked on in the past 14 days.
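If you ever want to build the same 14-day window into a script instead of the query editor, the cutoff is easy to compute. A quick Python sketch, with the caveat that the oslc.where clause and the dcterms:modified property name are my assumptions, to be checked against your server's OSLC query capabilities:

```python
# Compute the "modified in the last N days" cutoff and turn it into an
# OSLC-style query parameter. The dcterms:modified property and the
# where-clause syntax are assumptions to verify against your server.
from datetime import datetime, timedelta

def active_work_filter(days=14, now=None):
    now = now or datetime.utcnow()
    cutoff = (now - timedelta(days=days)).strftime("%Y-%m-%dT%H:%M:%SZ")
    return 'oslc.where=dcterms:modified>"{0}"^^xsd:dateTime'.format(cutoff)
```

Appending the returned parameter to a work item query URL would give a script the same "what have we actually touched lately" view as the Active work query.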

I often provide analysis and bullets that help describe the data that we are posting on our dashboards.  This is usually in the form of single sentences and bullet lists, which is EASY to do with HTML.  I used to do this with HTML widgets, but I quickly ran into the issue of having to update the same HTML widget in multiple dashboards.  I didn’t like doing that, and I made mistakes.  Not a good solution.  So I decided that I could leverage the new External Content widgets in Jazz.  These widgets allow me to build some simple HTML pages, and then provide a simple pointer to include the content from those pages on my dashboard.  And on a bunch of other dashboards.  So I now can just update a simple HTML status file, and my updates get reflected immediately on ALL of the dashboards of my various stakeholders.  This allows me to focus on giving all of my stakeholders a consistent message and content, based on real data from my team, with each of them seeing the things in the context that they want to see.
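To show how little machinery the "single status file, many dashboards" idea takes, the fragment can be regenerated from a few plain-text bullets, so every External Content widget pointing at the file picks up the same message. The file name and wording below are just examples of mine:

```python
# Regenerate the shared status fragment from plain-text bullets.
# html.escape keeps stray <, >, and & characters from breaking the page.
import html

def status_fragment(headline, bullets):
    items = "\n".join("<li>{0}</li>".format(html.escape(b)) for b in bullets)
    return "<p><b>{0}</b></p>\n<ul>\n{1}\n</ul>".format(html.escape(headline), items)

def write_status(path, headline, bullets):
    # One file to update; every dashboard widget pointing at it follows.
    with open(path, "w") as f:
        f.write(status_fragment(headline, bullets))

# Usage:
#   write_status("team_status.html", "Week of March 11 - on track",
#                ["Getting Started Guide: 3 of 5 chapters drafted",
#                 "Two new customer engagements opened this week"])
```

This is also the kind of script that a non-technical person never needs to see; they can edit the bullets in whatever form you hand them, and the regeneration step stays automated.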

Using this approach, with some simple HTML files used to report common status across multiple Jazz instances, does have a downside.  Our various Jazz instances use secure HTTP (with an https:// web address).  My simple HTML status pages are served out as unsecured http content (with an http:// web address).  Newer browser versions will often block this content by default.  You can get around this in a couple of ways:

  • Option #1: Serve the status pages from a secure HTTP server, so that the content arrives over https:// just like the dashboards themselves.
  • Option #2: Have your dashboard users configure their browsers to allow the mixed content to be displayed.

The first option is the best option, but for the short term I am currently employing option #2.  I don’t have a secure HTTP server just lying around that I can use to host my little status updates.

What Do People Think?

My bosses love being able to surf around and see how my team is doing.  They also like being able to put widgets that update automatically on their own dashboards.  They probably also love the fact that they can find out what my team is doing without having to actually talk to me (although none of them will admit it!).

I like only having to update a small set of HTML files to report a consistent message to all of my stakeholders.  If I use a WYSIWYG editor to access those, then I can essentially give this to someone who is not technical, and have the reasonable expectation that they would be able to successfully use this solution.

My team loves this because they get to tell me what is happening, as it happens.  When we talk, they don’t have to remember everything that they have done over the past week.  If they have been updating their work items, then I already know about it.  No need for status reports!  (Hooray!  I hate writing status reports, and I hate reading status reports.)  It allows us to focus on and discuss the issues that we can collaborate on and resolve together.  When we get to end of the year reviews, my team can easily do reports and queries that remind them of all of the work that they did during the year.  They are able to easily and quickly pull together a powerful narrative that highlights their accomplishments and results from the past year.  It also helps us validate who participated in certain efforts, and by looking at the work items, we can determine the relative contributions made by our team members.

As a manager, I love it.  It allows me to see things at a glance, and see what my team is involved in.  By looking into the work items, I can “get up to speed” on any issues that have come up, and I can ask intelligent questions to help people discover solutions to their problems.  So while the dashboard gives me good information, it is only valuable in that it allows me to ask good questions.  You cannot just use it to blindly drive your team, and I don’t try to.  What it does do is give me an efficient way to get an overview of what we are doing, how we are doing it, and discover ways that I can help us improve.  It also gives me a way to consistently report status and analysis to my numerous stakeholders, so they can all have a consistent picture of how my team is performing, and the things that we see in our customer environments.

Key Points to Remember

  • In a team with people solving new problems and discovering new approaches every day, it is important to capture this knowledge somewhere.  We often get so focused on the NEXT thing, that we forget about the solutions that we have already discovered.  Being disciplined about updating work items helps us remember things, and helps us share information inside of our team.
  • We use subscription and the “@username” indicator in discussions a lot.  We use this to pull people into conversations and work.  Your team should have a culture of holding each other accountable for responding to these types of alerts.
  • We also use the tags field.  We use this to tag certain work items for inclusion in very specific searches.  If I have a series of work items related to a particular issue, or pattern of issues, then I will tag all of the work items with the same tag, and create a query that returns only the work items with this tag.  That allows me to use a work item widget with this query on my dashboards to bring these things some executive level visibility.
  • I create dashboards for a lot of my stakeholders.  I use the team dashboard for my team, but I create a personal dashboard (and then share it) for many of my stakeholders.  If stakeholders only care about a subset of the activity that my team does, I create a specific tab for them on this dashboard.  That way they can see things in the context that they want to see.

What is DevOps?

Recently I took over as the manager of the Emerging Technologies team at IBM/Rational.  One of the emerging technologies that my team is supposed to focus on is the area of DevOps.  So now I have a small team that is focused on DevOps.  But what in the world is DevOps?  I thought that I knew what it was, but I figured that I would do a quick experiment to improve my understanding.  I decided to ask several different people for their definition of DevOps.  Surprise!!  I got back about as many different answers as people that I asked.  (OK, I wasn’t surprised, but that is why I did the experiment in the first place.)

“Big deal”, you may be thinking, “all you have done is to state the obvious”.  But the more that I thought about it, the more that it bothered me.  The goal of my team is to help launch our new technologies in the area of DevOps into the market.  We do this by having conversations with our customers, participating in conferences, writing in our blogs (Freddy has some good DevOps material in his blog already), and other various enablement activities.  If we cannot have a common definition of what DevOps is, then how can I expect my team to establish a common language and terminology to talk about DevOps?  It would be like trying to teach math when you cannot agree on what a number is.

So I looked at the IBM DevOps page on, and found a set of entry points for DevOps.  Drilling into those led me to specific IBM products, but it really didn’t answer my question.  So then some smart people pointed me at the IBM DevOps page out on developerWorks.  I read this and found myself agreeing with most of it (even though it is marketing), so I decided that maybe this is something that I could use as a working definition of DevOps.  On that page they say:

“It’s a set of principles, practices, and products to help organizations deliver high-quality software to market faster, while minimizing cost and risk. The IBM approach to DevOps drives end-to-end innovation throughout the entire software lifecycle.”

This still feels somewhat fuzzy to me.  I am a nuts and bolts guy.  I like “Janitors” a lot better than “Sanitation Engineers”, and I like “Software Testers” more than I like “Quality Assurance Engineers”.  Let’s face it, I am a “Manager”, not a “Talent Direction Technician”.  So I want a more direct definition of DevOps.

So I dug in a little deeper and looked at the DevOps Practices page.  This is a little better; it has more for me to sink my teeth into.  They basically highlight five key practices for what IBM considers DevOps.  These are:

  • Plan, Track, and Version Everything
  • Dashboard Everything (my personal favorite, I am a big believer in transparency)
  • Automate Everything
  • Test Everything
  • Monitor and Audit Everything

That helped me sort it out in my mind.  Now I think I understand the IBM definition of DevOps.  It’s the umbrella of improved software delivery (better, faster, cheaper) achieved by using the five practices listed above.  That is a good starting point for me.  Unfortunately, it covers the automation, tracking, and reporting of (you guessed it) “Everything”.  That is a pretty massive scope.  I am still wrestling with this part of the DevOps definition.

So maybe by now you’ve become convinced that you need to address DevOps, but you’re struggling with what DevOps is, and just what you should address first.  My advice is to talk to people in your organization, and find out what the most painful piece of the software delivery lifecycle is.  Address that piece first, and attack it in small chunks.  You need to start breaking down the technical and organizational barriers, and share these five key practices with everyone in your organization.  DevOps is a philosophy, a way of looking at the entire software development process.  By encouraging these 5 practices in your organization, you’ll slowly see improvements in how your software development is done, and begin to realize the promise that “DevOps” has to offer.

Now I am lucky, because I have a dedicated team that is focused on DevOps.  You can see this in Freddy’s DevOps blog, in Darrell’s blog, and soon in Smith’s blog.  I can ask them questions when I am too busy (or dumb) to figure something out.  You can use some of the same resources that I used, and you can ask our experts questions on their blogs.

Add Your Voice to the Chorus

Today is the start of one of the really enjoyable activities that we have each year.  Today marks the start of the Rational Voice Jam, a two week long activity where the people at IBM/Rational look for feedback and ideas from EVERYBODY.  Some of the people in the Jam are IBM employees (like me) who have a hand in the development of the Jazz tools, or other IBM/Rational technologies.  Some of the people in the Jam are interested in where our products are going.  Some of the people in the Jam are customers of our products, who have some strong opinions about what we should be bringing to the Jazz tools in the future.

The Jam itself is pretty wide open, with people submitting ideas in various theme areas.  You can select a theme, and zero in on all of the suggestions and commentary for just that theme.  Or you can watch the whole thing unfold.  It’s up to you how much time and effort you spend here.  I know some of our customers see this as a place where they can bring up ideas for new functionality, and attempt to get broad support from the user community for their ideas.  Others just like to read the ideas presented, and then vote for the ideas that have the most value to them.  Engage in a way that you feel comfortable with.

So if you’ve read this far, please take an additional couple of minutes to check out the Voice Jam.  You can share your ideas, or just take a look at the ideas that other people have already posted.  Vote for your favorites, and give us your take on how things should be done.  Make sure that the Jazz products move in directions that benefit you and what you do.

P.S.  I do know that some of the comments and ideas that came up in last year’s jam ended up getting translated into product requirements for some of the new functionality around server monitoring that we are exposing this year.  I really want to continue to see this work, so we can better serve our customers, and give them the features and functionality that is valuable to them.

Deploying the Mental Infrastructure

Worldwide travel sounds great, until you have to do it in coach, flying for 24 hours to get to where you are going. Then it becomes a pain in the rear (or back, or leg, pick your stiffest joint after sitting like a sardine for 24 hours). This post isn’t about whining about air travel, it is about learning new things, and seeing how our customers use things in new and interesting ways.  I recently traveled to India for our India Innovate and VoiCE conferences, and while I was there, I heard many of the same types of issues and challenges from our customers in India.  Many of these challenges have to do with what you might think of as the MENTAL infrastructure required for a Jazz deployment.

Why Won’t They Use It?

One of the interesting things that I kept hearing our customers struggle with is in getting people/teams to adopt the CLM/Jazz solutions. The teams seem to get overwhelmed with all of the options available to them, and often struggle with how to get started and best use the tools. After hearing this multiple times, I thought that it might be a good idea to share some of things that I have seen work for our customers who most successfully adopt the Jazz tools.

Keep the process simple

It is easy to fall into the trap of trying to implement a lot of process control in Jazz. Don’t fall into that trap. Keep the process enforcement loose, with simple state models and a limited amount of restrictions. This way you can support a variety of teams with a common process template. For those teams that want more process enforcement, have them do it by creating “bad behavior” queries. A “bad behavior” query is a query that looks for conditions that a project wants to enforce (like no delivery without approvals). Having a widget that alerts a manager to bad practices allows the leader to address the situation with individuals, and also allows them to ignore these rules for exceptional situations. It also allows teams to adopt process enforcement based on the maturity of their team, the type of project, and their specific business situation, without having to change project templates or tooling customizations.
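The logic behind a “bad behavior” query can be sketched in a few lines.  This is an illustrative Python sketch over hypothetical work-item data, not the actual RTC query editor or work-item schema; in practice you would build this as a saved work item query and surface it through a dashboard widget.

```python
# Illustrative sketch of a "bad behavior" query: flag work items that were
# delivered without an approval. The field names ("state", "approvals") are
# hypothetical stand-ins for whatever your work-item schema actually uses.

def bad_behavior_query(work_items):
    """Return work items delivered without any completed approval."""
    return [
        wi for wi in work_items
        if wi["state"] == "Delivered"
        and not any(a["status"] == "Approved" for a in wi.get("approvals", []))
    ]

items = [
    {"id": 1, "state": "Delivered", "approvals": [{"status": "Approved"}]},
    {"id": 2, "state": "Delivered", "approvals": []},          # rule violation
    {"id": 3, "state": "In Progress", "approvals": []},        # not delivered yet
]

flagged = bad_behavior_query(items)
print([wi["id"] for wi in flagged])  # [2]
```

The point of this shape is that the rule lives in a query, not in the process template: a team lead can review the flagged items, excuse the exceptional cases, and tighten or loosen the rule without any tooling customization.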

Share a Roadmap

One of the greatest strategies is the use of what I call a usage model. A usage model covers the basic use cases for the various development roles as they develop their software. If I am a developer, how do I fix a defect? How do I know what to work on next? A high level (no need to get super detailed) guide that lets people know their workflow does two things. One, it lets everyone know that you are concerned about their role, and that you have thought about how each role will do their work, and written down some basic instructions for them. This builds confidence and reduces the fear and anxiety that many have when moving away from the tools and processes that they used to know. Two, it can serve as a small, easy-to-consume reminder that developers, testers, analysts and others can use as a reference. Think of it as one of those “for Dummies” guides. Keep it simple, and at a high level, so people can see the bigger picture.

Adopt in Phases

Organizations can only absorb a limited amount of change without losing effectiveness. Roll out these changes to teams in waves of adoption; do not try to get everyone to adopt at once. As time goes on, your previously adopted teams will help your newly adopting teams, and you will begin to identify best practices for your organization. Having too many teams attempting their initial adoption at the same time will overwhelm your tools administrators. By staggering the adoption of the tool and process changes, you manage to level the workload that you are placing on your tools support infrastructure (both people and hardware).


Organizations that follow these three rules will typically have much smoother rollouts of the Jazz capabilities to their organizations. These rollouts involve organizational change, in terms of tools, process, and often development methodology, so they require careful management. Getting people to change their habits can be a challenge. The organizations that focus on changing these habits often see huge improvements in the quality and quantity of work produced by their development teams.

Deploying Jazz Correctly – The Rules for Doing it Right

Recently I have seen a flood of requests from our customers looking for the best ways to deploy their Jazz solutions (RTC/RQM/RRC).  Everyone wants the “optimal” deployment architecture for their environment, and everyone is fighting with these four basic concerns:

  • Wanting to use the minimum amount of hardware, to save on costs and allocation of hardware resources.
  • Wanting to have a highly performing Jazz solution – a stable environment with minimal performance issues
  • Wanting to have an architecture that will easily scale to meet rising (or falling) demand
  • Wanting to have an ability to monitor system performance and predict future hardware and organizational needs

So looking at this list, you are immediately struck by the fact that some of these concerns are opposed to each other.  To have a highly stable and high performing solution, you want an abundance of hardware resources.  This is directly opposed to the first goal, which calls for the minimum amount of hardware.  People need to understand that the “optimal” conditions for their deployment depend highly on the relative weight that they assign to these four basic concerns.  Most people realize this, but it is important to keep this in mind as we discuss how a successful Jazz deployment will impact these factors.

So in this blog, I want to share with you some of the basic rules that I tend to follow, and the reasoning behind them.  We’ll start with a couple of broad statements and highlight of some classic “broken” thinking, move on to a simple architectural model, and then highlight the things that people don’t consider, but MUST consider if they want a healthy and happy Jazz solution.

Statement #1 – Virtualization is great because it is flexible

A majority of my customers are looking to deploy the Jazz CLM solution into virtual environments.  This is not a bad approach; in fact, I really like it.  Some organizations, however, do not understand the value that virtualization has for a Jazz deployment.  The values that virtualization will bring to a Jazz solution are:

  • The ability to have “predefined” Jazz application instances, ready for some final configuration prior to deployment.  This means easier and lower risk scalability as demand for the Jazz capabilities grow within an organization
  • The ability to modify the amount of resources used by existing Jazz application instances.  If you discover that you have over allocated or under allocated resources, you can easily modify this.

Myth #1 – Virtualization does not provide magic elves that allow your single-CPU desktop to host applications that would normally be handled by 3 enterprise server machines.  Computing depends on basic physics at its core, and those electrons can’t move any faster…..

Many IT departments look at virtualization as a way to harvest more CPU cycles, and so they cram applications into a virtual appliance and end up having an “oversubscribed” virtual environment.  This works for applications that are used sporadically, and those applications that are not business critical.  This does NOT work for an application like the Jazz applications, which usually end up servicing a highly variable (but constant) workload throughout the day, and which are considered business critical.

So how does virtualization benefit someone deploying the Jazz solution?  It allows architects and administrators to quickly define and deploy application servers with a common footprint.  This allows a team to monitor their Jazz solution over time, and gain an understanding of how your specific organization utilizes the infrastructure.  By monitoring these “common” components, it is easy for administrators to quickly determine performance bottlenecks, and identify when additional Jazz application instances are needed.

Statement #2 – Your Workload is Different

Let’s begin with a myth.

Myth #2 – You Jazz guys have a number of users that can be supported by each of the Jazz applications, and you are just too lazy/scared/chicken/paranoid to tell anyone.

We understand how RTC works in general, and we understand REALLY well how RTC works in the IBM internal environments where it has been deployed.  The problem that we face is that everyone who uses the Jazz solutions uses them differently.

Myth #3 – We have an “average” organization, so you should know exactly how many licenses/servers/application instances we need to deploy.

Nobody is average.  Every organization uses slightly different usage models.  To add to this, each user population has a different culture, and they use the tools differently.  Some cultures are obsessed with collaboration, and communicate heavily in work item discussions.  Some cultures focus on continuous integration, and have a constant stream of builds and automated testing being done.  Some cultures manage requirements by User story, and don’t use the requirements management capabilities (RRC) of Jazz.

None of these cultures is wrong.  However, each one of these will put stress on different areas of the Jazz solution architecture.  Now we can add another layer of complexity in that usage models and cultures change over time.  Teams that never used planning tools in the past begin to use the tools, and then they begin using them in different ways as they discover the practices that work best for them.  Keep in mind that software engineers are like electricity, they will find the path of least resistance with ANY tool that they use.  So usage patterns, best practices, and usage models will change over time.  This will change your workload.

The point of this is that attempting to determine the exact workload and the capacity of a Jazz solution prior to its rollout and implementation is an exercise in futility.  However, the entire exercise does have merit.  We can rely on prior experience with deployments, and the law of large numbers, to provide an estimation of what the load on a Jazz solution should be, and what capacity (in terms of hardware, deployed resources, etc.) is needed.  Keep in mind that it is an estimation, and the variables in your environment will make the actual usage vary from your estimation.

So use the estimation to help you predict hardware needs, licensing, performance, and the costs associated with these, but do so with the understanding that this is an estimation.  There are far too many variables to be able to make accurate predictions in this area.

A Simple Architectural Model

Now that we have established some basic “rules”, and explored a couple of common myths, let’s get to something that you CAN use for your Jazz deployment.  Let’s explore a very simple architectural model that will allow you to easily scale and monitor the performance of your Jazz solution.

The Basic Building Block

The basic building block of my Jazz architecture is the application server.  This should be a 4 core system, with 8GB of memory, and at least 80GB of available disk space.  It is a pretty standard system (which is why I chose it).  This is a good set of parameters to use in either physical or virtual environments.  You also need at least a 100 Mbps dedicated network connection.  So if you are in a virtual environment and splitting the network with another 9 virtual machines, the physical box had better have a 1 Gbps network connection.
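The network arithmetic above can be made explicit.  This little sketch just multiplies the post's rule-of-thumb 100 Mbps per building block by the number of VMs sharing the physical NIC; the numbers are the estimates from this post, not a guarantee.

```python
# Bandwidth arithmetic for the building block: each Jazz application server
# wants a dedicated 100 Mbps, so a physical host sharing one NIC among N
# virtual machines needs at least N * 100 Mbps of physical bandwidth.

PER_VM_MBPS = 100  # rule-of-thumb dedicated bandwidth per Jazz building block

def required_host_bandwidth_mbps(vm_count):
    """Minimum physical NIC bandwidth so each VM keeps 100 Mbps dedicated."""
    return PER_VM_MBPS * vm_count

# Ten VMs sharing one host, as in the example above:
print(required_host_bandwidth_mbps(10))  # 1000 Mbps, i.e. a 1 Gbps NIC
```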

I would also deploy on WebSphere (WAS) instead of Tomcat.  WebSphere supports more advanced application server features (like single sign on), and is supposed to be more reliable than Tomcat.  I have had some customers argue that they would prefer Tomcat, as it fits with their open source strategy.  I am OK with Tomcat, I would just prefer WAS.

The operating system is up for debate, and depends on what your organization has for a data center strategy.  I am a Linux bigot, so I would choose to deploy on Linux.  The IBM deployment of Jazz for the Jazz development team (which also hosts is on AIX.  So if I had to be pragmatic, I guess I would choose AIX; since the team developing Jazz is self-hosted on AIX, I think they would have those bugs worked out pretty quickly.

So a brief recap of what I am thinking.  My basic Jazz architectural building block consists of a single server resource (physical or virtual), which hosts a single instance of WAS, which serves up a single Jazz application.  This is all running on top of AIX.  The WAS and AIX are negotiable, you can use other products and technologies if you want.  The hardware specs of my basic building block for a Jazz application server are:

  • 4 cores
  • 8GB memory
  • 80GB available disk space
  • 100 Mbps dedicated network connection

There is one more point on this basic building block that needs to be understood.  In virtual environments, this building block may be variable.  More on that later, when I discuss scalability.

Putting It All Together

So what do I need to put into place to support my Jazz environment?  If I look at the Standard Topologies, I see that the one that fits this model the best, and that I have seen as being the most stable, is the E1 – Distributed Topology.  This diagram is taken directly from the article:

E1 – Distributed Enterprise Topology

How Many Users Will This Support?

This is the section that most people will read, take out of context, and end up in a very unhappy situation.  So please do not just read this section of the post, and use it out of context, because the context here is critical.

With the hardware that I discussed above, and the topology shown above, I would expect that this implementation would support roughly 350 concurrent RTC users, 300 concurrent RQM users, and 150 concurrent RRC users.  Note that if your organization is not using RTC for source control, and builds are not being done from the RTC repository, you can probably expect to support almost twice as many concurrent RTC users.  Keep in mind that this is a rough guesstimate.  This is not IBM policy, it is a simple rule of thumb, a spot to begin your planning from.  Remember the section above where I said that this was an estimation, and that you should monitor the performance of your solution to understand what the limitations and scaling are in YOUR environment.  Every environment is different, and identical topologies implemented by different organizations can have different performance and scalability based on the differences in their usage of the environment.

One other word of caution here.  I mention concurrent users.  My definition of a concurrent user is someone who is actively engaged with the system.  So a developer that brings up an Eclipse client and works on their code, checking in some changes, for an hour would count as a concurrent user.  If they leave their Eclipse client open, and go to lunch, then I would not consider them a concurrent, or active, user.  So when I state that this can support 350 concurrent RTC users, that means 350 users, with some development going on, and some builds executing.  It does not mean 350 people scanning individual work items, and it does not mean 350 concurrent checkins and builds.
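The “actively engaged” definition above can be expressed as a simple window over last-activity timestamps.  This is an illustrative Python sketch of the idea, not anything the Jazz tools expose; the one-hour window and the timestamps are my own hypothetical choices.

```python
from datetime import datetime, timedelta

# Sketch of the "concurrent user" definition used above: a user counts as
# concurrent only if they have acted on the system recently, not merely
# because they left a client open. The one-hour window is illustrative.

ACTIVE_WINDOW = timedelta(hours=1)

def concurrent_users(last_activity, now):
    """Count users whose last recorded action falls inside the window."""
    return sum(1 for ts in last_activity.values() if now - ts <= ACTIVE_WINDOW)

now = datetime(2013, 6, 1, 12, 0)
activity = {
    "dev1": datetime(2013, 6, 1, 11, 45),  # checked in code 15 min ago -> active
    "dev2": datetime(2013, 6, 1, 9, 0),    # Eclipse open, out to lunch -> idle
}
print(concurrent_users(activity, now))  # 1
```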

You will see other data that suggest other numbers for a deployment like this.  I am just sharing a rough approximation of what I have seen deployed in our customer environments.  As subsequent releases of the Jazz tools come out, and some performance issues are addressed, I would expect that this could improve.

At this point I know that you are probably thinking, “Geez Tox, you put so many qualifications on these estimates.  It makes them almost worthless”.  I understand your pain, but what I have seen in numerous instances is that people are treating these installations as if we were supporting automated teller machines, or a stock trading application, where performance can be measured in transactions per second.  Unfortunately, the software development space is quite unbounded, and our transactions are not consistent.  Changing a work item provides a much different workload on the system than running a report, or executing a build.  We just cannot accurately forecast what the specific load characteristics of a developer population will be.


Now if we need to add additional Jazz applications, in order to scale our solution to support more concurrent users, then we would just add additional application instances, and additional corresponding logical database storage areas.  Scaling is easy because we have a basic architectural building block (defined above), and we just need to place another block into this diagram for the new Jazz application instance.
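The scaling step above reduces to simple division against the rule-of-thumb capacities.  This sketch uses the rough per-instance numbers from this post (350 RTC, 300 RQM, 150 RRC concurrent users per building block); these are estimates, not IBM policy, so treat the output as a planning starting point, not a sizing guarantee.

```python
import math

# Rough instance-count estimator based on this post's rule-of-thumb
# capacities: one building-block application server per ~350 concurrent
# RTC users, ~300 RQM users, or ~150 RRC users.

CAPACITY_PER_INSTANCE = {"RTC": 350, "RQM": 300, "RRC": 150}

def instances_needed(concurrent_users):
    """Map expected concurrent users per application to instance counts."""
    return {
        app: math.ceil(users / CAPACITY_PER_INSTANCE[app])
        for app, users in concurrent_users.items()
    }

# Example: a shop expecting 800 RTC, 450 RQM, and 100 RRC concurrent users
print(instances_needed({"RTC": 800, "RQM": 450, "RRC": 100}))
# {'RTC': 3, 'RQM': 2, 'RRC': 1}
```

Each extra instance in the output corresponds to one more building block (and its logical database storage area) dropped into the E1 topology diagram.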

In virtualized environments, you may find that your particular implementation has a consistent bottleneck in one particular area.  For example, we might find that memory and JVM heap utilization is very high in all of our Jazz application instances.  If this was the case, then we could increase the memory in our basic building block from 8GB to 12GB, and update ALL of the virtual machines hosting our Jazz applications.  In this way we remain consistent, and we leverage the flexibility of virtual technology to minimize changes to the logical architecture of our Jazz solution.


Deploying change to an organization is never easy.  It involves having people change their habits, and move from what they know (their current tools and processes) to something that they don’t know.  This is scary for many people, and many people fear the unknown.  The best way to combat this fear of the unknown is education.  Let your end-users know WHAT is going on.  Keep your stakeholders informed of what is happening, and WHY it is happening.  Most of what I have here deals with the PHYSICAL infrastructure of a Jazz deployment, but don’t forget to address the MENTAL infrastructure that needs to change.  More on that in my next blog post.

