There are a number of things that can impact the performance of your Jazz servers.  Some of these things you can address with better architecture.  Some things can be addressed with better tuning of the web servers, JVMs, databases and Jazz applications.  Some things cannot be changed.  Some things are just slaves to the laws of physics.  Electrons can only move so fast (“186,000 miles per second – Not just a good idea, it’s the law”), and there are distinct physical limitations that we need to keep in mind in our environments.  What are some of these limitations that you need to be aware of?

Latency

The biggest physical limitation to the performance of any computing system is latency.  The amount of time it takes for your data to make the physical journey from your Jazz server to the end user machine is just one place where latency comes into play.  There is also the amount of time data needs to travel between the various servers that support your Jazz applications.  What are some of the more common causes of latency?

Since the Jazz applications often depend on LDAP for user authentication, the latency between the Jazz JTS application and the LDAP server can have an impact on end user performance.  That is why we always recommend deploying Jazz in a data center, since most corporate data centers have a local LDAP repository, thus minimizing the amount of latency between the JTS and any associated LDAP servers.

Likewise, the Jazz application servers, web servers, and backend database servers should be co-located if at all possible.  Some Jazz customers do have them in different locations, but I always insist that they reside in the same physical location.  A 100ms ping time may not seem like much, but when a query needs to return data from hundreds of work items, and each of those operations incurs that round trip between the Jazz server and the database server, the time adds up quickly.
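
To see why this matters, here is a quick back-of-the-envelope sketch.  The latency and operation count below are illustrative assumptions, not measurements from a real deployment:

    # Rough illustration of how per-operation round trips accumulate between a
    # Jazz application server and its database server.  Both numbers are
    # hypothetical; substitute values measured in your own environment.
    round_trip_ms = 100    # assumed round-trip time per database operation
    operations = 500       # assumed number of operations behind one large query
    total_seconds = (round_trip_ms * operations) / 1000.0
    print(f"Latency alone adds roughly {total_seconds:.0f} seconds")   # ~50 seconds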

You cannot control where end users will access your Jazz solution from, but you can control expectations.  Users that are overseas need to understand that their performance will not be as good as the performance for the people who are in a more local region.  There are some ways that you can address this, using SCM caching proxies and distributed Jazz SCM capabilities, but you always have to be aware of the issue.  Make sure that your end users have realistic expectations.

One of the lesser known causes of latency, and one that most people ignore, is the presence of switches and routers in a network.  A physical database server may be only 30 feet away from the Jazz application servers, but if these are on different subnets, and require routing between multiple switches to communicate, then the net effect is the same as having the machines thousands of miles apart.  Because of this it is always important to have a feel for what the typical round trip time is for your data.  You can use network tools to determine this, or you can use the Jazz Performance Health Check widget.  Do NOT use ping as a measure of latency.  Ping sends small packets, and networks will handle ping packets differently from Jazz application packets.  Trust the results that you see in the performance health check widget, since it does not use ping, but sends actual packets of data between the web client and the Jazz application servers.
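
If you want a quick command line number in addition to the widget, a minimal sketch like the one below measures an actual HTTP round trip rather than a ping.  The URL is a placeholder for your own Jazz application server, and Python with the requests library is simply one convenient way to do it:

    # Time a handful of real HTTP requests to a Jazz server instead of using ping.
    # These requests travel through the same switches, routers, and proxies as
    # normal Jazz traffic.  The URL below is a hypothetical example.
    import time
    import requests

    url = "https://jazz.example.com:9443/ccm/web"   # replace with your own server
    samples = []
    for _ in range(5):
        start = time.perf_counter()
        requests.get(url, verify=False, timeout=30)  # verify=False only for a quick lab test
        samples.append((time.perf_counter() - start) * 1000.0)
    print(f"average round trip: {sum(samples) / len(samples):.1f} ms")

Keep in mind that a measurement like this includes server response time as well as pure network latency.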

Jazz applications are intended to perform in wide area networks, but anything that you can do to further minimize latency will result in improved end user performance.

Throughput

The second big physical factor that will limit your Jazz performance is the concept of throughput.  This is the amount of data that can be transferred between the components of your solution architecture in a given period of time.  The most common place where people observe throughput as a performance bottleneck is at the web server layer.  A web server can only service a certain number of requests in a given period of time.  This is a limitation of the Jazz solution that is imposed by the web server technology that you choose to deploy with (either Tomcat or WebSphere).

A more common limitation is the network.  Your network can only supply a certain volume of data in a given time period, regardless of the number of requests waiting to go onto the network.  Network bandwidth is not typically an issue, but it can become an issue in a couple of different situations.  In the case of large builds, teams will often first create a workspace in which to execute the build.  In cases where the code base is large (like a full Android build), all of the code needs to be transferred from the repository to the local workspace.  Users need to understand that this is the equivalent of downloading the entire code base over the network.  For a large code base, this transfer of data would take tens of minutes (if not hours) if done via FTP over the network, and it will not go any quicker with Jazz; the files still need to be physically copied from the repository to the workspace area.  Coordinating large builds, and doing incremental builds, can help spread out and reduce some of the pain.  Having a build farm that is close to the Jazz application servers is another way to reduce the impact of builds on your network.
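
A rough sanity check on workspace load times looks something like this.  The code base size and network bandwidth below are illustrative assumptions; plug in your own numbers:

    # Back-of-the-envelope estimate of the time needed to populate a build
    # workspace over the network.  Both values below are hypothetical.
    code_base_gb = 20      # assumed size of the full code base
    link_mbps = 100        # assumed effective network bandwidth (megabits/second)
    transfer_seconds = (code_base_gb * 8 * 1000) / link_mbps
    print(f"Best case transfer time: {transfer_seconds / 60:.0f} minutes")   # ~27 minutes

Real transfers will usually be slower than this best case, since the link is shared with other traffic and the repository has to do work on every file.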

The second area where I have seen network bandwidth come into play is in virtual environments.  In some virtual environments, the virtual machine appears to have a full 100BaseT or gigabit ethernet connection to the network.  However, the virtual machine is actually sharing this connectivity with other virtual machines that exist on the same physical hardware.  So if you have 4 Jazz application servers running in their own virtual machines, on the same physical hardware, then the maximum amount of throughput for ALL of those machines combined will be limited by the physical network bandwidth.  In this situation, a large build being done out of the repository of one of your Jazz SCM instances could potentially impact the performance of ALL of your Jazz applications, since the network bandwidth is being saturated by the files being transferred in support of the build.

Another area where throughput and bandwidth come into play is the disk I/O in a Jazz solution.  The Jazz application servers are not heavy users of the disk storage on their systems, since data is stored in the repositories, which are located on the database servers.  Since Jazz deployments do store a large number of artifacts, some of which are quite large, the disk I/O on the database servers is critical.  The disk I/O controllers will limit how fast data can be pulled from the database, so it can be returned to the Jazz application server, and ultimately to the user.  Make sure that you are monitoring the speed and load put on the disk I/O controllers, so you can see when disk I/O throughput issues are limiting your performance.
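
If you just want a quick feel for disk throughput on a database server, a minimal sketch like the one below samples the operating system counters over a ten second window.  It assumes the psutil Python package is available; iostat, nmon, or your existing monitoring tools will give you the same information:

    # Sample disk I/O counters over a short interval to estimate throughput.
    import time
    import psutil

    before = psutil.disk_io_counters()
    time.sleep(10)                      # sampling interval in seconds
    after = psutil.disk_io_counters()

    read_mb = (after.read_bytes - before.read_bytes) / (1024 * 1024)
    write_mb = (after.write_bytes - before.write_bytes) / (1024 * 1024)
    print(f"disk reads:  {read_mb / 10:.1f} MB/s")
    print(f"disk writes: {write_mb / 10:.1f} MB/s")

Whatever tool you use, the point is to watch these numbers over time, so you can recognize when the database server disks are the thing holding your users back.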

Summary

Jazz performance has been a hot topic with many customers recently.  Everyone wants to know how to optimize their Jazz deployments.  What most people REALLY want is to balance the costs associated with the hardware needed to support a Jazz deployment, with the user expectations of acceptable performance.  Jazz administrators can tune and adjust their deployments to make them more efficient, but they need to realize that basic architecture and the physical limitations of the hardware environment will also have an impact on the overall performance of their deployments.  Being aware of how these physical limitations impact how you deploy your Jazz solution will allow you to make better architectural choices for your specific deployment.

Other Articles in the Series

This series of articles is now complete.  Here is a list of the topics covered:
