Thursday, July 30, 2015

Cluster Lifecycle Management: Capacity Planning and Reporting

In the previous Cluster Lifecycle Management column, I discussed the best practices for proper care and feeding of your cluster to keep it running smoothly on a daily basis. In this column, we will look into the future and consider options for making sure your HPC system has the capacity to meet the needs of your growing business.
I wrapped up the Care and Feeding column by noting how critical it is to monitor the HPC system as a routine part of daily operations to detect small problems before they become big ones. This concept of gathering system data dovetails with capacity planning and reporting because the information you collect each day will paint an overview of larger operational trends over time that help you plan for the future.
An HPC system is not static. In fact, most will need to undergo major upgrades, expansions or refreshes after two to three years. And in all likelihood, these changes will be prompted by new demands put on the system related to growth in your business. New users and new projects will require changes and upgrades. Or in some cases, new or upgraded applications may require more processing capacity.
Perhaps most often, the trigger that prompts a capacity upgrade is related to data. In today’s world, the problems being solved by the HPC cluster are getting bigger and more complicated, requiring the crunching of larger and more complex data sets. Adding capacity is sometimes the only way to keep up.
Monitoring and reporting can tell you how efficiently the processes and applications are running, but the information must be analyzed to determine how busy the system is overall. These details can help you make better decisions on upgrading and changing the system. Specifically, you need to anticipate when to implement capacity upgrades and which components of the system should be changed. Armed with this data, you are more likely to spend your money wisely.
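As a concrete illustration of the kind of analysis involved, the short sketch below fits a linear trend to daily utilization figures and projects when the cluster would cross a chosen planning threshold. The input file, the column names, and the 85% trigger are illustrative assumptions, not part of any particular monitoring product.

    # Sketch: project when average cluster utilization will cross a planning threshold.
    # Assumes a CSV of daily utilization exported from your monitoring system, with
    # illustrative columns "date" and "utilization" (fraction of cores busy, 0.0-1.0).
    import csv
    from datetime import date, timedelta

    import numpy as np

    days, util = [], []
    with open("daily_utilization.csv") as f:        # hypothetical export file
        for i, row in enumerate(csv.DictReader(f)):
            days.append(i)                          # day index since start of sample
            util.append(float(row["utilization"]))

    slope, intercept = np.polyfit(days, util, 1)    # simple linear trend
    THRESHOLD = 0.85                                # planning trigger (assumption)

    if slope <= 0:
        print("Utilization is flat or falling; no capacity trigger projected.")
    else:
        fitted_now = slope * days[-1] + intercept
        days_until = max((THRESHOLD - fitted_now) / slope, 0)
        trigger = date.today() + timedelta(days=round(days_until))
        print(f"Projected to reach {THRESHOLD:.0%} utilization around {trigger}")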
The HPC reporting system should help provide the information needed to decide when to upgrade or add capacity, as well as what type of resources to add. Some of the typical analyses needed are listed below, with a short reporting sketch after the list:
  • What are the most commonly run projects and applications?
  • How do they rank by CPU time?
  • How do they rank by CPU and memory usage?
  • What is the throughput of various architectures (ideally in business metrics such as ‘widgets built’)?
  • Who are the heaviest users of the system?
  • How many resources are used for their jobs?
  • What are the cost allocations of compute and storage by users and projects?
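As a rough illustration of how several of these questions can be answered from scheduler accounting data, the sketch below aggregates a hypothetical export of finished jobs by application, user, and project. The file and field names are assumptions; every scheduler exposes this information in its own format.

    # Sketch: rank applications, users, and projects by consumed CPU hours.
    # Assumes a CSV accounting export with one row per finished job and the
    # illustrative columns "user", "project", "application", and "cpu_hours".
    import csv
    from collections import defaultdict

    cpu_by_app = defaultdict(float)
    cpu_by_user = defaultdict(float)
    cpu_by_project = defaultdict(float)

    with open("job_records.csv") as f:              # hypothetical accounting export
        for job in csv.DictReader(f):
            hours = float(job["cpu_hours"])
            cpu_by_app[job["application"]] += hours
            cpu_by_user[job["user"]] += hours
            cpu_by_project[job["project"]] += hours

    def top(table, n=5):
        return sorted(table.items(), key=lambda kv: kv[1], reverse=True)[:n]

    print("Top applications by CPU hours:", top(cpu_by_app))
    print("Heaviest users:", top(cpu_by_user))
    print("CPU hours by project:", top(cpu_by_project))

The same accumulation pattern extends naturally to memory usage, job counts, or per-architecture throughput once those fields are present in the export.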
It’s critical to understand that the answers to these questions will vary by location. With the distributed architectures so common in HPC implementations at large organizations, clusters, end users, and data centers may be spread around the world, and their hardware and software may not be the same from one location to the next. In addition, local variables such as labor and electricity expenses will affect system operating costs.
The real challenge, therefore, is for the reporting system to consolidate this information from multiple locations so that it can be analyzed, both regionally and centrally, with return on investment (ROI) in mind. The analysis must examine HPC usage data from the perspective of the business: usage metrics must be monetized so that capacity expansion decisions can be weighed against their expected return. Capacity upgrades have to be planned and implemented in a way that maximizes service to end users and their projects while still providing a positive ROI for the whole organization.
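To make the monetization step concrete, the fragment below turns consumed and unmet core-hours into cost figures using a per-site rate, so a proposed expansion can be weighed against what the extra capacity would actually be worth. Every number here is an invented placeholder; real rates would come from your finance and facilities teams.

    # Sketch: monetize usage per site and compare against the cost of added capacity.
    # All rates and quantities below are illustrative placeholders.

    # Fully loaded cost per core-hour (hardware amortization, power, labor) by site.
    cost_per_core_hour = {"us-east": 0.045, "eu-west": 0.060}

    # Core-hours consumed last quarter and core-hours of demand that could not be
    # scheduled (queued work), per site; both would come from the reporting system.
    consumed = {"us-east": 2_100_000, "eu-west": 1_400_000}
    unmet_demand = {"us-east": 350_000, "eu-west": 40_000}

    node_cost_per_quarter = 2_500                 # amortized cost of one extra node
    node_core_hours_per_quarter = 64 * 24 * 91    # 64 cores for roughly 91 days

    for site, rate in cost_per_core_hour.items():
        spend = consumed[site] * rate
        expansion_value = min(unmet_demand[site], node_core_hours_per_quarter) * rate
        print(f"{site}: current spend ${spend:,.0f}/quarter; one extra node would "
              f"absorb ${expansion_value:,.0f} of unmet demand "
              f"at a cost of ${node_cost_per_quarter:,}")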
So, how is this reporting and analysis best conducted?
Smart planning for capacity changes clearly benefits from solid data reporting. Ideally, data is collected from systems and schedulers continuously and made available for analysis as needed, rather than being hunted down at the moment a decision has to be made.
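As one example of continuous collection, a nightly cron job can pull the previous day's accounting records out of the scheduler and append them to a local archive that the reporting layer reads later. The sketch below assumes a Slurm cluster and its sacct command; other schedulers have equivalent accounting exports, and the archive path is an assumption.

    # Sketch: nightly pull of yesterday's job records from Slurm accounting into a
    # dated CSV archive. Assumes Slurm's sacct is installed; other schedulers offer
    # equivalent accounting dumps.
    import subprocess
    from datetime import date, timedelta
    from pathlib import Path

    yesterday = date.today() - timedelta(days=1)
    out = Path(f"accounting/{yesterday}.csv")       # local archive path (assumption)
    out.parent.mkdir(parents=True, exist_ok=True)

    records = subprocess.run(
        ["sacct", "--allusers", "--parsable2", "--noheader",
         "--starttime", str(yesterday), "--endtime", str(date.today()),
         "--format", "JobID,User,Account,JobName,Elapsed,CPUTimeRAW,MaxRSS,State"],
        check=True, capture_output=True, text=True,
    ).stdout

    out.write_text(records)
    print(f"Archived {len(records.splitlines())} job records to {out}")

Collecting into dated files like this keeps the raw data available for later analysis even if the reporting tool on top of it changes.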
There are several types of tools available to provide the above information. For example, open source charting tools like Ganglia, Cacti, Zabbix and others collect performance data from the systems and the network. Several of these can be extended to add custom metrics. Some cluster managers also come with reporting tools that give insights into cluster health and performance. Most of these solutions work across heterogeneous architectures.
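The custom-metric extension typically amounts to pushing a value into the charting tool on a schedule. The sketch below assumes Ganglia's gmetric command-line tool and a hypothetical helper that counts queued jobs; Zabbix and others have comparable "push a value" interfaces.

    # Sketch: publish a business-level metric (jobs waiting in the queue) so the
    # charting tool can graph it alongside system metrics. Assumes Ganglia's gmetric
    # is installed; the queue count below is a hardcoded placeholder.
    import subprocess

    def count_queued_jobs():
        # Placeholder: in practice this would query your scheduler (squeue, qstat, ...).
        return 42

    subprocess.run(
        ["gmetric", "--name", "jobs_queued", "--value", str(count_queued_jobs()),
         "--type", "uint32", "--units", "jobs"],
        check=True,
    )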
At the next level are job-specific reporting tools from the various commercial scheduler vendors. These provide basic user and job information with varying levels of sophistication. In general, they are proprietary to each scheduler.
At a higher level, generic data analytics tools like Splunk can derive insight from many of the sources above, but they require significant expertise, customization, and upkeep to produce effective results. Finally, there are a few platform-independent, HPC-specific analytics tools, such as DecisionHPC, that provide globally consolidated, single-pane-of-glass system and job reporting for heterogeneous clusters and schedulers.
At the other end of the spectrum, some HPC operators choose to build their own custom reporting tools. This also requires significant HPC knowledge as well as development expertise to ensure that the solution can scale, can meet ever-changing user needs, and remains supportable and maintainable over the long term.
As commercial adoption of HPC increases, so does the importance of good reporting and analytics, and more solutions are becoming available as a result. Ideally, you should have a long-term strategy for a reporting and analytics solution that is independent of the various operational tools that tend to change over time, that can be easily supported, and that can be easily customized to your business needs.
At the end of the day, the system manager needs a solid reporting system to meet the evolving needs of the end users and the business. Knowing in advance when you will need to add resources and capacity is a critical part of that, as getting into the budget cycle and procuring the upgrades usually requires long lead times. When it comes to upgrading resources, last-minute decisions and investments made without insight into current and historical trends are simply not practical.
In the next column I will discuss the final stage of the cluster lifecycle – Recycling and Rebirth.
