In the previous Cluster Lifecycle Management column,
I described the crucial steps that should be taken to deploy and
validate your new cluster. In this column, I discuss how best to move
the system into production, configure, and maintain it so that
operations run smoothly and efficiently for the long term.
Once the deployment and validation of your new HPC cluster are
completed, it is time for the HPC systems management functions to begin.
I am assuming the advice from the previous columns was followed and that the
primary HPC system administrator was identified and put in place during the
deployment phase. This is no time to discover you do not have an HPC
expert on your staff or at your disposal. Just because the hardware and
software are humming now doesn’t mean they will stay that way. Like any
other complex system, the HPC cluster needs to be continuously
monitored, analyzed, and maintained to keep it running efficiently.
The mistake I’ve seen made too often, especially by larger
organizations, is the assumption that someone on the existing IT staff
can probably figure out the HPC system, perhaps with some minor
training. Unfortunately, this rarely works out. Although HPC is a niche
within the larger Information Technology space, even the best IT
generalist will have little or no experience in supercomputing. It is
NOT just a collection of Linux or Windows servers stacked together. HPC
is a specialization unto itself.
You must have HPC expertise available to you if you want the new
system to perform as expected. There are two options – hire one or more
full-time HPC administrators or contract for ongoing HPC system support.
Budget will likely dictate which works best for your organization. In
many scenarios, contract support may be the better option, either because
HPC experts are hard to find and retain on staff given intense market
demand, or because you may not need a full-time person. Check with
your system vendor or integrator to see if they offer contracted
management services.
Now that your cluster is operational and you have one or more skilled HPC
administrators on staff or under contract, the first job is to configure
the cluster so that it runs well operationally. This responsibility has two
major aspects: the cluster must be configured to work optimally from an end
user usability perspective and from a systems operation perspective.
The administrator must first set up proper security access for the
end users. There are two major components to a successful security
design. The first addresses connectivity to the appropriate
authentication system that makes sure users can securely log in. Often
the cluster has to be configured to tie into an already established
enterprise system such as LDAP, Windows, etc. It is critical that this
authentication performs with speed and reliability. HPC jobs running in
parallel will often fail if the authentication system is unreliable. The
second component to success addresses the authorization requirements.
The administrator must validate that the file systems and directory
permissions follow the authorization policies. This is critical so that
users can work smoothly all the way from submitting the jobs to
reviewing the results from their workstation. These permissions must then be
set up, configured, and tested across both the compute and storage components
for each unique user group.
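To make this concrete, here is a minimal sketch of the kind of spot check an administrator might script, assuming a Linux cluster whose accounts come from LDAP; the user names, group names, directory paths, and permission modes are illustrative, not prescriptive:

# check_access.py - spot-check authentication and authorization settings.
# Assumes a Linux cluster whose accounts come from LDAP (via sssd or similar);
# user names, group names, paths, and modes below are illustrative only.
import grp
import os
import pwd
import stat

# Authorization policy: project directory -> (owning group, expected mode)
POLICY = {
    "/shared/projects/cfd": ("cfd_users", 0o2770),
    "/shared/projects/genomics": ("gen_users", 0o2770),
}

def check_user(username):
    """Verify the account resolves through the configured name service (e.g., LDAP)."""
    try:
        entry = pwd.getpwnam(username)
        print(f"OK   user {username} resolves (uid={entry.pw_uid})")
    except KeyError:
        print(f"FAIL user {username} does not resolve - check LDAP/sssd configuration")

def check_directory(path, group, mode):
    """Verify group ownership and permissions match the authorization policy."""
    try:
        st = os.stat(path)
    except FileNotFoundError:
        print(f"FAIL {path} does not exist")
        return
    actual_group = grp.getgrgid(st.st_gid).gr_name
    actual_mode = stat.S_IMODE(st.st_mode)
    status = "OK  " if (actual_group == group and actual_mode == mode) else "FAIL"
    print(f"{status} {path}: group={actual_group} mode={oct(actual_mode)} "
          f"(expected group={group} mode={oct(mode)})")

if __name__ == "__main__":
    for user in ("alice", "bob"):   # sample test accounts
        check_user(user)
    for path, (group, mode) in POLICY.items():
        check_directory(path, group, mode)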
Additionally, policies may need to be set up on the scheduler to
allocate resources to the various user groups and application profiles, as
well as on storage to meet their varying space requirements. When security,
compute, and storage are configured, users can safely log in to the
system and know where to securely put their data.
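As an example of how such policies might be captured, the following sketch assumes a Slurm scheduler with accounting enabled and a Lustre file system; every account name, fairshare weight, and quota value is illustrative, and the commands are printed for review rather than executed:

# provision_groups.py - emit scheduler and storage policy commands for review.
# Assumes a Slurm scheduler with accounting enabled and a Lustre file system;
# every account name, fairshare weight, and quota below is illustrative.

GROUPS = {
    #  group        (fairshare, Lustre hard quota)
    "cfd_group":  (40, "10T"),
    "gen_group":  (30, "25T"),
    "chem_group": (30, "5T"),
}

LUSTRE_MOUNT = "/lustre"   # hypothetical mount point

def main():
    for group, (share, quota) in GROUPS.items():
        # Slurm accounting: create the account and give it a fairshare weight.
        print(f'sacctmgr -i add account {group} Description="{group} users"')
        print(f"sacctmgr -i modify account {group} set Fairshare={share}")
        # Lustre: cap how much space the group can consume.
        print(f"lfs setquota -g {group} -B {quota} {LUSTRE_MOUNT}")
        print()

if __name__ == "__main__":
    main()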
If your cluster is brand new, the users are most likely first-time
users of HPC technology. This means they will need training and
instruction on how to run their applications on the system. The
applications they ran on a desktop or mainframe will not perform the
same way on the cluster. Users will likely need application-specific
training. Depending on the scheduler, there will be different ways to
submit jobs from various applications.
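For instance, on a Slurm-based cluster (used here purely as an illustration; the partition, resource requests, and application command are placeholders), a small wrapper like the one below can generate and submit a batch job so that first-time users do not have to hand-write scheduler directives:

# submit_job.py - build and submit a simple batch job.
# Assumes a Slurm scheduler; the partition, resources, and application
# command line below are placeholders for a site's real values.
import subprocess
import tempfile

def submit(job_name, command, nodes=2, ntasks=64, walltime="02:00:00",
           partition="standard"):
    script = f"""#!/bin/bash
#SBATCH --job-name={job_name}
#SBATCH --nodes={nodes}
#SBATCH --ntasks={ntasks}
#SBATCH --time={walltime}
#SBATCH --partition={partition}

srun {command}
"""
    # Write the generated script to a temporary file and hand it to sbatch.
    with tempfile.NamedTemporaryFile("w", suffix=".sbatch", delete=False) as f:
        f.write(script)
        path = f.name
    result = subprocess.run(["sbatch", path], capture_output=True, text=True)
    print(result.stdout.strip() or result.stderr.strip())

if __name__ == "__main__":
    # Hypothetical solver invocation; replace with the site's application.
    submit("cfd_case42", "./my_cfd_solver input.cfg")

The point is not this particular wrapper, but that users get a consistent, documented path into the scheduler for each application.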
It will be the administrator’s responsibility to begin building a
written knowledge base pertaining to the cluster and each application.
This hardcopy or web-based document will serve as a guide for users to
understand how to submit and track jobs and what to do if a problem
occurs. Depending on the size and sophistication of the user base, it may also
make sense to look at web portals that can make job management easier
for the end users.
For the cluster itself, the administrator should set up monitoring
and alerting tools as soon as the system becomes operational.
Monitoring, reporting, and alerting of storage, network, and compute
services on a continuous or periodic basis are critical to identify
signs of trouble before they turn into major malfunctions. Minor usage
problems could simply mean disk space is filling up, but soft memory
errors could be signs of impending node failure.
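As a simple illustration of the idea rather than a replacement for a full monitoring stack, the sketch below checks file system usage and, on Linux nodes with EDAC support, the corrected-memory-error counters; the mount points and alert threshold are assumptions to adapt to your site:

# node_health.py - lightweight checks for filling disks and soft memory errors.
# Thresholds and mount points are illustrative; a real deployment would feed
# these results into the site's monitoring and alerting system.
import glob
import shutil

DISK_ALERT_PCT = 85                        # warn when a file system is this full
FILESYSTEMS = ["/", "/scratch", "/home"]   # hypothetical mount points

def check_disks():
    for fs in FILESYSTEMS:
        try:
            usage = shutil.disk_usage(fs)
        except FileNotFoundError:
            continue
        pct = 100 * usage.used / usage.total
        if pct >= DISK_ALERT_PCT:
            print(f"ALERT  {fs} is {pct:.0f}% full")
        else:
            print(f"ok     {fs} at {pct:.0f}%")

def check_memory_errors():
    # Linux EDAC exposes corrected-error counts per memory controller.
    for counter in glob.glob("/sys/devices/system/edac/mc/mc*/ce_count"):
        count = int(open(counter).read())
        if count > 0:
            print(f"ALERT  {counter} reports {count} corrected memory errors")

if __name__ == "__main__":
    check_disks()
    check_memory_errors()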
Such monitoring and analysis tools are readily available. Many HPC
clusters come equipped with system-specific tools, while other more
robust technical and business analysis packages are commercially
available. Whatever their source, these tools should be set up to
identify and predict routine maintenance issues, such as disk cleanup
and error log review, as well as actual malfunctions that must be
repaired.
In my experience, however, pinpointing the cause of many problems
in the HPC domain requires looking for clues in multiple components.
When things are going wrong with an HPC cluster, alarms may be triggered
in several places at once. The skilled administrator will review all of
the flagged performance issues and figure out what the underlying cause
actually is. Few software tools can take the place of a human in this
regard.
Proper care of the cluster also requires the administrator to be
proactive. Every three to six months, I recommend running a standard set
of diagnostics and benchmarks to see if the cluster has some systemic
issues or has fallen below baselines established during deployment. If
so, further scrutiny is in order. Last, but not least, the HPC
administrator must find the right way to make changes so that all
applications keep working well on the cluster. Patches and changes to
applications, libraries, the OS, or hardware must be carefully considered
and, where possible, tested before they are implemented. I have seen quite a
few expensive outages in which a simple change made for one application caused
failures in other coexisting applications.
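Returning to the earlier point about periodic diagnostics, a lightweight way to act on it is to keep the deployment-time benchmark numbers in a file and compare every later run against them. The sketch below assumes a simple JSON layout and a 5% tolerance, both of which are arbitrary choices:

# baseline_check.py - compare periodic benchmark results against baselines
# recorded at deployment. The metric names, file format, and 5% tolerance
# are illustrative choices, not a standard.
import json

TOLERANCE = 0.05   # flag results more than 5% below baseline

def compare(baseline_file, current_file):
    baseline = json.load(open(baseline_file))
    current = json.load(open(current_file))
    for metric, base_value in baseline.items():
        value = current.get(metric)
        if value is None:
            print(f"MISSING   {metric}: no current measurement")
            continue
        drop = (base_value - value) / base_value
        if drop > TOLERANCE:
            print(f"REGRESSED {metric}: {value} vs baseline {base_value} "
                  f"({drop:.1%} below)")
        else:
            print(f"ok        {metric}: {value}")

if __name__ == "__main__":
    # e.g. {"stream_triad_GBps": 185.0, "hpl_gflops": 41200.0}
    compare("baseline_deployment.json", "benchmarks_latest.json")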
Finally, a viable backup plan must be in place so the system can be
brought back online quickly in the event of a failure. The most important
things to back up are the configurations of the scheduler, head node, and
key software, along with applications and user data. While intermediate data does
not often need to be backed up, user input and output data should be,
especially if the time to regenerate results is high. The organization
should also establish data retention policies determining when data
should be backed up from the cluster to offsite storage.
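Purely as an illustration, and with the caveat that the paths are placeholders and the archive should land on storage independent of the cluster, a minimal sketch of archiving the scheduler and head-node configuration alongside user data might look like this:

# backup_configs.py - archive scheduler/head-node configuration and key data.
# Paths and destination are placeholders; real backups should land on storage
# that is independent of the cluster itself.
import datetime
import os
import tarfile

# Hypothetical locations of the most important items to protect.
BACKUP_PATHS = [
    "/etc/slurm",            # scheduler configuration
    "/etc/exports",          # head-node NFS exports
    "/opt/cluster/config",   # site-specific cluster configuration
    "/home",                 # user input/output data (intermediate data excluded)
]

DESTINATION = "/backup"      # ideally a remote or offsite mount

def run_backup():
    os.makedirs(DESTINATION, exist_ok=True)
    stamp = datetime.date.today().isoformat()
    archive = os.path.join(DESTINATION, f"cluster-backup-{stamp}.tar.gz")
    with tarfile.open(archive, "w:gz") as tar:
        for path in BACKUP_PATHS:
            if os.path.exists(path):
                tar.add(path)
            else:
                print(f"skipping missing path: {path}")
    print(f"wrote {archive}")

if __name__ == "__main__":
    run_backup()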
An extension of caring for and feeding your new cluster is “Capacity
Planning and Reporting,” which I will cover in the next column.
Deepak Khosla is president and CEO of X-ISS Inc.