In the fast-paced world of HPC, data storage is an essential component in guaranteeing business-critical deliverables. HPC data storage systems rely on parallel file systems to deliver maximum performance – but CIOs have two options to choose from, as Jim Donovan explores
High-Performance Computing (HPC) and its ability to store, process and analyse vast amounts of data in record time is driving innovation all around us. With enterprises increasing their use of emerging technologies such as AI, machine learning and augmented reality to improve productivity and efficiency, they are looking for the best high-performance data storage infrastructure to support business operations and make automated decisions in real time.
HPC data storage systems rely on parallel file systems to deliver maximum performance, and there are two options to choose from: open source or commercial parallel file systems. Opinions abound on both, so it’s worth examining what’s hype and what’s real.
Cost of acquisition – What’s better than free?
Open source software is, by definition, free to acquire, and open source parallel file systems such as Lustre and BeeGFS are no exception.
While there are highly proficient Lustre and BeeGFS architects and developers in HPC labs around the world ready to tackle each system’s complex set-up, tuning and maintenance requirements, enterprise users can become overwhelmed by a system that lacks the manageability and ease of use they have grown accustomed to in their existing IT environment.
This means that by the time CIOs factor in the cost of additional staffing requirements to implement and manage an open source parallel file system, there’s quite a price tag associated with the ostensibly ‘free’ purchase.
Here’s where commercial parallel file systems have a competitive edge over open source offerings. Commercial parallel file systems are delivered as plug-and-play systems that offer some of the lowest total cost of operation and ownership in the business. This is due to ease of deployment and simple manageability, which results in negligible administrative overhead. In addition, commercial file systems are capable of automatic tuning and retuning as workloads change, thereby reducing the opportunity cost of downtime.
If true customisation is needed, enterprise users should assess the type of skill set and number of staff required to successfully modify and support the open source code. If flexibility is the ultimate goal, today’s modern commercial file systems offer dynamic adaptation to changing workflows without making changes to code.
Built on industry-standard hardware that allows for the rapid adoption of new technology, commercial parallel file systems are self-tuning solutions, purpose-built for the adaptability and flexibility to handle a wide range of use cases. Users can configure the system to their exact workload needs without overprovisioning any single component. Systems scale without limitation, and bandwidth, capacity and metadata performance can be set independently with granular control.
Elimination of the ‘performance’ gap
The performance of today’s open source parallel file systems is on par with commercial parallel file systems. The ability to quickly scale in increments without interruption and tuning is crucial for commercial applications to stay on track and meet demanding time-to-market schedules. The processing of large and complex data sets with high precision while handling thousands of I/O operations simultaneously is a must for high-end computing deployments in the commercial space, such as computer-aided engineering (CAE) simulation and analysis, energy exploration and drug development, as well as emerging workloads such as AI and autonomous driving.
Performance is optimised and reliably consistent when the software and hardware are pre-tuned, allowing the system to adjust automatically to increasing complexity. This is the case with commercial parallel file systems that have been optimised for, and are in tune with, pre-qualified commodity hardware components.
Open source file systems don’t benefit from the same level of seamless integration; they often require deep knowledge of how the storage system works in order to tune and re-tune it for the maximum performance and bandwidth utilisation demanded by different workloads.
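To illustrate the kind of hands-on tuning this refers to, below is a sketch of how an administrator might adjust file striping on a Lustre system as workloads change. The directory paths and parameter values are hypothetical examples chosen for illustration, not recommendations.

```shell
# Hypothetical example: re-tuning Lustre striping by hand as workloads shift.
# Paths and values are illustrative only.

# Stripe new files in this directory across 8 storage targets (OSTs)
# with a 4 MiB stripe size, to spread large sequential I/O.
lfs setstripe -c 8 -S 4M /lustre/project/cae_results

# Small-file, metadata-heavy workloads often do better on a single target.
lfs setstripe -c 1 /lustre/project/logs

# Verify the layout that actually applies to a directory or file.
lfs getstripe /lustre/project/cae_results
```

Each new workload can call for a different layout, and getting these values wrong costs performance – this recurring, manual adjustment is the administrative overhead that self-tuning commercial systems are designed to remove.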
System maintenance – What does it take to keep things running reliably?
Commercial file systems have mastered a ‘lights-out’ operational approach, while administrators of many open source counterparts still spend a considerable amount of time on day-to-day storage management and maintenance, dealing with the time-consuming, complex and error-prone work of tuning in order to optimise the interaction of software and hardware.
Bringing it all together
Today, the need for high-performance data storage infrastructure in the commercial enterprise cannot be overstated. The massive volumes of data generated by emerging technologies such as AI and machine learning are growing exponentially due to the ease of application integration with enterprise business, covering all industries from manufacturing to life sciences.
Fuelled by hardware innovations and software-driven services, HPC data storage systems are allowing enterprises to use new technology to achieve greater levels of productivity and operational efficiency than ever before, and it’s the outstanding performance capabilities of parallel file systems that are servicing this demand.
When all the evidence is considered, enterprise CIOs who want to avoid the potential operational and reputational risk of failure will see that the benefits of choosing a commercial parallel file system strongly outweigh the exposure of financing in-house resources and building the infrastructure required to implement an open source solution.