Cyber security measures are a critical component in data centre remote monitoring.
Mon 10 Oct 2016

Torben Nielsen, Senior Research Engineer, IT Business, Schneider Electric
Online cloud-based remote monitoring platforms are an essential tool in the efficient management of mission-critical data centres, but care must be taken at the deployment stage to guard against security vulnerabilities.
Centralised monitoring stations, frequently operated by specialist third parties, are a familiar, efficient and effective means of managing the operations of mission-critical IT facilities. In recent years remote, or online, monitoring has evolved rapidly: from a method in which centralised management teams received intermittent status reports by email, to sophisticated real-time systems that provide constant monitoring through cloud services, data analytics and mobile apps.
Thanks to such software, many issues concerning capacity or the failure of a single piece of infrastructure equipment can be anticipated and rectified quickly: downtime is reduced, mean time to recovery is shortened and energy efficiency is improved across all systems, including power and cooling. The disadvantage, however, is an increased vulnerability to cyber attack, a growing problem for every connected business today.
Juniper Research estimates that the cost of data breaches will reach $2.1 trillion globally by 2019. Naturally, there is a large and growing arsenal of counter measures available to guard against unwarranted intrusion into one’s vital information systems. But in the case of the digital remote monitoring platforms on which many data centres rely for their effective operation, special attention needs to be paid at the development stage to ensure that they are as robust as possible in the event of any attack.
DevOps, short for development and operations, is one of the more recent approaches used to prevent cyber attacks from causing theft, loss of data and system downtime. In this model, dedicated teams are deployed to bridge the gap where development and operations functions were traditionally split. This ensures a stronger level of communication and collaboration in the effort to protect an organisation, user or platform from cyber attack.
Although the primary responsibility for the development of a monitoring platform lies with the software vendor, data centre operators must nevertheless be able to evaluate such systems not just on the basis of features and functions but also on their effectiveness in terms of security.
Knowing how secure a monitoring platform is requires an understanding of how it is developed, deployed and operated. A recognised standard, ISO 27034, provides guidance on specifying, designing, selecting and implementing information security controls through an organisation’s system development life cycle. Data centre operators evaluating remote monitoring platforms should ensure that they are developed using a Secure Development Lifecycle (SDL) methodology that is compliant with ISO 27034.
A typical SDL methodology is based around eight key practices, namely: training, establishment of requirements, design, development, verification, release, deployment and response to incidents.
A vendor should have in place a continuous training programme for its employees covering all aspects of designing, developing, testing and deploying secure systems. It should also have procedures to ensure that its employees do not themselves become the vector for a cyber attack. Quite apart from identifying potentially malicious employees through proper recruitment and vetting procedures, a vendor should be able to satisfy a prospective customer that its employees are continuously trained on aspects of cyber security based on their role, whether developer, operator or field-service technician. There should also be a hierarchy of access privileges, with the vendor's employees given access only to the IT and network functions and resources they need to perform their jobs.
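To illustrate the least-privilege principle, the sketch below maps hypothetical vendor roles to the minimum set of platform actions each needs; the role names and permissions are assumptions chosen for illustration, not an actual vendor access model.

```python
# Minimal sketch of role-based, least-privilege access checks.
# Role names and permissions are illustrative assumptions only.

ROLE_PERMISSIONS = {
    "developer": {"read_source", "commit_code", "run_tests"},
    "operator": {"view_dashboards", "acknowledge_alarms"},
    "field_technician": {"install_gateway", "view_device_status"},
}

def is_allowed(role: str, action: str) -> bool:
    """Grant an action only if it is explicitly listed for the role."""
    return action in ROLE_PERMISSIONS.get(role, set())

# An operator may view dashboards but cannot commit code.
assert is_allowed("operator", "view_dashboards")
assert not is_allowed("operator", "commit_code")
```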
Cyber security features and customer security requirements should be clearly stated and documented at an early stage of product development.
At the design stage, these features and requirements should be encapsulated in documents describing an overall security architecture. Threat modelling, a structured approach to identifying, quantifying and addressing security risks, should then be enacted to ensure that security is built into the application from the very start. Threat models look at the system from the perspective of an attacker, rather than a defender, thereby enabling developers to counter the threats once they have been revealed by the modelling process.
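As a simple illustration, each identified threat can be recorded as structured data so that it stays traceable to a mitigation throughout development; the fields and example values below are assumptions loosely following the common STRIDE categories, not part of any particular vendor's methodology.

```python
from dataclasses import dataclass

# Illustrative threat-model record; field names and the example are assumptions.
@dataclass
class Threat:
    component: str        # part of the system under analysis
    stride_category: str  # e.g. Spoofing, Tampering, Information Disclosure
    description: str      # how an attacker could exploit the component
    mitigation: str       # control built into the design to counter it

example = Threat(
    component="monitoring gateway",
    stride_category="Tampering",
    description="Attacker injects forged telemetry into the outbound stream",
    mitigation="Transmit telemetry only over authenticated HTTPS sessions",
)
print(example)
```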
When adding remote monitoring to a data centre, it is important to consider the security aspects of the connection method. Some (older) systems require each monitored device to connect directly to the internet. This adds a significant security risk, since each monitored device is exposed to cyber attacks.
A much better and more secure approach is to use a dedicated gateway for the connection. In this instance a continuous stream of informational data is gathered and sent through a secure gateway, which then transmits it outside the network or to the cloud. This data is monitored and analysed by people and by data analytics engines.
In addition to the data stream, there is a feedback loop which runs between the monitoring team and systems directly back to the data centre operator. Users have access to monitoring dashboards via the gateway or, when outside the network, via the platform's cloud using a mobile app or computer, allowing real-time decisions to be made.
From the point of view of a remote monitoring platform, certain specific elements deserve special attention from a security perspective. The gateway that collects data from within a customer's data centre should allow outbound connections only; it has no need to accept inbound connections, so they should be prohibited, removing the gateway as a conduit for attack. The gateway should be the only party to initiate connections to the outside world; nothing outside the network should be able to connect to it first.
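A minimal sketch of this outbound-only pattern follows: the gateway opens no listening sockets and only ever initiates HTTPS connections to the cloud. The endpoint URL, payload shape and polling interval are placeholders, not the actual platform protocol.

```python
import time
import requests  # assumes the 'requests' library is available

CLOUD_ENDPOINT = "https://monitoring.example.com/api/telemetry"  # placeholder URL

def read_device_status() -> dict:
    """Placeholder for polling UPS, PDU and cooling devices on the local network."""
    return {"device": "ups-01", "load_pct": 42.0, "battery_ok": True}

def run_gateway(poll_seconds: int = 60) -> None:
    """Outbound-only loop: the gateway initiates every connection itself
    and never accepts inbound connections from outside the network."""
    while True:
        payload = read_device_status()
        # TLS certificate verification stays enabled (the default), so data
        # in transit cannot be silently intercepted.
        requests.post(CLOUD_ENDPOINT, json=payload, timeout=10)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    run_gateway()
```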
The platform should only communicate over secure protocols, such as HTTPS, to protect the confidentiality of data in transit. As an additional safeguard, all authentication procedures should be multifactor in nature; a simple password should not be sufficient. Sensitive data should be encrypted both in transit and at rest and the platform’s source code should be compliant with security standards such as NIST SP800-53 Rev 4 and DISA STIG 3.9. All code changes should be peer-reviewed before being accepted.
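Encryption of data at rest can be sketched in a few lines; the example below assumes the widely used cryptography library's Fernet interface as a stand-in for whatever mechanism a given platform actually employs.

```python
from cryptography.fernet import Fernet  # assumes the 'cryptography' package is installed

# In practice the key would live in a hardware module or secrets manager,
# never alongside the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b'{"site": "dc-london-01", "alarm": "UPS on battery"}'
stored = cipher.encrypt(record)      # what is written to disk or the database
recovered = cipher.decrypt(stored)   # only possible with access to the key

assert recovered == record
```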
The development stage that follows this detailed design process translates the design into code in accordance with best practices and coding standards. Once this is complete, verification can take place by testing the coded product against the threats anticipated in the threat models, to ensure that the platform is resilient.
Recommended testing practices during the verification stage include static code analysis, penetration testing and continuous security scans. Static code analysis is a means of identifying weaknesses in the source code itself prior to deployment. All code should be scanned prior to each build to eliminate coding weaknesses.
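As a heavily simplified illustration of what static analysis looks for, the sketch below uses Python's standard ast module to flag hard-coded credential assignments in source code; real scanners apply far broader rule sets, and the single rule here is an assumption made for the example.

```python
import ast

SUSPECT_NAMES = {"password", "passwd", "secret", "api_key"}  # illustrative rule only

def find_hardcoded_credentials(source: str) -> list:
    """Return line numbers where a suspicious name is assigned a string literal."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Assign)
                and isinstance(node.value, ast.Constant)
                and isinstance(node.value.value, str)):
            for target in node.targets:
                if isinstance(target, ast.Name) and target.id.lower() in SUSPECT_NAMES:
                    findings.append(node.lineno)
    return findings

sample = 'password = "hunter2"\ntimeout = 30\n'
print(find_hardcoded_credentials(sample))  # -> [1]
```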
Penetration testing simulates the typical methods of attack that malicious intruders might adopt. Testing can be done from the perspective of an external, or Black Box, attacker or from that of an insider or White Box attacker. Test teams should be separate and independent from the development team and specially trained in penetration testing.
Continuous security scans should be performed after a product has been deployed to test for new vulnerabilities. This should be done using scanning tools that look for publicly known security vulnerabilities.
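A simplified sketch of such a scan is shown below: the versions of deployed components are checked against a list of publicly known vulnerable versions. The component names, versions and advisory labels are invented for the example.

```python
# Illustrative vulnerability check; component names, versions and advisory
# labels below are made up for the example.

KNOWN_VULNERABLE = {
    ("openssl", "1.0.1f"): "Heartbleed-class advisory",
    ("web-ui", "2.3.0"): "fictional advisory XYZ-2016-001",
}

deployed = {"openssl": "1.0.1f", "web-ui": "2.4.1", "gateway-agent": "5.0.2"}

def scan(components: dict) -> list:
    """Return (component, version, advisory) tuples for known-vulnerable versions."""
    return [
        (name, version, KNOWN_VULNERABLE[(name, version)])
        for name, version in components.items()
        if (name, version) in KNOWN_VULNERABLE
    ]

for name, version, advisory in scan(deployed):
    print(f"{name} {version}: {advisory}")
```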
The release stage requires that security documentation be developed describing how to install, maintain, manage and decommission the solutions that have been developed.
Deployment requires the project development team to be available to help, train and advise service technicians on how best to install and configure security features.
Once deployed, an emergency response team should be put in place by the vendor to manage vulnerabilities and support customers in the event of an incident. Ideally, this should be a DevOps team composed, at least in part, of the people who developed the system.
The DevOps team should have three basic functions: to detect security weaknesses by continuously scanning for vulnerabilities or anomalies; to react to threats on a 24×7 basis; and to provide remedies for any frailties discovered, including patches for software vulnerabilities and new countermeasures in response to emerging threats.
DevOps teams have to focus on two key metrics: Mean Time to Detect and Mean Time to Recover. These efforts should cover network security, ensuring that remote attacks arriving over the network are neutralised, and physical security, which concentrates on preventing unauthorised access to computers within the premises itself. To this end, all developers and operators should be required to secure their laptops with disk encryption, use a local firewall, set strong passwords and enable screen lockout after a short timeout period.
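A short worked example of these two metrics, computed from assumed incident timestamps, is given below.

```python
from datetime import datetime
from statistics import mean

# Illustrative incident log: (intrusion began, detected, service recovered).
incidents = [
    (datetime(2016, 9, 1, 2, 0), datetime(2016, 9, 1, 2, 30), datetime(2016, 9, 1, 4, 0)),
    (datetime(2016, 9, 14, 11, 0), datetime(2016, 9, 14, 11, 10), datetime(2016, 9, 14, 12, 0)),
]

mttd = mean((detected - started).total_seconds() / 60 for started, detected, _ in incidents)
mttr = mean((recovered - detected).total_seconds() / 60 for _, detected, recovered in incidents)

print(f"Mean Time to Detect:  {mttd:.0f} minutes")   # -> 20 minutes
print(f"Mean Time to Recover: {mttr:.0f} minutes")   # -> 70 minutes
```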
Data centre operators making the choice between remote monitoring solutions should work in partnership with developers to ensure that the benefits of continuous systems monitoring are not compromised by security flaws.
The consequences of choosing a solution without sufficient security can be severe; the development and deployment of specific security processes at the design stage are therefore paramount to ensuring that business-critical infrastructure remains safe from cyber attack.