IT has always liked things that are modular: modularity plays to the engineering geek inside most IT professionals.
When Sun Microsystems announced Project Blackbox back in 2006, the company started a race from modular systems to modular data centres – and a market that Companies and Markets Research estimates will be worth $40.41bn by 2018. Before Sun Microsystems (now Oracle) unveiled Project Blackbox – later productised as the Sun Modular Datacenter – a number of companies, including Google, had played around with the idea of modularising the data centre. However, it was Sun that took the headlines when it launched the Sun Modular Datacenter in 2008. Sun took a standard 20-foot shipping container and built in all the racks, cabling and equipment required for it to become a data centre. All it needed was external power, chiller units and a network link.
So what is a modular data centre?
The answer depends on whom you talk to. There is no standardised definition, but most in the industry see it as a container-based solution, although the length and even the width of the container vary by manufacturer and product range. What makes it modular is that the containers can be bolted together and even stacked on top of each other to create a larger facility. So what is driving this change? It is a complex question with many answers: some are about the technology, others about cost.
There is no more powerful incentive to adopt an approach than cost. When Sun launched its modular data centre, it talked of being able to buy a data centre for less than one per cent of the price of a traditional facility. There were many reasons for this: often no planning permission, no building costs or need to acquire an existing facility, no decommissioning or reconfiguration costs as business demand changes, and no business rates on the use of a building.
All of those reasons are as true today as they were then, but perhaps the biggest cost factor is speed. It can take months or even years to build or refurbish a traditional facility, whereas a modular data centre can be provisioned within weeks of the order being placed. For service providers and data centre owners, modular is a huge opportunity. Google and Facebook today have hundreds of containers stacked together to create their data centres; when they need additional capacity, they simply drop in more containers.
Change is expensive
One of the real challenges for the data centre is flexibility. For decades, the data centre didn't change much. Even as we moved from mainframes to minicomputers and into early rack-based solutions, it remained reasonably static. With the explosion of commodity computing and blade servers, the data centre became a place of constant change. For older facilities, that means redesigning, overhauling and updating in order to deliver the power and cooling that new technologies require. This is expensive: an overhaul of a data centre can cost millions of pounds and take months, during which no money is coming in – and in a competitive market that means the potential loss of customers. Modular data centres remove the need to refurbish a facility and risk losing customers. A refurbishment is simply a replacement module: customer systems are moved from one module to another, while the underperforming module is replaced or updated as required.
Modular data centres are not just about providing extra capacity or cover during a refurbishment. With dense computing, data centres have shrunk in size for many companies, and downsizing a data centre is just as expensive as adding capacity. Power and cooling systems must be maintained even if they are not being used, and data centre halls that are not fully occupied need to be partitioned to reduce waste – which means refurbishment costs. Using modular data centres, companies can quickly downsize or move from older, larger systems to smaller, more efficient ones.
One of the real benefits of modular data centres has been the ability to deploy in new areas. The football World Cup, the US Super Bowl and the summer and winter Olympics – as well as other global events – require data centre facilities, especially for the media.
These events process large amounts of data, film and audio, and run vast internet sites to provide public information. Disaster recovery operations, oil rigs, intelligence headquarters in war zones and even major political conferences have also bought into modular data centres. They all need to manipulate very large volumes of data, and modular data centres make this possible – and provide an opportunity to do so securely. In 2010, the US government took a long, hard look at how it responded to major disasters. One of the failings identified was the inability of government departments to respond, caused by a lack of data and IT facilities. The result was a document setting out how departments should evaluate and then commission modular facilities.
Modular data centres are engineered to customer requirements. Because they can be accessed from all sides, the components are integrated to create the optimal configuration for power and cooling. Over time, as components change, some of that initial integration may be lost, but the losses will be offset by the power efficiencies of new generations of IT equipment.
The next generation of switches, running at 40Gbps, 100Gbps and faster, require far more power. Storage systems – even those with hundreds of solid state drives – also need a large amount of power. Modular data centres are capable of supporting racks of blade systems, petabytes of storage and very high bandwidth – something for which traditional facilities would need 10 times the floor space. Modular data centres such as the Cannon Technologies T4 MDC drive down costs, can be deployed wherever there is a need for compute power and are highly efficient.
Article by Matt Goulding, Managing Director, Cannon Technologies