Most colleges and universities have some sort of data centre. Ever-increasing data storage and processing needs mean that the energy needed to run the computers and to deal with the heat they generate is constantly increasing.
“Data centres use around 8% of electricity consumption in UK universities"
Not everyone can start with a blank sheet and construct a purpose-built, state-of-the-art data centre. That means that for most people, retrofit or adaptation of a less-than-optimal space is the job they have to tackle. This is not always straightforward as less-than-ideal locations (basements, roof spaces) or internal constraints (pillars, load bearing walls, headroom) can make finding a good engineering solution difficult.
However, there is a lot of help available which should enable you to design a solution that works in your particular environment.
EU code of conduct for data centres
The EU code of conduct for data centres provides a clear, practical framework to review your existing data centre and make changes to improve your energy efficiency. This guide is based on the code of conduct which sets out all the areas where you may be able to improve efficiency within your data centres and serves as a management framework that you can use when planning changes.
The code of conduct is applicable for all types of organisations and sizes of data centres. In the UK three universities – Sheffield, Hertfordshire and Imperial College – are signed up as ‘participants to the code’ and have realised significant improvements in efficiency and savings in energy.
“Our refurbished data centre has squeezed the maximum amount of capacity and power out of limited space, whilst also lowering total operational cost … The EU code of conduct highlighted energy efficiency opportunities, and now provides a widely recognised ‘badge’ that tells our stakeholders that we’ve done the right thing.”
Steve Bowes-Phipps, data centres manager, University of Hertfordshire
Before you plan changes to your data centre you first need to identify the areas of inefficiency and the options for improving them.
Measuring, monitoring and benchmarking current energy use identifies improvement opportunities and makes the costs and benefits of investment more visible to key decision-makers. It will also allow you to demonstrate improvements once they are made.
The code of conduct suggests:
- Design and implement a monitoring and reporting strategy
Metering the energy use and temperatures of different areas and items of equipment is essential for efficient maintenance and for planning further change. Readings can be manual or automated, and regular reports can be generated through an automated energy and environmental reporting console to reduce workload. Reporting on the utilisation levels of IT equipment – servers, network, storage and so on – is also useful when planning resilience and provisioning levels.
- Develop a data management strategy
Review how long data needs to be stored for, the level of protection it needs, and how much should be retained
Cardiff University have created a useful web-based modelling tool to allow IT managers, engineers and decision-makers to calculate the energy and financial savings they could make from introducing tiered storage technology.
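Cardiff's tool is not reproduced here, but the kind of calculation it performs can be sketched in a few lines. In the sketch below the capacities, watts-per-terabyte figures and electricity tariff are illustrative assumptions, not measured values.

```python
# Rough tiered-storage saving estimate (illustrative figures only).
# Assumption: moving rarely accessed data from always-on primary disk
# to a lower-power archive tier reduces the watts needed per terabyte.

HOURS_PER_YEAR = 8760
TARIFF_GBP_PER_KWH = 0.15          # assumed electricity price

def annual_cost(capacity_tb: float, watts_per_tb: float) -> float:
    """Annual electricity cost of keeping capacity_tb online at watts_per_tb."""
    kwh = capacity_tb * watts_per_tb * HOURS_PER_YEAR / 1000
    return kwh * TARIFF_GBP_PER_KWH

primary_tb = 200     # all data held on primary storage today
moved_tb = 120       # cold data that could move to a low-power archive tier

before = annual_cost(primary_tb, watts_per_tb=10)
after = (annual_cost(primary_tb - moved_tb, watts_per_tb=10)
         + annual_cost(moved_tb, watts_per_tb=2))

print(f"Estimated saving: £{before - after:,.0f} per year")
```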
These two tools may also be useful at this stage:
- The SusteIT ICT carbon footprint tool is an easy-to-use Excel tool to help estimate the energy consumption, costs and carbon emissions associated with an institution's ICT estate. It also contains a worksheet that will help estimate power usage effectiveness (PUE) in server rooms and data centres (a minimal worked example of the PUE calculation follows this list)
- The Green Grid data centre maturity model touches upon every aspect of the data centre, including power, cooling, computing, storage, and networking. You can use this assessment tool to benchmark current performance, determine levels of maturity, and identify the changes necessary to achieve greater energy efficiency and sustainability
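PUE, which the SusteIT worksheet helps you estimate, is simply the ratio of total facility energy to the energy delivered to IT equipment: a figure close to 1.0 means very little energy is being lost to cooling, power distribution and other overheads. A minimal sketch of the calculation, using invented meter readings, is shown below.

```python
# PUE = total facility energy / IT equipment energy (same metering period).
# The readings below are invented for illustration; in practice they would
# come from the main estates meter and from sub-meters on the IT load.

facility_kwh = 42_000   # total energy into the data centre over one month
it_kwh = 28_000         # energy delivered to servers, storage and network

pue = facility_kwh / it_kwh
overhead_kwh = facility_kwh - it_kwh   # cooling, UPS losses, lighting, etc.

print(f"PUE = {pue:.2f} ({overhead_kwh} kWh of overhead this month)")
```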
University of the Highlands and Islands - energy inefficiencies
New cost-effective meters and sensors for the servers and computer room air conditioning (CRAC) at the University of the Highlands and Islands showed that the CRAC systems were far less efficient than expected. The university is now looking at replacing the entire CRAC system in its main data centre, based on the business case evidence from energy metering.
A key precondition for a successful improvement project is to get all the affected disciplines – IT, the mechanical and electrical specialists, software, estates and facilities – actively involved in planning change.
Involving colleagues in different departments is vital. Ownership and responsibility, and sometimes technical interface issues, can block or slow change.
For example, main electricity meters are typically managed by estates and data is collected manually or into their building management system or energy management system software. Getting one system to talk to another takes shared ownership, responsibility and action.
If you need it, get expert help when planning and implementing upgrades. For example, energy monitoring systems should be installed to the standards expected by qualified electrical engineers. Cooling equipment for a data centre is not the same as standard air conditioning.
Getting agreement and a budget for a plan of works requires everyone to be on board – from those deciding wider strategy and those responsible for allocating budgets, planning projects and managing change, to the IT, estates and facilities staff who may be affected by the work or carrying out parts of it.
Establish an approval board containing representatives from all disciplines. Get the approval of this group for any significant decision to ensure that the impacts of the decision have been properly understood and an effective solution reached.
Our detailed guide to managing change in institutions, although not directly aimed at data centre projects, contains general principles that may be useful in this phase. Our impact calculator can help you evaluate the efficiency of your change initiative, whilst another of our detailed guides focuses on using PRINCE2 for project management.
Upgrade or build a new data centre?
Changing the building that houses your data centre and/or starting with new equipment will not be an option for many. However, many of the recommendations that relate to new builds in the EU code of conduct are also applicable to refurbishments:
- Consider the physical layout, structure and orientation, its location, and potential sources of cooling water
- Consider reusing waste heat from the data centre for adjacent office heating etc
- Review power equipment - select high-efficiency versions of UPS (uninterruptible power supplies), distribution units etc and use efficient operating modes (a rough sketch of the potential saving follows this list)
- Do not overprovision - review resilience levels and think about accommodating variable loads
- Hardware - look at:
- energy efficiency
- working operating temperatures and humidity ranges
- power density
- airflow direction
- power management features
- reporting and
- external control features
- Look at changes to plant and consider free/ambient cooling, recycling waste heat to power cooling plant, or investing in high-efficiency plant
- New IT service provision and software: for service architecture consider deploying using grid/virtualisation techniques and move to more efficient software
- Review other sources of energy use in the building – eg consider low-energy lighting
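To show why the power equipment item matters, here is a rough sketch of the annual cost of UPS losses at two different efficiencies for a steady IT load. The load, efficiencies and tariff are all assumed figures to be replaced with your own.

```python
# Rough annual cost of UPS losses at two efficiencies (illustrative figures).

IT_LOAD_KW = 100            # steady IT load carried by the UPS (assumed)
HOURS_PER_YEAR = 8760
TARIFF_GBP_PER_KWH = 0.15   # assumed electricity price

def annual_loss_cost(efficiency: float) -> float:
    """Cost of the energy lost in the UPS itself over a year."""
    input_kw = IT_LOAD_KW / efficiency
    loss_kwh = (input_kw - IT_LOAD_KW) * HOURS_PER_YEAR
    return loss_kwh * TARIFF_GBP_PER_KWH

old, new = annual_loss_cost(0.90), annual_loss_cost(0.97)
print(f"Losses at 90% efficiency: £{old:,.0f} per year")
print(f"Losses at 97% efficiency: £{new:,.0f} per year")
print(f"Potential saving:         £{old - new:,.0f} per year")
```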
Award-winning shared service data centre
The University of Aberdeen had a typical ‘legacy’ data centre - a jumble of racks and equipment, which had grown by accretion over the years. The PUE was around 2.6. At the same time as the university was planning its upgrade, it became apparent that there was an opportunity to create an efficient data centre that would also be used by Robert Gordon University and Aberdeen College.
The decision was to go with direct evaporative cooling of filtered air with re-circulation of warm air when the outside temperature falls (quite often in Aberdeen). The PUE is now 1.07 and the whole transformation was achieved with only 30 minutes of unplanned downtime to one service.
This impressive project has been recognised with a Green Gown Award in 2013 and also the British Computer Society IT Industry award for the Data Centre project of the year.
Identify changes that will make most difference to the existing set-up
If you are improving your existing facility, it is important to identify the changes that will make the most difference to the existing set-up.
- Air conditioners:
- Use variable speed fans, review optimisation and control systems
- Look at airflow management and design (containment, blanking plates, raised floors, equipment segregation).
- The cooling system should be flexible enough to adapt to variations in load.
- Avoid overcooling; review IT equipment intake air temperature and humidity levels as newer equipment may have higher tolerances; review chilled water set points.
- Manage existing equipment and services better to make them more efficient - audit, consolidate, virtualise, decommission, and turn off idle equipment
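As a simple illustration of the audit and decommission step, the sketch below flags servers whose average utilisation is low enough to make them candidates for consolidation or switch-off. The threshold and the utilisation figures are assumptions; real data would come from your own monitoring.

```python
# Flag lightly used servers as candidates for consolidation or decommissioning.
# Utilisation figures are invented; a real audit would pull them from your
# monitoring system over a representative period (e.g. a full term).

avg_cpu_utilisation = {        # percent, averaged over the last 90 days
    "web-01": 38.0,
    "web-02": 2.5,
    "legacy-app": 0.4,
    "hpc-head": 61.0,
}

IDLE_THRESHOLD = 5.0           # assumed cut-off for "mostly idle"

candidates = [name for name, util in avg_cpu_utilisation.items()
              if util < IDLE_THRESHOLD]

print("Review for virtualisation, consolidation or switch-off:", candidates)
```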
Recent guidelines from ASHRAE (American Society of Heating, Refrigerating and Air Conditioning Engineers) recommend widening the temperature and humidity level tolerances in data centres. Even the most 'mission critical' data centres can range in temperature from 15°C to 32°C and in relative humidity from 20% to 80%.
This means that the energy efficiency of most data centres can be immediately improved by increasing the temperature and decreasing the minimum relative humidity. Newer equipment can easily cope with this and it would mean that air conditioning would only be needed, on average, for around 12 days a year in UK data centres.
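That kind of figure comes from comparing a raised supply air set point with outdoor temperatures over a year. A minimal sketch of the estimate is below; the set point, the approach margin and the hourly weather values are all assumptions you would replace with data for your own site.

```python
# Estimate how many hours a year mechanical cooling is actually needed,
# assuming fresh-air ('free') cooling works whenever the outside air is
# a few degrees below the supply air set point. Figures are illustrative.

SUPPLY_SETPOINT_C = 24.0    # raised supply air temperature (assumed)
APPROACH_C = 3.0            # margin needed between outdoor air and set point

def mechanical_cooling_hours(hourly_outdoor_temps_c):
    """Hours in the period when outdoor air is too warm for free cooling."""
    limit = SUPPLY_SETPOINT_C - APPROACH_C
    return sum(1 for t in hourly_outdoor_temps_c if t > limit)

# hourly_temps would normally be a full year of weather data for your site
hourly_temps = [12.0, 15.5, 19.0, 22.5, 24.0, 17.0]   # placeholder values

hours = mechanical_cooling_hours(hourly_temps)
print(f"Mechanical cooling needed for {hours} of {len(hourly_temps)} hours")
```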
Outsourcing to the cloud is becoming a serious alternative to institutions running their own data centres and buying and managing their own equipment.
The cloud option can appear expensive, as the fees are visible up front, but in-house provision has to be specified to cope with peak load, whereas cloud fees are typically based on actual usage rather than on peak capacity.
Cloud costs need to be compared with the real costs of providing IT services in house, something that is often hard to measure accurately, as the associated costs of running a facility (eg maintenance, cleaning, lighting etc) are often left out of IT budgets. Once all these things are taken into account, you may find that cloud services are considerably more cost-effective.
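A like-for-like comparison only works if the in-house figure includes everything the cloud fee already covers. The sketch below makes that point with invented numbers; every figure is an assumption to be replaced with your own costs.

```python
# Compare cloud fees against the *full* cost of in-house provision.
# All figures are illustrative; the point is which items get counted.

in_house = {
    "hardware depreciation": 60_000,   # annualised capital cost
    "energy": 45_000,
    "maintenance and support": 20_000,
    "staff time": 35_000,
    "facility share (cleaning, lighting, space)": 15_000,
}

cloud_annual_fee = 140_000             # usage-based, assumed figure

in_house_total = sum(in_house.values())
print(f"In-house total: £{in_house_total:,}")
print(f"Cloud fee:      £{cloud_annual_fee:,}")
print("Cloud cheaper" if cloud_annual_fee < in_house_total else "In-house cheaper")
```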
While cloud computing has the potential to transform the availability and capability of computing across many areas of activity, that very increase in availability may mean that the overall environmental impact of the move to cloud is neutral or even negative. Cloud providers have an obvious interest in keeping their costs (which include energy) as low as possible, but easier access to computing and the movement of ever-larger amounts of data across networks can both drive up overall energy use.
Newcastle University already has £20m of research projects supported by the cloud, and was attracted to the cloud for data services because it meant the university did not have to buy its own computer hardware. “Cloud computing has the potential to revolutionise research by offering vast computing resources on-demand.”
Paul Watson, Professor of Computing Science, Newcastle University
Janet offers help in the form of impartial advice and brokerage for those wanting to put data centre services in the cloud. Janet has useful links with the government G-Cloud programme, and has also recently entered into a partnership with Microsoft, establishing a private link between the Janet Network and the Microsoft Windows Azure datacentre in Dublin, allowing a far greater range of data-heavy projects and tasks to be supported.
Metering data can be used to demonstrate payback on investment, inform future changes and show where new problems may be emerging or where equipment could be reprogrammed to improve performance.
30% of the University of Sheffield’s £1 million pa IT-related energy bill is associated with its two data centres. The university has used multiple metering sources to drive ad hoc improvement, including:
- Main estates meters into the data centre buildings – not real time, but important for PUE (power usage effectiveness) and cross-checking other sources
- The SNMP capability of the UPS is used to monitor phase balance and other performance issues (see the sketch after this list)
- Continuous monitoring of those servers that can report their own consumption via SNMP
- Non-invasive spot measurements with Unite CL-amp clamp meters on cabinets and other servers are used to create a map of consumption and to inform future purchases
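As an illustration of the phase-balance check mentioned above, the sketch below compares per-phase currents on a three-phase UPS feed. The readings and the imbalance threshold are assumptions; in practice the values would be polled from the UPS over SNMP or read from its management console.

```python
# Simple three-phase balance check for a UPS feed.
# Phase currents are invented; real values would come from the UPS
# (e.g. via SNMP polling or its management interface).

phase_currents_amps = {"L1": 42.0, "L2": 35.5, "L3": 48.2}

IMBALANCE_THRESHOLD = 0.10   # assumed: flag phases deviating >10% from the mean

mean_current = sum(phase_currents_amps.values()) / len(phase_currents_amps)

for phase, amps in phase_currents_amps.items():
    deviation = abs(amps - mean_current) / mean_current
    flag = "  <-- rebalance" if deviation > IMBALANCE_THRESHOLD else ""
    print(f"{phase}: {amps:5.1f} A ({deviation:.0%} from mean){flag}")
```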
Reducing PUE by half
The University of London Computer Centre (ULCC) run a mix of in-house, co-location and managed services for clients in the higher and further education sectors among others (including some of Jisc's own infrastructure). Given the heterogeneous nature of the equipment in their racks and the location of the data centre (in the basement), many of the options available in other places for reducing energy use are not possible for ULCC. However, they show how large improvements can be made even without touching the servers.
Colin Love, the ULCC data centre manager, says
"We can't run fresh air cooling or raise set points, but we estimate that swapping out the 8-year-old CRAC [computer room air conditioning] units for modern units with variable speed fans, ultrasonic humidifiers and more efficient compressors will reduce the overall PUE from 2.0 to 1.5. Aisle separation and containment may be possible as we move forward to drive further improvements."
University of Hertfordshire green data centre
The university, as part of a Jisc-funded project, refurbished a six-year-old 75m² data centre to comply with the EU code of conduct for data centres – the first university in Europe to achieve this. The centre was designed to have a PUE of 1.22.
Key energy features of the refurbishment included:
- Equipment with higher operating temperatures and wider humidity bands
- Contained hot aisle to prevent mixing of warmer air exiting the server racks with incoming cold air from very efficient computer room air handling (CRAH) units
- Recovery of heat from the exhaust air for use within the domestic hot water system of the Learning Resources Centre where the data centre is situated
- A proportion of 'free' cooling for 86% of the year
- A very high efficiency stand-alone humidifier
- 99% efficient UPS units
- Remote monitoring of input power and IT load to rack level.
The refurbishment reused existing equipment and facilities where possible, and made use of UK-sourced suppliers as much as possible to cut transportation impacts. It has also pre-installed cabling and pipework to enable future upgrading with additional cabinets, cooling and UPS.
HPC in the sky
The University of Bristol has sited its high-performance computing (HPC) facility as high up as possible, in this case on top of the Physics department, which stands on the highest spot in Bristol. They made use of a space that once held a huge water tank that supplied the university, which meant the floor could cope with the one tonne per square metre load of the HPC kit.
Bristol has chosen to use hot aisle containment with in-row coolers. The heat is then carried in the water to be rejected through heat exchanger units on the roof. Bristol sees HPC as a core part of its offer to attract and retain leading research groups.
Making Imperial College’s data centre EU code of conduct compliant
Imperial College’s main data centre has 2,500 servers set up in two adapted rooms in a 1960s building, and consumes around £0.5 million of energy a year.
Imperial were committed to becoming a participant in the EU code of conduct for data centres. As part of a Jisc-funded project they used thermal imaging and data logging to map current conditions in the data centre. Staff from IT and estates departments modelled potential solutions and made recommendations for implementation of improvements.
Implementing cold aisle containment, raising the allowable temperatures in the rooms, installing plant for free cooling, improving maintenance and cleaning regimes for chillers, and upgrading light fittings and controls will together bring projected savings of around £32,675 pa.
As well as showing the savings that can be made from carefully planned improvements, the project demonstrated the importance of estates and IT departments working closely together.
Real-time measurement dashboard
The University of Hertfordshire's carbon accounting and reporting of baselines for services (CARBS) project is using inexpensive hardware that makes use of internal server system metrics to enable more accurate measurement of power usage within systems and across hardware domains.
Working with state-of-the-art components from a partner company, Concurrent Thinking, the aim is to create a dashboard that provides real-time measurement of the environmental (carbon) cost of delivering the individual services that have been identified.
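The CARBS hardware and dashboard are not reproduced here, but the idea of reading power from internal server metrics can be sketched with the Linux RAPL interface, which exposes a cumulative CPU package energy counter in sysfs. The path and sampling interval below are assumptions: not every server exposes this counter, and it covers the CPU package rather than the whole machine.

```python
# Sketch: estimate CPU package power from the Linux RAPL energy counter.
# Assumes /sys/class/powercap/intel-rapl:0/energy_uj exists (servers with
# the powercap driver); it counts microjoules and wraps periodically.

import time

RAPL_PATH = "/sys/class/powercap/intel-rapl:0/energy_uj"
INTERVAL_S = 5  # sampling interval (assumed)

def read_energy_uj() -> int:
    with open(RAPL_PATH) as f:
        return int(f.read().strip())

start = read_energy_uj()
time.sleep(INTERVAL_S)
end = read_energy_uj()

if end >= start:  # ignore the rare counter wrap for this sketch
    watts = (end - start) / 1_000_000 / INTERVAL_S
    print(f"Average CPU package power over {INTERVAL_S}s: {watts:.1f} W")
```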
This is one in a series of guides around green ICT. You may also find the following of interest: