How to make your IT "Greener"

In general, green IT aims to reduce the environmental impact of using IT. In the realm of IT infrastructure, this typically means reducing electricity usage and carbon dioxide (CO2) emissions.

Datacenters consume up to 1.5% of all the world's electricity and (according to a study by Gartner) IT accounts for approximately 2% of all the world’s CO2 emissions. This is approximately the same amount as all airplanes combined.

Organizations have a number of drivers to aim for green IT, such as publicity ("we are a green company") or genuine care for the environment. But most organizations want green IT for one reason above all: saving money.

The amount of money that can be saved by implementing green IT can be substantial. For instance, the amount spent on electricity during the lifetime of a server can be much higher than the cost of the server itself. The price of electricity rose by 50% between 2007 and 2012, and the electricity bill will probably only go up in the coming years.

It is important to know who pays the electricity bill in an organization. In most cases it is the facilities department, not the IT department. Systems managers and architects often know quite well how much a server costs, but they rarely have a clue about the cost of electricity. Do you know how much one kWh costs in your organization?

There are basically three ways to make the IT in your organization greener:

  • Use better equipment
  • Enhance the efficiency of the datacenter
  • Use fewer resources

These are explained in the following sections.

Use better equipment

PCs

An average desktop PC with an old-fashioned 17” CRT monitor consumes about 200 watts of electricity. In one year, with 1,900 working hours, this amounts to 380 kWh, which costs approximately $38 per year per PC (given an average price of $0.10/kWh).

So, an organization with a thousand of these PCs spends $38,000 per year on electricity alone.

When the CRT monitors are replaced by LCD displays (which consume about 50 watts less), the cost goes down by approximately $9,500 each year. Most laptops use about 15 to 60 watts, far less than desktops, which can lead to even larger cost reductions.
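
As a sanity check, this calculation is easy to script. Below is a minimal sketch in Python, using the figures from the example above (1,900 working hours per year, $0.10 per kWh); adjust the wattages and price to your own situation:

    # Yearly electricity cost per device, using the article's example figures.
    WORKING_HOURS_PER_YEAR = 1900
    PRICE_PER_KWH = 0.10  # dollars

    def yearly_cost(watts):
        """Yearly electricity cost in dollars for one device."""
        kwh_per_year = watts * WORKING_HOURS_PER_YEAR / 1000
        return kwh_per_year * PRICE_PER_KWH

    fleet_size = 1000
    crt_pc = yearly_cost(200)  # desktop PC with a 17" CRT monitor
    lcd_pc = yearly_cost(150)  # the same PC with an LCD display (50 W less)

    print(f"CRT fleet: ${crt_pc * fleet_size:,.0f} per year")             # $38,000
    print(f"LCD fleet: ${lcd_pc * fleet_size:,.0f} per year")             # $28,500
    print(f"Savings:   ${(crt_pc - lcd_pc) * fleet_size:,.0f} per year")  # $9,500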

This matters at scale: there are approximately 50 million servers in the world, but more than one billion PCs. It therefore often makes more sense to get more power-efficient PCs than to optimize the datacenter.

Datacenters

Significant savings can be made in the datacenter as well (a cost sketch follows the list below), for instance by:

  • Using blade servers – Because of the shared use of power supplies and other parts, blade servers use approximately 30% less power than equivalent rack-mounted servers.
  • Using flash disks instead of rotating disks – Because flash disks have no moving parts, they use much less power than rotating disks. And when flash disks are idle, they use no electricity at all, as opposed to rotating disks, which must spin 24/7.
  • Upgrading old servers – Every year manufacturers offer more power-efficient equipment. It is good practice to upgrade servers in the datacenter every few years to benefit from this.
  • Using low-power hardware – Where possible, avoid the most power-hungry servers. For example, an Intel Atom CPU uses 30 watts, while a high-end CPU uses 200 watts. Most CPU manufacturers aim to deliver the same CPU power for less electricity usage each year.
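
To get a feeling for what, for example, the blade server figure means in money, the same kind of calculation applies; note that servers typically run 24/7 rather than only during working hours. The sketch below uses the 30% saving from the list above, but the 400 watts per rack-mounted server and the pool size of 100 are assumptions for illustration only:

    # Rough annual electricity cost of a server pool running 24/7.
    # The 400 W per rack-mounted server and the pool size of 100 are
    # assumed figures; the 30% blade saving comes from the list above.
    HOURS_PER_YEAR = 24 * 365
    PRICE_PER_KWH = 0.10  # dollars

    def pool_cost(servers, watts_each):
        """Yearly electricity cost in dollars for a pool of servers."""
        return servers * watts_each * HOURS_PER_YEAR / 1000 * PRICE_PER_KWH

    rack_cost = pool_cost(100, 400)         # 100 rack-mounted servers
    blade_cost = pool_cost(100, 400 * 0.7)  # same workload on blades, 30% less power

    print(f"Rack servers:  ${rack_cost:,.0f} per year")   # $35,040
    print(f"Blade servers: ${blade_cost:,.0f} per year")  # $24,528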

Enhance the efficiency of the datacenter

Apart from the power used by the IT infrastructure components in the datacenter, the datacenter itself uses power as well. Most of this power is used by the cooling system, but power is also needed for lighting, heating of the operator rooms, etc.

The most commonly used metric for datacenter power efficiency is Power Usage Effectiveness (PUE). It was introduced by the Green Grid in a February 2007 white paper called "Green Grid Metrics: Describing Data Center Power Efficiency".

The PUE is calculated by dividing the total amount of power used by the datacenter by the power used to run the IT equipment in it:

  PUE = total facility power / IT equipment power

PUE is therefore expressed as a ratio, with efficiency improving as the value decreases towards 1.

For example, running a datacenter with a PUE of 2 means that for each watt of power used by the IT equipment an extra watt is used by the rest of the datacenter. This means that if this datacenter has 1 MW of IT components installed, another MW is “wasted” by the datacenter (mainly for cooling, which does not directly lead to better or more customer service).

In this example, with an average electricity cost of $0.10 per kWh, every year 1000 kW * 24 hours * 365 days * $0.10 = $876,000 is spent on running the datacenter alone (not including the actual IT equipment)!

In this example, by optimizing the datacenter's power usage to a PUE of 1.5, almost half a million dollars per year can be saved.
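
The same numbers can be put in a small script. A minimal sketch, using the figures from the example (1 MW of IT load, $0.10 per kWh):

    # Annual cost of datacenter overhead (cooling, lighting, etc.) for a
    # given PUE, using the example figures: 1 MW of IT load, $0.10 per kWh.
    HOURS_PER_YEAR = 24 * 365
    PRICE_PER_KWH = 0.10  # dollars

    def overhead_cost(it_load_kw, pue):
        """Yearly cost of the facility power used on top of the IT load."""
        overhead_kw = it_load_kw * (pue - 1)
        return overhead_kw * HOURS_PER_YEAR * PRICE_PER_KWH

    it_load = 1000  # kW (1 MW of IT equipment)
    print(f"PUE 2.0: ${overhead_cost(it_load, 2.0):,.0f} per year")  # $876,000
    print(f"PUE 1.5: ${overhead_cost(it_load, 1.5):,.0f} per year")  # $438,000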

Over the years the trend has been to decrease the PUE from more than 2 a few years ago to a typical value of 1.8 to 1.5 today. Google claims its datacenters reach a PUE of 1.22, and a Facebook datacenter built in 2011 even claims a PUE of only 1.07, as a result of cooling optimizations and large-scale operations.

The best way to lower the PUE of a datacenter is to implement efficient cooling systems.

It is important to understand that PUE only measures datacenter power efficiency, not, for instance, server efficiency or the efficiency of the power supplies used, let alone the amount of useful work done by the IT equipment!

Another thing to remember is that a high PUE is not necessarily bad. If a datacenter uses its IT infrastructure very efficiently, for instance by virtualizing all servers onto a few large physical systems, much energy is saved compared to running many individual servers. The PUE, however, will be relatively high, as much cooling is still needed for the fully loaded physical machines.

Use fewer resources

A very simple way to be greener is to use fewer resources in the first place. Some examples are given in this section.

Print only when needed. And if printing is needed, use two-sided printing.

Switch off unused equipment. Use screensavers with a black screen to automatically switch off monitors. Switch off PCs during the night.

I once worked at a large client with 20,000 PCs. These PCs were not only old (and hence very power-hungry), but they also took a long time to start up. Having to wait fifteen to twenty minutes for a PC to fully start was no exception. The result was that people tended to keep their PCs on all the time. While they worked eight hours a day, their PCs ran twenty-four hours a day. Most people even left them on during the weekends. After replacing all PCs with more modern ones, the power bill went down considerably and the world is now a little bit greener.

Using virtualization, the number of physical machines can be reduced. And when servers are lightly used (for instance during the evenings), the virtualization software can consolidate all running virtual machines on fewer physical machines and automatically switch off the rest.
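
How such consolidation works can be illustrated with a simple bin-packing heuristic. The sketch below uses first-fit-decreasing placement; this is only an illustration of the idea, not how any particular virtualization product actually implements it, and the VM loads and host capacity are made-up values:

    # First-fit-decreasing placement of virtual machines onto as few
    # hosts as possible; hosts that end up empty can be switched off.
    # An illustration only, not any product's actual algorithm.

    def consolidate(vm_loads, host_capacity):
        """Pack VM loads onto hosts, returning one list of loads per host."""
        hosts = []
        for load in sorted(vm_loads, reverse=True):  # place biggest VMs first
            for host in hosts:
                if sum(host) + load <= host_capacity:
                    host.append(load)  # fits on an already-running host
                    break
            else:
                hosts.append([load])   # no room anywhere: use one more host
        return hosts

    # Example: evening load of 8 VMs, each load a fraction of one host.
    vms = [0.30, 0.25, 0.10, 0.40, 0.20, 0.15, 0.35, 0.05]
    placement = consolidate(vms, host_capacity=1.0)
    print(f"{len(placement)} hosts needed instead of 8 dedicated machines")
    for i, host in enumerate(placement, start=1):
        print(f"  host {i}: {host} (load {sum(host):.2f})")

With these example values, the eight virtual machines fit on two hosts; the other six physical machines could be powered down until the load rises again.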

Rationalization of the server pool can also help. I have seen occasions where servers were running, but no one knew what they were actually used for. And since everyone was afraid to switch them off, they kept running for years, possibly doing nothing at all.


