Google took the wraps off its secretive data centre installations last week at its Efficiency Data Center Summit, revealing details of its custom-designed DIY server, its PUE (power usage effectiveness) results for six facilities, a water-cooled facility in Belgium and a host of other details about its global network of data centres.
The entire day’s sessions have been put on YouTube. You can access them here.
According to the presentations, Google has managed to significantly improve the efficiency of its data centres by creating an architecture that optimizes each component of the installation, including the servers, the power supplies and the cooling systems.
Google presented PUE data for six of its data centres, all of which achieved a PUE of less than 1.3. PUE is the ratio of the total power entering a data centre facility to the power actually used to run the IT equipment. According to past US EPA studies, older data centres have PUEs of around 2.0: for each watt that powers the electronics inside a data centre, another watt is needed for cooling and other supporting infrastructure, such as the UPS.
The best-performing Google data centre had a PUE of 1.16, meaning the supporting infrastructure requires only 0.16 watts of power for every watt used to power the servers.
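The PUE arithmetic is simple to check. Here is a minimal sketch (the function name and structure are mine, not Google's) using the figures quoted above:

```python
def pue(total_facility_watts: float, it_equipment_watts: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT equipment power."""
    return total_facility_watts / it_equipment_watts

# Google's best facility: 0.16 W of overhead per watt of IT load
print(pue(total_facility_watts=1.16, it_equipment_watts=1.0))  # 1.16

# An older facility, per the EPA figure: a full extra watt per IT watt
print(pue(total_facility_watts=2.0, it_equipment_watts=1.0))   # 2.0
```

A lower PUE means less of the incoming power is lost to overhead; 1.0 would be a facility where every watt reaches the servers.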
At the core of Google’s efficiency strategy is its server, designed by the company’s Server Platform Architect, Ben Jai, who engineered the units to achieve a UPS efficiency of 99.9% by implementing a distributed backup power solution. Instead of relying on large site-wide UPS systems, Google moved the UPS function onto each server by fitting it with its own battery. This eliminates waste, since UPS capacity is always matched to the number of servers, and avoids the unnecessary power conversions performed by typical UPS solutions. The Google server also features a 12-volt motherboard, which further minimizes power conversion losses in the components.
Check out this video interview with Ben Jai from DataCenterPulse.
According to the speakers at the event, Google went as far as calculating the efficiency of transmitting electricity when designing the servers, choosing the 12-volt motherboard over a conventional 5-volt design because 12-volt distribution is more efficient: for the same power, a higher voltage means a lower current, and resistive losses scale with the square of the current.
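To see why the higher voltage wins, consider the resistive (I²R) losses on the distribution path. The figures below (a 60 W load over an assumed 0.01-ohm path) are illustrative, not from the presentations:

```python
def resistive_loss_watts(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """I^2 * R loss for delivering power_w at voltage_v over a path with resistance_ohm."""
    current = power_w / voltage_v
    return current ** 2 * resistance_ohm

# Same 60 W load, same path resistance, different supply voltages:
loss_5v = resistive_loss_watts(60, 5, 0.01)    # 12 A flowing -> about 1.44 W lost
loss_12v = resistive_loss_watts(60, 12, 0.01)  # 5 A flowing -> about 0.25 W lost
```

Nearly six times less power is dissipated at 12 V, which compounds across thousands of servers.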
Google also highlighted the importance of water as a resource inside the data centre, citing statistics showing that for every 10 MW of power consumed, an average data centre uses as much as 150,000 gallons of water per day for cooling. However, generating that same 10 MW of power takes 480,000 gallons of water per day, Google’s Joe Kava said.
The data shows that using water to cool data centres actually saves water overall, because evaporative cooling cuts electricity use, and generating electricity itself consumes water. “The typical ‘water-less’ data centre uses about a third more water than the evaporatively cooled Google data centre,” Kava said. “Using less power is the most significant factor for reducing water consumption.”
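Kava’s claim can be sanity-checked with back-of-the-envelope arithmetic. The per-MW water figures below are derived from the 150,000 and 480,000 gallon numbers above; the PUE values (1.16 for an evaporatively cooled Google facility, 2.0 for an EPA-average “water-less” one) are borrowed from earlier in the article, and the assumption that an air-cooled site draws no cooling water at all is mine:

```python
GEN_GAL_PER_MW_DAY = 48_000   # 480,000 gal/day to generate 10 MW
SITE_GAL_PER_MW_DAY = 15_000  # 150,000 gal/day of on-site cooling water at 10 MW

def daily_water_gallons(it_load_mw: float, pue: float, evaporative: bool) -> float:
    """Total daily water footprint: generation water plus on-site cooling water."""
    total_mw = it_load_mw * pue
    generation = total_mw * GEN_GAL_PER_MW_DAY
    site = total_mw * SITE_GAL_PER_MW_DAY if evaporative else 0.0
    return generation + site

evap = daily_water_gallons(10, pue=1.16, evaporative=True)  # ~730,800 gal/day
dry = daily_water_gallons(10, pue=2.0, evaporative=False)   # 960,000 gal/day
print(dry / evap)  # ~1.31, i.e. roughly a third more water
```

Under these assumptions the “water-less” facility’s extra electricity demand costs more water at the power plant than the evaporative site draws from its taps.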
During a workshop, Google also revealed that its Belgium data centre goes a step further and actually draws its own water from a nearby river, operating its own water treatment plant in the process. Google is now planning two completely water self-sufficient data centres, the presenters said.
36 GOOGLE DATA CENTRES WORLDWIDE
According to statistics compiled in a FAQ document by Data Center Knowledge, Google now operates 36 data centres worldwide, including 19 in the US, 12 in Europe, 1 in Russia, 1 in South America and 3 in Asia Pacific.
In Asia, Google’s data centres are located in Hong Kong, Beijing and Tokyo, and the company is currently scouting locations in Taiwan and Malaysia, the document said.