The Beast unveiled: inside a Google server

If you've ever been curious about what Google uses for server hardware, how …

Google doesn't talk about its server operations very often; most of what we know boils down to one word: "big." The company lifted the lid ever-so-slightly yesterday (no April Fool), and gave the world a peek inside a data center that's normally locked up tighter than Fort Knox. The results (and the company's focus) might surprise you.

Each Google server is hooked to an independent 12V battery to keep the units running in the event of a power outage. Data centers themselves are built and housed in shipping containers (we've seen Sun pushing this trend as well), a practice that went into effect after the brownouts of 2005. Each container holds a total of 1,160 servers and can theoretically draw up to 250kW. Those numbers might seem a bit high for a data center optimized for energy efficiency—it breaks down to around 216W per system—but there are added cooling costs to be considered in any type of server deployment. These sorts of units were built for parking under trees (or at sea, per Google's patent application).
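For the curious, the roughly 216W figure falls straight out of the two numbers Google disclosed. A quick back-of-the-envelope sketch (nothing here is Google-specific beyond the container figures quoted above):

```python
# Per-server power draw implied by Google's disclosed container numbers:
# 1,160 servers per container, up to 250 kW of draw per container.

SERVERS_PER_CONTAINER = 1_160
CONTAINER_PEAK_DRAW_KW = 250

watts_per_server = (CONTAINER_PEAK_DRAW_KW * 1_000) / SERVERS_PER_CONTAINER
print(f"Peak draw per server: {watts_per_server:.0f} W")  # ~216 W
```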

By hooking an individual battery to each server instead of relying on a centralized UPS, the company is able to use the available energy much more efficiently (99.9 percent efficiency, versus 92-95 percent for a typical UPS). The rack-mounted servers themselves are 2U units with eight DIMM slots. Ironically, for a company talking up power efficiency, the server box in question is scarcely a power sipper. The GA-9IVDP is a custom-built motherboard (I couldn't find any information about it on Gigabyte's website), but online research and a scan of Gigabyte's similarly named products imply that this is a Socket 604 dual-Xeon board running a pair of Nocona (Prescott-based) Xeon processors.
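To put those efficiency percentages in perspective, here's a rough sketch of the conversion losses at the scale of a single 250kW container. Treating the quoted figures as simple end-to-end conversion efficiencies is my assumption for illustration, not something Google has spelled out:

```python
# Rough illustration of the efficiency gap at container scale.
# Assumes the 250 kW container load cited above and treats the quoted
# percentages as plain conversion efficiency; real losses vary with load.

CONTAINER_LOAD_KW = 250

def power_lost_kw(load_kw: float, efficiency: float) -> float:
    """Power dissipated as conversion loss while delivering a given load."""
    return load_kw * (1 - efficiency) / efficiency

for label, eff in [("Per-server battery (99.9%)", 0.999),
                   ("Typical central UPS (92%)", 0.92),
                   ("Typical central UPS (95%)", 0.95)]:
    print(f"{label}: ~{power_lost_kw(CONTAINER_LOAD_KW, eff):.1f} kW lost")
```

At this scale, a conventional UPS would burn off somewhere between 13kW and 22kW per container as conversion loss, versus well under 1kW for the per-server batteries.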

[Image: Google server (GoogleServerLargeResize.jpg)]

Consider the board in question (you can view a larger version of it here). It's not officially listed as a Gigabyte product in the company's database, but there are images of what we might call cousins: the GA-9ILDR and GA-9ILDTH. Based on the shape, socket layout, component design, and overall characteristics, it seems a safe bet that Google is still using at least some older Nocona-based Intel hardware in its servers.

No offense to Google intended; it's possible that Prescott-flavored NetBurst delivered a unique and significant performance advantage over any other type of server hardware, but color me dubious on that particular proposition. Any number of modern server combinations from both Intel and AMD would offer more cores per socket and better performance per watt. Google isn't saying much about its upgrade cycle or how long it expects a server to last once purchased, but if the company is serious about power-efficient, containerized server deployments, it needs to move away from what was once Intel's most notorious power-sucking platform.

CNET has additional information on how Google calculates and maintains energy efficiency; anyone interested in how the company evaluates and ranks these sorts of numbers should definitely have a look. When the concept of using shipping containers as data centers was first floated, most pundits laughed it off (save for those who saw the potential of using such centers in disaster-struck areas). With Google and Sun both on board with the idea, it's a lot less funny, and as existing server storage areas hit max capacity, a lot more practical.