Slideshow: A guided tour inside Google’s data centers
Google has recently joined the Open Compute Project (OCP) and has announced a rack power architecture that distributes 48V DC to the rack and then down-converts it to 1V DC or less as close to the processor as possible. Urs Hölzle, Google’s senior vice president of technical infrastructure, announced the 48V DC “shallow” data center rack architecture on the first day of the Open Compute Summit.
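Why push the distribution voltage up to 48V? The main win is resistive loss: for the same delivered power, quadrupling the voltage cuts the current by four and the I²R loss in bus bars and cabling by sixteen. The short sketch below illustrates the arithmetic with assumed example numbers (the load power and path resistance are illustrative, not figures published by Google):

```python
# Illustrative comparison of resistive distribution loss at 12 V vs. 48 V.
# LOAD_POWER_W and PATH_RESISTANCE_OHM are assumed example values,
# not figures published by Google.

def distribution_loss(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """Return the I^2 * R loss (in watts) for delivering power_w at voltage_v."""
    current_a = power_w / voltage_v
    return current_a ** 2 * resistance_ohm

LOAD_POWER_W = 1000.0        # assumed load served through the distribution path
PATH_RESISTANCE_OHM = 0.01   # assumed resistance of bus bars and cabling

for volts in (12.0, 48.0):
    loss_w = distribution_loss(LOAD_POWER_W, volts, PATH_RESISTANCE_OHM)
    print(f"{volts:>4.0f} V distribution: {loss_w:6.2f} W lost in the path")
# 12 V: ~69.44 W lost; 48 V: ~4.34 W lost, a 16x reduction for the same power.
```

The remaining step from 48V down to 1V or less then happens as close to the processor as possible, keeping the high-current run short.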
Data centers have traditionally struggled to keep heat under control, both inside the building and inside the racks of servers and other equipment. Google appears to have mastered this with clever techniques that reduce “overhead” (non-computing) energy spent on cooling and power conversion. Power usage effectiveness (PUE) measures how efficiently a data center uses energy: the ratio of total facility energy to the energy consumed by the computing equipment itself, as opposed to cooling and other overhead. Google has achieved a comprehensive trailing twelve-month (TTM) PUE of 1.12 across all of its data centers, in all seasons and including all sources of overhead.
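To put that 1.12 figure in perspective, PUE is simply the ratio of total facility power to IT power, so a PUE of 1.12 means roughly 0.12 W of cooling and conversion overhead for every watt of computing. The quick calculation below shows the relationship; the IT load is an assumed round number used only for illustration:

```python
# PUE = total facility power / IT equipment power.
# IT_LOAD_KW is an assumed example figure, not a number published by Google.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power over IT power."""
    return total_facility_kw / it_equipment_kw

IT_LOAD_KW = 10_000.0             # assumed IT (computing) load
overhead_kw = 0.12 * IT_LOAD_KW   # overhead implied by a PUE of 1.12

print(round(pue(IT_LOAD_KW + overhead_kw, IT_LOAD_KW), 2))   # -> 1.12
# For comparison, a legacy facility with a PUE of 2.0 spends as much
# power on overhead as it does on computing.
```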
To find out how they achieved such a good efficiency rating, click here.
Now let’s take a look at what a data center looks like inside the building. All of these images are courtesy of Google, from their “Inside our data centers” website; you can also take your own guided tour.
A data center is a centralized building housing servers for the storage, management, and distribution of information. Inside a typical campus network room, routers and switches enable the data centers to communicate with each other. Google uses state-of-the-art fiber-optic networks to connect its many sites, since fiber enables incredible speeds, more than 200,000 times faster than a typical home Internet connection. The fiber cables can be seen running along the yellow cable trays near the ceiling in this image. (Image courtesy of Google)
Putting switches in the server racks has certain advantages: it lowers the risk of running out of ports in a rack (you can easily add a switch if the need arises), and connectivity requirements can be easily delegated (server administrators can wire their own cables to the provided ports, easing the burden on network teams with respect to cabling from switch port to connected device). Each Google server rack contains four switches, each connected by a different colored cable. The same colors are used throughout the data center so that, in the event of a failure, the one to replace can be identified quickly and easily. (Image courtesy of Google)
Here Google allows us a glimpse behind the server aisle. We find literally hundreds of fans that direct hot air away from the server racks and into a cooling unit to be recirculated; this technique is more efficient than simply cranking up the air conditioning. The latest server designs handle heat fairly effectively, but many servers will shut down as temperatures approach limits that could cause serious damage. That downtime can cost anywhere from an average of $8,000 a minute to more than $60,000 per minute for a company like Google. Ensuring that heat does not affect service or server reliability, and doing so efficiently, is certainly a huge challenge, but Google’s engineers seem to be handling the task quite capably. 
The green lights seen here are the server-status LEDs reflecting from the front of the servers. This is a really easy way to ensure that all functions are operating correctly with just a quick glance. (Image courtesy of Google)
Moving toward exascale computing on massive datasets requires liquid cooling, a necessary addition to most large data centers, which continue to be challenged by fast-growing Internet usage, the coming of 5G, and the Internet of Things. The huge storage tanks shown here can hold up to 240,000 gallons (about 900,000 liters) of water at any given time. The insulated tank holds water that will be sent to the heart of the data center to cool the high-speed processors, other electronics, and power supplies. (Image courtesy of Google)
Tape backup has been around since the 1950s and is still relevant today. As for the tape-versus-disk storage debate, I vote for tape: a disk drive is storage media inside a box with many moving mechanical parts and a complex read/write mechanism, while tape is comparatively simple. Because of that complexity, I think there is more of a chance of losing your precious data with disk than with tape; my motto is that simpler is usually better. The backup tapes in Google’s tape library are critical to reliability and storage. Each tape has a unique barcode so the robotic system can locate the correct one. This function is essential: the library periodically copies data from the main storage devices onto tape cartridges, so that a hard disk crash or other failure, which could otherwise be catastrophic to users, does not mean lost data. (Image courtesy of Google)
Here we see the tape library, which uses robotic arms (visible at the end of the aisle) to load and unload tapes when access is needed. (Of course, you might say that here is a mechanical possibility for failure, but a failure in the loading and unloading process will not result in the loss of data.) (Image courtesy of Google)
Google uses a Clos, or spine/leaf, topology, which solves many of the problems data centers face; see this site. The latest Google network, known as Jupiter, can deliver 1.3 Pbit/s of aggregate bisection bandwidth across an entire data center, enough for 100,000 servers to be linked to the network at 10 Gbit/s each. Can 25 Gbit/s or 50 Gbit/s links to servers and storage, and 100 Gbit/s links across the aggregation and spine layers of its networks, be far behind? A myriad of Ethernet switches connects the facilities network, enabling communication with and monitoring of the main controls for the essential cooling system in the data center. (Image courtesy of Google)
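As a quick sanity check on those Jupiter numbers (plain arithmetic on the figures quoted above, nothing beyond them): 100,000 servers at 10 Gbit/s each works out to 1 Pbit/s, which fits comfortably inside the 1.3 Pbit/s of bisection bandwidth.

```python
# Sanity check on the Jupiter bandwidth figures quoted above.
# Plain arithmetic on the published numbers; no internal Google data.

SERVERS = 100_000
PER_SERVER_GBPS = 10       # 10 Gbit/s link per server
BISECTION_PBPS = 1.3       # Jupiter's aggregate bisection bandwidth

demand_pbps = SERVERS * PER_SERVER_GBPS / 1_000_000   # Gbit/s -> Pbit/s
print(f"Demand if every server runs its link flat out: {demand_pbps} Pbit/s")
print(f"Jupiter bisection bandwidth:                   {BISECTION_PBPS} Pbit/s")
# 1.0 Pbit/s of demand vs. 1.3 Pbit/s available, so in principle every
# server can use its full 10 Gbit/s at the same time.
```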
A massive amount of floor space is needed, as well as efficient power, to run the full family of Google products for the world. We see here the facility in Hamina, Finland, where Google renovated an old paper mill, taking advantage of the building’s existing infrastructure and its proximity to the Gulf of Finland, which supplies cooling water for the data center. (Image courtesy of Google)
The blue LEDs on this row of servers indicate that all functions are running smoothly. LEDs are used because they are energy efficient, long lasting, and bright enough that a technician can tell at a quick glance whether any individual server has a problem. (Image courtesy of Google)
We see here colorful pipes that send and receive water for cooling the facility. In another example of Google’s energy-efficiency consciousness, a G-Bike is also visible, the vehicle of choice for team members moving efficiently around outside the data centers. (Image courtesy of Google)
In today’s security and hacker-conscious world, Google has a commitment to keep users’ data safe by destroying all failed drives right on site. (Image courtesy of Google)
Plastic curtains hanging in a network room inside Google’s Council Bluffs data center are an efficient way to keep servers cooler. The system pushes cold air up through the floor, and the clear plastic curtains help keep that cold air in while keeping hot air out. (Image courtesy of Google)
In Google’s Council Bluffs, Iowa, facility we see the overhead structures above the huge expanse of the data center. Powerful overhead steel beams both support the structure itself and help distribute 400V DC or AC power along the expanse of the data center, ultimately to be converted to 48V DC at the rack. (Image courtesy of Google)
Finally, we see the Council Bluffs data center, which provides over 115,000 square feet of space. Every inch is precious for housing as many servers as possible, so that users get fast search and access to sites like YouTube, all of it transparent to the user. (Image courtesy of Google)
This article first appeared on sister site Planet Analog.