
Microsoft calls for ‘cloud’ flash, server SoCs


Technology News | By eeNews Europe



"We are maniacally focused on performance per Watt per dollar," said Dileep Bhandarkar, chief architect of Microsoft’s global foundation services group that runs an undisclosed number of global data centers.

Electrical and mechanical infrastructure consumes up to 80 percent of the budget for the big data centers that draw 25 to 50 megawatts and house tens of thousands of servers each. "That’s why it’s a big deal if we can save even one Watt on a server processor," he said, speaking at an annual conference here hosted by LSI Corp.
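To see why a single Watt matters at this scale, consider a rough back-of-envelope sketch. The server count, PUE and electricity price below are illustrative assumptions, not figures Microsoft disclosed:

```python
# Back-of-envelope: why one Watt per server matters at data-center scale.
# Server count, PUE and $/kWh are illustrative assumptions only.

def annual_savings(watts_saved, servers, pue=1.5, dollars_per_kwh=0.07):
    """Yearly cost saved by trimming `watts_saved` from each server.

    PUE (power usage effectiveness) scales the IT load to include the
    electrical and mechanical overhead described above.
    """
    facility_watts = watts_saved * servers * pue
    kwh_per_year = facility_watts / 1000 * 24 * 365
    return kwh_per_year * dollars_per_kwh

# 1 W saved across 50,000 servers at PUE 1.5 and $0.07/kWh:
print(f"${annual_savings(1, 50_000):,.0f} per year")  # → $45,990 per year
```

At tens of thousands of servers per facility, even a one-Watt reduction compounds into real money once cooling and distribution overhead are counted.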

Specifically, Bhandarkar called for a new flash variant he dubbed "cloud MLC": multi-level-cell flash with moderate performance and lower power consumption. "We have started to use MLC flash drives because they give 30 to 40 percent more performance in some apps and have an added cost that’s lower than that, and they only drive power up two or three percent," he said.
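The logic of the "performance per Watt per dollar" metric can be sketched as follows. Only the 30-to-40-percent performance gain and two-to-three-percent power increase come from Bhandarkar's remarks; the baseline figures and cost premium are hypothetical:

```python
# Sketch of the "performance per Watt per dollar" comparison.
# Baseline perf/watts/dollars and the cost premium are hypothetical;
# the article gives only the +30-40% perf and +2-3% power deltas.

def perf_per_watt_per_dollar(perf, watts, dollars):
    return perf / (watts * dollars)

baseline = perf_per_watt_per_dollar(perf=100, watts=300, dollars=3000)
# +35% performance, +2.5% power, +20% cost (premium assumed < perf gain):
with_mlc = perf_per_watt_per_dollar(perf=135, watts=307.5, dollars=3600)

print(f"improvement: {with_mlc / baseline - 1:.1%}")  # → improvement: 9.8%
```

As long as the cost premium stays below the performance gain and the power delta stays small, the composite metric improves, which is the tradeoff the quote describes.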

Microsoft uses the flash drives in two configurations, one with a hardware cache controller and one without. "I don’t see flash drives replacing hard disks—they are just another layer in the memory hierarchy," he said.

"But we don’t need maximum bandwidth in flash; that’s expensive and the power goes up," Bhandarkar said. "I don’t want a drive with two million IOPS; I’ll settle for one with 20,000 IOPS, but there’s an opportunity for the industry to define what I call cloud MLC," he said.

Microsoft operates tens of data centers in at least eight countries to run more than 200 services including its Bing search, MSN, Hotmail and Azure cloud services.

As for processors, "once the industry gets up to 16 cores per chip, we’d prefer to see the transistors used for building server SoCs," Bhandarkar said. "The performance is going up more rapidly than we can use it, and at some point designers will realize they shouldn’t add more cores but instead integrate features to drive cost down and energy efficiency up," he said.

Higher core counts will only lead to imbalanced systems with bottlenecks in memory and network bandwidth, he added. In another recent talk, he called for 16-core server SoCs based on low-power Intel Atom or AMD Bobcat cores.

Bhandarkar softened his skeptical tone on ARM-based server SoCs since speaking against them in January. In the LSI talk he lumped ARM cores in a list with Atom and Bobcat for the first time, but said "it’s too early to commit to any" alternative architectures.

He also suggested Microsoft is nudging closer to a shift from Gigabit to 10 Gbit/s Ethernet. "The 10 Gbit switch costs are going down, and we are pretty excited about that," he said.

He also called for three-phase power supplies, UPS boxes that can be integrated in standard computing racks and enhanced voltage regulator modules. "I will pay 50 cents more for more efficient VRMs to get savings," he said.

Today, Microsoft uses dual-socket 1U rack servers with just one PCI Express slot, minimal memory and no RAID, running at temperatures of 27 degrees Celsius and rising. "Our long-term goal is to get to running them at up to 35 degrees C" to save cooling costs, he said.

Microsoft defined a new building block for its latest data centers called an ITPAC (pre-assembled component, pictured below), larger than the traditional shipping containers used in the past. It includes 15 server racks with 90 dual-socket servers each. Giant fans sit above the servers, drawing air out of them. Beside the racks and fans sit an evaporator unit for cooling and an air mixer for heating, both used as needed to supplement outside air.

All four units are combined into one sealed enclosure that draws as much as 5 MW. The ITPACs are housed in giant but simple warehouses without raised flooring. In the future the buildings may not even have four full walls, given the need for ambient air in the ITPACs.
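The figures quoted above can be combined into a quick capacity check; note the 5 MW is a peak facility draw for the whole sealed enclosure, so the per-server number below is an upper bound that includes fans and the evaporator:

```python
# ITPAC capacity arithmetic from the figures in the article: 15 racks of
# 90 dual-socket servers, in an enclosure drawing as much as 5 MW.

RACKS = 15
SERVERS_PER_RACK = 90
MAX_DRAW_WATTS = 5_000_000

servers = RACKS * SERVERS_PER_RACK            # 1,350 servers per ITPAC
# Peak facility draw divided across servers; an upper bound that folds in
# the fans and evaporator, not the draw of one server alone.
watts_per_server = MAX_DRAW_WATTS / servers

print(servers, round(watts_per_server))  # → 1350 3704
```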

By purchasing gear in such large chunks, Microsoft has cut its need to forecast capacity requirements from two years down to just six months. Thus the company aims to always have extra floor space and power capacity on hand to quickly add ITPACs as needed.

"When I first got to Microsoft, we had two stovepipes: server and data center engineers. But now we optimize for the whole because the data center is the server," he said.

 

Microsoft’s ITPAC includes 15 server racks, fans and an evaporator.