The €20m EuroEXA project started this month, bringing together three existing exascale projects on FPGA accelerators, interconnect and 3D chip technologies to reach a performance of 10^18 operations per second, 10 times that of today's fastest supercomputers.
The limiting factor for exascale computing is energy efficiency, says Dr Dirk Koch of the University of Manchester, who is part of the ECOSCALE project. He points to the Chinese Sunway TaihuLight supercomputer as not too far away from exascale, with 10m cores that use 28MW of power for 125 petaflops of performance. “If you consider exascale we need 8x that performance, but this is more than all the performance of all the top 500 supercomputers,” he said. “That would be over 85m cores which would need 224MW of power. That’s $40,000/hr or $340m/yr in the US, and it would cost over $2bn.”
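The scaling behind those figures can be checked with back-of-envelope arithmetic. The sketch below assumes a US industrial electricity price of roughly $0.18/kWh and TaihuLight's published core count; neither number appears in the article.

```python
# Back-of-envelope check of Koch's exascale scaling.
# Assumptions (not from the article): TaihuLight has 10,649,600 cores,
# and US electricity costs ~$0.18/kWh.
sunway_cores = 10_649_600
sunway_power_mw = 28       # MW
sunway_pflops = 125        # petaflops

scale = 1000 / sunway_pflops            # 1 exaflop = 1000 petaflops -> 8x
cores = sunway_cores * scale            # ~85.2m cores
power_mw = sunway_power_mw * scale      # 224 MW

price_per_kwh = 0.18                    # assumed US price, $/kWh
cost_per_hr = power_mw * 1000 * price_per_kwh   # ~$40,300/hr
cost_per_yr = cost_per_hr * 24 * 365            # ~$350m/yr
```

The results line up with the quoted numbers: an 8x scale-up of TaihuLight gives just over 85m cores, 224MW, roughly $40,000/hr and on the order of $340m/yr in electricity alone.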
“So energy efficiency is the ultimate key, and integration is the key for performance, the more you can do on a single chip the better,” he said.
He points to FPGA performance as double that of a GPU on a specific problem for 1/10th of the power. “That’s exactly where we want to go,” he said. “That would use 40MW at exascale. This is the beauty of FPGAs, integrating more of the functions on a single chip. Data management is key. Don’t move data to compute, but move compute to the data, which means reconfiguring the FPGA where the data is.”
“Just using CPUs and GPUs will not do the job and it has to be done with FPGAs, which will need new programming models. At Manchester we are working on OpenCL as the programming model for configurable modules that can be plugged into a system as an HPC accelerator.”
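In that model, the developer writes ordinary OpenCL kernels and the toolchain compiles them into FPGA logic rather than GPU instructions. A generic kernel of this kind looks like the sketch below; this is an illustrative vector-add example, not code from the Manchester or EuroEXA projects.

```c
/* A generic OpenCL C kernel (vector addition) of the kind that
 * FPGA toolchains compile into reconfigurable hardware modules.
 * Illustrative only; not EuroEXA/ECOSCALE code. */
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *c,
                   const unsigned int n)
{
    int i = get_global_id(0);   /* one work-item per array element */
    if (i < n)
        c[i] = a[i] + b[i];
}
```

On an FPGA, such a kernel becomes a dedicated pipeline in the fabric, which is what makes the "move compute to the data" approach Koch describes possible.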
The two other projects are looking at high-performance computing interconnect and 3D chip packaging for faster local interconnects.