Artificial intelligence chips could spill out of data centers, onto desks

January 18, 2018 //By James Morra
No one knows what the future of artificial intelligence will look like, and no one knows which computer architecture will take it there. For years, Nvidia has been trying to expand the market for its graphics chips, the current gold standard for training and running deep learning algorithms. And it has often relied on tactics that have little to do with the raw performance of its chips.

Last year, the company released the DGX Station so that software engineers can experiment with deep learning libraries and refine algorithms before sending them to the cloud, where models are trained on enormous amounts of data. The workstation contains chips based on Nvidia’s Volta architecture and delivers 480 trillion floating-point operations per second, or teraflops.

The DGX workstation shares the same software stack as the DGX-1 appliance, a miniature supercomputer that provides 960 teraflops of performance. That way, software engineers can swiftly swap software between Nvidia’s workstations and appliances, which can be installed in data centers where training typically happens.

Nvidia introduced both products to tighten its grip on the artificial intelligence market and to promote its Volta architecture, which contains custom tensor cores for handling deep learning. But according to one industry executive, the company’s rivals could use the same strategy to push their own custom chips onto software engineers.

“They say, in the early phases of designing neural networks, we don’t want to go to data centers,” said Jin Kim, chief data science officer for machine learning chip startup Wave Computing. “We want a workstation right next to us for experimentation, taking elements of existing neural networks and putting them together like Lego blocks.”
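To make that “Lego blocks” idea concrete, here is a minimal sketch of the workflow in PyTorch (a framework chosen only for illustration; the article names no specific library). A block lifted from an existing network is reused as-is, and a new experimental block is snapped onto it for a quick local run before anything is sent to a data center. All layer shapes are arbitrary.

```python
import torch
import torch.nn as nn

# Hypothetical illustration of the "Lego blocks" workflow Kim describes:
# reuse a pretrained feature extractor and bolt a new head onto it.
pretrained_backbone = nn.Sequential(   # stands in for a block from an existing network
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)

new_head = nn.Linear(16, 10)           # the experimental block being snapped on

model = nn.Sequential(pretrained_backbone, new_head)

# Quick sanity-check on a workstation before any data center training run
x = torch.randn(8, 3, 32, 32)          # a small batch of dummy images
print(model(x).shape)                  # torch.Size([8, 10])
```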

He declined to disclose whether Wave Computing plans to release its own workstation. But the company, which has raised $117 million over the last nine years, has been putting the finishing touches on an appliance equipped with its dataflow processing unit (DPU), which supports lower-precision operations that consume less power and memory than traditional chips.
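For a sense of why lower precision pays off, consider a generic NumPy sketch. The DPU’s actual number formats are not specified here; fp32 versus fp16 is simply a common example of the trade-off: halving the width of each value halves the memory footprint of a weight matrix, which in turn cuts bandwidth and, on supporting hardware, power.

```python
import numpy as np

# Generic illustration, not Wave Computing's actual formats:
# storing weights at half the precision takes half the memory.
weights_fp32 = np.random.randn(4096, 4096).astype(np.float32)
weights_fp16 = weights_fp32.astype(np.float16)

print(weights_fp32.nbytes / 2**20, "MiB")  # 64.0 MiB
print(weights_fp16.nbytes / 2**20, "MiB")  # 32.0 MiB, half the footprint

# The cost is reduced numeric range and precision, which deep learning
# workloads often tolerate well, particularly for inference.
```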

When finished, the appliance is projected to deliver 2.9 quadrillion operations per second on machine learning workloads. Wave Computing has also built a custom compiler that translates code into a form its silicon can understand. The company designed its coarse-grained reconfigurable array chips with 16,384 cores apiece.
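Some back-of-envelope arithmetic on those figures (an illustration based only on the numbers quoted above, and it assumes the full throughput comes from a single 16,384-core chip, which the article does not confirm):

```python
# Rough per-core throughput implied by the quoted figures (not vendor data)
total_ops_per_s = 2.9e15   # 2.9 quadrillion operations per second (claimed)
cores = 16_384             # cores per coarse-grained reconfigurable array chip

per_core = total_ops_per_s / cores
print(f"{per_core / 1e9:.0f} billion ops/s per core")  # ~177 billion ops/s per core
```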

Wave Computing is acutely aware that software engineers are asking for workstations to experiment with algorithms outside the data center, Kim said. Other startups have almost certainly fielded the same requests. But none has yet ventured to challenge Nvidia.

