AI-based tool delivers accurate code performance models

January 08, 2020 // By Julien Happich
Researchers at MIT have designed a machine-learning tool that delivers accurate code performance models for chips, in effect predicting how fast computer chips will execute code from various applications.

To get code to run as fast as possible, developers and compilers typically use performance models that run the code through a simulation of given chip architectures. Compilers use that information to automatically optimize code, while developers use it to tackle performance bottlenecks on the microprocessors that will run it. But performance models for machine code are handwritten by a relatively small group of experts and are not properly validated, the researchers argue, so simulated performance often deviates from real-life measurements.

Last summer, the researchers presented a novel machine-learning pipeline that automates the creation of a performance model. Ithemal is a neural-network model that trains on labelled data in the form of “basic blocks,” or fundamental snippets of computing instructions, to automatically predict how long a given chip takes to execute previously unseen basic blocks.
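Conceptually, a basic block is a straight-line run of instructions with a single entry and exit, and a learned cost model maps such a sequence to a predicted cycle count. The sketch below is only an illustration of that idea: the opcode names, per-opcode weights, and the simple additive model are assumptions for demonstration, not Ithemal's actual architecture, which uses a hierarchical neural network trained on measured timings.

```python
# Toy illustration of a learned basic-block cost model.
# Ithemal itself learns from data; here, hypothetical per-opcode
# cycle weights stand in for what a trained model would infer.
WEIGHTS = {"mov": 1.0, "add": 1.0, "imul": 3.0, "load": 4.0}

def predict_cycles(basic_block):
    """Predict the cycle count of a basic block given as a list of opcodes."""
    # Unknown opcodes fall back to an assumed default cost of 2 cycles.
    return sum(WEIGHTS.get(op, 2.0) for op in basic_block)

block = ["load", "imul", "add", "mov"]   # a straight-line snippet
print(predict_cycles(block))             # 4.0 + 3.0 + 1.0 + 1.0 = 9.0
```

A real model must capture effects a per-opcode sum cannot, such as instruction-level parallelism and pipeline stalls, which is why the researchers train on measured data rather than hand-writing such tables.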

Then at the November IEEE International Symposium on Workload Characterization, the researchers presented a benchmark suite of basic blocks from a variety of domains, including machine learning, compilers, cryptography, and graphics, that can be used to validate performance models. They pooled more than 300,000 of the profiled blocks into an open-source dataset called BHive. During their evaluations, Ithemal predicted how fast Intel chips would run code even better than a performance model built by Intel itself.
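Validating a performance model against a benchmark like BHive amounts to comparing predicted and measured timings over many blocks and summarizing the gap; one common summary is the mean absolute percentage error. A minimal sketch of that comparison, with made-up numbers for illustration:

```python
def mape(predicted, measured):
    """Mean absolute percentage error between predicted and measured timings."""
    errors = [abs(p - m) / m for p, m in zip(predicted, measured)]
    return 100.0 * sum(errors) / len(errors)

# Hypothetical predicted vs. measured cycle counts for four basic blocks
predicted = [10.0, 22.0, 7.0, 15.0]
measured = [11.0, 20.0, 7.0, 16.0]
print(round(mape(predicted, measured), 2))  # ≈ 6.34 percent error
```

A lower error over a large, diverse block suite is what "better than Intel's own model" means in practice: the learned predictions sit closer to the timings actually measured on the hardware.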

Ultimately, given enough data, developers and compilers can use the tool to generate code that runs faster and more efficiently on an ever-growing number of diverse and “black box” chip designs.

“Modern computer processors are opaque, horrendously complicated, and difficult to understand. It is also incredibly challenging to write computer code that executes as fast as possible for these processors,” explains co-author Michael Carbin, an assistant professor in the Department of Electrical Engineering and Computer Science (EECS) and a researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “This tool is a big step forward toward fully modelling the performance of these chips for improved efficiency.”

In another paper, the MIT researchers proposed a new technique to automatically generate compiler optimizations. Specifically, they automatically generate an algorithm, called Vemal, that converts certain code into vectors, which can be used for parallel computing. Vemal was demonstrated to outperform hand-crafted vectorization algorithms used in the LLVM compiler — a popular compiler used in the industry.
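Vectorization rewrites a scalar loop so that several elements are processed per instruction, the way SIMD hardware executes packed operations. The Python analogue below only illustrates the transformation an auto-vectorizer like Vemal targets; the lane width and function names are illustrative, and real vectorizers emit machine-level SIMD instructions rather than Python loops.

```python
def add_scalar(a, b):
    """Scalar loop: one element pair per iteration."""
    return [x + y for x, y in zip(a, b)]

def add_vectorized(a, b, width=4):
    """Process `width` elements per step, mimicking one packed SIMD add."""
    out = []
    for i in range(0, len(a), width):
        lane_a = a[i:i + width]          # load a lane of elements
        lane_b = b[i:i + width]
        out.extend(x + y for x, y in zip(lane_a, lane_b))  # one "packed" add
    return out

a = [1, 2, 3, 4, 5, 6, 7, 8]
b = [8, 7, 6, 5, 4, 3, 2, 1]
assert add_vectorized(a, b) == add_scalar(a, b)  # same result, fewer steps
```

The difficulty a vectorizer must solve automatically is deciding when such a rewrite is legal and profitable, which is exactly the decision-making that Vemal learns to generate instead of relying on hand-crafted heuristics.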
