Researchers in Korea have developed a technique that, for the first time, combines the advantages of analog 'in-memory' AI computing with digital processing for unsupervised learning.
"Today's computer applications generate a large amount of data that needs to be processed by machine learning algorithms," said Yeseong Kim of Daegu Gyeongbuk Institute of Science and Technology (DGIST), who led the project.
The researchers believe this is the first digital-based PIM architecture for AI that can accelerate unsupervised machine learning.
Unsupervised machine learning trains an algorithm to recognize patterns in large datasets without providing labelled examples for comparison. One popular approach is clustering, which groups similar data points together into classes.
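To make the idea concrete, here is a minimal sketch of one well-known clustering algorithm, k-means, on toy 2-D data. This is purely illustrative and is not the researchers' code; the function names and the deterministic initialisation are assumptions made for the example.

```python
# Illustrative k-means clustering on toy 2-D points (hypothetical example,
# not the DUAL system): alternate between assigning points to the nearest
# centroid and recomputing each centroid as the mean of its cluster.
def kmeans(points, k, iters=10):
    centroids = list(points[:k])  # simple deterministic init for the sketch
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
            clusters[d.index(min(d))].append(p)
        # Update step: move each centroid to the mean of its assigned points.
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = (sum(p[0] for p in cl) / len(cl),
                                sum(p[1] for p in cl) / len(cl))
    return centroids, clusters

# Two well-separated groups of three points each cluster cleanly.
pts = [(0.1, 0.2), (5.0, 5.1), (0.0, 0.1), (0.2, 0.0), (5.2, 4.9), (4.9, 5.0)]
cents, cls = kmeans(pts, k=2)
```

Each assignment step must compare every data point against every centroid, which is exactly the kind of bulk data movement between memory and processor that the quote above describes.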
"Running clustering algorithms on traditional cores results in high energy consumption and slow processing, because a large amount of data needs to be moved from the computer's memory to its processing unit, where the machine learning tasks are conducted," said Kim.
Processing in-memory (PIM) approaches can reduce this power consumption, but existing PIM designs are analog-based and require analog-to-digital and digital-to-analog converters, which themselves consume a large share of the chip's power and area.
The team's architecture, called DUAL, instead enables computations directly on digital data stored inside a computer memory. Rather than working with the original data, DUAL maps all data points into high-dimensional space, replacing complex clustering operations with simple, memory-friendly ones. The team then designed a PIM-based architecture that supports all the essential operations in a highly parallel and scalable way.
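A rough sketch of the high-dimensional mapping idea follows. The encoding below (sign of a random projection) and the dimensionality `D` are assumptions for illustration, not DUAL's actual scheme; what it shows is why such a mapping is memory-friendly: similarity between binary hypervectors reduces to Hamming distance, i.e. XOR plus bit counting, operations a digital memory array can perform in parallel.

```python
# Illustrative high-dimensional binary encoding (hypothetical sketch, not
# DUAL's published encoding): each point becomes a long binary hypervector,
# and similarity is measured with Hamming distance -- just XOR and counting.
import random

D = 1024  # hypervector dimensionality (assumed value for illustration)

def encode(point, proj):
    # One bit per dimension: the sign of a random projection of the point.
    return [1 if sum(w * x for w, x in zip(row, point)) >= 0 else 0
            for row in proj]

def hamming(a, b):
    # Bitwise distance: count the positions where the hypervectors differ.
    return sum(x ^ y for x, y in zip(a, b))

rng = random.Random(0)
proj = [[rng.gauss(0, 1) for _ in range(2)] for _ in range(D)]

hv_a = encode((0.1, 0.2), proj)
hv_b = encode((0.15, 0.25), proj)  # close to the first point
hv_c = encode((5.0, -4.0), proj)   # far from the first point
# Nearby points agree on most bits; distant points differ on many.
```

Replacing floating-point distance computations with bitwise comparisons like this is what lets clustering-style operations run inside digital memory without analog converters.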
The team tested DUAL with several popular clustering algorithms on a wide range of large-scale datasets. Despite using a binary representation and a simplified distance metric, DUAL matched the quality of existing clustering algorithms while delivering a 58.8x speedup and a 251x improvement in energy efficiency compared with state-of-the-art AI frameworks running on a graphics processing unit (GPU).