
Neural ASIC platform promises easy AI integration

Technology News |
By eeNews Europe


Up to now, hardware accelerators for machine learning have been built primarily around GPUs and FPGAs. According to eSilicon, the machine learning ASIC platform (MLAP) segment of the market has been under-served because of the dynamic nature of AI/machine learning algorithms: they typically change substantially as they are adapted to the end application, making a static, full-custom ASIC platform problematic.

Through customized, targeted IP offered in 7nm FinFET technology and a modular design methodology, the neuASIC platform is said to remove the restrictions imposed by changing AI algorithms. The platform includes a library of AI-targeted functions that can be quickly combined and configured to create custom AI algorithm accelerators. Using the Design Profiler and AI Engine Explorer tools, eSilicon-developed and third-party IP can be configured as AI “tiles” via an ASIC Chassis Builder, enabling early power, performance and area (PPA) analysis of candidate architectures. The platform also draws on a knowledge base to guide PPA optimization.
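The early-PPA-analysis idea above can be illustrated with a toy comparison of candidate tile configurations. The configuration names and the power, performance and area figures below are invented for illustration and are not taken from eSilicon's tools.

```python
# Illustrative early PPA comparison across candidate tile architectures.
# All names and figures are hypothetical, not from the Design Profiler.
candidates = {
    "8x MAC + 2x conv":  {"power_w": 3.2, "perf_tops": 12.0, "area_mm2": 18.0},
    "16x MAC + 1x conv": {"power_w": 4.1, "perf_tops": 15.0, "area_mm2": 22.0},
}

def tops_per_watt(ppa):
    """Simple efficiency metric: throughput per unit of power."""
    return ppa["perf_tops"] / ppa["power_w"]

# Pick the candidate with the best efficiency before committing to silicon.
best = max(candidates, key=lambda name: tops_per_watt(candidates[name]))
print(best)  # "8x MAC + 2x conv" (3.75 vs 3.66 TOPS/W)
```

In practice such a comparison would weigh all three PPA axes against the target application, but the value of doing it before tape-out is the point of the platform's tooling.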

The elements of the neuASIC IP library include functions found in most AI designs, resulting in a core architecture that is both optimized and durable with respect to AI algorithm changes. Specific algorithm modifications can be accommodated either through minor chip revisions that integrate the appropriate AI “tiles” or through modifications of the 2.5D package to integrate the appropriate memory components.


The eSilicon-developed AI-targeted “tiles” include subsystems such as multiply-accumulate (MAC), convolution and transpose memory. The physical interface (PHY) to the HBM memory stack is also part of the library.
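The multiply-accumulate operation named above is the basic building block of neural-network inference; a minimal software sketch of what a MAC tile computes in hardware (function names here are illustrative, not eSilicon's API) looks like this:

```python
# Minimal sketch of the multiply-accumulate (MAC) operation that a
# hardware MAC tile implements; one loop iteration corresponds to what
# a MAC unit does in a single cycle.

def mac(weights, activations):
    """Dot product via repeated multiply-accumulate."""
    acc = 0
    for w, a in zip(weights, activations):
        acc += w * a  # multiply, then accumulate into a running sum
    return acc

print(mac([1, 2, 3], [4, 5, 6]))  # 1*4 + 2*5 + 3*6 = 32
```

An accelerator gains its speed by replicating this unit many times and streaming weights and activations through the array in parallel, which is why a MAC subsystem is a natural pre-built tile.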

A typical AI design requires access to large amounts of memory. This is usually accomplished with a combination of customized memory structures on the AI chip itself and off-chip access to dense 3D memory stacks called high-bandwidth memory (HBM). Access to these HBM stacks is accomplished through a technology called 2.5D integration. This technology employs a silicon substrate to tightly integrate the chip with HBM memory in a sophisticated multi-chip package. The current standard for this interface is HBM2. The development of customized on-chip memory and 2.5D integration represent eSilicon core competencies that are required for a successful AI design.
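The bandwidth motivation for HBM can be made concrete with a back-of-envelope calculation using the published JEDEC HBM2 figures: a 1024-bit-wide interface per stack at up to 2 Gbit/s per data pin.

```python
# Back-of-envelope peak bandwidth of one HBM2 stack, using the JEDEC
# HBM2 headline figures (1024-bit interface, up to 2 Gbit/s per pin).
interface_width_bits = 1024
pin_rate_gbps = 2.0  # Gbit/s per data pin

bandwidth_gbytes_s = interface_width_bits * pin_rate_gbps / 8
print(bandwidth_gbytes_s)  # 256.0 GB/s per stack
```

A wide, slow interface like this is only practical over the very short traces of a 2.5D silicon interposer, which is why HBM access and 2.5D integration go hand in hand.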

eSilicon says it is currently engaged with several tier one system providers and high-profile startups to deploy the neuASIC platform and its associated IP, with initial applications focusing on the data centre and information optimization, human/machine interaction and autonomous vehicles.

“We see a vast array of possibilities for acceleration of AI algorithms,” said Patrick Soheili, vice president of business and corporate development at eSilicon. “ASICs provide a clear power and performance advantage. Thanks to our neuASIC platform, the MLAP segment of the market can now expand to serve a wide range of applications.”

eSilicon – www.esilicon.com

