 
SiFive Launches New RISC-V AI IP with Scalar, Vector, and Matrix Compute
SiFive has expanded its RISC-V AI portfolio with the launch of its 2nd Generation Intelligence family, introducing five new processor IPs designed to accelerate AI workloads from the far edge to the data center. According to SiFive, the additions include two new cores — the X160 Gen 2 and X180 Gen 2 — alongside refreshed versions of the X280, X390, and XM.
For eeNews Europe readers, this development highlights how RISC-V is maturing as an architecture for AI acceleration, providing scalable compute options for designers working on IoT devices, high-performance data center platforms, and more. The availability of these IPs also signals a growing industry shift toward open-standard instruction sets in advanced AI applications.
Expanding the RISC-V AI Lineup
The X160 Gen 2 and X180 Gen 2 are targeted at far-edge compute and IoT, where space and energy efficiency are essential. The cores enable advanced AI functionality in embedded environments such as automotive, robotics, and industrial automation. By contrast, the upgraded X280 Gen 2, X390 Gen 2, and XM Gen 2 scale toward higher-performance markets, with the XM leveraging matrix acceleration to handle complex AI workloads.
“AI is catalyzing the next era of the RISC-V revolution,” noted Patrick Little, CEO of SiFive. “We’re seeing strong traction including adoption of the new X100 series by two Tier 1 U.S. semiconductor companies. Our new 2nd Generation Intelligence IP builds on this momentum, adding new features and configurability to accelerate our customers’ designs and time to market.”
Vector and Matrix Compute for AI
A key differentiator across the lineup is vector processing. By executing multiple data items in parallel, vector engines cut down instruction overhead and reduce power consumption compared with scalar-only CPUs. This allows AI models to run faster, with less energy and silicon area — a crucial factor for edge AI devices.
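To see why vector execution trims instruction overhead, here is a minimal sketch of a scalar multiply-accumulate loop next to the same loop written with the standard RISC-V Vector (RVV) C intrinsics. This is generic RVV code, not SiFive-specific; the vector loop processes a hardware-chosen number of elements per iteration instead of one.

```c
#include <riscv_vector.h>
#include <stddef.h>

/* Scalar reference: y[i] += a * x[i], one element per loop iteration. */
void saxpy_scalar(size_t n, float a, const float *x, float *y) {
    for (size_t i = 0; i < n; ++i)
        y[i] += a * x[i];
}

/* RVV version: each pass handles vl elements, so loop and instruction
 * overhead shrink as the implementation's vector length grows. */
void saxpy_rvv(size_t n, float a, const float *x, float *y) {
    for (size_t vl; n > 0; n -= vl, x += vl, y += vl) {
        vl = __riscv_vsetvl_e32m8(n);                    /* elements this pass */
        vfloat32m8_t vx = __riscv_vle32_v_f32m8(x, vl);  /* load x chunk */
        vfloat32m8_t vy = __riscv_vle32_v_f32m8(y, vl);  /* load y chunk */
        vy = __riscv_vfmacc_vf_f32m8(vy, a, vx, vl);     /* y += a * x */
        __riscv_vse32_v_f32m8(y, vy, vl);                /* store y chunk */
    }
}
```

The same source scales across implementations with different vector lengths, which is part of what makes a single ISA attractive from small edge cores up to larger data center parts.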
At the high end, the XM Gen 2 integrates a matrix engine that can be scaled for demanding AI and machine learning tasks. Across the portfolio, SiFive emphasizes configurability, offering designers a single ISA that spans diverse compute requirements.
Accelerator Control Capabilities
Another highlight is the ability of all X-Series IPs to act as Accelerator Control Units (ACUs). Through co-processor interfaces such as SSCI and VCIX, the cores provide control and assist functions for custom accelerators. This approach gives customers flexibility to innovate at the platform level while reducing software complexity.
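As a rough illustration of the control-unit pattern only (SiFive's SSCI and VCIX programming interfaces are not documented here, so the register map and names below are invented for the example), a control core might drive a memory-mapped custom accelerator along these lines:

```c
#include <stdint.h>

/* Hypothetical MMIO register map for a custom accelerator; addresses and
 * fields are illustrative, not a SiFive or vendor-defined interface. */
#define ACC_BASE    0x40000000u
#define ACC_SRC     (*(volatile uint32_t *)(ACC_BASE + 0x00)) /* input buffer address */
#define ACC_DST     (*(volatile uint32_t *)(ACC_BASE + 0x04)) /* output buffer address */
#define ACC_LEN     (*(volatile uint32_t *)(ACC_BASE + 0x08)) /* element count */
#define ACC_CTRL    (*(volatile uint32_t *)(ACC_BASE + 0x0C)) /* write 1 to start */
#define ACC_STATUS  (*(volatile uint32_t *)(ACC_BASE + 0x10)) /* bit 0 = done */

/* The control core programs the job, starts it, and polls for completion,
 * leaving the heavy compute to the attached accelerator. */
static void run_accelerator_job(uint32_t src, uint32_t dst, uint32_t len) {
    ACC_SRC  = src;
    ACC_DST  = dst;
    ACC_LEN  = len;
    ACC_CTRL = 1u;                   /* kick off the job */
    while ((ACC_STATUS & 1u) == 0u)  /* wait for the done flag */
        ;
}
```

Co-processor interfaces such as SSCI and VCIX aim to tighten this kind of coupling between the control core and the accelerator, but the sequencing shown here (configure, start, synchronize) is the general shape of the software the control unit runs.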
Availability
According to SiFive, all five 2nd Generation Intelligence products are available for licensing now, with first silicon expected in Q2 2026. For developers building next-generation AI-enabled systems, SiFive’s expanded portfolio offers a path to scalable, power-efficient RISC-V compute at both the embedded edge and the data center.