Using non-volatile memory IP in system-on-chip designs
In their May 2012 presentation "Design of future embedded systems: toward system of systems," IDC authors Alain Pétrissans and associates stated that the embedded systems market was worth just over €1 trillion in 2012 and would grow at a compound annual rate of 12 percent through 2015, reaching €1.5 trillion. These figures exclude all PCs and mobile phones. The researchers declared that emerging applications such as smart cities, health, energy, and mobility will drive market growth and expansion.
Embedded systems are small microcontroller-based components that collect data and automate simple functions. Initially built around 8-bit processors such as the 8051, these systems can be found in energy management, portable medical devices, automotive electronics, touch screens, and a wide variety of tags and sensors. IDC predicts 1.5 trillion intelligent tags and sensors alone by 2020. Today these systems are migrating toward higher-performance compute engines, driven by the availability of low-cost 32-bit CPUs and the demand for more functionality in network-attached devices.
Taking an intelligent tag/sensor as a typical embedded system, the key design attributes are (1) a finite feature set, (2) very high volume, (3) security, and (4) low power. In many instances these requirements dictate a system-on-chip (SoC) design containing a CPU core with the program code stored on-board or in external EEPROM (see Fig. 1). The rationale for erasable memory is the need for frequent software updating during development, design retargeting or customization from stock prior to shipment, and limited bug fixes or enhancements after the product is shipped into the field. The ability to change the code an unlimited number of times is appealing. However, in many if not all instances, the code is not changed frequently enough, if at all, to warrant unlimited programmability, nor is it wise to burden the volume product with costs incurred only during development. The low-cost requirement makes embedded flash and EEPROM economically unattractive.
And because “security of information is a high priority and a very strong force on the market,” as IDC declares, there is increased demand to securely integrate program code on chip. While unlimited re-programmability might be seen as an advantage during software development, once the device is shipped it becomes a product’s greatest vulnerability. Where software developers see flexibility, hackers see opportunity.
This discussion makes the case for an alternative solution: integrating anti-fuse non-volatile memory (NVM) with a finite number of rewrite cycles on chip. It details the benefits to the SoC manufacturer as well as to the system OEM that incorporates the SoC into a final system design. Anti-fuse, as its name implies, stores information by creating a low-resistance path rather than an open circuit. This contrasts with fuses, which are blown open by a laser or a high-voltage potential, and with ROM, in which the contents are written into the metal layers of the device during place and route.
Figure 1 illustrates the compute core of a simple Zigbee wireless SoC that might be found in a variety of security and lighting products designed into smart homes. The circuit could also be found in small control elements used in power line communications applications, such as motors and heating and air conditioning control for the home, as well as in automotive infotainment and passenger comfort control. The programs in these applications are small compared to the code found in smart phones: kilobytes rather than megabytes in size. They require no virtual memory operating system, executing instead under a tiny scheduler. The program code for a small Zigbee-connected sensor and controller application, for example, can be implemented in 256 kbits (32 KB).
Figure 1: The compute core of a simple Zigbee wireless SoC that might be found in a variety of security and lighting products designed into smart homes
Evaluating the OEM’s cost benefits of embedding code storage on-chip
To make the case for embedded anti-fuse NVM as a replacement for external serial EEPROM, a 256 kbit memory will be used as a reference. As of this writing, the distributors' price for a commercial-grade memory component runs from $0.14 to $0.18 in volume. An OEM ordering large quantities can acquire these parts at 60 percent of the distributors' price, or around $0.08 to $0.10 each. However, if the OEM requires industrial- or military-grade components, these prices can be twice the amounts cited. The arguments for embedding code storage on-chip include cost, performance, and security (see Figure 2). Even if these benefits are compelling, the designer must weigh them against the fact that an external device with virtually unlimited rewrite cycles is being replaced by an embedded memory with a finite number of rewrite cycles.
Besides saving the cost of one component, the OEM eliminates the need to qualify primary and secondary vendors and their associated serial EEPROM components. It also removes the need to forecast how many of these memory chips will be required over the life of the OEM's end product, and the worry that the chip's availability could affect the OEM's ability to ship product (and the SoC supplier's ability to sell components to the OEM). Eliminating the component also reduces the manufacturing cost of the final product, for example by cutting one pick-and-place operation and one unit of final test. Eliminating the total burdened cost of this single external component can save $0.05 or more on the OEM's bill of materials, in addition to the component cost itself.
Figure 2: Embedding code storage on-chip with an antifuse NVM IP like the Kilopass Technology Inc. Gusto-2 reduces cost, improves performance, and enhances security.
Integrating the code on-chip also improves the OEM's device performance. With an external serial EEPROM, the code must first be read serially into on-board SRAM before the program can execute; with embedded NVM, the program executes directly from the on-chip memory at power-up. Because the memory is on-chip, the designer also gains a wider datapath for memory access, which further improves program execution performance.
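To make the contrast concrete, here is a minimal boot-flow sketch in C for a generic microcontroller. The function and macro names (spi_eeprom_read, ONCHIP_NVM_BASE, CODE_SIZE) and the addresses are illustrative assumptions, not part of any particular SoC or vendor API.

/* Hypothetical boot flow: external serial EEPROM vs. on-chip NVM.
 * Names, sizes, and addresses are illustrative only. */
#include <stdint.h>
#include <stddef.h>

#define CODE_SIZE (32u * 1024u)               /* 256 kbit of program code */

typedef void (*entry_fn)(void);

static uint8_t sram_code[CODE_SIZE];          /* shadow SRAM for the copied code */

/* Stub standing in for a real SPI EEPROM driver; a real implementation
 * clocks the image out of the external part serially. */
static void spi_eeprom_read(uint32_t addr, uint8_t *dst, size_t len)
{
    (void)addr; (void)dst; (void)len;
}

/* Case 1: code stored in external serial EEPROM. The boot ROM must copy
 * the whole image into SRAM over the serial interface before jumping to it. */
void boot_from_external_eeprom(void)
{
    spi_eeprom_read(0x0000u, sram_code, CODE_SIZE);   /* slow serial transfer */
    ((entry_fn)(uintptr_t)sram_code)();               /* then execute from SRAM */
}

/* Case 2: code stored in embedded NVM mapped into the CPU address space.
 * Execution starts in place at power-up: no copy, no shadow SRAM, no delay. */
#define ONCHIP_NVM_BASE 0x08000000u                   /* illustrative address */
void boot_from_onchip_nvm(void)
{
    ((entry_fn)ONCHIP_NVM_BASE)();                    /* execute-in-place */
}

In the first case, the serial copy dominates start-up time and requires shadow SRAM large enough to hold the image; in the second, the CPU simply fetches instructions over the wide on-chip bus.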
The third benefit to the OEM is security. Every device that contains a CPU is susceptible to hacking from a wide variety of attacks. With program code stored outside the SoC, the attack can be as simple as reading the code on the interface pins. Floating-gate NVM is also vulnerable to low-cost passive information attacks, including "glitching" and "data remanence", and to semi-invasive approaches, including "UV attacks", "fault injection", and "voltage contrast". Even program code stored on-chip in read-only memory can be read if the hacker uses an invasive attack such as delayering and microscopy or micro-probing, as any video game supplier that has relied on ROM cartridges to distribute game code can testify.
Embedding program code on-chip in anti-fuse NVM
Embedding program code on-chip in anti-fuse NVM greatly reduces these vulnerabilities. The means by which data is written gives anti-fuse NVM its resistance to hacking attacks. Anti-fuse NVM stores data in an array of bit cells that use a transistor as the storage element. Un-programmed, the bit-cell transistor presents a high-resistance path to current flow. To program the bit cell, an elevated voltage breaks down the gate oxide of the transistor, converting the open-circuit gate into a resistive path that conducts current when the cell is selected during a read cycle.
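Purely as a behavioral illustration (not a circuit model), the write-once nature of such an array can be sketched in a few lines of C: a bit can only move from its un-programmed state to its programmed state, and there is no way back. The names and the array size below are assumptions made for the example.

/* Behavioral model of a one-time-programmable (anti-fuse style) bit array. */
#include <stdbool.h>
#include <stdint.h>

#define OTP_BITS (256u * 1024u)                 /* illustrative 256 kbit array */

static uint8_t otp_array[OTP_BITS / 8u];        /* zero-initialized = un-programmed */

/* Read one bit: 0 = un-programmed (high-resistance cell),
 * 1 = programmed (gate oxide broken down, low-resistance path). */
static bool otp_read(uint32_t bit)
{
    return (otp_array[bit / 8u] >> (bit % 8u)) & 1u;
}

/* Program one bit. There is no erase path: once a bit has been set it stays
 * set for the life of the device, just as a blown anti-fuse cannot be reopened. */
static void otp_program(uint32_t bit)
{
    otp_array[bit / 8u] |= (uint8_t)(1u << (bit % 8u));
}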
Two benefits accrue from this form of storage. First, the storage elements are standard logic transistors, so the memory can be fabricated in any standard logic process, at any process node from 180nm down to the latest 20nm and FinFET processes. Second, because there is no stored charge or easily detectable physical alteration, the bit cell has proven impervious to hacking attacks. This is one reason that anti-fuse NVM is the method of choice for set-top box manufacturers storing mission-critical business information.
Evaluating the SoC manufacturer’s cost benefits of embedding code storage on-chip
The benefits to the SoC manufacturer of integrating anti-fuse NVM on chip for program storage are gained margin and independence from the vagaries of the supply chain and component availability. Integrating embedded flash NVM on-chip applies a 30-50 percent cost adder to the manufacture of the SoC; unless the embedded flash represents more than half the die area, this cost adder makes the SoC uncompetitive. Adding the program code on-chip in the form of ROM is cost-effective, but if the program code requires a change, the months of lost revenue caused by a design re-spin make ROM's cycle time untenable. Embedded eFuse technology is impractical for capacities beyond a handful of kilobits.
The most compelling reason for integrating anti-fuse NVM on-chip is that the SoC supplier captures the margin that the OEM would otherwise have paid to the external flash or EEPROM device vendor. Even accounting for the cost the SoC manufacturer incurs to capture this margin, the integration is well worth the effort and expense. Here's why. At the start of this article, $0.14 to $0.18 was cited as the price of an external 256 kbit memory component. Since the OEM's burdened cost for a commercial-grade 256 kbit device is around $0.15 (industrial or military grades will increase the cost), the SoC design company needs to provide the capability at or below the OEM's cost while still maintaining around 50 percent ASIC margin.
The cost to the SoC design company comprises the silicon area consumed by the anti-fuse NVM IP block on chip and the associated IP acquisition costs. Some of the silicon area used to incorporate the NVM IP is offset by eliminating the shadow SRAM and boot ROM needed to load and execute code stored in external serial EEPROM/flash. Replacing the serial EEPROM/flash with on-chip storage also eliminates the pins required for accessing the external memory. The burdened die-area cost for a 1Mb anti-fuse NVM IP block is on the order of $0.04 in a 55nm process.
Because the SoC design company is incorporating functionality (memory) that would previously have cost the OEM an external device plus supply chain management, inventory financing, and manufacturing cost, the SoC design company can expect to add roughly $0.10 to the price of its chip. The benefit to the OEM is the savings from eliminating the burdened cost of the external memory component from its BOM. The benefit to the SoC design company is the additional margin from incorporating the external component on-chip. The benefit to both is freedom from being held hostage by a shortage of the external memory chip, a win-win for all involved.
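Pulling the article's approximate figures together, the arithmetic behind this win-win can be sketched as follows. The exact numbers will vary with process node, volume, and product grade, and the variable names and split of costs are illustrative assumptions only.

/* Back-of-the-envelope comparison using the approximate per-unit figures
 * cited in this article; all values are illustrative US dollars. */
#include <stdio.h>

int main(void)
{
    /* External serial EEPROM path (commercial grade) */
    double eeprom_component  = 0.10;  /* OEM volume price, roughly $0.08-$0.10 */
    double eeprom_burden     = 0.05;  /* qualification, inventory, pick-and-place, test */
    double oem_external_cost = eeprom_component + eeprom_burden;     /* ~$0.15 burdened */

    /* Embedded anti-fuse NVM path */
    double nvm_die_area_cost = 0.04;  /* burdened die cost, ~1Mb block at 55nm */
    double soc_price_adder   = 0.10;  /* what the SoC supplier can add to the chip price */
    double soc_added_margin  = soc_price_adder - nvm_die_area_cost;  /* ~$0.06 per chip */
    double oem_net_savings   = oem_external_cost - soc_price_adder;  /* ~$0.05 per unit */

    printf("OEM burdened cost of external EEPROM: ~$%.2f\n", oem_external_cost);
    printf("SoC supplier's added margin:          ~$%.2f\n", soc_added_margin);
    printf("OEM net savings per unit:             ~$%.2f\n", oem_net_savings);
    return 0;
}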
About the author:
Emerson Hsiao is vice president of marketing at Kilopass Technology, Inc. He is responsible for Kilopass’ expanded marketing initiatives worldwide, from new eNVM product rollout, channels of distribution, promotional platforms, to IP enforcement. He joined Kilopass from Faraday Technology Corp., where he was USA general manager. He brings over a decade of semiconductor industry experience, which spans the spectrum of chip production from managing the operations and profit and loss of a major Taiwan fabless chip company, all the way to developing successful system-on-chip designs. As project manager, he has overseen the SoC design flow and tape-out process. As SoC designer, he has performed front-end chip design and back-end chip tape-out. Emerson holds a Ph.D. in Electrical Engineering from National Taiwan University in Taipei City, Taiwan.