As there would be at a face-to-face ISSCC, there are plenty of tutorial sessions, short courses and forums around the core technical program, which comprises some 36 sessions.

At the top of the ISSCC program, the invited plenary speakers usually reflect the pulse of the semiconductor industry. This year is no different, with Mark Liu, executive chairman of TSMC, taking the stage first, followed by Victor Peng, CEO of Xilinx.

TSMC is the most valuable semiconductor company by market capitalization (see TSMC becomes world’s biggest chip company) and its boss intends to sing the praises of the foundry-fabless business model and point out how it has opened up access to semiconductor manufacturing. But Liu will also look to the future and to the rising significance of considerations beyond monolithic integrated-circuit design and manufacturing: from materials to chiplet packaging and systems engineering.

Meanwhile Peng is also one of the men of the moment, having negotiated the sale of Xilinx to AMD (see AMD values Xilinx at $35 billion in take-over bid). His keynote will look to that other era-defining trend: the adoption of machine learning computation, or, as he calls it, Adaptive Intelligence.

Below I mention a few of what I consider to be the highlights of the main technical program.

Next: Infineon’s Soli

Starting with session 2 on 5G and radar, there is paper 2.3 from Infineon Technologies and Google on Soli, a radar-based interface control technology. Google has been working with Infineon since 2016 on the device, which works as a field disturbance sensor (see Google uses Infineon radar for gesture control and Google radar gesture sensor given OK by FCC). It has been developed by Google’s Advanced Technology and Projects group (ATAP), and a single chip incorporates the sensor and antenna array. It is designed to operate in the 57GHz to 64GHz band to capture hand gestures in 3D space, enabling touchless control of personal electronics such as smartwatches, smartphones, tablets and personal computers.
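For a feel of the physical scale involved, standard radar relations can be applied to the 57GHz to 64GHz band; note the waveform details below are generic textbook formulas, not figures taken from the paper itself:

```python
# Back-of-the-envelope figures for a 57-64GHz radar such as Soli,
# using standard radar relations (not details from the ISSCC paper).
C = 299_792_458.0  # speed of light, m/s

center_freq_hz = 60.5e9  # midpoint of the 57-64GHz band
bandwidth_hz = 7e9       # full 57-64GHz sweep, if the whole band were used

wavelength_mm = C / center_freq_hz * 1e3
# Theoretical range resolution of a radar is c / (2 * bandwidth).
range_resolution_mm = C / (2 * bandwidth_hz) * 1e3

print(f"wavelength ~= {wavelength_mm:.1f} mm")              # ~5.0 mm
print(f"range resolution ~= {range_resolution_mm:.1f} mm")  # ~21.4 mm
```

The millimetre-scale wavelength is what makes such a sensor small enough to fit inside a smartwatch while still resolving fine hand motion.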

Session 3 is devoted to three high-profile chip releases: the SoC that powers the Xbox Series X gaming console from Microsoft; the A100 Datacenter GPU from Nvidia; and Kunlun, a 14nm AI processor from Baidu.

Europe is represented by just one paper in session 4 on processors: paper 4.4 from Greenwaves, the University of Bologna and ETH Zurich, A 1.3TOPS/W @ 32GOPS fully integrated 10-core SoC for IoT End-Nodes with 1.7μW cognitive wake-up from MRAM-based state-retentive sleep mode (see Greenwaves stresses GAP9’s hearing credentials).

In Session 9 Sony presents its image sensor with an on-chip neural-network processor in paper 9.6: A 1/2.3inch 12.3Mpixel with on-chip 4.97TOPS/W CNN processor back-illuminated stacked CMOS image sensor. This sounds like a detailed exposition of the IMX500/IMX501 launched in June (see Sony adds AI processor to image sensors and Opinion: Sony’s plan for smart image sensors could be a game changer). The sensors use a stacked two-die configuration comprising a pixel chip and a logic chip. The signal acquired by the pixel chip is analysed by the AI processor, eliminating the need for high-performance processors or external memory and minimizing data transfer both within the system and up to the cloud. These latest sensors can perform AI processing in 3.1 milliseconds (MobileNet v1) on the logic chip. As this is within a single frame period, it allows real-time tracking of objects while recording video at 30fps.
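The real-time claim is easy to sanity-check: at 30fps each frame period is about 33.3ms, so the quoted 3.1ms inference leaves ample headroom. A trivial sketch of the arithmetic:

```python
# Sanity check on Sony's real-time claim: does a 3.1ms MobileNet v1
# inference fit within a single frame period at 30fps?
frame_rate_fps = 30
frame_period_ms = 1000 / frame_rate_fps  # ~33.3 ms per frame
inference_ms = 3.1                       # figure quoted for MobileNet v1

assert inference_ms < frame_period_ms
print(f"headroom per frame: {frame_period_ms - inference_ms:.1f} ms")  # ~30.2 ms
```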

Next: Compute-in-memory

Session 16 is one of several sessions that include papers on compute-in-memory (as do sessions 15 and 29). The topic is well represented at the upcoming ISSCC and is something TSMC is clearly pushing hard.

Paper 16.1 – A 22nm 4Mbit 8bit-precision ReRAM computing-in-memory macro with 11.91 to 195.7TOPS/W for tiny AI edge devices – is more academic, with the principal authors coming from National Tsing Hua University, Hsinchu, Taiwan, but with TSMC well represented in the authoring team (see TSMC offers 22nm RRAM, taking MRAM on to 16nm).

But paper 16.3 – A 28nm 384kbit 6T-SRAM computation-in-memory macro with 8bit of precision for AI edge chips – is an all-TSMC paper. Meanwhile Intel has an embedded-DRAM compute-in-memory paper in the same session (16.2) and TSMC has a paper on an SRAM block capable of 89TOPS/W and 16.3TOPS per square millimeter (16.4).
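Taken together, the two figures quoted for TSMC's SRAM block in paper 16.4 imply a power density. The division below is my own back-of-the-envelope arithmetic, not a figure from the paper:

```python
# Figures quoted for TSMC's compute-in-memory SRAM block (paper 16.4).
tops_per_watt = 89.0  # energy efficiency, TOPS/W
tops_per_mm2 = 16.3   # area efficiency, TOPS/mm^2

# Dividing area efficiency by energy efficiency gives the implied
# power density of the macro when running at full throughput.
watts_per_mm2 = tops_per_mm2 / tops_per_watt

print(f"implied power density ~= {watts_per_mm2:.2f} W/mm^2")  # ~0.18 W/mm^2
```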

Session 24 is on advanced embedded memories and most notable here is Samsung’s paper on its 3nm gate-all-around (GAA) manufacturing process. Paper 24.3 is A 3nm gate-all-around SRAM featuring an adaptive dual-BL and an adaptive cell-power assist circuit.

Paper 24.4 from TSMC fills in some more detail of that foundry’s 5nm manufacturing process offering with A 5nm 5.7GHz@1.0V and 1.3GHz@0.5V 4kbit standard-cell-based two-port register file with a 16T bitcell with no half-selection issue.

Paper 24.2 is nominally from Chinese academics at the Institute of Microelectronics of the Chinese Academy of Sciences, Beijing; Zhejiang Lab, Hangzhou; and Fudan University, Shanghai. The title is A 14nm-FinFET 1Mbit embedded 1T1R RRAM with a 0.022 square micron cell size using self-adaptive delayed termination and multicell reference. This gives rise to the thought that this may well represent part of the leading edge at Chinese foundry SMIC.

Session 30 is nominally about non-volatile memory but in fact portrays the state-of-the-art in 3D-NAND, with papers from SK Hynix (176 layers), Intel (144 layers), Samsung (160+ layers) and Kioxia/Western Digital (170+ layers) (see SK Hynix joins Micron on 176 layers for 3D-NAND flash).

Next: Back to biology

The final highlights come from session 34 on emerging imaging solutions. Paper 34.1 – An 8960-element ultrasound-on-chip for point-of-care ultrasound – comes from authors at Butterfly Network, which recently announced it is to be listed on the New York Stock Exchange via a merger with Longview Acquisition Corp. Paper 34.2 – 21pJ/frame/pixel imager and 34pJ/frame/pixel image processor for a low-vision augmented-reality smart contact lens – is authored by engineers at Mojo Vision.

Mojo Vision has been developing what it claims will be the first augmented reality smart contact lens, the Mojo Lens. The lens overlays images, symbols and text on users’ natural field of vision. Uses could include helping people with a variety of health impairments, serving as a mobility aid or displaying speech-to-text. It could also provide enhanced image overlays or allow professional workers to gain real-time information without pausing to look at a mobile device.

The company recently announced a development agreement with Menicon covering collaboration on contact lens materials, cleaning and fitting.

The performance and efficacy of such AR contact lenses remain to be seen, but the Q&A at the end of an ISSCC paper is one place to gain facts and form an opinion.

Registration for the 2021 virtual ISSCC is now open.

Related links and articles:

News articles:

TSMC becomes world’s biggest chip company

AMD values Xilinx at $35 billion in take-over bid

Google uses Infineon radar for gesture control

Google radar gesture sensor given OK by FCC

Greenwaves stresses GAP9’s hearing credentials

Sony adds AI processor to image sensors

Opinion: Sony’s plan for smart image sensors could be a game changer

TSMC offers 22nm RRAM, taking MRAM on to 16nm

SK Hynix joins Micron on 176 layers for 3D-NAND flash
