
This one is called ThinCI (pronounced “Think-Eye”), founded by Dinakar Munagala, an accomplished engineer/architect with an Intel pedigree.
What’s surprising about ThinCI (El Dorado Hills, Calif.), however, is its well-heeled, big-name backers with credible technological expertise, and a unique “massively parallel architecture” that Munagala describes as an engine “purposely built for vision processing and deep learning.”
Munagala promises that his patent-pending chip architecture can bring “two orders of magnitude improvements in performance” compared to other deep learning/vision processing solutions.
ThinCI, after operating in a garage on a shoe-string budget for six years, is emerging from stealth mode this week. It recently snagged two big automotive tier ones as institutional investors and secured a roster of who’s who in the tech industry as private investors.
The two tier ones signed up are DENSO International America, Inc., and Magna International Inc.
Private investors include: Dado Banatao, chairman of ThinCI’s board of directors and managing partner of Tallwood Venture Capital; Dadi Perlmutter, former executive vice president and general manager of Intel Corp.’s Architecture Group; Jürgen Hambrecht, chairman of the Supervisory Board of BASF SE and member of the Supervisory Board of Daimler AG; and several others of similar stature.
Simplicity, flexibility
Perlmutter, asked why he invested in ThinCI, told EE Times, “Through all my career I significantly appreciated simplicity and flexibility. I always preferred approaches that went away from brute force, and looked at the bottlenecks of a new computing problem, and found ways to eliminate the bottlenecks by finding new approaches. ThinCI has done just that.”
While other solutions are limited to moving data in and out to feed a big, hungry computing engine, Perlmutter described ThinCI computing as “tailored to Deep Learning graph analysis.” He said it “eliminates by a huge factor unnecessary access to memory.”
The end result? “This results not only in speeding up the computation but reducing cost and power,” he added.
Munagala told EE Times that he quit Intel six years ago with ambitions to develop a new chip architecture that can meet the needs of next-generation technologies, such as deep learning.
ThinCI, however, has yet to disclose details of its processor architecture. The company describes it as “a revolutionary graph streaming processor.”
Munagala explained to EE Times that this is “a massively parallel architecture designed to process multiple compute nodes of a task graph at the same time.”
Deep learning, in essence, is based on a set of algorithms that try to model high-level abstractions in data by using a deep graph with many processing layers, composed of multiple linear and non-linear transformations.
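For readers unfamiliar with the vocabulary, here is a minimal NumPy sketch (our illustration, not ThinCI code) of such a graph: each processing layer applies a linear transformation followed by a non-linear one, and the layers are chained into a deep stack.

```python
import numpy as np

def layer(x, W, b):
    """One node of the graph: a linear transformation (W @ x + b)
    followed by a non-linear one (ReLU)."""
    return np.maximum(0.0, W @ x + b)

# A toy "deep graph": three layers chained together.
rng = np.random.default_rng(0)
x = rng.standard_normal(8)                                        # input features
weights = [(rng.standard_normal((8, 8)), np.zeros(8)) for _ in range(3)]

for W, b in weights:
    x = layer(x, W, b)      # each layer's output feeds the next
print(x.shape)              # (8,)
```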
What’s unique about ThinCI’s architecture appears to lie in the way it handles such a deep graph.
Instead of processing data sequentially through a deep graph, with multiple processing layers, “ThinCI’s architecture streams data through the entire graph using extreme parallelism,” explained Munagala.
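ThinCI has not published the mechanics of its streaming, but the contrast Munagala draws can be sketched generically: in layer-at-a-time execution, one layer finishes over the whole input before the next begins, while in a streaming style small chunks of data flow through every node of the graph, so the nodes can work concurrently on different chunks. The toy Python below is our illustration of that reordering only, not ThinCI’s hardware; elementwise layers are used so the two orderings give identical results.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# Three toy elementwise "layers" standing in for graph nodes. (Elementwise ops
# are chosen so tiling is exact; real convolutions would need halo handling.)
graph = [lambda t: relu(2.0 * t), lambda t: relu(t - 0.5), lambda t: relu(0.1 * t)]
frame = np.arange(16.0)

# Layer-at-a-time: each layer consumes the entire frame before the next starts,
# so every full-size intermediate must be stored somewhere (typically DRAM).
out_layerwise = frame
for node in graph:
    out_layerwise = node(out_layerwise)

# Streaming: each small tile flows through the *whole* graph before the next tile
# enters. Intermediates stay tile-sized, and in hardware the graph nodes could
# work on different tiles at the same time, pipeline-style.
out_streamed = np.concatenate(
    [graph[2](graph[1](graph[0](tile))) for tile in np.split(frame, 4)]
)

assert np.allclose(out_layerwise, out_streamed)
```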
But that’s only half of the story.
As Perlmutter put it to EE Times, “The other major factor [about ThinCI’s processor architecture] is programmability.” He explained that many make the mistake of tailoring hardware to a given solution, while history’s lesson is that the problems keep changing and programmers have great innovative powers. What’s needed, Perlmutter noted, is “a way to program the processor and get new solutions that are ever evolving.”
Apparently that’s what the startup delivers. Munagala told EE Times that designers “benefit from a unique programming approach, while using industry standard API’s. [This] facilitates the ease of building deep networks that are optimized for their processor.”
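ThinCI has not named the specific APIs it supports, so, purely as an illustration of what “building a deep network with an industry-standard API” looks like, here is a small CNN described with PyTorch’s standard nn module. The premise is that a graph-streaming toolchain would ingest a model graph of this kind rather than hand-written kernels; whether ThinCI’s tools actually consume PyTorch models is our assumption for the example, not a disclosed fact.

```python
import torch
import torch.nn as nn

# A small CNN expressed with a standard framework API. A graph-aware compiler
# would map each node of this graph onto the target processor. (Illustrative
# only; ThinCI has not said which APIs its toolchain accepts.)
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 10),
)

dummy_frame = torch.randn(1, 3, 224, 224)   # one camera frame, NCHW layout
print(model(dummy_frame).shape)             # torch.Size([1, 10])
```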
Asked to compare ThinCI to other processors on the market, Munagala said, “Take an example of GPUs.” Although the GPU has been used for deep learning (by Nvidia, for example), “it isn’t designed for data analysis,” he said. “It’s inefficient at vision processing, and it is power and memory hungry.”
The problem with the DSP, he added, is that it is inefficient and complex to program.
What about hardwired devices? Those aren’t feasible, in his view, because algorithms for deep learning are evolving too fast for fixed solutions.
The CPU, meanwhile, is “only for general purpose” computing; it lacks performance while consuming too much power.
On-die graph execution
In contrast, the claim to fame of ThinCI’s visual computing engine is its ability to offer “on-die graph execution.” It’s designed to accelerate CNN (Convolutional Neural Networks), DNN (Deep Neural Nets) and other complex algorithms. More important, data coming from a camera sensor “would be stored and processed on chip, without DRAM access,” according to the company.

Fig 1: ThinCI hardware architecture (Source: ThinCI)
Hence Munagala believes the ThinCI visual computing engine can deliver higher performance, lower power consumption, better programmability and a smaller memory footprint than other processing architectures.
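The “without DRAM access” claim is easiest to appreciate with a back-of-envelope calculation. The numbers below are our own assumptions (a hypothetical four-layer network on a 1080p camera frame with 8-bit activations), not ThinCI’s figures; they only show the scale of memory traffic that keeping intermediate feature maps on chip avoids.

```python
# Back-of-envelope (assumed figures, not ThinCI's): a layer-at-a-time engine
# writes each layer's output to DRAM and reads it back for the next layer,
# whereas on-die graph execution keeps those intermediates in local memory.

# Hypothetical feature-map sizes for a 1080p frame, 8-bit activations.
feature_maps = [
    (1920, 1080, 16),   # layer 1 output: width, height, channels
    (960,  540,  32),
    (480,  270,  64),
    (240,  135,  128),
]

bytes_per_frame = sum(w * h * c for w, h, c in feature_maps)
# Each intermediate is written once and read back once => 2x traffic.
dram_traffic_per_frame = 2 * bytes_per_frame
print(f"{dram_traffic_per_frame / 1e6:.1f} MB of DRAM traffic per frame avoided")
# ~124 MB per frame here; at 30 fps that is roughly 3.7 GB/s of bandwidth
# (and the energy that goes with it) that on-chip execution would not spend.
```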
Of course, the vision processing SoC market is beginning to see a number of new processors. Movidius, recently acquired by Intel, is a good example; it offers a vision processor designed for the embedded market.
But when asked to compare his chip with architectures currently used in deep learning, Munagala said, “Our solution is more than 13x in performance to size, power.” More importantly, “Architecturally, ours is more forward looking, being able to solve problems other architectures aren’t able to.” He also pointed to the significance of its “simple programming model.”
Automotive Tier Ones
Investments from the two big tier ones clearly illustrate three things. First, the auto industry has a huge appetite for vision processing and deep learning technologies (they don’t think they have seen all the answers yet). Second, they are strongly committed to making autonomous driving a reality. Last but not least, tier ones in particular need a technology breakthrough that can give them the leverage to get back to the same table with the big boys.
Look no further than the Mobileye/Intel/BMW alliance announced last July. Conspicuously absent from that deal were tier ones.
“DENSO has been researching new developments in the area of computer vision processing, and our investment in ThinCI represents a strong belief that ThinCI’s technology will soon become a key component of next generation autonomous driving systems that require advanced computing techniques combined with deep learning capabilities,” Tony Cannestra, DENSO International America’s Director of Corporate Ventures, said in a statement.
Swamy Kotagiri, Chief Technology Officer at Magna, also said in a statement: “We are excited to combine the work ThinCI is doing in the area of processing and software with Magna’s overall understanding of automotive systems.”
Beyond automotive markets
ThinCI isn’t just hanging its hat on the automotive market, however. Automotive is a notoriously slow-moving industry, especially considering all the testing and certifications that must go into final products.
It makes sense for any startup to look for near-term opportunities elsewhere.
Vision processing and deep learning applications “can be applied everywhere” from natural user interfaces to surveillance cameras and even white goods, explained Munagala.
Perlmutter agreed. “Automotive is just one class of deep learning, but deep learning answers a huge new class of problems.”
He explained, “Creating adaptive solutions applies to all human-like actions. They range from vision to speech, optimizing algorithms on big data collections, and sophisticated bots and assistants.”
To Perlmutter, deep learning is hugely effective, especially “when we move away from smartphones, into Augmented Reality (AR)-like devices.” He said, “Our interaction with AR devices, and the level of sophistication we will need from them, in the office, on the manufacturing floor and on the go, will be much more than the clumsy way we interact with smartphones today.”
Fortunately, ThinCI’s edge in the embedded market is that its visual computing engine is very scalable. “We can address a diverse market ranging from wearables to servers with a common software stack,” said Munagala.

Fig 2: (Source: ThinCI)
Timeline
According to the startup, the architecture of its visual computing engine “is frozen and was proven in a test chip in 2015.”
The company needed to raise money to deliver its first production silicon, scheduled for 2017. ThinCI has had its complete software tool suite in beta test since early this year.
ThinCI’s investors generally appear very confident of what this team can deliver.
Jürgen Hambrecht, chairman of the Supervisory Board of BASF SE and Member of Supervisory Board of Daimler AG, told EE Times that he personally decided to invest in ThinCI because of “its outstanding team and their competence.”
Hambrecht likes the fact that “the startup is bringing together breakthrough hardware [and] software for very diverse industry applications.”
Junko Yoshida is Chief International Correspondent, EE Times
