
Generative AI or “GenAI” dominates discussions today, whether with technology founders, investors or even regulators. Most of the early cynicism that GenAI might be another “web3” or “crypto” opportunity has subsided. GenAI’s growing popularity is undeniable – it seems that almost everyone is interested in building solutions, discovering how to use and monitor them, and inevitably regulating them to ensure GenAI is productive rather than harmful.
“Given its potential for such profound impact, we at Intel Capital believe all of us – particularly the technology industry – need to be learning, and talking about Generative AI and its implications, as much as possible,” say managing and investment directors Srini Ananth, Assaf Araki, Avi Bharadwaj and Mark Rostick.
Intel Capital is one of the leading deep-tech investment funds, backing chip technology companies such as Ayar Labs, Astera Labs, Metalenz, Pliops, DustPhotonics and SiFive.
Intel Capital has been investing in AI for over a decade, including GenAI, with companies like Anyscale, Common Sense Machines, Inworld AI, Landing AI, Lilt and SambaNova Systems, among others.
What is GenAI?
The term GenAI is overused today and has become, correctly or incorrectly, a synonym for Artificial Intelligence (AI) or Large Language Models (LLMs).
Lately, Foundation Models (FMs) that can perform more than one task have emerged. FMs are large AI models trained on a vast quantity of data at scale, resulting in a model that can be used for a wide range of downstream tasks. An LLM is a particular type of FM: a language model that can recognize, summarize, translate, predict, search, rewrite, and generate text and other content based on knowledge gained from massive datasets. When people talk today about GenAI, they are typically talking about LLMs. Nevertheless, other types of foundation models exist for computer vision and speech, and we may even see FMs for other types of data or domains in the future.
FMs can generate new data that does not exist in the training data, hence "Generative AI." For example, an FM can create a new image of the Golden Gate Bridge in the style of a Picasso painting. GenAI can produce highly realistic and complex content that mimics human creativity. However, generating content is not the only use of FMs. They can also handle traditional artificial narrow intelligence (ANI) workloads, for instance entity extraction, summarization, object detection, image classification, and more. This versatility allows practitioners to replace a variety of disparate models, each requiring a unique dataset, training, and maintenance, with just one FM.
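To make that versatility concrete, here is a minimal sketch of a single model handling both a generative and a discriminative task through prompting alone. The Hugging Face transformers library and the public google/flan-t5-base checkpoint are illustrative assumptions, not tools named by Intel Capital:

```python
# One foundation model, several tasks: a sketch using an assumed
# open-source stack (Hugging Face transformers, flan-t5-base).
from transformers import pipeline

# A single general-purpose text-to-text model...
fm = pipeline("text2text-generation", model="google/flan-t5-base")

review = "The battery lasts two days, but the screen scratches easily."

# ...handles a generative task (summarization)...
print(fm(f"Summarize: {review}")[0]["generated_text"])

# ...and a discriminative task (sentiment classification) via prompting.
print(fm(f"Is the following review positive or negative? {review}")[0]
      ["generated_text"])
```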
A history of foundation models
Foundation models, especially LLMs, transformed the way models are created and used. In 2017, Google researchers published a paper titled "Attention Is All You Need," which established the foundation of LLMs by introducing the Transformer, a new neural network architecture based on attention. Before the Transformer, algorithms were trained for each specific task, which meant that a data scientist needed to gather proprietary data, pick algorithms, and execute a training process.
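For reference, the core of that architecture is scaled dot-product attention, which the paper defines as

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V$$

where $Q$, $K$ and $V$ are the query, key and value matrices and $d_k$ is the dimension of the keys. The softmax weights determine how strongly each token attends to every other token in the input.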
Since transformer-based models were introduced, language models have become reusable. Not only have language models continued to improve in accuracy, but because they are trained on vast amounts of public data, they encode general language knowledge that can be reused across different applications. This expanded access to trained models, making it easier and faster for AI experts to deploy AI solutions. In August 2021, the term foundation model (FM) was officially coined by the Stanford Institute for Human-Centered Artificial Intelligence's (HAI) Center for Research on Foundation Models (CRFM).
GenAI: Will the best model win?
Despite remarkable advances since Transformers were introduced, AI models are far from an off-the-shelf product. An AI model is a core building block of a product, but a full-fledged solution that a customer can readily buy and use requires more building blocks, ones that translate the model's value into business or consumer value. Historically, AI companies have done a poor job of providing these solutions.
Machine learning has existed in the industry for several decades and the deep learning breakthrough happened a decade ago, but we still have only a small number of AI companies that have crossed $100M in recurring revenue and even fewer that have achieved impressive exits for their founders or investors. A company that attempts to differentiate itself solely based on the AI model at the core of its product, and competes on model performance alone, will, in our opinion, have a very difficult time finding long-term success.
The fleeting advantage of one model over another was recently illustrated very clearly. In November 2022, OpenAI introduced ChatGPT, which set a record for the fastest-growing user base, crossing 100 million active users in two months. OpenAI offers a full-stack product with a prompt layer and an interface with an AI model at its core. While enterprises have used AI in B2B settings to reduce costs and increase revenue for years, consumers did not have the same access. On the B2C side, AI was embedded within the product and wasn't a product itself. ChatGPT changed that by offering a B2C full-stack product that lets anyone use Generative AI directly.
Data scientists, a small, highly skilled community, dominated the early days of deep learning. FM-based solutions open the door for a vastly larger pool of developers and consumers to use AI. OpenAI's solution demonstrated the opportunity before us all, but it also exposed the myriad challenges to building a business and the opportunities that exist in solving them.
Enterprise GenAI
When an enterprise deploys a software solution, including LLMs, there are several prerequisites typical of enterprise software: compatibility with the company's IT infrastructure, security certifications, logging, and documentation. Beyond those basics, GenAI deployments raise further requirements:
- Fine-tuning an FM involves adjusting and adapting a pre-trained model to perform specific tasks. It requires training the model further on a smaller, targeted dataset relevant to the desired task. General-purpose GenAI tools like ChatGPT are Swiss Army Knives for AI use cases; enterprise use cases, however, require a surgical knife that does one job very well. Given that most LLMs are trained on large corpora of public data, the models may lack an understanding of company- or industry-specific terminology. Enterprises optimize the model for their specific use cases by fine-tuning it using instruction tuning, reinforcement learning from human feedback (RLHF) or other methods (a minimal sketch follows this list).
- AI Explainability allows users to understand and trust the results and output created by machine learning algorithms. FMs are used for tasks ranging from generative (e.g., content creation) to discriminative (e.g., classification, sentiment analysis). For generative tasks, while there is some latitude in what counts as a "correct" result, significant deviation from reality (e.g., hallucinations) can be a critical issue for enterprises. For discriminative tasks, enterprises typically prefer the ability to explain how a certain result was derived. Moreover, in certain industries like consumer finance, being able to explain the logic behind discriminative models may be a regulatory requirement.
- Multimodality in AI combines multiple types of data, such as tables, images and text, enabling more accurate predictions on real-world problems. Enterprises have access to data in several formats, and LLM solutions that support only unstructured text are not enough; it is critical that models can extract data from images and tables to ensure the highest-quality outcome.
- Containerized Models bundle an application's code with all the files and libraries needed to run on any infrastructure. Foundation models remove much of the complexity of collecting data and training models but add other types of complexity. To bridge the gap from AI model to solution, an enterprise needs building blocks that fine-tune the model with confidential data, build the prompting layer (an API layer), manage data embedding in a vector database (sketched after this list), and optimize hardware to reduce the total cost of ownership. The enterprise needs a containerized, optimized solution that includes all of these building blocks to develop and run foundation models.
- Run Everywhere, in the cloud or on-prem, via SaaS or in your VPC. Some AI solutions focus on productivity and marketing data, while others may leverage confidential data to improve product development and increase revenue. The sensitivity of the task and data directly influences whether the foundation model is run in the cloud, SaaS, VPC or on-premises. A full-stack foundation model product needs to be able to run anywhere the customer requires, and a SaaS-only solution will drastically limit the GenAI enterprise use cases.
- Privacy and Security of proprietary data is a top concern of enterprises looking to use foundation models. In the most common modality of using closed-source FMs like GPT-4, users are required to pass potentially sensitive and confidential information through the FM's API. One concern is that such data may be used by FM providers to train their models, which could erode the competitive advantage of enterprise customers. Additionally, companies often worry about inadvertently leaking sensitive information such as Personally Identifiable Information. OpenAI has integrated privacy guardrails into its business offering, but it's too soon to tell if the security efforts are sufficient to win enterprise adoption. Alternatively, enterprises can use open-source models deployed within their own infrastructure, with the tradeoff of increased implementation complexity.
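As a concrete illustration of the fine-tuning step referenced in the first bullet, here is a minimal sketch. The Hugging Face transformers and datasets libraries, the distilbert-base-uncased checkpoint and the toy two-example dataset are all illustrative assumptions:

```python
# A minimal fine-tuning sketch: adapt a pre-trained model to a
# company-specific task using a small, targeted dataset.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

# A toy in-house dataset: tickets labelled urgent (1) or routine (0).
data = Dataset.from_dict({
    "text": ["Fab line 3 is down", "Please update my mailing address"],
    "label": [1, 0],
}).map(lambda ex: tokenizer(ex["text"], truncation=True,
                            padding="max_length", max_length=32))

# Further training on the targeted data specializes the general model.
Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
).train()
```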
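And here is a sketch of the data-embedding building block from the Containerized Models bullet, which also underpins the semantic search use case discussed later: documents and queries are mapped into a shared vector space so semantically similar text can be retrieved. The sentence-transformers library and the in-memory NumPy "index" stand in for a production embedding model and vector database:

```python
# Embedding plus nearest-neighbor retrieval: a stand-in for a
# real vector database (assumed stack: sentence-transformers, NumPy).
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

docs = ["Expense reports are due on the 5th.",
        "The VPN requires two-factor authentication.",
        "Fab maintenance windows are announced on the wiki."]
doc_vecs = encoder.encode(docs, normalize_embeddings=True)

query_vec = encoder.encode(["How do I log in remotely?"],
                           normalize_embeddings=True)

# Cosine similarity (dot product of normalized vectors) finds the
# semantically closest document, even with no keyword overlap.
best = int(np.argmax(doc_vecs @ query_vec.T))
print(docs[best])  # expected: the VPN document
```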
The future of GenAI
Looking ahead five to ten years, Intel Capital predicts algorithms will be commoditized; any algorithmic advantage will not last long. The firm does not anticipate that one model will rule them all; rather, there will be different foundation models for different data types and segments, some generalized and others specialized for a specific task.
An instructive analogy is the DB-Engines ranking, which lists hundreds of databases. These are divided into some twenty different models, including relational, key-value, graph and others. Each model has its advantages: some are in-memory databases with fast data retrieval, while others scale out across several servers and can store and retrieve huge datasets. No single database can execute all enterprise workloads; an enterprise uses a variety of databases for different workloads.
Once an enterprise chooses a database for a specific data type or workload, it tends to adopt the same database for similar use cases, minimizing the number of technologies it must support. We anticipate GenAI will follow the same pattern: there is no one-size-fits-all solution, and different models will be used for different use cases.
An enterprise will implement a handful of FMs as an Enterprise Intelligence Layer (or Enterprise Brain) that serves dozens or even hundreds of use cases. In some applications, the FM will replace traditional ML solutions like entity extraction; in others, it will enable new uses, like semantic search.
Common Enterprise Use Cases
- Search: Enterprise search has always been the proverbial holy grail of knowledge management in companies. Historically, search within the enterprise was riddled with issues due to the absence of indexes and the inability to contextualize user input. FMs enable semantic search: techniques and algorithms that understand the relationships between words and the concepts they represent.
- Summarization: LLMs excel at condensing lengthy passages of text into concise summaries that capture essential information such as customer feedback, training materials, and legal terms.
- Natural Language Querying: An interesting emergent behavior of LLMs is their ability to convert natural language into code that can run on existing systems. Specifically, when applied to converting natural language to SQL, LLMs can further accelerate the democratization of data by enabling far more users to query data without technical know-how; a minimal sketch follows this list.
- Content Creation: LLM-based applications are already being employed to create marketing copy and other creative assets. Increasingly, other types of content, including requirements specifications, training manuals, test scripts, technical documentation, and audio and visual media, could end up being generated by LLMs paired with relevant "knowledge engines."
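A minimal sketch of the natural-language-querying pattern referenced above, assuming the openai Python client; the model name, prompt and table schema are placeholders rather than a recommended setup:

```python
# Natural language to SQL via an LLM (assumed: openai client,
# OPENAI_API_KEY set in the environment; model name is illustrative).
from openai import OpenAI

client = OpenAI()

schema = "orders(id, customer_id, total, created_at)"
question = "What was the average order total last month?"

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Translate the user's question into SQL for this "
                    f"schema: {schema}. Reply with SQL only."},
        {"role": "user", "content": question},
    ],
)
print(resp.choices[0].message.content)  # candidate SQL, to be validated
```

In practice, the prompting layer would validate the generated SQL before executing it against the warehouse rather than running it blindly.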
The use cases listed above are only a handful of the possible applications across most enterprises. Beyond these, a plethora of industry- and function-specific deployments is emerging, from system to chip design.
The industry has a ton of learning and work to do to realize the potential of Generative AI in the enterprise, says Intel Capital. Plenty of the requirements mentioned above don't exist yet, at least not for GenAI-based applications, and others will change completely given the pace and extent of change we've seen so far in this domain.