The reference architecture for AI applications gives enterprises a comprehensive, out-of-the-box configuration for scaling conversational AI to millions of customer interactions. According to the company, its model-creation and knowledge-graph workloads run optimally on a standard Intel Xeon Gold 6140 processor-based server, letting businesses quickly deploy enterprise-grade virtual assistants on existing data center resources without significant investment or a steep learning curve.

“Our goal from the beginning has been to design and deliver conversational AI technology that effortlessly becomes part of large enterprises,” says Avaamo CEO Ram Menon. “Working with Intel we’ve been able to create a reference architecture that’s a one-stop shop for large enterprises looking to massively scale their in-house conversational AI deployments. This technology will allow enterprises to focus on the benefits of AI for better customer relationships and a streamlined workforce.”

The conversational AI platform is built to address the traditional “cold-start” problem in AI by:

  • Ingesting unclassified data
  • Performing unsupervised machine learning (ML) model creation
  • Optimizing the model for runtime execution
  • Enhancing the ML model with customer-specific knowledge resources
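The four stages above can be sketched as a simple pipeline. This is an illustrative mock-up only: the function names, the toy clustering stand-in for unsupervised model creation, and the pruning stand-in for runtime optimization are assumptions, not Avaamo's actual API.

```python
# Hypothetical sketch of the four-stage cold-start pipeline described above.
# All names and logic are illustrative stand-ins, not Avaamo's implementation.
from dataclasses import dataclass, field

@dataclass
class Model:
    clusters: dict
    knowledge: list = field(default_factory=list)

def ingest(raw_docs):
    """Stage 1: ingest unclassified data (here: naive tokenization)."""
    return [doc.lower().split() for doc in raw_docs]

def train_unsupervised(token_docs):
    """Stage 2: unsupervised ML model creation (stand-in: group docs by first token)."""
    clusters = {}
    for tokens in token_docs:
        key = tokens[0] if tokens else ""
        clusters.setdefault(key, []).append(tokens)
    return Model(clusters=clusters)

def optimize(model):
    """Stage 3: optimize for runtime execution (stand-in: drop singleton clusters)."""
    model.clusters = {k: v for k, v in model.clusters.items() if len(v) > 1}
    return model

def enhance(model, knowledge):
    """Stage 4: enhance the model with customer-specific knowledge resources."""
    model.knowledge.extend(knowledge)
    return model

docs = ["reset my password", "reset account pin", "order a new card"]
model = enhance(optimize(train_unsupervised(ingest(docs))), ["IT helpdesk FAQ"])
print(sorted(model.clusters))  # → ['reset']
print(model.knowledge)         # → ['IT helpdesk FAQ']
```

The point of the sketch is the shape of the workflow: raw, unlabeled data flows through unsupervised training and optimization before domain knowledge is layered on, which is how the platform sidesteps the need for a large labeled training set up front.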

The technology scales accordingly: a 72-core single-server configuration can handle up to 100 concurrent sessions, giving large enterprises the flexibility to share Intel hardware across standard and AI-specific computing workloads. For more, see “Avaamo Conversational AI Reference Architecture with Intel® Xeon® Scalable Processors.”

