
Foxconn to offer manufacturing and supply chain AI to partners

Business news | By Nick Flaherty



The Hon Hai Research Institute has developed the first Traditional Chinese large language model (LLM) for manufacturing, supply chain management and code generation, and will offer it to partners of manufacturing giant Foxconn.

FoxBrain is based on Meta’s open source Llama 3.1 LLM with 70bn parameters and is optimised for manufacturing and supply chain AI, with capabilities covering data analysis, decision support, document collaboration, mathematics, reasoning and problem solving, and code generation. It is also tuned to the language style of Taiwanese users and shows strong performance in mathematical and logical reasoning tests.
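
As a Llama 3.1 derivative, FoxBrain should in principle be usable through standard open-model tooling once the weights are released. The sketch below shows how such a model could be queried with the Hugging Face Transformers library; the repository name is a placeholder, since Foxconn has not yet published the weights.

```python
# Minimal sketch of querying a Llama 3.1-derived model such as FoxBrain with
# Hugging Face Transformers. The model identifier below is hypothetical;
# no official repository for the FoxBrain weights has been published yet.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "foxconn/FoxBrain-70B"  # placeholder repository name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, device_map="auto", torch_dtype="auto"
)

# Traditional Chinese prompt: "Summarise the risks in this week's supply chain report."
prompt = "請總結本週供應鏈報告中的風險。"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```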

The model will be used to drive the upgrade of Foxconn’s three major platforms: Smart Manufacturing, Smart EV and Smart City.

The model was trained on 120 Nvidia H100 GPUs connected over Quantum-2 InfiniBand networking in just four weeks, highlighting how quickly specialist models can now be trained. FoxBrain will also be open sourced and shared publicly in the future. More details of the model will be presented at the Nvidia GPU Technology Conference (GTC) next week.
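
Training jobs on that scale are typically run as multi-node, data-parallel workloads in which the NCCL communication library moves gradients over the InfiniBand fabric. The snippet below is a generic PyTorch sketch of that pattern, not Foxconn’s actual training code, which used Nvidia’s NeMo stack (see below).

```python
# Generic sketch of a multi-node data-parallel training step in PyTorch.
# One process per GPU is launched by a scheduler such as torchrun or Slurm;
# NCCL routes inter-node gradient traffic over the InfiniBand fabric.
# Illustrative only; this is not the FoxBrain training code.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")        # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])     # set by torchrun
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(4096, 4096).cuda()     # stand-in for the LLM
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):                         # stand-in training loop
        x = torch.randn(8, 4096, device=local_rank)
        loss = model(x).pow(2).mean()
        loss.backward()                            # gradients all-reduced across GPUs
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```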

The Hon Hai Research Institute is backed by Foxconn (Hon Hai Precision Industry), which is also a key manufacturing partner for Nvidia.

“In recent months, the deepening of reasoning capabilities and the efficient use of GPUs have gradually become the mainstream development in the field of AI. Our FoxBrain model adopted a very efficient training strategy, focusing on optimizing the training process rather than blindly accumulating computing power,” said Dr. Yung-Hui Li, Director of the Artificial Intelligence Research Center at Hon Hai Research Institute. “Through carefully designed training methods and resource optimization, we have successfully built a local AI model with powerful reasoning capabilities.”

In test results, FoxBrain showed comprehensive improvements in mathematics over the base Meta Llama 3.1 model. It made significant progress in mathematical tests compared to Taiwan Llama, currently the best Traditional Chinese large model, and surpassed Meta’s current models of the same class in mathematical reasoning. While there is still a slight gap with DeepSeek’s distillation model, its performance is already very close to world-leading standards, says the Institute.

“Although FoxBrain was originally designed for internal group applications, in the future, the Group will continue to collaborate with technology partners to expand FoxBrain’s applications, share its open-source information, and promote AI in manufacturing, supply chain management, and intelligent decision-making,” it said.

During model training, Nvidia provided support through its Taipei-1 supercomputer and technical consultation, enabling the Hon Hai Research Institute to complete the model’s pre-training with Nvidia NeMo microservices.

www.foxconn.com

