Foxconn Unveils FoxBrain, Taiwan's First Traditional Chinese LLM

Asia Manufacturing Review Team | Tuesday, 11 March 2025

FoxBrain is a Traditional Chinese large language model developed by Hon Hai Research Institute, Foxconn's AI research arm, over roughly four weeks of training. The LLM is intended to enhance manufacturing and supply chain management.

Optimized for local language patterns, FoxBrain demonstrates strong reasoning capabilities, supporting tasks such as data analysis, decision support, document collaboration, mathematics, and even code generation.

While initially intended for Foxconn's internal use, the company plans to release the LLM as open-source technology in the near future, expanding access. Dr. Yung-Hui Li, director of the AI Research Center at Hon Hai Research Institute, attributed the model's results to a well-calibrated, efficient training strategy rather than to raw computing power.

FoxBrain was trained in about four weeks on 120 Nvidia H100 GPUs linked by Nvidia's Quantum-2 InfiniBand networking, which allows data to transfer extremely quickly. Nvidia supported the model's pre-training through its Taipei-1 supercomputer and the NeMo framework. FoxBrain is based on Meta's Llama 3.1 architecture and comprises 70 billion parameters, giving it the capacity to handle complex information.
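
For readers who want a concrete picture of what building on an existing architecture involves, the minimal sketch below loads a Llama 3.1 70B base checkpoint with the Hugging Face transformers library as a starting point for continued domain pre-training. This is an illustration only: the checkpoint ID is an assumption, and Foxconn's actual training pipeline ran on Nvidia's NeMo framework, not this code.

```python
# Illustrative sketch only: FoxBrain itself was trained with Nvidia NeMo on
# 120 H100 GPUs. This shows the general pattern of starting from a Llama 3.1
# base model; the checkpoint ID is an assumption for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL = "meta-llama/Llama-3.1-70B"  # assumed Hub ID; gated, requires access

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    torch_dtype=torch.bfloat16,  # typical precision on H100-class hardware
    device_map="auto",           # shard the 70B weights across available GPUs
)

# Continued pre-training on domain text (e.g. Traditional Chinese manufacturing
# and supply-chain corpora) would then proceed with a standard causal
# language-modeling objective.
```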

According to Foxconn, FoxBrain outperformed Llama-3-Taiwan-70B, a rival Traditional Chinese model, especially in mathematics and logical reasoning. It was trained on 98 billion tokens of Traditional Chinese data spanning 24 topic categories, and it offers a context window of 128,000 tokens for dialogue. Foxconn aims to showcase FoxBrain's capabilities at Nvidia GTC 2025 on March 20, expanding AI applications in manufacturing and supply chain management.
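
To illustrate what a 128,000-token context window means in practice, the hedged sketch below checks whether a long document fits into a single prompt. The repository name FoxconnAI/FoxBrain is purely hypothetical, since no public checkpoint has been published at the time of writing; the context size is taken from Foxconn's announcement.

```python
# Hypothetical usage once FoxBrain is open-sourced as planned; the model ID
# below is an assumption, not a published identifier.
from transformers import AutoTokenizer

MODEL_ID = "FoxconnAI/FoxBrain"  # hypothetical repository name
CONTEXT_WINDOW = 128_000         # tokens, per Foxconn's announcement

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

def fits_in_context(document: str, reserve_for_output: int = 2_000) -> bool:
    """Check whether a document, plus room for a reply, fits in one prompt."""
    n_tokens = len(tokenizer.encode(document))
    return n_tokens + reserve_for_output <= CONTEXT_WINDOW
```

A window of this size would let an assistant ingest an entire production report or multi-document specification in one pass, which is what makes the figure relevant to manufacturing and supply chain use cases.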

