Freeman researcher helps create open-source AI that rivals industry leaders

A Freeman School of Business professor is part of a research team behind a new large language model (LLM) that delivers comparable performance to leading proprietary AI systems while offering something they don’t: complete transparency.
Yumei He, assistant professor of management science, collaborated with researchers from institutions including Northeastern University, Harvard University, Cornell University and the University of Washington, along with three companies, to create Moxin 7B, a fully open-source language model. Its entire design, including the model architecture and weights as well as the pre-training code, training data, and all intermediate and final checkpoints, is freely available to the public.
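In practical terms, open weights mean anyone can download and run the model with standard tooling. Below is a minimal sketch of what that looks like with the Hugging Face transformers library, assuming the checkpoint is published under a repository ID such as moxin-org/moxin-llm-7b (an assumption, not stated in the article; check the project's release page for the actual location):

```python
# Minimal sketch: running an open-weight model with Hugging Face transformers.
# The repo ID below is an assumption, not confirmed by the article;
# substitute the ID from the project's release page.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "moxin-org/moxin-llm-7b"  # hypothetical repository ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Briefly explain why fully open-source AI models matter for research."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```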
“Many commercial AI models are like black boxes, incredibly powerful but impossible to examine,” explains He. “By making every aspect of Moxin 7B accessible, we’re inviting the entire scientific community to understand how it works, verify its safety, and build upon our research.”
This transparency addresses growing concerns about AI development. While companies often describe their models as “open,” many withhold critical components such as training code and data, creating barriers for researchers and businesses that want to understand or improve these systems and preventing transparent, responsible use of the models.
Moxin 7B achieves the highest classification level, “open science,” under the Model Openness Framework (MOF), a system that rates AI models on their completeness and openness, yet it still performs impressively. In zero-shot evaluations, the base model achieved an average score of 75.44 across multiple benchmarks, outperforming other 7-billion-parameter models such as Mistral-7B and Meta’s Llama 2-7B.
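“Zero-shot” here means the model is scored on benchmark questions with no solved examples included in the prompt. A minimal sketch of the idea for a single hypothetical multiple-choice item, not an actual benchmark harness (the repo ID and the question are illustrative assumptions):

```python
# Sketch of zero-shot multiple-choice scoring: pick the answer choice the
# model assigns the highest likelihood, with no worked examples in the prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "moxin-org/moxin-llm-7b"  # hypothetical repository ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

question = "Q: What gas do plants absorb during photosynthesis?\nA:"
choices = [" Carbon dioxide", " Oxygen", " Nitrogen"]

def choice_score(prompt: str, choice: str) -> float:
    # Sum of log-probabilities the model assigns to the answer tokens.
    # (Simplification: assumes the prompt tokenizes to an exact prefix.)
    full = tokenizer(prompt + choice, return_tensors="pt").input_ids
    n_prompt = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    with torch.no_grad():
        logits = model(full).logits
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)  # position i predicts token i+1
    targets = full[0, 1:]
    rows = list(range(n_prompt - 1, full.shape[1] - 1))
    return logprobs[rows, targets[n_prompt - 1:]].sum().item()

best = max(choices, key=lambda c: choice_score(question, c))
print("Zero-shot pick:", best)
```

Real benchmark suites run thousands of such items per task; the reported 75.44 is the average of the resulting per-benchmark scores.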
The research team also developed several specialized versions of Moxin 7B, tuned for applications such as complex reasoning, that match or exceed proprietary models.
“This technology could democratize access to advanced AI,” He notes. “Smaller organizations and academic institutions can now leverage powerful specialized language models without the costs of commercial alternatives. These smaller, application-focused models will enable more organizations to deploy agentic AI.”
The research team has released all components needed to reproduce and improve the model, including pre-training code, training data, and development checkpoints, a level of transparency rare in today’s AI landscape.
“We’re excited to see what innovations emerge when we remove barriers to understanding how these systems work,” says He. “I’m looking forward to seeing what users create with it.”
For more information about Moxin 7B, visit the developers’ page.