Meta Is About to Release Its First AI Models Built Under Alexandr Wang — Some Will Be Open Source

Meta is preparing to release its first batch of AI models developed under Alexandr Wang, and according to Axios, some of those models will be offered under an open source license. The timing matters: Wang joined Meta just last year through the company's roughly $15 billion investment in Scale AI, where he served as CEO, in one of the largest AI talent deals in history.
The new models are expected to arrive soon; Wang himself has said they're coming. What they represent, in terms of capability and Meta's broader AI strategy, is something the company has been deliberately quiet about until now.
Who Is Alexandr Wang, and Why Does This Matter?
Scale AI was arguably the most important AI infrastructure company most people had never heard of. It built the data labeling and annotation pipelines that trained many of the frontier models — including those from OpenAI and Anthropic — and developed deep expertise in evaluating model quality and safety.
When Meta acquired Scale AI and brought Wang on board, the implication was that Meta wasn't just buying a company; it was buying a way of building AI differently — with better data pipelines, more rigorous evaluation, and cleaner training practices. The models about to be released are the first visible output of that investment.
The Open Source Strategy — But Not for Everything
Meta's relationship with open source AI is complicated. It pioneered the open release of large language models with the Llama series — a decision that reshaped the competitive landscape by giving developers free access to capable models and challenging OpenAI's closed approach.
But Meta has recently pulled back from that position for its most capable models. Reports from late 2025 indicated that its internal "Mango" and "Avocado" models — its most powerful — would remain proprietary.
The new Wang-era models appear to follow a hybrid strategy: smaller and mid-tier models released openly, with the largest models kept closed. This mirrors what Google has done with Gemma versus Gemini, and what Mistral has done with its own open/closed split.
What Meta Is Actually Trying to Do
The strategic logic is straightforward: open source builds the developer ecosystem. Developers who build on Meta's open models create applications, fine-tunes, and integrations that reinforce Meta's platforms. It also signals Meta as a trustworthy actor in the AI space — important for regulatory relationships and public perception.
But keeping the most powerful models proprietary preserves competitive advantage. OpenAI's and Anthropic's frontier models are closed. If Meta's best models were open, that would undercut its ability to compete at the enterprise level, where capability is the differentiator.
The hybrid play is a bet that Meta can win developer mindshare with open models while still competing at the frontier with closed ones. Whether the Wang-era models are actually competitive at the frontier, and whether that roughly $15 billion investment proves its worth, is the question the AI industry will be watching closely when they ship.