Social media conglomerate Meta has built a single AI model capable of translating across 200 different languages, including many not supported by current commercial tools.
The company is open-sourcing the project in the hope that others will build on its work.
The AI model is part of an ambitious R&D project by Meta to create a so-called “universal speech translator,” which the company sees as critical for growth across its many platforms — from Facebook and Instagram to developing domains like VR and AR. Machine translation allows Meta to understand its users better (and improve the advertising systems that generate 97 percent of its revenue). It could also be the foundation of a killer app for future projects like its augmented reality glasses.
Experts in machine translation said that Meta’s latest research was ambitious and thorough, but noted that the quality of some of the model’s translations would likely be well below that of better-supported languages like Italian or German.
“The major contribution here is data,” said Professor Alexander Fraser, a specialist in computational linguistics at LMU Munich in Germany. “What is significant is 100 new languages.”
Meta’s Ambitions to Build a ‘Universal Translator’ Continue
Meta’s achievements stem, somewhat paradoxically, from both the scope and the focus of its research. While most machine translation models handle only a handful of languages, Meta’s model is all-encompassing: it’s a single system able to translate in more than 40,000 different directions between 200 different languages. And Meta is particularly interested in including “low-resource languages” in the model — languages with fewer than 1 million publicly available translated sentence pairs. These include many African and Indian languages not usually supported by commercial machine translation tools.
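For a sense of where that figure comes from: each ordered pair of distinct source and target languages counts as one translation direction. The snippet below is illustrative arithmetic only, not Meta’s code (the second call is hypothetical and simply shows that counting a couple of extra language varieties pushes the total past 40,000):

```python
def translation_directions(n: int) -> int:
    """Number of ordered (source, target) pairs among n distinct languages."""
    return n * (n - 1)

print(translation_directions(200))  # 39800: just under the cited 40,000
print(translation_directions(202))  # 40602: two extra varieties clears 40,000
```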
Meta AI research scientist Angela Fan, who worked on the project, said the team was motivated by the lack of attention paid to lower-resource languages in this field. “Translation doesn’t even work for the languages we speak, so that’s why we started this project,” said Fan. “We have this inclusion motivation: ‘what would it take to produce translation technology that works for everybody?’”
The model, described in a research paper, is already being tested to support a project that helps Wikipedia editors translate articles into different languages. The techniques developed in building the model will soon be integrated into Meta’s translation tools.
Translation is a challenging task at the best of times, and machine translation can be notoriously unreliable. When applied at scale across Meta’s platforms, even a small number of errors can produce disastrous results.
To assess the quality of the new model’s output, Meta created a test dataset consisting of 3,001 sentence pairs for each language covered by the model, each translated from English into the target language by someone who is both a professional translator and a native speaker.
The researchers ran these sentences through their model and compared the machine’s translations with the human reference sentences using a benchmark standard in machine translation called BLEU.
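For readers unfamiliar with the mechanics, this kind of comparison can be reproduced with off-the-shelf tooling. Below is a minimal sketch using the open-source sacrebleu library; the sentences are invented placeholders, and this illustrates the general procedure rather than Meta’s actual evaluation code:

```python
# pip install sacrebleu
import sacrebleu

# Placeholder model outputs and human reference translations; in an
# evaluation like Meta's, the references would come from the
# professionally translated test set.
model_outputs = [
    "The cat sat on the mat.",
    "It is raining heavily today.",
]
human_references = [
    "The cat sat on the mat.",
    "It's raining hard today.",
]

# corpus_bleu takes the system outputs plus a list of reference sets
# (BLEU supports multiple references per sentence; here we use one).
bleu = sacrebleu.corpus_bleu(model_outputs, [human_references])
print(f"BLEU: {bleu.score:.1f}")  # corpus-level score on a 0-100 scale
```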

BLEU allows researchers to assign numerical scores measuring the overlap between pairs of sentences, and Meta says its model delivers a 44 percent improvement in BLEU scores across supported languages. However, as is often the case in AI research, judging progress based on benchmarks requires context.
Although BLEU scores allow researchers to compare the relative progress of different machine translation models, they do not offer an absolute measure of a system’s ability to produce human-quality translations.
Meta’s dataset consists of 3,001 sentences per language, and each has been translated only by a single individual. This provides a baseline for judging translation quality, but the total expressive power of an entire language cannot be captured by such a tiny sliver of actual speech. This problem is by no means limited to Meta AI — it affects all machine translation work and is especially acute when considering low-resource languages — but it illustrates the scope of the field’s challenges.
Christian Federmann, a principal research manager who works on machine translation at Microsoft, said the project as a whole was “commendable” in its ambition to expand the scope of machine translation software to lesser-covered languages, but noted that BLEU scores alone can provide only a limited measure of output quality.
Translation, Federmann noted, is a creative, generative process that can result in many different translations that are all equally good. That makes it impossible to define general levels of “BLEU score goodness,” as they depend on the test set used, the quality of its references, and the inherent properties of the language pair under investigation.
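That caveat is easy to demonstrate in code. In the sketch below (again using sacrebleu, with invented sentences), two translations a human might judge equally acceptable score very differently against a single reference, simply because one reuses the reference’s wording:

```python
import sacrebleu

reference = ["The meeting was postponed until next week."]

# Two renderings a human reviewer might rate as equally good.
literal_match = "The meeting was postponed until next week."
valid_paraphrase = "They pushed the meeting back to next week."

for hypothesis in (literal_match, valid_paraphrase):
    score = sacrebleu.sentence_bleu(hypothesis, reference).score
    print(f"{score:5.1f}  {hypothesis}")

# The exact copy scores 100; the paraphrase scores far lower, even
# though both convey the same meaning. With one reference per sentence,
# BLEU rewards surface overlap rather than translation quality per se.
```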
Work on AI translation is often presented as an unambiguous good, but creating this software comes with particular difficulties for speakers of low-resource languages. For some communities, the attention of Big Tech is simply unwelcome: they don’t want the tools needed to preserve their language in anyone’s hands but their own. For others, the issues are less existential and more concerned with questions of quality and influence.
Meta’s engineers explored some of these questions by conducting interviews with 44 speakers of low-resource languages. These interviewees raised a range of positive and negative effects of opening up their languages to machine translation.
One clear positive is that such tools allow speakers to access more media and information. They can be used to translate rich resources, like English-language Wikipedia and educational texts. At the same time, if low-resource language speakers consume more media generated by speakers of better-supported languages, this could diminish the incentives to create such materials in their own language.
Balancing these issues is challenging, and the problems encountered even within this recent project show why. Meta’s researchers note, for example, that of the 44 low-resource language speakers they interviewed, the majority were “immigrants living in the US and Europe, and about a third of them identify as tech workers” — meaning their perspectives likely differ from those of their home communities and are biased from the start.
Professor Fraser of LMU Munich said that despite this, the research was undoubtedly conducted in a way that increasingly involves native speakers, and that such efforts were “laudable.”
“More of this from companies like Google, Meta, and Microsoft, all of whom have substantial work in low-resource machine translation, is excellent for the world,” said Fraser. “And of course, some of the thinking behind why and how to do this is coming out of academia, as well as the training of most of the listed researchers.”
Meta attempted to preempt many of these social challenges by broadening the range of expertise it consulted on the project. Its decision to open-source as many elements of the project as possible — from the model to the evaluation dataset and training code — should also help redress the power imbalance inherent in a corporation working on such an initiative. Meta is also offering grants to researchers who want to contribute to such translation projects but cannot finance their own work.