The system, dubbed M2M-100, is still under research but will gradually be rolled out to translate Facebook users' news feed posts.
Keep in mind that roughly two-thirds of Facebook users do not have English as their mother tongue.
"Over the years, AI researchers have been working to develop a single universal model that can understand every language in the world across a variety of tasks," Angela Fan, a research assistant at Facebook, said in a blog post.
She added: "A single model that can support all languages, dialects and modalities will help us serve more people, keep translations up to date and create new experiences for billions of people alike. This work brings us closer to that goal."
The AI model was trained on a dataset of 7 billion sentences in 100 languages mined from the web.
Facebook says all of these resources are open source and that the data is available for public use.
When mining translation data, the researchers avoided statistically rare translation directions and prioritized the most commonly requested language pairs.
Languages were then divided into 14 different groups based on linguistic, geographical and cultural similarities.
Languages from different groups are then connected through bridge languages; for example, Hindi, Bengali and Tamil serve as bridge languages for the Indo-Aryan group.
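The grouping-and-bridges scheme described above can be sketched in a few lines of Python. The group names, member languages, and bridge choices below are illustrative placeholders, not Facebook's actual 14 groups; the point is only to show how pairing every language within a group, then connecting groups only through their bridge languages, yields training pairs without routing everything through English.

```python
from itertools import combinations

# Illustrative groups (hypothetical membership, not Facebook's real taxonomy).
groups = {
    "indo_aryan": {"members": ["hi", "bn", "ta", "mr"], "bridges": ["hi", "bn", "ta"]},
    "romance": {"members": ["fr", "es", "it"], "bridges": ["fr", "es"]},
}

def training_pairs(groups):
    """Pair every language with the others in its own group, then
    connect different groups only through their bridge languages."""
    pairs = set()
    # All pairs within each group.
    for g in groups.values():
        pairs.update(combinations(sorted(g["members"]), 2))
    # Cross-group pairs go through bridge languages only.
    for a, b in combinations(sorted(groups), 2):
        for x in groups[a]["bridges"]:
            for y in groups[b]["bridges"]:
                pairs.add(tuple(sorted((x, y))))
    return sorted(pairs)

pairs = training_pairs(groups)
```

With this scheme, Hindi-Bengali (same group) and Hindi-French (bridge-to-bridge) are trained directly, while a non-bridge pair like Marathi-Italian is not mined, keeping the number of required bitext directions manageable.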
According to Facebook, the combination of these techniques produced the first multilingual machine translation (MMT) model that can translate directly between any pair of the 100 languages without relying on English data.
According to the company, the model has not yet been incorporated into any product, but testing indicates it could help people translate posts into different languages on Facebook.