Meta launches new open-source AI models 

Meta has unveiled its latest open-source models: MMS (“Massively Multilingual Speech”), a speech model, and LIMA (“Less is More for Alignment”), a new language model.

MMS can perform speech-to-text and text-to-speech in 1,100 languages and recognize as many as 4,000 spoken languages. LIMA, as the name implies, is designed to demonstrate that a small number of examples is enough to achieve high-quality results from an extensively pre-trained AI model.

Meta described MMS’s range of supported languages as “a 10x increase from previous work.” The research paper introducing MMS also hints at Meta’s ambition to extend into speech translation for many more languages. According to Meta’s blog, many of the world’s languages are on the verge of extinction, and the limitations of existing speech recognition and generation technologies will only hasten this trend.

In test settings, LIMA reaches GPT-4 and Bard-level performance despite being fine-tuned on relatively few examples. It is built on the 65-billion-parameter LLaMA model and fine-tuned with a supervised loss using only 1,000 carefully curated prompts and responses. Meta developed it in partnership with major universities, including Carnegie Mellon, the University of Southern California, and Tel Aviv University.

The sources for this piece include an article in AnalyticsIndiaMag.

IT World Canada Staff
The online resource for Canadian Information Technology professionals.
