Meta is working on concepts such as universal speech translation, AI that can learn like a human and a more conversational AI assistant.
Facebook’s parent company Meta has revealed where its research into AI technology is focused for the year ahead as it works on building its concept of the metaverse.
In a statement yesterday (23 February), the tech giant said the metaverse is the “most ambitious long-term project” the company has attempted and will require major advances in “almost every technology we work with” including AI breakthroughs.
CEO Mark Zuckerberg revealed late last year that the company was placing a heavy focus on developing an AR and VR universe, which he described at the time as the “successor to the mobile internet”.
Speaking at an online event yesterday, Zuckerberg shared some of the company’s ambitious AI projects and said they would “deliver the highest levels of privacy”.
“As we build for the metaverse, we’ll need AI to do much of the heavy lifting that makes next-generation computing experiences possible,” the company said in a statement.
“This means continuing to break ground in areas like self-supervised learning, so we aren’t dependent on limited labeled data sets, and truly multimodal AI, so we can accurately interpret and predict the kind of interactions that will take place in persistent, virtual 3D spaces with lots of participants.”
Meta has been investing in AI research for some time. The company recently said its AI research team has been working for years on a supercomputer that could be the world’s “largest and fastest” when fully built out.
However, the list of upcoming research projects shared yesterday made no detailed mention of using AI for moderation, a challenge that online social spaces are currently facing.
Yesterday, the National Society for the Prevention of Cruelty to Children said improving online safety is an urgent matter. This was in response to a BBC investigation in which a researcher posing as a 13-year-old girl witnessed grooming, sexual material and threats of rape on a virtual reality app that can be downloaded from the app store on Meta’s Quest headset.
Here are some of the key AI research projects that Meta has announced:
Universal speech translation
The tech giant said billions of people are unable to access information on the internet in their native language because of the limitations of current machine translation (MT) systems.
Certain languages such as English, Spanish and Mandarin are easily translated by modern MT systems, but these systems can struggle to translate languages that lack available training data or a standardised writing system.
In order to address this, Meta is working on two long-term projects. The first is called No Language Left Behind, an AI model that could learn from languages with fewer examples to train from, with the aim of translating hundreds of languages.
The second idea is a Universal Speech Translator, which aims to use novel approaches to translate speech from one language to another in real time, in a way that could better support languages that lack a standardised writing system.
“It will take much more work to provide everyone around the world with truly universal translation tools,” the company said. “But we believe the efforts described here are an important step forward.”
The tech giant also said it plans to work on universal translation “responsibly”, looking into ways to mitigate bias and “preserve cultural sensitivities” as information passes from one language to another.
Last year, documents shared by Facebook whistleblower Frances Haugen indicated that Meta was ill-equipped to address issues such as hate speech and misinformation in languages other than English. Speaking at a Joint Oireachtas Committee meeting yesterday, Haugen also said Facebook research has indicated that using AI or MT systems to address issues such as hate speech is limited as “language is nuanced”.
AI that can learn like humans and animals
As part of Meta’s presentation yesterday, its chief AI scientist Yann LeCun highlighted current limitations in how AI learns, noting that a human can learn to drive in about 20 hours while autonomous driving systems require massive amounts of data and trials in virtual environments.
In order to improve the ability of AI to learn, LeCun is looking into new ways to develop “human-level AI” by mimicking the way animals learn.
“Human and non-human animals seem able to learn enormous amounts of background knowledge about how the world works through observation and through an incomprehensibly small amount of interactions in a task-independent, unsupervised way,” LeCun said.
“It can be hypothesised that this accumulated knowledge may constitute the basis for what is often called common sense.”
The company noted in a blog post that developing machines that can pick up information like humans is a long-term endeavour “with no guarantees of success”.
“But we are confident that fundamental research will continue to produce a deeper understanding of both minds and machines, and will lead to advances that benefit everyone who uses AI,” Meta said.
More conversational AI assistants
Meta is also looking to create better AI assistants that can be more conversational and natural when dealing with users.
The company said it has developed an end-to-end neural model that can power “more personal and contextual AI conversations”. The model has been deployed on Portal, Meta’s video-calling device, for testing.
“Even in this early test, we believe the model outperforms standard approaches,” Meta said in a post.
“On Portal, we observed a significant improvement compared with our existing approach in the evaluation of the reminders domain as measured by the success rate of completing a set of reminders goals, while maintaining on-par number of turns.”