Facebook using AI to help blind ‘see’ images in News Feed

5 Apr 2016

Image of a visually impaired user trying Facebook's new alt text feature, via Facebook

Using the power of machine learning, Facebook has announced that it is introducing automatic alt text, a feature that generates audio descriptions of images on the social network.

More than any other social network, Facebook depends heavily on its AI algorithms to create a streamlined site for its users. This was most notable last month, when it emerged that the company builds a profile based on your ‘ethnic affinity’ in order to target you with tailored advertising.

Facebook now says it wants to give the millions of visually impaired people across the world access to the estimated 2bn photos shared each day across its various services.

Alt text is a common tool found on websites: a written description attached to an image which, when a screen reader is active, is read aloud to describe the image to a blind user.
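For context, this is roughly how alt text works in practice. The sketch below assumes a standard web page and a screen reader (such as VoiceOver) that announces an image's alt attribute; the image file name and description are hypothetical examples.

    // Minimal sketch: attaching alt text to an image so a screen reader can announce it.
    // The image URL and description below are hypothetical examples.
    const img = document.createElement("img");
    img.src = "photo-of-friends.jpg";
    // A screen reader reads this description aloud when the user focuses on the image.
    img.alt = "Three friends smiling outdoors on a sunny day";
    document.body.appendChild(img);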

On Facebook, however, before today’s update on iOS, a visually impaired user relying on a screen reader would hear only the name of the person who shared the photo, followed by the word ‘photo’, when they came upon an image in News Feed.

Now, using an AI algorithm that can determine what is in a photo, Facebook will automatically generate a spoken description along the lines of: “Image may contain three people, smiling, outdoors.”
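As a rough illustration of how such a sentence could be assembled, one can think of the system as keeping only the concepts its image recognition model detects above a confidence threshold and joining them into a template phrase. This is a hedged sketch, not Facebook's actual implementation; the tags, scores and threshold below are invented for the example.

    // Hypothetical sketch: turning tag predictions into an "Image may contain..." line.
    // Tags and confidence scores are invented; Facebook's real system differs.
    interface TagPrediction {
      tag: string;
      confidence: number; // 0..1 score from an image recognition model
    }

    function describeImage(predictions: TagPrediction[], threshold = 0.8): string {
      // Keep only high-confidence concepts, ordered by confidence.
      const tags = predictions
        .filter((p) => p.confidence >= threshold)
        .sort((a, b) => b.confidence - a.confidence)
        .map((p) => p.tag);

      if (tags.length === 0) {
        return "Photo"; // fall back to the old behaviour when nothing is recognised
      }
      return `Image may contain ${tags.join(", ")}`;
    }

    // Example usage with made-up model output:
    console.log(
      describeImage([
        { tag: "three people", confidence: 0.95 },
        { tag: "smiling", confidence: 0.9 },
        { tag: "outdoors", confidence: 0.88 },
        { tag: "dog", confidence: 0.3 },
      ])
    );
    // -> "Image may contain three people, smiling, outdoors"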

Facebook said its object recognition algorithm is based on a neural network with billions of parameters, trained on millions of examples. It is much the same approach taken by Google Photos, which found itself in hot water not long after launch due to an unfortunate misidentification.

Launching first on iOS in English, the feature will be extended to other platforms and languages soon, Facebook said.

Colm Gorey was a senior journalist with Silicon Republic

editorial@siliconrepublic.com