The profiles were reportedly used for marketing and sales purposes.
An investigation by researchers at the Stanford Internet Observatory uncovered more than 1,000 LinkedIn profiles using facial images that appear to have been generated by artificial intelligence.
According to NPR, researcher Renée DiResta was contacted on LinkedIn by someone whose profile image seemed subtly off. For DiResta, the face “jumped out at me as being fake” for reasons including the central positioning of the eyes and the vague background.
This prompted her to begin an investigation with her colleague Josh Goldstein into the number of computer-generated, or deepfake, images on LinkedIn profiles.
Deepfakes use a form of artificial intelligence to combine and superimpose existing images and videos, creating fake images of people or making it appear that a person has said or done something they have not.
While deepfakes have sometimes been used as a source of humour, there are also fears that the technology could be used to discredit individuals or to interfere in elections.
And people on social media use faces from LinkedIn to appear like users with specific demographic characteristics while spreading disinfo… https://t.co/j4cSwgX0qs
— Timnit Gebru (@timnitGebru) March 28, 2022
Now it appears the technology has entered the corporate world. NPR found that many of these profiles with AI-generated images appear to be for marketing and sales purposes. When someone connects with the fake profile, they’ll end up speaking to a real salesperson.
NPR suggested this tactic could allow companies to “cast a wide net online” without having to employ more staff.
Several of the companies listed as employers on the profiles with AI-generated images told NPR that they used outside vendors to pitch potential customers on LinkedIn.
One of these vendors is AirSales, which said it hires independent contractors to provide marketing services and that these contractors may make LinkedIn profiles “at their own discretion”.
“To my knowledge, there are no specific rules for profile pictures or the use of avatars,” AirSales CEO Jeremy Camilloni told NPR. “This is actually common among tech users on LinkedIn.”
LinkedIn’s professional community policies state that fake profiles and entities are not allowed on the platform. This includes using an image of someone else, or “any other image that is not your likeness”, as a profile photo.
More generally, the company says users should not post deepfake images or videos, or “otherwise post content that has been manipulated to deceive”.
NPR technology correspondent Shannon Bond said on Twitter that LinkedIn has removed most of the profiles found during the investigation and is updating its defences to catch fake accounts.
A community report on LinkedIn’s transparency page said it removed more than 15m fake accounts in the first half of 2021, with most of these stopped by automated defences.
However, it can be difficult for people to spot a computer-generated image. A study released last month asked participants to examine similar facial images and decide which were real and which were deepfakes. Participants had an average accuracy of 48.2pc, slightly below the 50pc expected from guessing at random.
Last year, Cork teen Greg Tarr was named the overall winner in 2021’s BT Young Scientist and Technology Exhibition for his work in developing an improved method to detect deepfakes.