Tackling the onslaught of fake news online won’t be an easy task

10 Nov 2017

Facebook Explore Feed. Image: Pe3k/Shutterstock

In this world of so-called ‘post-truth’ politics, many people are questioning how to best go about fact-checking fake news, and trying to unravel who exactly should shoulder this responsibility.

With recent revelations about malicious groups and botnets spreading misinformation or ‘fake news’ on numerous platforms, both the public and Silicon Valley experts are rightly concerned with how to create a smart and reliable fact-checking process.

Michael Fauscette is the chief research officer at G2 Crowd, and a former software analyst with IDC. In recent years, he has examined emerging trends in business software, business modernisation and customer experience strategies, with particular focus on social media technologies.

Siliconrepublic.com asked him about the challenges search engines and social media are facing in the prevention of fake news dissemination.

Fake news is a complex issue

Fauscette said: “This is, of course, a complex issue. While in theory at least, everyone is personally responsible for the credibility of what they consume online, it can be very difficult for some people to identify ‘fake’ versus ‘real’.”

He explained that the problem is even harder to control when it comes to ads.

The bulk of social media sites make their money selling advertisements, and it has come to light in recent months that the regulations around online ads need to be seriously overhauled.

“Facebook’s business model is based on the success of ads, including political ads. They try to downplay the success of political ads, yet at the same time they can’t push very hard there or they impact the bottom line.”

He added that curation of the news stream at this stage in our digital society’s development is difficult, but there is some opportunity to apply newer artificial intelligence-based tools to assist in curation in the future.
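To illustrate what such AI-assisted curation might look like, here is a minimal sketch of a toy text classifier that flags posts for human review. This is purely hypothetical: it uses scikit-learn, and the headlines, labels and threshold are invented for illustration. It does not represent any tool Facebook uses or that Fauscette refers to.

```python
# Minimal sketch of AI-assisted curation: a toy classifier that flags
# posts for human review. All headlines and labels below are invented
# for illustration; a real system would need large, carefully labelled data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: 1 = previously debunked, 0 = legitimate.
headlines = [
    "Scientists confirm miracle cure hidden by governments",
    "You won't believe what this politician secretly admitted",
    "Shock study proves vaccines cause everything",
    "Central bank raises interest rates by 0.25 percentage points",
    "Local council approves new cycling infrastructure plan",
    "Quarterly earnings report shows modest revenue growth",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

def flag_for_review(post: str, threshold: float = 0.6) -> bool:
    """Return True if the post should be routed to a human fact-checker."""
    prob_fake = model.predict_proba([post])[0][1]
    return prob_fake >= threshold

print(flag_for_review("Miracle cure the government doesn't want you to see"))
```

In practice, a score like this would be one signal among many and, as Fauscette suggests, would assist human curators rather than replace them.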

Companies have only recently taken more responsibility when it comes to political advertising. In October, Facebook introduced new transparency rules for political ads as scrutiny deepened ahead of the US midterm elections, with Twitter following suit.

Media literacy in the digital age

As we wait and hope for social media sites and search engines to implement further corporate responsibility initiatives, Siliconrepublic.com asked Fauscette how we as individuals can examine what we see online with a more critical eye. He said users should take advantage of the ability to see the source of posts as one important tool.

He added that there are also several websites that allow you to fact-check news articles, but warned that the vetting sites must themselves be trustworthy. By searching online for a reporter or post author's credentials, readers can discern whether they are a legitimate source or member of the media.

In this politically tumultuous time, many people are guilty of sharing an article that aligns with their own views before reading a single word of it. It sounds simple, but taking the time to read the piece in full makes it far less likely that you will pass misinformation on to friends, family and colleagues on the likes of Facebook or LinkedIn.

A final context clue recommended by Fauscette is a thorough examination of the advertising surrounding the article – if something seems off, or the ads look particularly spammy, proceed with caution.

Fauscette concluded by saying vetting content can’t all be up to us either. “While it’s a personal responsibility to ensure what you accept as true is vetted as much as possible, we know many people just don’t do that. Whether it’s ads or news posts, more transparency to the source, sponsor etc is essential.”


Ellen Tannam was a journalist with Silicon Republic, covering all manner of business and tech subjects

editorial@siliconrepublic.com