How to spot misinformation and avoid spreading it


1 Apr 2021

Image: © skypicsstudio/Stock.adobe.com

Cybersecurity researcher Liviu Arsene delves into how users can spot misinformation online.

Social media is a double-edged sword. It has proven to have an incredibly positive impact in connecting and mobilising people who share a common goal. Yet it can also segregate and isolate people based on their views. This is because social media platforms may feed users content similar to their observed preferences or those of their online friends.

Likewise, people with differing values or views are likely to be served material that is relevant to them, which the other group of people would never see. This echo chamber means misinformation can be harder to identify: not everyone sees it, so it takes longer for it to be called out.

In fact, so-called ‘fake news’ and the spread of misinformation are an ever-growing problem. So much so that a new, voluntary industry code of practice aimed at reducing misinformation and disinformation on digital platforms has been launched, and has been adopted by the likes of Twitter, Google, Facebook, Microsoft and TikTok.

It comes in the wake of a digital platforms inquiry conducted by the Australian Competition and Consumer Commission (ACCC) into the dominance of these platforms, and the News Media Bargaining Code currently before the Australian parliament. The code of practice commits the platforms to a range of scalable measures to reduce the spread and visibility of disinformation and misinformation.

Though these social media platforms can be used as a vehicle to spread misinformation, bad actors have many other ways of spreading malicious or misleading content.

Website farms and bots are commonly used to publish and amplify misleading content and remain some of the most effective mechanisms for the spread of ‘fake news’ and misinformation. Therefore, it is important that users make themselves aware of what to look out for, in order to protect themselves from falling victim to misinformation.

Spotting the signs of misinformation

Website farms use a common tactic of repeatedly posting and reposting ‘fake news’ content, in an attempt to legitimise it by making it seem widespread. Social media bots are used in a similar vein, regularly posting and sharing misleading information in order to amplify it.

Spotting fake news websites can be as simple as checking how long they have been posting news. Sometimes these websites have only two or three weeks’ worth of ‘content’, which should be a first sign of suspicion.

The same applies to social media accounts. If looking through a feed reveals multiple messages of a similar nature, no personal posts and a small number of ‘friends’ who all seem to be sharing the same or similar posts, then this should also be a warning sign that the account is likely to be posting misleading information.
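To put the website-age check described above into practice, here is a minimal sketch (not from the article) that looks up a site’s domain registration date over WHOIS and flags very new domains. It assumes the third-party python-whois package is installed (pip install python-whois); the domain name and the 90-day threshold are placeholders for illustration, not figures from the author.

    from datetime import datetime

    import whois  # provided by the python-whois package


    def domain_age_days(domain: str) -> int:
        """Return the approximate age of a domain in days, based on WHOIS data."""
        record = whois.whois(domain)
        created = record.creation_date
        if created is None:
            raise ValueError(f"No creation date found for {domain}")
        # Some registrars return a list of dates; take the earliest.
        if isinstance(created, list):
            created = min(created)
        return (datetime.now() - created).days


    site = "example-news-site.com"  # placeholder domain
    age = domain_age_days(site)
    print(f"{site} was registered {age} days ago")
    if age < 90:
        print("Warning: very new domain - treat its 'news' with extra suspicion")

A very recent registration date is not proof of anything on its own, but combined with the other signals described here it is a useful prompt to dig deeper.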

Troll farms or factories – generally understood to be an institutionalised group of internet trolls that seeks to interfere in political opinions and decision-making – can further contribute to the dissemination of misinformation.

They harass legitimate sources or try to discredit accurate information in an attempt to promote their own fake content. This is why it’s important to get your information from trusted, official and reputable sources.

One of the most obvious signs of misinformation is when the website or the ‘source’ of information has no reputation or history. Some content farms copy the theme and format of legitimate news outlets in an attempt to appear credible, and then simply post fake news.

Spotting the subtleties of false information sharing

It’s worth noting that misinformation can also take more subtle forms. It may not involve outright ridiculous claims. Instead, there will be a slight rephrasing or rewording, or an attempt to tie the current topic to a completely different, out-of-context claim in order to alter the original story.

Mixing facts with misinformation can create very compelling stories. Of course, this requires more than machines and algorithms that generate text; it requires actual human intervention and creative content creation. This kind of misinformation could be considered fake journalism or propaganda.

On a much smaller scale, phishing emails and carefully worded spam could also be regarded as misinformation. These look authentic at first glance, or may even manipulate legitimate information, but, if engaged with, can cause real damage to the victim.

For example, Covid-themed phishing emails, promising exclusive information about vaccines and treatments, could be regarded as a type of fake news. However, the end result is to get the victim to open an infected attachment or click on a fraudulent link, which qualifies as a cybercrime.

Distinguishing between reliable and unreliable sources

The simplest way to distinguish between reliable and unreliable information online is to fact-check everything you read. You should always cross-reference the information you see against information from reliable sources, such as government websites or reputable news outlets and agencies.

Combating fake news is not just a matter of technology but also a matter of training ourselves to think like investigative journalists, to ensure the content we are consuming is correct. Checking the legitimacy of the information we read online should become second nature for all of us.

It’s also really important to report sources of misinformation and not to contribute to their amplification by sharing them. It’s not just about you, but also about protecting the community of people around you.

The bottom line is that social media is a powerful tool: it can mobilise people for good and bridge the distance between people around the world, wherever they are. However, it is up to each of us to curb the spread of misinformation. In 2021, we should all be brave enough to step out of the echo chamber and take a hard look at the real facts.

By Liviu Arsene

Liviu Arsene is a global cybersecurity researcher for cybersecurity software company Bitdefender.