If the internet meltdown caused by a botnet called Mirai proves anything, it is that the tech industry is still on the back foot with hackers, and it has a moral duty to step up to the plate, writes John Kennedy.
“Don’t be evil.”
Those words are the corporate motto of Google, now a subsidiary of a company known as Alphabet. Alphabet is so named because it harks back to Google’s policy of taking big bets, or “alpha bets”, on things. Past examples include acquiring companies such as YouTube and Nest for billions of dollars, and creating Android, the operating system that runs on 80pc of the world’s smartphones. It also makes big bets on solar energy, driverless cars and internet-beaming balloons.
On a rainy day in Killarney in 2009, then CEO of Google Eric Schmidt – who flew his own jet to the nearby airport for a big team meet with Google’s Irish workforce – was asked about those words. Schmidt said that if Google even just once betrayed the trust it had with its users, it was game over.
That same reasoning is why, I believe, Apple CEO Tim Cook stubbornly resisted the FBI’s attempts to unlock the encrypted iPhone belonging to one of the San Bernardino killers: doing so would have betrayed a principle of privacy and an understanding between the company and its customers.
And yet, as I observed the damage caused on Friday by a botnet called Mirai (which launched a distributed denial of service (DDoS) attack from tens of millions of IP addresses against DNS provider Dyn, taking some of the biggest internet brands offline), I reflected on those words.
‘We are now in the realm of shadow IT where the internet and devices from fridges to phones and thermostats are all connected to clouds of clouds, and organisations don’t know what apps employees are downloading, and businesses are buying services without talking to IT. The truth is IT can’t control any bit of technology anymore’
– TERRY GREER-KING
The malware was the creation of a hacker known as Anna-senpai, and its source code was released into the wild in recent weeks. It was behind the high-profile attacks on infosec writer Brian Krebs of Krebs on Security, as well as on French hosting provider OVH, flooding both with traffic at levels never seen before.
The malware finds insecure internet of things (IoT) devices, such as CCTV cameras and digital video recorders (DVRs), and enlists them to mount attacks on websites, flooding them with traffic until they become inaccessible. Imagine hundreds of thousands of people trying to enter through the front door of a single shop on Grafton Street at the same time – you get the picture.
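How does Mirai recruit those cameras and recorders in the first place? The leaked source code shows it simply tries a short dictionary of factory-default usernames and passwords against devices it finds online. As a minimal, illustrative sketch (not Mirai’s actual code), here is the defensive flip side of that idea in Python – checking whether your own devices are still using one of the well-known defaults a Mirai-style scanner would try. The `audit_devices` helper and its inputs are hypothetical; the credential pairs are a small subset of those documented in the leaked source.

```python
# A sketch of the weakness Mirai exploits: factory-default logins.
# The pairs below are a handful of the defaults found in the leaked
# Mirai source; real audit tooling would use the full published list.

MIRAI_DEFAULTS = {
    ("root", "xc3511"),
    ("root", "vizxv"),
    ("root", "admin"),
    ("root", "12345"),
    ("admin", "admin"),
    ("admin", "password"),
}

def is_default_credential(username: str, password: str) -> bool:
    """Return True if this login pair appears in the known default list."""
    return (username, password) in MIRAI_DEFAULTS

def audit_devices(devices):
    """Given (host, user, password) tuples for devices you own, report
    the hosts still using a default that a Mirai-style scanner would try."""
    return [host for host, user, pw in devices
            if is_default_credential(user, pw)]
```

For example, `audit_devices([("cam1", "admin", "admin"), ("dvr1", "root", "S3cure!")])` flags only `"cam1"`. The point of the sketch is how low the bar is: no exploit, no zero-day – just a lookup in a list of passwords the manufacturer shipped and the owner never changed.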
In this way, some of the biggest internet brands, from Amazon to Spotify and Twitter, were taken offline by an attack mounted by everyday machines connected to the internet.
The future is scary, but we are the ones designing it
We live in a tremendous era of innovation and science – a time some have compared to Florence during the Renaissance. Impressive advances in biotech are rivalled only by advances in technology such as artificial intelligence, big data, neural networks and machine learning: machines that can hear, see and think for themselves.
However, the fact that these very machines are being enlisted to attack – thanks to lax security in their design and components – makes the Renaissance metaphor a lie. This is more like Florence during the Great Plague, and not only that, but the Golden Horde is at the gates.
The point I am making is that the tech industry is proceeding at a rate of knots, innovating and selling stuff to keep shareholders happy, but not cleaning up the bigger mess left in its wake.
The industry is pouring its energy into advances in machine learning and artificial intelligence that are undoubtedly designed to sell, sell and sell. But we should be turning those advances onto the bigger problem itself: data security.
The tech industry created the products, but has not solved the security problem.
Maybe it never will, but if the Mirai attack has proven anything, it is that the tech industry is inexorably pushing the world into a frightening future where the gap in the cat and mouse game between hackers and security researchers is only going to widen. All in the name of progress.
Perhaps the hackers themselves are trying to tell us something.
There will be 50bn connected IoT devices on the planet by 2020, according to Cisco. How many of these will be truly secure?
I spoke with PwC’s global cyber leader Grant Waterfall last week, and he pointed out that some medical device makers are working on products including IoT-connected stents and pacemakers.
That might sound far-fetched, but I don’t think it is.
The new face of evil might be invisible
If you read the history of the 20th century (especially its various world and proxy wars), a recurrent theme is the banality of the evil behind massacres and atrocities. From Ukraine in 1941 to Cambodia in 1975 and Srebrenica in 1995, many of the perpetrators were neighbours who knew their victims, and perhaps drank or ate with them in more peaceful times. The shocking truth of the violence of the 20th century was the perceived ordinariness of the people behind the attacks.
In the 21st century, if you are being attacked online, you may never see the face of your attacker. The idea that an everyday component – a CCTV camera, a DVR, maybe even a toaster – played a part in that attack may never cross your mind. A computer being hacked or enlisted into a DDoS attack might not seem as big a crime as the taking of a life, but what if one day a cyberattack costs lives?
A car being hacked is one example. Let’s not forget that the US and Israeli secret services created the Stuxnet worm, ostensibly to target the SCADA and PLC systems that managed Iranian nuclear plants. Instead, the worm escaped into the wild and threatened utilities such as water and electricity plants around the world. The same systems are used to run rail services across the world.
Tech firms have a moral duty to close the gap between the hackers and their old and new technologies.
We now live in a world where it seems Russia is deliberately interfering in the US election, with Russian actors allegedly behind the hacking of the Democratic National Committee’s systems.
We live in a world where hackers are making a fortune from ransomware, holding systems to ransom after an individual clicks on a phishing link. According to Kaspersky Lab, 40pc of businesses targeted in this way pay out. A Data Solutions survey reported that 20pc of Irish firms hit by ransomware have also paid up.
We live in a world where teenage boys are asking girls to share compromising photos of themselves via Snapchat or WhatsApp, only to later share them publicly with their friends and ultimately wider society.
We live in a world where people who have shared intimate images with strangers online have been extorted for money, and where some victims, seeing no way out, have taken their own lives.
Cybercrime is already costing lives.
Data science and the moral responsibility of the tech world
My point is that the tech world is pushing out new technologies all the time, encouraging punters to buy, or sign up, or sign their rights away. And while all of this is happening, a porous world is emerging where attacks on people and things can come from any direction, on any device.
Earlier this year, while we were looking at the Future of Security, Terry Greer-King, the director of cybersecurity at Cisco UK and Ireland, put it clearly: “We are now in the realm of shadow IT where the internet and devices from fridges to phones and thermostats are all connected to clouds of clouds, and organisations don’t know what apps employees are downloading, and businesses are buying services without talking to IT.
“The truth is IT can’t control any bit of technology anymore.”
And yet the tech industry is booming, urging companies to buy more and more platforms, get their workers better connected, and turning us all on to the promise of the digital, connected home.
Greer-King pointed out that the industry average for detecting a breach is 100 days – long after the damage has been done.
In a world where thousands – if not millions – of devices can be marshalled on the whim of a hacker, this is not good enough.
It is the job of tech companies to innovate in order to create products people want. But that should come with a moral responsibility to ensure those devices and services are better protected.
Yes, advances in machine learning, data science and artificial intelligence are wonderful, but perhaps they could be employed to fight the infosec fight first?
Facebook’s Mark Zuckerberg loves artificial intelligence; it is his new toy. But if the same Mark Zuckerberg can be pictured with security tape over the camera and mic of his personal MacBook, should he not also be sharing those personal security tips with the 1.5bn people who use Facebook every month? What do you know, Mark, that you are not telling the hundreds of millions of punters who flock to your site every day?
Perhaps Zuckerberg wants a personal robot butler in five years’ time and by all means, he is welcome to achieve that – but shouldn’t those incredible advances in AI and data science be enlisted to better protect people too?
If Facebook can amass 1.5bn users who willingly sign away their privacy, then with that comes a massive moral responsibility. This is true too for Microsoft, Apple, Intel and many other companies that have shaped our present.
Maybe Facebook is working on an AI-based security platform that will protect people and their data in this new machine-based world. If Facebook isn’t, then it should be.
The machines are coming. Ultimately, their job is to make our lives better, to augment and enhance our world – not to take our jobs or intrude on our lives.
But it seems the hackers have the upper hand, and they are doing a better job at reminding the world about that fact than the tech giants are.
Evil is what happens when good people stand aside and do nothing.
Remember, don’t be evil.