As the appetite for big data increases, are citizens becoming little more than information-gathering sensors? Researcher Dr Rachel O’Dwyer explores the ethics and privacy concerns surrounding the internet of things.
You Will Go to the Moon was published by children’s authors Ira and Mae Freeman in 1959, several years before the first moon landing. The illustrated children’s book tried to imagine a future of space exploration that never quite came to pass. Space exploration eventually did happen, just not in the imaginary of 1950s futurism, with moon cars, dome-shaped houses and aluminium miniskirts.
The internet of things (IoT) is caught in a similar moment of suspension between wild imaginaries of connected futures (‘Your house will know when granny has had a stroke’) and an already pervasive reality that we don’t always recognise – maybe because it doesn’t quite look like Minority Report. But this doesn’t mean it isn’t already a reality and, even now, shaping key areas of our lives.
‘While IoT is still an emergent field, we need to ask what this future will look like and what the broader political and economic implications are for citizens and users’
Many everyday objects are being redesigned to include sensors, actuators, computational intelligence and telecommunications capability. The OECD recently estimated that there will be as many as 25bn devices connected to the internet by 2020. This includes the personal devices we use to communicate, make purchases, and stay safe and healthy. It also includes many mundane objects in homes, workplaces, cities and the countryside.
The development of a network of interconnected physical objects with the ability to sense, respond and act on their environment, coupled to new forms of cloud storage and analytics, is producing an abundance of data. But what are the social and political implications of this ‘big data’?
Citizens ruled by software
While there are potentially positive applications, such as improved healthcare and transport, the growth of the IoT and big data raises concerns about the governance and commercial business models being developed in several areas including public services, marketing, media and games, risk and insurance, and security.
While IoT is still an emergent field, we need to ask what this future will look like and what the broader political and economic implications are for citizens and users. Who are the key stakeholders in IoT? Who owns the data produced in the IoT ecosystem? What business models are emerging around the monetisation of data, advertising, and credit and risk assessment? And what are the potential implications for citizens in relation to dataveillance, data discrimination, privacy and new forms of algorithmic governance driven by IoT data?
Dr Alison Powell from the London School of Economics has argued that IoT platforms will affect our capacity to speak, to listen and to be heard.
If public data is freely available and actionable, then citizens can better see how resources are utilised, how decisions are made or even how public funds are allocated. IoT data might be used to ‘optimise’ public services like transport, water or energy – making them run more efficiently.
On the other hand, scholars such as Jennifer Gabrys maintain that, in a big data society, “citizens become sensors”. ‘Participation’, in other words, becomes equated with data production rather than other meaningful forms of engagement. The internet of things, therefore, reduces our ability to act as citizens since, while users produce much of the big data, the work of responding to or acting on that data is delegated to software.
‘We urgently need to ask how IoT algorithms, using real-time data, are making decisions about how our cities are run, how healthcare is provided and how critical resources are managed’
Indeed, one of the more worrying aspects of IoT concerns the kinds of decisions being made on the back of this big data, either by governments and institutions or by devices acting autonomously. The internet of things produces a lot of ‘actionable data’ – streams of information that are used to make decisions ranging from the banal, such as how best to route traffic, to the more worrying, such as who gets access to employment opportunities or a particular line of credit.
Many researchers point out that automated forms of governance and management often have embedded forms of bias and discrimination, and these have a greater impact on vulnerable portions of the population such as the poor and minorities. As such, we urgently need to ask how IoT algorithms, using real-time data, are making decisions about how our cities are run, how healthcare is provided and how critical resources are managed. It’s also important that these algorithmic processes are transparent and accountable and that a system of due process is in place for situations where software-based decisions may be harmful to individuals or communities.
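The mechanism behind this kind of data discrimination can be made concrete with a small, entirely hypothetical sketch. All names and figures below are invented for illustration: the point is that a scoring rule which never looks at a protected attribute directly can still inherit bias through a seemingly neutral feature, here a postcode, that correlates with historical discrimination.

```python
# Hypothetical sketch of 'proxy discrimination'. A scoring rule that never
# sees a protected attribute can still encode bias when a neutral-looking
# feature (here, postcode) carries the imprint of past decisions.
# All postcodes, rates and weights below are invented.

# Invented historical approval rates per postcode, reflecting past bias
HISTORICAL_APPROVAL_RATE = {
    "D01": 0.82,  # affluent area, historically over-approved
    "D17": 0.34,  # poorer area, historically under-approved
}

def credit_score(income: float, postcode: str) -> float:
    """Toy score: income matters, but so does where you live."""
    base = min(income / 100_000, 1.0)               # normalised income component
    area = HISTORICAL_APPROVAL_RATE.get(postcode, 0.5)
    return 0.5 * base + 0.5 * area                  # postcode carries the old bias forward

# Two applicants with identical incomes get different scores purely
# because of where they live:
print(credit_score(40_000, "D01"))  # 0.61
print(credit_score(40_000, "D17"))  # 0.37
```

Nothing in this toy model mentions ethnicity, class or disability, yet the outcome reproduces the old pattern, which is why transparency and due process around such scoring systems matter.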
Ethics and security
New markets are also emerging around IoT data, particularly in the areas of advertising, credit and risk assessment. Companies that specialise in targeted marketing or the calculation of risk are coming to the fore, using thousands of data points produced during everyday activities. Because these companies trade on user-generated data, some may find these business models particularly invasive. Whether it’s tracking your movements in bricks-and-mortar spaces and pushing notifications for products (a practice Verizon has cheerfully dubbed ‘cookies for the real world’) or charging an insurance premium based on your fitness data, the ethics of these pervasive and often invisible systems need to be debated before they become a reality.
At the very least, we need to be educated about protecting our personal data and made aware of precisely what data is being gathered. We also need to develop stronger strategies for permission-giving and opting out – not an easy task in a world of connected everything.
‘The ethics of these pervasive and often invisible systems need to be debated before they become a reality’
Finally, IoT security and encryption are significant issues. The Mirai botnet was recently used to coordinate a global DDoS attack on Dyn, a cloud-based internet performance management company, resulting in outages for Airbnb, HPQ, The Guardian, Visa, Xbox Live, Twitter, Amazon and many more.
Significantly, the hackers targeted not ‘personal computers’ in the sense many of us would understand it, but a host of seemingly innocuous devices – printers, baby monitors, IP cameras and digital video recorders. The botnet targeted IoT devices that were protected by little more than factory-default passwords, gained access and enlisted these objects in coordinated attacks on major internet platforms. It all felt like ‘Revenge of the Domestic Devices’.
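The attack mechanism itself was strikingly simple. As an illustrative sketch (the credential pairs below appear in the published Mirai source code, but the function is only a model of the dictionary lookup, not of Mirai’s actual Telnet scanner), conscripting a device amounted to little more than checking its login against a short list of factory defaults:

```python
# Illustrative sketch: why factory-default passwords made Mirai possible.
# Mirai's scanner tried a short hard-coded list of factory credentials
# against any device with an open Telnet port. A few pairs from the
# published Mirai source are shown; the check itself is trivial.

# A handful of (username, password) pairs from Mirai's dictionary
MIRAI_DEFAULTS = {
    ("root", "xc3511"),
    ("root", "vizxv"),
    ("admin", "admin"),
    ("root", "12345"),
    ("user", "user"),
}

def uses_default_credentials(username: str, password: str) -> bool:
    """Return True if this credential pair appears in the attack dictionary.

    An auditing tool for your own devices would try each pair over
    Telnet/SSH; here we only model the dictionary lookup.
    """
    return (username, password) in MIRAI_DEFAULTS

# A camera still running its factory login is instantly conscriptable:
print(uses_default_credentials("admin", "admin"))     # True
print(uses_default_credentials("root", "s3cure-pw"))  # False
```

Changing a device’s default password defeats this entire class of attack, which is why such a mundane setting turned out to matter at internet scale.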
This raises serious issues about the dangers of IoT infrastructure. As the OECD report reminds us, soon there will be billions of devices that can be enlisted in attacks unbeknownst to and outside the control of their ‘owners’. If the average user struggles to encrypt and manage security settings on a computer, a device that at least offers sophisticated security controls and a programmable interface, how are we supposed to protect our things from attacks, or sense when something is no longer under our control? How can we be sure that the data and devices under our control have not been intercepted?
While IoT is still at a speculative stage, it’s extremely important to engage in these critical debates about our data and how it is shaping citizenship, government, commerce and personal privacy.
Dr Rachel O’Dwyer is a research fellow at Connect, the Science Foundation Ireland research centre for future networks and communications based at Trinity College Dublin.