‘Trolling is a challenge I see colleagues encounter in this field too often’

6 Mar 2019

Erin McAweeney, research analyst at Data & Society. Image: Data & Society

Erin McAweeney is a research analyst with the Data & Society research institute looking at how information can be weaponised on social media.

After receiving her bachelor’s degree in journalism from the University of Wisconsin-Madison in 2014, Erin McAweeney went on to do a graduate degree in data science and human-computer interaction at the University of Washington.

After graduating, she joined the Google News Lab as a research fellow in New York, and then joined the Data & Society research institute as a research analyst in November 2018.

What inspired you to become a researcher?

As a journalism student, it became clear to me early on that information technology was making substantial shifts in how we make sense of the world around us. I worked at a legacy news institution, watching it adapt to the digital age and set up a partnership to provide technology to a local grassroots paper, where I was an editor.

These experiences sparked some ‘big questions’ about technology and information consumption for me that I could really only explore as a researcher. I knew that if I wanted to make significant contributions to the journalism field, I was going to have to continue my education in an ostensibly different field: technology.

I didn’t seek out research opportunities until graduate school, where I was very fortunate to be mentored by a few incredible researchers. My original intent was to study data science and human-computer interaction, and apply these skills as a technical journalist, but my graduate education recalibrated how I understood information problems, especially after taking machine-learning and data ethics courses from Jevin West and Anna Lauren Hoffmann.

I realised I could ask these ‘big questions’ and finally had the resources and acumen to start pursuing them.

Can you tell us about the research you’re currently working on?

I just wrapped up a project as a Google News Lab fellow on computational propaganda. I worked with Witness Media Lab, a non-profit that provides tools, platforms and training for social rights activists to record human injustices on their phones.

The power and potency of images are incredibly valuable for social movements, and I wanted to better understand the lives that contentious, networked images take on once they are released on social media.

The project was a network analysis of around 1m tweets concerning the crisis at the US border and ‘end family separation’ marches in June 2018. My analysis tracked the lifespan of images and how they propagated through clusters of like-minded users as well as how they were recontextualised to support opposing ideologies and views on immigration.
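The kind of analysis described above can be sketched in a few lines. This is a hypothetical illustration only: the tweet fields, graph construction and community-detection method are my assumptions, not the study's actual pipeline.

```python
# Toy sketch: build a retweet graph, find clusters of like-minded
# users, and see which images circulate within each cluster.
# All data and method choices here are illustrative assumptions.
from networkx.algorithms.community import greedy_modularity_communities
import networkx as nx

# Stand-ins for tweet records: (retweeter, original_author, image_id)
tweets = [
    ("a", "b", "img1"), ("c", "b", "img1"), ("d", "b", "img1"),
    ("e", "f", "img2"), ("g", "f", "img2"), ("a", "c", "img1"),
]

# Directed retweet graph: an edge points from retweeter to author
G = nx.DiGraph()
for retweeter, author, _img in tweets:
    G.add_edge(retweeter, author)

# Detect communities on the undirected projection (greedy modularity;
# the published study may well have used a different algorithm)
communities = greedy_modularity_communities(G.to_undirected())

# Track which images propagate inside each cluster
for i, comm in enumerate(communities):
    imgs = {img for rt, src, img in tweets if rt in comm and src in comm}
    print(f"cluster {i}: users={sorted(comm)} images={sorted(imgs)}")
```

On real data, comparing the image sets that surface in ideologically distinct clusters is one way to spot images being recontextualised across communities.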

I think it’s often difficult to draw definitive conclusions from abstract ‘big data’, so I enriched the tweet data with qualitative interviews with immigrant organisations to show how the fight for social justice has both benefited from social media affordances and been disrupted by digital disinformation.

I compiled the findings into recommendations for activists who rely on visual social media, and the research will also come out in a report from the Institute for the Future this spring.

In your opinion, why is your research important?

When I started my graduate programme, it was a bit unusual to come from a journalism background as most of my peers had computer science or other hard science degrees. That autumn, the 2016 US elections happened, and everyone began to see the importance of how journalism and technology interact.

Both were weaponised without a deep understanding of the social and technical processes governing how we consume information. I think the intersection of technology and social science is an increasingly important and rapidly growing area – not just as it pertains to journalism and platform regulation – whose impacts we experience in our mundane, everyday lives.

Interdisciplinary conversations and questions like ‘how is this algorithm determining the information that I see?’ become important to contemplate – not just for researchers, but for everyone as our world becomes more data-driven.

What are some of the biggest challenges you face as a researcher in your field?

Networked information online is inherently political and often contentious. Making sure I’m acknowledging my own biases in my work is a major and ongoing challenge. There is still a lot of grey area in this emerging field, which can mean less standardised definitions, even of what we consider ‘misinformation’ and ‘disinformation’.

This leaves more room to interject personal bias. The controversial nature of work in this field, from immigration rights to the vaccination debate, makes the research vulnerable to attack and harassment. I haven’t experienced online attacks and trolling as backlash to my work yet, but it’s a challenge I see colleagues encounter in this field too often.

Are there any common misconceptions about this area of research?

Looking through a critical lens at data and technology can be misinterpreted as a stance that technology is inherently ‘bad’. It’s often the exact opposite.

I often make the important distinction that my research is not necessarily examining technology alone, but rather how human behaviour is being amplified and made visible by technology. This changes the conversation from ‘what is wrong with technology and how can we fix it?’ to ‘what systemic issues are surfacing and being supported by new technical strategies?’.

Bemoaning the change technology is bringing is nothing new and there will be more iterations of moral panic to come with new information technology. However, accepting malicious practices bolstered by technology as the norm is no better than decrying technological innovation as a whole.

What are some of the areas of research you’d like to see tackled in the years ahead?

I hope to see more research on what communities are disproportionately exposed to the effects of misinformation and disinformation, especially in areas like health, where reliable information is crucial for making decisions regarding one’s personal and family wellbeing.

I would also like to see more research translated into tangible, evidence-based solutions, such as digital tools, coalitions, artificial intelligence literacy curricula and platform accountability.

Are you a researcher with an interesting project to share? Let us know by emailing editorial@siliconrepublic.com with the subject line ‘Science Uncovered’.