Researcher Fei Fang shows why AI doesn’t have to be inherently evil

28 Feb 2018

Fei Fang, an assistant professor in the Institute for Software Research at Carnegie Mellon University. Image: CMU

Fei Fang of Carnegie Mellon University is aiming to break the current stigma around AI, starting with a way to use it to stop animal poachers in their tracks.

Artificial intelligence (AI) is in the midst of a PR battle between those who think it will usher in a new age of technological development, and those who think its unfettered development will bring about the end of humankind.

Someone more aligned with the former is Fei Fang, an assistant professor in the Institute for Software Research at Carnegie Mellon University (CMU).

After receiving her bachelor’s degree in electronic engineering from Tsinghua University, Fang went on to receive her PhD from the Department of Computer Science at the University of Southern California in June 2016.

Alongside numerous research awards, her notable achievements include the deployment of Protection Assistant for Wildlife Security (PAWS) in multiple conservation areas around the world, providing predictive and prescriptive analysis for anti-poaching efforts.

What inspired you to become a researcher?

I started working on research projects related to computer vision when I was an undergraduate student.

After joining the PhD programme at the University of Southern California, I worked on computational game theory, a subfield of AI.

The research projects I have worked on led to applications that are deployed in the real world, which gave me a great sense of achievement.

I enjoy tackling research challenges, and I am excited to see my work lead to change in the real world and help address the most significant challenges faced by society.

This is why I chose to be a researcher and continue working on research problems that are challenging and can lead to societal impact.

Can you tell us about the research you’re currently working on?

My current research is in the area of AI, focusing on integrating game theory and machine learning with applications to security, environmental sustainability and mobility domains.

As an example, I have been working on developing computational tools to help conservation agencies combat poaching.

I started the work by building a game-theoretic model of the strategic interaction between poachers and rangers, and computing the optimal patrolling strategy for the rangers in this idealised setting.
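
This is not the PAWS system itself, but the core idea can be illustrated with a toy zero-sum security game: one ranger splits patrol probability over a handful of sites (hypothetical values below), and a poacher attacks whichever site offers the highest expected payoff. The optimal patrol equalises the poacher's expected payoff across sites, so no single site is an attractive target:

```python
# Toy patrol-allocation sketch (illustrative only; not the actual PAWS model).
# One ranger distributes patrol probability x_i over n sites; a poacher
# attacks the site maximising (1 - x_i) * value_i. The optimal ranger
# strategy makes that quantity equal across all sites.

def patrol_allocation(values):
    """Return coverage probabilities x_i (summing to 1) that equalise
    (1 - x_i) * values[i] across sites. Assumes an interior solution,
    i.e. every site receives some coverage."""
    n = len(values)
    # Common poacher payoff level lam, derived from sum(x_i) = 1:
    lam = (n - 1) / sum(1.0 / v for v in values)
    x = [1.0 - lam / v for v in values]
    assert all(xi >= 0 for xi in x), "assumes every site gets coverage"
    return x

# Hypothetical animal-density values for three sites:
coverage = patrol_allocation([10.0, 8.0, 5.0])
# Higher-value sites receive proportionally more patrol coverage.
```

Real deployments involve many targets, multiple rangers and non-zero-sum payoffs, which turns this closed-form calculation into a larger optimisation problem.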

As we went deeper into the challenge and developed collaborations with different non-governmental organisations that focus on wildlife conservation, we got more data from the real world, e.g. poaching activities found in past years in protected areas.

This motivated us to embed learning into the game-theoretic framework: for example, learning a behavioural model for poachers, based on which a better patrol strategy can be computed.
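
One common way to model attacker behaviour from historical data in this literature is a quantal-response-style model, in which the poacher chooses sites probabilistically rather than perfectly rationally. A toy sketch, with hypothetical features and weights (the real work fits richer models to field data):

```python
import math

# Toy quantal-response poacher model (illustrative; not the actual PAWS
# model). The poacher picks site i with probability proportional to
# exp(w_value * value_i - w_cov * coverage_i): attracted by animal value,
# deterred by patrol coverage. The weights would be learned from past
# poaching records rather than fixed by hand as they are here.

def attack_probabilities(values, coverage, w_value=1.0, w_cov=3.0):
    scores = [math.exp(w_value * v - w_cov * c)
              for v, c in zip(values, coverage)]
    total = sum(scores)
    return [s / total for s in scores]

# Hypothetical site values and a candidate patrol allocation:
probs = attack_probabilities([10.0, 8.0, 5.0], [0.5, 0.4, 0.1])
```

With such a learned model in hand, the patrol strategy can be re-optimised against predicted, rather than perfectly rational, poacher behaviour.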

As we started working on bringing our algorithms to the field, we faced additional challenges that motivated the development of new models and algorithms.

For example, the design of practical patrol routes requires incorporating terrain data and taking into consideration multiple types of uncertainties.

Now my lab has PhD students, one master's student and a few undergraduate students working on several real-world challenges with significant potential societal impact, ranging from predicting and combating poaching threats to designing scheduling and pricing mechanisms for ride-sharing platforms.

In your opinion, why is your research important?

My research aims to address some of the most pressing challenges faced by society: security, sustainability and mobility.

I believe that AI can deliver benefits to society in multiple areas, now and in the near future, and this is also the aim of my research.

What commercial applications do you foresee for your research?

I foresee my research leading to applications that help improve security in various scenarios, ranging from infrastructure security to cybersecurity, as well as applications that can improve pricing mechanisms in commercial ride-sharing platforms.

Are there any common misconceptions about this area of research? How would you address them?

Some people get overly concerned about the negative impacts of AI and overlook the positive impacts it can make on society, beyond the convenience it brings to everyone's daily life.

I hope my research can serve as an example of how AI can be used to tackle the societal challenges we face today, illustrating the concept of 'AI for social good'.

Also, I started a new course, Artificial Intelligence Methods for Social Good, at CMU. In this course, we highlight state-of-the-art AI research and how the advances are leveraged for social good.

What are some of the areas of research you’d like to see tackled in the years ahead?

I hope there will be more and more research under the theme of AI for social good.

From healthcare to security, sustainability, mobility and enhancing social welfare, there are still so many significant challenges that need the continued efforts of AI researchers.