How can we democratise AI away from big tech and the military?

14 Nov 2018

Dr Ernesto Diaz-Aviles, co-founder of Libre AI and adjunct assistant professor at UCD. Image: Colm Gorey

As a co-founder of Libre AI and adjunct assistant professor at UCD, Dr Ernesto Diaz-Aviles wants to take some of the machine learning power from big tech corporations.

Dr Ernesto Diaz-Aviles completed his bachelor’s degree in electrical engineering in his home country of El Salvador and went on to found his first start-up. Looking to expand his skills abroad, he travelled to the University of Freiburg in Germany to complete a master’s degree in computer science.

After returning for a period to El Salvador, he again travelled to Germany to join the L3S Research Centre at the University of Hannover.

Four years ago, he made the move to Dublin. Along with his role at University College Dublin (UCD), he also worked as chief data scientist at Citi’s Innovation Lab and, last year, co-founded his latest start-up, Libre AI.

What inspired you to become a researcher?

This is a good one! As a kid I used to watch Mazinger Z, a Japanese anime show about a super robot. I remember that the show’s intro featured the robot’s blueprint, and I was amazed.

In the series, the robot creator was a researcher and a professor. Interestingly enough, the enemy’s leader was also a scientist. This might have been one of the initial sparks for my career as a scientist and engineer.

As a kid, I was always curious about how things worked. I remember disassembling many of my toys and a radio at home to see what was inside them, and trying to figure out how they worked. I also tried to reassemble them – most of the time there were some remaining parts after I ‘finished’.

Later on, I came across a quote from [American computer scientist] Alan Kay: “The best way to predict the future is to invent it”, which sums up my view that besides a pure academic research approach to invent or improve the future, we need an engineering one to actually build it. That is why I have a hybrid profile of a researcher and engineer.

Along the path I have followed, I have contributed to and published research work, but I have also applied research in industry as a practitioner. I like it this way.

Can you tell us about the research you’re currently working on?

The mission we have set at Libre AI is to work towards artificial intelligence (AI) democratisation and make its benefits accessible to all. For example, we are currently working on a project reimagining AI and news. We envision a future where journalists will no longer be limited to reporting past or current affairs, but they will be empowered by AI or machine learning (ML) to write about future events with a fair degree of certainty.

The project’s name is ‘AI and News: Learning to Predict the Global Risks Interconnections from the Web’ and the codename is ‘Minerva’. In this context, we are building a prototype to predict and visualise the (non-obvious) interconnections of global risks that will be at the core of tomorrow’s news.

Minerva leverages news data collections available on the web and applies AI/ML to discover the multiple relations among global risks. Minerva’s data-driven approach is more appealing, in terms of timeliness and efficient discovery of global risks’ relations, than current annual reports based on opinion surveys, such as the reports offered by the World Economic Forum.

The AI/ML pipeline we are developing is designed to be flexible and general enough to cover other domains beyond the prediction of global risks interconnections. We can easily adapt the pipeline to cover additional areas and topics of interest, for example, to better support journalists’ daily work across different disciplines.
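
The article does not describe Minerva’s implementation, but as a rough illustration of the general idea (mining news text for relations among global risks), here is a minimal Python sketch that counts how often pairs of risk terms co-occur in the same article and treats frequent co-occurrence as a naive signal of interconnection. The risk labels and the co-occurrence heuristic are assumptions for illustration only, not Libre AI’s actual pipeline.

```python
from collections import Counter
from itertools import combinations

# Hypothetical list of global-risk labels, chosen only for illustration;
# the taxonomy and models used by Minerva are not described in the article.
RISKS = ["cyber attack", "water crisis", "extreme weather", "fiscal crisis"]

def risk_mentions(article_text: str) -> set:
    """Return the set of risk labels mentioned in one news article."""
    text = article_text.lower()
    return {risk for risk in RISKS if risk in text}

def risk_cooccurrence(articles: list) -> Counter:
    """Count how often pairs of risks appear in the same article.
    Frequent co-occurrence is used here as a naive proxy for an interconnection."""
    counts = Counter()
    for text in articles:
        mentioned = sorted(risk_mentions(text))
        for pair in combinations(mentioned, 2):
            counts[pair] += 1
    return counts

if __name__ == "__main__":
    sample = [
        "A cyber attack on utilities deepened the water crisis in the region.",
        "Extreme weather is straining budgets and may trigger a fiscal crisis.",
        "Officials link the water crisis to recent extreme weather patterns.",
    ]
    for (a, b), n in risk_cooccurrence(sample).most_common():
        print(f"{a} <-> {b}: {n}")
```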

This project is funded by Google’s Digital News Innovation Fund (DNI Fund).

In your opinion, why is your research important?

I truly believe AI based on machine learning will impact every industry. Whether the impact is positive or not so positive depends largely on all of us. However, the advances we currently observe are taking place behind the doors of big tech companies or as part of military projects, which makes the adoption of AI/ML technology slower for many other organisations, industries or countries.

Our research seeks to democratise AI and make advances more accessible to relevant players in society or, in this case, journalists.

What commercial applications do you foresee for your research?

For this particular research, we think that a subscription model for newsrooms could be a viable route to commercialisation.

What are some of the biggest challenges you face as a researcher in your field?

I think one of the challenges we face in research or prototypical projects is to evaluate the work in live settings.

Are there any common misconceptions about this area of research?

It is easy to find a lot of things to critique about the influence that automation and AI could have in society. More challenging is articulating plausible alternatives for how these technologies should be designed and deployed.

I think that we need more AI and machine learning literacy to better understand the pros and cons of these technologies.

What are some of the areas of research you’d like to see tackled in the years ahead?

I would like to see more efforts tackling explainable AI, fairness and bias in machine learning. I think solving some of the challenging issues in these areas would have an overall positive impact on the adoption of AI/ML platforms.

Are you a researcher with an interesting project to share? Let us know by emailing editorial@siliconrepublic.com with the subject line ‘Science Uncovered’.

Colm Gorey was a senior journalist with Silicon Republic
