Machine learning and AI systems need data to function, but they also need to be actively protected. IBM researcher Dr Irina Nicolae is applying her skills to these complex issues.
Dr Irina Nicolae is a research scientist at IBM Research Ireland. With a background in computer science and software engineering and a PhD in machine learning (ML), Nicolae has carved out a career battling one of security’s most pressing issues: protecting artificial intelligence (AI) and ML systems from attacks.
Siliconrepublic.com spoke to Nicolae about the future of these technologies and IBM’s research around adversarial ML, including the Adversarial Robustness Toolbox (ART).
What needs to happen to data in order for it to be usable to train AI/ML systems?
The quality of the data and its relevance to the task are the most important points to consider. Most often, data also needs pre-processing to be brought to a format that the model can ingest and that can ensure effective learning. The actual pre-processing operations depend on the type of data and the problem to be solved.
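As a purely illustrative instance of such pre-processing (the feature name and values are assumptions, not from the interview), a minimal min-max scaling step brings a numeric column into [0, 1] so a model can ingest it on a scale comparable to other features:

```python
# Illustrative pre-processing sketch: min-max scaling of one numeric
# feature column into the range [0, 1].
def min_max_scale(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

ages = [18, 25, 40, 62]          # hypothetical raw feature values
print(min_max_scale(ages))       # smallest maps to 0.0, largest to 1.0
```

Which operations apply in practice — scaling, encoding categories, handling missing values — depends, as Nicolae notes, on the data type and the problem.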
What kind of adversarial attacks are possible in terms of confusing AI/ML systems? What kind of real-world effects would these have?
There are multiple threat vectors against ML. The two most common ones target the two phases of ML: training and prediction. During training, tampering with the data allows for the creation of backdoors to be triggered later, at prediction time, by an attacker. This is known as a poisoning attack. If the adversary does not have access to the training data, they can perform an evasion attack at prediction time, by which they add adversarial noise to model inputs in order to obtain a more favourable decision.
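The poisoning idea can be sketched on a toy classifier (an illustrative construction, not IBM's method or any real system): a few mislabelled training points that carry a rare "trigger" feature plant a backdoor, which the attacker later fires by setting the trigger on an otherwise benign input.

```python
import random

random.seed(0)

# Toy poisoning sketch (illustrative only). Features: [x0, x1, trigger];
# legitimate data always has trigger == 0.
clean = [([random.gauss(0, 0.5), random.gauss(0, 0.5), 0.0], 0) for _ in range(50)]
clean += [([random.gauss(5, 0.5), random.gauss(5, 0.5), 0.0], 1) for _ in range(50)]

# Poison: a few class-0-looking points carry the trigger but are labelled 1,
# planting a backdoor to be triggered at prediction time.
poison = [([random.gauss(0, 0.5), random.gauss(0, 0.5), 5.0], 1) for _ in range(5)]

def nearest_neighbour(train):
    """1-nearest-neighbour classifier: label of the closest training point."""
    def predict(x):
        def d2(pt):
            return sum((a - b) ** 2 for a, b in zip(x, pt[0]))
        return min(train, key=d2)[1]
    return predict

model = nearest_neighbour(clean + poison)
benign = [0.1, -0.2, 0.0]       # looks like class 0, no trigger
triggered = [0.1, -0.2, 5.0]    # same point with the trigger set
print(model(benign), model(triggered))   # the trigger flips the label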
How does the adversarial noise attack vector work?
The attacker adds a certain amount of noise to a regular input, eg an image distortion, then feeds it to the ML model for prediction. In most cases, the model will no longer make the same prediction for the noisy version as it would have for the original input. The added noise is deliberately engineered to change the prediction, most often to the attacker’s benefit, eg changing their identity to gain access to a system.
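A minimal sketch of the mechanism, assuming a hand-set linear scorer in place of a real ML model: an FGSM-style perturbation nudges every feature in the direction that raises the score, flipping the decision within a small noise budget.

```python
# Illustrative evasion sketch. The weights are assumed known to the
# attacker (a white-box assumption, not a claim about real systems).
w = [1.0, -2.0, 0.5]
b = -0.2

def predict(x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0     # 1 = the attacker's preferred outcome

def sign(v):
    return (v > 0) - (v < 0)

x = [0.1, 0.3, 0.2]                  # benign input, classified 0
eps = 0.3                            # per-feature noise budget
# Push each feature in the direction that increases the score.
x_adv = [xi + eps * sign(wi) for xi, wi in zip(x, w)]

print(predict(x), predict(x_adv))    # same input, before and after noise
```

The perturbed input differs from the original by at most 0.3 per feature, yet the decision flips — the essence of an evasion attack.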
Where do you see AI/ML being deployed more broadly in the next five years?
AI systems will become more pervasive and personalised in the coming years. Most industrial processes will benefit from further increases in automation based on AI. Conversational agents might become the new interfaces, as natural language understanding matures and becomes readily applicable to specialised personal assistants.
The medical and pharmaceutical fields will benefit from the development of explainable models, with personalised diagnostics and drugs as an upcoming hot topic. These are just a few examples of how AI/ML will impact our lives in the next few years.
Can you explain how adversarial ML is unique?
Adversarial attacks in a broad sense are not unique to ML. However, evasion attacks against ML differ from classical information security in that they reveal an intrinsic vulnerability of the models themselves, which creates the need for new learning paradigms.
How does ART support developers and researchers?
Researchers can use the toolbox for rapid prototyping and for benchmarking novel defences against existing methods. For developers, the library helps deploy practical defences for real-world AI systems by composing individual methods as building blocks.
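The "building blocks" idea can be sketched generically — this is a hedged illustration of composing input-level defences into a pipeline, not ART's actual API (the function names here are invented for the example):

```python
# Hypothetical sketch of composing defences as building blocks in front
# of a model; not ART's real interface.
def clip(x, lo=0.0, hi=1.0):
    # Keep every feature inside the valid input range.
    return [min(max(v, lo), hi) for v in x]

def squeeze(x, levels=8):
    # Reduce numeric precision, discarding small adversarial perturbations.
    return [round(v * (levels - 1)) / (levels - 1) for v in x]

def defend(model, defences):
    """Chain preprocessing defences before the model's prediction."""
    def predict(x):
        for d in defences:
            x = d(x)
        return model(x)
    return predict

toy_model = lambda x: 1 if sum(x) > 1.0 else 0   # stand-in classifier
robust_model = defend(toy_model, [clip, squeeze])
print(robust_model([0.40, 0.38, 1.7]))
```

Each defence is independent and reusable, which is the composability property the answer describes.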
Has the open source nature of ART led to it being widely used?
Since its launch six months ago, ART has received attention from the research community, and the number of downloads is increasing steadily. We continue to work towards extending the catalogue of proposed methods in future releases.
How can the robustness of deep neural networks (DNNs) be measured and hardened?
Measuring the robustness of DNNs is not a trivial problem. Initially, the only way to assess it was to simulate an attack on the model and observe the effects. More recently, the research community has put effort into developing efficient estimates of robustness. The best-known such metrics are part of ART. In terms of model hardening, adversarial training – that is, training models on both clean and adversarial samples – is the most effective defence to date and is also available in ART.
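The adversarial-training recipe can be sketched on a toy logistic regression (an illustrative assumption, not ART's implementation): at each step, the model is updated on the clean sample and on an FGSM-style worst-case perturbation of it.

```python
import math
import random

random.seed(1)

# Toy data: class +1 clustered near (2, 2), class -1 near (-2, -2).
data = [([random.gauss(2, 0.4), random.gauss(2, 0.4)], 1) for _ in range(40)]
data += [([random.gauss(-2, 0.4), random.gauss(-2, 0.4)], -1) for _ in range(40)]

def sign(v):
    return (v > 0) - (v < 0)

def train(samples, eps=0.0, epochs=200, lr=0.1):
    """Logistic regression; eps > 0 switches on adversarial training."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in samples:
            batch = [x]
            if eps > 0:
                # Worst case in the L-infinity ball: move against the label.
                batch.append([xi - eps * y * sign(wi) for xi, wi in zip(x, w)])
            for xb in batch:          # update on clean AND adversarial samples
                z = y * (sum(wi * xi for wi, xi in zip(w, xb)) + b)
                g = -y / (1.0 + math.exp(z))   # log-loss gradient wrt score
                w = [wi - lr * g * xi for wi, xi in zip(w, xb)]
                b -= lr * g
    return w, b

def accuracy(w, b, samples, eps):
    """Accuracy under an eps-sized worst-case perturbation (eps=0 → clean)."""
    ok = 0
    for x, y in samples:
        x_adv = [xi - eps * y * sign(wi) for xi, wi in zip(x, w)]
        score = sum(wi * xi for wi, xi in zip(w, x_adv)) + b
        ok += sign(score) == y
    return ok / len(samples)

model = train(data, eps=0.5)
print(accuracy(*model, data, eps=0.0), accuracy(*model, data, eps=0.5))
```

The hardened model is evaluated both on clean inputs and under perturbation, mirroring the robustness metrics the answer mentions.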
How do tools such as ART aid the provision of good-quality results from data science projects?
Open tools for measuring different qualities of data science projects, such as adversarial robustness, fairness or explainability, provide easy-to-use processes for what would otherwise require considerable effort.
Moreover, they offer a controlled environment where baselines can be compared in a systematic and standardised way. This encourages users to assess the quality of their projects and work towards their improvement.