AI breakthrough helps us get one step closer to nuclear fusion success

15 Dec 2017

Model of a working nuclear fusion reactor. Image: Cholpan/Shutterstock

As we edge closer and closer to a working nuclear fusion reactor, a new breakthrough in AI could push us past a major obstacle.

The carrot of near-limitless, clean energy in the form of nuclear fusion continues to dangle in front of a number of research teams and international collaborations. However, one persistent obstacle – predicting the sudden disruptions that can halt a reaction – may prove impossible to clear without artificial intelligence (AI).

According to ScienceDaily, that could be about to change, thanks to a new breakthrough achieved by a team from Princeton University.

While achieving a sustained plasma – the superheated state of matter in which fusion occurs in the sun – is crucial to generating electricity, before that can happen, scientists must be able to predict when major disruptions could occur. These disruptions could potentially damage the walls of the doughnut-shaped fusion devices known as tokamaks.

This is the job of Princeton’s new predictive AI, the fusion recurrent neural network (FRNN): a form of deep-learning software capable of analysing sequential data patterns over an extended period of time.
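The article does not detail FRNN’s internal architecture, but the core idea of any recurrent network – carrying a hidden state forward through each time step of a diagnostic signal – can be sketched in a few lines. The weights, channel count and signal names below are purely illustrative stand-ins, not Princeton’s actual model:

```python
import numpy as np

def disruption_score(signals, Wx, Wh, w_out, bias=0.0):
    """Run a minimal recurrent pass over a multichannel time series.

    signals: (T, C) array of T time steps across C diagnostic channels
    (hypothetical stand-ins for real tokamak signals such as plasma
    current or radiated power). Returns a score in (0, 1), where
    higher means "more disruption-like".
    """
    h = np.zeros(Wh.shape[0])                # hidden state, reused each step
    for x_t in signals:                      # walk through time
        h = np.tanh(Wx @ x_t + Wh @ h)       # fold this step into the state
    logit = float(w_out @ h + bias)
    return 1.0 / (1.0 + np.exp(-logit))      # sigmoid -> probability-like score

# Illustrative run: 2 channels, hidden size 4, 50 time steps
rng = np.random.default_rng(0)
Wx = rng.normal(size=(4, 2))
Wh = rng.normal(size=(4, 4))
w_out = rng.normal(size=4)
score = disruption_score(rng.normal(size=(50, 2)), Wx, Wh, w_out)
```

Because the hidden state is updated at every step, the final score depends on the whole sequence, which is what lets a recurrent model pick up the temporal patterns that precede a disruption.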


Using FRNN on data gathered from the Joint European Torus (JET) project, the largest tokamak device of its kind, the team demonstrated the ability to predict plasma disruptions substantially more accurately than previous attempts.

Real-world testing

The next major step is to test the software on the biggest fusion research project of all: the International Thermonuclear Experimental Reactor (ITER). This project is being undertaken by Europe, the US, China, India, Japan, Russia and South Korea to create a working nuclear fusion reactor.

The Princeton Plasma Physics Laboratory team aims to correctly predict 95pc of disruptions while also making sure that its false alarm rate remains no higher than 3pc.
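Those two targets describe the standard trade-off made when an alarm threshold is applied to a predictor’s scores: raise the threshold and false alarms fall, but so does the fraction of real disruptions caught. A minimal sketch of how both figures are computed (the shot data here is invented for illustration):

```python
def alarm_metrics(scores, is_disruption, threshold=0.5):
    """Return (fraction of real disruptions caught, false-alarm rate).

    scores: per-shot disruption scores from a predictor.
    is_disruption: per-shot labels marking which shots truly disrupted.
    """
    alarms = [s >= threshold for s in scores]
    caught = sum(a and d for a, d in zip(alarms, is_disruption))
    false = sum(a and not d for a, d in zip(alarms, is_disruption))
    n_disruptive = sum(is_disruption)
    n_safe = len(is_disruption) - n_disruptive
    return caught / n_disruptive, false / n_safe

# Six toy shots: three real disruptions, three safe discharges
scores = [0.9, 0.8, 0.2, 0.7, 0.1, 0.3]
labels = [True, True, False, True, False, False]
recall, false_alarm_rate = alarm_metrics(scores, labels)
```

In these terms, Princeton’s goal is a recall of 0.95 with a false-alarm rate no higher than 0.03, evaluated over real tokamak shots rather than toy data.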

All of this is by no means an easy task, as Princeton researcher Alexey Svyatkovskiy explained: “Training deep neural networks is a computationally intensive task that requires engagement of high-performance computing hardware.

“That is why a large part of what we do is developing and distributing new algorithms across many processors to achieve highly efficient parallel computing. Such computing will handle the increasing size of problems drawn from the disruption-relevant database from JET and other tokamaks.”
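Svyatkovskiy’s quote does not specify which parallel scheme the team distributes across processors, but a common approach to the problem he describes is data parallelism: each processor computes gradients on its own shard of the disruption database, and the gradients are averaged before every shared weight update. A sketch of that averaging step, with illustrative numbers (a real system would do this as an all-reduce across machines):

```python
def average_gradients(per_worker_grads):
    """Element-wise average of per-worker gradient vectors.

    per_worker_grads: one gradient list per processor, each computed
    on that processor's shard of the training data. The averaged
    result is applied to the shared model weights on every worker.
    """
    n_workers = len(per_worker_grads)
    return [sum(g) / n_workers for g in zip(*per_worker_grads)]

# Two workers, gradients for a three-parameter model
step = average_gradients([[1.0, -2.0, 4.0],
                          [3.0,  0.0, 0.0]])
```

Averaging keeps every worker’s copy of the model in sync while letting the dataset – which grows as JET and other tokamaks contribute more shots – be split across as many processors as are available.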


Colm Gorey was a senior journalist with Silicon Republic