US-based Argonne National Laboratory, HPE and Intel have announced that their Aurora supercomputer is expected to be operational by the end of this year.
Aurora is a supercomputer built on Intel and HPE technology. It is designed to handle computing tasks at very large scale, hence its classification as a high-performance computing system, or supercomputer. Meta, Microsoft and Nvidia are among the other major players currently working on supercomputer projects of their own, while IBM recently outlined its plans to create a quantum supercomputer.
Supercomputers have more storage and can perform more tasks at higher speed than regular computers. They are key to high-performance computing workloads that demand sustained capacity across AI, simulation and data analytics. Researchers and third-level institutions believe supercomputers can accelerate research and innovation, which in turn can help address societal problems.
“While we work toward acceptance testing, we’re going to be using Aurora to train some large-scale open-source generative AI models for science,” said Rick Stevens, associate laboratory director at Argonne National Laboratory. Argonne National Laboratory is federally funded and overseen by the US Department of Energy.
“Aurora, with over 60,000 Intel Max GPUs, a very fast I/O system, and an all-solid-state mass storage system, is the perfect environment to train these models,” Stevens continued.
Aurora incorporates more than 1,024 storage nodes, providing 220 petabytes of capacity at 31 terabytes per second of total bandwidth. When it comes online later this year it is expected to reach a peak performance of more than 2 exaflops.
Before Aurora can be deployed, Argonne researchers will have to migrate their work from the test bed they are currently using to the full-scale supercomputer. The machine’s early users will also stress test the system to identify potential bugs that need to be resolved ahead of deployment.