The “Aurora” program aims to put together an “exascale” computing system for Argonne National Laboratory by 2021. The “exa” is a prefix indicating bigness, in this case 1 quintillion floating point operations per second, or FLOPS. They’re kind of the horsepower rating of supercomputers.
For comparison, your average modern CPU does maybe a hundred or more gigaflops. A thousand gigaflops makes a teraflop, a thousand teraflops makes a petaflop, and a thousand petaflops makes an exaflop. So despite the major advances in computing efficiency that have gone into today’s super powerful smartphones and desktops, we’re talking several orders of magnitude of difference. (Let’s not get into GPUs; it’s complicated.)
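To put those prefixes in concrete numbers, here’s a quick back-of-the-envelope sketch. The 100-gigaflop CPU figure is the article’s rough estimate, not a benchmark:

```python
# SI prefixes for floating point operations per second (FLOPS).
GIGA = 10**9
TERA = 10**12
PETA = 10**15
EXA = 10**18

cpu_flops = 100 * GIGA   # a typical modern desktop CPU, very roughly
aurora_flops = 1 * EXA   # one exaflop, Aurora's target

# How many such CPUs' worth of raw compute is one exaflop?
ratio = aurora_flops // cpu_flops
print(f"{ratio:,}")  # 10,000,000 — seven orders of magnitude
```

That ratio is why “several orders of magnitude” is, if anything, an understatement: ten million desktop CPUs’ worth of peak throughput in one system.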
And even when compared with the biggest supercomputers and clusters out there, you’re still looking at a max of 200 petaflops (that would be IBM’s Summit, over at Oak Ridge National Lab) or thereabouts.
Just what do you need that kind of computing power for? Wouldn’t petaflops do the job? Well, no, actually. One very recent example of computing limitations in real-world research was this study of how climate change could affect cloud formation in certain regions, reinforcing the trend and leading to a vicious cycle.
This kind of thing could only be estimated with much coarser models before; computing resources were too tight to allow for the extremely large number of variables involved here (or here, with more clouds). Imagine simulating a ball bouncing on the ground: easy. Now imagine simulating every molecule in that ball, their relationships to each other, gravity, air pressure, and other forces: hard. Now imagine simulating two stars colliding.
The more computing resources we have, the more can be dedicated to, as the Intel press release offers as examples, “developing extreme-scale cosmological simulations, discovering new approaches for drug response prediction and discovering materials for the creation of more efficient organic solar cells.”
Intel says that Aurora will be the first exaflop system in the U.S. — an important caveat, since China is aiming to accomplish the task a year earlier. There’s no reason to think they won’t achieve it, either, since Chinese supercomputers have reliably been among the fastest in the world.
If you’re curious what ANL may put its soon-to-be-built computers to work on, feel free to browse its research index. The short answer is “just about everything.”