Take a butcher’s at the image below: pretty, isn’t it? It’s a visualisation of a supernova, produced on a supercomputer at the U.S. Department of Energy’s Argonne National Laboratory.

[Image: supernova.jpg - visualisation of a supernova rendered at Argonne]

What’s perhaps more interesting, though, than the fact that a multi-million-dollar computer can draw a pretty picture is that it’s all now being drawn on the supercomputer itself, rather than the numbers being crunched first and then visualised with different software running on graphics processing units.

To produce the image on Argonne’s Blue Gene/P supercomputer, 160,000 computing cores all work together in parallel. Today’s typical laptop, by comparison, has two. In fact, if you wanted to try to produce this kind of picture on a typical home PC, it would take you three years just to download the data.

The latest volume rendering techniques in use at Argonne can make sense of the billions of tiny data points collected from an X-ray, an MRI scan or a researcher’s simulation.
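Volume rendering, for the uninitiated, boils down to marching rays through a 3D grid of values and blending everything each ray passes through into a single pixel. Here’s a deliberately tiny sketch of that idea in Python with NumPy; it has nothing to do with Argonne’s actual software, and it shrinks the problem from billions of voxels to a synthetic 64-cubed blob, but the compositing loop is the same in spirit:

    import numpy as np

    def render_volume(volume, axis=0, absorption=0.02):
        """Crude orthographic 'ray march': composite a scalar volume along one axis."""
        image = np.zeros(np.delete(volume.shape, axis), dtype=float)
        transmittance = np.ones_like(image)             # light not yet absorbed
        for step in range(volume.shape[axis]):
            sample = np.take(volume, step, axis=axis)   # one slice per ray step
            alpha = 1.0 - np.exp(-absorption * sample)  # opacity of this sample
            image += transmittance * alpha * sample     # front-to-back blending
            transmittance *= (1.0 - alpha)              # dim the light for later steps
        return image

    # A synthetic blob standing in for one time step of simulation data.
    x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
    density = np.exp(-8 * (x**2 + y**2 + z**2))
    picture = render_volume(density)   # a 64x64 image: the 'pretty picture'

The real thing adds proper colour and opacity transfer functions and runs across thousands of nodes, but the basic trick is just this: step along each ray, blend, repeat.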

Usually, the supercomputer’s work stops once the data has been gathered, and the data is sent on to a set of graphics processors (GPUs), which create the final visualisations.

But the driving commercial force behind developing GPUs has been the video game industry, so GPUs aren’t always well suited for scientific tasks. In addition, the sheer amount of data that has to be transferred from location to location eats up valuable time and disk space.

“It’s so much data that we can’t easily ask all of the questions that we want to ask: each new answer creates new questions and it just takes too much time to move the data from one calculation to the next,” said Mark Hereld, who leads the visualization and analysis efforts at the Argonne Leadership Computing Facility. “That drives us to look for better and more efficient ways to organize our computational work.”

Argonne researchers wanted to know whether they could improve performance by skipping the transfer to the GPUs and instead performing the visualisations right there on the supercomputer. They tested the technique on a set of astrophysics data and found that they could indeed increase the efficiency of the operation.
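As a rough sketch of what “rendering where the data lives” can look like, imagine each process on the machine holding the slab of the volume it has just simulated, rendering that slab locally, and shipping only a small partial image off for compositing. The snippet below illustrates the idea in Python with mpi4py; it’s an assumption-laden toy, not Argonne’s implementation, which is built for Blue Gene/P:

    import numpy as np
    from mpi4py import MPI

    def render_slab(slab, absorption=0.02):
        """Front-to-back composite one slab along its depth axis."""
        image = np.zeros(slab.shape[1:], dtype=float)
        transmittance = np.ones_like(image)
        for layer in slab:                          # one depth layer per ray step
            alpha = 1.0 - np.exp(-absorption * layer)
            image += transmittance * alpha * layer
            transmittance *= (1.0 - alpha)
        return image, transmittance

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Stand-in data: in a real run this slab would be the chunk of the
    # simulation that this rank has just finished computing.
    slab = np.random.rand(max(64 // size, 1), 128, 128)

    partial = render_slab(slab)
    parts = comm.gather(partial, root=0)   # small images move; the big volume stays put

    if rank == 0:
        final = np.zeros((128, 128))
        light = np.ones((128, 128))
        for image, transmittance in parts:  # slabs arrive in rank (front-to-back) order
            final += light * image
            light *= transmittance
        print("composited image:", final.shape)

Run it under MPI (for example, mpiexec -n 4 python insitu_sketch.py) and only the 128-by-128 partial images cross the network; the raw voxels never leave the node that produced them, which is the whole point of doing the rendering in place.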

“We were able to scale up to large problem sizes of over 80 billion voxels per time step and generated images up to 16 megapixels,” said Tom Peterka, a postdoctoral appointee in Argonne’s Mathematics and Computer Science Division.

So it really is more than just a pretty picture: it’s something of a breakthrough in supercomputer visualisations.

Read more about it here.