The most notable thing to me in this risk forecasting story is the word SuperComputer.
A decade or so ago “cloud computing” (a rebranding of late-1950s time-sharing concepts) was pitched to the market as a replacement for SuperComputer forecasting projects.
I vividly remember, however, Amazon executives in a panic ringing our phone off the hook to say “please stop sucking up all our compute resources, we can’t handle it”.
Why, we asked innocently? We were running simulations of what a big explosion would look like on the streets of San Francisco, exactly the kind of insurance workload, needing effectively infinite scale, that was supposed to move to the cloud (everything from pandemic modeling to misinformation spread).
So we had cranked our consumption of shared compute all the way up to 11 and… ring, ring: “go somewhere else, we can’t ramp up selling knock-off brand underwear and cheap Chinese charging cables with you allocating all our server time to science and societal safety”.
Thus it’s interesting to read today’s SuperComputer news, evidence of dedicated and valuable engineering being very alive and well.
To calculate the CyberShake 22.12 hazard model, Maechling’s team used Pegasus, a workflow management system designed by research director Ewa Deelman and her team at the University of Southern California, or USC, Information Sciences Institute. Maechling’s team continuously ran a diverse collection of jobs on Summit over 10 weeks. Pegasus automatically managed 2.5 petabytes of data, which is equal to about 500 billion pages of standard printed text, including an automated transfer of 70 terabytes to USC’s archival storage.
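The mechanics Pegasus handles here, ordering dependent jobs and accounting for the data they produce, can be sketched with a toy dependency graph. A minimal sketch in plain Python, assuming nothing about Pegasus’s actual API; the job names and data sizes below are illustrative, not CyberShake’s real pipeline:

```python
# Toy sketch of what a workflow management system automates:
# run jobs in dependency order and track the data each produces.
from graphlib import TopologicalSorter

# job -> set of jobs it depends on (hypothetical stage names)
dag = {
    "mesh_velocity_model": set(),
    "compute_strain_tensors": {"mesh_velocity_model"},
    "synthesize_seismograms": {"compute_strain_tensors"},
    "compute_hazard_curves": {"synthesize_seismograms"},
    "archive_results": {"compute_hazard_curves"},
}

# illustrative output sizes in TB per job (not real figures,
# except the 70 TB archival transfer mentioned in the article)
output_tb = {
    "mesh_velocity_model": 1.0,
    "compute_strain_tensors": 900.0,
    "synthesize_seismograms": 1500.0,
    "compute_hazard_curves": 0.5,
    "archive_results": 70.0,
}

def run_workflow(dag, output_tb):
    # A real WMS schedules these on cluster nodes and stages the
    # files; here we just respect the ordering and tally the data.
    order = list(TopologicalSorter(dag).static_order())
    managed = sum(output_tb[job] for job in order)
    return order, managed

order, managed_tb = run_workflow(dag, output_tb)
print(order[0], "->", order[-1], f"({managed_tb:.1f} TB managed)")
```

The point of a system like Pegasus is that this bookkeeping, multiplied across a diverse job mix running for ten weeks, happens automatically rather than by hand.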
Summit was born of the DOE CORAL program with an estimated $200M budget. Small potatoes to see 2,500 years into the future, or, more powerfully, to avoid being constrained by the willfully hyper-short-term ignorance culture of capitalism.