Koric, NCSA collaborations with industry result in computational breakthroughs

Written by National Center for Supercomputing Applications

MechSE's Seid Koric.
Rolls-Royce has an engineering problem that needs a large-scale software solution. Solving that problem, however, requires a multi-team effort: Illinois' National Center for Supercomputing Applications (NCSA) delivers the scale courtesy of Blue Waters, its Cray XE6/XK7 supercomputer; Livermore Software Technology Corporation (LSTC) provides the software, LS-DYNA; and Cray contributes additional expertise to help ensure the software setup is optimal for the machine. By working together, these four players are creating a wide-reaching solution that no single player could achieve alone. As the common denominator among these disparate groups, NCSA is helping to facilitate the collaboration.

NCSA has a long history of cultivating strong partnerships across industry, academia, and governmental organizations to accelerate innovation in high-performance computing (HPC) applications. NCSA previously worked with Cray and Rolls-Royce, an NCSA partner for over a decade, to help LSTC scale up explicit finite element analysis (FEA) in LS-DYNA, LSTC's "gold-standard" multiphysics computer-aided engineering software. That initial collaboration, starting in 2013, scaled explicit FEA up to 16,382 cores on Blue Waters—a world record for a commercial engineering code at that time. Seid Koric, Technical Assistant Director of NCSA, Research Professor in Mechanical Science and Engineering, and NCSA PI on the project, notes that NCSA provided "the only place in the world with enough computing cores, memory, and expertise to run those experiments, and a mature industrial program partners could trust to run the tests."

Work on explicit FEA concluded in 2014, but when Rolls-Royce brought another challenge to the table in 2017, this time requiring improvements to implicit FEA, the collaboration was reignited. Rolls-Royce uses implicit FEA to model thermal-mechanical relationships, or the effects of heat on the structure of gas turbine engines used in commercial airplanes. These simulations decrease the need for expensive and time-consuming physical tests, but in order to do so, they need to be high-fidelity, with meaningful predictive capabilities.

Finite element models become more predictive as the elements become smaller and more numerous. "When analyzing complex geometries, you need a finer mesh for high-fidelity models," says Erman Guleryuz, a Research Scientist with NCSA's Modeling and Simulation group who works on the project.

But with that higher fidelity come greater computational demands: the memory footprint and CPU count grow with model size.
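To make that scaling concrete, here is a rough, illustrative sketch of how degrees of freedom and solver memory can balloon as a mesh is refined. It is not taken from the project: the cube-shaped mesh, the element counts, and the bytes-per-DOF constant are assumptions chosen purely for illustration.

```python
# Illustrative back-of-envelope sketch (not from the project): count degrees of
# freedom (DOFs) for a uniformly refined hexahedral mesh of a cube and attach a
# very rough memory estimate. All constants here are assumptions for illustration.

def mesh_stats(elements_per_edge, dofs_per_node=3):
    """Return (element count, DOF count) for an n x n x n hex mesh of a cube."""
    n = elements_per_edge
    elements = n ** 3                       # hex elements in the grid
    nodes = (n + 1) ** 3                    # one node at each grid corner
    return elements, nodes * dofs_per_node  # e.g. 3 displacement DOFs per node

def memory_estimate_gb(dofs, bytes_per_dof=20_000.0):
    """Crude memory estimate; bytes_per_dof is a made-up illustrative figure.
    Real solver memory depends on the mesh, ordering, and algorithm used."""
    return dofs * bytes_per_dof / 1e9

for n in (50, 100, 200, 400):
    elems, dofs = mesh_stats(n)
    print(f"{n:4d} elements/edge -> {elems:>13,} elements, "
          f"{dofs:>13,} DOFs, ~{memory_estimate_gb(dofs):,.0f} GB")
```

Halving the element size multiplies the DOF count by roughly eight, which is why higher-fidelity models quickly outgrow a single workstation and push toward machines like Blue Waters.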

Cue the interorganizational teamwork. Drawing on real-life models from Rolls-Royce and technical consulting from Cray, NCSA and LSTC optimized LS-DYNA to reduce the memory footprint of running high-fidelity models and improve the software's performance and scalability. Their work was guided by Koric's prior groundbreaking explorations into scaling implicit FEA, which earned him the 2017 HPCwire Editors' Choice Award for Top Achievement in Supercomputing. In the first year of the collaboration, with Blue Waters as their testing ground, the four organizations doubled the size of model the solver can handle, from 50 million degrees of freedom to over 100 million, a record-breaking number. In the second year, which will also be supported by a new computational allocation from the U.S. Department of Energy, they plan to shatter that ceiling and reach 200 million degrees of freedom.

The project's success will allow Rolls-Royce to run better, more predictive models at a lower cost. Moreover, LS-DYNA will be able to offer users from across the manufacturing spectrum the ability to scale up their own simulations. "By improving the algorithm and finding bottlenecks, we are raising the level of performance for everyone, from users on laptops to HPC clusters," Koric states. Guleryuz adds that this project is special because of the collaborative model in play: "None of these parties could create this level of impact on their own." But by joining together, they're breaking records and bringing benefit to the community.



This story was published September 19, 2018.