LisztFE: Finite Element Codes for Exascale Computers

Pat Hanrahan, Stanford University
Alex Aiken, Stanford University
Eric Darve, Stanford University
Charbel Farhat, Stanford University
Dale Shires, Army Research Laboratory

New HPC machines have many more processors than previous architectures, but much smaller local memories per processor and more levels of memory hierarchy, at least some of which must be explicitly managed by software.  Along with these changes in the computing platform, the increase in problem size and complexity of simulations requires changing many of the numerical methods traditionally adopted in computational mechanics.  This project addresses these challenges in the context of finite element codes.  It will advance the state of the art in parallel languages, through domain-specific languages (DSLs), and in scalable algorithms for formulating and solving finite element systems using hybrid solvers.

One major component of the project is the development of the Liszt DSL.  DSLs combine high productivity, portability, and performance, at the cost of restricting the class of applications supported, in this case to mesh-based solvers such as finite element and finite volume methods.  The DSL will raise the level of abstraction of the code that programmers write, both to maintain portability across increasingly diverse hardware and to give the language implementation more latitude in choosing the best way to map a program onto the hardware.
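To illustrate the kind of separation a mesh DSL provides, the following Python sketch (illustrative only, not actual Liszt syntax) shows a user kernel written against mesh topology, with iteration and scheduling left entirely to a small runtime that is free to change the mapping to hardware:

```python
# Hypothetical sketch of the mesh-DSL idea: the user writes a per-face
# kernel against topology (faces and adjacent cells); the runtime owns
# the iteration order and could map it to threads, GPUs, etc.

class Mesh:
    """A 1-D mesh of cells connected by interior faces (illustrative)."""
    def __init__(self, n_cells):
        self.n_cells = n_cells
        # face i sits between cell i-1 and cell i
        self.faces = [(i - 1, i) for i in range(1, n_cells)]

def for_each_face(mesh, kernel, *fields):
    """The 'runtime': iterates faces in an order it is free to choose."""
    for left, right in mesh.faces:
        kernel(left, right, *fields)

# User-level kernel: a diffusion-like flux exchange, written with no
# knowledge of data layout or parallel schedule.
def flux_kernel(left, right, u, du):
    flux = 0.5 * (u[right] - u[left])
    du[left]  += flux
    du[right] -= flux

mesh = Mesh(4)
u  = [0.0, 4.0, 0.0, 0.0]
du = [0.0] * 4
for_each_face(mesh, flux_kernel, u, du)
print(du)  # net flux into each cell; fluxes cancel in the sum
```

Because the kernel only names topological neighbors, the same user code remains valid if `for_each_face` is reimplemented with a parallel, colored, or GPU schedule.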

Another major component of this project is a set of mathematical and numerical methods with a focus on scalability.  As noted above, the increase in problem size and complexity of simulations requires changing many of the numerical methods.  In particular, we believe that codes will need to evolve from explicit time integration algorithms, which are simple to implement but limited in their choice of time step, to implicit methods, which require more advanced numerics but allow significantly larger time steps.  Importantly, the performance of implicit methods does not degrade significantly as the problem size increases or when the mesh becomes irregular.  Current methods, however, cannot run well on exascale computers because they do not exhibit enough parallelism and scale poorly.  The current state of the art in extreme scalability is hybrid solvers that partition the problem into sub-domains and combine local sub-domain solvers (direct or iterative) with global solvers that enforce appropriate compatibility conditions between the sub-domains.  In addition, we will investigate a new class of direct solvers, called fast direct solvers (FDS), which use low-rank matrix approximations to reduce computational complexity.
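The time-step limitation of explicit methods can be seen on a standard stiff model problem (an assumed example, not taken from the project): for u' = -k·u, forward Euler is stable only for dt < 2/k, while backward Euler is unconditionally stable.

```python
import math

# Stability of explicit vs implicit time integration on u' = -k*u.
# Forward (explicit) Euler diverges when dt exceeds 2/k; backward
# (implicit) Euler tolerates arbitrarily large time steps.

def forward_euler(u, k, dt, steps):
    for _ in range(steps):
        u = u + dt * (-k * u)      # explicit update: new state from old
    return u

def backward_euler(u, k, dt, steps):
    for _ in range(steps):
        u = u / (1.0 + dt * k)     # implicit update (closed form here;
                                   # a PDE would require a linear solve)
    return u

k, dt, steps = 1000.0, 0.01, 50    # dt is 5x the explicit limit 2/k
explicit = forward_euler(1.0, k, dt, steps)
implicit = backward_euler(1.0, k, dt, steps)
exact = math.exp(-k * dt * steps)  # true solution, essentially 0

print(explicit)  # diverges: |1 - k*dt|**steps grows without bound
print(implicit)  # decays monotonically toward the exact answer
```

The price of the implicit update is that, for a discretized PDE, each step requires solving a large linear system, which is precisely where scalable hybrid and fast direct solvers matter.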
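The low-rank structure that fast direct solvers exploit can be demonstrated with a small NumPy sketch (a generic smooth kernel chosen for illustration, not the project's operators): the interaction block between two well-separated clusters has tiny numerical rank, so a truncated SVD represents it far more cheaply than the dense block.

```python
import numpy as np

# Interaction between two well-separated 1-D point clusters through a
# smooth kernel 1/(1 + |x - y|) (assumed illustrative example).
n = 200
x = np.linspace(0.0, 1.0, n)
y = np.linspace(5.0, 6.0, n)
A = 1.0 / (1.0 + np.abs(x[:, None] - y[None, :]))

# Truncated SVD: keep only singular values above a relative tolerance.
U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10 * s[0]))          # numerical rank << n
A_lr = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]

rel_err = np.linalg.norm(A - A_lr) / np.linalg.norm(A)
print(rank, rel_err)  # small rank, near machine-precision accuracy
```

A practical FDS builds such compressed representations hierarchically across the whole matrix rather than via a dense SVD, but the storage and flop savings come from the same low-rank observation.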

This project is relevant to the Army in multiple ways.  It addresses a large class of applications that represent the bulk of the Army's parallel scientific applications; it provides a systematic, practical approach to developing software and software infrastructure for exascale computing; and it provides a set of mathematical and numerical methods that are ubiquitous in scientific computing (the solution of large systems of equations) with a focus on scalability and high performance.