Publications

OpenCUBE: Building an Open Source Cloud Blueprint with EPI Systems

Authors: Ivy Peng, Martin Schultz, Utz-Uwe Haus, Craig Prunty, Pedro Marcuello, Emanuele Danovaro, Gabin Schieffer, Jacob Wahlgren, Daniel Medeiros, Philipp Friese and Stefano Markidis.

Euro-Par ’23: Proceedings of the European Conference on Parallel Processing (Lecture Notes in Computer Science). August 2023.


Accelerating Drug Discovery in AutoDock-GPU with Tensor Cores

Authors: Gabin Schieffer, Ivy Peng

Euro-Par ’23: Proceedings of the European Conference on Parallel Processing (Lecture Notes in Computer Science). August 2023. https://doi.org/10.1007/978-3-031-39698-4_41

In drug discovery, molecular docking aims at characterizing the binding of a drug-like molecule to a macromolecule. AutoDock-GPU, a state-of-the-art docking software, estimates the geometrical conformation of a docked ligand-protein complex by minimizing a scoring function. Our profiling results indicate that the current reduction operation that is heavily used in the scoring function is sub-optimal. Thus, we developed a method to accelerate the sum reduction of four-element vectors using matrix operations on NVIDIA Tensor Cores. We integrated the new reduction operation into AutoDock-GPU and evaluated it on multiple chemical complexes on three GPUs. Our results show that our method for reduction operation is 4–7 times faster than the AutoDock-GPU baseline. We also evaluated the impact of our method on the overall simulation time in the real-world docking simulation and achieved a 27% improvement on the average docking time.
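The core trick, rewriting a sum reduction as a small matrix product, can be sketched in plain Python (an illustration of the arithmetic only; the paper's implementation runs on NVIDIA Tensor Cores in CUDA, and the sample vectors here are made up):

```python
# Each row of `batch` is a four-element vector to be summed,
# e.g. per-component contributions to a docking score.
batch = [[1.0, 2.0, 3.0, 4.0],
         [0.5, 0.5, 0.5, 0.5]]

# A sum reduction is a matrix-vector product with a column of ones:
# (N x 4) @ (4 x 1) -> (N x 1), where each output entry is a row sum.
# Tensor Cores execute exactly such small matrix products in hardware,
# which is what makes this reformulation profitable.
ones = [1.0] * 4

def reduce_as_matmul(rows):
    return [sum(x * o for x, o in zip(row, ones)) for row in rows]

print(reduce_as_matmul(batch))  # [10.0, 2.0]
```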


A Quantitative Approach for Adopting Disaggregated Memory in HPC Systems

Authors: Jacob Wahlgren, Gabin Schieffer, Maya Gokhale, Ivy Peng

SC ’23: Proceedings of the International Conference for High Performance Computing, Networking, Storage, and Analysis. November 2023. https://doi.org/10.1145/3581784.3607108

Memory disaggregation has recently been adopted in data centers to improve resource utilization, motivated by cost and sustainability. Recent studies on large-scale HPC facilities have also highlighted memory underutilization. A promising and non-disruptive option for memory disaggregation is rack-scale memory pooling, where node-local memory is supplemented by shared memory pools. This work outlines the prospects and requirements for adoption and clarifies several misconceptions. We propose a quantitative method for dissecting application requirements on the memory system from the top down in three levels, moving from general, to multi-tier memory systems, and then to memory pooling. We provide a multi-level profiling tool and LBench to facilitate the quantitative approach. We evaluate a set of representative HPC workloads on an emulated platform. Our results show that prefetching activities can significantly influence memory traffic profiles. Interference in memory pooling has varied impacts on applications, depending on their access ratios to memory tiers and arithmetic intensities. Finally, in two case studies, we show the benefits of our findings at the application and system levels, achieving 50% reduction in remote access and 13% speedup in BFS, and reducing performance variation of co-located workloads in interference-aware job scheduling.
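The dependence on tier access ratios can be illustrated with a simple weighted-latency model (a hypothetical sketch with placeholder latencies, not the paper's profiling tool or measured numbers):

```python
def effective_latency(local_ratio, local_ns=100.0, pool_ns=400.0,
                      contention=1.0):
    """Average access latency for a two-tier memory system where a
    fraction `local_ratio` of accesses hits node-local DRAM and the
    rest goes to a shared rack-scale pool; `contention` inflates pool
    latency when co-located workloads interfere. All numbers are
    illustrative placeholders."""
    remote_ratio = 1.0 - local_ratio
    return local_ratio * local_ns + remote_ratio * pool_ns * contention

# A mostly-local application barely notices pool interference...
print(effective_latency(0.75, contention=2.0))  # 275.0
# ...while a pool-heavy application suffers a much larger slowdown.
print(effective_latency(0.5, contention=2.0))   # 450.0
```

This is why the paper finds that interference in memory pooling has varied impacts depending on each application's access ratios to the memory tiers.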


Survey of adaptive containerization architectures for HPC

Authors: Nina Mujkanovic, Juan J. Durillo, Nicolay Hammer, Tiziano Müller

SC-W ’23: Proceedings of the SC ’23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis. November 2023. Pages 165–176. https://doi.org/10.1145/3624062.3624588

Abstract: Containers offer an array of advantages that benefit research reproducibility and portability. As container tools mature, container security improves, and high-performance computing (HPC) and cloud system tools converge, supercomputing centers are increasingly integrating containers into their workflows. Despite this, most research into containers remains focused on cloud environments. We consider an adaptive containerization architecture approach, in which each component chosen represents the tool best adapted to the given system and site requirements, with a focus on accelerating the deployment of applications and workflows on HPC systems using containers. To this end, we discuss the HPC-specific requirements regarding container tools, and analyze the entire containerization stack, including container engines and registries, in depth. Finally, we consider various orchestrator and HPC workload manager integration scenarios, including Workload Manager (WLM) in Kubernetes, Kubernetes in WLM, and bridged scenarios. We present a proof-of-concept approach to a Kubernetes Agent in a WLM allocation.


Kub: Enabling Elastic HPC Workloads on Containerized Environments

Authors: Daniel Medeiros, Gabin Schieffer, Jacob Wahlgren, Ivy Peng

IEEE 35th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD). 2023. https://doi.org/10.1109/SBAC-PAD59825.2023.00031

Abstract: The conventional model of resource allocation in HPC systems is static. Thus, a job cannot leverage newly available resources in the system or release underutilized resources during the execution. In this paper, we present Kub, a methodology that enables elastic execution of HPC workloads on Kubernetes so that the resources allocated to a job can be dynamically scaled during the execution. One main optimization of our method is to maximize the reuse of the originally allocated resources so that the disruption to the running job can be minimized. The scaling procedure is coordinated among nodes through remote procedure calls on Kubernetes for deploying workloads in the cloud. We evaluate our approach using one synthetic benchmark and two production-level MPI-based HPC applications – GROMACS and CM1. Our results demonstrate that the benefits of adapting the allocated resources depend on the workload characteristics. In the tested cases, a properly chosen scaling point for increasing resources during execution achieved up to 2x speedup. Also, the overhead of checkpointing and data reshuffling significantly influences the selection of optimal scaling points and requires application-specific knowledge.
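A back-of-the-envelope cost model (our illustration with invented numbers, not Kub's actual scaling logic) shows why checkpoint and reshuffle overheads make the choice of scaling point workload-dependent:

```python
def elastic_runtime(total_work, rate, scale_at, new_rate,
                    checkpoint=0.0, reshuffle=0.0):
    """Runtime of a job that processes `total_work` units at `rate`
    units/s, then scales up at time `scale_at`, paying a one-off
    checkpoint + reshuffle cost and continuing at `new_rate`.
    All parameters are hypothetical."""
    done = rate * scale_at
    remaining = max(total_work - done, 0.0)
    return scale_at + checkpoint + reshuffle + remaining / new_rate

static = 1000.0 / 1.0  # 1000 s without any scaling
# Scaling early onto twice the resources nearly halves the runtime...
early = elastic_runtime(1000.0, 1.0, 100.0, 2.0, checkpoint=20.0, reshuffle=10.0)
# ...while scaling late leaves little work left to accelerate, so the
# fixed overheads eat most of the benefit.
late = elastic_runtime(1000.0, 1.0, 900.0, 2.0, checkpoint=20.0, reshuffle=10.0)
print(static, early, late)  # 1000.0 580.0 980.0
```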


A GPU-Accelerated Molecular Docking Workflow with Kubernetes and Apache Airflow

Authors: Daniel Medeiros, Gabin Schieffer, Jacob Wahlgren, Ivy Peng

ISC High Performance 2023: High Performance Computing (Lecture Notes in Computer Science). Pages 193–206. https://doi.org/10.1007/978-3-031-40843-4_15

Abstract: Complex workflows play a critical role in accelerating scientific discovery. In many scientific domains, efficient workflow management can lead to faster scientific output and broader user groups. Workflows that can leverage resources across the boundary between cloud and HPC are a strong driver for the convergence of HPC and cloud. This study investigates the transition and deployment of a GPU-accelerated molecular docking workflow that was designed for HPC systems onto a cloud-native environment with Kubernetes and Apache Airflow. The case study focuses on state-of-the-art molecular docking software for drug discovery. We provide a DAG-based implementation in Apache Airflow and technical details for GPU-accelerated deployment. We evaluated the workflow using the SWEETLEAD bioinformatics dataset and executed it in a cloud environment with heterogeneous computing resources. Our workflow can effectively overlap different stages when mapped onto different computing resources.
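The stage structure of such a workflow can be sketched as a small dependency graph; the stage names below are hypothetical, and the paper's implementation expresses the DAG in Apache Airflow rather than in the standard-library `graphlib` used here:

```python
from graphlib import TopologicalSorter

# Hypothetical docking stages: each key maps to the stages it depends on.
dag = {
    "prepare_receptor": set(),
    "prepare_ligands": set(),
    "gpu_docking": {"prepare_receptor", "prepare_ligands"},
    "rescoring": {"gpu_docking"},
    "aggregate_results": {"rescoring"},
}

# A topological order gives one valid execution sequence; a scheduler
# like Airflow additionally runs independent stages (the two prepare_*
# tasks) concurrently on different resources, which is what enables the
# stage overlap described in the abstract.
order = list(TopologicalSorter(dag).static_order())
print(order)
```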
