Data analytics, AI, and visualisation in the HiDALGO2 project

Technologies used in more detail   

From an AI point of view, HiDALGO2 builds on state-of-the-art algorithms and frameworks such as PyTorch, TensorFlow, or Theano to implement its intelligent workflow composition mechanism. From the HPDA perspective, well-adopted tools and their integration capabilities are being investigated to enable in-situ analysis of the produced data. In particular, the algorithmic side of the data analytics work is being examined in order to implement more HPC-like functionality. HiDALGO2 focuses on the AI applications themselves to improve the training mechanism and the precision of the developed tools. It is of utmost importance that the training process is both efficient and precise enough to handle a wide range of data sources. Consequently, the implementations will rely on Convolutional Neural Networks (CNNs) or Deep Belief Networks (DBNs), tailored to the HiDALGO2 and, with this, the HPC requirements.
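As a rough illustration of the CNN building block mentioned above, the sketch below implements a single 2D convolution, the core operation that such networks apply layer by layer. This is illustration only, not HiDALGO2 code; the image, kernel values, and shapes are arbitrary assumptions.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution: slide the kernel over the image
    and accumulate element-wise products at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# Hypothetical example: a 3x3 Laplacian (edge-detection) kernel on a toy image.
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[0.,  1., 0.],
                   [1., -4., 1.],
                   [0.,  1., 0.]])
feature_map = conv2d(image, kernel)
print(feature_map.shape)  # (3, 3)
```

In a real CNN this loop is replaced by heavily optimised, GPU-accelerated library kernels; the point here is only the sliding-window structure that makes the operation so amenable to parallel hardware.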

Another area where HiDALGO2 goes beyond the state of the art is research into the new technologies that HPC vendors provide to project participants. As part of the cooperation with companies offering processors (Intel, AMD, Huawei) and graphics accelerators (NVIDIA, AMD, Huawei), it will be possible to study the impact of newly introduced solutions on the performance of the pilot applications and to adjust the code optimisation strategy accordingly. Intel Xeon Ice Lake-SP is based on the Sunny Cove architecture, offering up to 40 cores manufactured on a 10 nm process. Notable changes from the previous generation include a 20% IPC improvement and better handling of demanding workloads such as cloud, virtualisation, and AI. It also offers security-related features such as Software Guard Extensions and Intel Crypto Acceleration, which ease encryption-intensive workloads like cloud business operations, 5G infrastructure, and SSL web serving. AMD EPYC Milan-X CPUs are based on the Zen 3 architecture, incorporate 64 cores, and are manufactured in a 7 nm process. Their characteristic feature is 3D die stacking, which allows three times the L3 cache of 2D dies while staying within the same power and thermal envelope as the existing Milan chips. This will benefit workloads with large datasets that are sensitive to L3 cache misses.

The newest offering from NVIDIA is the recently announced H100, based on the Hopper architecture. Its new DPX instructions can accelerate dynamic programming by up to 40x compared with CPUs and up to 7x compared with previous-generation GPUs. It also comes with 80 GB of HBM3 VRAM, fourth-generation Tensor Cores for deep learning, and 3x faster IEEE FP64 and FP32 processing rates chip-to-chip compared to the A100, thanks to 2x faster clock-for-clock performance per streaming multiprocessor (SM), additional cores (16,896 in total), and higher clocks. AMD's Instinct MI250X accelerators are powered by the company's latest CDNA 2 architecture, which is optimised for high-performance computing (HPC); each accelerator features 128 GB of HBM2e VRAM and two dies with 14,080 stream processors in total, and it powers the exascale Frontier supercomputer. Furthermore, the applications will be continuously benchmarked so that different levels of designs and implementations can be evaluated. A newly established EuroHPC JU supercomputer will serve as a foundation for deployment, improving scalability and benchmarking new infrastructures.
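The DPX instructions mentioned above target dynamic-programming recurrences of exactly the kind sketched below. The edit-distance example is a generic illustration in pure Python, not HiDALGO2 code; on an H100, the inner min-plus updates of such recurrences are what the hardware accelerates.

```python
def edit_distance(a, b):
    """Classic Levenshtein dynamic-programming recurrence:
    dp[i][j] = minimal cost of transforming a[:i] into b[:j]."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                     # delete all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j                     # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = min(dp[i-1][j] + 1,                          # deletion
                           dp[i][j-1] + 1,                          # insertion
                           dp[i-1][j-1] + (a[i-1] != b[j-1]))       # substitution/match
    return dp[m][n]

print(edit_distance("kitten", "sitting"))  # 3
```

Genomics alignment (Smith-Waterman) and many routing and scheduling problems share this min/plus table-filling structure, which is why a dedicated instruction set can pay off so broadly.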

To understand and optimise application kernels, profiling tools such as CrayPat, Vampir, VTune Amplifier, VampirTrace, sar, iotop, iostat, and nfsiostat will be used to identify bottlenecks on different architectures. This information will then be used to optimise the kernels by introducing new algorithmic approaches, by optimising the code implementation, or by porting the application to another, more suitable architecture.
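The tools listed above are HPC-specific, but the bottleneck-hunting workflow they support can be sketched portably with Python's built-in cProfile module. This is a stand-in for illustration only: the profiled function is a made-up hotspot, not a HiDALGO2 kernel, and in the project the equivalent ranking would come from CrayPat or VTune reports.

```python
import cProfile
import io
import pstats

def hotspot(n):
    # Deliberately slow inner loop standing in for an application kernel.
    total = 0
    for i in range(n):
        total += i * i
    return total

def driver():
    # Outer loop standing in for the application's time-stepping driver.
    for _ in range(50):
        hotspot(10_000)

profiler = cProfile.Profile()
profiler.enable()
driver()
profiler.disable()

# Rank functions by cumulative time, as a profiler report would,
# to reveal where optimisation effort should go first.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
print(report)
```

Once the hottest function is identified, the same three remedies apply as in the text: a better algorithm, a tighter implementation, or a port to hardware better suited to that kernel.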

Making a qualitative leap in the HiDALGO2 project 

The goals and ambitions of the HiDALGO2 project go far beyond the current state of knowledge and application possibilities, venturing into areas that have not been explored so far. Thanks to HiDALGO2, it will be possible to achieve a qualitative change, allowing us to look at the current state of knowledge about the analysed problems from a new perspective. Through the progress achieved in the project, we expect a qualitative leap in the scale, resolution, and accuracy of the results obtained. This can be compared to studying extremely small organisms with a microscope of much higher magnification: something previously invisible to the researcher suddenly becomes accessible, allowing us to understand the essence of the challenges being analysed.

A strong multidisciplinary team

The composition of the HiDALGO2 consortium is characterised by its interdisciplinary nature, resulting from the involvement of teams from various European countries. The participants include mathematicians, meteorologists, hydrologists, physicists, and specialists in the modelling of natural phenomena, parallel and distributed processing, optimisation, co-design, data analytics, and machine learning. All participants are specialists in their fields, as confirmed by numerous publications and involvement in national and European projects. The synergy resulting from this multidisciplinarity on the one hand and the team members' experience on the other inspires us and makes us optimistic about the project goals. Moreover, the orchestration solution used for implementing the workflows has federation capabilities, making it possible to spread tasks among several HPC centres and to exploit the particular capabilities of different resources.
