Keynote

Dror Feitelson

Empirical Science and Parallel Systems

Abstract

Massively parallel computers are often extolled as the vehicle for scientific progress - they are used to study the basic physics of the universe, to predict next week's weather, and to design and analyze bio-medical molecules. But what about the scientific study of the parallel systems themselves? These systems span many orders of magnitude both in size and in their rate of operation, from nano-scale transistors switching at GHz rates to whole buildings housing machines with millions of cores churning away at a problem for weeks or months. Most research at the system level is concerned with engineering issues: how to build a system that does something. But the basis for engineering is an appreciation of requirements and a scientific understanding of the options and their implications. This requires collecting data on how existing systems are used in practice. Such data is hard to come by, and its analysis is riddled with threats to validity. As a concrete example, I'll focus on job scheduling - how we partition a parallel system's resources among competing user jobs. The data in this case comes from workload logs, and analyzing these logs uncovers a wealth of interesting behaviors, which sometimes question our assumptions about how parallel systems are used, and sometimes challenge our ability to use the data effectively.
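
To give a flavour of what analyzing such workload logs involves, here is a minimal sketch (not part of the abstract) that tallies job sizes and wait times from a log in the Standard Workload Format (SWF) used by the Parallel Workloads Archive; the file name workload.swf is a placeholder.

```python
# Minimal sketch: summarizing a parallel workload log in the Standard
# Workload Format (SWF) of the Parallel Workloads Archive.
# "workload.swf" is a placeholder file name, not a real log.

from collections import Counter

def read_swf(path):
    """Yield (submit_time, wait_time, run_time, processors) for each job record."""
    with open(path) as f:
        for line in f:
            if line.startswith(";") or not line.strip():
                continue  # skip SWF header comments and blank lines
            fields = line.split()
            # SWF fields (1-based): 2 = submit time, 3 = wait time,
            # 4 = run time, 5 = number of allocated processors
            submit, wait, run, procs = (int(float(fields[i])) for i in (1, 2, 3, 4))
            yield submit, wait, run, procs

def summarize(path):
    sizes = Counter()
    waits = []
    for _, wait, run, procs in read_swf(path):
        if run <= 0 or procs <= 0:
            continue  # drop cancelled/failed records with missing data (-1 fields)
        sizes[procs] += 1
        waits.append(wait)
    print("most common job sizes:", sizes.most_common(5))
    print("mean wait time [s]:", sum(waits) / len(waits))

if __name__ == "__main__":
    summarize("workload.swf")
```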

Short Bio

Dror Feitelson is on the faculty of the School of Computer Science and Engineering at The Hebrew University of Jerusalem. He received his PhD in 1991 and then worked at IBM Research for three years in the group developing the software for IBM's first parallel computer, the SP2. His research interests are in experimental aspects of computer science, and especially in human-related issues. This includes work on the workloads on parallel supercomputers and on program comprehension in software engineering. He is the co-founder of JSSPP (the workshop series on Job Scheduling Strategies for Parallel Processing, which recently held its 25th meeting), and of the Parallel Workloads Archive.



Viv Kendon

Integrating quantum computing with HPC

Abstract

Quantum systems can naturally do some tasks in parallel, which is where part of their computational power comes from. Early quantum computers will be much smaller -- in terms of the amount of classical data they can process in one go -- than current HPC. But the processing can be much faster, by exploiting their quantum properties of superposition and coherence.

I will explain why quantum computing is so promising for enhancing computational capability, avoiding the current hype, and focusing on the hard work required to realise this potential for useful applications.

The most promising way to use quantum computers is to accelerate those parts of a computation that are slow for HPC. Several problems need to be solved to interface quantum co-processors with HPC, in particular the mismatch in clock speeds and differences in data encoding. This integration also requires detailed study of the algorithms, both quantum and classical, and I will mention some of the projects setting out to do this.

Short Bio

Viv Kendon is professor of quantum technology at the University of Strathclyde. She has been developing quantum computing for the past twenty years, focusing on understanding how it works and how to turn theory into practical applications. She leads the UK projects QEVEC (Quantum Enhanced and Verified Exascale Computing) and CCP-QC, a UK network that brings the computational science and engineering communities together with the quantum computing community. She is on the management board of INQA (International Network in Quantum Annealing).



Neil Chue Hong

Doing Science in the Digital Age – If Everyone is Parallel Processing, What’s the Problem?

Abstract

The phone I have in my pocket is more powerful than the first supercomputer I used, and my phone is 4 years old! As we head towards exascale and beyond, what is the future of parallel computing and, more importantly, what challenges to its use still remain?

When we think of massively parallel computers, we think of modelling and simulation in the physical sciences. But the same techniques can be applied to other disciplines, given the right tools and skills. So why isn’t parallel programming ubiquitous in research? Do we need to change our definition of what using high performance computing means?

In this talk I will argue that parallel computing is not just about bigger and faster machines, but supporting more people to get the best performance from them. I will discuss work the Software Sustainability Institute has been doing to understand how researchers use computing resources, the role that software plays in modern research, and why Research Software Engineers are an important part of what comes next. I’ll also cover how work funded by the UK’s ExCALIBUR programme is looking to provide people with the skills and knowledge to exploit exascale as we prepare to meet the challenges that the next decade of research will bring.

Short Bio

Neil Chue Hong is Professor of Research Software Policy and Practice at the University of Edinburgh, and is based at EPCC, the UK’s leading centre of supercomputing and data science expertise. He is the founding director of the Software Sustainability Institute, a collaboration between the universities of Edinburgh, Manchester, Oxford and Southampton, which has worked for over a decade to improve the way software is used and developed in research, through consultancy, training, community engagement and policy development. He is chair of the steering committee for ExCALIBUR, the UK’s exascale programme, co-founder of the Research Software Alliance, and a member of the BBSRC/MRC Supercomputing Task Force. He is a Fellow of the British Computer Society, Editor-in-Chief of the Journal of Open Research Software, and co-author of “Software Engineering for Science” and “Best Practices for Scientific Computing”.



Alina Shadrina

Heterogeneous Programming with oneAPI and Performance across Multiple Architectures