High-Performance Computing, or HPC, is the use of computers orders of magnitude faster and more powerful than even the best desktop PC. As research becomes more and more data-intensive, an ever broader range of researchers are turning to HPC for their data analysis. This workshop will introduce you to high-performance computing systems, what they can and cannot do well, and cover the basics of how to access an HPC cluster, load and use software, and submit jobs to run.
Any researcher who intends to use an HPC cluster, or is already using one, for their research. Participants *must* have access to an HPC cluster to attend the course, as well as some previous experience with Unix or Linux.
The workshop is based around the use of the command line, and will be of limited use for researchers accessing HPC systems using remote desktop or graphical user interface (GUI) connections.
- Understand the difference between an HPC system and a server or laptop
- Learn how to log into an HPC system and transfer files
- Understand the available storage options and their limitations
- Learn how to load software on an HPC system
- Learn how to submit a job and understand how to request appropriate resources
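As a preview of the logging-in and file-transfer objectives, these tasks are typically done from the command line with `ssh` and `scp`. The hostname and username below are placeholders; your cluster's documentation will give you the real address:

```shell
# Log into the cluster's login node (hostname and username are placeholders)
ssh yourUsername@cluster.hpc.example.org

# Copy a local file to your home directory on the cluster
scp data.csv yourUsername@cluster.hpc.example.org:~/

# Copy results back from the cluster to the current local directory
scp yourUsername@cluster.hpc.example.org:~/results.txt .
```

On Windows, the PuTTY, FileZilla, and MobaXterm tools covered in the workshop provide the same functionality.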
- Why use an HPC?
- Introducing HPC file systems and directories
- Software to access HPC: PuTTY, FileZilla, and MobaXterm
- A guide to software modules
- Submitting interactive and batch jobs, and the PBS scheduler
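The module and job-submission topics above look roughly like the following sketch. Module names, queue names, and resource requests are illustrative (the `select=` syntax is PBS Pro; older Torque systems use `-l nodes=1:ppn=4`), so check your site's documentation for the correct values:

```shell
# List the software available as environment modules, then load one
module avail
module load gcc/11.2.0        # module name and version are illustrative

# A minimal PBS batch script (resource requests are examples only)
cat > myjob.pbs <<'EOF'
#!/bin/bash
#PBS -N myjob                 # job name
#PBS -l select=1:ncpus=4     # one node, four CPU cores
#PBS -l walltime=01:00:00    # one hour of run time
#PBS -q workq                 # queue name varies by site

cd "$PBS_O_WORKDIR"           # start where the job was submitted from
./my_analysis                 # the program to run (placeholder)
EOF

# Submit the batch script to the PBS scheduler and check its status
qsub myjob.pbs
qstat -u "$USER"

# Alternatively, request an interactive session on a compute node
qsub -I -l select=1:ncpus=1 -l walltime=00:30:00
```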