Objectives

You will learn:

  • where your migrated data will end up
  • where you should be working until the migration from Pan is complete
  • how Mahuika and Māui are different from Pan

Data Migration

Home directories from Pan have been copied into a subdirectory of your Mahuika home directory named pan_home, and project directories have similarly been copied into a subdirectory of your new project directory /nesi/project/projectcode. We will be synchronising these copies repeatedly while Pan is still available, so please don’t attempt to change their contents on Mahuika.

You can copy your Pan files out into your Mahuika home directory with the command

cp -Tprn $HOME/pan_home $HOME
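
Here -T merges the contents of pan_home directly into $HOME rather than creating a nested copy, -p preserves permissions and timestamps, -r recurses, and -n leaves any files you already have untouched.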

Similarly for your project directory or directories:

cd /nesi/project/projectcode
cp -HTprn ./pan_project ./
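
The additional -H flag dereferences ./pan_project should it be a symlink; the other flags work as described above.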

Differences from Pan

Most Slurm batch scripts will require at least some changes to work on the new platform, so please review all of the following points.

Filesystems

  • On Pan the project directories were located under /projects, which was a symlink to /gpfs1m/projects. On Mahuika (and Māui) the project directories are located under /nesi/project, so any scripts referencing the old paths must be updated (a bulk-rewrite sketch follows at the end of this section).

  • In addition to the project directory, each project has a “nobackup” directory found under /nesi/nobackup, which is faster but not backed up. Old files on /nesi/nobackup will be deleted by the system when space is needed.

  • The per-job temporary directories SCRATCH_DIR, TMP_DIR and SHM_DIR on Pan are not provided on Mahuika.

    Pan           Mahuika/Māui                  Comments
    SCRATCH_DIR   /nesi/nobackup/projectcode    Use your project’s nobackup directory
    TMP_DIR       TMPDIR                        Temporary, per-job directory under /tmp
    SHM_DIR       TMPDIR                        TMPDIR (and /tmp) is located in memory, as SHM_DIR was on Pan

    As a substitute for SCRATCH_DIR you can use any location within your project’s nobackup directory /nesi/nobackup/projectcode; e.g. the following creates a unique per-job scratch directory there:

    SCRATCH_DIR=$(mktemp -d --tmpdir=/nesi/nobackup/$SLURM_JOB_ACCOUNT "scratch_${SLURM_JOB_ID}_XXX.tmp")
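
If many existing scripts reference the old paths, a bulk rewrite along these lines may help (a sketch only: myjob.sl is a placeholder file name, -i.bak keeps a backup of the original, and the /gpfs1m form is substituted first so that the shorter pattern cannot mangle it):

# rewrite old Pan project paths in place, keeping a .bak copy of the original
sed -i.bak 's|/gpfs1m/projects|/nesi/project|g; s|/projects|/nesi/project|g' myjob.sl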
    

Software

  • Many older environment modules which were present on Pan have not been recreated on Mahuika; see the Supported Applications page for the current list, or check for a specific package as sketched below.

  • The locations of our installed software are different. This should not matter if you have been using environment modules.
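
To check whether a particular package survived the transition, list the matching modules (Python here is only an example, and module spider assumes the new system uses Lmod):

module avail Python
module spider Python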

Hardware

  • Ordinary Mahuika compute nodes have 36 CPU cores and 128 GB of memory, yielding only 3 GB per core rather than Pan’s 7.5 GB per core. Please review your memory requests when submitting jobs; an example job header follows this list.

  • Hyperthreading is enabled, so multithreaded jobs will by default be allocated only half as many physical cores per task as they would have received on Pan. This can be avoided with --hint=nomultithread.

  • Mahuika uses the newer “Broadwell” generation of Intel CPUs, and Māui the “Skylake” generation. Pan’s optional Slurm constraints “wm”, “sb” and “avx” are obsolete on Mahuika/Māui.

  • Mahuika has only 8 GPU nodes; however, the GPUs are the more powerful Tesla P100. They are requested in the same way as on Pan, with --gres=gpu. It may be necessary to also specify --partition=gpu.

  • Mahuika has five “bigmem” nodes with 512 GB of memory each, and one “hugemem” node with 4 TB. These must be specifically requested by passing sbatch --partition=bigmem, --partition=prepost (for jobs shorter than 3 hours) or --partition=hugemem.
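
Putting these points together, a Mahuika job script might begin along these lines (a sketch only; the job name, project code and resource figures are placeholders to adapt):

#!/bin/bash
#SBATCH --job-name=myjob           # placeholder name
#SBATCH --account=projectcode      # your project code
#SBATCH --time=01:00:00
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=3G           # Mahuika offers ~3 GB per core, not Pan's 7.5 GB
#SBATCH --hint=nomultithread       # use physical cores despite hyperthreading

For GPU or large-memory jobs, add the relevant requests from the bullets above, e.g. --partition=gpu together with --gres=gpu, or --partition=bigmem.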

Job limits

  • Jobs requesting a time limit of more than 3 days must be explicitly submitted to the “long” partition, e.g. sbatch -p long ..., while other ordinary jobs can be submitted to the “large” partition; this kind of partitioning was more automated on Pan. Examples follow this list.

  • Instead of Pan’s little-used debug partition, Mahuika has a debug QoS (Quality of Service), used like sbatch --qos debug .... Jobs using the debug QoS can request at most 15 minutes, and you can run only one of them at a time.
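
For example (myjob.sl is a placeholder script name):

sbatch --partition=large myjob.sl   # ordinary job, time limit up to 3 days
sbatch --partition=long myjob.sl    # time limit over 3 days
sbatch --qos=debug myjob.sl         # quick test: at most 15 minutes, one at a time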

Accounts

  • On Pan it was always necessary to specify your project account to sbatch. This is no longer necessary on Mahuika if you have only one project.
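
If you do have more than one project, the account can still be given explicitly, as on Pan (projectcode is a placeholder):

sbatch --account=projectcode myjob.sl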

Licenses

  • Some of our Slurm virtual licenses are obsolete and so have been removed: “io”, because filesystem bandwidth is considerably better than on Pan, and “sci_matlab” and “eng_matlab”, because the number of actual license tokens has been dramatically increased.

The other machine - Māui

Mahuika shares its filesystem with the co-located Cray XC supercomputer Māui, so if your work is suitable for running on Māui (i.e. large MPI jobs) and you are granted an allocation of Māui CPU time, you will be able to access your data in the same locations from either machine.