(machines)=

# Machines

Polaris attempts to be aware of the capabilities of the machine it is running
on. This is a particular advantage for so-called "supported" machines with a
config file defined for them in the `polaris` package. But even for "unknown"
machines, it is not difficult to set a few config options in your user config
file to describe your machine. Then, polaris can use this data to make sure
tasks are configured in a way that is appropriate for your machine.

## config options

The config options typically defined for a machine are:

```cfg
# The paths section describes paths that are used within the ocean core test
# cases.
[paths]

# A shared root directory where MPAS standalone data can be found
database_root = /lcrc/group/e3sm/public_html/mpas_standalonedata

# the path to the base conda environment where polaris environments have
# been created
polaris_envs = /lcrc/soft/climate/polaris/chrysalis/base


# Options related to deploying a polaris conda environment on supported
# machines
[deploy]

# the compiler set to use for system libraries and MPAS builds
compiler = intel

# the system MPI library to use for intel compiler
mpi_intel = openmpi

# the system MPI library to use for gnu compiler
mpi_gnu = openmpi

# the base path for spack environments used by polaris
spack = /lcrc/soft/climate/polaris/chrysalis/spack

# whether to use the same modules for hdf5, netcdf-c, netcdf-fortran and
# pnetcdf as E3SM (spack modules are used otherwise)
use_e3sm_hdf5_netcdf = True
```

The `paths` section provides local paths to the root of the "databases"
(local caches) of data files for each MPAS core. These are generally in a
shared location for the project to save space. Similarly, `polaris_envs` is a
location where shared conda environments will be created for polaris releases
for users to share.

The `deploy` section is used to help polaris create development and release
conda environments and activation scripts. It says which compiler set is the
default, which MPI library is the default for each supported compiler, and
where libraries built with system MPI will be placed.

Some config options come from [mache](https://github.com/E3SM-Project/mache/),
a package that is a dependency of polaris. Mache is designed to detect and
provide a machine-specific configuration for E3SM-supported machines. Typical
config options provided by mache that are relevant to polaris are:

```cfg
# The parallel section describes options related to running jobs in parallel
[parallel]

# parallel system of execution: slurm, pbs or single_node
system = slurm

# whether to use mpirun or srun to run a task
parallel_executable = srun

# cores per node on the machine
cores_per_node = 36

# account for running diagnostics jobs
account = e3sm

# quality of service (default is the first)
qos = regular, interactive
```

The `parallel` section defines properties of the machine related to running
jobs in parallel. Currently, machine files are defined for high-performance
computing (HPC) machines with multiple nodes. These machines all use
{ref}`slurm` to submit parallel jobs. They also all use the `srun` command to
run individual tasks within a job. The number of `cores_per_node` varies
between machines, as does the account that typical polaris users will have
access to on the machine.

(slurm)=

## Slurm job queueing

Most HPC systems now use the
[slurm workload manager](https://slurm.schedmd.com/documentation.html).
Here are some basic commands:

```bash
salloc -N 1 -t 2:0:0  # interactive job (see machine-specific versions below)

sbatch script  # submit a batch job script
```
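
On a machine like the one described by the example `parallel` section above,
an interactive job usually also needs an account and, optionally, a quality
of service (QOS). The following is only a sketch: the account and QOS values
are taken from the example config above, and the exact flags your site
requires may differ.

```bash
# request one interactive node for two hours, charged to the account from the
# example config above; --qos is optional and site-dependent
salloc --nodes=1 --time=2:00:00 --account=e3sm --qos=interactive
```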
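
For batch jobs, the same config options typically map onto `#SBATCH`
directives. The script below is a minimal sketch, not something polaris
generates for you: the account, QOS and task count come from the example
config above, and `./ocean_model` is a placeholder for whatever MPAS
executable you have built.

```bash
#!/bin/bash
#SBATCH --job-name=polaris_example    # hypothetical job name
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=36          # cores_per_node from the example config
#SBATCH --time=2:00:00
#SBATCH --account=e3sm                # account from the example config
#SBATCH --qos=regular                 # first (default) qos in the example config

# launch one MPI task per core with srun (the parallel_executable above);
# ./ocean_model is a placeholder for an MPAS executable you have built
srun -n 36 ./ocean_model
```

Submit the script with `sbatch`, as shown above.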