Multi-processor Jobs and Usage Cases
Job's Node File
For each job, PBS creates a job-specific node file, a text file containing the names of the nodes allocated to that job, one per line. The file is created by PBS on the primary execution host and is only available on that host. The order in which hosts appear in the node file is the order in which chunks are specified.
The full path of the node file is stored in the job's environment variable $PBS_NODEFILE.
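As a minimal sketch, a job script can simply print the node file to see which hosts were allocated. The select statement and walltime below are illustrative values, not site defaults:

```bash
#!/bin/bash
#PBS -l select=2:ncpus=4:mpiprocs=4
#PBS -l walltime=00:05:00

# One line per requested MPI rank slot; duplicate lines indicate several
# processes placed on the same host.
echo "Node file: $PBS_NODEFILE"
cat "$PBS_NODEFILE"

# Number of distinct hosts in the allocation
sort -u "$PBS_NODEFILE" | wc -l
```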
MPI
The number of MPI processes per chunk defaults to 1 unless it is explicitly specified using the mpiprocs resource. Open MPI and IntelMPI automatically obtain both the list of hosts and the number of processes to start on each host directly from PBS Pro through $PBS_NODEFILE. Hence, it is unnecessary to specify the --hostfile, --host, or -np options to mpirun if the MPI library's default interpretation of this file corresponds to what you want. For example:
- IntelMPI: by default the file is treated as a hostfile, meaning duplicate hostname lines are removed.
- OpenMPI: the lines are reordered so that entries for the same node are grouped together.
The Open MPI and IntelMPI versions installed on zenobe use PBS mechanisms to launch and kill processes. PBS can therefore track resource usage, control jobs, clean up job processes and perform accounting for all of the tasks run under MPI.
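A minimal MPI job script might look like the sketch below; the resource values, the module name and the ./my_mpi_app executable are illustrative placeholders, not zenobe defaults:

```bash
#!/bin/bash
#PBS -l select=4:ncpus=6:mpiprocs=6
#PBS -l walltime=01:00:00

cd "$PBS_O_WORKDIR"

# Placeholder: load whatever MPI environment you actually use.
module load openmpi

# No --hostfile, --host or -np: mpirun picks up the allocation from PBS,
# here 4 chunks x 6 MPI processes = 24 ranks.
mpirun ./my_mpi_app
```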
OpenMP
PBSpro supports OpenMP applications by setting the OMP_NUM_THREADS variable in the job's environment, based on the job's request.
If ompthreads is requested, OMP_NUM_THREADS is set to that value; if ompthreads is not requested, OMP_NUM_THREADS is set to 1.
For the MPI process with rank 0, the environment variable OMP_NUM_THREADS is set to the value of ompthreads. For the other MPI processes, the behavior depends on the MPI implementation.
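For a pure OpenMP job, a minimal sketch could look as follows; the ncpus/ompthreads values and the ./my_openmp_app executable are illustrative placeholders:

```bash
#!/bin/bash
#PBS -l select=1:ncpus=8:ompthreads=8
#PBS -l walltime=00:30:00

cd "$PBS_O_WORKDIR"

# PBS sets OMP_NUM_THREADS from the ompthreads request, so the
# application picks up the thread count without further configuration.
echo "OMP_NUM_THREADS=$OMP_NUM_THREADS"

# Placeholder for your threaded executable.
./my_openmp_app
```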
Usage Cases:
- Mono-processor jobs
- OpenMP jobs
- MPI jobs
- Hybrid MPI/OpenMP jobs
- Job Arrays
- Homogeneous/Heterogeneous Resources
- Embarrassingly parallel jobs: pbsdsh