SCORPIO 1.7.0

Initialize the I/O subsystem.

Functions/Subroutines
subroutine spio_init::pio_init_intracomm (comm_rank, comm, nioprocs, naggprocs, ioprocs_stride, rearr, iosys, base, rearr_opts, ierr)
    Initialize the I/O subsystem that is defined using an MPI intra-communicator.
subroutine spio_init::pio_init_intercomm (ncomps, peer_comm, comp_comms, io_comm, iosys, rearr, ierr)
    Initialize the I/O subsystem that is defined using an MPI inter-communicator.
subroutine spio_init::pio_init_intercomm_v2 (iosys, peer_comm, comp_comms, io_comm, rearr, ierr)
    Initialize the I/O subsystem that is defined using an MPI inter-communicator.
Initialize the I/O subsystem.
Each I/O subsystem (corresponding to a set of MPI processes that belong to an MPI communicator and collectively work on a set of files) needs to be initialized before invoking any PIO APIs on that subsystem.
subroutine spio_init::pio_init_intercomm ( integer, intent(in) ncomps,
                                           integer, intent(in) peer_comm,
                                           integer, dimension(ncomps), intent(in), target comp_comms,
                                           integer, intent(in) io_comm,
                                           type(iosystem_desc_t), dimension(:), intent(out) iosys,
                                           integer, intent(in), optional rearr,
                                           integer, intent(out), optional ierr )
Initialize the I/O subsystem that is defined using an MPI inter-communicator.
The I/O subsystem is created from one or more groups of compute processes (defined by an array of MPI communicators, comp_comms, where each MPI communicator in the array corresponds to a group of compute processes; typically each communicator/process group is associated with a computational component in the application) and a group of I/O processes (defined by an MPI communicator, io_comm). The compute and I/O processes are disjoint sets of processes. The compute processes redirect all I/O requests (reading/writing variables, attributes, etc.) to the I/O processes by communicating with them via asynchronous messages.
This is a collective call on the MPI communicator, peer_comm, used to initialize the I/O subsystem. On all compute processes this API returns after the I/O subsystem is initialized. On all I/O processes the API blocks inside the library, servicing asynchronous messages from the compute processes, and returns only once the I/O subsystems are finalized (via PIO_finalize()) on all compute processes.
[in] | ncomps | The number of compute components (also determines the size of the comp_comms array) |
[in] | peer_comm | The peer MPI communicator used to create an MPI inter-communicator between the compute and I/O MPI communicators. This is typically the MPI communicator from which the component and I/O communicators are created (MPI_COMM_WORLD, or the "world communicator" for all the MPI communicators) |
[in] | comp_comms | An array of MPI communicators corresponding to the groups of compute processes. Typically each application component has an MPI communicator, containing all the processes in the component, in this array. This array needs to contain at least ncomps elements, and all processes belonging to compute component i (a unique global index decided by the application for each component) provide the MPI communicator for component i in comp_comms(i). All I/O processes pass MPI_COMM_NULL for all communicators in this array |
[in] | io_comm | The MPI communicator containing all the I/O processes. The I/O processes are a separate (disjoint) set from the compute processes. All I/O processes provide the MPI communicator containing all the I/O processes via this argument; all compute processes pass MPI_COMM_NULL in this argument |
[out] | iosys | The handles to the initialized I/O subsystems are returned via this array. For compute component i (a unique global index decided by the application for each component) the initialized I/O subsystem is returned in the ith index of this array, iosys(i). Each element is an I/O system descriptor structure that contains the general I/O subsystem data and MPI structure |
[in] | rearr | (Optional) The three choices to control rearrangement are: PIO_rearr_none (do not use any form of rearrangement), PIO_rearr_box (use the PIO internal box rearrangement), and PIO_rearr_subset (use the PIO internal subsetting rearrangement) |
[out] | ierr | (Optional) The error return code. Set to PIO_NOERR on success, or an error code otherwise (See PIO_seterrorhandling for more information on how to customize/set error handling) |
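The following is a minimal usage sketch, not taken from the library documentation, of how pio_init_intercomm can be invoked: a single compute component and a separate group of I/O processes are carved out of MPI_COMM_WORLD, and each group passes its own communicator (and MPI_COMM_NULL for the other) to the call. The rank split, the choice of a single I/O process, and the "use pio" module access path are illustrative assumptions; only the documented argument order is taken from this page. In application code this routine is typically reached through the library's generic PIO_init interface.

program init_intercomm_sketch
  use mpi
  use pio   ! Assumption: top-level module re-exporting iosystem_desc_t, PIO_rearr_box, the init/finalize routines, etc.
  implicit none

  integer, parameter :: ncomps = 1
  type(iosystem_desc_t) :: iosys(ncomps)
  integer :: comp_comms(ncomps), io_comm, peer_comm
  integer :: wrank, wsize, color, newcomm, ierr

  call MPI_Init(ierr)
  peer_comm = MPI_COMM_WORLD
  call MPI_Comm_rank(peer_comm, wrank, ierr)
  call MPI_Comm_size(peer_comm, wsize, ierr)

  ! Assumption for illustration: the last rank is the lone I/O process,
  ! all other ranks form the single compute component.
  color = 0
  if (wrank == wsize - 1) color = 1
  call MPI_Comm_split(peer_comm, color, wrank, newcomm, ierr)

  if (color == 0) then
    comp_comms(1) = newcomm        ! Compute processes pass their component communicator ...
    io_comm       = MPI_COMM_NULL  ! ... and MPI_COMM_NULL for the I/O communicator
  else
    comp_comms(1) = MPI_COMM_NULL  ! I/O processes pass MPI_COMM_NULL for every component ...
    io_comm       = newcomm        ! ... and the I/O communicator
  end if

  ! Collective over peer_comm. Compute ranks return once iosys(1) is initialized;
  ! I/O ranks stay inside the call servicing requests until PIO_finalize() has been
  ! called on every compute process.
  call pio_init_intercomm(ncomps, peer_comm, comp_comms, io_comm, iosys, &
                          rearr=PIO_rearr_box, ierr=ierr)

  if (color == 0) then
    ! ... create/open files, define decompositions, read/write variables on iosys(1) ...
    call pio_finalize(iosys(1), ierr)
  end if

  call MPI_Finalize(ierr)
end program init_intercomm_sketch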
subroutine spio_init::pio_init_intercomm_v2 ( type(iosystem_desc_t), dimension(:), intent(out) iosys,
                                              integer, intent(in) peer_comm,
                                              integer, dimension(:), intent(in), target comp_comms,
                                              integer, intent(in) io_comm,
                                              integer, intent(in), optional rearr,
                                              integer, intent(out), optional ierr )
Initialize the I/O subsystem that is defined using an MPI inter-communicator.
The I/O subsystem is created from one or more groups of compute processes (defined by an array of MPI communicators, comp_comms, where each MPI communicator in the array corresponds to a group of compute processes; typically each communicator/process group is associated with a computational component in the application) and a group of I/O processes (defined by an MPI communicator, io_comm). The compute and I/O processes are disjoint sets of processes. The compute processes redirect all I/O requests (reading/writing variables, attributes, etc.) to the I/O processes by communicating with them via asynchronous messages.
This is a collective call on the MPI communicator, peer_comm, used to initialize the I/O subsystem. On all compute processes this API returns after the I/O subsystem is initialized. On all I/O processes the API blocks inside the library, servicing asynchronous messages from the compute processes, and returns only once the I/O subsystems are finalized (via PIO_finalize()) on all compute processes.
[out] | iosys | The handles to the initialized I/O subsystems are returned via this array. For compute component i (a unique global index decided by the application for each component) the initialized I/O subsystem is returned in the ith index of this array, iosys(i). Each element is an I/O system descriptor structure that contains the general I/O subsystem data and MPI structure |
[in] | peer_comm | The peer MPI communicator used to create an MPI inter-communicator between the compute and I/O MPI communicators. This is typically the MPI communicator from which the component and I/O communicators are created (MPI_COMM_WORLD, or the "world communicator" for all the MPI communicators) |
[in] | comp_comms | An array of MPI communicators corresponding to the groups of compute processes. Typically each application component has an MPI communicator, containing all the processes in the component, in this array. All processes belonging to compute component i (a unique global index decided by the application for each component) provide the MPI communicator for component i in comp_comms(i). All I/O processes pass MPI_COMM_NULL for all communicators in this array |
[in] | io_comm | The MPI communicator containing all the I/O processes. The I/O processes are a separate (disjoint) set from the compute processes. All I/O processes provide the MPI communicator containing all the I/O processes via this argument; all compute processes pass MPI_COMM_NULL in this argument |
[in] | rearr | (Optional) The three choices to control rearrangement are: PIO_rearr_none (do not use any form of rearrangement), PIO_rearr_box (use the PIO internal box rearrangement), and PIO_rearr_subset (use the PIO internal subsetting rearrangement) |
[out] | ierr | (Optional) The error return code. Set to PIO_NOERR on success, or an error code otherwise (See PIO_seterrorhandling for more information on how to customize/set error handling) |
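A minimal call fragment for the v2 variant, under the same assumptions as the previous sketch (the split of peer_comm into comp_comms(1)/io_comm is identical and is not repeated here): the only differences are that iosys is the first argument and that the number of components is inferred from the size of the comp_comms array instead of being passed as ncomps.

  ! Same setup as the previous sketch; only the call itself differs.
  type(iosystem_desc_t) :: iosys(1)
  integer :: comp_comms(1), io_comm, peer_comm, ierr

  call pio_init_intercomm_v2(iosys, peer_comm, comp_comms, io_comm, &
                             rearr=PIO_rearr_box, ierr=ierr)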
subroutine spio_init::pio_init_intracomm ( integer, intent(in) comm_rank,
                                           integer, intent(in) comm,
                                           integer, intent(in) nioprocs,
                                           integer, intent(in) naggprocs,
                                           integer, intent(in) ioprocs_stride,
                                           integer, intent(in) rearr,
                                           type(iosystem_desc_t), intent(out) iosys,
                                           integer, intent(in), optional base,
                                           type(pio_rearr_opt_t), intent(in), optional, target rearr_opts,
                                           integer, intent(out), optional ierr )
Initialize the I/O subsystem that is defined using an MPI intra-communicator.
This is a collective call on the MPI communicator, comm, used to initialize the I/O subsystem.
[in] | comm_rank | The rank (in comm ) of the MPI process initializing the I/O subsystem |
[in] | comm | The MPI communicator containing all the MPI processes initializing the I/O subsystem |
[in] | nioprocs | The total number of I/O processes (a subset of the total number of processes in comm ) in the I/O subsystem |
[in] | naggprocs | The total number of processes aggregating data in the I/O subsystem (this argument is ignored now, and is retained for API backward compatibility) |
[in] | ioprocs_stride | The stride between two consecutive I/O processes in the I/O subsystem, i.e. the number of MPI processes to "skip" between assigning two consecutive I/O processes. A stride of 1 implies that consecutive MPI processes after the base rank are designated I/O processes |
[in] | rearr | The three choices to control rearrangement are: PIO_rearr_none (do not use any form of rearrangement), PIO_rearr_box (use the PIO internal box rearrangement), and PIO_rearr_subset (use the PIO internal subsetting rearrangement) |
[out] | iosys | The handle to the initialized I/O subsystem is returned via this argument. The I/O system descriptor structure contains the general I/O subsystem data and MPI structure |
[in] | base | (Optional) The base rank for I/O processes, i.e. the MPI rank (in comm) of the first I/O process. By default, MPI rank 0 (in comm) is the base I/O process |
[in] | rearr_opts | (Optional) The data rearranger options to use for this I/O subsystem. The library includes support for a data rearranger that rearranges data among MPI processes to improve the I/O throughput of the application; the user can control the rearrangement by passing these options to the library |
[out] | ierr | (Optional) The error return code. Set to PIO_NOERR on success, or an error code otherwise (See PIO_seterrorhandling for more information on how to customize/set error handling) |
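The following is a minimal usage sketch, not taken from the library documentation, of the intra-communicator setup: every rank in MPI_COMM_WORLD participates in the I/O subsystem, and a subset of those ranks also acts as I/O processes. The choice of one I/O process per four MPI processes (stride 4, base 0) and the "use pio" module access path are illustrative assumptions; only the documented argument order is taken from this page.

program init_intracomm_sketch
  use mpi
  use pio   ! Assumption: top-level module re-exporting iosystem_desc_t, PIO_rearr_subset, the init/finalize routines, etc.
  implicit none

  type(iosystem_desc_t) :: iosys
  integer :: comm, rank, nprocs, nioprocs, ierr
  integer, parameter :: stride = 4   ! Assumption: one I/O process per 4 MPI processes

  call MPI_Init(ierr)
  comm = MPI_COMM_WORLD
  call MPI_Comm_rank(comm, rank, ierr)
  call MPI_Comm_size(comm, nprocs, ierr)

  nioprocs = max(1, nprocs / stride)

  ! Collective over comm: every rank makes the same call. naggprocs (the 4th argument)
  ! is ignored by the library and retained only for backward compatibility.
  call pio_init_intracomm(rank, comm, nioprocs, 1, stride, PIO_rearr_subset, iosys, &
                          base=0, ierr=ierr)

  ! ... create/open files, define decompositions, read/write variables on iosys ...

  call pio_finalize(iosys, ierr)
  call MPI_Finalize(ierr)
end program init_intracomm_sketch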