SCORPIO 1.7.0
PIO_init

Initialize the I/O subsystem. More...

Functions/Subroutines

subroutine spio_init::pio_init_intracomm (comm_rank, comm, nioprocs, naggprocs, ioprocs_stride, rearr, iosys, base, rearr_opts, ierr)
 Initialize the I/O subsystem that is defined using an MPI intra-communicator. More...
 
subroutine spio_init::pio_init_intercomm (ncomps, peer_comm, comp_comms, io_comm, iosys, rearr, ierr)
 Initialize the I/O subsystem that is defined using an MPI inter-communicator. More...
 
subroutine spio_init::pio_init_intercomm_v2 (iosys, peer_comm, comp_comms, io_comm, rearr, ierr)
 Initialize the I/O subsystem that is defined using an MPI inter-communicator. More...
 

Detailed Description

Initialize the I/O subsystem.

Each I/O subsystem (corresponding to a set of MPI processes that belong to an MPI communicator and collectively work on a set of files) needs to be initialized before invoking any PIO APIs on the subsystem.
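
The sketch below illustrates the basic lifecycle: an I/O subsystem is initialized (here with the intra-communicator variant documented below) before any other PIO call on it, and finalized via PIO_finalize() when I/O is complete. It assumes the generic PIO_init/PIO_finalize interfaces and the iosystem_desc_t type are available from the pio module; the number of I/O processes and the stride are illustrative only.

    program pio_lifecycle_sketch
      use mpi
      use pio, only : PIO_init, PIO_finalize, iosystem_desc_t, PIO_rearr_box
      implicit none

      type(iosystem_desc_t) :: iosys        ! handle to the I/O subsystem
      integer :: comm_rank, comm_size, nioprocs, ierr
      integer, parameter :: naggprocs = 1   ! ignored; retained for backward compatibility

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, comm_rank, ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, comm_size, ierr)

      ! Initialize one I/O subsystem on MPI_COMM_WORLD; roughly every 4th rank is an I/O process
      nioprocs = max(1, comm_size / 4)
      call PIO_init(comm_rank, MPI_COMM_WORLD, nioprocs, naggprocs, 4, PIO_rearr_box, iosys)

      ! ... all other PIO calls (create/open files, decompositions, reads/writes) go here ...

      call PIO_finalize(iosys, ierr)        ! finalize the subsystem before MPI_Finalize
      call MPI_Finalize(ierr)
    end program pio_lifecycle_sketch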

Function/Subroutine Documentation

◆ pio_init_intercomm()

subroutine spio_init::pio_init_intercomm ( integer, intent(in)  ncomps,
integer, intent(in)  peer_comm,
integer, dimension(ncomps), intent(in), target  comp_comms,
integer, intent(in)  io_comm,
type(iosystem_desc_t), dimension(:), intent(out)  iosys,
integer, intent(in), optional  rearr,
integer, intent(out), optional  ierr 
)

Initialize the I/O subsystem that is defined using an MPI inter-communicator.

The I/O subsystem is created from one or more groups of compute processes (defined by an array of MPI communicators, comp_comms, where each MPI communicator in the array corresponds to a group of compute processes; typically each communicator/process group is associated with a computational component in the application) and a group of I/O processes (defined by an MPI communicator, io_comm). The compute and I/O processes are disjoint sets of processes. The compute processes redirect all I/O requests (reading/writing variables, attributes, etc.) to the I/O processes by communicating with them via asynchronous messages.

This is a collective call on the MPI communicator, peer_comm, used to initialize the I/O subsystem. On the compute processes this API returns once the I/O subsystem is initialized. On the I/O processes the API blocks inside the library, servicing asynchronous messages from the compute processes, and returns only after the I/O subsystems have been finalized (via PIO_finalize()) on all compute processes.

Parameters
    [in] ncomps The number of compute components. This also determines the size of the comp_comms array.
    [in] peer_comm The peer MPI communicator used to create an MPI inter-communicator between the compute and I/O MPI communicators. This is typically the MPI communicator from which the compute and I/O communicators were created (MPI_COMM_WORLD, or the "world communicator" for all the MPI communicators).
    [in] comp_comms An array of MPI communicators corresponding to the groups of compute processes. Typically each application component has an MPI communicator, containing all the processes in the component, in this array. The array needs to contain at least ncomps elements, and all processes belonging to compute component i (a unique global index decided by the application for each component) provide the MPI communicator for component i in comp_comms(i). All I/O processes pass MPI_COMM_NULL for all communicators in this array.
    [in] io_comm The MPI communicator containing all the I/O processes. The I/O processes are separate (a disjoint set) from the compute processes. All I/O processes provide the communicator containing all the I/O processes via this argument; all compute processes pass MPI_COMM_NULL in this argument.
    [out] iosys The handles to the initialized I/O subsystems are returned via this array of I/O system descriptor structures (each containing the general I/O subsystem data and MPI structures). For compute component i (a unique global index decided by the application for each component) the initialized I/O subsystem is returned in the ith index of this array, iosys(i).
    [in] rearr (Optional) The data rearrangement mode to use; the choices are:
      • PIO_rearr_none : Do not use any form of rearrangement
      • PIO_rearr_box : Use the PIO internal box rearrangement
      • PIO_rearr_subset : Use the PIO internal subsetting rearrangement
      • PIO_rearr_any : Let the library choose the optimal rearranger

Return values
    ierr (Optional) The error return code. Set to PIO_NOERR on success, or an error code otherwise (see PIO_seterrorhandling for more information on how to customize/set error handling).
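
A minimal sketch of this asynchronous (inter-communicator) setup follows, assuming the generic PIO_init interface from the pio module resolves to pio_init_intercomm. The split of MPI_COMM_WORLD into one compute component and a single I/O process is illustrative only; run with at least two MPI ranks.

    program pio_intercomm_sketch
      use mpi
      use pio, only : PIO_init, PIO_finalize, iosystem_desc_t
      implicit none

      integer, parameter :: ncomps = 1             ! one compute component (illustrative)
      type(iosystem_desc_t) :: iosys(ncomps)
      integer :: comp_comms(ncomps), io_comm, sub_comm
      integer :: wrank, wsize, ierr
      logical :: is_io_proc

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, wrank, ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, wsize, ierr)

      ! Illustrative split: the last world rank is the I/O process, the rest are compute
      is_io_proc = (wrank == wsize - 1)
      call MPI_Comm_split(MPI_COMM_WORLD, merge(1, 0, is_io_proc), wrank, sub_comm, ierr)

      if (is_io_proc) then
        comp_comms(1) = MPI_COMM_NULL              ! I/O processes pass MPI_COMM_NULL here
        io_comm = sub_comm
      else
        comp_comms(1) = sub_comm
        io_comm = MPI_COMM_NULL                    ! compute processes pass MPI_COMM_NULL here
      end if

      ! Collective over peer_comm (MPI_COMM_WORLD). Compute processes return once the
      ! subsystem is initialized; the I/O process blocks inside this call servicing I/O
      ! requests until the subsystem is finalized on all compute processes.
      call PIO_init(ncomps, MPI_COMM_WORLD, comp_comms, io_comm, iosys)

      if (.not. is_io_proc) then
        ! ... compute processes create files, define decompositions, read/write data ...
        call PIO_finalize(iosys(1), ierr)
      end if

      call MPI_Finalize(ierr)
    end program pio_intercomm_sketch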

◆ pio_init_intercomm_v2()

subroutine spio_init::pio_init_intercomm_v2 ( type(iosystem_desc_t), dimension(:), intent(out)  iosys,
integer, intent(in)  peer_comm,
integer, dimension(:), intent(in), target  comp_comms,
integer, intent(in)  io_comm,
integer, intent(in), optional  rearr,
integer, intent(out), optional  ierr 
)

Initialize the I/O subsystem that is defined using an MPI inter-communicator.

The I/O subsystem is created from one or more groups of compute processes (defined by an array of MPI communicators, comp_comms, where each MPI communicator in the array corresponds to a group of compute processes; typically each communicator/process group is associated with a computational component in the application) and a group of I/O processes (defined by an MPI communicator, io_comm). The compute and I/O processes are disjoint sets of processes. The compute processes redirect all I/O requests (reading/writing variables, attributes, etc.) to the I/O processes by communicating with them via asynchronous messages.

This is a collective call on the MPI communicator, peer_comm, used to initialize the I/O subsystem. On the compute processes this API returns once the I/O subsystem is initialized. On the I/O processes the API blocks inside the library, servicing asynchronous messages from the compute processes, and returns only after the I/O subsystems have been finalized (via PIO_finalize()) on all compute processes.

Parameters
    [out] iosys The handles to the initialized I/O subsystems are returned via this array of I/O system descriptor structures (each containing the general I/O subsystem data and MPI structures). For compute component i (a unique global index decided by the application for each component) the initialized I/O subsystem is returned in the ith index of this array, iosys(i).
    [in] peer_comm The peer MPI communicator used to create an MPI inter-communicator between the compute and I/O MPI communicators. This is typically the MPI communicator from which the compute and I/O communicators were created (MPI_COMM_WORLD, or the "world communicator" for all the MPI communicators).
    [in] comp_comms An array of MPI communicators corresponding to the groups of compute processes. Typically each application component has an MPI communicator, containing all the processes in the component, in this array. All processes belonging to compute component i (a unique global index decided by the application for each component) provide the MPI communicator for component i in comp_comms(i). All I/O processes pass MPI_COMM_NULL for all communicators in this array.
    [in] io_comm The MPI communicator containing all the I/O processes. The I/O processes are separate (a disjoint set) from the compute processes. All I/O processes provide the communicator containing all the I/O processes via this argument; all compute processes pass MPI_COMM_NULL in this argument.
    [in] rearr (Optional) The data rearrangement mode to use; the choices are:
      • PIO_rearr_none : Do not use any form of rearrangement
      • PIO_rearr_box : Use the PIO internal box rearrangement
      • PIO_rearr_subset : Use the PIO internal subsetting rearrangement
      • PIO_rearr_any : Let the library choose the optimal rearranger

Return values
    ierr (Optional) The error return code. Set to PIO_NOERR on success, or an error code otherwise (see PIO_seterrorhandling for more information on how to customize/set error handling).
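
pio_init_intercomm_v2 carries the same information, but takes iosys as the first argument and infers the number of compute components from the size of comp_comms. A compact sketch of the changed call follows, using the same split of MPI_COMM_WORLD as in the previous example and assuming the generic PIO_init interface also resolves to this variant.

    program pio_intercomm_v2_sketch
      use mpi
      use pio, only : PIO_init, PIO_finalize, iosystem_desc_t
      implicit none

      type(iosystem_desc_t) :: iosys(1)            ! one compute component (illustrative)
      integer :: comp_comms(1), io_comm, sub_comm
      integer :: wrank, wsize, ierr
      logical :: is_io_proc

      call MPI_Init(ierr)                          ! run with at least two MPI ranks
      call MPI_Comm_rank(MPI_COMM_WORLD, wrank, ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, wsize, ierr)

      is_io_proc = (wrank == wsize - 1)            ! the last world rank is the lone I/O process
      call MPI_Comm_split(MPI_COMM_WORLD, merge(1, 0, is_io_proc), wrank, sub_comm, ierr)
      comp_comms(1) = merge(MPI_COMM_NULL, sub_comm, is_io_proc)
      io_comm       = merge(sub_comm, MPI_COMM_NULL, is_io_proc)

      ! Same semantics as pio_init_intercomm, but iosys comes first and the number of
      ! components is taken from the size of comp_comms
      call PIO_init(iosys, MPI_COMM_WORLD, comp_comms, io_comm)

      if (.not. is_io_proc) call PIO_finalize(iosys(1), ierr)
      call MPI_Finalize(ierr)
    end program pio_intercomm_v2_sketch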

◆ pio_init_intracomm()

subroutine spio_init::pio_init_intracomm ( integer, intent(in)  comm_rank,
integer, intent(in)  comm,
integer, intent(in)  nioprocs,
integer, intent(in)  naggprocs,
integer, intent(in)  ioprocs_stride,
integer, intent(in)  rearr,
type(iosystem_desc_t), intent(out)  iosys,
integer, intent(in), optional  base,
type(pio_rearr_opt_t), intent(in), optional, target  rearr_opts,
integer, intent(out), optional  ierr 
)

Initialize the I/O subsystem that is defined using an MPI intra-communicator.

This is a collective call on the MPI communicator, comm, used to initialize the I/O subsystem.

Parameters
    [in] comm_rank The rank (in comm) of the MPI process initializing the I/O subsystem
    [in] comm The MPI communicator containing all the MPI processes initializing the I/O subsystem
    [in] nioprocs The total number of I/O processes (a subset of the total number of processes in comm) in the I/O subsystem
    [in] naggprocs The total number of processes aggregating data in the I/O subsystem (this argument is currently ignored, and is retained for API backward compatibility)
    [in] ioprocs_stride The stride between two consecutive I/O processes in the I/O subsystem, i.e. the number of MPI processes to "skip" between assigning two consecutive I/O processes. A stride of 1 implies that consecutive MPI processes after the base rank are designated the I/O processes.
    [in] rearr The data rearrangement mode to use; the choices are:
      • PIO_rearr_none : Do not use any form of rearrangement
      • PIO_rearr_box : Use the PIO internal box rearrangement
      • PIO_rearr_subset : Use the PIO internal subsetting rearrangement
      • PIO_rearr_any : Let the library choose the optimal rearranger
    [out] iosys The handle to the initialized I/O system is returned via this argument, an I/O system descriptor structure containing the general I/O subsystem data and MPI structures
    [in] base (Optional) The base rank (MPI rank of the 1st I/O process) for I/O processes. By default, MPI rank 0 (in comm) is the base I/O process
    [in] rearr_opts (Optional) The data rearranger options to use for this I/O subsystem. The library includes support for a data rearranger that rearranges data among MPI processes to improve the I/O throughput of the application; the user can control the data rearrangement by passing these options to the library. The options are:
      • comm_type : The data rearranger communication mode. The data rearranger rearranges data between the compute processes (all the MPI processes in the I/O subsystem) and the I/O processes (a subset of the compute processes in the I/O subsystem, or a disjoint set of MPI processes in the I/O subsystem) before flushing the data written out by the user to the filesystem. The data rearrangement among the MPI processes can use MPI point-to-point or collective communication.
        • PIO_rearr_comm_p2p : Point-to-point communication
        • PIO_rearr_comm_coll : Collective communication
      • fcd : The data rearrangement flow control direction. The data rearranger can use flow control when rearranging data among the MPI processes. Flow control can be enabled for data rearrangement from compute processes to I/O processes (e.g. during a write operation), from I/O processes to compute processes (e.g. during a read operation), or both.
        • PIO_rearr_comm_fc_2d_enable : compute procs to I/O procs and vice versa
        • PIO_rearr_comm_fc_1d_comp2io : compute procs to I/O procs only
        • PIO_rearr_comm_fc_1d_io2comp : I/O procs to compute procs only
        • PIO_rearr_comm_fc_2d_disable : disable flow control
      • comm_fc_opts : The data rearranger flow control options, used when flow control is enabled:
        • enable_hs : Enable handshake (logical). If this option is enabled, a "handshake" is sent between communicating processes before data is sent
        • enable_isend : Enable non-blocking sends (logical). If this option is enabled, non-blocking sends (instead of blocking sends) are used to send data between MPI processes while rearranging data
        • max_pend_req : The maximum number of pending requests allowed at a time when MPI processes communicate to rearrange data among themselves (use PIO_REARR_COMM_UNLIMITED_PEND_REQ to allow an unlimited number of pending requests)
Return values
    ierr (Optional) The error return code. Set to PIO_NOERR on success, or an error code otherwise (see PIO_seterrorhandling for more information on how to customize/set error handling).
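
A minimal sketch of an intra-communicator setup that uses the optional base argument, assuming the generic PIO_init interface from the pio module resolves to pio_init_intracomm. The process layout (base rank 1, stride 2) is illustrative only, and the example expects at least two MPI ranks.

    program pio_intracomm_sketch
      use mpi
      use pio, only : PIO_init, PIO_finalize, iosystem_desc_t, PIO_rearr_subset
      implicit none

      type(iosystem_desc_t) :: iosys
      integer :: comm_rank, comm_size, nioprocs, ierr
      integer, parameter :: naggprocs = 1       ! ignored; retained for backward compatibility
      integer, parameter :: ioprocs_stride = 2  ! every 2nd rank after the base rank is an I/O rank
      integer, parameter :: io_base = 1         ! rank 1 is the first I/O process

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, comm_rank, ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, comm_size, ierr)

      ! With 8 ranks, base 1 and stride 2, ranks 1, 3, 5 and 7 become the I/O processes
      nioprocs = max(1, (comm_size - io_base + ioprocs_stride - 1) / ioprocs_stride)

      call PIO_init(comm_rank, MPI_COMM_WORLD, nioprocs, naggprocs, ioprocs_stride, &
                    PIO_rearr_subset, iosys, base=io_base, ierr=ierr)

      ! ... create files, define decompositions, read/write variables on iosys ...

      call PIO_finalize(iosys, ierr)
      call MPI_Finalize(ierr)
    end program pio_intracomm_sketch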