mpipool package

class mpipool.MPIExecutor(master=0, comm=None, rejoin=True)
    Bases: concurrent.futures._base.Executor

    MPI-based Executor. Uses all available MPI processes to execute
    submissions to the pool. The MPI process with rank 0 continues while
    all other ranks halt and wait for tasks.

    property idling

    is_master()

    is_worker()

    map(fn, *iterables)
        Submits jobs for as long as all iterables provide values and
        returns an iterator with the results. The iterables are consumed
        lazily.

    shutdown()
        Close the pool and tell all workers to stop their work loop.

    property size

    submit(fn, /, *args, **kwargs)
        Submit a task to the MPIPool. fn(*args, **kwargs) will be called
        on an MPI process, meaning that all data must be communicable
        over the MPI communicator, which by default uses pickle.

        Parameters:
            fn (callable) – Function to call on the worker MPI process.
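Since MPIExecutor subclasses the standard concurrent.futures Executor, submit() and map() follow the usual Executor contract. A minimal sketch of that contract, using the standard-library ThreadPoolExecutor as a stand-in so it runs without an MPI launcher (with mpipool you would construct an MPIExecutor and run the script under mpiexec instead):

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

# ThreadPoolExecutor stands in for MPIExecutor here so the sketch is
# runnable without MPI; the Executor interface is the same.
with ThreadPoolExecutor(max_workers=4) as pool:
    fut = pool.submit(square, 3)                # one task -> a Future
    squares = list(pool.map(square, range(5)))  # iterables consumed lazily

print(fut.result())  # 9
print(squares)       # [0, 1, 4, 9, 16]
```

With MPIExecutor, each call to square would run on a worker rank, so its arguments and return value must be picklable for transport over the MPI communicator.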

    submit_batch(fn, *iterables)
        Submits jobs lazily for as long as all iterables provide values.

        Returns: A batch object
        Return type: Batch

    workers_exit()

class mpipool.MPIPool
    Bases: multiprocessing.pool.Pool

    apply_async(fn, args=None, kwargs=None)
        Asynchronous version of apply() method.
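Since MPIPool subclasses multiprocessing.pool.Pool, apply_async() returns a standard AsyncResult. A minimal sketch of the call pattern, using the standard-library ThreadPool as a stand-in so it runs without MPI:

```python
from multiprocessing.pool import ThreadPool

def add(a, b):
    return a + b

# ThreadPool stands in for MPIPool here so the sketch runs without MPI.
# apply_async returns an AsyncResult immediately; .get() blocks until
# the result is ready.
with ThreadPool(2) as pool:
    res = pool.apply_async(add, (2, 3))
    value = res.get()

print(value)  # 5
```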

    close()

    imap(fn, iterable)
        Equivalent of map() – can be MUCH slower than Pool.map().

    imap_unordered(fn, iterable)
        Like imap(), but the ordering of the results is arbitrary.

    map(fn, iterable)
        Apply fn to each element in iterable, collecting the results in
        a list that is returned.
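map() blocks until every result has been collected into a list, while imap() returns an iterator that yields results one at a time, in order. A sketch of the difference, using the standard-library ThreadPool as a stand-in since MPIPool inherits these methods from multiprocessing.pool.Pool:

```python
from multiprocessing.pool import ThreadPool

def double(x):
    return 2 * x

# ThreadPool stands in for MPIPool here so the sketch runs without MPI.
with ThreadPool(2) as pool:
    eager = pool.map(double, [1, 2, 3])        # blocks, returns a list
    lazy = list(pool.imap(double, [1, 2, 3]))  # iterator, consumed lazily

print(eager)  # [2, 4, 6]
print(lazy)   # [2, 4, 6]
```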

    map_async(fn, iterable)
        Asynchronous version of map() method.

    starmap(fn, iterables)
        Like map(), but the elements of the iterable are expected to be
        iterables as well and will be unpacked as arguments. Hence fn
        and (a, b) becomes fn(a, b).
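The unpacking behaviour of starmap() can be sketched with the standard-library ThreadPool as a stand-in (the Pool interface is identical, so the same call works on an MPIPool):

```python
from multiprocessing.pool import ThreadPool

def add(a, b):
    return a + b

# starmap unpacks each element as positional arguments, so (1, 2)
# becomes add(1, 2). ThreadPool stands in for MPIPool here so the
# sketch runs without MPI.
with ThreadPool(2) as pool:
    sums = pool.starmap(add, [(1, 2), (3, 4)])

print(sums)  # [3, 7]
```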

    starmap_async(fn, iterables)
        Asynchronous version of starmap() method.

    workers_exit()