function [out,res] = fevalDistr( funNm, jobs, varargin )


 Wrapper for embarrassingly parallel function evaluation.

 Runs "r=feval(funNm,jobs{i}{:})" for each job in a parallel manner. jobs
 should be a cell array of length nJob and each job should be a cell array
 of parameters to pass to funNm. funNm must be a function in the path and
 must return a single value (which may be a dummy value if funNm writes
 results to disk). Different forms of parallelization are supported
 depending on the hardware and Matlab toolboxes available. The type of
 parallelization is determined by the parameter 'type' described below.

 type='LOCAL': jobs are executed using a simple "for" loop. This implies
 no parallelization and is the default fallback option.
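 The semantics of type='local' amount to a plain loop, roughly (a sketch
 for illustration; the actual implementation also tracks progress):

```matlab
% Sketch of what type='local' does (illustrative, not the actual source):
nJob = length(jobs); res = cell(1, nJob);
for i = 1:nJob, res{i} = feval(funNm, jobs{i}{:}); end
out = 1;  % 1 indicates all jobs completed successfully
```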

 type='PARFOR': jobs are executed using a "parfor" loop. This option is
 only available if the Matlab *Parallel Computing Toolbox* is installed.
 Make sure to set up Matlab workers first using "matlabpool open".
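 For example, assuming the Parallel Computing Toolbox is installed (note
 that newer Matlab releases replace matlabpool with parpool):

```matlab
matlabpool open  % start local workers first (parpool in newer Matlab)
jobs = {{rand(500),ones(25)}, {rand(500),ones(25)}};
[out, res] = fevalDistr('conv2', jobs, 'type', 'parfor');
matlabpool close
```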

 type='DISTR': jobs are executed on the Caltech cluster. Distributed
 The distributed queuing system must be installed separately. Currently
 this option is only supported on the Caltech cluster but could easily be
 ported to any Linux cluster, as it requires only SSH and a shared
 filesystem.
 Parameter pLaunch is used for controller('launchQueue',pLaunch{:}) and
 determines cluster machines used (e.g. pLaunch={48,401:408}).
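 For example, a call using the sample pLaunch above might look like the
 following (the pLaunch values are illustrative and must match your
 cluster configuration):

```matlab
jobs = {{rand(500),ones(25)}, {rand(500),ones(25)}};
pLaunch = {48, 401:408};  % cluster machine spec (illustrative values)
[out, res] = fevalDistr('conv2', jobs, 'type', 'distr', ...
  'pLaunch', pLaunch, 'group', 2);  % send jobs in batches of 2
```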

 type='COMPILED': jobs are executed locally in parallel by first compiling
 an executable and then running it in background. This option requires the
 *Matlab Compiler* to be installed (but does NOT require the Parallel
 Computing Toolbox). Compiling can take 1-10 minutes, so use this option
 only for large jobs. (On Linux, alter startup.m so that addpath() is
 called only if ~isdeployed; otherwise you will get an error about "CTF"
 after compiling.)
 Note that relative paths will not work after compiling so all paths used
 by funNm must be absolute paths.
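 For example, a hedged sketch using a hypothetical function myProcess
 (not part of the toolbox) that reads its input from disk; note every
 path is absolute, as relative paths fail after compiling:

```matlab
% myProcess is a hypothetical user function on the Matlab path
nJob = 8; jobs = cell(1, nJob);
for i = 1:nJob
  jobs{i} = {'/abs/path/to/data', i};  % absolute path, never relative
end
[out, res] = fevalDistr('myProcess', jobs, 'type', 'compiled');
```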

 type='WINHPC': jobs are executed on a Windows HPC Server 2008 cluster.
 Similar to type='COMPILED', except after compiling, the executable is
 queued to the HPC cluster where all computation occurs. This option
 likewise requires the *Matlab Compiler*. Paths to data, etc., must be
 absolute paths and accessible from the HPC cluster. Parameter pLaunch must
 have two fields 'scheduler' and 'shareDir' that define the HPC Server.
 Extra parameters in pLaunch add finer control, see fedWinhpc for details.
 For example, at MSR one possible cluster is defined by scheduler =
 'MSR-L25-DEV21' and shareDir = '\\msr-arrays\scratch\msr-pool\L25-dev21'.
 Note that calls to 'job submit' from Matlab will hang unless the password
 is cached (simply run 'job submit' once from a cmd prompt and enter the
 password).
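 For example, using the MSR cluster settings quoted above (myProcess is a
 hypothetical user function; substitute your own scheduler and share):

```matlab
% scheduler/shareDir values are the MSR example from the text above
pLaunch = struct('scheduler', 'MSR-L25-DEV21', ...
  'shareDir', '\\msr-arrays\scratch\msr-pool\L25-dev21');
jobs = {{'\\msr-arrays\scratch\msr-pool\L25-dev21\data', 1}};
[out, res] = fevalDistr('myProcess', jobs, 'type', 'winhpc', ...
  'pLaunch', pLaunch);
```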

  [out,res] = fevalDistr( funNm, jobs, [varargin] )

  funNm      - name of function that will process jobs
  jobs       - [1xnJob] cell array of parameters for each job
  varargin   - additional params (struct or name/value pairs)
   .type       - ['local'], 'parfor', 'distr', 'compiled', 'winhpc'
   .pLaunch    - [] extra params for type='distr' or type='winhpc'
   .group      - [1] send jobs in batches (only relevant if type='distr')

  out        - 1 if jobs completed successfully
  res        - [1xnJob] cell array containing results of each job

  % Note: in this case parallel versions are slower since conv2 is so fast
  n=16; jobs=cell(1,n); for i=1:n, jobs{i}={rand(500),ones(25)}; end
  tic, [out,J1] = fevalDistr('conv2',jobs,'type','local'); toc,
  tic, [out,J2] = fevalDistr('conv2',jobs,'type','parfor'); toc,
  tic, [out,J3] = fevalDistr('conv2',jobs,'type','compiled'); toc
  [isequal(J1,J2), isequal(J1,J3)], figure(1); montage2(cell2array(J1))

 See also matlabpool mcc

 Piotr's Computer Vision Matlab Toolbox      Version 3.26
 Copyright 2014 Piotr Dollar.  [pdollar-at-gmail.com]
 Licensed under the Simplified BSD License [see external/bsd.txt]
