[FieldTrip] running FT scripts in a supercomputing cluster

Jose joseluisblues at gmail.com
Fri Jun 24 11:23:07 CEST 2016


Hello,

So, finally it works.
In case it is ever useful to other people in the future, this is the detail
of what I did:

1. copy the fieldtrip folder + the data into my_cluster_directory

2. write the scripts:

    -write my main MATLAB script (FT_0.m)
        this script contains neither the "addpath" nor the "ft_defaults"
command lines
        pay attention to use cd(getenv('PWD')) to move through the temporary
folders on the cluster (a minimal sketch follows below)

    -write a MATLAB script to compile the main script (compilation_script.m)

    ###
    addpath('my_cluster_directory/fieldtrip-20160317');
    ft_defaults % set the FieldTrip defaults and add its subdirectories to the path
    addpath('my_cluster_directory/fieldtrip-20160317/external/ctf') % for CTF MEG data

    % compile verbose (-mv); -N clears the default path and -p adds back
    % only the toolboxes we really need; runtime options are passed with -R
    mcc('-mv', '-N', '-p', 'stats', '-p', 'images', '-p', 'signal', ...
        '-R', '-nodisplay', '-R', '-singleCompThread', ...
        'FT_0.m');
    ###

    -write a bash script to run the compilation (run_compilation.sh)

    ###
    #!/bin/bash

    # run the compilation script inside MATLAB in batch mode
    module load MATLAB/2013b
    matlab < compilation_script.m
    exit
    ###

    -run it
    ./run_compilation.sh

3. Once the compilation is done, two files are generated: the executable FT_0
and the wrapper script run_FT_0.sh.
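
Note that the compiled executable must be run against the MATLAB runtime of
the same release used to compile it (hence MATLAB/2013b is loaded again in
the job script below). The mcc-generated wrapper run_FT_0.sh sets up the
library paths and expects the MATLAB (or MCR) root directory as its first
argument, for example:

    ###
    # $EBROOTMATLAB is set by the MATLAB module on this cluster
    ./run_FT_0.sh $EBROOTMATLAB
    ###
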
 Next, I need to customize my job bash script as usual

    ###
    #!/bin/bash

    #PBS -N FT_JOSE_1
    #PBS -o FT_JOSE_1.log
    #PBS -e FT_JOSE_1.err
    #PBS -q default
    #PBS -l walltime=2:00:00
    #PBS -l nodes=1:ppn=1
    #PBS -l vmem=10gb
    #PBS -m ae

    # name of the compiled executable
    matFileName=FT_0

    # directory where the executable and run script can be found
    # PBS_O_WORKDIR variable points to the directory you are submitting from

    ORIGDIR=$PBS_O_WORKDIR
    WORKDIR=$VSC_SCRATCH_NODE/$PBS_JOBID
    DATADIR=/user/data/gent/gvo000/gvo00022/vsc41880/

    echo Hostname: $(hostname)
    echo ORIGDIR: $ORIGDIR
    echo WORKDIR: $WORKDIR

    mkdir -p $WORKDIR
    cp -ar $ORIGDIR/run_FT_0.sh $WORKDIR/
    cp -ar $ORIGDIR/FT_0 $WORKDIR/
    cp -ar $ORIGDIR/1_data $WORKDIR/
    cp -ar $ORIGDIR/2_processed_data $WORKDIR/

    # version of MATLAB
    version=2013b

    # load modules
    module load cluster
    module load MATLAB/${version}

    # check the working directory
    if [ ! -d $WORKDIR ]
    then
      echo "Directory $dir is not a directory"
      exit 1
    fi

    # enter the working directory
    cd $WORKDIR
    echo $WORKDIR

    # check that the compiled executable exists
    if [ ! -x $matFileName ]
    then
      echo "No executable $matFileName found."
      exit 2
    fi

    # define the sh file
    script=run_${matFileName}.sh

    # check the sh file
    if [ ! -x  $script ]
    then
      echo "No run script $script found"
      exit 3
    fi

    # make cache dir
    # TMPDIR is set and created by torque. 1 unique dir per job
    cdir=$TMPDIR/mcrcache

    # check cache dir
    mkdir -p $cdir
    if [ ! -d $cdir ]
    then
      echo "No tempdir $cdir found."
      exit 1
    fi

    # define dir
    export MCR_CACHE_ROOT=$cdir

    # 1GB cache (more than large enough)
    export MCR_CACHE_SIZE=$((1024*1024*1024))

    # run the compiled script; the mcc-generated run script takes the
    # MATLAB root directory as its first argument
    ./$script $EBROOTMATLAB

    cd $WORKDIR

    cp -r 2_processed_data $ORIGDIR/

    cd
    rm -rf $WORKDIR

    # END

    ###

4. Next I submit the job:
   qsub FT_0_bash.sh
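
   The job can then be followed with the standard PBS/Torque commands:

    ###
    qstat -u $USER    # check the state of my jobs
    qdel <jobid>      # cancel a job if something goes wrong
    ###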

Many thanks to Anne!

Jose

On 16 June 2016 at 10:43, Anne Urai <anne.urai at gmail.com> wrote:

> Hi Jose,
>
> When calling mcc from Matlab, a dependency analysis is first carried out
> (at least in more recent versions of Matlab): basically, Matlab goes
> through the script you're compiling and finds all the functions that are
> called (which must be on the path). These are all added to the executable.
> In principle, if all the folders you need are on the path (which should be
> the case when you call ft_defaults), the executable can run.
>
> Now, only in the case where the dependency analysis doesn't recognize
> certain functions (because they are, for example, generated through
> str2func) do you need to add them manually. I found this out through trial
> and error - I'd first try to compile using the bare bones
>
> mcc('-mv', '-N',  '-R', '-nodisplay', '-R', '-singleCompThread', 'FT_0.m');
>
> When you then run the executable, you'll get an error message if a
> function or a toolbox is missing (and then you can add only those that you
> need and compile again).
>
> In your FT_0.sh, you should indeed load the MCR - otherwise, the
> executable can't run. For this, I'd recommend contacting the admin of the
> supercomputer cluster, since the way I do it on the cluster here is
> specific to the setup. You'll probably have to activate this in your
> FT_0.sh yourself and add the path to the cache of each node (something
> like export MCR_CACHE_ROOT=$TMPDIR).
>
> Good luck!
>
> Anne E. Urai, MSc
> PhD student | Institut für Neurophysiologie und Pathophysiologie
> Universitätsklinikum Hamburg-Eppendorf | Martinistrasse 52, 20246 |
> Hamburg, Germany
> www.anneurai.net / @AnneEUrai <https://twitter.com/AnneEUrai>
>
> From: Jose <joseluisblues at gmail.com>
> Reply: Jose <joseluisblues at gmail.com>
> Date: 15 June 2016 at 19:14:43
> To: Anne Urai <anne.urai at gmail.com>
> Subject:  Re: [FieldTrip] running FT scripts in a supercomputing cluster
>
> Hi Anne,
>
> Thanks for the detailed response,
> I have a couple questions if I may,
>
> So, if I understand correctly, I need to know a priori which functions I
> want to use? That's a bit strange, no? Because a given function might
> depend on another one, which I wouldn't notice unless I inspected all the
> scripts.
>
> The other thing is that I was trying to compile on the cluster, not
> locally, but anyway I tried locally with something like this:
>
> % these paths will be added at compilation
> addpath('/home/joseluis/Documents/Software/fieldtrip-20160317');
> ft_defaults;
> addpath('/home/joseluis/Documents/Software/fieldtrip-20160317/qsub');
> addpath('/home/joseluis/Documents/Software/fieldtrip-20160317/fileio');
> addpath('/home/joseluis/Documents/Software/fieldtrip-20160317/fileio/private');
>
> % options: compile verbose, only use the toolboxes we really need
> % !!! runtime options should be preceded by - to work!
> % don't need to activate the -nojvm flag, can still plot from the executable
> mcc('-mv', '-N', '-p', 'stats', '-p', 'images', '-p', 'signal', ...
>     '-R', '-nodisplay', '-R', '-singleCompThread', ...
>     '-a', '/home/joseluis/Documents/Software/fieldtrip-20160317/fileio/ft_read_event.m', ...
>     '-a', '/home/joseluis/Documents/Software/fieldtrip-20160317/fileio/private/read_ctf_cls.m', ...
>     '-a', '/home/joseluis/Documents/Software/fieldtrip-20160317/ft_preprocessing', ...
>     'FT_0.m');
>
> However when I copy FT_0 and run_FT_0.sh and run the job I get:
>
> ./FT_0: error while loading shared libraries: libmwmclmcrrt.so.8.1: cannot
> open shared object file: No such file or directory
>
> Which, from what I found on the Internet, seems to be related to the fact
> that I need to run the MCR installer on the cluster?
>
> thanks
>
> Jose
>
> On 15 June 2016 at 11:03, Anne Urai <anne.urai at gmail.com> wrote:
>
>> Hi Jose,
>>
>> I ran into similar dependency issues when compiling fieldtrip, and
>> converged on the following:
>>
>> % these paths will be added at compilation
>> addpath(genpath('~/code/Tools'));
>> addpath('~/Documents/fieldtrip');
>> ft_defaults; % add everything to the path that we need
>>
>> addpath('~/Documents/fieldtrip/qsub');
>> addpath(genpath('~/Documents/fieldtrip/template/')); % neighbouring matfile
>>
>> if strcmp(fname, 'B3a_clusterStatsERF.m') || strcmp(fname, 'B3b_clusterStatsTFR.m'),
>>     addpath('~/Documents/fieldtrip/statfun/'); % need the combineClusters mex file
>>     addpath('~/Documents/fieldtrip/external/spm8/'); % for neighbour definition
>>     % http://mailman.science.ru.nl/pipermail/fieldtrip/2014-July/008238.html
>> end
>>
>> % options: compile verbose, only use the toolboxes we really need
>> % !!! runtime options should be preceded by - to work!
>> % don't need to activate the -nojvm flag, can still plot from executable
>> if strcmp(fname, 'B3a_clusterStatsERF.m') || strcmp(fname, 'B3b_clusterStatsTFR.m'),
>>     % statfun is called with a weird eval construction, so it is not
>>     % recognized by the dependency analysis of mcc
>>     mcc('-mv', '-N', '-p', 'stats', '-p', 'images', '-p', 'signal', ...
>>         '-R', '-nodisplay', '-R', '-singleCompThread', ...
>>         '-a', '~/Documents/fieldtrip/ft_statistics_montecarlo.m', ...
>>         '-a', '~/Documents/fieldtrip/statfun/ft_statfun_depsamplesT.m', ...
>>         fname);
>> else
>>     % no need to specify additional files
>>     mcc('-mv', '-N', '-p', 'stats', '-p', 'images', '-p', 'signal', ...
>>         '-R', '-nodisplay', '-R', '-singleCompThread', ...
>>         fname);
>> end
>>
>> So, the trick is to add everything to your path before compiling, and then
>> use the -N option and define specific folders and possibly extra functions
>> with -a. Make sure to include additional subfolders from Fieldtrip (such as
>> the template folder) only if you need them, since including them will
>> increase the size of the executable considerably. Also, some functions like
>> the ft_statistics ones are not called directly but instead evaluated using
>> statmethod = str2func(['ft_statistics_' cfg.method]) - this causes the
>> dependency analysis of the compiler to skip those functions, so you'll have
>> to add them manually.
>>
>>
>> PS: a similar setup should work directly with the command-line mcc, but I
>> found it easier to run ft_defaults from Matlab and then compile from within
>> a Matlab script.
>>
>>
>> Hope this helps!
>>
>>
>> Anne E. Urai, MSc
>> PhD student | Institut für Neurophysiologie und Pathophysiologie
>> Universitätsklinikum Hamburg-Eppendorf | Martinistrasse 52, 20246 |
>> Hamburg, Germany
>> www.anneurai.net / @AnneEUrai <https://twitter.com/AnneEUrai>
>>
>> On 10 June 2016 at 10:42, Jose <joseluisblues at gmail.com> wrote:
>>
>>> dear list,
>>>
>>> I'm trying to analyse CTF MEG data through the Flemish Supercomputer
>>> Centre. I wrote a Matlab script that runs the main FT functions of my
>>> pipeline (ft_read_event, read_ctf_cls, and ft_preprocessing). This script
>>> works well when I run it locally. To compile my function on the
>>> supercomputing cluster, I initially used addpath to include FT and
>>> ft_defaults to set the configuration inside my script, but this wasn't
>>> working. I tried using a startup.m script, but that didn't work either.
>>> Maybe I'm missing something? My bash script to compile looks like this:
>>>
>>> #!/bin/bash
>>> module load MATLAB/2013b
>>> FTDIR=$VSC_DATA_VO_USER/fieldtrip-20160317
>>> mcc -mv FT_0.m
>>>
>>> I also tried mcc -mv FT_0.m -I $FTDIR
>>> I run the compilation in the same folder that contains the FT folder.
>>> When I run my bash job I always get the same error: Undefined function
>>> 'ft_read_event' for input arguments of type 'char'.
>>>
>>> I've been looking around but still haven't found a solution.
>>>
>>> Any hints about this would be really appreciated,
>>> best,
>>> Jose
>>>
>>>
>>
>>
>
>
> --
> José Luis ULLOA FULGERI
> +32 (0)4 77 42 90 07
> +32 (0)4 92 64 64 77
>
>


-- 
José Luis ULLOA FULGERI
+32 (0)4 77 42 90 07
+32 (0)4 92 64 64 77