Local CRAB
This page describes how to use CRAB on T3-Higgs.
Environment
Users are assumed to be familiar with the usual CRAB recipe.
The environment setup specific to this site is as follows:
source /cvmfs/cms.cern.ch/cmsset_default.sh # CMSSW
cmsrel RELEASE ; cd RELEASE/src ; cmsenv # guideline for getting cmsenv to run; the order is important
source /cvmfs/oasis.opensciencegrid.org/osg-software/osg-wn-client/3.2/current/el5-x86_64/setup.sh # Grid tools
source /share/apps/crab/crab.sh
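For example, a complete setup session, assuming the CMSSW_5_3_11 release used in the examples below, would look like this:
source /cvmfs/cms.cern.ch/cmsset_default.sh
cmsrel CMSSW_5_3_11                # create the release area
cd CMSSW_5_3_11/src
cmsenv                             # must be run from inside RELEASE/src, after cmsrel
source /cvmfs/oasis.opensciencegrid.org/osg-software/osg-wn-client/3.2/current/el5-x86_64/setup.sh
source /share/apps/crab/crab.sh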
Configuration
Most of crab.cfg can be configured however you like, except for these settings:
[USER]
return_data = 1
copy_data = 0
[CRAB]
scheduler = condor
These settings ensure that CRAB generates the right CMSSW.sh wrapper, which makes it straightforward to redirect the stage-out to Hadoop.
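For illustration, a minimal crab.cfg consistent with these requirements might look like the sketch below; the dataset path and parameter-set file are placeholders you must replace with your own:
[CRAB]
jobtype = cmssw
scheduler = condor

[CMSSW]
# placeholder dataset and parameter set; substitute your own
datasetpath = /YourDataset/YourEra-YourProcessing/AODSIM
pset = your_analysis_cfg.py
output_file = outfile.root
total_number_of_events = -1
events_per_job = 10000

[USER]
return_data = 1
copy_data = 0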
Steps
After your environment is set and crab.cfg is configured, it is time to create the jobs with crab -create. This gives you a task directory:
working directory /home/samir/CMSSW_5_3_11/src/crab_0_140705_005804/
Next we set the stage-out location. You need to edit CMSSW.sh inside the task directory:
/home/samir/CMSSW_5_3_11/src/crab_0_140705_005804/job/CMSSW.sh
Toward the end of the file, search for "file_list". You will find a line like this:
file_list="$SOFTWARE_DIR/outfile_$OutUniqueID.root"
Just below that line, place the copy command:
cp $RUNTIME_AREA/outfile_* /mnt/hadoop/store/user/$USER/crabtest/
NOTE: "outfile" was in my case where I specify it in crab.cfg. Make sure that you pick the right filename for the copy command. You will have a hint some lines above in the same script.
output_file = outfile.root
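Put together, the edited region of CMSSW.sh should look roughly like this (a sketch; the exact surrounding lines in your wrapper may differ):
file_list="$SOFTWARE_DIR/outfile_$OutUniqueID.root"
# copy the outputs directly into Hadoop storage instead of only returning them
cp $RUNTIME_AREA/outfile_* /mnt/hadoop/store/user/$USER/crabtest/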
That should be all. You can now submit your jobs, and if the input data is in the right place (/mnt/hadoop), your jobs should run and copy their output files to the directory you specified.
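For example, assuming the task directory from the example above, you can submit and monitor the jobs and then check the output directly on Hadoop once they finish:
crab -submit all -c crab_0_140705_005804   # submit all created jobs
crab -status -c crab_0_140705_005804       # monitor their progress
ls /mnt/hadoop/store/user/$USER/crabtest/  # verify the copied output files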
-- Main.samir - 2014-07-05