Re: [AMBER] bash scripting for MD tasks

From: James Starlight <jmsstarlight.gmail.com>
Date: Thu, 5 Feb 2015 15:10:34 +0100

Hi all!
Here is another problem I faced while scripting an automated workflow that
loops over both trajectories and topologies to build cpptraj input files.
Briefly, I have an equal number of topologies and trajectories that share
the same base names and differ only in the extension:

gleb.gpu2:/data2/> ls tr_all
7D4-androsta.mdcrd 7D4-androste.mdcrd 7D4-decenal.mdcrd
gleb.gpu2:/data2/> ls top_all
7D4-androsta.top 7D4-androste.top 7D4-apo.top 7D4-decenal.top

Now I need a loop over both sets of inputs that does something like:

workdir=/data2
all_traj=${workdir}/tr_all
all_top=${workdir}/top_all
Area=:Mol

for top in all_top and for traj in all_traj; do # how to make this????
  top_n=$(basename "$top") # how to select the name without the extension here??
  traj_n=$(basename "$traj") # how to select the name without the extension here??
  # ++ add here some condition to do the step below only if the name of {top} == {traj}
  # make the input for my program, combining the inputs from both dirs
  printf "parm ${top_n}\ntrajin ${traj_n}\nwatershell ${Area} watershell_${top_n}_${Area}.dat :WAT.O* lower $low upper $up" > ./water_${top_n}.in
done
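
Something along these lines might work, though I have not tested it; it
assumes the tr_all/top_all directories from the listing above, keeps the
watershell line from my draft verbatim, and leaves low and up to be set
elsewhere:

workdir=/data2
all_traj=${workdir}/tr_all
all_top=${workdir}/top_all
Area=:Mol

for top in "${all_top}"/*.top; do
    name=$(basename "$top" .top)   # basename's second argument strips the extension
    traj=${all_traj}/${name}.mdcrd
    # only write an input if the same-named trajectory actually exists
    # (e.g. 7D4-apo.top has no 7D4-apo.mdcrd and is skipped)
    if [ -f "$traj" ]; then
        printf 'parm %s\ntrajin %s\nwatershell %s watershell_%s_%s.dat :WAT.O* lower %s upper %s\n' \
            "$top" "$traj" "$Area" "$name" "$Area" "$low" "$up" > "water_${name}.in"
    fi
done

(An equivalent pure-bash way to drop the extension would be
name=${top##*/} followed by name=${name%.top}.)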


I'd be very thankful for any suggestions!

2014-09-29 11:22 GMT+02:00 James Starlight <jmsstarlight.gmail.com>:

> After some tests, the idea with inner and outer loops has not worked well,
> because it launched each simulation on each GPU (as was expected :) ),
> even with the WAIT present in the inner GPU loop. I think some option
> must also be provided to terminate the inner (GPU) loop after its first
> pass for each MD job (outer loop).
>
> n=2 # set the number of available GPUs
> # run each simulation on a free GPU
> for sim in "$simulations"/* ; do
>   for ((i=0; i<n; i++)); do
>     export CUDA_VISIBLE_DEVICES="$i"
>     simulation=$(basename "$sim")
>     pushd "$sim"
>     chmod +x ./${simulation}.Sh
>     echo "Simulation of ${simulation} on GPU ${i} is in progress!"
>     ./${simulation}.Sh &
>     # add some command here that would forbid putting the same simulation
>     # on the same GPU, i.e. force termination of the inner loop
>     popd
>   done
>   wait
> done
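>
> Perhaps a single flat loop with a round-robin GPU assignment would avoid
> the nested loop entirely; an untested sketch (n and $simulations as
> above):
>
> n=2                                # number of available GPUs
> count=0
> for sim in "$simulations"/* ; do
>     simulation=$(basename "$sim")
>     gpu=$((count % n))             # GPU 0, 1, 0, 1, ...
>     echo "Simulation of ${simulation} on GPU ${gpu} is in progress!"
>     # run in a subshell so the cd does not leak into the outer loop
>     ( cd "$sim" && chmod +x ./${simulation}.Sh && CUDA_VISIBLE_DEVICES=$gpu ./${simulation}.Sh ) &
>     count=$((count + 1))
>     # once one job per GPU is running, wait for the batch to finish
>     (( count % n == 0 )) && wait
> done
> wait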
>
>
> James
>
> 2014-09-29 9:06 GMT+02:00 James Starlight <jmsstarlight.gmail.com>:
>
>> Hi Ross,
>>
>> Thank you very much for the ideas!
>> I think scripting (some brief concepts and examples) should be included
>> in future Amber workshops as an integral part of any computational
>> research :).
>> Regarding my question: I've also considered a two-nested-loops scheme:
>> an outer loop (run the sh file associated with simulation_i in the
>> background) and an inner loop (set k as the GPU number) plus wait (I'm
>> not sure how to tie this delay to the outer loop, but I should check
>> the syntax first).
>>
>> But in my case the problem can probably be simplified, because I need
>> to run each simulation on a free GPU in parallel (i.e. use n GPUs for
>> the same number of simulations):
>>
>> # so just define this at the beginning of the script:
>> nvidia-smi -c 3   # put the GPUs into compute-exclusive mode
>> for sim in "$simulations"/* ; do
>>   simulation=$(basename "$sim")
>>   echo "Simulation of ${simulation} is in progress!"
>>   pushd "$sim"
>>   chmod +x ./${simulation}.Sh
>>   ./${simulation}.Sh &
>>   popd
>> done
>>
>> I think it should run a smallish number of simulations (for instance
>> 3-5) in parallel, using a free GPU for each MD, shouldn't it? BTW, what
>> will happen in the case of a mismatch between the number of available
>> GPUs and the number of simulations to run (e.g. if I have 2 GPUs and 10
>> Sh files)?
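>>
>> My guess is that all ten jobs would start at once, the first two would
>> grab the GPUs, and in compute-exclusive mode the other eight would
>> presumably fail to acquire a device, so some throttling is needed. An
>> untested variant that detects the GPU count instead of hardcoding it:
>>
>> n=$(nvidia-smi -L | wc -l)   # nvidia-smi -L prints one line per GPU
>> count=0
>> for sim in "$simulations"/* ; do
>>     simulation=$(basename "$sim")
>>     ( cd "$sim" && ./${simulation}.Sh ) &
>>     count=$((count + 1))
>>     # keep at most n jobs in flight at any time
>>     (( count % n == 0 )) && wait
>> done
>> wait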
>>
>>
>>
>> James
>>
>
>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Feb 05 2015 - 06:30:03 PST