tutorial:torque (revision 2017/04/04 22:38 by jild13; previous revision 2016/11/10 14:58 by sertalpbilal)
Now we will run the code, but this time setting the job parameters with command-line options that start with the ''-'' character (e.g. ''-N JobName'').
  
===== Options =====
  
  * ''-q <queue>'' sets the queue. Often you will use the standard queue, so there is usually no need to set this.
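Instead of passing these options on the command line every time, they can also be written as ''#PBS'' directives at the top of the submission script itself. A minimal sketch (the job name, queue, and limits are illustrative; the ''${PBS_O_WORKDIR:-$PWD}'' fallback is only there so the script can also be tried outside Torque):

```shell
#!/bin/bash
#PBS -N demo_job             # job name shown by qstat (same as -N on the command line)
#PBS -l walltime=01:00:00    # one hour wall-clock limit
#PBS -l mem=2gb              # memory limit
#PBS -j oe                   # merge stdout and stderr into one output file

# Torque starts jobs in $HOME; move to the directory qsub was run from.
# The fallback to $PWD lets the script run outside Torque as well.
WORK_DIR="${PBS_O_WORKDIR:-$PWD}"
cd "$WORK_DIR"

echo "Running in $WORK_DIR on $(hostname)"
```

Options given on the ''qsub'' command line generally override the directives in the script.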
To show the jobs, use ''qstat'' or ''qstat -a''. You can also see more details using
<code>qstat -f</code>
To show the jobs of a particular user, use ''qstat -u "mat614"''. To remove a job, use
<code shell>
qdel JOB_ID
</code>

Moreover, you can use the following commands:
<code>
qstat -r : lists the running jobs
qstat -i : lists the jobs waiting in the queue
qstat -n : lists the polyps node(s) executing each job
</code>
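For repeated monitoring, these calls can be wrapped in a small helper; a sketch (''watch_my_jobs'' is a hypothetical name, and it assumes the Torque client tools are on your PATH):

```shell
# Hypothetical convenience function: redraw my job list every 30 seconds.
# Press Ctrl-C to stop. Requires qstat (Torque client) on the PATH.
watch_my_jobs() {
    while true; do
        clear
        qstat -u "$USER"   # only this user's jobs
        sleep 30
    done
}
```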
==== Queues ====
  
We have a few queues; you can list them with ''qstat -Q''
<code>
Queue            Memory CPU Time Walltime Node  Run Que Lm  State
| very long  | 240:00:00 |
  
===== Examples =====
  
==== Submitting a Small or Large Memory Job ====

You can use the option ''-l mem=size,vmem=size'' to limit the memory usage of your job:

<code bash limited.sh>
qsub -l mem=4gb,vmem=4gb test.pbs
</code>

Sometimes your job needs more memory. You can choose a larger size with the same option:

<code bash large.pbs>qsub -l mem=20gb test.pbs</code>
==== Running MATLAB ====
  
You just have to create a submission script that looks like this:
</code>
  
<note tip>Use the **-singleCompThread** [[https://www.mathworks.com/help/matlab/ref/maxnumcompthreads.html|option]] to make MATLAB use a single thread. A similar option may be needed for the program/solver you are using.</note>

==== Running Solvers (Gurobi/CPLEX/Mosek/AMPL/...) ====

To run solvers, you need to use the ''-V'' option (note the uppercase), e.g.:

<code>qsub -V submitFile.pbs</code>

This flag exports your environment variables to the job, which lets the solver find the necessary authentication information.
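If you would rather not remember the flag, the same effect can be requested from inside the submission script with a ''#PBS'' directive (a sketch; the directive is equivalent to passing ''-V'' on the command line):

```shell
#PBS -V    # export the submitting shell's environment to the job
```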
  
==== Interactive Jobs ====
  
If you do not care where you run your job, just use ''-I'' and do not specify any script to run.
and you will be running an interactive session on polyp15.
  
==== Using GPUs ====
  
  
However, you first have to have permission to use the GPU (given by Prof. Takac) -- this is just a formality to allow certain users to use the video driver on polyp30.
  
==== Running MPI and Parallel Jobs ====
  
<code bash mpi.pbs>
c2
</code>
===== Mass Operations =====
will cancel all jobs (both running and queued).
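Torque's ''qselect'' pairs well with ''qdel'' for this kind of mass operation; a sketch (the function name is hypothetical, and ''xargs -r'' is a GNU extension that skips running ''qdel'' when no jobs match):

```shell
# Hypothetical helper: cancel every job belonging to the current user.
# qselect prints one matching job ID per line; xargs hands them to qdel.
cancel_my_jobs() {
    qselect -u "$USER" | xargs -r qdel
}
```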
  

===== Advanced =====

The qsub command will pass certain environment variables to the job in its Variable_List attribute. The values of the following variables are taken from the environment of the qsub command:
  * **HOME** (the path to your home directory)
  * **LANG** (which language you are using)
  * **LOGNAME** (the name that you logged in with)
  * **PATH** (standard path to executables)
  * **MAIL** (location of the user's mail file)
  * **SHELL** (command shell, e.g. bash, sh, zsh, csh, etc.)
  * **TZ** (time zone)
These values are assigned to new names, formed by prefixing the current name with the string "PBS_O_". For example, the job will have access to an environment variable named PBS_O_HOME which holds the value of the variable HOME in the qsub command's environment. In addition to these standard environment variables, further environment variables are available to the job:
  * **PBS_O_HOST** (the name of the host upon which the qsub command is running)
  * **PBS_SERVER** (the hostname of the pbs_server to which qsub submits the job)
  * **PBS_O_QUEUE** (the name of the original queue to which the job was submitted)
  * **PBS_O_WORKDIR** (the absolute path of the current working directory of the qsub command)
  * **PBS_ARRAYID** (each member of a job array is assigned a unique identifier)
  * **PBS_ENVIRONMENT** (set to PBS_BATCH to indicate the job is a batch job, or to PBS_INTERACTIVE to indicate the job is a PBS interactive job)
  * **PBS_JOBID** (the job identifier assigned to the job by the batch system)
  * **PBS_JOBNAME** (the job name supplied by the user)
  * **PBS_NODEFILE** (the name of the file containing the list of nodes assigned to the job)
  * **PBS_QUEUE** (the name of the queue from which the job was executed)
  * **PBS_WALLTIME** (the walltime requested by the user or the default walltime allotted by the scheduler)

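Inside a job script these variables can be used directly; a sketch with fallbacks so the script can also be tried outside Torque (the fallback values are illustrative only):

```shell
# Use PBS-provided variables, falling back to local values when the
# script is run outside Torque (the fallbacks are only for testing).
JOB_ID="${PBS_JOBID:-local-test}"
JOB_NAME="${PBS_JOBNAME:-interactive}"
cd "${PBS_O_WORKDIR:-$PWD}"

# PBS_NODEFILE lists one line per assigned core; count them.
if [ -n "${PBS_NODEFILE:-}" ] && [ -f "$PBS_NODEFILE" ]; then
    NCORES=$(wc -l < "$PBS_NODEFILE")
else
    NCORES=1   # running outside Torque
fi

echo "Job $JOB_NAME ($JOB_ID) using $NCORES core(s)"
```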
==== TensorFlow with GPU ====
To use TensorFlow with a specific GPU, say GPU 1, you can simply set
<code bash>
export CUDA_VISIBLE_DEVICES=1
</code>
and then schedule your jobs with Torque to perform experiments on GPU 1.
tutorial/torque.txt · Last modified: 2024/02/28 13:12 by mjm519