====== TORQUE ======
  
TORQUE provides control over batch jobs and distributed computing resources. It is an advanced open-source product based on the original PBS project and incorporates the best of both community and professional development. It incorporates significant advances in the areas of scalability, reliability, and functionality and is currently in use at tens of thousands of leading government, academic, and commercial sites throughout the world. TORQUE may be freely used, modified, and distributed under the constraints of the included license.

===== Prerequisite =====
In order to retrieve your output and error logs from TORQUE, you need password-less connections between nodes. If you have not set this up before, execute the following commands. They create a public/private key pair so that when a node wants to transfer a file to your home folder, it does not need your password.
After connecting to polyps, enter:

<code bash>
ssh-keygen -N ""
</code>

Then just press ENTER for every question. After that, type the following commands:

<code bash>
touch ~/.ssh/authorized_keys2
chmod 600 ~/.ssh/authorized_keys2
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys2
</code>
Now you will get the error log and output log files for your jobs.

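To confirm that the key-based login works, you can try a connection that is not allowed to prompt for a password (''polyp15'' here is just one of the node names used elsewhere on this page):

```shell
# Should print the node's hostname without asking for a password;
# if it fails, the key setup above did not take effect.
ssh -o BatchMode=yes polyp15 hostname
```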
  
===== Hardware =====
To show the jobs, use ''qstat'' or ''qstat -a''. You can also see more details using
<code>qstat -f</code>
To show the jobs of a specific user, use ''qstat -u "mat614"''. To remove a job, use
<code shell>
qdel JOB_ID
</code>
  
==== Queues ====
  
We have a few queues, which you can list with ''qstat -Q''
<code>
Queue            Memory CPU Time Walltime Node  Run Que Lm  State
---------------- ------ -------- -------- ----  --- --- --  -----
gpu                --      --       --      --    0   0 --   E R
medium             --      --       --      --    0   0 --   E R
short              --      --       --      --    0   0 --   E R
long               --      --       --      --    0   0 --   E R
batch              --      --       --      --    0   0 --   E R
verylong           --      --       --      --    0   0 50   E R
AMPL               --      --       --      --    0   0 10   E R
MOSEK              --      --       --      --    0   0 50   E R
</code>

If you want to use AMPL or MOSEK, you have to submit to the corresponding queue (AMPL or MOSEK), because we have a limited number of licenses for them.

You can see the queue limits using ''qstat -f -Q''
| very long  | 240:00:00 |
  
===== Examples =====
  
==== Submitting Large Memory Job ====
  
Sometimes your job needs more memory. This can be requested with ''-l mem=size'', for example:
<code>qsub -l mem=20gb test.pbs</code>
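The same request can also be written into the job script itself as a PBS directive, so it does not have to be repeated on the command line. A minimal sketch — the job name, queue, program, and input file are illustrative, not from this cluster:

```shell
#!/bin/bash
#PBS -N bigmem_job
#PBS -q batch
#PBS -l mem=20gb
# Start in the directory the job was submitted from
cd $PBS_O_WORKDIR
# Hypothetical solver and input, for illustration only
./my_solver input.dat
```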
  
==== Running MATLAB -- Example ====
  
You just have to create a submission script which looks like this:
</code>
  
<note tip>Use the **-singleCompThread** [[https://www.mathworks.com/help/matlab/ref/maxnumcompthreads.html|option]] to make Matlab use a single thread. A similar option may be needed for the program/solver you're using.</note>

==== Interactive Jobs ====
  
  
and you will be running an interactive session on polyp15.
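Resource requests combine with interactive mode the same way as with batch jobs. For example, to get an interactive session with four cores on one node (a sketch; the node and core counts are illustrative):

```shell
# -I requests an interactive job; -l requests resources as usual
qsub -I -l nodes=1:ppn=4
```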
  
==== Using GPUs ====
  
  
Line 132: Line 165:
However, you first have to have permission to use the GPU (given by Prof. Takac) -- this is just a formality to allow certain users to use the video driver on polyp30.
  
==== Running MPI and Parallel Jobs ====
  
<code bash mpi.pbs>
</code>
  
Allocating more than one CPU under PBS can be done in a number of ways, using the ''-l'' flag and the following resource descriptions:
  
  * nodes - specifies the number of separate nodes that should be allocated
  * ppn - how many processes to allocate for each node
  
The allocation made by PBS will be reflected in the contents of the nodefile, which can be accessed via the ''$PBS_NODEFILE'' environment variable.
  
The difference between ncpus and ppn is a bit subtle. ppn is used when you actually want to allocate multiple processes per node. ncpus is used to qualify the sort of nodes you want, and only secondarily to allocate multiple slots on a CPU. Some examples should help.
c2
</code>
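A job script can also size itself from the nodefile at run time, for example to pick the MPI process count. A minimal sketch — the fallback sample file and its contents are fabricated so the snippet can also run outside PBS:

```shell
#!/bin/bash
# Inside a job, PBS sets $PBS_NODEFILE to a file with one line per allocated slot.
# Outside PBS (for illustration), fall back to a fabricated sample nodefile.
NODEFILE=${PBS_NODEFILE:-sample_nodefile}
if [ ! -f "$NODEFILE" ]; then
    printf 'c0\nc0\nc1\nc2\n' > "$NODEFILE"   # pretend allocation: 4 slots
fi
# One line per slot, so the line count is the number of processes to start
NP=$(wc -l < "$NODEFILE")
echo "Running with $NP slots"
```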

===== Mass Operations =====

==== Submitting multiple jobs ====
An easy way to submit multiple jobs via PBS is to use a batch script. Suppose we would like to pass every file with the MPS extension inside a folder to our solver. We can write a PBS script such as
<code bash submit.pbs>
cd /home/sec312/
/usr/local/cplex/bin/x86-64_linux/cplex ${FILENAME}
</code>
and a BASH script:
<code bash bashloop.sh>
for f in dataset/*.mps
do
    qsub -q batch -v FILENAME=$f submit.pbs
done
</code>
Here, the ''-v'' option passes the arguments we define (''FILENAME'' in our example) into the PBS file. You can pass several arguments by separating them with commas. DO NOT use spaces between arguments.

After creating these two files, simply calling
<code>
./bashloop.sh
</code>
will submit all the jobs to Torque.

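Before submitting for real, it can help to dry-run the loop and only print the commands it would issue. A sketch — the sample files are created here purely for illustration:

```shell
# Create a couple of fabricated .mps files so the loop has something to match
mkdir -p dataset
touch dataset/a.mps dataset/b.mps
# Same loop as bashloop.sh, but echoing instead of submitting
for f in dataset/*.mps
do
    echo qsub -q batch -v FILENAME=$f submit.pbs
done
```

Once the printed commands look right, drop the ''echo'' to submit.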
==== Cancelling all jobs ====
You can call
<code bash>
qselect -u <username> -s R | xargs qdel
</code>
to cancel all of your running jobs.

<code bash>
qselect -u <username> | xargs qdel
</code>
will cancel all of your jobs (both running and queued).

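The same pattern works for other job states; for example, to cancel only jobs still waiting in the queue (state ''Q''), select by that state:

```shell
# Select only queued (not yet running) jobs and delete them
qselect -u <username> -s Q | xargs qdel
```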
  
===== Advanced =====
  * **PBS_QUEUE** (the name of the queue from which the job was executed)
  * **PBS_WALLTIME** (the walltime requested by the user or the default walltime allotted by the scheduler)
  
  
  
tutorial/torque.txt · Last modified: 2024/02/28 13:12 by mjm519