tutorial:torque [2017/06/08 22:45] afo214
tutorial:torque [2021/06/17 09:22] mjm519 [Using submission script]
#PBS -o /
#PBS -l nodes=1:
#PBS -l pmem=2GB
#PBS -q batch
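For reference, a minimal complete submission script along these lines might look as follows; the job name, output path, executable, and walltime here are illustrative assumptions, not values taken from this page:

<code bash submit.sh>
#!/bin/bash
#PBS -N example_job            # job name (assumed)
#PBS -o /home/user/job.out     # stdout file (illustrative path)
#PBS -l nodes=1:ppn=1          # one node, one processor per node (assumed)
#PBS -l pmem=2GB               # 2 GB of memory per process
#PBS -l walltime=01:00:00      # one-hour time limit (assumed)
#PBS -q batch                  # submit to the batch queue

cd $PBS_O_WORKDIR              # run from the directory where qsub was called
./my_program                   # the executable to run (assumed name)
</code>

Submit it with ''qsub submit.sh''; Torque reads the ''#PBS'' lines as if they were command-line options.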
</code>
If you do not want to write a submission script, you can do it just by calling ''qsub'' with the job options given directly on the command line.
Now we will run the code, but we set the job parameters using ''qsub'' command-line options instead of a submission script.
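As a sketch (the script name and resource values below are illustrative assumptions, not values from this page), such a call could look like:

<code bash>
# request 1 node with 1 processor, 2 GB per process, a one-hour limit,
# and the batch queue -- no #PBS header needed inside myscript.sh
qsub -l nodes=1:ppn=1,pmem=2gb,walltime=01:00:00 -q batch myscript.sh
</code>

Options given on the command line override any matching ''#PBS'' directives inside the script.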
==== Submitting a Small or Large Memory Job ====
You can use the option ''-l pmem'' to request the amount of memory per process that your job needs:
<code bash limited.sh>
qsub -l pmem=4gb,
</code>
Sometimes your job needs more memory. You can choose a larger memory size with the same option:
<code bash large.pbs>
</code>
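As an illustrative sketch of a large-memory script (the 16 GB figure and the program name are assumptions, not values from this page):

<code bash>
#!/bin/bash
#PBS -l nodes=1:ppn=1    # one node, one processor (assumed)
#PBS -l pmem=16GB        # larger per-process memory request (value assumed)
#PBS -q batch

cd $PBS_O_WORKDIR
./memory_hungry_program  # the executable to run (assumed name)
</code>

The scheduler will only place the job on a node that can satisfy the ''pmem'' request, so very large values may increase queue time.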
==== Running MATLAB ====
#PBS -o /
#PBS -l nodes=1:
#PBS -l pmem=2GB
#PBS -q batch
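A typical way to launch MATLAB non-interactively from such a submission script (the file name ''matlab_job.m'' is an assumption for illustration) is:

<code bash>
# run MATLAB in batch mode: no GUI, no splash screen,
# execute matlab_job.m and quit when it finishes
matlab -nodisplay -nosplash -r "run('matlab_job.m'); exit"
</code>

The trailing ''exit'' matters: without it MATLAB stays open waiting for input and the job runs until its walltime expires.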
However, you first have to have permission to use the GPU (granted by Prof. Takac) -- this is just a formality to allow certain users to use the video driver on polyp30.
If you are using TensorFlow, you can set a limit on the amount of GPU memory your process uses:
<code python>
config_tf = tf.ConfigProto()
config_tf.gpu_options.per_process_gpu_memory_fraction = p
session = tf.Session(config=config_tf)
</code>
in which **//p//** is the fraction of GPU memory to use (a number between zero and one).
==== Running MPI and Parallel Jobs ====