tutorial:torque [2017/04/04 22:38] jild13 [Advanced]
tutorial:torque [2017/11/06 09:51] sertalpbilal [Directly submitting job] (typo fix)
</code>
If you do not want to write a submission script, you can submit the job directly by calling:
<code>
</code>
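A typical shape for such a direct call, assuming a simple Python job, is to pipe the command into ''qsub'' with inline options; the job name and resource values below are hypothetical examples, not taken from this page:
<code bash>
# Direct submission without a script file:
# run my_script.py on 1 node with 4 cores for 1 hour (hypothetical values)
echo "python my_script.py" | qsub -N myjob -l nodes=1:ppn=4,walltime=01:00:00
</code>
''qsub'' prints the assigned job ID on success, which you can use later to monitor or remove the job.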
Now we will run the code, but set the job parameters using ''qsub'' options.
===== Options =====
^ Option ^ Description ^
You can find detailed information about these options in the ''qsub'' documentation.
<note tip>You need to use the corresponding option from the table above when submitting your job.</note>
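A minimal submission script that sets a few commonly used Torque/PBS options (all names and values here are hypothetical examples, not taken from this page's table) could look like:
<code bash>
#!/bin/bash
#PBS -N myjob                             # job name
#PBS -l nodes=1:ppn=4,walltime=01:00:00   # 1 node, 4 cores, 1 hour
#PBS -o myjob.out                         # standard output file
#PBS -e myjob.err                         # standard error file
#PBS -m abe                               # mail on abort, begin, and end

cd "$PBS_O_WORKDIR"   # Torque starts jobs in $HOME; move to the submission dir
python my_script.py
</code>
Save this as, e.g., ''job.sh'' and submit it with ''qsub job.sh''.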
===== Monitoring and Removing jobs =====
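As a sketch of the usual workflow, jobs are monitored with ''qstat'' and removed with ''qdel''; the job ID below is a hypothetical example:
<code bash>
qstat                # list queued and running jobs
qstat -u "$USER"     # only jobs belonging to the current user
qstat -f 12345       # full details of job 12345 (hypothetical ID)
qdel 12345           # remove job 12345 from the queue
</code>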
<note tip>Use the **-singleCompThread** option when running MATLAB jobs.</note>
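For example, a non-interactive MATLAB run restricted to a single computational thread looks like the following; the script name is a hypothetical example:
<code bash>
# -singleCompThread keeps MATLAB on one computational thread, so the job
# does not use more cores than it reserved (script name is hypothetical)
matlab -nodisplay -nosplash -singleCompThread -r "run('my_analysis.m'); exit"
</code>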
==== Running Solvers ====
In order to run solvers, you need to use the following:
<code>
</code>
However, you first need permission to use the GPU (granted by Prof. Takac); this is just a formality that allows certain users to use the video driver on polyp30.
+ | |||
If you are using TensorFlow in Python, you can limit the amount of GPU memory a process uses with:
<code python>import tensorflow as tf

config_tf = tf.ConfigProto()
config_tf.gpu_options.per_process_gpu_memory_fraction = p
sess = tf.Session(config=config_tf)</code>
in which **//p//** is the fraction of GPU memory to use (a number between zero and one).
==== Running MPI and Parallel Jobs ====
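A typical MPI launch under Torque uses the node file that the scheduler provides in ''$PBS_NODEFILE''; the program name below is a hypothetical example:
<code bash>
# $PBS_NODEFILE lists the nodes Torque assigned to this job, one per slot;
# use it to size and place the MPI ranks (program name is hypothetical)
mpirun -np "$(wc -l < "$PBS_NODEFILE")" -machinefile "$PBS_NODEFILE" ./my_mpi_program
</code>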