If you plan to use a considerable amount of memory (e.g., more than 8GB), you should specify how much memory your job will use. If you neglect to do so, the accumulation of high-memory jobs on an individual compute node can cause any of the following: (1) the node may terminate your job prematurely; (2) the node may slow down considerably; (3) the node may crash, terminating all jobs on that node.

To specify additional memory, use the mem (physical memory) and vmem (virtual memory) options in your job script:

#PBS -l mem=16GB -l vmem=16GB
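For context, a minimal job script using these directives might look like the following sketch; the job name, walltime, and program name are illustrative placeholders, not values from this article:

```shell
#!/bin/bash
# Illustrative PBS job script; only the mem/vmem lines come from this
# article, the rest (job name, walltime, program) are placeholders.
#PBS -N my_job             # job name (placeholder)
#PBS -l walltime=01:00:00  # requested run time (placeholder)
#PBS -l mem=16GB           # physical memory limit
#PBS -l vmem=16GB          # virtual memory limit

cd "$PBS_O_WORKDIR"        # run from the directory the job was submitted in
./my_program               # placeholder for your actual program
```

Submit the script with qsub as usual; the #PBS directives are read by the scheduler when the job is queued.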

How to estimate your anticipated memory usage depends on your application. You may need to consult the documentation for the software you're using; if you're writing your own software, you may need to examine your code and add up the sizes of its largest variables and data structures.
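As a back-of-the-envelope illustration of that kind of estimate (the matrix size here is invented for the example, not taken from this article): a program holding one 50,000 x 50,000 matrix of 8-byte double-precision values needs roughly:

```shell
# Hypothetical estimate: memory for one 50000 x 50000 matrix of 8-byte doubles.
elements=$((50000 * 50000))          # 2,500,000,000 values
bytes=$((elements * 8))              # 20,000,000,000 bytes
gib=$((bytes / 1024 / 1024 / 1024))  # convert to whole gigabytes
echo "approximately ${gib} GB"       # prints "approximately 18 GB"
```

In that case you might request 20GB or so, leaving some headroom above the estimate so the job is not killed for briefly exceeding its limit.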

The maximum memory you can specify is 256GB, but you should rarely request that much: because of the memory limits of individual compute nodes, you won't be able to run many jobs simultaneously at that size.

Please note that if your job exceeds the limit you specify, the cluster will terminate it. Unfortunately, the scheduler does not communicate this clearly when it kills a job for exceeding its memory limit; the job simply terminates. If your job is dying unexpectedly, consider increasing the memory you request.

For additional information or assistance, please contact the Tech Desk at 570.577.7777 or techdesk@bucknell.edu.

Keywords: hpc, Linux, linux cluster