Dear all,
We have made a small change to the YNiC cluster configuration.
Until now, each cluster job could use as much memory as it wanted. This has caused problems when jobs requiring large amounts of RAM are submitted: if several such jobs land on the same machine, the machine can run out of memory. More problematically, it was sometimes jobs which did not require much RAM that ended up failing as a result.
To prevent this, we have added a default RAM limit to the cluster. Each "slot" is now allocated 8G by default; any job which uses more than this will be killed.
Jobs which require more RAM can still be run on the cluster. You simply need to tell the cluster how much memory to reserve for your job, which guarantees that enough RAM is set aside before the job starts. This is done with the -l h_vmem=xxG argument to the qsub command. We have documented this on the wiki at:
https://www.ynic.york.ac.uk/docs/ITPages/IT/ClusterScripts
under the "Resource Limitations" section.
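As a minimal sketch, a job script requesting more than the 8G default might look like the following. The 12G figure, script name, and job body are illustrative only; choose a value suited to your own job:

```shell
#!/bin/bash
# Illustrative SGE job script requesting 12G of RAM per slot.
# The "#$" lines are directives read by qsub, not executed by the shell.
#$ -l h_vmem=12G
#$ -cwd

# Replace this with your actual command; the variable just mirrors
# the h_vmem request above for demonstration purposes.
mem_limit="12G"
echo "Running with a ${mem_limit} per-slot memory reservation"
```

This would be submitted with "qsub myjob.sh"; equivalently, the flag can be passed on the command line, e.g. "qsub -l h_vmem=12G myjob.sh".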
We estimate that this will not affect many users. If you are affected and are using qsub directly, the instructions above explain how to request more RAM. If you find that you are getting memory allocation errors when using any of the YNiC-provided cluster commands (clusterFeat, clusterR, clusterMatlab, clusterReconAll), or when using NAF, please contact it-support@ynic.york.ac.uk and we will make sure the scripts pass the correct arguments to qsub.
If you have any questions, please contact it-support@ynic.york.ac.uk
Thanks,
Mark