...

  • Typically, jobs are not run interactively, except when:
    • users are running small one-off jobs
    • evaluating the resources required for larger jobs
    • running applications that require a GUI, such as MATLAB
  • HPRC cluster software is not run in a window on your desktop, nor is it launched by clicking on it in a network drive.
  • Users need to log into the cluster and submit their job to the scheduler, which runs it when resources become available (see the example workflow after this list).
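As an illustration of that batch workflow, here is a minimal sketch assuming a Slurm scheduler; if the cluster runs Torque/PBS or another scheduler, the commands differ (for example, qsub instead of sbatch). The script name myjob.sh and the job ID shown are placeholders.

No Format
# Submit a batch script to the scheduler; it queues the job and runs it
# when resources are free ('myjob.sh' is a placeholder script name).
-bash-4.1$ sbatch myjob.sh
Submitted batch job 123456

# Check the status of your queued and running jobs.
-bash-4.1$ squeue -u $USER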

...

No Format
-bash-4.1$ wget http://paup.csit.fsu.edu/data/ML_analysis.nex
--2014-03-11 13:08:16--  http://paup.csit.fsu.edu/data/ML_analysis.nex
Resolving paup.csit.fsu.edu... 144.174.50.3
Connecting to paup.csit.fsu.edu|144.174.50.3|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2990 (2.9K) [text/plain]
Saving to: “ML_analysis.nex”

100%[=====================================================================================================================================================================>] 2,990       --.-K/s   in 0s

2014-03-11 13:08:17 (70.7 MB/s) - “ML_analysis.nex” saved [2990/2990]

...

  • Walltime: your job can be killed if it exceeds the specified wall time.
  • Memory: overusing memory can push the compute node's memory into swap space, slowing down all jobs on that node. In the past this has even crashed compute nodes, destroying the work of every job running on them.
  • CPUs: using more CPUs than requested oversubscribes the node and slows down every job running on it (see the example job script after this list for how these limits are requested).
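These limits are normally declared at the top of the batch script. The sketch below assumes Slurm directive syntax; my_program is a hypothetical executable and all of the values are placeholders to adapt to your actual job.

No Format
#!/bin/bash
#SBATCH --time=02:00:00       # walltime: the job is killed if it runs longer than this
#SBATCH --mem=4G              # memory per node: exceeding it can push the node into swap
#SBATCH --cpus-per-task=4     # CPU cores: do not use more than you request here

# Run the workload with exactly the requested number of cores
# (my_program is a hypothetical placeholder).
my_program --threads "$SLURM_CPUS_PER_TASK"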

...