Computational Science at Western (CSaW)
CSaW News
Scheduled Maintenance & Downtime
There is no scheduled maintenance at this time.
Updates
09/22/2023 - HTCondor 10.8
01/12/2023 - CSCI Lab Scheduler Restored
12/27/2022 - OS Updates, Account Changes
05/10/2022 - GROMACS & Much More!
10/11/2021 - Singularity
09/13/2021 - HTCondor 9.0
06/15/2021 - VPN Requirement
Older News
CSaW Docs
Getting Started
Logging in to the cluster
Navigating around the file system
Copying files to and from the cluster
Using the job scheduler
Additional resources
HTCondor Submission File Examples
Python
Singularity
GROMACS
Cluster F.A.Q.
How
Why
What
System Details
College of Science and Engineering
Computer Science
Shared Storage
Cluster Account Creation
Requirements
Request your account creation
Get confirmation
Testing your account
Contact
Support
Email
Microsoft Teams
In Person
Appendix
Glossary
Index