PARCC Tools
Welcome to the PARCC user guide for our new utility scripts: parcc_quota.py, parcc_du.py, parcc_sfree.py, parcc_sqos.py, parcc_sreport.py, and parcc_sdebug.py. These tools are designed to make your life easier on the Betty HPC cluster by providing quick insights into your storage use, quotas, Slurm queue state, and debugging information. Use each as described below.
1. parcc_quota.py
Purpose: Quickly check your storage quota usage (home directory, project space, etc.).
Usage:
parcc_quota.py
What you’ll see:
- A summary of your current quota usage across the relevant storage pools (e.g., /vast/home, /vast/projects/<your-project>, Ceph warm storage).
- Useful when you suspect you’re hitting a quota limit, or to monitor usage before it becomes a problem.
Tip: Run this on a login node before submitting large jobs or starting big transfers.
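Since the script’s internals aren’t shown here, the gist of its output can be sketched in plain Python; the pool names, sizes, and limits below are placeholders for illustration, not real Betty values:

```python
# Hypothetical sketch of the kind of per-pool summary parcc_quota.py prints.
# The pools, sizes, and limits below are made-up examples.

def quota_line(pool: str, used_gb: float, limit_gb: float) -> str:
    """Format one storage pool as 'pool: used / limit (percent)'."""
    pct = 100.0 * used_gb / limit_gb
    flag = "  <-- near limit" if pct >= 90 else ""
    return f"{pool}: {used_gb:.0f} GB / {limit_gb:.0f} GB ({pct:.0f}%){flag}"

for pool, used, limit in [
    ("/vast/home/<user>", 42, 100),
    ("/vast/projects/<your-project>", 950, 1000),
]:
    print(quota_line(pool, used, limit))
```

The near-limit flag mirrors the tip above: check before a big transfer, not after it fails.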
2. parcc_du.py
Purpose: Recursively check how much space a given directory is using under the project or home filesystem.
Usage:
parcc_du.py /vast/projects/<your-project>
What you’ll see:
- A “du-style” report showing subdirectories and sizes (e.g., which folders are consuming the most space).
- Helps answer the question: “Where is all my space going?” (as recommended in the Getting Started guide).
Tip: Useful when your quota is getting full — identify large datasets or leftover temp directories.
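As a rough illustration of a “du-style” scan, here is a standard-library sketch that totals file sizes and ranks immediate subdirectories; the exact columns and ordering of parcc_du.py may differ:

```python
# Illustrative directory-size scan using only the standard library.
import os

def dir_size(path: str) -> int:
    """Total size in bytes of all regular files under path."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if os.path.isfile(fp):
                total += os.path.getsize(fp)
    return total

def largest_subdirs(path: str, top: int = 5):
    """Immediate subdirectories of path, largest first."""
    subs = [d for d in os.listdir(path)
            if os.path.isdir(os.path.join(path, d))]
    sizes = [(dir_size(os.path.join(path, d)), d) for d in subs]
    return sorted(sizes, reverse=True)[:top]
```

Pointing largest_subdirs at your project root quickly surfaces the datasets or temp directories eating your quota.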
3. parcc_sfree.py
Purpose: Check Slurm partition and node availability in a simplified view (free nodes, partitions, GPU status).
Usage:
parcc_sfree.py
What you’ll see:
- A snapshot of partitions, available nodes, GPUs free/used, memory usage, etc.
- Helps you decide where to submit your job (which partition has space).
Tip: Run this just before job submission to pick the most open partition.
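A minimal sketch of how such a snapshot might be derived from sinfo-style output; the sample text and columns are made up for illustration, and the real script may query Slurm differently:

```python
# Sketch: count idle nodes per partition from whitespace-delimited
# sinfo-style text (PARTITION, AVAIL, NODES, STATE). Sample data is invented.
from collections import defaultdict

def idle_nodes(sinfo_text: str) -> dict:
    """Count nodes in the 'idle' state per partition."""
    free = defaultdict(int)
    for line in sinfo_text.strip().splitlines()[1:]:  # skip header row
        part, _avail, count, state = line.split()
        if state == "idle":
            free[part] += int(count)
    return dict(free)

sample = """PARTITION AVAIL NODES STATE
gpu up 4 idle
gpu up 12 alloc
cpu up 30 idle
"""
print(idle_nodes(sample))  # -> {'gpu': 4, 'cpu': 30}
```

Whichever partition shows the most idle nodes is usually the fastest route to a running job.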
4. parcc_sqos.py
Purpose: Inspect the QOS (Quality of Service) settings available to you, your account’s limits, and usage.
Usage:
parcc_sqos.py
What you’ll see:
- A list of the QOS modes your project/account is eligible for, along with limits such as max TRES, max GPUs, walltime, etc.
- Great for verifying which QOS you should request in your Slurm sbatch script.
Tip: Make sure that your requested QOS doesn’t exceed what you see in the output, to avoid job rejection.
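The check the tip describes is a simple comparison of your sbatch request against the printed limits; the QOS name and limit values below are placeholders, not Betty’s real settings:

```python
# Sketch: validate a job request against QOS limits before submitting.
# The QOS table below is an invented example.

LIMITS = {"normal": {"max_gpus": 4, "max_wall_hours": 48}}

def request_ok(qos: str, gpus: int, wall_hours: int) -> bool:
    """True if the request fits within the QOS limits."""
    lim = LIMITS[qos]
    return gpus <= lim["max_gpus"] and wall_hours <= lim["max_wall_hours"]

print(request_ok("normal", 2, 24))  # True
print(request_ok("normal", 8, 24))  # False -> job would be rejected
```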
5. parcc_sreport.py
Purpose: Generate a report of your recent job usage (nodes, GPUs, memory, job names) for inspection or billing awareness.
Usage:
parcc_sreport.py [--user YOUR_PennKey]
What you’ll see:
- A table summarizing your jobs over the past N days (the default window is typically about a week), with columns such as JobID, Partition, GPUs used, node count, and time used.
- Helps you reconcile your actual usage vs your project’s allocation or budget.
Tip: Run monthly before invoices or allocation review meetings.
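For reconciling usage against an allocation, the core arithmetic is GPU-hours = GPUs × elapsed time. A sketch, assuming Slurm’s HH:MM:SS Elapsed format (without the optional leading days field):

```python
# Sketch: total GPU-hours from (gpus, elapsed) job records.
# Handles only "HH:MM:SS"; real Slurm Elapsed can also be "D-HH:MM:SS".

def elapsed_hours(hms: str) -> float:
    """Convert an 'HH:MM:SS' elapsed string to hours."""
    h, m, s = (int(x) for x in hms.split(":"))
    return h + m / 60 + s / 3600

def gpu_hours(records) -> float:
    """Sum GPUs * elapsed hours over (gpus, elapsed) records."""
    return sum(g * elapsed_hours(e) for g, e in records)

records = [(2, "10:00:00"), (4, "01:30:00")]  # invented example jobs
print(gpu_hours(records))  # 26.0
```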
6. parcc_sdebug.py
Purpose: Debugging tool for jobs/partitions — shows detailed info about node states, job failures, partition health.
Usage:
parcc_sdebug.py [--node NODENAME] [--job JOBID]
What you’ll see:
- If you specify a node, you’ll get its health status, recently failed jobs, GPU status, memory errors, etc.
- If you specify a job, you’ll get deeper logs about why it may have failed or been preempted.
Tip: Use when you have unexpected job failures or suspect node issues; helps you report accurate info to PARCC support.
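Illustrative triage logic such a debug helper might apply to a job’s final state and exit code; the state names follow Slurm conventions, but the hint messages are assumptions:

```python
# Sketch: map a job's final state and exit code to a human-readable hint.
# Hint wording is invented; state names follow Slurm conventions.

def triage(state: str, exit_code: int) -> str:
    if state == "PREEMPTED":
        return "Job was preempted; consider requeueing or another QOS."
    if state == "TIMEOUT":
        return "Job hit its walltime limit; request more time."
    if state == "OUT_OF_MEMORY":
        return "Job was killed for exceeding memory; request more memory."
    if state == "FAILED" and exit_code != 0:
        return f"Job exited with code {exit_code}; check its stderr log."
    return "No obvious failure signature; contact PARCC support with the JobID."

print(triage("TIMEOUT", 0))
```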
Summary Table
| Script | Primary Use | Typical Command |
|---|---|---|
| parcc_quota.py | Check overall storage quotas | parcc_quota.py |
| parcc_du.py | Identify storage usage within directories | parcc_du.py /vast/projects/<proj> |
| parcc_sfree.py | See free resources in Slurm | parcc_sfree.py |
| parcc_sqos.py | Inspect QOS limits for your account | parcc_sqos.py |
| parcc_sreport.py | Generate recent job usage report | parcc_sreport.py --user YOUR_PennKey |
| parcc_sdebug.py | Debug job/node issues | parcc_sdebug.py --job JOBID |