The Research Computing group provides a Red Hat Enterprise Linux-based high-performance computing environment that includes HPC research clusters and systems of various capabilities, serving a variety of campus research communities.
ORION
Orion is our general Slurm partition, made up of Intel Xeon compute nodes running Red Hat Enterprise Linux 8.3, and is available for use in any faculty-sponsored research project. UNC Charlotte faculty and graduate student researchers may fill out the account request form to use the cluster. For more information about submitting jobs to Orion, check out the Orion (Slurm) User Notes; a sample batch script follows the node list below.
- 99 nodes / 4516 cores:
- 76 nodes with
- Dual 24-Core Intel Xeon Gold 6248R CPU @ 3.00GHz (48 cores / node)
- 384GB RAM (8GB / core)
- 100Gbit EDR Infiniband Interconnect
- 21 nodes with
- Dual 18-Core Intel Xeon Gold 6154 CPU @ 3.00GHz (36 cores / node)
- 388GB RAM (10.7GB / core)
- 100Gbit EDR Infiniband Interconnect
- 1 node with
- Dual 24-Core Intel Xeon Gold 6248R CPU @ 3.00GHz (48 cores / node)
- 1.5TB RAM (31.25GB / core)
- 100Gbit EDR Infiniband Interconnect
- 1 node with
- Quad 16-Core Intel Xeon E7-4850 CPU @ 2.10GHz (64 cores / node)
- 4TB RAM (62.5GB / core)
- 100Gbit EDR Infiniband Interconnect
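For example, a minimal batch script for a full Orion node might look like the following. This is a sketch only: the partition name, time limit, and executable are assumptions, so consult the Orion (Slurm) User Notes for the authoritative syntax.

```bash
#!/bin/bash
#SBATCH --job-name=orion-example   # name shown in squeue output
#SBATCH --partition=Orion          # assumed partition name; verify with sinfo
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=48       # one task per core on a 48-core Orion node
#SBATCH --time=01:00:00            # walltime limit (HH:MM:SS)

srun ./my_mpi_program              # hypothetical executable
```

Submit with sbatch job.sh and check status with squeue -u $USER.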
ANDROMEDA
Andromeda is our general Slurm partition, made up of AMD EPYC compute nodes running Red Hat Enterprise Linux 8.3, and is available for use in any faculty-sponsored research project. UNC Charlotte faculty and graduate student researchers may fill out the account request form to use the cluster. For more information about submitting jobs to Andromeda, see Migrating from Red Hat 7.5 to 8.3; a sample batch script follows the node list below.
- 11 nodes / 704 cores:
- 10 nodes with
- Dual 32-Core AMD EPYC 7502 CPU @ 2.5GHz (up to 3.35GHz Max Boost); 64 cores / node
- 512GB RAM (8GB / core)
- 100Gbit HDR100 Infiniband Interconnect
- 1 node with
- Dual 32-Core AMD EPYC 7542 CPU @ 2.9GHz (up to 3.4GHz Max Boost); 64 cores / node
- 4TB RAM (62.5GB / core)
- 200Gbit HDR Infiniband Interconnect
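Submission mirrors Orion; the sketch below simply assumes the partition is named Andromeda and targets a full 64-core EPYC node (again, names and limits are assumptions, not confirmed values).

```bash
#!/bin/bash
#SBATCH --job-name=andromeda-example
#SBATCH --partition=Andromeda      # assumed partition name; verify with sinfo
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=64       # one task per core on a 64-core EPYC node
#SBATCH --time=04:00:00

srun ./my_mpi_program              # hypothetical executable
```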
GPU
GPU is a general-use Slurm partition made up of several GPU compute nodes and is available for use in any faculty-sponsored research project. For more information about submitting jobs to the GPU partition, check out the "Submitting a GPU Job" section in the Orion & GPU (Slurm) User Notes; a sample batch script follows the node list below.
- 13 nodes / 336 computing cores:
- 1 "Titan V" GPU node with
- Dual 8-Core Intel Xeon Silver 4215R CPU @ 3.20GHz (16 cores total)
- 192GB RAM (12GB / core)
- 8 x NVIDIA Titan V GPUs (12GB HBM2 RAM per GPU)
- 100Gbit EDR Infiniband Interconnect
- 2 "Titan RTX" GPU nodes with
- Dual 8-Core Intel Xeon Silver 4215R CPU @ 3.20GHz (16 cores total)
- 192GB RAM (12GB / core)
- 4 x NVIDIA Titan RTX GPUs (24GB GDDR6 RAM per GPU)
- 100Gbit EDR Infiniband Interconnect
- 2 "V100S" GPU nodes with
- Dual 8-Core Intel Xeon Silver 4215R CPU @ 3.20GHz (16 cores total)
- 192GB RAM (12GB / core)
- 1 node w/ 8 x NVIDIA Tesla V100S Tensor Core GPUs (32GB HBM2 RAM per GPU)
- 1 node w/ 4 x NVIDIA Tesla V100S Tensor Core GPUs (32GB HBM2 RAM per GPU)
- 100Gbit EDR Infiniband Interconnect
- 2 "A100" GPU nodes with
- Dual 16-Core Intel Xeon Gold 6326 CPU @ 2.90GHz (32 cores total)
- 256GB RAM (8GB / core)
- 4 x NVIDIA A100 Tensor Core GPUs (80GB HBM2e RAM per GPU)
- 100Gbit HDR Infiniband Interconnect
- 6 "A40" GPU nodes with
- Dual 16-Core Intel Xeon Gold 6326 CPU @ 2.90GHz (32 cores total)
- 256GB RAM (8GB / core)
- 4 x NVIDIA A40 Data Center GPUs (48GB GDDR6 ECC RAM per GPU)
- 100Gbit HDR Infiniband Interconnect
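GPU jobs additionally request devices with Slurm's --gres option. The sketch below assumes the partition is named GPU; whether type-specific requests (e.g. gpu:A100:1) are configured is site-specific, so confirm the exact form in the "Submitting a GPU Job" section of the user notes.

```bash
#!/bin/bash
#SBATCH --job-name=gpu-example
#SBATCH --partition=GPU            # assumed partition name; verify with sinfo
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --gres=gpu:1               # one GPU of any available type
#SBATCH --time=02:00:00

nvidia-smi                         # report the GPU(s) allocated to this job
srun ./my_gpu_program              # hypothetical executable
```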
LEO
Leo is a specialized GPU Slurm partition made up of a single GPU compute node and is available for use in any faculty-sponsored research project. For more information about submitting jobs to the Leo partition, check out the Orion (Slurm) User Notes; a sample interactive command follows the node specs below.
- 1 node / 128 computing cores:
- Dual 64-Core AMD EPYC 7742 CPU @ 2.25GHz (up to 3.4GHz Boost); 128 cores total
- 1TB RAM (8GB / core)
- 8 x NVIDIA A100 Tensor Core GPUs (40GB HBM2 RAM per GPU)
- 8-way NVLink
- 200Gbit HDR Infiniband Interconnect
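Because Leo is a single node, an interactive session is often the natural fit. A one-line sketch, assuming the partition is named Leo:

```bash
# Request an interactive shell with one GPU for one hour (partition name and limits are assumptions)
srun --partition=Leo --gres=gpu:1 --cpus-per-task=16 --time=01:00:00 --pty bash
```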
STORAGE (NFS & LUSTRE)
URC provides a unified storage environment that is shared across all of the research clusters.
- 940 TB of general user (NFS) storage space
- 2.7 PB of InfiniBand-connected Lustre distributed file system storage (not backed up), used for scratch and large-volume storage needs
Each user is provided with:
- a 500 GB home directory that is backed up for disaster recovery
- up to 10TB of temporary scratch storage space
Please note that quota extensions are not available for home directories or scratch space.
Scratch space is for holding temporary data needed by currently running jobs only; it is not meant to hold critical data long term. Scratch is not backed up, and any failure will result in data loss. DO NOT store important data in scratch. If scratch fills, URC staff may delete older data. A typical stage-in/stage-out pattern is sketched below.
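A common pattern is to stage inputs into scratch at job start, compute there, and copy results back to backed-up space before the job ends. A minimal sketch, assuming scratch is mounted at /scratch/$USER (verify the actual path on the cluster):

```bash
#!/bin/bash
#SBATCH --partition=Orion          # assumed partition name
#SBATCH --ntasks=1
#SBATCH --time=08:00:00

WORKDIR=/scratch/$USER/$SLURM_JOB_ID    # assumed scratch layout
mkdir -p "$WORKDIR"
cp "$HOME/inputs/data.in" "$WORKDIR/"   # stage in (hypothetical input file)
cd "$WORKDIR"
srun ./my_program data.in > data.out    # hypothetical executable
cp data.out "$HOME/results/"            # stage out to backed-up home space
rm -rf "$WORKDIR"                       # scratch is temporary; clean up when done
```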
Shared storage volumes in /projects and /nobackup are available to research groups upon request, subject to available space, and must be owned by a faculty member.
Although URC backs up some file spaces, be sure to maintain an additional copy of critical data outside the cluster.
- Home directories have a 7-day, 4-week backup.
- /projects have a 7-day backup.
- /nobackup and /scratch are NOT backed up.
NEVER modify the permissions on your /home or /scratch directory. If you need assistance, please contact us.
In addition to our general-purpose Slurm partitions, we manage and provide infrastructure support for a number of cluster partitions that were purchased by individual faculty or research groups to meet their specific needs. These resources include the partitions listed below.
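To see which partitions and nodes are visible to your account, Slurm's standard sinfo command is a quick starting point (the partition name below is illustrative only):

```bash
sinfo -s               # one summary line per partition: state, node counts, node list
sinfo -p Draco -N -l   # per-node detail for a single partition
```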
DRACO
- 26 nodes / 720 cores:
- 8 nodes with
- Dual 18-Core Intel Xeon Gold 6154 CPU @ 3.00GHz (36 cores / node)
- 388GB RAM (10.7GB / core)
- 100Gbit EDR Infiniband Interconnect
- 15 nodes with
- Dual 8-Core Intel Xeon E5-2667 CPU @ 3.2GHz (16 cores / node)
- 128GB RAM (8GB / core)
- 100Gbit EDR Infiniband Interconnect
- 2 nodes with
- Quad 16-Core Intel Xeon E7-4850 v4 CPU @ 2.10GHz (64 cores / node)
- 2TB RAM (32GB / core)
- 1 node with
- Quad 16-Core Intel Xeon E7-4850 v4 CPU @ 2.10GHz (64 cores / node)
- 4TB RAM (64GB / core)
PISCES
- 31 nodes / 616 cores:
- 6 nodes with
- Dual 18-Core Intel Xeon Gold 6154 CPU @ 3.00GHz (36 cores / node)
- 388GB RAM (10.7GB / core)
- 100Gbit EDR Infiniband Interconnect
- 24 nodes with
- Dual 8-Core Intel Xeon E5-2667 CPU @ 3.2GHz (16 cores / node)
- 128GB RAM (8GB / core)
- 100Gbit EDR Infiniband Interconnect
SERPENS
- 12 nodes / 576 computing cores, each with:
- Dual 24-Core Intel Xeon Gold 6248R CPU @ 3.00GHz (48 cores / node)
- 384GB RAM (8GB / core)
- 100Gbit EDR Infiniband Interconnect
- 1 (Interactive) node with:
- Dual 18-Core Intel Xeon Gold 6154 CPU @ 3.00GHz (36 cores / node)
- 384GB RAM (10.67GB / core)
- 73TB dedicated, usable RAID storage (96TB raw)
PEGASUS
- 3 nodes / 100 cores:
- 1 node with
- Dual 18-Core Intel Xeon Gold 6154 CPU @ 3.00GHz (36 cores / node)
- 388GB RAM (10.7GB / core)
- 100Gbit EDR Infiniband Interconnect
- 2 nodes with
- Dual 16-Core Intel Xeon E5-2697A v4 CPU @ 2.6GHz (32 cores / node)
- 256GB RAM (8GB / core)
- 100Gbit EDR Infiniband Interconnect
HERCULES
- 4 nodes / 108 cores:
- 1 node with
- Dual 8-Core Intel Xeon E5-2620 CPU @ 2.10GHz (16 cores)
- 128GB RAM (8GB / core)
- 8 x NVIDIA Titan X (Pascal) GPUs
- 1 node with
- Dual 4-Core Intel Xeon Silver 4112 CPU @ 2.60GHz (8 cores)
- 192GB RAM (24GB / core)
- 8 x NVIDIA GeForce GTX-1080Ti GPUs
- 1 node with
- Dual 10-Core Intel Xeon Silver 4114 CPU @ 2.20GHz (20 cores)
- 192GB RAM (9.6GB / core)
- 2 x NVIDIA Titan V GPUs (12GB HBM2 RAM per GPU)
- 1 node with
- Dual 32-Core AMD EPYC 7502 CPU @ 2.5GHz (up to 3.35GHz Max Boost); 64 cores total
- 256GB RAM (4GB / core)
- 100Gbit EDR Infiniband Interconnect