System Overview

Overview of ExCL Systems

ExCL Server List with Accelerators

| Host Name | Description | OS | Accelerators or other special hardware |
| --- | --- | --- | --- |
| affirmed | Triple Crown AMD EPYC 7742 (Rome) 2x64-core 1 TB | Ubuntu 22.04 | BlueField-2 NIC/DPUs |
| amundsen | Desktop embedded system development | Ubuntu 20.04 | Snapdragon 855 (desktop retiring) |
| apachepass | ApachePass memory system | CentOS 7.9 | 375 GB ApachePass memory |
| clark | Desktop embedded system development | Ubuntu 22.04 | Intel A770 accelerator |
| cousteau | AMD EPYC 7272 (Rome) 2x12-core 256 GB | Ubuntu 22.04 | 2 AMD MI100 32 GB GPUs |
| docker | Intel 20 Core Server 96 GB | Ubuntu 20.04 | Docker development environment |
| equinox | DGX Workstation Intel Xeon E5-2698 v4 (Broadwell) 20-core 256 GB | Ubuntu 22.04 | 4 Tesla V100-DGXS 32 GB GPUs |
| explorer | AMD EPYC 7702 (Rome) 2x64-core 512 GB | Ubuntu 22.04 | 2 AMD MI60 32 GB GPUs |
| faraday | AMD APU, 4x24 Zen 4 cores, 512 GB unified HBM3, 912 CDNA 3 GPU compute units | Ubuntu 24.04 | 4 AMD MI300A APUs |
| hudson | AMD EPYC 9454 (Genoa) 2x48-core 1.5 TB | Ubuntu 22.04 | 2 Nvidia H100s |
| justify | Triple Crown AMD EPYC 7742 (Rome) 2x64-core 1 TB | CentOS 7.9 | |
| leconte | Summit server POWER9 42 cores | CentOS 8.4 | 6 Tesla V100 16 GB GPUs |
| lewis | Desktop embedded system development | Ubuntu 22.04 | |
| mcmurdo | Desktop embedded system development | Ubuntu 20.04 | Snapdragon 855 & PolarFire SoC (retiring) |
| milan0 | AMD EPYC 7513 (Milan) 2x32-core 1 TB | Ubuntu 22.04 | 2 Nvidia A100 GPUs |
| milan1 | AMD EPYC 7513 (Milan) 2x32-core 1 TB | Ubuntu 22.04 or other | 2 Groq AI accelerators |
| milan2 | AMD EPYC 7513 (Milan) 2x32-core 1 TB | Ubuntu 22.04 or other | 8 Nvidia Tesla V100-PCIE-32GB GPUs |
| milan3 | AMD EPYC 7513 (Milan) 2x32-core 1 TB | Ubuntu 22.04 or other | General Use |
| minim1 | Apple M1 Desktop | OSX | |
| oswald | Oswald head node | Ubuntu 22.04 | |
| oswald00 | Intel Xeon E5-2683 v4 (Haswell) 2x16-core 256 GB | CentOS 7.9 | Tesla P100 & Nallatech FPGA |
| oswald02 | Intel Xeon E5-2683 v4 (Haswell) 2x16-core 256 GB | CentOS 7.9 | Tesla P100 & Nallatech FPGA |
| oswald03 | Intel Xeon E5-2683 v4 (Haswell) 2x16-core 256 GB | CentOS 7.9 | Tesla P100 & Nallatech FPGA |
| pcie | Intel Xeon Gold 6130 CPU (Skylake) 32-core 192 GB | Ubuntu 22.04 | Xilinx U250, Nallatech Stratix 10, Tesla P100, Groq card |
| pharoah | Triple Crown AMD EPYC 7742 (Rome) 2x64-core 1 TB | CentOS 7.9 | |
| radeon | Intel 4 Core 64 GB | Ubuntu 22.04 | AMD Vega20 Radeon VII GPU |
| secretariat | Triple Crown AMD EPYC 7742 (Rome) 2x64-core 1 TB | Ubuntu 22.04 | BlueField-2 NIC/DPU |
| thunderx | ARM Cavium ThunderX2 Server 128 GB | CentOS Stream 8 | |
| xavier[1-3] | Nvidia Jetson AGX | Ubuntu | Volta GPU |
| xavier[4-5] | Nvidia Jetson AGX Orin | Ubuntu | Ampere GPU (not deployed) |
| zenith | AMD Ryzen Threadripper 3970X (Castle Peak) 32-core 132 GB | Ubuntu 22.04 | Nvidia GTX 3090, AMD Radeon RX 6800 |

New Systems and Devices to be Deployed

  • 2 Snapdragon HDK & Display

  • Intel ARC GPU

  • Achronix FPGA

  • AGX Orin Developer Kits

Accelerator Highlights

| Accelerator Name | Host(s) |
| --- | --- |
| AMD Radeon VII GPU | radeon |
| AMD MI60 GPU | explorer |
| AMD MI100 GPU | cousteau |
| Groq | milan1 |
| Nvidia A100 GPU | milan0 |
| Nvidia P100 GPU | pcie |
| Nvidia V100 GPU | equinox, leconte, milan2 |
| Nvidia H100 GPU | hudson |
| Nvidia Jetson | xavier |
| Qualcomm Snapdragon 855 | amundsen, mcmurdo |
| Intel Stratix 10 FPGA | pcie |
| Xilinx Zynq ZCU 102 | n/a |
| Xilinx Zynq ZCU 106 | n/a |
| Xilinx Alveo U250 | pcie |
| Xilinx Alveo U280 | milan3 |
| 2 Ettus x410 SDRs | marconi |

Unique Architecture Highlights

| Accelerator Name | Host(s) |
| --- | --- |
| Intel Optane DC Persistent Memory | apachepass |
| Emu Technology CPU | emu |
| Cavium CPU | thunderx |

Other Equipment

  • RTP164 High Performance Oscilloscope

Primary Usage Notes

Access Host (Login)

| Host | Base Resources | Specialized Resources | Notes |
| --- | --- | --- | --- |
| login | 4 core 16 Gi VM | - | Login node - not for computation, TL |

Login is the node used to access ExCL and to proxy into and out of the worker nodes. It is not to be used for computation but for reaching the compute nodes. The login node does have ThinLinc installed and can also be used for graphical access and for more performant X11 forwarding from an internal node. See the ThinLinc Quickstart.

TL: ThinLinc enabled. You need to use login as a jump host for resources other than login. See the ThinLinc Quickstart.
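For command-line access, internal hosts are reached by jumping through login. A minimal sketch, assuming the login.excl.ornl.gov entry point, with the user ID and the milan2 target used purely as placeholders:

```bash
# One-off hop through the login node; substitute your own user ID and target host
ssh -J myid@login.excl.ornl.gov myid@milan2
```

Adding a ProxyJump entry for the internal hosts to ~/.ssh/config avoids retyping the -J option on every connection.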

General Interactive Login Use

These nodes can be accessed with SSH and are available for general interactive use.

| Host | Base Resources | Specialized Resources | Notes |
| --- | --- | --- | --- |
| oswald | 16 Core 64 Gi | - | Usable, pending rebuild to Ubuntu |
| oswald00 | 32 Core 256 Gi | NVIDIA P100, FPGA @ | |
| oswald02 | 32 Core 256 Gi | NVIDIA P100, FPGA @ | Not available - rebuilding |
| oswald03 | 32 Core 256 Gi | NVIDIA P100, FPGA @ | Not available - rebuilding |
| milan0 | 128 Core 1 Ti | NVIDIA A100 (2) | Slurm |
| milan1 | 128 Core 1 Ti | Groq AI Accelerator (2) | Slurm |
| milan2 | 128 Core 1 Ti | NVIDIA V100 (8) | |
| milan3 | 128 Core 1 Ti | Xilinx U280 | Slurm |
| excl-us00 | 32 Core 192 Gi | - | Rocky 9 |
| excl-us01 | 32 Core 192 Gi | - | Not available pending rebuild |
| excl-us03 | 32 Core 192 Gi | - | CentOS 7 pending rebuild |
| secretariat | 256 Core 1 Ti | - | Slurm |
| affirmed | 256 Core 1 Ti | - | Slurm |
| pharoah | 256 Core 1 Ti | - | Slurm |
| justify | 256 Core 1 Ti | - | Slurm |
| hudson | 192 Core 1.5 Ti | NVIDIA H100 (2) | |
| faraday | | AMD MI300A (4) | |
| docker | 20 Core 96 Gi | - | Configured for Docker general use with enhanced image storage |
| pcie | 32 Core 196 Gi | NVIDIA P100, FPGA @ | TL, no hyperthreading, passthrough hypervisor for accelerators |
| lewis | 20 Core 48 Gi | NVIDIA T1000, U250 | TL |
| clark | 20 Core 48 Gi | NVIDIA T1000 | TL |
| zenith | 64 Core 128 Gi | NVIDIA GeForce RTX 3090 @ | TL |
| radeon | 8 Core 64 Gi | AMD Radeon VII | |
| equinox | DGX Workstation | NVIDIA V100 (4) | Rebuilding after SSD failure |
| explorer | 256 Core 512 Gi | AMD MI60 (2) | |
| cousteau | 48 Core 256 Gi | AMD MI100 (2) | |
| leconte | 168 Core 602 Gi | NVIDIA V100 (6) | PowerPC (Summit) |
| zenith | 32 Core 132 Gi | Nvidia GTX 3090, AMD Radeon RX 6800 | TL |
| zenith2 | 32 Core 256 Gi | Embedded FPGAs | TL |

Notes:

  • All of the general compute resources have hyperthreading enabled unless otherwise stated. This can be changed on a per-request basis.

  • Slurm: the node is part of a Slurm partition and is likely to be used for running Slurm jobs. Try to make sure your interactive use does not conflict with any active Slurm jobs.

    • Most of the general compute resources are Slurm-enabled to allow queuing of larger-scale workloads; contact excl-help@ornl.gov for specialized assistance. Only the systems that are heavily used for running Slurm jobs are marked “Slurm” above.
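Once you are logged in to a node, a few standard commands confirm what it actually provides; the GPU tools below are only present where the corresponding driver stack is installed:

```bash
lscpu | grep -E 'Model name|^CPU\(s\)|Thread\(s\) per core'   # CPU model, core count, hyperthreading
free -h                                                       # installed memory
nvidia-smi -L                                                 # NVIDIA GPUs, if present
rocm-smi                                                      # AMD GPUs, if ROCm is installed
```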

Graphical session use via ThinLinc

  • login — not for heavy computation

  • zenith

  • zenith2

  • clark

  • lewis

  • pcie

  • intrepid

  • spike

Slurm for Large Jobs

  • Triple Crown — Dedicated Slurm runners.

    • affirmed

    • justify

    • secretariat

    • pharoah

  • Milan — Additional Slurm Resources with other shared use.

    • milan0

    • milan1

    • milan3

  • Others — Shared Slurm runners with interactive use.

    • milan[0-3]

    • cousteau

    • excl-us03

    • explorer

    • oswald

    • oswald[00, 02-03]
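This page does not spell out the partition layout, so the sketch below sticks to generic Slurm commands; the node count, CPU count, and wrapped command are placeholders to adapt:

```bash
sinfo                                    # list available partitions and node states
srun -N 1 -c 8 --pty bash                # interactive shell on a Slurm-allocated node
sbatch --wrap="hostname && nvidia-smi"   # minimal batch job; replace with your real script
squeue --me                              # check your queued and running jobs
```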

GitLab Runner Specialized Nodes

  • slurm-gitlab-runner — GitLab Runner for launching Slurm jobs.

  • docker — for Docker runner jobs.

  • devdocs — for internal development documentation building and hosting.

Note: any node can be used as a CI runner on request. See the GitLab Runner Quickstart and the GitHub Runner Quickstart. The above systems have a dedicated or specialized use with CI.

Docker

  • docker — Node with Docker installed.
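A minimal sketch of a session on the docker node, assuming your account has been enabled for Docker there; the image tag and the mounted script are placeholders:

```bash
ssh docker                                  # reached through the login jump host as above
docker run --rm -it ubuntu:22.04 bash       # throwaway interactive container
docker run --rm -v "$PWD":/work -w /work ubuntu:22.04 ./build.sh   # run a placeholder script from your working directory
```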

Specialized usage and reservations

| Host | Specialized Usage | Reserved? |
| --- | --- | --- |
| dragon (vm) | Siemens EDA Tools | task-reserved |
| devdocs (vm) | Internal development documentation building and hosting | task-reserved |
| spike (vm) | pcie VM with FPGA and GPU passthrough access | task-reserved |
| lewis | RISC-V emulation using the U250 | U250 |
| slurm-gitlab-runner | Slurm integration with gitlab-runner | task-reserved |
| docker | Slurm integration with GitLab runner for containers | reserved for container use |

Notes:

  • task-reserved: reserved for specialized tasks, not for project use

Infrastructure Systems

| Host Name | Description | OS |
| --- | --- | --- |
| excl-us01 (hypervisor) | Intel 16 Core Utility Server 196 GB | |
