

pcie

Description

This system is intended for PCIe-based device support.

This system is a generic development server purchased with the intent of housing various development boards as needed.

The system is

  • Atipa

  • Tyan Motherboard S7119GMR-06

  • 192 GB memory

  • Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz, 2x16 cores, no hyperthreading

  • Centos

Use

This system is used for heterogeneous accelerator exploration and FPGA Alveo/Vitis-based development.

Current VMs

Name | Purpose
Spike | Main VM with GPUs and FPGAs passed to it. This VM uses Ubuntu 22.04 and software is deployed via modules.
Intrepid | Legacy Vitis development system. Also has Docker deployed for Vitis AI work.
Aries | Has a specialized Vivado install for Ettus RFSoC development. See "USRP Hardware Driver and USRP Manual: Generation 3 USRP Build Documentation" (ettus.com) and "FIR Compiler IP: FIR output is incorrect when using symmetric coefficients and convergent rounding" (xilinx.com) for the applied patches.
quad |

Access

There are not currently special access permissions. The system is available to ExCL users. This may change as needed.

Images

system layout
pci detail
device wiring detail
disks/fans/cpu

Contact

Please send assistance requests to [email protected].
Hudson

Two Nvidia H100s are now available on hudson.ftpn.ornl.gov. From Nvidia documentation:

The NVIDIA H100 NVL card is a dual-slot 10.5 inch PCI Express Gen5 card based on the NVIDIA Hopper™ architecture. It uses a passive heat sink for cooling, which requires system airflow to operate the card properly within its thermal limits. The NVIDIA H100 NVL operates unconstrained up to its maximum thermal design power (TDP) level of 400 W to accelerate applications that require the fastest computational speed and highest data throughput. The NVIDIA H100 NVL debuts the world’s highest PCIe card memory bandwidth of nearly 4,000 gigabytes per second (GBps).

Basic validation has been done by running the NVIDIA samples nbody program on both devices:

    10485760 bodies, total time for 10 iterations: 401572.656 ms
    = 2738.014 billion interactions per second
    = 54760.284 single-precision GFLOP/s at 20 flops per interaction

The GPUs are available to the same UIDs as are using the A100s on milan0. If nvidia-smi does not work for you, you don't have the proper group memberships; please send email to [email protected] and we will fix it. nvhpc is installed as a module, as it is on other systems.
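A quick way to confirm your access before starting work is to list the devices and load the compiler module mentioned above (everything here is standard NVIDIA tooling; `nvc++` is part of the nvhpc module):

```shell
# Both H100s should appear; a permissions error here means missing
# group memberships (email [email protected]).
nvidia-smi -L

# Load the NVIDIA HPC SDK toolchain, deployed as a module as noted above
module load nvhpc
nvc++ --version
```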

snapdragon

This document describes how to access Snapdragon 855 HDK boards through the mcmurdo and amundsen ExCL machines. The Snapdragon 855 HDK board is connected to Ubuntu Linux machines through ADB.

Description

The Qualcomm® Snapdragon™ 855 Mobile Hardware Development Kit (HDK) is a highly integrated and optimized Android development platform.

Accessing this system:

  • The Qualcomm board is connected to an HP Z820 workstation (McMurdo) or to an HP Z4 workstation (Clark) through USB

  • Development Environment: Android SDK/NDK

  • Login to mcmurdo or clark

    • $ ssh -Y mcmurdo

  • Set up the Android platform tools and development environment

    • $ source /home/nqx/setup_android.source

  • Make sure you have a functioning environment

    • adb kill-server

    • adb start-server

    • adb root (restart adbd as root)

    • adb devices (to make sure there is a snapdragon responding)

    • adb shell (to test connecting to the device)

  • Run Hello-world on ARM cores

    • $ git clone https://code.ornl.gov/nqx/helloworld-android

    • $ make compile push run

  • Run OpenCL example on GPU

    • $ git clone https://code.ornl.gov/nqx/opencl-img-processing

    • Run Sobel edge detection

      • $ make compile push run fetch

  • Login to the Qualcomm development board shell

    • $ adb shell

    • $ cd /data/local/tmp

Other Details

The snapdragon SDK uses Python 2.7; you may need to explicitly specify python2 in your environment.

Access

Access will be granted per request (as this cannot be used as a shared resource).

Useful Links

  1. Android Studio: https://developer.android.com/studio

  2. Qualcomm HDK: https://developer.qualcomm.com/hardware/snapdragon-855-hdk

  3. Qualcomm Neural Processor SDK: https://developer.qualcomm.com/software/qualcomm-neural-processing-sdk

  4. SNPE Overview: https://developer.qualcomm.com/docs/snpe/overview.htm

Images

Laboratory Setup

zenith

Zenith 1

Created using PC Part Picker. The build is available at https://pcpartpicker.com/list/xPkRwc.

PCPartPicker Part List

Type | Item | Price
CPU | AMD Threadripper 3970X 3.7 GHz 32-Core Processor | $2300.98 @ Amazon
CPU Cooler | Corsair iCUE H150i ELITE CAPELLIX 75 CFM Liquid CPU Cooler | -
Motherboard | Asus ROG ZENITH II EXTREME ALPHA EATX sTRX4 Motherboard | $1988.99 @ Amazon
Memory | G.Skill Ripjaws V 128 GB (4 x 32 GB) DDR4-3600 CL18 Memory | $249.99 @ Amazon
Storage | Samsung 980 Pro 2 TB M.2-2280 PCIe 4.0 X4 NVME Solid State Drive | $125.65 @ Amazon
Video Card | EVGA FTW3 ULTRA GAMING GeForce RTX 3090 24 GB Video Card | $1499.99 @ Amazon
Video Card | Gigabyte GAMING OC Radeon RX 6900 XT 16 GB Video Card | $1720.23 @ Amazon
Case | Thermaltake Core X9 ATX Desktop Case | -
Power Supply | EVGA SuperNOVA 1600 P2 1600 W 80+ Platinum Certified Fully Modular ATX Power Supply | $304.99 @ Newegg
Case Fan | Noctua F12 PWM chromax.black.swap 54.97 CFM 120 mm Fan | $24.75 @ Amazon
Case Fan | Noctua F12 PWM chromax.black.swap 54.97 CFM 120 mm Fan | $24.75 @ Amazon
Monitor | HP X27q 27.0" 2560 x 1440 165 Hz Monitor | $289.00 @ Amazon

Prices include shipping, taxes, rebates, and discounts
Total: $8529.32
Generated by PCPartPicker 2023-09-26 09:48 EDT-0400

Access to the GPUs

To have access to the GPUs, request to be added to the video and render groups if you are not already in these groups.

Images

Zenith - 0
Zenith - 1

Zenith 2

Created using PC Part Picker. The build is available at https://pcpartpicker.com/list/vjXBPF.

PCPartPicker Part List

Type | Item | Price
CPU | AMD Threadripper 3970X 3.7 GHz 32-Core Processor | $1605.00 @ Amazon
CPU Cooler | Corsair iCUE H100i ELITE LCD 58.1 CFM Liquid CPU Cooler | $250.00 @ Amazon
Motherboard | Asus ROG ZENITH II EXTREME ALPHA EATX sTRX4 Motherboard | -
Memory | Corsair Vengeance LPX 256 GB (8 x 32 GB) DDR4-3200 CL16 Memory | $649.99 @ Amazon
Storage | Samsung 980 Pro 2 TB M.2-2280 PCIe 4.0 X4 NVME Solid State Drive | $169.99 @ B&H
Video Card | XFX Speedster SWFT 105 Radeon RX 6400 4 GB Video Card | $159.99 @ Amazon
Case | Corsair 4000D Airflow ATX Mid Tower Case | $89.99 @ Amazon
Power Supply | SeaSonic PRIME PX-1600 ATX 3.0 1600 W 80+ Platinum Certified Fully Modular ATX Power Supply | $456.21 @ Amazon
Case Fan | Noctua A14 PWM chromax.black.swap 82.52 CFM 140 mm Fan | $26.95 @ Amazon
Case Fan | Noctua A14 PWM chromax.black.swap 82.52 CFM 140 mm Fan | $26.95 @ Amazon
Case Fan | Noctua A12x15 PWM chromax.black.swap 55.44 CFM 120 mm Fan | $26.95 @ Amazon

Prices include shipping, taxes, rebates, and discounts
Total: $3462.02
Generated by PCPartPicker 2024-06-27 12:09 EDT-0400

Images

Zenith2

Triple Crown

High performance build and compute servers

These 2U servers are highly capable large-memory servers, though they have limited PCIe4 slots for expansion.

  • HPE ProLiant DL385 Gen10 Plus chassis

  • 2 AMD EPYC 7742 64-Core Processors

    • configured with two threads per core, so each server presents as 256 cores

    • this can be altered per request

  • 1 TB physical memory

    • 16 DDR4 Synchronous Registered (Buffered) 3200 MHz 64 GiB DIMMs

  • 2 HP EG001200JWJNQ 1.2 TB SAS 10500 RPM disks

    • one is the system disk, one is available for research use

  • 4 MO003200KWZQQ 3.2 TB NVMe drives

    • available as needed

Usage

These servers are generally used for customized VM environments, which are often scheduled via SLURM, and for networking/DPU research.
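Since these nodes are consumed mostly through SLURM, a minimal batch script might look like the following (the partition name `triple-crown` and the workload binary are illustrative assumptions; check `sinfo` for the actual partition names):

```shell
#!/bin/bash
#SBATCH --job-name=vm-test
#SBATCH --partition=triple-crown   # assumption: run `sinfo` for real partition names
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=32
#SBATCH --mem=64G
#SBATCH --time=01:00:00

# Launch the workload on the allocated node
srun ./my_workload                 # illustrative binary name
```

Submit with `sbatch job.sh` and monitor with `squeue -u $USER`.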

Status

Node | VM | OS | Status
Justify | All off | Ubuntu 22.04 | Operational
Pharaoh | All off | Ubuntu 22.04 | Operational
Affirmed | All off | Ubuntu 22.04 | Operational
Secretariat | All off | Ubuntu 22.04 | Operational

Affirmed

Affirmed is one of our triple crown servers (named after Triple Crown winners). These are highly capable large-memory servers.

It currently runs Ubuntu 22.04.

Specialized hardware

  • BlueField-2 DPU connected to the 100Gb InfiniBand network

    • Can also be connected to the 10Gb Ethernet network

    • Used to investigate properties and usage of the NVIDIA BlueField-2 card (ConnectX-6 VPI with DPU).

Usage

These servers are generally used for customized VM environments, which are often scheduled via SLURM.

Justify

Justify is one of our triple crown servers (named after Triple Crown winners). These are highly capable large-memory servers.

It currently runs Centos 7.9.

Usage

These servers are generally used for customized VM environments, which are often scheduled via SLURM.

Pharaoh

Pharaoh is one of our triple crown servers (named after Triple Crown winners). These are highly capable large-memory servers.

It currently runs Centos 7.9.

Usage

These servers are generally used for customized VM environments, which are often scheduled via SLURM.

Secretariat

Secretariat is one of our triple crown servers (named after Triple Crown winners). These are highly capable large-memory servers.

It currently runs Ubuntu 22.04.

Specialized hardware

  • BlueField-2 DPU connected to the 100Gb InfiniBand network

    • Can also be connected to the 10Gb Ethernet network

    • Used to investigate properties and usage of the NVIDIA BlueField-2 card (ConnectX-6 VPI with DPU).

Usage

These servers are generally used for customized VM environments, which are often scheduled via SLURM.

leconte

Description

This system is generally identical to the nodes (AC922 model 8335_GTW) in the ORNL OLCF Summit system. This system consists of

  • 2 POWER9 (2.2 pvr 004e 1202) CPUs, each with 22 cores and 4 threads per core

  • 6 Tesla V100-SXM2-16GB GPUs

  • 606 GiB memory

  • automounted home directory (on the group NFS server)

Contact

  • [email protected]

Usage

As currently configured, this system is usable via conventional ssh logins (from login.excl.ornl.gov), with automounted home directories. GPU access is currently cooperative; a scheduling mechanism and scheduled access are in design.

The software is as delivered by the vendor, and may not be satisfactory in all respects as of this writing. The intent is to provision a system that is as similar to Summit as possible, but some progress is required to get there. This is to be considered an early access machine.

Please send assistance requests to [email protected].

Installed Compilers

Please see Compilers.

GPU Performance

This system is still being refined with respect to cooling. As of today, rather than running at the fully capable 300 watts per GPU, GPU usage has been limited to 250 watts to prevent overheating. As cooling is improved, this will be changed back to 300 watts with dynamic power reduction (with notification) as required to protect the equipment.

It is worth noting that this system had to be pushed quite hard (six independent nbody problems, plus CPU stressors on all but 8 threads) to trigger high temperature conditions. These limits may not be encountered in actual use.

Performance Information

GPU performance information can be viewed at https://graphite.ornl.gov:3000/d/000000058/leconte-gpu-statistics?refresh=30s&orgId=1

Request access by emailing [email protected].

Other Resources

  • IBM 8335-GTW documentation: https://www.ibm.com/support/knowledgecenter/en/POWER9/p9hdx/8335_gtw_landing.htm
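The current 250 W cap (and how close a workload comes to it) can be observed with a standard nvidia-smi query:

```shell
# Show the enforced power limit, live draw, and temperature per GPU
nvidia-smi --query-gpu=index,name,power.limit,power.draw,temperature.gpu \
           --format=csv

# Watch the same fields every 5 seconds while a job runs
nvidia-smi --query-gpu=index,power.draw,temperature.gpu --format=csv -l 5
```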

emu

Description

The EMU-Chick system is composed of 8 nodes that are connected via a RapidIO interconnect.

Each node has:

  • 8 nodelets, each with an array of DRAMs

  • A stationary core (SC)

  • A migration engine, PCI-Express interfaces, and an SSD

  • 64 GB of 64-byte-channel DRAM, divided into eight 8-byte narrow-channel DRAMs (NC-DRAMs)

Each nodelet has:

  • 2 Gossamer cores (GC)

  • 64 concurrent in-order, single-issue hardware threads

Access

  • The path to access each individual EMU node is: login.excl.ornl.gov ⇒ emu-gw ⇒ emu ⇒ {n0-n7}

  • emu-gw is an x86-based gateway node.

  • The emu host is the system board controller (sbc); individual nodes are accessed only via this host.

  • Connections to emu from emu-gw use preset ssh keys that are created during account creation. If you can't log in, your user account/project does not have access to the EMU systems.

Development Workflow

  • The EMU software development kit (SDK) is installed under /usr/local/emu on emu-gw, which is an x86-based system. Compilation and simulation should be performed on this machine.

  • The official EMU programming guide is located under /usr/docs.

  • emu and emu-gw mount home directories, so you should have no difficulty accessing your projects. Please use $HOME (or ${HOME}) as your home directory in scripts, as the mount location of your home directory may change.

Other Resources

This document will be updated with additional documentation references and user information as it becomes available.

Contact

Please send assistance requests to [email protected].
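The access path above can be expressed as a single invocation using standard OpenSSH jump hosts (`-J`); the hostnames are exactly those listed above:

```shell
# Hop login.excl.ornl.gov -> emu-gw -> emu in one command
ssh -J login.excl.ornl.gov,emu-gw emu

# From emu (the sbc), reach an individual node, e.g. n0
ssh n0
```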

Oswald

oswald00

Description

This system is a generic development server purchased with the intent of housing various development boards as needed.

The system is

  • Penguin Computing Relion 2903GT

  • Gigabyte motherboard MD90-FS0-ZB

  • 256 GB memory

  • Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz, 2x16 cores, no hyperthreading

  • Centos

Access

There are not currently special access permissions. The system is available to ExCL users. This may change as needed.

Images

Contact

Please send assistance requests to [email protected].

oswald01

Warning: Oswald01 has been decommissioned due to a hardware failure.

Description

This system is a generic development server purchased with the intent of housing various development boards as needed.

The system is

  • Penguin Computing Relion 2903GT

  • Gigabyte motherboard MD90-FS0-ZB

  • 256 GB memory

  • Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz, 2x16 cores, no hyperthreading

  • Centos

  • Micron 9100 NVM 2.4TB MTFDHAX214MCF

Access

There are not currently special access permissions. The system is available to ExCL users. This may change as needed.

Images

Contact

Please send assistance requests to [email protected].

oswald02

Description

This system is a generic development server purchased with the intent of housing various development boards as needed.

The system is

  • Penguin Computing Relion 2903GT

  • Gigabyte motherboard MD90-FS0-ZB

  • 256 GB memory

  • Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz, 2x16 cores, no hyperthreading

  • Centos

Access

There are not currently special access permissions. The system is available to ExCL users. This may change as needed.

Images

Contact

Please send assistance requests to [email protected].

oswald03

Description

This system is a generic development server purchased with the intent of housing various development boards as needed.

The system is

  • Penguin Computing Relion 2903GT

  • Gigabyte motherboard MD90-FS0-ZB

  • 256 GB memory

  • Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz, 2x16 cores, no hyperthreading

  • Centos

Access

There are not currently special access permissions. The system is available to ExCL users. This may change as needed.

Images

(system layout, backplane identification, fpga detail, left daughterboard detail, right daughterboard gpu removed, gpu identification detail, SAS card detail)

Contact

Please send assistance requests to [email protected].

System Overview

Overview of ExCL Systems

ExCL Server List with Accelerators

Host Name | Description | OS | Accelerators or other special hardware
affirmed | Triple Crown AMD EPYC 7742 (Rome) 2x64-core 1 TB | Ubuntu 22.04 | Bluefield 2 NIC/DPUs
amundsen | Desktop embedded system development | Ubuntu 20.04 | Snapdragon 855 (desktop retiring)
apachepass | ApachePass memory system | Centos 7.9 | 375 GB Apachepass memory
clark | Desktop embedded system development | Ubuntu 22.04 | Intel A770 Accelerator
cousteau | AMD EPYC 7272 (Rome) 2x12-core 256 GB | Ubuntu 22.04 | 2 AMD MI100 32 GB GPUs
docker (quad03) | Intel 20 Core Server 96 GB | Ubuntu 20.04 | Docker development environment
equinox | DGX Workstation Intel Xeon E5-2698 v4 (Broadwell) 20-core 256 GB | Ubuntu 22.04 | 4 Tesla V100-DGXS 32 GB GPUs
explorer | AMD EPYC 7702 (Rome) 2x64-core 512 GB | Ubuntu 22.04 | 2 AMD MI60 32 GB GPUs
hudson | AMD EPYC 9454 (Genoa) 2x48-core 1.5 TB | Ubuntu 22.04 | 2 Nvidia H100s
justify | Triple Crown AMD EPYC 7742 (Rome) 2x64-core 1 TB | Centos 7.9 | -
leconte | Summit server POWER9 42 Cores | Centos 8.4 | 6 Tesla V100 16 GB GPUs
lewis | Desktop embedded system development | Ubuntu 22.04 | -
mcmurdo | Desktop embedded system development | Ubuntu 20.04 | Snapdragon 855 & PolarFire SoC (retiring)
milan0 | AMD EPYC 7513 (Milan) 2x32-core 1 TB | Ubuntu 22.04 | 2 Nvidia A100
milan1 | AMD EPYC 7513 (Milan) 2x32-core 1 TB | Ubuntu 22.04 or other | 2 Groq AI accelerators
milan2 | AMD EPYC 7513 (Milan) 2x32-core 1 TB | Ubuntu 22.04 or other | 8 Nvidia Tesla V100-PCIE-32GB GPUs
milan3 | AMD EPYC 7513 (Milan) 2x32-core 1 TB | Ubuntu 22.04 or other | General use
minim1 | Apple M1 Desktop | OSX | -
oswald | Oswald head node | Ubuntu 22.04 | -
oswald00 | Intel Xeon E5-2683 v4 (Broadwell) 2x16-core 256 GB | Centos 7.9 | Tesla P100 & Nallatech FPGA
oswald02 | Intel Xeon E5-2683 v4 (Broadwell) 2x16-core 256 GB | Centos 7.9 | Tesla P100 & Nallatech FPGA
oswald03 | Intel Xeon E5-2683 v4 (Broadwell) 2x16-core 256 GB | Centos 7.9 | Tesla P100 & Nallatech FPGA
pcie | Intel Xeon Gold 6130 CPU (Skylake) 32-core 192 GB | Ubuntu 22.04 | Xilinx U250, Nallatech Stratix 10, Tesla P100, Groq card
pharaoh | Triple Crown AMD EPYC 7742 (Rome) 2x64-core 1 TB | Centos 7.9 | -
radeon | Intel 4 Core 64 GB | Ubuntu 22.04 | AMD Vega20 Radeon VII GPU
secretariat | Triple Crown AMD EPYC 7742 (Rome) 2x64-core 1 TB | Ubuntu 22.04 | Bluefield 2 NIC/DPU
thunderx | ARM Cavium ThunderX2 Server 128 GB | Centos Stream 8 | -
xavier[1-3] | Nvidia Jetson AGX | Ubuntu | Volta GPU
xavier[4-5] | Nvidia Jetson AGX Orin | Ubuntu | Ampere GPU (not deployed)
zenith | AMD Ryzen Threadripper 3970X (Castle Peak) 32-core 132 GB | Ubuntu 22.04 | Nvidia RTX 3090, AMD Radeon RX 6800

New Systems and Devices to be Deployed

  • 2 Snapdragon HDK & Display

  • Intel ARC GPU

  • Achronix FPGA

  • AGX Orin Developer Kits

Accelerator Highlights

Accelerator Name | Host(s)
Xilinx U280 | -
Nvidia A100 GPU | milan0
Nvidia P100 GPU | pcie
Nvidia V100 GPU | equinox, leconte, milan2
Nvidia H100 GPU | hudson
Nvidia Jetson | xavier
Snapdragon 855 | amundsen, mcmurdo
Intel Stratix 10 FPGA | pcie
Xilinx Zynq ZCU 102 | n/a
Xilinx Zynq ZCU 106 | n/a
Xilinx Alveo U250 | pcie
2 Ettus x410 SDRs | marconi
AMD Radeon VII GPU | radeon
AMD MI60 GPU | explorer
AMD MI100 GPU | cousteau
Groq | milan1, pcie

Unique Architecture Highlights

Accelerator Name | Host(s)
Intel Optane DC Persistent Memory | apachepass
Emu Technology CPU | emu
Cavium CPU | thunderx

Other Equipment

  • RTP164 High Performance Oscilloscope

Primary Usage Notes

Access Host (Login)

Login is the node used to access ExCL and to proxy into and out of the worker nodes. It is not to be used for computation but for accessing the compute nodes. The login node does have ThinLinc installed and can also be used for graphical access and more performant X11 forwarding from an internal node. See ThinLinc Quickstart.

Host | Base Resources | Specialized Resources | Notes
login | 4 core 16 Gi vm | - | login node - not for computation, TL

General Interactive Login Use

These nodes can be accessed with ssh, and are available for general interactive use.

Host | Base Resources | Specialized Resources | Notes
oswald | 16 Core 64 Gb | - | Usable, pending rebuild to Ubuntu
oswald00 | 32 core 256 Gi | NVIDIA P100, FPGA @ | -
oswald02 | 32 core 256 Gi | NVIDIA P100, FPGA @ | Not available - rebuilding
oswald03 | 32 core 256 Gi | NVIDIA P100, FPGA @ | Not available - rebuilding
milan0 | 128 Core 1 Ti | NVIDIA A100 (2) | Slurm
milan1 | 128 Core 1 Ti | Groq AI Accelerator (2) | Slurm
milan2 | 128 Core 1 Ti | NVIDIA V100 (8) | -
milan3 | 128 Core 1 Ti | - | Slurm
excl-us00 | 32 Core 192 Gi | - | Rocky 9
excl-us01 | 32 Core 192 Gi | - | Not available pending rebuild
excl-us03 | 32 Core 192 Gi | - | CentOS 7 pending rebuild
secretariat | 256 Core 1 Ti | - | Slurm
affirmed | 256 Core 1 Ti | - | Slurm
pharaoh | 256 Core 1 Ti | - | Slurm
justify | 256 Core 1 Ti | - | Slurm
hudson | 192 Core 1.5 Ti | NVIDIA H100 (2) | -
docker | 20 Core 96 Gi | - | Configured for Docker general use with enhanced image storage
pcie | 32 Core 196 Gi | NVIDIA P100, FPGA @ | TL, no hyperthreading, passthrough hypervisor for accelerators
lewis | 20 Core 48 Gi | NVIDIA T1000, U250 | TL
clark | 20 Core 48 Gi | NVIDIA T1000 | TL
zenith | 32 Core 132 Gi | Nvidia RTX 3090, AMD Radeon RX 6800 | TL
zenith2 | 32 Core 256 Gi | Embedded FPGAs | TL
radeon | 8 Core 64 Gi | AMD Radeon VII | -
equinox | DGX Workstation | NVIDIA V100 * 4 | rebuilding after ssd failure
explorer | 256 Core 512 Gi | AMD MI60 (2) | -
cousteau | 48 Core 256 Gi | AMD MI100 (2) | -
leconte | 168 Core 602 Gi | NVIDIA V100 * 6 | PowerPC (Summit)

Notes:

  • All of the general compute resources have hyperthreading enabled unless otherwise stated. This can be changed on a per-request basis.

  • TL: ThinLinc enabled. Need to use login as a jump host for resources other than login. See ThinLinc Quickstart.

  • Slurm: Node is added to a Slurm partition and will likely be used for running Slurm jobs. Try to make sure your interactive use does not conflict with any active Slurm jobs.

  • Most of the general compute resources are Slurm-enabled, to allow queuing of larger-scale workloads. Email [email protected] for specialized assistance. Only the systems that are heavily used for running Slurm jobs are marked "Slurm" above.

Graphical session use via ThinLinc

  • login — not for heavy computation

  • zenith

  • zenith2

  • clark

  • lewis

  • pcie

  • intrepid

  • spike

Slurm for Large Jobs

  • Triple Crown — Dedicated Slurm runners.

    • affirmed

    • justify

    • secretariat

    • pharaoh

  • Milan — Additional Slurm resources with other shared use.

    • milan0

    • milan1

    • milan3

  • Others — Shared Slurm runners with interactive use.

    • milan[0-3]

    • cousteau

    • excl-us03

    • explorer

    • oswald

    • oswald[00, 02-03]

Gitlab Runner Specialized Nodes

  • slurm-gitlab-runner — GitLab Runner for launching Slurm jobs.

  • docker — for Docker runner jobs.

  • devdoc — for internal development documentation building and hosting.

Note: any node can be used as a CI runner on request. See GitLab Runner Quickstart and GitHub Runner Quickstart. The above systems have a dedicated or specialized use with CI.

Docker

  • docker — Node with Docker installed.

Specialized usage and reservations

Host | Specialized Usage | Reserved?
pcie | vm with FPGA and GPU passthrough access | task-reserved
lewis | U250, RISC-V emulation using the U250 | -
slurm-gitlab-runner | Slurm integration with gitlab-runner | task-reserved
docker | Slurm integration with gitlab runner for containers | reserved for container use
dragon (vm) | Siemens EDA Tools | task-reserved
devdocs (vm) | Internal development documentation building and hosting | task-reserved

Notes:

  • task-reserved: reserved for specialized tasks, not for project use

Infrastructure Systems

Host Name | Description | OS
excl-us01 (hypervisor) | Intel 16 Core Utility Server 196 GB | -
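Since login is the single access host, a ProxyJump block in `~/.ssh/config` makes the worker nodes reachable in one command (the username and the node names listed under `Host` are placeholders; substitute your own):

```shell
# ~/.ssh/config
Host excl-login
    HostName login.excl.ornl.gov
    User your-user-id          # placeholder: your ExCL account name

# Reach internal nodes through login transparently
Host zenith milan0 hudson pcie
    ProxyJump excl-login
    User your-user-id          # placeholder
```

With this in place, `ssh zenith` tunnels through login automatically, and ThinLinc or X11 sessions can use the same jump path.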

lewis

Lewis currently has a U250 installed with a custom application deployed which requires an older Linux kernel.

Lewis is configured with kernel 5.15.0.

The hold is set with:

    apt-mark hold 5.15.0-72-generic

To remove the hold:

    apt-mark unhold 5.15.0-72-generic
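To confirm the hold is in place and that the running kernel matches, standard apt and uname queries suffice:

```shell
# List packages currently held back; should include the pinned kernel
apt-mark showhold

# Verify the running kernel is the expected 5.15.0 series
uname -r
```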