leconte

Description

This system is generally identical to the nodes (AC922, model 8335-GTW) in the ORNL OLCF Summit system. It consists of:

  • 2 POWER9 CPUs (revision 2.2, PVR 004e 1202), each with 22 cores and 4 threads per core

  • 6 Tesla V100-SXM2-16GB GPUs

  • 606 GiB of memory

  • automounted home directories (on the group NFS server)
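For reference, the CPU figures above imply 176 hardware threads in total. A quick sketch of how to confirm the configuration from a shell on leconte, using standard Linux tools (nothing ExCL-specific is assumed):

```shell
# Hardware threads implied by the specs above: 2 sockets x 22 cores x 4-way SMT
sockets=2; cores_per_socket=22; threads_per_core=4
echo $((sockets * cores_per_socket * threads_per_core))   # prints 176

# On leconte itself, compare against what the OS reports:
#   lscpu            # sockets, cores per socket, threads per core, POWER9 model
#   nvidia-smi -L    # should list six Tesla V100-SXM2-16GB GPUs
#   free -h          # total memory (~606 GiB)
```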

Contact

  • excl-help@ornl.gov

Usage

As currently configured, this system can be accessed with conventional SSH logins (from login.excl.ornl.gov), with automounted home directories. GPU access is currently cooperative; a scheduling mechanism for scheduled access is being designed.
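The login flow above can be captured in an SSH client configuration. A minimal sketch, assuming OpenSSH and that leconte is reachable from the login node under that hostname (replace `myuid` with your ExCL username; the `excl` alias is arbitrary):

```
Host excl
    HostName login.excl.ornl.gov
    User myuid

Host leconte
    User myuid
    ProxyJump excl
```

With this in place, `ssh leconte` hops through the gateway in a single step.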

The software is as delivered by the vendor and may not yet be satisfactory in all respects. The intent is to provision a system as similar to Summit as possible, but some work remains to get there. This should be considered an early-access machine.

Please send assistance requests to excl-help@ornl.gov.

Installed Compilers

See the Compilers page under Software.

GPU Performance

Cooling on this system is still being refined. Rather than running at the fully capable 300 W per GPU, GPU power is currently limited to 250 W to prevent overheating. As cooling is improved, the limit will be restored to 300 W, with dynamic power reduction (with notification) as required to protect the equipment.

It is worth noting that this system had to be pushed quite hard (six independent n-body problems, plus CPU stressors on all but 8 threads) to trigger high-temperature conditions. These limits may not be encountered in typical use.
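To put the cap in aggregate terms, and to check the limit actually in force, the standard `nvidia-smi` tool can be used (the query fields below are standard `nvidia-smi` attributes; the query itself must be run on leconte):

```shell
# Aggregate GPU board power under the temporary cap: 6 GPUs x 250 W each
per_gpu_cap_w=250
num_gpus=6
echo $((per_gpu_cap_w * num_gpus))   # prints 1500 (vs 1800 W at the full 300 W cap)

# To see the enforced cap directly (run on leconte):
#   nvidia-smi --query-gpu=index,power.limit,power.max_limit --format=csv
```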

Performance Information

GPU performance information can be viewed at:

https://graphite.ornl.gov:3000/d/000000058/leconte-gpu-statistics?refresh=30s&orgId=1

Request access by emailing excl-help@ornl.gov.

Other Resources

Please see the IBM 8335-GTW documentation:

https://www.ibm.com/support/knowledgecenter/en/POWER9/p9hdx/8335_gtw_landing.htm

Last updated 2 years ago