Vitis FPGA Development

Getting started with Vitis FPGA development.


FPGA Current State

FPGA    State
U250    Attached to spike in Alveo mode.
U55C    Attached to spike in Alveo mode.
U280    Attached to milan3 in Alveo mode.

Vitis Development Tools

This page covers how to access the Vitis development tools available in ExCL. The available FPGAs are listed in the FPGAs section below. All Ubuntu 22.04 systems can load the Vitis/Vivado development tools as a module; see the Quickstart section to get started. The virtual systems have ThinLinc installed, which makes it easier to run graphical applications; see the Accessing systems graphically using ThinLinc section to get started.

Vitis is now primarily deployed as a module for Ubuntu 22.04 systems. You can view available modules and versions with module avail and load the most recent version with module load vitis. These modules should work on any Ubuntu 22.04 system in ExCL.

FPGAs

FPGA    Host System    Slurm GRES Name
U250    spike          fpga:U250
U55C    spike          fpga:U55C
U280    milan3         fpga:U280

Vitis and FPGA Allocation with Slurm (Recommended Method to Use Tools)

Suggested machines for Vitis development are also set up with Slurm, which is used as a resource manager to allocate compute resources as well as hardware resources. The use of Slurm is required to allocate FPGA hardware and reserve build resources on Triple Crown, and it is recommended to reserve resources when running test builds on Zenith. The best practice is to launch builds on the fpgabuild partition with Slurm, then launch bitfile tests with Slurm. Slurm is required to effectively share the FPGAs, and to share build resources with automated CI runs and other automated build and test scripts. As part of a Slurm interactive session or batch script, use modules to load the desired version of the tools. The rest of this section details how to use Slurm; see the Cheat Sheet for commonly used Slurm commands and the Slurm Quick Start User Guide to learn the basics.

Interactive Use: Vitis Build

Allocate a build instance for one Vitis Build. Each Vitis build uses 8 threads by default. If you plan to use more threads, please adjust -c accordingly.

srun -J interactive_build -p fpgabuild -c 8 --pty bash

Where:

  • -J, --job-name=<jobname>

  • -p, --partition=<partition names>

  • -c, --cpus-per-task=<ncpus>

Recommended: bash can be replaced with the build or execution command to run the command and get the results back to your terminal. Otherwise, you have to exit the bash shell launched by srun to release the resources.

Recommended: sbatch can be used with a script to queue the job and store the resulting output to a file. sbatch is better than srun for long-running builds.

Interactive Use: Allocate FPGA

Allocate the U250 FPGA to run hardware jobs. Please release the FPGA when you are done so that other jobs can use the FPGA.

srun -J interactive_fpga -p fpgarun --gres="fpga:U250:1" --pty bash

Where:

  • -J, --job-name=<jobname>

  • -p, --partition=<partition names>

  • --gres="fpga:U250:1": specifies that you want to use 1 U250 FPGA.

Resources: "fpga:U250:1" can be replaced with the FPGA resource that you want to use, and multiple resources can be reserved at a time. See the FPGAs section for a list of available FPGAs.

Recommended: bash can be replaced with the build or execution command to run the command and get the results back to your terminal. Otherwise, you have to exit the bash shell launched by srun to release the resources.

Non-interactive Use: Vitis Build

sbatch -J batch_build -p fpgabuild -c 8 build.sh

Where:

  • -J, --job-name=<jobname>

  • -p, --partition=<partition names>

  • -c, --cpus-per-task=<ncpus>

  • build.sh: a script to launch the build.

Recommended: The Slurm parameters can be stored in build.sh with #SBATCH <parameter>.

Template: See Slurm Templates (code.ornl.gov) for Slurm sbatch script templates.
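
A minimal sketch of what build.sh might look like (the make target is illustrative; adjust it to your project):

#!/usr/bin/env bash
#SBATCH -J batch_build
#SBATCH -p fpgabuild
#SBATCH -c 8

# Load the Vitis toolchain and the Xilinx Runtime (XRT).
module load vitis
source /opt/xilinx/xrt/setup.sh

# Launch the build.
make kernel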

Non-interactive Use: Vitis Run

sbatch -J batch_run -p fpgarun --gres="fpga:U250:1" run.sh

Where:

  • -J, --job-name=<jobname>

  • -p, --partition=<partition names>

  • --gres="fpga:U250:1": specifies that you want to use 1 U250 FPGA.

  • run.sh: a script to launch the run.

Recommended: The Slurm parameters can be stored in run.sh with #SBATCH <parameter>.

Template: See Slurm Templates (code.ornl.gov) for Slurm sbatch script templates.
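
A minimal sketch of what run.sh might look like (the host executable and xclbin names are illustrative):

#!/usr/bin/env bash
#SBATCH -J batch_run
#SBATCH -p fpgarun
#SBATCH --gres=fpga:U250:1

# Load the Xilinx Runtime and make sure emulation mode is not set.
source /opt/xilinx/xrt/setup.sh
unset XCL_EMULATION_MODE

./host.exe vadd.hw.xclbin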

Quickstart

  1. From the login node, run srun -J interactive_build -p fpgabuild -c 8 --pty bash to start a bash shell.

  2. Use module load vitis to load the latest version of the Vitis toolchain.

  3. Use source /opt/xilinx/xrt/setup.sh to load the Xilinx Runtime (XRT).

First Steps

  1. Follow the ThinLinc Quickstart to set up ThinLinc for graphical access.

  2. Go through the Vitis Getting Started Tutorials; in particular, the Getting started with Vitis tutorial goes over building and running an example application.

  3. Go through the Vitis Hardware Accelerators Tutorials.

  4. Go through the Vitis Accel Examples.

Getting specific FPGA information from the Platform

Use platforminfo to query additional information about an FPGA platform, as shown in the example command below.

$ platforminfo --platform xilinx_u250_gen3x16_xdma_3_1_202020_1
==========================
Basic Platform Information
==========================
Platform:           gen3x16_xdma_3_1
File:               /opt/xilinx/platforms/xilinx_u250_gen3x16_xdma_3_1_202020_1/xilinx_u250_gen3x16_xdma_3_1_202020_1.xpfm
Description:
    This platform targets the Alveo U250 Data Center Accelerator Card. This high-performance acceleration platform features up to four channels of DDR4-2400 SDRAM which are instantiated as required by
the user kernels for high fabric resource availability, and Xilinx DMA Subsystem for PCI Express with PCIe Gen3 x16 connectivity.


=====================================
Hardware Platform (Shell) Information
=====================================
Vendor:                           xilinx
Board:                            U250 (gen3x16_xdma_3_1)
Name:                             gen3x16_xdma_3_1
Version:                          202020.1
Generated Version:                2020.2
Hardware:                         1
Software Emulation:               1
Hardware Emulation:               1
Hardware Emulation Platform:      0
FPGA Family:                      virtexuplus
FPGA Device:                      xcu250
Board Vendor:                     xilinx.com
Board Name:                       xilinx.com:au250:1.2
Board Part:                       xcu250-figd2104-2L-e

...

Accessing systems graphically using ThinLinc

The virtual systems have ThinLinc installed, which makes it easier to run the graphical Xilinx tools. See Accessing ThinLinc and the ThinLinc Quickstart to get started.

Using Vitis with the Fish Shell (Recommended Approach)

Fish is installed system-wide with a default configuration based on Aaron's fish configuration, which includes helpful functions for launching the Xilinx development tools. The next sections go over the functions this fish config provides.

Note: Fish is not backwards compatible with Bash (see Fish for bash users on fishshell.com), so to load modules and source bash scripts the config includes the bass function. Prepend bass to source or module commands to use bash features in fish.

sfpgabuild

sfpgabuild is a shortcut for srun -J interactive_build -p fpgabuild -c 8 --mem 8G --mail-type=END,FAIL --mail-user $user_email --pty $argv. Essentially, it sets up an FPGA build environment using Slurm with reasonable defaults. Each default can be overridden by specifying a new parameter when calling sfpgabuild. sfpgabuild also modifies the prompt to remind you that you are in the FPGA build environment.

sfpgarun-u250

sfpgarun-u250 is a shortcut for srun -J fpgarun-u250 -p fpgarun -c 8 --mem 8G --mail-type=END,FAIL --mail-user $user_email --gres="fpga:U250:1" --pty $argv. It sets up an FPGA run environment, including requesting the FPGA resource.

sfpgarun-u55c

sfpgarun-u55c is a shortcut for srun -J fpgarun-u55c -p fpgarun -c 8 --mem 8G --mail-type=END,FAIL --mail-user $user_email --gres="fpga:U55C:1" --pty $argv. It sets up an FPGA run environment, including requesting the FPGA resource.

sfpgarun-u280

sfpgarun-u280 is a shortcut for srun -J fpgarun-u280 -p fpgarun -c 8 --mem 8G --mail-type=END,FAIL --mail-user $user_email --gres="fpga:U280:1" --pty $argv. It sets up an FPGA run environment, including requesting the FPGA resource.

sfpgarun-hw-emu

sfpgarun-hw-emu is a shortcut for XCL_EMULATION_MODE=hw_emu srun -J fpgarun -p fpgarun -c 8 --mem 8G --mail-type=END,FAIL --mail-user $user_email --pty $argv. It sets up an FPGA run environment with XCL_EMULATION_MODE set for hardware emulation.

sfpgarun-sw-emu

sfpgarun-sw-emu is a shortcut for XCL_EMULATION_MODE=sw_emu srun -J fpgarun -p fpgarun -c 8 --mem 8G --mail-type=END,FAIL --mail-user $user_email --pty $argv. It sets up an FPGA run environment with XCL_EMULATION_MODE set for software emulation.

viv

After running bass module load vitis, sfpgabuild, or sfpgarun, viv can be used to launch Vivado in the background; it is a shortcut for vivado -nolog -nojournal.
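
Because these functions pass their arguments through to srun ($argv), you can run a command directly instead of starting a shell. A sketch (the make target and binary name are illustrative):

sfpgabuild make kernel
sfpgarun-u250 ./host.exe vadd.hw.xclbin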

Manually Setting up License

In order to manually set up the Xilinx license, set the environment variable XILINXD_LICENSE_FILE to 2100@license.ftpn.ornl.gov.

export XILINXD_LICENSE_FILE=2100@license.ftpn.ornl.gov

The FlexLM server uses ports 2100 and 2101.

Note: This step is done automatically by the module load command and manually setting up the license should not be needed.

Building and Running FPGA Applications

Xilinx FPGA projects can be built using the Vitis compiler (v++), the Vitis GUI, Vitis HLS, or Vivado. In general, I recommend using the Vitis compiler via the command line and scripts, because that workflow is easy to document, store in git, and run with GitLab CI. I recommend Vitis HLS when optimizing a kernel, since it provides many profiling tools; see the Vitis HLS Tutorial. See the Vitis documentation for more details on building and running FPGA applications.

Setting up the Vitis Environment

The Vitis environment and tools are set up via module files. To load the latest version of the Vitis environment, use the following command. In bash:

module load vitis

In fish:

bass module load vitis

To see available versions, use module avail. A specific version can then be loaded by specifying the version, for example module load vitis/2020.2.

Note: Because of issues with XRT and with OpenCL including the xilinx.icd by default, on many systems we moved xilinx.icd to /etc/OpenCL/vendors/xilinx/xilinx.icd. To load the FPGA as an OpenCL device, you must therefore set the environment variable OPENCL_VENDOR_PATH to point to /etc/OpenCL/vendors/xilinx or /etc/OpenCL/vendors/all.
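
For example, in bash:

export OPENCL_VENDOR_PATH=/etc/OpenCL/vendors/xilinx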

Build Targets

There are three build targets available when building an FPGA kernel with Vitis tools.

Software Emulation: The host application runs with a C/C++ or OpenCL model of the kernels. Used to confirm functional correctness of the system. Fastest build time; supports quick design iterations.

Hardware Emulation: The host application runs with a simulated RTL model of the kernels. Used to test the host/kernel integration and get performance estimates. Best debug capabilities; moderate compilation time with increased visibility into the kernels.

Hardware Execution: The host application runs with the actual hardware implementation of the kernels. Used to confirm that the system runs correctly and with the desired performance. Final FPGA implementation; long build time with accurate (actual) performance results.

The desired build target is specified with the -t flag of v++.
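
For example (other required flags elided):

v++ -c -t sw_emu ...   # software emulation
v++ -c -t hw_emu ...   # hardware emulation
v++ -c -t hw ...       # hardware execution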

Building the Host Program

Important: Set up the command shell or window as described in Setting up the Vitis Environment prior to running the tools.

The host program can be written using either the native XRT API or OpenCL API calls, and it is compiled using the GNU C++ compiler (g++). Each source file is compiled to an object file (.o) and linked with the Xilinx runtime (XRT) shared library to create the executable which runs on the host CPU.

Compiling and Linking for x86

Each source file of the host application is compiled into an object file (.o) using the g++ compiler.

g++ ... -c <source_file1> <source_file2> ... <source_fileN>

The generated object files (.o) are linked with the Xilinx Runtime (XRT) shared library to create the executable host program. The required libraries are pulled in with -l options at link time.

g++ ... <object_file1.o> ... <object_fileN.o> -lOpenCL -lpthread -lrt -lstdc++

Compiling and linking for x86 follows the standard g++ flow. The only requirement is to include the XRT header files and link the XRT shared libraries.

When compiling the source code, the following g++ options are required:

  • -I$XILINX_XRT/include/: XRT include directory.

  • -I$XILINX_VIVADO/include: Vivado tools include directory.

  • -std=c++11: Define the C++ language standard.

When linking the executable, the following g++ options are required:

  • -L$XILINX_XRT/lib/: Add the XRT library directory to the linker search path.

  • -lOpenCL: Search the named library during linking.

  • -lpthread: Search the named library during linking.

  • -lrt: Search the named library during linking.

  • -lstdc++: Search the named library during linking.

Note: In the Vitis Examples you may see the addition of the xcl2.cpp source file and the -I../libs/xcl2 include statement. These additions to the host program and g++ command provide access to helper utilities used by the example code, but are generally not required for your own code.
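
Putting the options together, a minimal compile-and-link sketch for a single-source host program (host.cpp and host.exe are illustrative names):

g++ -std=c++11 -I$XILINX_XRT/include/ -I$XILINX_VIVADO/include -c host.cpp -o host.o
g++ host.o -o host.exe -L$XILINX_XRT/lib/ -lOpenCL -lpthread -lrt -lstdc++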

Building the Device Binary

The kernel code is written in C, C++, OpenCL C, or RTL, and is built by compiling the kernel code into a Xilinx object (XO) file, then linking the XO files into a device binary (XCLBIN) file.

This process has two steps:

  1. Build the Xilinx object files from the kernel source code.

    • For C, C++, or OpenCL kernels, the v++ -c command compiles the source code into Xilinx object (XO) files. Multiple kernels are compiled into separate XO files.

  2. After compilation, the v++ -l command links one or multiple kernel objects (XO), together with the hardware platform XSA file, to produce the device binary XCLBIN file.

For RTL kernels, the package_xo command produces the XO file to be used for linking; refer to RTL Kernels for more information. You can also create kernel object (XO) files by working directly in the Vitis HLS tool; refer to Compiling Kernels with the Vitis HLS for more information.

Compiling Kernels with the Vitis Compiler

Important: Set up the command shell or window as described in Setting up the Vitis Environment prior to running the tools.

The first stage in building the xclbin file is to compile the kernel code using the Xilinx Vitis compiler. There are multiple v++ options that need to be used to correctly compile your kernel. TIP: The v++ command can be used from the command line, in scripts, or in a build system like make, and can also be used through the Vitis IDE, as discussed in Using the Vitis IDE. The following is an example command line to compile the vadd kernel:

v++ -t sw_emu --platform xilinx_u200_xdma_201830_2 -c -k vadd \
-I'./src' -o'vadd.sw_emu.xo' ./src/vadd.cpp

The various arguments used are described below. Note that some of the arguments are required.

  • -t <arg>: Specifies the build target, as discussed in Build Targets. Software emulation (sw_emu) is used as an example. Optional; the default is hw.

  • --platform <arg>: Specifies the accelerator platform for the build. This is required because runtime features and the target platform are linked as part of the FPGA binary. To compile a kernel for an embedded processor application, specify an embedded processor platform: --platform $PLATFORM_REPO_PATHS/zcu102_base/zcu102_base.xpfm.

  • -c: Compile the kernel. Required. The kernel must be compiled (-c) and linked (-l) in two separate steps.

  • -k <arg>: Name of the kernel associated with the source files.

  • -o'<output>.xo': Specify the shared object file output by the compiler. Optional.

  • <source_file>: Specify source files for the kernel. Multiple source files can be specified. Required.

The above list is a sample of the extensive options available. Refer to the Vitis Compiler Command documentation for details of the various command-line options, and refer to Output Directories of the v++ Command to understand where the various output files are placed. TIP: The output directories of v++ can be changed, which is particularly helpful when you want to build multiple versions of the kernel in the same file structure; the example makefile later on this page shows one way to do this.

Linking the Kernels

Important: Set up the command shell or window as described in Setting up the Vitis Environment prior to running the tools.

The kernel compilation process results in a Xilinx object (XO) file whether the kernel is written in C/C++, OpenCL C, or RTL. During the linking stage, XO files from different kernels are linked with the platform to create the FPGA binary container file (.xclbin) used by the host program.

Similar to compiling, linking requires several options. The following is an example command line to link the vadd kernel binary:

v++ -t sw_emu --platform xilinx_u200_xdma_201830_2 --link vadd.sw_emu.xo \
-o'vadd.sw_emu.xclbin' --config ./connectivity.cfg

This command contains the following arguments:

  • -t <arg>: Specifies the build target. Software emulation (sw_emu) is used as an example. When linking, you must use the same -t and --platform arguments as specified when the input (XO) file was compiled.

  • --platform <arg>: Specifies the platform to link the kernels with. To link the kernels for an embedded processor application, you simply specify an embedded processor platform: --platform $PLATFORM_REPO_PATHS/zcu102_base/zcu102_base.xpfm

  • --link: Link the kernels and platform into an FPGA binary file (xclbin).

  • <input>.xo: Input object file. Multiple object files can be specified to build into the .xclbin.

  • -o'<output>.xclbin': Specify the output file name. The output of the link stage is an .xclbin file. The default output name is a.xclbin.

  • --config ./connectivity.cfg: Specify a configuration file that provides v++ command options for a variety of uses. Refer to the Vitis Compiler Command documentation for more information on the --config option.

TIP: Refer to Output Directories of the v++ Command to understand where the various output files are placed.

Beyond simply linking the Xilinx object (XO) files, the linking process is also where important architectural details are determined. In particular, this is where the number of compute units (CUs) to instantiate into hardware is specified, connections from kernel ports to global memory are assigned, and CUs are assigned to SLRs. These options are typically provided through the --config file, as sketched below.
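
A hedged sketch of what connectivity.cfg might contain (the kernel, port, and memory names are illustrative):

[connectivity]
# Instantiate two compute units of the vadd kernel.
nk=vadd:2:vadd_1.vadd_2
# Assign a kernel port to a specific global memory bank.
sp=vadd_1.in1:DDR[0]
# Assign a compute unit to a specific SLR.
slr=vadd_1:SLR0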

Analyzing the Build Results

The Vitis analyzer is a graphical utility that allows you to view and analyze the reports generated while building and running the application. It is intended to let you review reports generated both by the Vitis compiler when the application is built and by the Xilinx Runtime (XRT) library when the application is run. The Vitis analyzer can be used to view reports from both the v++ command-line flow and the Vitis integrated design environment (IDE). Launch the tool with the vitis_analyzer command.

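For example, to open the link summary produced by v++ (the exact file name depends on your output names; this one is illustrative):

vitis_analyzer vadd.sw_emu.xclbin.link_summary
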
Running Emulation

Important: Set up the command shell or window as described in Setting up the Vitis Environment prior to running the tools.

TLDR: Create an emconfig.json file using emconfigutil and set XCL_EMULATION_MODE to sw_emu or hw_emu before executing the host program. The device binary also has to be built for the corresponding target.

Running Emulation on Data Center Accelerator Cards

  1. Set the desired runtime settings in the xrt.ini file. This step is optional. As described in the xrt.ini File documentation, this file specifies various parameters to control debugging, profiling, and message logging in XRT when running the host application and kernel execution, which enables the runtime to capture debugging and profile data as the application runs. The Emulation group in xrt.ini provides features that affect your emulation run (a minimal sketch is shown after this list). TIP: Be sure to use the v++ -g option when compiling your kernel code for emulation mode.

  2. The emulation configuration file, emconfig.json, is generated from the specified platform using the emconfigutil command, and provides information used by the XRT library during emulation. The following example creates the emconfig.json file for the specified target platform:

    emconfigutil --platform xilinx_u200_xdma_201830_2

    In emulation mode, the runtime looks for the emconfig.json file in the same directory as the host executable, and reads in the target configuration for the emulation runs. TIP: It is mandatory to have an up-to-date JSON file for running emulation on your target platform.

  3. Set the XCL_EMULATION_MODE environment variable to sw_emu (software emulation) or hw_emu (hardware emulation) as appropriate. This changes the application execution to emulation mode.

    Use the following syntax to set the environment variable for C shell (csh):

    setenv XCL_EMULATION_MODE sw_emu

    Bash shell:

    export XCL_EMULATION_MODE=sw_emu

    IMPORTANT: The emulation targets will not run if the XCL_EMULATION_MODE environment variable is not properly set.

  4. Run the application.

    With the runtime initialization file (xrt.ini), emulation configuration file (emconfig.json), and the XCL_EMULATION_MODE environment variable set, run the host executable with the desired command-line arguments. IMPORTANT: The INI and JSON files must be in the same directory as the executable.

    For example:

    ./host.exe kernel.xclbin

    TIP: This command line assumes that the host program is written to take the name of the xclbin file as an argument, as most Vitis examples and tutorials do. However, your application may have the name of the xclbin file hard-coded into the host program, or may require a different approach to running the application.

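As referenced in step 1 above, a minimal xrt.ini sketch (option names vary across XRT versions; treat these as illustrative):

[Runtime]
runtime_log = console

[Debug]
profile = true
timeline_trace = true
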
Running the Application Hardware Build

TLDR: Make sure XCL_EMULATION_MODE is unset. Use a node with the FPGA hardware attached.

TIP: To use the accelerator card, you must have it installed as described in Getting Started with Alveo Data Center Accelerator Cards (UG1301).

  1. Unset the XCL_EMULATION_MODE environment variable. IMPORTANT: The hardware build will not run if the XCL_EMULATION_MODE environment variable is set to an emulation target.

  2. For embedded platforms, boot the SD card. TIP: This step is only required for platforms using Xilinx embedded devices such as Versal ACAP or Zynq UltraScale+ MPSoC.

    For an embedded processor platform, copy the contents of the ./sd_card folder produced by the v++ --package command to an SD card as the boot device for your system. Boot your system from the SD card.

  3. Edit the xrt.ini file. This step is optional, but recommended when running on hardware for evaluation purposes. You can configure XRT with the xrt.ini file to capture debugging and profile data as the application is running. To capture event trace data when running the hardware, refer to Enabling Profiling in Your Application. To debug the running hardware, refer to Debugging During Hardware Execution. TIP: Ensure you use the v++ -g option when compiling your kernel code for debugging.

  4. Run your application.

    The specific command line to run the application will depend on your host code. A common implementation used in Xilinx tutorials and examples is as follows:

    ./host.exe kernel.xclbin

TIP: This command line assumes that the host program is written to take the name of the xclbin file as an argument, as most Vitis examples and tutorials do. However, your application can have the name of the xclbin file hard-coded into the host program, or can require a different approach to running the application.

Example Makefile

A simple example Vitis project is available at https://code.ornl.gov/7ry/add_test; it can be used to test the Vitis compile chain. The makefile used by this project, shown below, is an example of how to build an FPGA-accelerated application.

# Build target: one of sw_emu, hw_emu, hw
HW_TARGET ?= sw_emu
# Kernel language: opencl or xilinx (C++)
LANGUAGE ?= opencl
# Kernel version: 1, 2, or 3
VERSION ?= 1

#HWC stands for hardware compiler
HWC = v++
TMP_DIR = _x/$(HW_TARGET)/$(LANGUAGE)/$(VERSION)
src_files = main_xilinx.cpp cv_opencl.cpp double_add.cpp
hpp_files = cv_opencl.hpp double_add.hpp
COMPUTE_ADD_XO = $(HW_TARGET)/$(LANGUAGE)/xo/add_kernel_v$(VERSION).xo
XCLBIN_FILE = $(HW_TARGET)/$(LANGUAGE)/add_kernel_v$(VERSION).xclbin

ifeq ($(LANGUAGE), opencl)
    KERNEL_SRC = kernels/add_kernel_v$(VERSION).cl
else
    KERNEL_SRC = kernels/add_kernel_v$(VERSION).cpp
endif

.PHONY: all kernel
all: double_add emconfig.json $(XCLBIN_FILE)
build: $(COMPUTE_ADD_XO)
kernel: $(XCLBIN_FILE)

double_add: $(src_files) $(hpp_files)
    g++ -Wall -g -std=c++11 $(src_files) -o $@ -I../common_xlx/ \
    -I${XILINX_XRT}/include/ -L${XILINX_XRT}/lib/ -L../common_xlx -lOpenCL \
    -lpthread -lrt -lstdc++

emconfig.json:
    emconfigutil --platform xilinx_u250_gen3x16_xdma_3_1_202020_1 --nd 1

$(COMPUTE_ADD_XO): $(KERNEL_SRC)
    $(HWC) -c -t $(HW_TARGET) --kernel double_add --temp_dir $(TMP_DIR) \
    --config design.cfg -Ikernels -I. $< -o $@

$(XCLBIN_FILE): $(COMPUTE_ADD_XO)
    $(HWC) -l -t $(HW_TARGET) --temp_dir $(TMP_DIR) --config design.cfg \
    --connectivity.nk=double_add:1:csq_1 \
    $^ -I. -o $@

.PHONY: clean
clean:
    rm -rf double_add emconfig.json xo/ built/ sw_emu/ hw_emu/ hw/ _x *.log .Xil/

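With this makefile, a software-emulation build might be invoked as follows (the variable values are illustrative):

make all HW_TARGET=sw_emu LANGUAGE=opencl VERSION=1
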
Performance Considerations

Vitis and Vivado use 8 threads by default on Linux, and many of the Vivado tools can only utilize 8 threads for a given task; see the Multithreading in the Vivado Tools section of the Vivado Design Suite User Guide: Implementation (UG904). From experimenting, I found that the block-level synthesis task can leverage more than 8 threads, but it will not do so unless you set the vivado.synth.jobs and vivado.impl.jobs flags.

Here is an example snippet from the Xilinx Bottom-Up RTL Tutorial which shows one way to query and set the number of CPUs to use.

NCPUS := $(shell grep -c ^processor /proc/cpuinfo)
JOBS := $(shell expr $(NCPUS) - 1)

XOCCFLAGS := --platform $(PLATFORM) -t $(TARGET)  -s -g
XOCCLFLAGS := --link --optimize 3 --vivado.synth.jobs $(JOBS) --vivado.impl.jobs $(JOBS)
# You could uncomment the following line and modify the options for hardware debug/profiling
#DEBUG_OPT := --debug.chipscope krnl_aes_1 --debug.chipscope krnl_cbc_1 --debug.protocol all --profile_kernel data:all:all:all:all

build_hw:
	v++ $(XOCCLFLAGS) $(XOCCFLAGS) $(DEBUG_OPT) --config krnl_cbc_test.cfg -o krnl_cbc_test_$(TARGET).xclbin krnl_cbc.xo ../krnl_aes/krnl_aes.xo

Useful References

  • Vitis Unified Software Development Platform 2020.2 Documentation

  • Vitis Getting Started Tutorials

  • Vitis Hardware Accelerators Tutorials

  • Vitis Accel Examples

  • Vitis HLS Tutorial

  • Xilinx Bottom-Up RTL Tutorial

  • Getting Started with Alveo Data Center Accelerator Cards (UG1301)

  • Vivado Design Suite User Guide: Implementation (UG904)

  • Vivado Design Suite User Guide (UG892)

  • Xilinx Vivado Design Suite Quick Reference Guide (UG975)

  • Vivado Design Suite Tcl Command Reference Guide (UG835)

  • Fish for bash users (fishshell.com)

  • Slurm Quick Start User Guide

  • Slurm Templates (code.ornl.gov)

Useful Commands

xbutil configure # Device and host configuration
xbutil examine   # Status of the system and device
xbutil program   # Download the acceleration program to a given device
xbutil reset     # Resets the given device
xbutil validate  # Validates the basic shell acceleration functionality

platforminfo -l # List all installed platforms.
platforminfo --platform <platform_file> # Get specific FPGA information from the platform.


