Installation Guide
This guide covers how to install and use ACCV-Lab, a CUDA-accelerated library with multiple namespace packages.
Prerequisites
Environment Setup
Before installing ACCV-Lab, please ensure that the environment is set up correctly. The recommended way is to use the Docker image as described in the Docker Guide.
You can also have a look at the Dockerfile for more details on the base environment.
Submodules
ACCV-Lab uses submodules in some of the packages (currently only the on_demand_video_decoder package).
To clone the repository with the submodules, you can use the following command:
git clone --recurse-submodules https://github.com/NVIDIA/ACCV-Lab.git
If you have already cloned the repository without the submodules, you can add them later with the following command:
git submodule update --init --recursive
Installation Methods
1. Installation Using the Package Manager Script
Overview
Important
The editable installation (-e) is not supported for scikit-build based packages
(e.g. on_demand_video_decoder or dali_pipeline_framework); attempting it can lead to
missing binaries and import errors.
The standard way to install ACCV-Lab is using the unified installer script that handles all namespace packages
automatically. By default it installs packages with their basic dependencies only; to also install optional
dependencies (needed for some tests and examples), pass the --optional flag explicitly:
# Install all namespace packages
./scripts/package_manager.sh install
# Install in development mode (editable installation)
./scripts/package_manager.sh install -e
# Install with optional dependencies
./scripts/package_manager.sh install --optional
# Install in development mode with optional dependencies
./scripts/package_manager.sh install -e --optional
Important
Installing with optional dependencies is required if you plan to run the contained
tests, as they rely on optional dependencies such as pytest. It may also be required for the
contained examples, which may use additional packages that are not needed by the core library.
The package manager script:
- Automatically installs the required accvlab_build_confighelper package (see the build_config directory in the repository root)
- Installs all configured namespace packages from namespace_packages_config.py (see the development guide for more details)
- Installs the individual namespace packages with pip install and the --no-build-isolation flag by default. You can pass --with-build-isolation to the script if you want pip to use build isolation.
- Tests imports after installation
- Provides detailed progress feedback
2. Installation Using the Convenience Wrapper Script
Alternative: You can also use the convenience wrapper install_local.sh which calls the package manager
automatically and performs a single default install with optional dependencies enabled.
# Install all packages with optional dependencies (default local setup)
./scripts/install_local.sh
Note
install_local.sh does not accept any parameters. If you need fine-grained control (e.g.,
installing without optional dependencies, using editable installs, or building wheels), use
scripts/package_manager.sh directly instead of install_local.sh.
3. Building and Installing Wheels
You can also build the wheels:
# Build wheels for all namespace packages
./scripts/package_manager.sh wheel
# Build wheels with optional dependencies
./scripts/package_manager.sh wheel --optional
# Build wheels in a specific directory
./scripts/package_manager.sh wheel -o /path/to/wheels
The wheel building script:
- Creates wheels for all namespace packages
- Saves wheels to the ./wheels/ directory by default
- Includes the build_confighelper package wheel
- Supports various build configurations for different deployment scenarios
- Uses --no-build-isolation by default, so the resulting wheels are built in the current environment. You can pass --with-build-isolation to the script if you want pip to use build isolation.
- Uses --no-deps by default, so only a wheel for the package itself is built and no wheels for dependencies are prepared. You can pass --with-deps to the script if you want the default pip wheel behavior instead (i.e. prepare wheels for all dependencies as well).
Installing from Built Wheels
After building wheels, you can install them:
# Install all wheels from the wheels directory
pip install wheels/*.whl
# Install specific wheels
pip install wheels/accvlab_optim_test_tools-*.whl
pip install wheels/accvlab_batching_helpers-*.whl
pip install wheels/accvlab_dali_pipeline_framework-*.whl
pip install wheels/accvlab_on_demand_video_decoder-*.whl
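If you automate the wheel installation, a small helper can first locate the built wheels before invoking pip. This is an illustrative sketch only; the `find_wheels`/`install_wheels` names are not part of ACCV-Lab, and the `wheels` directory is assumed from the default output location above:

```python
import subprocess
import sys
from pathlib import Path


def find_wheels(wheel_dir, pattern="*.whl"):
    """Return all wheel files in wheel_dir, sorted for a reproducible order."""
    return sorted(Path(wheel_dir).glob(pattern))


def install_wheels(wheel_dir):
    """Install every wheel found in wheel_dir with pip in one invocation."""
    wheels = find_wheels(wheel_dir)
    if not wheels:
        raise FileNotFoundError(f"no wheels found in {wheel_dir}")
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", *map(str, wheels)]
    )


if __name__ == "__main__":
    # Default output directory of the wheel-building script.
    install_wheels("wheels")
```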
4. Installing Individual Packages with pip
For development or when you only need specific packages, you can install them individually directly with
pip.
Note
{-e} means that the -e (editable) option is optional.
Note
The -e option is not supported for scikit-build based packages (e.g.
dali_pipeline_framework, on_demand_video_decoder).
# Install individual packages
cd packages/optim_test_tools && pip install {-e} . --no-build-isolation
cd packages/batching_helpers && pip install {-e} . --no-build-isolation
cd packages/dali_pipeline_framework && pip install . --no-build-isolation
cd packages/on_demand_video_decoder && pip install . --no-build-isolation
Installing with Optional Dependencies
For individual package installation with optional dependencies:
# Install individual packages with optional dependencies
cd packages/optim_test_tools && pip install {-e} .[optional] --no-build-isolation
cd packages/batching_helpers && pip install {-e} .[optional] --no-build-isolation
cd packages/dali_pipeline_framework && pip install .[optional] --no-build-isolation
cd packages/on_demand_video_decoder && pip install .[optional] --no-build-isolation
Verifying Installation
Basic Verification
Test that the packages installed correctly:
# Test basic import
python -c "import accvlab; print('ACCV-Lab loaded successfully')"
# Test specific namespace packages
python -c "import accvlab.optim_test_tools; print('Optim test tools loaded successfully')"
python -c "import accvlab.batching_helpers; print('Batching helpers loaded successfully')"
python -c "import accvlab.dali_pipeline_framework; print('DALI pipeline framework loaded successfully')"
python -c "import accvlab.on_demand_video_decoder; print('On-demand video decoder loaded successfully')"
Check Available Namespace Packages
# List all configured namespace packages
python -c "from namespace_packages_config import get_namespace_packages; print('\n'.join(get_namespace_packages()))"
Running all Unit Tests
The repository provides a convenience script to run pytest for all configured namespace packages:
./scripts/run_tests.sh
Important
If you want to run the tests, please make sure to install the packages with optional dependencies, as they may be required for the tests.
Important
If you want to run the tests inside a Docker container, you need to
install and use the NVIDIA Container Runtime to ensure that the on_demand_video_decoder package can be
used.
Please see the Docker Guide for more details on how to set up and run with the NVIDIA
Container Runtime.
Alternatively, you can also remove the on_demand_video_decoder package from the installation (by removing
it from the list of namespace packages in the namespace_packages_config.py file, also see the
Development Guide).
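Conceptually, such a test runner just invokes pytest once per package directory. The sketch below is illustrative only; the helper names and the `packages/` directory layout are assumptions based on the paths used elsewhere in this guide, not the actual contents of run_tests.sh:

```python
import subprocess
from pathlib import Path


def build_pytest_commands(package_names, packages_root="packages"):
    """Build one pytest invocation per package directory."""
    return [
        ["pytest", str(Path(packages_root) / name)]
        for name in package_names
    ]


def run_all_tests(package_names, packages_root="packages"):
    """Run pytest for each package; return the directories that failed."""
    failures = []
    for cmd in build_pytest_commands(package_names, packages_root):
        if subprocess.call(cmd) != 0:
            failures.append(cmd[-1])
    return failures


if __name__ == "__main__":
    failed = run_all_tests([
        "optim_test_tools",
        "batching_helpers",
        "dali_pipeline_framework",
        "on_demand_video_decoder",
    ])
    raise SystemExit(1 if failed else 0)
```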
Build Configuration
You can customize the build process using environment variables. Note that this works for any of the installation methods described above.
# Debug build with verbose output
DEBUG_BUILD=1 VERBOSE_BUILD=1 ./scripts/package_manager.sh install
# Optimized build for production
OPTIMIZE_LEVEL=3 USE_FAST_MATH=1 ./scripts/package_manager.sh install
# Custom CUDA architectures (if you know your GPU architecture)
CUSTOM_CUDA_ARCHS="70,75,80" ./scripts/package_manager.sh install
# Enable profiling support
ENABLE_PROFILING=1 ./scripts/package_manager.sh install
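How a build script might interpret these variables can be sketched as follows. This is illustrative only, assuming the common convention that boolean flags accept values like `1`/`true`; the actual ACCV-Lab scripts may parse them differently:

```python
import os


def env_flag(name, default=False):
    """Read a boolean build variable such as DEBUG_BUILD or USE_FAST_MATH."""
    value = os.environ.get(name)
    if value is None:
        return default
    return value.strip().lower() in ("1", "true", "yes", "on")


def cuda_archs():
    """Return CUSTOM_CUDA_ARCHS as a list, or None to trigger auto-detection."""
    raw = os.environ.get("CUSTOM_CUDA_ARCHS")
    if not raw:
        return None
    return [arch.strip() for arch in raw.split(",") if arch.strip()]


if __name__ == "__main__":
    print("debug build:", env_flag("DEBUG_BUILD"))
    print("cuda archs:", cuda_archs() or "auto-detect")
```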
Note
These build variables are honored across all build types in ACCV-Lab: setuptools (PyTorch extensions), external CMake builds (via the provided helper script), and scikit-build packages.
Available Build Variables
| Variable | Type/Values | Default | Description |
|---|---|---|---|
| `DEBUG_BUILD` | bool: `0`/`1` | `0` | Enable debug symbols and assertions |
| `OPTIMIZE_LEVEL` | int: e.g. `3` | (unset) | Compiler optimization level |
| `CPP_STANDARD` | string: e.g. `c++17` | `c++17` | C++ standard to use |
| `VERBOSE_BUILD` | bool: `0`/`1` | `0` | Show detailed build output |
| `CUSTOM_CUDA_ARCHS` | list: e.g. `"70,75,80"` | Auto-detect | Target CUDA architectures (overrides auto-detect) |
| `USE_FAST_MATH` | bool: `0`/`1` | `0` | Enable fast math optimizations |
| `ENABLE_PROFILING` | bool: `0`/`1` | `0` | Enable profiling support |
Important
Currently only C++17 is supported across all packages and toolchains. Set
CPP_STANDARD=c++17. Using newer standards (e.g., C++20) may not be supported for CUDA builds for some
of the packages.
Important
If CUSTOM_CUDA_ARCHS is not set, auto-detection via PyTorch is attempted. If this fails,
the build will use CMake’s native detection (CMAKE_CUDA_ARCHITECTURES=native) to select the appropriate
GPU architectures on the current system.
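This fallback chain can be sketched as follows. It is an illustrative sketch of the described behavior, not the actual build code; the PyTorch call is an assumption about how PyTorch-based detection could look:

```python
import os


def detect_cuda_archs():
    """Return a CMAKE_CUDA_ARCHITECTURES value: the explicit override,
    a PyTorch-detected capability, or "native" as the last resort."""
    custom = os.environ.get("CUSTOM_CUDA_ARCHS")
    if custom:
        # CMake expects ';'-separated architecture lists.
        return custom.replace(",", ";")
    try:
        import torch
        major, minor = torch.cuda.get_device_capability()
        return f"{major}{minor}"
    except Exception:
        # No PyTorch or no visible GPU: let CMake detect on the build machine.
        return "native"
```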
Additional Information
For information about extending ACCV-Lab or adding new namespace packages, see the development guide.