GPU Mode Command Line Initialization

There are several command line arguments that can be used with the VisionPro Deep Learning GUI on startup for library initialization. The commands are issued from the Windows Command Prompt after navigating to the C:\Program Files\Cognex\VisionPro Deep Learning\1.0\Cognex Deep Learning Studio directory (for Cognex Deep Learning Studio.exe, the IDE application) or to the C:\Program Files\Cognex\VisionPro Deep Learning\1.0\Service directory (for VisionPro Deep Learning Service.exe, if using the Deep Learning Client/Server functionality). You issue the commands by first specifying the name of the application, followed by the command line arguments.
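For example, to launch the IDE application with a command line argument, open a Command Prompt, change to the installation directory, and run the executable (a representative invocation using one of the arguments described below; because the executable name contains spaces, it must be enclosed in quotes):

    cd "C:\Program Files\Cognex\VisionPro Deep Learning\1.0\Cognex Deep Learning Studio"
    "Cognex Deep Learning Studio.exe" --gpu-mode=SingleDevicePerTool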

The following command line arguments can be used to control the GPU Mode, which GPU devices to use, and the allocation of GPU memory:


--gpu-mode=[NoSupport or SingleDevicePerTool]

Specifies the GPU mode to be used by the application.

SingleDevicePerTool

A single GPU is used for the analysis performed by a tool. When multiple GPUs are used, the processing time of a single image remains the same, but multiple images can be processed concurrently on different devices.

NoSupport

Specifies that a GPU will not be used.

Note: The NoSupport option conflicts with the --gpu-devices and --optimized-gpu-memory arguments.
Note: See the Multithreading topic for more information about the GPU modes.
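
For example, to launch the server application with GPU processing disabled (a representative invocation, issued from the Service installation directory):

    "VisionPro Deep Learning Service.exe" --gpu-mode=NoSupport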

--gpu-devices=<comma-separated indexed list of GPUs>

Specifies, via an indexed list, the GPUs that will be used at initialization. For example: --gpu-devices=0,1
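
For example, to launch the IDE application so that the first two GPUs are used (a representative invocation that assumes at least two supported GPUs are installed):

    "Cognex Deep Learning Studio.exe" --gpu-mode=SingleDevicePerTool --gpu-devices=0,1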

--optimized-gpu-memory=<memory size, in MB>

Specifies the size, in MB, of the pre-allocated optimized memory buffer. This feature is activated by default, with a default buffer size of 2 GB. To deactivate the feature, first issue the --optimized-gpu-memory-override=1 argument, and then the --optimized-gpu-memory=0 argument. To use a buffer size other than the default, first issue the --optimized-gpu-memory-override=1 argument, and then the --optimized-gpu-memory=<memory size in MB> argument.

Note: The GPU Memory Optimization setting is enabled by default. For more information about the functionality, see the GPU Memory Optimization topic.
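
For example, to set the buffer to 4096 MB, or to deactivate the pre-allocation entirely (representative invocations; the 4096 MB value is illustrative only, and both arguments are assumed to be passed on the same command line with the override argument first):

    "Cognex Deep Learning Studio.exe" --optimized-gpu-memory-override=1 --optimized-gpu-memory=4096
    "Cognex Deep Learning Studio.exe" --optimized-gpu-memory-override=1 --optimized-gpu-memory=0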

--optimized-gpu-memory-override=[0 or 1]

Set this argument to 1 when using the --optimized-gpu-memory argument to override the default buffer size. If set to 0, the default amount of memory will be allocated.
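
A complete invocation can combine several of these arguments. For example, to run the server application on a single GPU with a 4096 MB optimized memory buffer (a representative example; adjust the device index and buffer size to your hardware):

    "VisionPro Deep Learning Service.exe" --gpu-mode=SingleDevicePerTool --gpu-devices=0 --optimized-gpu-memory-override=1 --optimized-gpu-memory=4096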