Configure Feature Parameters and Sampling Parameters

Unlike other tools, when you use the Blue Locate tool, you must configure the Features parameters and Sampling parameters before labeling, because these parameters affect the size and other properties of the feature labels you place on each view.

 

Configure Features Parameters

The two Features parameters, Oriented and Scaled, provide degrees of freedom (DOF) for the features that can be labeled, learned, and reported by the tool. Enabling orientation and scale incorporates a tolerance for unlimited scale and rotation change during training of the tool. The specific range of rotation and scale that the tool can accommodate is then controlled by a run-time property.

When Oriented and/or Scaled are enabled, you must consistently label each feature with its orientation and/or size, in addition to the feature's position and identity. During runtime, each found feature's orientation and scale will be determined and reported, and you can adjust the Angle Range and Size Range Processing parameters, if necessary, to account for variations.

Note:
  • If you only need the tool to tolerate feature rotation and/or scale, do not enable these settings. Instead, using the Perturbation parameters, enable an appropriate amount of Rotation and Scale.
  • The Features parameters, Oriented and Scaled, are only available for the Blue Locate Tool and the Blue Read Tool.
  • The Legacy Mode checkbox reverts the behavior of Oriented and Scaled to the pre-3.2 release state. In this state, the scale and rotation tolerances are fixed during training, based on the training samples and the Perturbation parameters.

    While in Legacy Mode, the extracted feature orientation and scale have limited precision. Scale is clamped to the interval of 1/4 to 4 times the feature size.
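The legacy-mode clamping described above can be sketched as follows (a minimal illustration; the function name is hypothetical, not part of the product API):

```python
# Hypothetical sketch of the Legacy Mode behavior described above:
# an extracted feature's scale is limited to 1/4x..4x of the feature size.
def clamp_legacy_scale(scale: float) -> float:
    """Clamp an extracted scale factor to the legacy 1/4..4 interval."""
    return max(0.25, min(scale, 4.0))

print(clamp_legacy_scale(6.0))   # 4.0
print(clamp_legacy_scale(0.1))   # 0.25
print(clamp_legacy_scale(1.5))   # 1.5
```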

 

Oriented

The Oriented setting is configured manually by adjusting the orientation of the feature label graphic: click a handle of the label and drag it to the desired orientation. Ensure that you rotate each feature label so that it correctly labels the orientation of the feature.

 

  1. Select Oriented in Features parameters.

     

  2. Select a feature in the Image Display Area. A red rotation toggle button appears.

     

  3. Select the toggle and drag it around to rotate the feature.

Note: If your application contains rotated features, teach the Blue Locate tool to properly handle rotation.
  • Enable the Oriented parameter.
  • Make sure that all of the various rotation variations are accurately labeled.

 

Scaled

When Scaled is enabled, you label each feature with the feature's size. At runtime, you specify the range of feature sizes across which to search. With Scaled enabled, you set the Feature Size to indicate the feature size that is at 100% scale. The runtime scale range is the range of feature sizes returned relative to the base feature size. You may also select whether to enable Uniform or Non-Uniform scale.
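The relationship between Feature Size and runtime scale described above is simple arithmetic; the following is an illustrative sketch (hypothetical function names, not the product API), using a Size Range such as the one mentioned earlier:

```python
# Illustrative sketch: with Scaled enabled, a found feature's reported scale
# is its measured size relative to the base Feature Size (100% scale).
def reported_scale(found_size: float, base_feature_size: float) -> float:
    return found_size / base_feature_size

def within_size_range(scale: float, size_range: tuple[float, float]) -> bool:
    """Check a reported scale against a runtime Size Range, e.g. (0.8, 1.2)."""
    low, high = size_range
    return low <= scale <= high

scale = reported_scale(found_size=60.0, base_feature_size=50.0)  # 1.2, i.e. 120%
print(within_size_range(scale, (0.8, 1.2)))  # True
```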

 

Scaled Uniform

When Uniform scale is enabled, all of your feature labels will be the same shape, albeit with different sizes. When labeling each feature, you label each feature based on its size. During the runtime operation of the tool, you specify the range of feature sizes that you expect to encounter.

Note: Hold down the Ctrl key while dragging a region handle to center the resizing of the feature label graphic.

 

Scaled Non-Uniform

If Non-uniform scale (also referred to as aspect-ratio) is enabled, then you set the feature size to indicate the 100% scale of X and Y independently, and label each feature instance with a unique aspect ratio. This can be useful when training a single tool to find a range of differently-shaped parts.

Note:
  • Ensure that the Feature Size parameter setting matches your label's dimensions. This can either be set graphically, or you can manually set it based on your label (hover over the label to get the X and Y dimensions of the label).
  • Hold down the Ctrl key while dragging a region handle to center the resizing of the feature label graphic on the current location of the feature graphic.
  • Hold down the Shift key while dragging a region handle to create a uniform scale of the feature graphic label.
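The non-uniform scale arithmetic can be illustrated as follows (a hedged sketch with hypothetical names, not the product API): X and Y are scaled independently against the 100%-scale Feature Size, and their ratio is the instance's aspect ratio.

```python
# Illustrative sketch: with Non-Uniform scale, X and Y are scaled
# independently, so each labeled instance carries its own aspect ratio.
def non_uniform_scale(found_w: float, found_h: float,
                      base_w: float, base_h: float):
    sx = found_w / base_w   # X scale relative to the 100% Feature Size in X
    sy = found_h / base_h   # Y scale relative to the 100% Feature Size in Y
    return sx, sy, sx / sy  # the last value is the instance's aspect ratio

print(non_uniform_scale(80.0, 40.0, 50.0, 50.0))  # (1.6, 0.8, 2.0)
```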

 

Configure Sampling Parameters

Features are the pixels in your image data that you are interested in; at the same time, they are the pixels that are critical for solving your machine vision problem and achieving your specific goal. For example, for a Blue Locate tool, the features can be the pixels that represent the boundary, color, and shading of an object in a view.

 

How Features are Sampled

Green Classify in Focused mode, Red Analyze in Focused mode, Blue Locate and Blue Read tools do not sample the input images uniformly (although the image sampling does cover the entire image extent). During training, the tools use a special technique to selectively sample at a higher rate those parts of the images that are determined to be more likely to contribute additional information to the network.

Because the network training is performed using both information within the sample region and contextual information from around the sample region, the tool can be strongly influenced by samples collected at the edge of an image. If you are using a view within an image, then the context information for samples collected at the edge of the view will use pixels from outside of the view for context data.

Figure: (1) Feature Size, (2) Sample Region, (3) Context Region

If the sample is at the edge of the image itself, the tool generates synthetic pixels for use as context. You can control the specific method used for this through masks, borders, and sampled color channels (via the Masking Mode, Border Type, and Color parameters, respectively).

The tools also allow you to provide masks for use during sampling. This allows you to explicitly exclude parts of the image from training, although, depending on the Masking Mode parameter setting, even masked regions may still be considered as context.

Finally, if you are using color images (or any image with multiple planes or channels), you can explicitly specify which channels are sampled. Using multiple channels has a minor impact on training and processing time, but it can allow the tools to work more accurately in cases where color provides important information in the image.

Note: You can use the Masking Mode parameter as an alternative method for handling image boundaries: by masking the border of an image, you can prevent the tool from collecting image samples that require the generation of synthetic pixels for context.

 

Sampling Parameters

For Focused Mode tools, you need to specify what your features look like in as much detail as possible to effectively teach your tool about them. You do this by configuring the Sampling parameters. A Focused tool uses a feature sampler that samples pixel information from a view, and by configuring the Sampling parameters you tell this feature sampler about the properties of the features that should, or should not, be sampled.

Note: The Sampling parameters affect the result of training, so configure them along with the other training parameters (Network Model parameters, Training parameters, Perturbation parameters) before training.
Note: Tools based on the High Detail and High Detail Quick architectures do not require configuring the Sampling parameters, because they sample pixel information from the entire view and thus do not use a feature sampler.

 

The Sampling tool parameters control the way that images are sampled during training and processing.


Feature Size

Specifies the typical feature diameter, in pixels. The Feature Size parameter is graphically displayed in the lower left of the image, and can be graphically resized within the image to set a more accurate size.

Feature size strongly influences processing time, which scales with 1/n²: for example, a Feature Size of 100 is roughly 100 times faster than a Feature Size of 10. However, a Feature Size of less than 15 usually does not yield good results.
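The quadratic relationship above amounts to simple arithmetic, sketched here (hypothetical helper, not the product API):

```python
# Rough arithmetic sketch of the 1/n^2 relationship: increasing the
# Feature Size from 10 to 100 reduces processing time by (100/10)^2 = 100x.
def relative_speedup(old_size: float, new_size: float) -> float:
    return (new_size / old_size) ** 2

print(relative_speedup(10, 100))  # 100.0
```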

When setting the Feature Size, consider the following with regard to processing time:

Note: The tool will actually see an area five times larger than the Feature Size setting. However, the level of detail will be much higher in the center of the feature, as opposed to the periphery.

Detail

Specifies how much focus the Blue Locate tool should place on the area within the feature graphic, versus the area surrounding the feature label. The available settings are values 1 through 4.

  • When set to 4, the tool focuses attention on learning more about the area inside the labeled feature size graphic, and places limited emphasis on the context region surrounding the graphic.
  • When set to 1, the tool focuses attention on both the context and the area inside the labeled feature size graphic, though less emphasis is placed on the detail inside the graphic.
  • Settings 2 and 3 place varying degrees of emphasis between the two extremes of 1 and 4.

Color

Specifies the number of color channels to use when sampling the image. When set to 1, color images will be converted to greyscale.

  1. The image is treated as greyscale.
  2. Two-channel images (spectral images, grey+alpha).
  3. BGR images.
  4. BGR(A) images.
Note:
  • If the image is RGB(A), it will be converted to grayscale. For computational efficiency (memory allocation, transfer, file save, color conversion, etc.), always prefer the correct number of channels. The VisionPro Deep Learning tools use the BGR channel order.
  • If your application relies on color images, use only the minimal number of required color channels, and send only images that already have the correct number of channels, to avoid conversion. This is because:

Number of Image Channels | Number of Training Channels | Description
1 (greyscale)            | 1       | The correct setting for a greyscale image.
1                        | 2, 3, 4 | Most likely, this will result in a training error.
2                        | 1       | The tool will use only the first channel.
2                        | 2       | The tool will use the full pixel information.
2                        | 3, 4    | This will result in a training error.
3 (BGR)                  | 1       | This will result in a BGR to greyscale conversion.
3                        | 2       | The tool will only use the first two channels (in other words, B and G).
3                        | 3       | The tool will use the full pixel information.
3                        | 4       | Most likely, this will result in a training error.
4 (BGRA)                 | 1       | This will result in a BGRA to greyscale conversion.
4                        | 2       | The tool will only use the first two channels (in other words, B and G).
4                        | 3       | The tool will only use the first three channels (in other words, B, G and R).
4                        | 4       | The tool will use the full pixel information.
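The channel-handling rules in the table above can be expressed as a small decision function. This is an illustrative sketch only, not the actual VisionPro Deep Learning implementation:

```python
# Sketch mirroring the channel-handling table above (illustrative only).
# Given the image's channel count and the configured training channels,
# return which channels are used, or raise where the table predicts an error.
def resolve_channels(image_channels: int, training_channels: int) -> str:
    if training_channels > image_channels and image_channels != 3:
        # e.g. 1 -> 2/3/4 or 2 -> 3/4: most likely a training error
        raise ValueError("training error: not enough image channels")
    if image_channels == 3 and training_channels == 4:
        raise ValueError("training error: BGR image with 4 training channels")
    if training_channels == 1 and image_channels >= 3:
        return "greyscale conversion"
    return f"first {training_channels} channel(s) used"

print(resolve_channels(3, 1))  # greyscale conversion
print(resolve_channels(4, 2))  # first 2 channel(s) used
```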

Border Type

Specifies how pixels on the outside of the image are sampled.

Tip: Adding a mask at the boundaries of the image greatly reduces the false detection rate.
  • Black: Fills the outside of the image with a solid color, black.

  • Replicate: Fills the outside of the image with the last pixel.
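The two Border Type behaviors can be demonstrated with NumPy padding. This is an analogy to illustrate the sampling behavior, not the tool's internal code:

```python
# "Black" fills pixels outside the image with a constant 0;
# "Replicate" repeats the edge pixel outward.
import numpy as np

row = np.array([[10, 20, 30]], dtype=np.uint8)

black = np.pad(row, ((0, 0), (2, 2)), mode="constant", constant_values=0)
replicate = np.pad(row, ((0, 0), (2, 2)), mode="edge")

print(black.tolist())      # [[0, 0, 10, 20, 30, 0, 0]]
print(replicate.tolist())  # [[10, 10, 10, 20, 30, 30, 30]]
```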

Masking Mode

Specifies how a mask will be applied to the sampled image. A mask is used to limit areas of the image processed by the tool.

Note: Masks can be set after training; however, setting them prior to training will help the learning phase.
  • Transparent: Samples are only collected within the unmasked parts of the image, but data from the Context Region is also collected from the areas that are masked. This ensures that features or defects that are at the boundary of the mask or ROI generate the same response as features or defects that are centrally located.
  • Mask: The mask is used to ignore the areas which are masked. All pixels that are masked are set to 0, which prevents data from the masked parts of the image from being considered during training or runtime. However, it also alters the response of the tool to defects or features near the mask boundary. This setting also works to focus the tool toward the center of the ROI.
  • Overlay: The mask is added to the sampled image as an additional color channel.
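The Mask and Overlay settings above can be illustrated with NumPy (an illustrative sketch, not the tool's internal code):

```python
# Sketch of how the Masking Mode settings could treat a masked image.
import numpy as np

image = np.array([[1, 2], [3, 4]], dtype=np.float32)
mask = np.array([[1, 0], [1, 1]], dtype=np.float32)  # 0 = masked out

# Mask: masked pixels are set to 0 and never contribute data.
masked = image * mask

# Overlay: the mask is appended to the image as an extra channel.
overlay = np.dstack([image, mask])

# Transparent (no pixel change shown here): samples are only collected in
# unmasked areas, but masked pixels still serve as context.

print(masked.tolist())   # [[1.0, 0.0], [3.0, 4.0]]
print(overlay.shape)     # (2, 2, 2)
```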
Note:
  • The Detail, Border Type, and Masking Mode parameters are only available when Expert Mode is enabled. This is enabled via the Help menu.
  • Changes to the Sampling parameters after a tool has been trained will invalidate the training, because the underlying image statistics may be fundamentally changed, which will necessitate retraining the tool.

 

Sampling Parameter Details: Feature Sampling and Feature Size

Green Classify in Focused mode, Red Analyze in Focused mode, Blue Locate and Blue Read tools analyze images based on the Feature Size that you specify. The feature size, which is specified in pixels, functions as a hint to the tool about the expected size of "meaningful" or "distinctive" features in the input images. The best method for selecting a feature size is to examine the input images as if you were a human inspector. Note the features in the image that you would use to characterize the image as good or bad, to identify a defect or problem, or to determine where something was and what it was.

For example, if you were attempting to classify pictures of airplanes based on the number of engines, the feature size would be based on the approximate size of an airplane engine.

During both training and runtime, the tool will collect samples from the image that correspond to the pixels within a subregion of the image, as well as contextual information around that region. The contextual region is approximately five times the feature size.
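The sampling geometry described above reduces to simple arithmetic, sketched here (hypothetical helper, not the product API):

```python
# The context region is approximately five times the Feature Size.
def region_sizes(feature_size: int) -> tuple[int, int]:
    sample = feature_size        # sample region ~ the feature diameter
    context = 5 * feature_size   # context region ~ 5x the Feature Size
    return sample, context

print(region_sizes(30))  # (30, 150)
```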

Figure: (1) Feature Size, (2) Sampling Region, (3) Context Region

 

Specifying a feature size is a subjective judgment, although there are a few specific guidelines to follow. For the Blue Locate tool, the feature size should be about the size of the object that you are identifying. If you are labeling features of different sizes, pick a compromise feature size.
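One way to pick such a compromise is the geometric mean of the smallest and largest labeled feature sizes. This is a hedged heuristic for illustration, not a method prescribed by the tool:

```python
# Heuristic (not prescribed by the tool): the geometric mean of the
# smallest and largest feature sizes is one reasonable compromise.
def compromise_feature_size(min_size: float, max_size: float) -> float:
    return (min_size * max_size) ** 0.5

print(compromise_feature_size(20.0, 80.0))  # 40.0
```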

Note: For more information on the relationship between the Feature Size and the training/processing speed of a tool, see Feature Size Optimization.