Green Classify High Detail

Overview of Green Classify Architectures

The Green Classify tool offers three architectures: Focused, High Detail, and High Detail Quick.

 

  • Green Classify Focused classifies an object or a complete scene. Whether identifying products by their packaging, classifying welding seams, or separating acceptable from unacceptable defects, the Focused mode learns to distinguish different classes from a collection of labeled images. To train the Focused mode, all you need to provide are images assigned and labeled with the different classes.

  • Green Classify High Detail is similar to Green Classify Focused but uses a different architecture. Like Focused mode, it learns to distinguish different classes from a collection of labeled images; to train it, all you need to provide are images assigned and labeled with the different classes.

  • Green Classify High Detail Quick is a variant of High Detail mode that trains dramatically faster at the cost of a small decrease in detection accuracy. It incorporates state-of-the-art training algorithms to deliver robust, high-performing results, though its accuracy is slightly lower than that of Green Classify High Detail. It also requires only a few Tool Parameters, which makes training quick and easy. In other respects, including its step-by-step usage, it is not much different from Green Classify High Detail.

 

About Green Classify High Detail

Green Classify High Detail is an image classification tool that, in most cases, delivers the best classification accuracy of all the Green Classify tools. It has no sampler at its base: it samples from the entire image, which slows training somewhat but in most cases yields higher accuracy than the Green Classify Focused tool. Its tool parameters differ, and unlike Green Classify Focused, it uses a Validation Set during training to pick the best neural network classification model for the given training data. Beyond these differences, Green Classify High Detail works much like Green Classify Focused in how you train, process, and interpret the results.
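As a rough illustration of how training with a validation set can pick the best model, consider the sketch below. The epoch loop, snapshot names, and loss values are purely illustrative assumptions; VisionPro Deep Learning performs this selection internally.

```python
# Hypothetical sketch: pick the model snapshot with the lowest validation
# loss. All names and numbers here are made up for illustration.

def select_best_model(epoch_results):
    """Return the (epoch, validation_loss, snapshot) tuple with the
    lowest validation loss."""
    return min(epoch_results, key=lambda r: r[1])

# Illustrative run: validation loss dips, then rises as the network
# begins to overfit the training data.
history = [
    (1, 0.92, "snapshot_1"),
    (2, 0.54, "snapshot_2"),
    (3, 0.31, "snapshot_3"),  # lowest validation loss
    (4, 0.35, "snapshot_4"),  # loss rises again: overfitting
]
epoch, loss, snapshot = select_best_model(history)
print(epoch, snapshot)
```

The snapshot kept is the one taken where validation loss bottomed out, not the last one trained.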

 

Architectures: Green Classify High Detail vs Green Classify Focused

High Detail mode uses an architecture that is different from Focused mode. Because of this architecture, High Detail mode takes more time to train than Focused mode, but you get more accurate results.

Creating a model in High Detail mode is basically the same as in Focused mode, though some tool parameters differ. Also, you cannot assign multiple tags per view in High Detail mode. Each view has exactly one corresponding tag, which means that non-exclusive mode is not supported.

Focused mode uses an architecture that is different from High Detail mode. Because of this architecture, Focused mode takes less time to train than High Detail mode, so you get faster feedback.

Focused mode is selective, sampling only the parts of the image with useful information. Because of this focus, the network can miss information, especially when the image has important details everywhere.

 

  Feature                     High Detail mode                        Focused mode
  Speed                       Slow                                    Fast
  Accuracy                    More Accurate                           Accurate
  Number of Parameters        Many                                    Many
  Image Dataset Composition   Training Set, Validation Set, Test Set  Training Set, Test Set

 

Architectures: Green Classify High Detail vs Green Classify High Detail Quick

High Detail Quick differs from High Detail mode in that it adopts a different, purpose-built training algorithm. Because of this change, training became dramatically faster while classification accuracy stays only a little behind that of Green Classify High Detail. It no longer uses a validation set during training, and it does not require as many tool parameters as High Detail does. Everything else works the same as in High Detail mode. Note that, with properly tuned tool parameters, High Detail mode is on average more accurate than High Detail Quick mode.

 

  Feature                     High Detail mode                        High Detail Quick mode
  Speed                       Slow                                    Fast
  Accuracy                    More Accurate                           Accurate
  Number of Parameters        Many                                    Almost None
  Image Dataset Composition   Training Set, Validation Set, Test Set  Training Set, Test Set

 

Supported Features vs Architectures

  Feature                                     Green Classify Focused        Green Classify High Detail                             Green Classify High Detail Quick
  View Inspector                              Supported without heat map    Supported with heat map                                Supported with heat map
  Loss Inspector                              Not supported                 Supported                                              Not supported
  Validation Set                              Not used in training          Used in training                                       Not used in training
  VisionPro Deep Learning Tool Parameters     Fewer parameters              More parameters for control*, no sampling parameters   Almost no parameters
  Multi-class Classification
  (Non-Exclusive/Exclusive Mode)              Supported                     Not supported                                          Not supported
  Resize Mode                                 Not supported                 Supported                                              Not supported

Note: To use all the functions of each tool, go to Help and enable Expert Mode.
Note: See Loss Inspector for more details on the Loss Inspector feature.

 

Training Workflow for Green Classify High Detail

When a Green Classify tool is in Green Classify High Detail mode, the training workflow of the tool is:

 

  1. Launch VisionPro Deep Learning.
  2. Create a new workspace or import an existing one into VisionPro Deep Learning.
  3. Collect images and load them into VisionPro Deep Learning.
  4. Define ROI (Region of Interest) to construct Views.
    1. If a pose from a Blue Locate tool is being used to transform the orientation of the View being used as an input to the Green Classify tool, process the images (press the Scissors icon) before opening the Green Classify tool. For more details, see ROI Options Following a Blue Locate Tool.
    2. If necessary, adjust the Region of Interest (ROI). Within the Display Area, right-click and select Edit ROI from the menu.

    3. After adjusting the ROI, press the Apply button and the adjusted ROI will be applied to all of the images.
    4. Press the Close button on the toolbar to continue.
  5. If there is extraneous information in the image, add appropriate masks to exclude those areas of the image. Within the Display Area, right-click and select Edit Mask from the menu.
    1. From the Mask toolbar, select and edit the appropriate masks.

    2. After adding the necessary mask(s), press the Apply button and the mask will be applied to the current image.

      If you click Apply All and then click Yes on the ensuing Apply Mask dialog after adding the necessary masks, the same mask will be applied to all of the images. If you click No on the dialog, the mask will not be applied and you return to the Edit Mask window.

    3. Press the Close button on the toolbar to continue.
  6. Go through all of the images and label the images with classification tags. See Create Label (Labeling) for the details of labeling.

    Tip: It is recommended to annotate your image files with descriptive names or numbering schemes to help when labeling the images.
  7. You can use the Label Views option in the View Browser.

  8. When labeling, you can either apply a tag or use regular expressions to apply the classification tag label.

  9. Make sure that all of the images have been labeled with a classification tag.
  10. Split the images into training images and test images. Use image sets to divide them properly into the training and test groups, then add images to the training set.
    1. Select the images in the View Browser, and on the right-click pop-up menu click Add views to training set. To select multiple images in the View Browser, use the Shift + Left Mouse Button.
    2. Or, use Display Filters to show the desired images for the training only and add them to the training set by clicking Actions for ... views → Add views to training set.
  11. Prior to training, set the parameters in Tool Parameters. You can configure the Training and Perturbation parameters or simply use their default values. See Configure Tool Parameters for the details of the supported parameters.
    1. Green Classify High Detail has no Sampling parameters because it does not use a feature-based sampler; it samples from all the pixels of each view.
    2. If you want more granular control over training or processing, turn on Expert Mode in the Help menu to enable additional parameters in Tool Parameters.
    3. If you want to change the proportion of the training set that is held out as the validation set, change the Validation Set Ratio. See Prepare Validation Set for the details of training with validation.
    4. If the class distribution of the training set is highly imbalanced (for example, more than 50% of training set images share a single tag), configure Class Weights to assign a different weight to each class. See Training Parameter Details: Class Weights for the details of Class Weights.
  12. Train the tool by pressing the Brain icon.
    1. You can check the training status by monitoring Validation Loss with the Loss Inspector. See Validation Set and Validation Loss for more details.
    2. If you stop training midway by pressing the Stop icon, the tool trained so far is saved. You can later load this tool and process images with it, but you cannot resume training from the point where the last training was stopped.
  13. After training, review the results. Open the Database Overview panel and review the Confusion Matrix and the Precision, Recall, and F-Score of each class (tag) to understand the results. See Interpret Results for the details of interpreting results.

  14. After reviewing the results, go through all of the images and see how the tool correctly or incorrectly marked the tags for each image.
    1. If the tool has correctly marked the images with the appropriate tag, right-click the image and select Accept View.
    2. If the tool has incorrectly marked the images with incorrect tags:
      1. Right-click the image again and select Clear Markings and Labels.
      2. Manually tag the image.

    3. If the tool correctly marked every image (the scenario in step 1), you are ready to use the tool. If any image was marked incorrectly (the scenario in step 2), retrain the tool and repeat steps 11 ~ 14.
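The Validation Set Ratio and Class Weights parameters from step 11 can be sketched conceptually as follows. The function names, the 0.2 ratio, and the inverse-frequency weighting formula are illustrative assumptions, not the tool's actual internals; VisionPro Deep Learning handles both mechanisms for you.

```python
import random
from collections import Counter

# Hypothetical sketch of two Tool Parameters:
#   - Validation Set Ratio: hold out a fraction of training views for validation.
#   - Class Weights: give rarer classes proportionally larger weights.

def split_training_set(views, validation_set_ratio, seed=0):
    """Randomly hold out a fraction of the training views as the validation set."""
    shuffled = views[:]
    random.Random(seed).shuffle(shuffled)
    n_val = int(len(shuffled) * validation_set_ratio)
    return shuffled[n_val:], shuffled[:n_val]  # (training, validation)

def class_weights(labels):
    """Inverse-frequency weighting: rare tags get larger weights."""
    counts = Counter(labels)
    return {tag: len(labels) / (len(counts) * n) for tag, n in counts.items()}

views = [f"view_{i}" for i in range(100)]
train, val = split_training_set(views, validation_set_ratio=0.2)

# Imbalanced example: "OK" dominates, so it receives the smallest weight.
labels = ["OK"] * 80 + ["Scratch"] * 15 + ["Dent"] * 5
weights = class_weights(labels)
```

With a ratio of 0.2, 20 of the 100 views end up in the validation set, and the rare "Dent" tag gets the largest weight so the network does not simply learn to predict "OK" everywhere.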

 

The details of each step are explained in each subsection of Training Green Classify High Detail.
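As a closing illustration of the result review in step 13, the Precision, Recall, and F-Score of each class can be derived from a confusion matrix as sketched below. The two-class matrix, tag names, and counts are made-up examples, not tool output.

```python
# Hypothetical sketch of the per-class metrics shown in the Database
# Overview panel. matrix[i][j] = number of views labeled tags[i] that
# the tool predicted as tags[j].

def per_class_metrics(matrix, tags):
    metrics = {}
    for i, tag in enumerate(tags):
        tp = matrix[i][i]                         # correctly predicted
        fn = sum(matrix[i]) - tp                  # labeled tag, predicted other
        fp = sum(row[i] for row in matrix) - tp   # predicted tag, labeled other
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f_score = (2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
        metrics[tag] = (precision, recall, f_score)
    return metrics

tags = ["Good", "Defect"]
matrix = [[45, 5],   # 45 Good views correct, 5 mistaken for Defect
          [2, 48]]   # 2 Defect views mistaken for Good, 48 correct
results = per_class_metrics(matrix, tags)
```

A low recall for a tag means views of that class are being missed; a low precision means other classes are being mistaken for it. Both point to where relabeling or retraining effort should go.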