
Detector Accuracy

Getting an accuracy score for your detector

The accuracy of your detector depends on several factors:

  • The number of training annotations
  • The number of training areas containing objects or containing background and counter-examples
  • The complexity of the background and counter-example objects

To precisely calculate the accuracy of your detector you can use Accuracy areas:

  • Draw one or more Accuracy Area(s) where you want to test your detector
  • Annotate all the objects inside your Areas
  • Train your detector and visualize the score, displayed in the drop-down just below the "Train Detector" button.
  • You can also check the score per accuracy area directly on the image and sort the areas by score, which lets you quickly navigate to the places where your detector is performing poorly.

How is the score computed?

The meaning of the score is different depending on whether your detector is in Segmentation mode or Count mode.

In count mode...

For count mode detectors, the accuracy score to use is the Counting F-score. This score combines two underlying scores, recall and precision. Recall is the percentage of the objects you outlined in your accuracy area that were correctly matched by a detection. Precision is the percentage of your detections that matched an outlined object. The F-score is the harmonic mean of the two, combining them into a single number.
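As an illustration, here is a minimal sketch of how recall, precision, and the F-score relate to each other. The counts used are hypothetical, and this is not Picterra's internal implementation:

```python
def counting_f_score(true_positives: int, num_annotations: int, num_detections: int) -> dict:
    """Compute recall, precision, and F-score from matched object counts.

    true_positives: detections that matched an outlined object
    num_annotations: objects you outlined in the accuracy area
    num_detections: objects the detector found in the accuracy area
    """
    recall = true_positives / num_annotations if num_annotations else 0.0
    precision = true_positives / num_detections if num_detections else 0.0
    # The F-score is the harmonic mean of precision and recall
    f_score = (2 * precision * recall / (precision + recall)
               if (precision + recall) else 0.0)
    return {"recall": recall, "precision": precision, "f_score": f_score}

# Example: 8 of 10 outlined objects were matched, out of 12 total detections
scores = counting_f_score(true_positives=8, num_annotations=10, num_detections=12)
```

Note that the harmonic mean penalizes imbalance: a detector with perfect recall but poor precision (e.g. one that detects everything) still gets a low F-score.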

If you want to see all of these scores (recall, precision, F-score, and foreground IoU) regardless of which mode you are in, they are all available in the training report as well, which is generated each time you train your detector.

In segmentation mode...

For segmentation mode detectors, the accuracy score to use is the Per-Pixel F-Score.

This is a measure of how well your detected regions overlap with the regions you outlined in your Accuracy Areas. This is a “per pixel” score, meaning it is not suited for matching detected and annotated objects. If you are trying to detect individual objects you should be in count mode, not segmentation mode.
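To make the "per pixel" idea concrete, here is a minimal pure-Python sketch (not Picterra's implementation) that scores the overlap between two binary masks, counting true-positive, false-positive, and false-negative pixels:

```python
def per_pixel_f_score(predicted, annotated):
    """Per-pixel F-score (Dice coefficient) between two binary masks.

    predicted, annotated: same-shaped 2D lists of 0/1 pixel labels,
    where 1 marks a foreground (object) pixel.
    """
    tp = fp = fn = 0
    for pred_row, ann_row in zip(predicted, annotated):
        for p, a in zip(pred_row, ann_row):
            if p and a:
                tp += 1      # detected and annotated
            elif p and not a:
                fp += 1      # detected but not annotated
            elif a and not p:
                fn += 1      # annotated but missed
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

# Example: two 2x3 masks that overlap on two foreground pixels
pred = [[1, 1, 0], [0, 1, 0]]
ann  = [[1, 0, 0], [0, 1, 1]]
score = per_pixel_f_score(pred, ann)
```

Because every pixel contributes equally, one large missed region and many small missed objects can yield the same score, which is exactly why this metric is unsuited for counting individual objects.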

DEPRECATED: Formerly we used the intersection over union, also known as the Jaccard index. This was changed as of 26/09/2022.
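For reference, the deprecated Jaccard index is monotonically related to the per-pixel F-score (F = 2·IoU / (1 + IoU)), so a detector that ranked higher under the old metric still ranks higher under the new one. A small sketch of both, from the same hypothetical pixel counts:

```python
def jaccard_index(tp: int, fp: int, fn: int) -> float:
    """Intersection over union (Jaccard index) from pixel counts:
    |A ∩ B| / |A ∪ B|."""
    denom = tp + fp + fn
    return tp / denom if denom else 0.0

def f_score_from_iou(iou: float) -> float:
    """Per-pixel F-score (Dice) derived from the IoU; the two are
    monotonically related, so they always agree on rankings."""
    return 2 * iou / (1 + iou)

# Hypothetical counts: 2 true-positive, 1 false-positive, 1 false-negative pixel
iou = jaccard_index(tp=2, fp=1, fn=1)   # 2 / 4 = 0.5
```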

Sign up for Picterra University for a more in-depth understanding on detector accuracy as well as many of the other advanced features of the platform.

Accuracy is relative!

Note that there is no single magical number that can be called the “accuracy” of your machine learning model.

This is simply not how machine learning-based approaches work, and expecting a single number is a common misunderstanding when approaching machine learning. Along with the question "How accurate is your model?", one must necessarily also ask "Accuracy on what?".

You could train a building model that performs at 100% on one set of buildings, then run it on a different set of buildings that look very different from your training data and get a much lower score. But perhaps you don't care about those other buildings anyway. So it really depends on what data you assess your model on.

This is why we’ve created accuracy areas. It is up to you to score your model on imagery that corresponds to the scenario you have defined your model to work on. Be sure to create ample variety with your accuracy areas: if they do not cover regions representative of what you are trying to detect at scale, your accuracy score will not be representative of the true score either.