Component interactions of the Object Detector

Let's walk through how the various components of the Object Detector application interact:

From the user's perspective, the application loads a random image and displays the objects (or labels) detected in it. The demo workflow is as follows (a code sketch of the endpoint appears after the list):

  1. The Object Detector application's web interface calls the Demo Object Detection endpoint to start the demo.
  2. The endpoint calls the Storage Service to get a list of files that are stored in a specified S3 bucket.
  3. After receiving the list of files, the endpoint randomly selects an image file for the demo.
  4. The endpoint then calls the Recognition Service to perform object detection on the selected image file.
  5. After receiving the object labels, the endpoint packages the results in JSON format.
  6. Finally, the web interface displays the randomly selected image and its detection results.
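
To make the flow concrete, here is a minimal sketch of what such an endpoint could look like. It assumes the Storage Service and Recognition Service are backed by AWS S3 and Amazon Rekognition through boto3; the route path, bucket name, and label limit are illustrative and not taken from the application's actual code.

```python
import random

import boto3
from flask import Flask, jsonify

app = Flask(__name__)

# Illustrative bucket name -- the real application's bucket may differ.
BUCKET = "object-detector-demo-images"

s3 = boto3.client("s3")                    # stands in for the Storage Service
rekognition = boto3.client("rekognition")  # stands in for the Recognition Service


@app.route("/demo-object-detection")
def demo_object_detection():
    # Step 2: get the list of files stored in the S3 bucket.
    listing = s3.list_objects_v2(Bucket=BUCKET)
    keys = [obj["Key"] for obj in listing.get("Contents", [])]

    # Step 3: randomly select one image file for the demo.
    key = random.choice(keys)

    # Step 4: perform object detection on the selected image.
    response = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": BUCKET, "Name": key}},
        MaxLabels=10,
    )

    # Step 5: package the image reference and its labels as JSON;
    # the web interface (step 6) renders this payload.
    labels = [
        {"name": label["Name"], "confidence": label["Confidence"]}
        for label in response["Labels"]
    ]
    return jsonify({"image": key, "labels": labels})
```

Under these assumptions, a response might look like `{"image": "dog.jpg", "labels": [{"name": "Dog", "confidence": 98.1}]}`: the web interface would use the image key to load the picture and render the label list alongside it.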