Demo Overview
Experience a sample workflow of the Self-Checkout AI Copilot reference app, built with NVIDIA Metropolis Microservices.
The use case is to reduce mis-scans and improve the customer experience by giving existing kiosks visual recognition capabilities that can be adapted quickly to new products & designs with limited data and minimal model retraining.
What to Expect
Once the demo is launched, you'll be able to explore the app workflow, presented as a wizard. Each step contains brief instructions on what it does and how you can review or interact with it.
More specifically, the navigation tabs & steps are:
- [Start] Stream a video of a self-checkout scene to the app
- [Monitor] Review how the app's vision system identifies items, augmenting the barcode scanner. Initially, the app fails to identify items about half the time, since half of the items were not introduced during model training
- [Optimize] Refine the app's prediction quality by adding a couple of new products' visual signatures to the database (illustrated in the sketch after this list)
- [Monitor] Go back to monitor the app's operation to review the improvement
- Try repeating steps 3 & 4 to further optimize the app
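
The Optimize step works without retraining because recognition is driven by a database of visual signatures (embeddings): enrolling a product appends a few reference embeddings, and identification is a nearest-neighbor match against them. Below is a minimal, self-contained sketch of that idea; the function names, the 128-d embedding size, the stubbed random-projection "model", and the 0.8 threshold are illustrative assumptions, not the app's actual API.

```python
import numpy as np

# Stand-in for the app's TAO-trained recognition model: a fixed random
# projection mapping a 64x64 RGB crop to a unit-length 128-d embedding.
# (In the real app, embeddings would come from the DeepStream pipeline.)
rng = np.random.default_rng(0)
_projection = rng.standard_normal((3 * 64 * 64, 128))

def embed(crop: np.ndarray) -> np.ndarray:
    # Mean-center so unrelated crops land near-orthogonal in embedding space.
    vec = (crop.astype(np.float32).ravel() - 127.5) @ _projection
    return vec / np.linalg.norm(vec)

# "Optimize": enroll a few reference crops per product as visual signatures.
# Nothing is retrained -- enrolling only appends rows to the database.
signature_db: dict[str, list[np.ndarray]] = {}

def enroll(sku: str, crops: list[np.ndarray]) -> None:
    signature_db.setdefault(sku, []).extend(embed(c) for c in crops)

# "Monitor": identify an item crop by nearest-neighbor search over the
# enrolled signatures; below the similarity threshold the item is
# reported as unrecognized (a potential mis-scan).
def identify(crop: np.ndarray, threshold: float = 0.8) -> str | None:
    query = embed(crop)
    best_sku, best_sim = None, -1.0
    for sku, embeddings in signature_db.items():
        sim = max(float(query @ e) for e in embeddings)
        if sim > best_sim:
            best_sku, best_sim = sku, sim
    return best_sku if best_sim >= threshold else None

# Toy walkthrough: enroll two crops of a new product, then query with a
# slightly perturbed view of it and with an unseen item.
crops = [rng.integers(0, 256, (64, 64, 3)) for _ in range(2)]
enroll("SKU-1234", crops)
noisy_view = np.clip(crops[0] + rng.integers(-5, 6, crops[0].shape), 0, 255)
print(identify(noisy_view))                         # "SKU-1234"
print(identify(rng.integers(0, 256, (64, 64, 3))))  # None (not enrolled)
```

Because enrolling only grows the signature database, the Monitor/Optimize loop in steps 3 & 4 can be repeated as often as needed without touching the model weights.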
Technologies Used
Key technologies used in the app include:
- TAO Toolkit to train and optimize the AI models for inference
- DeepStream SDK to develop the real-time perception pipeline (a minimal pipeline sketch follows this list)
- Metropolis Microservices to provide modular, cloud-native building blocks to quickly build & deploy the full app
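
To give a concrete feel for the DeepStream layer, here is a minimal sketch of a perception pipeline built with GStreamer's Python bindings and DeepStream elements. It assumes a working DeepStream installation; `checkout.mp4` and `detector_config.txt` are hypothetical placeholders, and the reference app's actual pipeline is assembled from Metropolis Microservices rather than hand-built like this.

```python
import gi

gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# Decode a local video, batch frames with nvstreammux, run a TAO-trained
# detector through nvinfer, and draw detections with the on-screen display.
# File names and muxer settings here are placeholders.
pipeline = Gst.parse_launch(
    "filesrc location=checkout.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! "
    "m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=detector_config.txt ! "
    "nvvideoconvert ! nvdsosd ! nveglglessink"
)

# Play until end-of-stream or error, then clean up.
pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message::eos", lambda _bus, _msg: loop.quit())
bus.connect("message::error", lambda _bus, _msg: loop.quit())
try:
    loop.run()
finally:
    pipeline.set_state(Gst.State.NULL)
```

The `nvinfer` element loads the TAO-trained detector, while `nvstreammux` batches frames, so the same pipeline shape scales from one test video to multiple camera streams.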
Disclaimers
This is a standalone demo and not tied to an actual checkout kiosk. The scanner signal is simulated.