Comparative Analysis of Few-Shot Anomaly Detection System for Quality Control
Olowe, Adeniyi Babatunde (2025)
All rights reserved. This publication is copyrighted. You may download, display and print it for your own personal use. Commercial use is prohibited.
The permanent address of this publication is
https://urn.fi/URN:NBN:fi:amk-202504298302
Abstract
The deployment of artificial intelligence (AI) in manufacturing for anomaly detection is often hindered by the need for extensive labeled datasets and significant computational resources. Few-shot learning (FSL) has emerged as a promising alternative, allowing AI models to generalize from a limited number of training examples. This study investigates the effectiveness of zero-shot, one-shot, and two-shot learning approaches using the Contrastive Language-Image Pretraining (CLIP) model for automated visual inspection in manufacturing. A comparative evaluation was conducted using both the publicly available MVTec AD dataset and a custom-collected dataset of defective plastic spoons.
The research evaluated model performance across key metrics, including accuracy, precision, recall, F1-score, BLEU score, and Intersection over Union (IoU). The results show that adding even a few training examples substantially improves anomaly detection accuracy: two-shot learning performed best (74.8%), followed by one-shot (68.2%) and zero-shot (62.4%). Using task-specific data also improved model reliability, underscoring the need for well-matched datasets. The study further examines the trade-off between inference speed and accuracy, showing that while zero-shot learning requires no additional training, it is less accurate than approaches that use a small amount of training data. These insights help make AI-based defect detection with minimal data more affordable for real-world manufacturing.
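The zero-shot setting described above can be illustrated with a minimal sketch of the decision rule CLIP-style models use: the image and two text prompts (one describing a normal part, one describing a defective part) are embedded, cosine similarities are computed, and a softmax over the similarities yields a defect probability. The toy embedding vectors, prompt framing, and temperature value below are illustrative assumptions, not taken from the thesis; in practice the embeddings would come from CLIP's image and text encoders.

```python
import numpy as np

def zero_shot_anomaly_score(image_emb, normal_emb, defect_emb, temperature=100.0):
    """Score an image embedding against 'normal' and 'defective' text-prompt
    embeddings (CLIP-style zero-shot classification). Returns the probability
    assigned to the defect prompt; > 0.5 means the image is flagged anomalous."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Cosine similarity to each prompt, scaled by a temperature (CLIP uses a
    # learned logit scale; 100.0 here is an illustrative placeholder).
    logits = temperature * np.array([cos(image_emb, normal_emb),
                                     cos(image_emb, defect_emb)])
    logits -= logits.max()  # subtract max for numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return probs[1]

# Toy unit-ish vectors standing in for real encoder outputs:
img_emb = np.array([0.9, 0.1, 0.0])      # image resembling the "normal" prompt
normal_emb = np.array([1.0, 0.0, 0.0])   # embedding of e.g. "a photo of a normal plastic spoon"
defect_emb = np.array([0.0, 1.0, 0.0])   # embedding of e.g. "a photo of a defective plastic spoon"

score = zero_shot_anomaly_score(img_emb, normal_emb, defect_emb)
```

One-shot and two-shot variants extend this rule by replacing or augmenting the text-prompt embeddings with embeddings of one or two labeled support images, which is consistent with the accuracy gains the study reports as shots increase.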