
Development of a Machine Vision-Guided Irrigation Prototype for Organic Rhubarb Cultivation

Ehrs, Mikael (2026)

 
Open file
Ehrs_Mikael.pdf (5.854 MB)


All rights reserved. This publication is copyrighted. You may download, display and print it for your own personal use. Commercial use is prohibited.
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:amk-202604106006
Abstract
This thesis develops and validates a low-cost, Raspberry Pi-based machine vision prototype for irrigation in small-scale organic rhubarb cultivation. The objective is to detect rhubarb crowns reliably from a downward-facing camera under real field conditions and translate detections into a robust actuation signal (implemented as a GPIO-driven LED as a stand-in for a future solenoid valve), thereby demonstrating feasibility and identifying key deployment constraints for precision irrigation on small farms.
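As a rough illustration of the detection-to-actuation idea described above (all names and the threshold value here are hypothetical, not taken from the thesis), the mapping from per-frame detection confidences to a GPIO-style output could be sketched as:

```python
# Hypothetical sketch: turn per-frame detection confidences into a single
# actuation decision. The thesis drives a GPIO LED as a stand-in for a
# solenoid valve; the pin write is abstracted behind a callable here so
# the logic runs without Raspberry Pi hardware.

CONFIDENCE_THRESHOLD = 0.5  # assumed value, not taken from the thesis

def actuation_state(confidences, threshold=CONFIDENCE_THRESHOLD):
    """Return True (pin HIGH, i.e. irrigate) if any detection in the
    frame clears the confidence threshold."""
    return any(c >= threshold for c in confidences)

def drive_pin(state, write_pin=print):
    """Forward the decision to a pin-writing function; on a real Pi this
    could be e.g. RPi.GPIO.output or a gpiozero LED method."""
    write_pin(state)
```

On real hardware the only change would be swapping `write_pin` for an actual GPIO call; keeping the decision logic pure makes it easy to test off-device.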

An engineering-based method was used. Overhead images were collected on the case farm using a Raspberry Pi camera mounted to an ATV-drawn watering platform, capturing realistic variation in weed cover, plant size, shadows, and motion. The dataset was balanced and expanded using controlled augmentations, manually annotated with bounding boxes (single class: rhubarb), and split into train/validation/test sets. An initial TensorFlow Lite SSD–MobileNet approach proved impractical due to software and dependency constraints, motivating a switch to an Ultralytics YOLO-based pipeline trained on GPU in Google Colab. The final model was deployed on a Raspberry Pi 4 in a real-time inference loop (Picamera2 + OpenCV), with confidence filtering and simple multi-frame debouncing to stabilize the GPIO output.
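The "simple multi-frame debouncing" mentioned above could be sketched as follows (a minimal illustration; the class name and frame count are assumptions, not the thesis implementation): the output state changes only after several consecutive frames agree, suppressing single-frame flicker in the detector.

```python
# Hypothetical sketch of multi-frame debouncing for a detection-driven
# GPIO output: the stable state flips only after N consecutive frames
# disagree with it, filtering out one-frame detection glitches.

class FrameDebouncer:
    def __init__(self, frames_required=3):
        self.frames_required = frames_required  # assumed value
        self.state = False    # current stable output state
        self._streak = 0      # consecutive frames disagreeing with state

    def update(self, detected: bool) -> bool:
        """Feed one frame's detection result; return the stable output."""
        if detected != self.state:
            self._streak += 1
            if self._streak >= self.frames_required:
                self.state = detected
                self._streak = 0
        else:
            self._streak = 0
        return self.state
```

In the real loop, `detected` would come from thresholding YOLO confidences on each frame, and the returned state would drive the GPIO pin; requiring several agreeing frames is what trades a little latency for a stable actuation signal.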

The baseline YOLO11n detector achieved strong validation performance (precision ≈0.90, recall ≈0.86, mAP@0.5 ≈0.939, mAP@0.5:0.95 ≈0.708; derived F1 ≈0.88). However, pronounced domain shift was observed: indoor or non-field textures could trigger moderate to high rhubarb confidence. Adding diverse out-of-domain negative images (Places365) and retraining substantially reduced high-confidence false positives while preserving field utility. On the Raspberry Pi 4 (CPU-only, with on-screen visualization), runtime performance was in the low single-digit frames-per-second range; with multi-frame debouncing, end-to-end indicator response was ~1 second at slow driving pace, and tests with printed pictures showed fairly reliable triggering for most mature plants. Field tests remain to be done, but the solution shows promise.
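The derived F1 figure can be checked directly from the reported precision and recall, since F1 is their harmonic mean:

```python
# Check the derived F1 score from the validation metrics reported above.
precision = 0.90
recall = 0.86

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))  # → 0.88, consistent with the derived F1 above
```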
Collections
  • Opinnäytetyöt (Avoin kokoelma)