How to build a Raspberry Pi alarm that pulverizes porch pirates


These days everyone has packages delivered on a regular basis, and if you don’t notice a delivery right away, a “porch pirate” might steal your stuff. According to C+R Research, 43 percent of Americans had at least one package stolen in 2020.

To solve this problem, I built a Raspberry Pi-powered system that uses a camera and machine learning to determine whether a package has been stolen from your door. When it detects a theft, it can set off an alarm, turn on your sprinklers to soak the offender, or even blast thieves with flour. Tom’s Hardware covered my project in a previous article, but today I will show you how to build it yourself.

If you’ve never worked with machine learning before, this should be an easy enough project to get your feet wet. We will use a type of computer vision called image classification to determine whether or not there is a package at your door. To train it, we’ll use a tool called Google Cloud AutoML, which takes much of the complexity out of training a machine learning model.

What you will need for this project

The initial setup

To get started, we’ll need to configure a few things to collect data for our machine learning model.

1. Configure your Raspberry Pi. If you don’t know how, check out our story on how to set up your Raspberry Pi for the first time or how to set up a headless Raspberry Pi (without monitor or keyboard).

2. Plug in your Pi, install the basic dependencies and clone the repository to your Raspberry Pi.

cd ~/
sudo apt-get update && sudo apt-get -y install git python3-pip && python3 -m pip install virtualenv
git clone

3. Change into the training directory and set up a virtual environment.

cd package_theft_preventor/training
python3 -m virtualenv -p python3 env

4. Activate your virtual environment and install the Python requirements.

source env/bin/activate
pip install -r requirements.txt

5. Configure your RTSP camera and point it at your front door. If you are using a Wyze Cam V2, flash the custom RTSP firmware (instructions available here).

(Image credit: Tom’s Hardware)

6. Get the RTSP URL from your camera settings and set it as the stream URL in the training/Makefile.

nano Makefile

# update stream_url with your stream URL

7. Run the code to test the image collection. You should start to see images appear in the data directory.

make no-package-images

8. Use the code to collect images of your front door at various times of the day. The code takes a photo every 10 seconds, which should account for various lighting and weather conditions. You will need around 1,000 photos without a package to get started.

# take photos of your door without packages
make no-package-images

(Image credit: Tom’s Hardware)
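Under the hood, an image-collection target like this can be implemented as a short OpenCV loop. This is only a sketch of the idea, not the repository’s exact code; the stream URL, directory layout, and 10-second interval are assumptions based on the text:

```python
import os
import time
from datetime import datetime

def frame_path(label, data_dir="data"):
    """Build a timestamped filename like data/no_package/20210101-120000.jpg."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    return os.path.join(data_dir, label, f"{stamp}.jpg")

def collect(stream_url, label, interval=10):
    """Grab one frame from the RTSP stream every `interval` seconds."""
    import cv2  # OpenCV, only needed when actually capturing
    cap = cv2.VideoCapture(stream_url)
    os.makedirs(os.path.join("data", label), exist_ok=True)
    while True:
        ok, frame = cap.read()
        if ok:
            cv2.imwrite(frame_path(label), frame)
        time.sleep(interval)

if __name__ == "__main__":
    # Placeholder URL, use your own camera's RTSP stream
    collect("rtsp://username:password@camera_host/endpoint", "no_package")
```

Labeling by folder name (package vs. no_package) keeps the later CSV-generation step trivial.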

9. Once you have gathered enough images without packages, we will need to start taking pictures with various packages. Use a variety of boxes and envelopes in different sizes, in different positions and orientations around your door. When you have a good number of photos of various packages and an equal number of your door without packages at different times of the day, you are ready to begin training.

make package-images

(Image credit: Tom’s Hardware)

10. Browse the training/data directory and delete any photos that didn’t turn out well or may not be good for training.

11. Create a Google Cloud Storage bucket to store your images. You will need a Google Cloud account, and the gcloud command line tool installed on your local machine. You will also need a Google Cloud Project if this is your first time using Google Cloud.

# gsutil is installed with gcloud
gsutil mb gs://your_bucket_name_here -p your-project-name-here -l us-central1

12. Define the name of your bucket as GCS_BASE in the training/Makefile file.

nano Makefile
# edit GCS_BASE=gs://your_bucket_name_here

13. Run the make generate-csv command to generate the CSV required for the training.

make generate-csv
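AutoML Vision expects a CSV with one row per image: the image’s Cloud Storage path and its label. A sketch of what generate-csv might do (the bucket name and folder layout here are assumptions, matching the GCS_BASE variable above):

```python
import csv
import os

GCS_BASE = "gs://your_bucket_name_here"  # assumption: matches the Makefile variable

def build_rows(data_dir="data"):
    """One row per image: the future GCS path plus its label (the folder name)."""
    rows = []
    for label in sorted(os.listdir(data_dir)):
        folder = os.path.join(data_dir, label)
        if not os.path.isdir(folder):
            continue
        for name in sorted(os.listdir(folder)):
            if name.lower().endswith(".jpg"):
                rows.append([f"{GCS_BASE}/data/{label}/{name}", label])
    return rows

if __name__ == "__main__":
    with open("training_data.csv", "w", newline="") as f:
        csv.writer(f).writerows(build_rows())
```

The paths in the CSV must match where step 15 uploads the images, which is why the data folder is copied to the bucket as-is.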

14. Upload the generated CSV to your new bucket.

gsutil cp training_data.csv gs://your_bucket_name_here

15. Upload your images to your new bucket.

gsutil cp -r data gs://your_bucket_name_here/data

16. Access the Google Cloud AutoML Vision dashboard in the Google Cloud Console and get started with AutoML Vision.

(Image credit: Tom’s Hardware)

17. Click “New dataset” and choose a name for the dataset. Select “Single-Label Classification” as the model objective, and click “Create dataset”.

(Image credit: Tom’s Hardware)

18. Choose “Select a CSV file on Cloud Storage”, provide the Cloud Storage path to the CSV file you uploaded in the box below, then click Continue.


(Image credit: Tom’s Hardware)

19. Google Cloud will take you back to the import screen. After approximately 10 minutes, you will automatically be taken to the “Images” section of your dataset. Check that all your images were imported and are correctly labeled as package or no_package.

(Image credit: Tom’s Hardware)

20. In the “Train” tab, click “Train new model” and choose a name. Then choose “Edge” so the model can be downloaded from Google Cloud once training completes, and click Continue.

(Image credit: Tom’s Hardware)

21. Choose “Faster predictions” for model optimization, since we will be running on a Raspberry Pi with limited computing power. Then click Continue.

(Image credit: Tom’s Hardware)

22. Accept the default recommendation for the node hour budget, but note that you will be charged for the hours these machines spend training your model. At the time of this writing, the cost is approximately $3.15 per node-hour, so this model should cost a little over $12 to train.

23. Click the Start Training button. You will receive an email when your training is complete.

24. Once training is complete, go to the “Test & Use” tab and download the model as a TF Lite file. As the destination, choose the bucket where you saved your training data, then download it with the following command. It will download a dict.txt, a model.tflite and a tflite_metadata.json file. You now have a machine learning model trained to identify whether or not there is a package at your doorstep.

(Image credit: Tom’s Hardware)
gsutil cp -r gs://your_bucket_name_here/model-export/ ./
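Once the exported files are on the Pi, the model can be loaded with the TensorFlow Lite interpreter. This is a sketch of the idea, not the repository’s exact code; the file paths and the 0.5 confidence threshold are assumptions:

```python
def pick_label(scores, labels, threshold=0.5):
    """Return the highest-scoring label, or None if nothing clears the threshold."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return labels[best] if scores[best] >= threshold else None

if __name__ == "__main__":
    import numpy as np
    from tflite_runtime.interpreter import Interpreter  # pip install tflite-runtime

    # dict.txt holds one label per line (package / no_package)
    labels = open("src/models/dict.txt").read().splitlines()
    interp = Interpreter(model_path="src/models/model.tflite")
    interp.allocate_tensors()
    inp = interp.get_input_details()[0]
    out = interp.get_output_details()[0]
    # In the real system this would be a camera frame resized to the
    # model's input shape; a zero tensor stands in for it here.
    frame = np.zeros(inp["shape"], dtype=inp["dtype"])
    interp.set_tensor(inp["index"], frame)
    interp.invoke()
    scores = interp.get_tensor(out["index"])[0]
    print(pick_label(scores, labels))
```

Note that AutoML Edge models are often quantized, so the output scores may be integers rather than floats; tflite_metadata.json describes the exact input and output format.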

Raspberry Pi package alarm system configuration

1. Go to the root of the repository and run the install command to install all of the low-level and Python requirements the project needs.

cd ~/package_theft_preventor
make install

2. Copy the downloaded model files from your computer to your Raspberry Pi, replacing the existing files in the src/models directory.

# From your desktop machine (assumes your Pi is reachable as raspberrypi.local)
scp training/model-export/dict.txt pi@raspberrypi.local:/home/pi/package_theft_preventor/src/models/dict.txt
scp training/model-export/tflite_metadata.json pi@raspberrypi.local:/home/pi/package_theft_preventor/src/models/tflite_metadata.json
scp training/model-export/model.tflite pi@raspberrypi.local:/home/pi/package_theft_preventor/src/models/model.tflite

3. Set STREAM_URL in the Makefile to the URL of the RTSP stream from the camera pointed at your door. This is the same URL you used when collecting training images.

nano Makefile
# Edit STREAM_URL=rtsp://username:password@camera_host/endpoint

4. Connect the VCC and ground pins of your relay board to your Raspberry Pi, using physical pin 4 (5V) and pin 6 (ground) on the Pi.

(Image credit: Tom’s Hardware)

5. Connect the relay board’s data pins to the following Raspberry Pi BCM pins. You can change the order; just keep track of which channel each pin is connected to for the wiring that follows.

Relay Pin 1 = Raspberry Pi BCM Pin 27 (Sprinkler Pin)
Relay Pin 2 = Raspberry Pi BCM Pin 17 (Siren Pin)
Relay Pin 3 = Raspberry Pi BCM Pin 22 (Air Solenoid Pin)

(Image credit: Tom’s Hardware)
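In code, that wiring can be captured as a small pin map. A minimal sketch using RPi.GPIO, assuming an active-low relay board (most common boards switch on LOW); the channel names are mine, not necessarily the repository’s:

```python
# BCM pin assignments from the wiring above
PINS = {"sprinkler": 27, "siren": 17, "air_solenoid": 22}

def fire(channel, seconds=5):
    """Energize one relay channel for a few seconds (assumes active-low relays)."""
    import time
    import RPi.GPIO as GPIO  # only available on the Raspberry Pi

    pin = PINS[channel]
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.HIGH)  # HIGH = relay off
    GPIO.output(pin, GPIO.LOW)                    # LOW = relay on
    time.sleep(seconds)
    GPIO.output(pin, GPIO.HIGH)
    GPIO.cleanup(pin)

if __name__ == "__main__":
    fire("siren", seconds=2)
```

If your relay board is active-high instead, swap the HIGH and LOW values.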

Note: For my project, I used a combination of a 12v siren, 12v sprinkler controller, and 12v air solenoid to trigger a variety of alarms. For the purposes of this tutorial, we’ll only be connecting a siren, and I don’t recommend connecting anything other than that for actual use. If you are connecting other modules, follow the same three wiring steps below.

6. Plug the positive end of your 12 volt power supply into the common port of relay channel 2.

(Image credit: Tom’s Hardware)

7. Connect your 12v siren to the normally open port of relay channel 2.

(Image credit: Tom’s Hardware)

8. Connect the other end of the siren directly to the ground of the 12v power supply.

9. If you are connecting a sprinkler, wire in your 12v sprinkler controller following the same steps above with relay channel 1: connect the common port of relay channel 1 to the positive end of the power supply, one end of the sprinkler controller to the normally open port, and the other end of the sprinkler controller to ground. The code controls the sprinkler and siren independently.

(Image credit: Tom’s Hardware)

10. If you are connecting a sprinkler, connect the supply end to a pressurized water source and turn on the tap. Connect the other end to a hose leading to a sprinkler.

(Image credit: Tom’s Hardware)

11. Connect your 12v power supply to a power source.

12. Add a photo of your face to the src/faces directory as a .jpg file (optional). This allows the system to periodically check whether you are in the area and disarm itself accordingly.
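A rough sketch of how such a face check can work using the face_recognition library (the file names here are placeholders, and the repository’s actual implementation may differ):

```python
def should_disarm(distances, tolerance=0.6):
    """Disarm if any known face is within the match tolerance."""
    return any(d <= tolerance for d in distances)

if __name__ == "__main__":
    import face_recognition  # pip install face_recognition

    # Encode the reference photo once at startup (placeholder filename)
    known = face_recognition.face_encodings(
        face_recognition.load_image_file("src/faces/me.jpg"))[0]

    # Compare every face found in the current camera frame against it
    frame = face_recognition.load_image_file("current_frame.jpg")
    for enc in face_recognition.face_encodings(frame):
        if should_disarm(face_recognition.face_distance([known], enc)):
            print("Owner detected, disarming")
```

A tolerance of 0.6 is the library’s usual default; lower it if you get false matches.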

13. Start the system and test it. You will see a variety of log instructions showing the current state of the system:

make run
# Starting stream - The system is connecting to the camera
# System watching - The system is classifying images of your porch as package/no_package right now
# System Armed - A package has been definitively detected, if it is removed the alarm will go off
# Activating Alarm - The package has been removed, activating the alarm

(Image credit: Tom’s Hardware)
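The log states above suggest a simple state machine. A minimal sketch, assuming a single classified frame is enough to change state (the real system likely requires several consecutive detections before a package counts as “definitively” present):

```python
WATCHING, ARMED = "watching", "armed"

def step(state, label):
    """Advance the alarm state machine on each classified frame.
    Returns (new_state, alarm_triggered)."""
    if state == WATCHING and label == "package":
        return ARMED, False      # a package has arrived: arm the system
    if state == ARMED and label == "no_package":
        return WATCHING, True    # the package is gone: sound the alarm
    return state, False          # no change

if __name__ == "__main__":
    state = WATCHING
    for label in ["no_package", "package", "package", "no_package"]:
        state, alarm = step(state, label)
        if alarm:
            print("Activating Alarm")
```

Debouncing over several frames prevents a passing pedestrian or a lighting change from arming or triggering the system spuriously.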

14. Invite your friends to steal packages from you to test your new anti-theft alarm. Mine had a lot of fun doing it.

(Image credit: Tom’s Hardware)

Once you’re happy with it, your Raspberry Pi package theft detection system should be ready for duty. However, be careful: if it doesn’t work perfectly, you could end up triggering the alarm on an innocent delivery person instead of a thief.
