Monitor table occupancy status at Beo Crescent Hawker Centre based on crockery/receptacles left on tables and the presence of patrons.
- Tablevision v1 [deprecated]: Clarifai
- Tablevision v2: Google Cloud AutoML Vision
There are three environments to run Tablevision:

- "IoT": Raspberry Pi
- "Initialiser": Device with a GUI (laptop, or a Pi connected to a screen)
- "Processer": Logic-processing endpoint
- Raspberry Pi – `tablevision_sender`: captures the image from the Raspberry Pi's main camera and sends it to our API endpoint, `tablevision_processer`.
- Local device (our macOS) – `tablevision_initialiser`: maps out each table from an image using a GUI. Can be run on the Raspberry Pi as well, if a GUI is enabled and a screen is connected to the Pi.
- API endpoint on the cloud (our EC2) – `tablevision_processer`: endpoint the Raspberry Pi sends images to. Processes each frame and sends it to the Google Cloud AutoML API with our custom Tablevision model, then receives the object prediction results and processes the logic by updating our NoSQL database.
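The per-frame logic in `tablevision_processer` can be pictured as mapping prediction results onto the table regions defined at initialisation. Below is a minimal illustrative sketch, not the actual implementation: it assumes table corners are the normalised `[x, y]` points produced by `imagecrop.py`, that predictions arrive as label/bounding-box pairs in the same normalised coordinates, and the label names (`"person"`, `"crockery"`) and status strings are made up for this example.

```python
# Hypothetical sketch of per-frame occupancy logic. A prediction is counted
# against a table if its box centre falls inside the table's bounding region.

def table_bounds(corners):
    """Corners are four [x, y] points (normalised, as from imagecrop.py)."""
    xs = [p[0] for p in corners]
    ys = [p[1] for p in corners]
    return min(xs), min(ys), max(xs), max(ys)

def box_centre(box):
    x0, y0, x1, y1 = box
    return (x0 + x1) / 2, (y0 + y1) / 2

def classify_tables(tables, predictions):
    """tables: {table_id: corners}; predictions: [(label, (x0, y0, x1, y1))].

    Returns {table_id: status}, where status reflects what was detected
    inside the table region (labels/statuses are illustrative).
    """
    statuses = {}
    for table_id, corners in tables.items():
        x0, y0, x1, y1 = table_bounds(corners)
        labels = set()
        for label, box in predictions:
            cx, cy = box_centre(box)
            if x0 <= cx <= x1 and y0 <= cy <= y1:
                labels.add(label)
        if "person" in labels:
            statuses[table_id] = "occupied"
        elif "crockery" in labels:
            statuses[table_id] = "uncleared"  # crockery left behind
        else:
            statuses[table_id] = "vacant"
    return statuses
```

The resulting `{table_id: status}` map is the kind of record the processer would write to the NoSQL database on each frame.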
1. Check that the EC2 instance is running.
2. On the Pi, capture a still image using `raspistill -o initialise.jpg`.
3. Transfer `initialise.jpg` to your local device. Skip this step if the Pi is connected to a monitor and a GUI is enabled.
4. In the `tablevision_initialiser` folder, specify your image name in the `imagecrop.py` file.
5. Get the coordinates of the tables using `imagecrop.py`.
6. In the same folder, run `initialise.py` and pass the parameters provided by `imagecrop.py` as sysargs. See the example below.
7. Ensure `tablevision_sender/tablevision.py` is running on the Raspberry Pi.

Step 6 example:

```shell
python3 initialise.py '{"46": [[0.03948170731707307, 0.09692911255411243], [0.4337906504065041, 0.09692911255411243], [0.03948170731707307, 0.4648944805194804], [0.4337906504065041, 0.4648944805194804]]}'
```
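The sysarg above is a JSON object mapping each table ID (here `"46"`) to its four corner points in normalised image coordinates. A sketch of how such an argument could be parsed and scaled back to pixel coordinates; the helper name and the example frame size are assumptions for illustration, not taken from `initialise.py` itself:

```python
import json
import sys

def parse_table_arg(arg, width, height):
    """Parse a {table_id: [[x, y], ...]} JSON sysarg of normalised corner
    points, converting each corner to integer pixel coordinates."""
    tables = json.loads(arg)
    return {
        table_id: [(round(x * width), round(y * height)) for x, y in corners]
        for table_id, corners in tables.items()
    }

if __name__ == "__main__" and len(sys.argv) > 1:
    # e.g. python3 parse_tables.py '{"46": [[0.0394..., 0.0969...], ...]}'
    # (1920x1080 is just an assumed capture resolution)
    print(parse_table_arg(sys.argv[1], width=1920, height=1080))
```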