# Tablevision

Monitor table occupancy status at Beo Crescent Hawker Centre, based on the crockery/receptacles left on tables and on patron presence.

## APIs used

- Tablevision v1 (deprecated): Clarifai
- Tablevision v2: Google Cloud AutoML Vision

## Tablevision device environments

There are three environments in which Tablevision runs:

- "IoT": Raspberry Pi
- "Initialiser": a device with a GUI (a laptop, or a Pi connected to a screen)
- "Processer": the logic-processing endpoint

- Raspberry Pi – `tablevision_sender`: captures an image from the Raspberry Pi's main camera and sends it to our API endpoint, `tablevision_processer`.
- Local device (our macOS machine) – `tablevision_initialiser`: maps out each table from an image using a GUI. Can also run on the Raspberry Pi, if a GUI is enabled and a screen is connected to the Pi.
- API endpoint in the cloud (our EC2 instance) – `tablevision_processer`: receives each frame sent by the Raspberry Pi, forwards it to the Google Cloud AutoML API with our custom Tablevision model, then receives the object prediction results and processes the logic by updating our NoSQL database.
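As a rough sketch of the processer's status logic (the function name, label strings, and status rules below are illustrative assumptions, not the actual Tablevision model's labels or implementation):

```python
# Hypothetical sketch of how the processer might map detections for one
# table to an occupancy status. Label names ("person", "crockery") and
# the rules are assumptions for illustration only.

def table_status(detections):
    """Return an occupancy status for one table.

    detections: list of label strings predicted inside the table's region.
    """
    if "person" in detections:
        return "occupied"   # a patron is at the table
    if "crockery" in detections:
        return "hogged"     # crockery left behind, table not yet cleared
    return "vacant"         # nothing detected on the table
```

A status like this, computed per table per frame, is what would be written to the NoSQL database.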

## Instructions

1. Check that the EC2 instance is running.
2. On the Pi, capture a still image using `raspistill -o initialise.jpg`.
3. Transfer `initialise.jpg` to your local device. Skip this step if the Pi is connected to a monitor and a GUI is enabled.
4. In the `tablevision_initialiser` folder, specify your image name in the `imagecrop.py` file.
5. Get the coordinates of the tables using `imagecrop.py`.
6. In the same folder, run `initialise.py`, passing the parameters provided by `imagecrop.py` as command-line arguments. See the example below.
7. Ensure `tablevision_sender/tablevision.py` is running on the Raspberry Pi.
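The sender side (step 7) can be sketched roughly as below; the payload shape and field names are assumptions for illustration, not the actual request format expected by `tablevision_processer`:

```python
import base64
import json

# Hypothetical payload builder for tablevision_sender. The real request
# format used by tablevision_processer may differ; the "device" and
# "image" field names are assumed.

def build_payload(image_bytes, device_id):
    """Serialise a captured frame into a JSON payload for the endpoint."""
    return json.dumps({
        "device": device_id,
        "image": base64.b64encode(image_bytes).decode("ascii"),
    })

def decode_payload(payload):
    """Recover the device ID and raw image bytes on the endpoint side."""
    data = json.loads(payload)
    return data["device"], base64.b64decode(data["image"])
```

On the Pi, a payload like this would be built after each capture and POSTed to the processer endpoint.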

Step 6 example:

```shell
python3 initialise.py '{"46": [[0.03948170731707307, 0.09692911255411243], [0.4337906504065041, 0.09692911255411243], [0.03948170731707307, 0.4648944805194804], [0.4337906504065041, 0.4648944805194804]]}'