Chapter 2 – Sensor Fusion and Coding Structures
2.3 – Camera
Self-driving cars need more than obstacle detection sensors. They require an understanding of the surrounding environment to navigate. Humans rely on eyes and ears to detect potential dangers ahead, such as pedestrians, cyclists, and other vehicles. Self-driving cars also need this capability.
This module teaches you how to access Zumi’s camera, take pictures, and display videos. You will learn to import camera libraries containing the code to take, modify, and display images.
How Does Zumi See?
Zumi "sees" with its camera. Beyond simply detecting obstacles, the camera lets Zumi capture images of its surroundings so it can detect and recognize the different elements in the environment, such as pedestrians, vehicles, and other objects.
Camera Blocks
These are the blocks available in the `Camera` menu. Always include the `import camera` block in every program that uses the camera.
Take a selfie
The first step is to use Zumi’s camera to take a picture and display it on the screen. You will need to import the camera and vision libraries before running the code.
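The picture-taking flow can be sketched in Python. The call names (`start_camera()`, `capture()`, `close()`) follow the lesson; the `StandInCamera` class below is a hypothetical stand-in so the flow runs without Zumi hardware or the Zumi libraries installed.

```python
# Sketch of Zumi's picture-taking flow, using a stand-in camera class.
# On a real Zumi, you would import the actual camera library instead.

class StandInCamera:
    """Hypothetical stand-in mimicking the camera calls used in the lesson."""

    def __init__(self):
        self.running = False

    def start_camera(self):
        # Turn the camera on before taking any pictures.
        self.running = True

    def capture(self):
        # Take one picture. Here we return a tiny 2x2 grayscale
        # placeholder "image" (0 = black, 255 = white).
        return [[0, 128], [128, 255]]

    def close(self):
        # Always turn the camera off when you are done.
        self.running = False


camera = StandInCamera()
camera.start_camera()
image = camera.capture()
camera.close()
print(image)
```

The important part is the order of the calls: start the camera, capture, then close it when finished.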
Show the image
After taking a picture, you can view it on Zumi by running the following code.
Remember to turn off the camera with `close()`. If you do not run `close()` and try to run `start_camera()` again, you will get an error. If this occurs, save and close the notebook to force the camera to turn off.
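Because a forgotten `close()` leaves the camera locked, a defensive pattern is to wrap the capture in `try`/`finally` so the camera is released even if an error occurs mid-capture. `FakeCamera` below is a hypothetical stand-in for Zumi's camera class so the example runs anywhere.

```python
# try/finally guarantees close() runs even if capture() raises an error,
# avoiding the "camera already started" problem described above.

class FakeCamera:
    """Hypothetical stand-in that mimics the camera-busy error."""

    def __init__(self):
        self.is_open = False

    def start_camera(self):
        if self.is_open:
            # Mirrors the error you get when start_camera() is called twice.
            raise RuntimeError("camera is already running")
        self.is_open = True

    def capture(self):
        return [[255] * 3] * 3  # placeholder 3x3 white image

    def close(self):
        self.is_open = False


camera = FakeCamera()
camera.start_camera()
try:
    image = camera.capture()
finally:
    camera.close()  # runs no matter what happened above

print(camera.is_open)  # False: safe to call start_camera() again
```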
How does Zumi see images?
Zumi actually sees an image as a grid of numbers, where each number (or group of three numbers, for a color image) describes one pixel. Printing an image shows these numbers directly.
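To make "images are numbers" concrete, here is a tiny 3x3 grayscale image stored as a nested list, where 0 is black and 255 is white. Real camera frames are much larger arrays (and color frames carry three numbers per pixel), but the idea is the same.

```python
# A 3x3 grayscale "image": each number is one pixel's brightness.
image = [
    [  0, 128, 255],
    [128, 255, 128],
    [255, 128,   0],
]

for row in image:
    print(row)

# Changing a number changes a pixel.
image[1][1] = 0      # darken the center pixel to black
print(image[1][1])
```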
Taking videos
Zumi can also take videos. A video is simply many pictures, or "frames", shown rapidly one after another. You can make a simple video with a for loop that takes a picture and shows it 10 times, as in the Blockly code below.
Conclusion
In this module, you learned how to use Zumi's camera and saw why it is a powerful tool for understanding the environment.
Demo Video