As a good example of edge computing, check out this new tutorial about live object detection with depth estimation thanks to stereo vision with a Raspberry Pi Compute Module.
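To give a sense of what the depth-estimation step involves, here is a minimal sketch using OpenCV's StereoBM block matcher on a rectified stereo pair. This is not the tutorial's actual code: the file names, focal length, and baseline are placeholder assumptions.

```python
# Minimal stereo depth sketch with OpenCV's StereoBM block matcher.
import cv2
import numpy as np

# Load a rectified left/right image pair (file names are placeholders).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matcher: numDisparities must be a multiple of 16, blockSize odd.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
# compute() returns fixed-point disparities scaled by 16.
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# With calibrated cameras: depth = focal_length_px * baseline / disparity.
focal_px = 700.0   # focal length in pixels (assumed calibration value)
baseline_m = 0.06  # lens-to-lens distance in metres (assumed)
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]
print("median scene depth: %.2f m" % np.median(depth_m[valid]))
```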

As announced in the previous post, a new raspicam driver that adds some AI to UV4L has been released!
Here is a tutorial about how to quickly configure UV4L so that a robot performs real-time object detection, tracks objects by controlling pan/tilt servos, and streams over the web with WebRTC, all at the same time (a rough sketch of the tracking step follows below).
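As an illustration of the tracking step only, the following sketch nudges pan/tilt servos so that a detection stays centred in the frame, using the pigpio library. The GPIO pins, frame size, gain, and sign conventions are assumptions for this example, not the tutorial's actual configuration.

```python
# Proportional pan/tilt tracking sketch: nudge the servos so the detected
# object's centre converges on the frame centre.
import pigpio

PAN_GPIO, TILT_GPIO = 17, 18    # assumed servo control pins
FRAME_W, FRAME_H = 640, 480     # assumed frame size
GAIN = 0.6                      # proportional gain (tune per servo)

pi = pigpio.pi()                # connect to the local pigpio daemon
pan_us, tilt_us = 1500, 1500    # pulse widths in microseconds (centred)

def track(bbox):
    """Move servos toward a detection's bounding box (x, y, w, h)."""
    global pan_us, tilt_us
    x, y, w, h = bbox
    err_x = (x + w / 2) - FRAME_W / 2   # horizontal offset from centre
    err_y = (y + h / 2) - FRAME_H / 2   # vertical offset from centre
    # Servo direction signs are assumptions; flip them for your mount.
    pan_us = max(500, min(2500, pan_us - GAIN * err_x))
    tilt_us = max(500, min(2500, tilt_us + GAIN * err_y))
    pi.set_servo_pulsewidth(PAN_GPIO, pan_us)
    pi.set_servo_pulsewidth(TILT_GPIO, tilt_us)
```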
Below is another example of live object detection where bounding boxes, labels and confidence scores are overlaid in real time on the high-resolution video (tracking has been disabled in this case).
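For illustration, here is a minimal sketch of how such an overlay can be drawn on a frame with OpenCV; the detection tuple format is an assumption for this example, not the format the driver actually uses.

```python
# Overlay bounding boxes, labels and confidence scores on a frame.
import cv2

def draw_detections(frame, detections):
    """detections: list of (label, confidence, (x, y, w, h)) tuples."""
    for label, conf, (x, y, w, h) in detections:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        text = "%s %.0f%%" % (label, conf * 100)
        # Keep the label inside the frame even for boxes near the top edge.
        cv2.putText(frame, text, (x, max(y - 5, 12)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return frame
```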
webrtcH4cKS has recently posted a very interesting article about a computer vision project that uses UV4L for real-time streaming of video and data over WebRTC from a Raspberry Pi Zero connected to an AIY Vision board. The board embeds a Vision Processing Unit (VPU) chip that runs TensorFlow image-processing graphs very efficiently.
The article is divided into two parts: