[2] In group II, we are currently working with OpenCV to detect motion, quantify it, and execute actions under certain conditions. For the moment, the Python code works well on our computers (it detects motion the way we intended). The next step is to put that code on the NAO so it can detect what we want.
This article aims to explain the process we followed to understand how OpenCV works and to detect an amount of motion in an image.
I. Discover OpenCV
We discovered the basics of OpenCV by speaking with another group that uses it to detect colors. In a few minutes, we were able to find code on the Internet that reads the input of the computer camera and displays it in a window.
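Here is a minimal sketch of that kind of snippet (not the exact code we found, and assuming the default camera is at index 0): it opens the camera with OpenCV and shows each frame in a window until the "q" key is pressed.

import cv2

capture = cv2.VideoCapture(0)  # 0 = default computer camera

while True:
    grabbed, frame = capture.read()
    if not grabbed:
        break
    cv2.imshow("Camera", frame)
    # Stop the loop when the "q" key is pressed
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

capture.release()
cv2.destroyAllWindows()

This code helped us understand a little how OpenCV works: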
- It works with frames: each frame is an image we can manipulate and compare with others;
- It natively provides a lot of image processing tools that we can use to achieve what we want;
- It can detect by itself the movement between two images.
By searching a bit more on the Internet, we were able to build a movement tracker that draws red rectangles around whatever moves, in real time.
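A rough sketch in the spirit of that tracker (the exact code we used may differ; the return signature of findContours assumes OpenCV 4): it compares each frame with the previous one and draws red rectangles around the regions that changed.

import cv2

capture = cv2.VideoCapture(0)
previous_gray = None

while True:
    grabbed, frame = capture.read()
    if not grabbed:
        break

    # Grayscale + blur make the comparison less sensitive to noise
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)

    if previous_gray is None:
        previous_gray = gray
        continue

    # Difference between the two frames, then keep only the strong changes
    delta = cv2.absdiff(previous_gray, gray)
    thresh = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)[1]
    thresh = cv2.dilate(thresh, None, iterations=2)

    # One bounding rectangle per moving region
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)  # red in BGR

    cv2.imshow("Motion", frame)
    previous_gray = gray
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

capture.release()
cv2.destroyAllWindows()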
II. Quantify the amount of movement
Once we were able to detect movement, the next issue was to quantify that movement in order to decide whether or not to trigger our action.
a. First try: percentage of the image in movement
Our first implementation used the percentage of the image covered by red rectangles: if more than 30% of the image was in movement, we triggered the actions (a small sketch of the idea follows the list). However, this method wasn’t great for two main reasons:
- We didn’t have a notion of time, so a single big movement could be enough to trigger the system;
- Every single movement, even the smallest ones such as a slight face movement, was tracked, so the percentage was unreliable.
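A small helper illustrating that first idea (names and the trigger call are hypothetical; overlapping rectangles are simply summed, so the value is an approximation):

def motion_percentage(rectangles, frame_width, frame_height):
    """rectangles is a list of (x, y, w, h) bounding boxes around motion."""
    moving_area = sum(w * h for (_, _, w, h) in rectangles)
    return 100.0 * moving_area / (frame_width * frame_height)

# if motion_percentage(rects, width, height) > 30:
#     trigger_action()  # hypothetical action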
b. Improve the method by adding time and ignoring some rectangles
In the second and third implementations, we worked on these problems and solved them by adding a notion of time and by ignoring small rectangles. Now, the motion had to be detected for a certain number of consecutive frames to trigger the event.
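A sketch of what those versions looked like (the thresholds are illustrative, not the exact values we used): rectangles below a minimum area are ignored, and the event fires only after the motion has been seen on enough consecutive frames.

MIN_RECT_AREA = 500          # ignore tiny movements (e.g. a slight face movement)
MOTION_THRESHOLD = 30.0      # percentage of the image that must be moving
FRAMES_REQUIRED = 15         # motion must persist for this many consecutive frames

consecutive_motion_frames = 0

def update(rectangles, frame_width, frame_height):
    """Called once per frame; returns True when the event should be triggered."""
    global consecutive_motion_frames
    # Keep only the rectangles that are large enough
    big_rects = [(x, y, w, h) for (x, y, w, h) in rectangles if w * h >= MIN_RECT_AREA]
    moving_area = sum(w * h for (_, _, w, h) in big_rects)
    percentage = 100.0 * moving_area / (frame_width * frame_height)

    if percentage > MOTION_THRESHOLD:
        consecutive_motion_frames += 1
    else:
        consecutive_motion_frames = 0

    # The event is triggered only when the motion has lasted long enough
    return consecutive_motion_frames >= FRAMES_REQUIRED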
But we faced one last problem when we tested the code: if someone moved his or her hand very close to the camera, the system would be triggered even though the motion was small. That was not what we wanted, so we tried to improve our algorithm.
III. The motion distance
The best thing we could do at this stage was to test and experiment. After a while, we discovered something interesting: if the motion is far enough from the camera, a single big red rectangle is created instead of a lot of small ones. Great!
Thus we chose to change our algorithm: instead of ignoring rectangles that are too small, we now consider only the biggest one for the percentage. After some tests, we were proud: our system works pretty well!
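A sketch of that final rule (again with illustrative names and thresholds): only the largest rectangle counts, so many small close-up movements no longer add up to a trigger.

def biggest_motion_percentage(rectangles, frame_width, frame_height):
    """Percentage of the frame covered by the single biggest motion rectangle."""
    if not rectangles:
        return 0.0
    biggest_area = max(w * h for (_, _, w, h) in rectangles)
    return 100.0 * biggest_area / (frame_width * frame_height)

# The trigger condition then becomes, for example:
# if biggest_motion_percentage(rects, width, height) > MOTION_THRESHOLD: ...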
IV. The final code
#!/usr/bin/env python