OpenCV hover-assist
Still haven't destroyed our two CF 2.0s, and I'm hoping to sneak in some hover assistance using OpenCV with a webcam. The blue LEDs make for good tracking (except when the lights bounce off my shiny forehead...).
I've cobbled together some inline Python code based on OpenCV examples, and I'm looking for some helpful hints on where/how I might best integrate and dial in some offsets into the cfclient to help hover. I haven't pored through all the traffic here yet (apologies), but would appreciate any suggestions.
Thanks!
Re: OpenCV hover-assist
I've tried to do the same. My approach was to use little colored styrofoam balls on top of the CF2 to be able to track it. I haven't actually tried it yet though, because my webcam has really poor quality and color representation, so it's hard to detect the colors properly.
What algorithm are you using? Camshift or simple meanshift?
Re: OpenCV hover-assist
Working off this: https://github.com/jessicaaustin/roboti ... tracker.py
She hooked a camera up to it, which should make for a decent tracker, at least until it crashes into my forehead seeking blue reflections...
Re: OpenCV hover-assist
So, I've scanned most of the source, both the client and the 'flie, and have a (very!) vague idea about what's going on.
I think that the input.py program would be a logical place to knit in my theoretical hover-assist, +/- the thrust in that loop. I'm thinking to keep my hands as far out of the existing code as possible, and would like to keep my OpenCV code in a separate program. It can send periodic 'altitude' data now.
My question is: what would be the best/proper way to communicate between the two programs (cfclient & my hackware)? Sockets, RPC, trying to start them both as subprocesses, or something else already existing in the client environment?
This is all new to me, and I'm an old procedural C guy, so I'm enjoying/struggling with all these objects. Any helpful hints very much appreciated.
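One low-friction option for the two-programs question is a UDP socket: the tracker fires off its latest altitude estimate, and the client polls for the freshest one whenever its loop runs. A minimal sketch (the port number and JSON payload are arbitrary choices for illustration, not anything cfclient defines):

```python
import json
import socket
import time

# Tracker side: push the latest altitude estimate over UDP, fire-and-forget
ADDR = ("127.0.0.1", 51234)  # port chosen arbitrarily for this sketch

def send_altitude(sock, altitude_m):
    sock.sendto(json.dumps({"alt": altitude_m}).encode(), ADDR)

# Client side: non-blocking receive that drains the queue and keeps only
# the freshest reading, so a slow control loop never lags behind the tracker
def recv_latest_altitude(sock):
    latest = None
    while True:
        try:
            data, _ = sock.recvfrom(1024)
        except BlockingIOError:
            return latest
        latest = json.loads(data)["alt"]

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(ADDR)
rx.setblocking(False)

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_altitude(tx, 0.42)

time.sleep(0.1)  # give the datagram a moment to arrive
alt = recv_latest_altitude(rx)
tx.close()
rx.close()
```

UDP fits here because a stale altitude packet is worthless: dropping it and reading the newest one is exactly the behavior you want, with no connection state to manage between the two processes.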
Re: OpenCV hover-assist
Very interested in your progress with this. I haven't yet begun looking at OpenCV but will in the next few weeks.
It isn't very difficult to make your own version of the client. I did this with a Kinect project. It made it easy to consolidate my code rather than try to communicate between my code and the client.
Re: OpenCV hover-assist
Hi,
If you would like to make a completely separate script for running the hover control, then have a look at this post, wiki page and video. It's just a simple (and not very stable...) script, but it shows how to use the API to control the Crazyflie. The Crazyflie Python API is documented here and there are some examples here. There's also a pretty long thread here describing Oliver's implementation using depth images from the Kinect.
If you would like to "hook it into" the normal client, I would recommend looking at the dev-inputdev branch on GitHub. It has a new architecture for input devices that we are working on. Basically it supports multiple back-ends for getting input device data. I'm working on the back-end for the Leap Motion right now, which will (from the client and user side) look like a normal input device. If you would like to hook in the Kinect, you can create a new .py file in lib/cfclient/utils/inputreaders which will be able to control the Crazyflie. It will be picked up and initialized when the application starts. I'm not sure which would be the easiest way to connect to the application, probably a socket.
Here's a quick rundown of how it should work. The readers in lib/cfclient/utils/inputreaders are responsible for opening/closing/scanning/reading input devices. The read function should return an array containing two arrays of values, one for buttons and one for axes. The scale of the values here is not important, except for the buttons, which should be 0 or 1. Every read should return a full set of values, i.e. not just the ones that have changed. After the buttons and axes have been read from the device in this file, they are used together with an input map from conf/input to get the final values for roll/pitch/yaw/thrust and other functionality like altitude hold. These values are then sent to the Crazyflie and to the UI for display. The changes and architecture on this branch are still under development, so things might change before it's done. But as it looks right now, it's probably going to stick.
/Marcus
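To illustrate the separate-script route Marcus describes, here's a rough sketch using the Crazyflie Python API's commander. The URI, loop rate, and controller gains are placeholder assumptions, and a real script should wait for the connected callback before sending setpoints:

```python
import time

def thrust_from_altitude_error(error_m, base=37000, kp=8000):
    """Toy proportional controller mapping altitude error (metres) to a
    Crazyflie thrust setpoint. base/kp are made-up starting points, not
    tuned values; the clamp keeps thrust inside the valid 10001-60000 range."""
    return int(max(10001, min(60000, base + kp * error_m)))

if __name__ == "__main__":
    # cflib is the Crazyflie Python library that the client itself uses
    import cflib.crtp
    from cflib.crazyflie import Crazyflie

    cflib.crtp.init_drivers()
    cf = Crazyflie()
    cf.open_link("radio://0/10/250K")  # adjust URI to your Crazyradio setup
    # (open_link is asynchronous; in practice, wait for the connected
    # callback before sending anything)

    target_alt = 1.0    # metres
    measured_alt = 0.0  # would be fed by the OpenCV tracker each frame

    cf.commander.send_setpoint(0, 0, 0, 0)  # unlock thrust protection
    for _ in range(200):  # ~2 s at 100 Hz; setpoints must be sent continuously
        err = target_alt - measured_alt
        cf.commander.send_setpoint(0, 0, 0, thrust_from_altitude_error(err))
        time.sleep(0.01)
    cf.commander.send_setpoint(0, 0, 0, 0)  # cut thrust before disconnecting
    cf.close_link()
```

The commander takes (roll, pitch, yaw rate, thrust), so hover assist on altitude only means leaving the first three at zero and letting a joystick or the tracker fill them in later.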
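Based on Marcus's description of the read() contract (a full set of values on every call, buttons strictly 0 or 1), a reader skeleton might look roughly like this. The class name, the lifecycle methods beyond read(), and the axis ordering are all assumptions to be checked against the dev-inputdev branch:

```python
class VisionReader:
    """Hypothetical input-reader sketch for lib/cfclient/utils/inputreaders.

    Per Marcus's description, read() must return the complete current
    state (not deltas) as two arrays: one for axes, one for buttons.
    Axis scaling is handled downstream via the conf/input map, so only
    the buttons need to be strictly 0 or 1."""

    def __init__(self):
        self._axes = [0.0, 0.0, 0.0, 0.0]  # e.g. roll, pitch, yaw, thrust
        self._buttons = [0, 0]

    def open(self, device_id=0):
        # Here you would bind the socket / open the tracker feed
        pass

    def read(self):
        # Always return the full state; the client maps it via conf/input
        return [self._axes, self._buttons]

    def close(self):
        pass
```

The vision process would update self._axes (e.g. via the socket approach discussed earlier), and the client would poll read() at its own rate, exactly as it does for a joystick.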
Re: OpenCV hover-assist
Thank you Marcus, I shall go down this path and try to do it "right". Although I must say your team has set a pretty lofty bar for right... (a little aviation humor to start your new year)...D
Re: OpenCV hover-assist
Would love help getting this working. I got the Python code working that uses OpenCV to detect and track specific colors using my laptop's built-in webcam. What next? Any help appreciated!
Re: OpenCV hover-assist
Hi,
With the new Kalman filter and position controller developed for the LPS, the performance of a Crazyflie tracked by a webcam should now be very good.
If you are able to calculate the X/Y/Z position of the Crazyflie, you could send it using a new port that has been created for that: https://github.com/bitcraze/crazyflie-f ... position.c. The packet format is in the .h file. There is no support for this packet in the lib yet, but it is fairly easy to add, so you can either try to add it yourself, or tell me and I can give it a try.
/Arnaud
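Until the lib grows support for that packet, the payload itself is easy to build by hand. Assuming the packet carries x/y/z as three little-endian floats (double-check the layout against the .h file Arnaud mentions before relying on it), a sketch:

```python
import struct

def ext_position_payload(x, y, z):
    """Pack an external-position update as three little-endian floats.

    The x/y/z float layout is an assumption taken from the referenced
    firmware header; verify field order and types against the .h file."""
    return struct.pack("<fff", x, y, z)

# Sending it would look roughly like this once wired into cflib
# (the localization port number must match the firmware's CRTP port
# definition -- treat it as an assumption to verify):
#
#   from cflib.crtp.crtpstack import CRTPPacket
#   pk = CRTPPacket()
#   pk.port = LOCALIZATION_PORT  # from the firmware's CRTP port enum
#   pk.data = ext_position_payload(0.0, 0.0, 1.0)
#   cf.send_packet(pk)
```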
Re: OpenCV hover-assist
Thanks, Arnaud. Further guidance would definitely be helpful! I'm new to this: Python, Crazyflie software interaction, etc. How do I tie it all together? I have working Python code that uses OpenCV to detect and track specific colors - it puts a red border around the tracked color. Can you suggest how I might proceed from here? Thank you in advance!