I'm working on a small WPF desktop app to track a robot. I have a Kinect for Windows on my desk, and I was able to do the basic features and run the depth camera stream and the RGB camera stream. What I need is to track a robot on the floor, but I have no idea where to start. I found out that I should use EMGU (an OpenCV wrapper). What I want to do is track the robot and find its location using the depth camera; basically, it's for localization of the robot using stereo triangulation. Then, using TCP over Wi-Fi, I want to send the robot commands to move it from one place to another, using both the RGB and depth cameras. The RGB camera will also be used to map the objects in the area so that the robot can take the best path and avoid them.
The problem is that I have never worked with computer vision before; this is actually my first attempt. I'm not stuck to a deadline, and I'm more than willing to learn all the related material to finish this project. I'm looking for details, explanations, hints, links, or tutorials to achieve this.

Robot localization is a very tricky problem, and I myself have been struggling with it for months; I can tell you what I have achieved. You have a number of options:

Optical-flow-based odometry (also known as visual odometry):
Extract keypoints (features) from one image (I used Shi-Tomasi corners, via cvGoodFeaturesToTrack). Do the same for the consecutive image. Match these features between the two frames (I used Lucas-Kanade optical flow).
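To get a concrete sense of what cvGoodFeaturesToTrack is actually scoring, here is a minimal pure-Python sketch of the Shi-Tomasi min-eigenvalue corner measure. The toy image, window size, and coordinates are invented for illustration; in a real app you would call the EMGU/OpenCV function directly rather than hand-rolling this:

```python
import math

def shi_tomasi_score(img, x, y, win=1):
    """Min-eigenvalue corner response (the measure behind cvGoodFeaturesToTrack)."""
    sxx = sxy = syy = 0.0
    for j in range(y - win, y + win + 1):
        for i in range(x - win, x + win + 1):
            ix = (img[j][i + 1] - img[j][i - 1]) / 2.0   # central x-gradient
            iy = (img[j + 1][i] - img[j - 1][i]) / 2.0   # central y-gradient
            sxx += ix * ix
            sxy += ix * iy
            syy += iy * iy
    # smaller eigenvalue of the 2x2 structure tensor [[sxx, sxy], [sxy, syy]]
    return 0.5 * ((sxx + syy) - math.sqrt((sxx - syy) ** 2 + 4 * sxy ** 2))

# toy 8x8 image: bright square in the lower-right quadrant, so (4, 4) is a corner
img = [[255 if (x >= 4 and y >= 4) else 0 for x in range(8)] for y in range(8)]
corner = shi_tomasi_score(img, 4, 4)  # strong response at the square's corner
flat = shi_tomasi_score(img, 2, 2)    # zero response in the flat region
```

The score is high only where the image gradient varies in two directions at once, which is why these points stay trackable from frame to frame.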
Extract depth information from the Kinect. Calculate the transformation between the two 3D point clouds. What the above algorithm is doing is estimating the camera motion between two frames, which tells you the position of the robot.

Monte Carlo Localization: This is rather simpler, but you should also use wheel odometry with it. There are C#-based approaches you can check out. This method uses probabilistic models to determine the robot's location.
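The "calculate the transformation between the two point clouds" step in the visual-odometry option is usually solved in closed form (Kabsch/Horn via an SVD) or iteratively with ICP. Since the robot moves on a flat floor, a 2D version shows the idea with plain arithmetic. This is an illustrative sketch with made-up point data, not anyone's production code:

```python
import math

def estimate_rigid_2d(src, dst):
    """Least-squares rotation + translation mapping src points onto dst
    (closed-form 2D Kabsch; the 3D case adds an SVD but is the same idea)."""
    n = len(src)
    cxs = sum(p[0] for p in src) / n; cys = sum(p[1] for p in src) / n
    cxd = sum(p[0] for p in dst) / n; cyd = sum(p[1] for p in dst) / n
    # accumulate covariance terms of the centred point sets
    sdot = scross = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        ax, ay = xs - cxs, ys - cys
        bx, by = xd - cxd, yd - cyd
        sdot += ax * bx + ay * by           # "dot" term
        scross += ax * by - ay * bx         # "cross" term
    theta = math.atan2(scross, sdot)        # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cxd - (c * cxs - s * cys)          # translation = dst centroid - R * src centroid
    ty = cyd - (s * cxs + c * cys)
    return theta, tx, ty

# synthetic check: rotate known points by 30 degrees, shift them, recover the motion
true_theta, true_t = math.radians(30), (1.5, -0.5)
src = [(0.0, 0.0), (2.0, 0.0), (1.0, 3.0), (-1.0, 1.0)]
c, s = math.cos(true_theta), math.sin(true_theta)
dst = [(c * x - s * y + true_t[0], s * x + c * y + true_t[1]) for x, y in src]
theta, tx, ty = estimate_rigid_2d(src, dst)
```

Chaining these per-frame transforms gives you the robot's pose over time, which is exactly why visual odometry drifts: every small estimation error accumulates.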
The sad part is that even though libraries exist in C to do what you need very easily, wrapping them for C# is a herculean task. If you can code a wrapper, however, then 90% of your work is done; the key libraries to use are:
The last option (which is by far the easiest, but the most inaccurate) is to use KinectFusion, built into Kinect SDK 1.7. But my experiences with it for robot localization have been very bad. You must read up on it; it will make things about Monte Carlo Localization very clear. The hard reality is that this is very tricky, and you will most probably end up doing it yourself. I hope you dive into this vast topic and learn some awesome stuff.
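To make the Monte Carlo Localization idea concrete, here is a minimal 1D particle filter showing the predict / weight / resample cycle. The beacon position, noise levels, and motion model are all invented for the demo; a real robot would fuse wheel odometry and Kinect range measurements in the same three steps:

```python
import random, math

random.seed(0)

def likelihood(x, mu, sigma):
    """Unnormalized Gaussian: how plausible a measurement is for one particle."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def mcl_step(particles, control, measurement, beacon,
             motion_noise=0.1, sense_noise=0.5):
    """One predict-weight-resample cycle of Monte Carlo Localization (1D sketch)."""
    # 1. predict: apply odometry (control) plus motion noise to every particle
    moved = [p + control + random.gauss(0, motion_noise) for p in particles]
    # 2. weight: how well does each particle explain the range measurement?
    weights = [likelihood(abs(beacon - p), measurement, sense_noise) for p in moved]
    # 3. resample: draw a new particle set proportionally to the weights
    return random.choices(moved, weights=weights, k=len(moved))

# robot truly starts at 2.0 and moves +1.0 per step; a range beacon sits at 8.0
beacon, true_pos = 8.0, 2.0
particles = [random.uniform(0, 10) for _ in range(500)]   # unknown start: spread out
for _ in range(5):
    true_pos += 1.0
    z = abs(beacon - true_pos)            # noise-free reading, for the demo only
    particles = mcl_step(particles, 1.0, z, beacon)
estimate = sum(particles) / len(particles)  # particle mean converges near true_pos
```

The particles start spread over the whole track (unknown position) and collapse onto the true location after a few motion-plus-measurement cycles, which is exactly the behavior that makes MCL robust to an unknown starting pose.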
For further information, or for wrappers that I have written, just comment below. :-) Best.