Drone Detection and Stabilization Platform

Team Members: Alon Melamud, Tom Guter
Quick overview
Project Objective:

The goal of our project was to create a platform for drone detection and stabilization, without using markers on the drone.

We wanted to create a working proof of concept of this idea.
In this project, we succeeded in creating an easy-to-use platform for future projects that wish to use a stabilized drone without markers.
Demonstration:
A short clip presenting the platform:

[V_Demo.MOV]
The black dot shows where the drone should be, and the red vector is the path it has to fly in order to stay stabilized.
Our solution and workflow
Assumptions:
1. The drone is an orange SYMA X5SW.
2. The cameras are located below the drone.
Solution
Our solution for Drone Detection and stabilization:

In order to detect the Drone without markers, we used machine learning.

We trained a neural network to detect our drone, so that given an image containing the drone, the output is the drone's coordinates in that image.

Using a stereo camera system, with the drone coordinates in each camera (detected with machine learning), we were able to triangulate and find the world coordinates of the drone.

Given the world coordinates of the drone, it is easy to compute the movement vector the drone must follow in order to stay stabilized.
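Given the current and target world positions, the correction is simply their difference. A minimal numpy sketch (the function and variable names are ours for illustration, not from the project code):

```python
import numpy as np

def stabilization_vector(drone_pos, target_pos):
    """Correction vector the drone must fly to return to the target hover point.
    Both arguments are (X, Y, Z) world coordinates in meters."""
    return np.asarray(target_pos, dtype=float) - np.asarray(drone_pos, dtype=float)

# Example: the drone drifted 0.5 m along X and -0.2 m along Y from its hover point.
v = stabilization_vector(drone_pos=(0.5, -0.2, 1.0), target_pos=(0.0, 0.0, 1.0))
# v is (-0.5, 0.2, 0.0): fly back 0.5 m in X and 0.2 m in Y, altitude unchanged.
```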
Workflow:
We first created the datasets for training the neural network.

Then, we trained the neural network until we were satisfied with the results.

The next step was to create a calibrated Stereo camera system and the triangulation algorithm.

Then we combined the drone detection and the Stereo camera system with the triangulation algorithm.

Finally, we added a GUI and the project was ready!
Reconstructing our project – setup
Workspace:


[P_TensorFlowLogo.png] [P_opencv.jpg]
Devices and software used
Devices
2 identical webcams
Windows/Linux computer
orange SYMA X5SW
Software & libraries
TensorFlow
OpenCV 3.3.0
Python 3.5+

Code:

Our main code that runs everything can be found Here.
Running the programs
*Please note that in order to run the project, you will need the full project folder.

First install TensorFlow: instructions.

Then, build your stereo camera system and make sure the cameras are calibrated.

In order to calibrate your cameras, take photos of a 7×9 chessboard with your stereo cameras and save them in the calibration/images folder, named left*.jpg and right*.jpg.

After making sure the cameras are calibrated, and the libraries needed for the project are installed, run object_detection_live.py

It will set up everything for you, and after it prints “Ready!” it will start the GUI.

Algorithms description:
Detecting drone and generating points for triangulation
The drone is detected using machine learning, and matching points are generated using SIFT:
Taking about 400 pictures of the drone and creating an XML file for each photo containing the drone's bounding box in the image.

Automatically generating over 1800 additional images (varying zoom and rotation) and an XML file for each image.

Training the neural network with the SSD MobileNet v1 (COCO) detection network on the dataset we created (training took about 7 hours).

(The neural network was trained using the TensorFlow API)
We used this detection network because we need detection fast enough to run live.
When the drone is detected, the neural network returns a bounding box around it.
Using SIFT (Scale-Invariant Feature Transform) we can find the same point in both images by matching features inside the two bounding boxes.

Calibrating cameras and Triangulation algorithm
The Triangulation algorithm gets the drone position in each image.
First, both cameras are calibrated using the chessboard images.

Calibration returns: two camera matrices, two vectors of distortion coefficients, a rotation matrix R, and a translation vector T.
Using the calibration results, we then rectify the cameras so they share the same image plane.
After the cameras are rectified, we can relate the two camera frames with P_l = R^T (P_r − T) and triangulate.
Using P_l and P_r we can use the OpenCV function cv2.triangulatePoints to reconstruct 3D points by triangulation.
After triangulation we reproject the image to 3D, which gives us both the world coordinates of the drone and its distance from the cameras.
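The same linear (DLT) triangulation that cv2.triangulatePoints performs can be sketched in plain numpy and checked on a synthetic two-camera rig (all numbers here are illustrative, not from our setup):

```python
import numpy as np

def triangulate(P_left, P_right, x_left, x_right):
    """Linear (DLT) triangulation of one matched point pair.
    P_left, P_right are 3x4 projection matrices; x_left, x_right are (u, v) pixels."""
    A = np.vstack([
        x_left[0] * P_left[2] - P_left[0],
        x_left[1] * P_left[2] - P_left[1],
        x_right[0] * P_right[2] - P_right[0],
        x_right[1] * P_right[2] - P_right[1],
    ])
    _, _, Vt = np.linalg.svd(A)  # null vector of A is the homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]          # de-homogenize

# Synthetic rig: identical intrinsics, right camera 0.1 m to the right of the left.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P_l = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_r = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

project = lambda P, X: (P @ np.append(X, 1.0))[:2] / (P @ np.append(X, 1.0))[2]
X_true = np.array([0.2, -0.1, 2.0])
X_rec = triangulate(P_l, P_r, project(P_l, X_true), project(P_r, X_true))
# X_rec recovers X_true up to numerical precision.
```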
Future Improvements

Faster detection:

Train with a bigger and better dataset, and use a faster CPU for higher FPS.

Find rotation of the drone:

In this project we found the (X, Y, Z) world coordinates of the drone, but in order to fully stabilize it, the drone's rotation is also needed.

Using this project it should be possible to stabilize a drone that does not change its rotation (a DJI Phantom 4, for example).
Improve translation accuracy:

Use more data to calibrate the cameras and improve the RMS reprojection error.

Use a disparity map to find depth, which will improve the distance-measurement accuracy.

Contact us:
Alon Melamud: alonmem@gmail.com
Tom Guter: tomyguter@gmail.com
