Research Initiatives

These are the main questions driving our research initiatives:


  • How do autonomous agents use visual sensors to avoid collisions and drive safely in unpredictable, dynamic environments?
  • What rules are necessary for navigation, and what unique group behaviors may emerge from those rules?
  • How much understanding of the visual scene and knowledge of the world is needed for the task of navigation?
  • How much can be achieved with sensor-based navigation strategies?

Autonomous Drones

Autonomous Landing

Objective: Given a target platform, derive navigation rules that let drones land safely and asynchronously, without collisions and without the need for central control.

Accomplishments: Safe, collision-free landing behaviour was achieved for a group of 4 drones simulated in the Unity3D game engine, using simple rules based on vector calculations that combine VTC (Visual Threat Cue) and target navigation.
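
The exact landing rules are not reproduced here; the following is a minimal sketch of the core idea under assumed names and gains: each drone steers by a target-attraction vector plus repulsion from neighbors, weighted by a looming-like closing-speed cue.

```ruby
# Minimal sketch (illustrative names and gains, not the original code):
# steering = attraction toward the landing target + repulsion from each
# neighbor, weighted by a VTC-like threat (closing speed over distance).
Vec = Struct.new(:x, :y, :z) do
  def -(o); Vec.new(x - o.x, y - o.y, z - o.z); end
  def +(o); Vec.new(x + o.x, y + o.y, z + o.z); end
  def *(s); Vec.new(x * s, y * s, z * s); end
  def norm; Math.sqrt(x * x + y * y + z * z); end
  def unit; self * (1.0 / norm); end
end

def steer(pos, vel, target, neighbors, k_goal: 1.0, k_avoid: 2.0)
  cmd = (target - pos).unit * k_goal          # attraction to the target
  neighbors.each do |n|
    rel_p = pos - n[:pos]                     # vector away from neighbor
    rel_v = vel - n[:vel]
    range = rel_p.norm
    closing = -(rel_p.x * rel_v.x + rel_p.y * rel_v.y + rel_p.z * rel_v.z) / range
    threat = closing > 0 ? closing / range : 0.0   # looming-like threat cue
    cmd = cmd + rel_p.unit * (k_avoid * threat)    # push away when closing
  end
  cmd
end
```

A drone with a clear path steers straight at its target; a drone closing on a neighbor has its command deflected, which is what allows the group to land asynchronously without a central controller.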


Autonomous Surveillance Drones Landing Simulation

Autonomous Drones

Control Allocation - Robust Motor Mixer

Objective: Given an arbitrary drone configuration with an unspecified number of motors, find the distribution of forces and moments across the motors that produces the total force and moment commanded by the attitude controller. The algorithm should adapt and reconfigure itself when one or more motors fail.

Accomplishment: An algorithm was developed in Ruby and simulated in SketchUp for a drone with 20 motors in arbitrary positions. The algorithm uses a gradient-descent-like scheme to converge, in fewer than 50 iterations, on the force each motor must produce for a target total force and moment on the rigid body. Configurations with 20, 9, and 4 motors were successfully tested.
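
The original Ruby code is not reproduced here; the sketch below shows the idea under assumed names: iterate the per-motor thrusts down the gradient of the squared force-and-moment error (rotor drag torques are ignored in this simplification).

```ruby
require 'matrix'

# Gradient-descent motor mixer sketch. Motors sit at positions `pos`
# with thrust axes `axis`; thrusts `f` are iterated so that the total
# force and moment approach the commanded targets.
def mix(pos, axis, f_target, m_target, iters: 200, lr: 0.05)
  n = pos.size
  f = Array.new(n, 0.0)                     # thrust of each motor
  iters.times do
    force  = (0...n).sum(Vector[0, 0, 0]) { |i| axis[i] * f[i] }
    moment = (0...n).sum(Vector[0, 0, 0]) { |i| pos[i].cross_product(axis[i]) * f[i] }
    e_f = force  - f_target                 # residual force error
    e_m = moment - m_target                 # residual moment error
    n.times do |i|
      grad = e_f.inner_product(axis[i]) +
             e_m.inner_product(pos[i].cross_product(axis[i]))
      f[i] -= lr * grad                     # descend the squared error
    end
  end
  f
end
```

Motor failure is handled naturally in this formulation: removing a failed motor from `pos`/`axis` and re-running the descent redistributes the load over the remaining motors.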

Future work: Extend the algorithm to actuators or motors that saturate, or that cannot provide force along every axis but only along the axis perpendicular to the propeller plane.

A physical realization of the algorithm could be implemented on a microcontroller.

Computer Vision

Pixel Looming Visualization

Objective: Using the GPU and the depth buffer of the simulated camera's video sequence, create a visualization that calculates the relative looming of each pixel between video frames and applies a heat-map coloring, to gain insight into obstacles approaching the drone.

Accomplishments: A visualization of relative pixel looming was accomplished by reading the per-pixel depth buffer of the video sequence directly on the GPU, using shaders in the Unity3D simulation.
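
The shader itself is not shown; a CPU-side sketch of the per-pixel computation it performs might look like the following, where the looming formula L = -(dD/dt)/D and the clamped blue-to-red coloring are assumptions, not the original shader.

```ruby
# Per-pixel relative looming from two consecutive depth buffers
# (flat arrays of camera-space depths), colored as a heat map.
def looming_map(depth_prev, depth_cur, dt)
  depth_cur.each_index.map do |i|
    l = (depth_prev[i] - depth_cur[i]) / (depth_cur[i] * dt)  # > 0 when approaching
    t = l.clamp(0.0, 1.0)                                     # intensity for coloring
    [t, 0.0, 1.0 - t]                                         # RGB: blue -> red
  end
end
```

On the GPU the same computation runs per fragment, with the previous frame's depth buffer bound as a second texture.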


Pixel Shader for Looming Visualization - Drone Obstacle Avoidance

Game Simulation:

Future Work: The resulting looming vector for every pixel in the scene could be calculated directly on the GPU at each frame, but further work is needed on shader programming.

A physical realization may be possible with a depth-sensing stereo camera and an embedded GPU on a real drone.


Autonomous Drones

Collision free Navigation in a Complex Environment

Objective: Simulate quadcopters that achieve collision-free navigation relative to one another in a complex environment with many fixed and moving obstacles and interleaved shared destinations.

Accomplishments: A simulation was set up in the Unity3D game engine with six quadcopters navigating a very complex environment full of fixed and moving obstacles. Each drone follows destination rules: it must touch each base in turn, switching between them while moving at full speed. Performance was good and no collisions between drones were observed. Interesting emergent behaviours also appeared, such as drones taking turns to reach their goals without any central command.


Quadcopter Simulation - Collision Avoidance and Goal Navigation
Quadcopter Obstacle Avoidance Simulation

Autonomous Drones

Navigation and Obstacle Avoidance

Objective: Create a simulation in which several quadcopters head for the same destination at constant altitude while avoiding collisions, pursuing these conflicting goals simultaneously.

Use Looming and VTC in 3D as a mechanism to govern navigation and obstacle avoidance.

Accomplishment: A simulation was set up in the Unity3D game engine for two drones with conflicting goals. By manually tuning PID controllers for stable flight and quick maneuvers, and by applying the right coefficients to the looming and VTC algorithms, good performance with no collisions was achieved.
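
The controllers were tuned by hand; a minimal sketch of the PID form being tuned (class name and gains are illustrative, not the original Unity code):

```ruby
# Minimal PID controller of the kind hand-tuned per drone axis.
class PID
  def initialize(kp, ki, kd)
    @kp, @ki, @kd = kp, ki, kd
    @i = 0.0        # integral accumulator
    @prev = nil     # previous error, for the derivative term
  end

  def update(error, dt)
    @i += error * dt
    d = @prev ? (error - @prev) / dt : 0.0
    @prev = error
    @kp * error + @ki * @i + @kd * d
  end
end
```

Tuning proceeds by raising kp until the response is fast, adding kd to damp oscillation, and adding a small ki to remove steady-state error.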

Bee Drones Simulation with Looming and VTC

Autonomous Systems

Emergent Behaviour

Objective: Create a virtual simulation of a modular robot, or a collection of multiple parts, that shows complex emergent behaviors arising from simple rules applied to each element.

Accomplishment: The simulation was built in the Unity3D game engine. By connecting simple elements and actuators in a sequence from a root cell that moves with a cosine wave, and by applying just two simple control rules to each actuator (PID control of the angle relative to the connected parent cell), a very organic type of behaviour was obtained, emulating the tail of a living organism without the need of a central command.
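
A minimal 1D sketch of the propagation mechanism (an assumption, not the Unity code): the root angle is driven externally by the cosine wave, and each joint is P-controlled toward its parent's angle (a pure P term stands in for the PID), so the wave travels down the chain with a lag, producing the tail-like motion.

```ruby
# One time step of the chain: joint 0 follows the externally driven
# root angle; every other joint moves toward its parent's angle.
def step_tail(angles, root_angle, kp, dt)
  out = angles.dup
  out[0] = root_angle
  (1...angles.size).each do |i|
    out[i] = angles[i] + kp * (angles[i - 1] - angles[i]) * dt
  end
  out
end
```

Driving `root_angle` with `Math.cos(w * t)` makes each segment trace a delayed, smoothed copy of the wave, which is the organic behaviour described above.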

Robotic tail behavior emerges from simple rules

Autonomous Systems

Obstacle Avoidance - VTC (Visual Threat Cue)

Objective: Use the concept of the VTC (Visual Threat Cue) [1] to derive rules that let a group of agents navigate safely, without collisions, in a simple 2D virtual world.


Accomplishments: Each agent in the simulation calculates vector quantities for the relative looming and VTC of the other objects in its field of view, as well as the relative looming of obstacles such as walls. With just one formula, each agent can then combine these quantities with its destination to derive a heading for the next time interval. In this way no collisions occur, and all agents cooperatively navigate this simple 2D world safely.
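
In the spirit of [1], looming is the relative rate of approach, L = -(dR/dt)/R, and the VTC scales it by a desired minimum clearance R0. The heading rule below, combining the goal direction with VTC-weighted repulsion, is a sketch of the "one formula" idea, not the original implementation; the gain k is illustrative.

```ruby
def looming(r, r_dot)
  -r_dot / r              # > 0 when the range R is decreasing
end

def vtc(r, r_dot, r0)
  r0 * looming(r, r_dot)  # threat scaled by the desired clearance R0
end

# threats: [[bearing_to_other, vtc_value], ...] in radians
def heading(goal_bearing, threats, k = 0.5)
  x = Math.cos(goal_bearing)
  y = Math.sin(goal_bearing)
  threats.each do |b, v|
    next if v <= 0                   # ignore receding objects
    x -= k * v * Math.cos(b)         # push the heading away from the threat
    y -= k * v * Math.sin(b)
  end
  Math.atan2(y, x)
end
```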

Navigation using Looming and VTC - 1/2
Navigation using Looming and VTC - 2/2

[1] Kundur, S. R., & Raviv, D. (1996, June). Novel active-vision-based visual-threat-cue for autonomous navigation tasks. In Proceedings of the 1996 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '96) (pp. 606-612). IEEE.

Computer Vision

Looming Space Fields

Objective: Generate a "heat map" visualization that applies the concepts of electric fields to the visual looming of moving objects in space, to gain insight into looming fields.


Accomplishment: For a 3D model of a video sequence, the floor was segmented in SketchUp into small squares of equal size, and the resulting looming of all objects in the scene relative to each point was calculated at each time interval. This produced an intensity field that was mapped with a color cue: deep red indicates high positive looming-field intensity, while deep blue indicates negative looming.
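
The SketchUp code is not reproduced; the sketch below shows the field construction under assumed formulas: sum each moving object's looming contribution at a grid cell, then map the sign and intensity to red or blue.

```ruby
# Looming field at one grid cell: sum of -(dR/dt)/R over all objects,
# where R is the range from the cell to the object.
def looming_at(cell, objects)
  objects.sum do |o|
    dx = o[:x] - cell[0]
    dy = o[:y] - cell[1]
    r  = Math.sqrt(dx * dx + dy * dy)
    r_dot = (dx * o[:vx] + dy * o[:vy]) / r   # rate of change of range
    -r_dot / r
  end
end

# Positive looming -> red, negative -> blue, clamped by `scale`.
def field_color(l, scale = 1.0)
  t = [[l / scale, -1.0].max, 1.0].min
  t >= 0 ? [t, 0.0, 0.0] : [0.0, 0.0, -t]
end
```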


Visual Looming field for moving objects

Autonomous Systems

Obstacle Avoidance - Spaghetti Visualization

Objective: Based on a 3D model reconstructed from a 2D video sequence, create a visualization of the trajectory of each object, projecting time as an additional spatial dimension.

Accomplishment: Using a 3D model in SketchUp, the time dimension was projected as a spatial dimension on the Z-axis, with a circle of constant size centered on each object. In this way the trajectories of the objects were visualized as tubes, with every second mapped to 50 cm of vertical space.
The result is a spaghetti-maze-like diagram in which tubes that never touch are a clear indication that no accidents occurred.
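
The mapping itself is simple; the sketch below (assumed names, not the SketchUp code) converts trajectory samples to space-time points at 50 cm per second and checks whether two tubes ever touch at the same height.

```ruby
Z_PER_SECOND = 0.5  # meters of vertical (time) axis per second

# samples: [[x, y, t], ...] -> [[x, y, z], ...] with time on the Z-axis
def to_spacetime(samples)
  samples.map { |x, y, t| [x, y, t * Z_PER_SECOND] }
end

# Two tubes (sampled at the same instants) touch if, at any shared
# height, their centers come closer than the sum of the tube radii.
def tubes_touch?(a, b, radius)
  a.zip(b).any? do |(ax, ay, az), (bx, by, bz)|
    az == bz && Math.hypot(ax - bx, ay - by) < 2 * radius
  end
end
```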

Visualization of moving objects with time as another space dimension.
Space Time slices for moving objects

Computer Vision

Video Object Reconstruction

Objective: Take a 2D video sequence of a real intersection and post-process it to track all the moving objects, then represent the scene as a 3D model animation for further analysis of visual theories.

Accomplishment: It was possible to reconstruct an approximate 3D model of the scene and the objects involved in a 2D video sequence of a vehicular intersection. This was done by manually digitizing key frames from a 10 s video and by estimating vehicle and street dimensions from satellite maps.

Digitizing a 2D Video to a 3D animation - 30 frames

Simulations in 3D

Bike Simulations

Objective: Create simple simulations of multiple robot bikes in a virtual intersection as a point of reference for further exploration of coordinated behaviour and obstacle-avoidance algorithms.

Accomplishments: Several simulations of multiple bikes, with simple turn and tilt dynamics, were developed in SketchUp. This was useful for developing some Ruby code to control turning commands.
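
The turn and tilt dynamics are only described qualitatively above; a common kinematic sketch of such a rule (an assumption, not the original Ruby code) relates steering angle, turn radius, and lean angle: the front-wheel steering angle sets the turn radius through the wheelbase, and a coordinated turn of radius r at speed v requires a lean of atan(v^2 / (g r)).

```ruby
G = 9.81  # gravitational acceleration, m/s^2

# Kinematic bicycle model: turn radius from wheelbase and steering angle.
def turn_radius(wheelbase, steer_angle)
  wheelbase / Math.tan(steer_angle)
end

# Lean (tilt) angle that balances a coordinated turn of radius r at speed v.
def lean_angle(v, r)
  Math.atan(v * v / (G * r))
end
```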

Some visualizations were helpful for gaining valuable insights into interactions between agents, especially the "Follow the leader" simulation. The absence of visual algorithms showed how dangerous it is to drive under those conditions.

Robot motorcycles simulation - 1/2
Robot motorcycles simulation - 2/2
Bicycle motion simulation 1/2
Bicycle motion simulation 2/2
Robot bike simulation - Follow the leader

Computer Vision

Locus of Zero Retinal Flow

Objective: Visualize how the Locus of Zero Retinal Flow arises for an observer who is moving while fixating on a target. This is an important visual cue that may be useful for obstacle avoidance.

Accomplishment: Simulations produced in SketchUp verified that, for a moving observer fixating on a target, objects move right or left on the retina (screen) depending on whether they are inside or outside the circle of zero flow in the world.
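
The geometry can be checked numerically. In a 2D sketch (an assumption of the setup, not the SketchUp code), a translating observer fixates a static target by rotating at exactly the target's line-of-sight rate; a world point then drifts on the retina at its own line-of-sight rate minus that rotation, and the drift is zero precisely on a circle through the observer and the target.

```ruby
# Angular rate of the line of sight from a moving observer at (ox, oy)
# with velocity (vx, vy) to a static world point (px, py).
def los_rate(ox, oy, vx, vy, px, py)
  dx = px - ox
  dy = py - oy
  (vx * dy - vy * dx) / (dx * dx + dy * dy)
end

# Retinal drift of a point for an observer fixating (rotating with) the target.
def retinal_drift(obs, vel, target, point)
  los_rate(obs[0], obs[1], vel[0], vel[1], point[0], point[1]) -
    los_rate(obs[0], obs[1], vel[0], vel[1], target[0], target[1])
end
```

The sign of the drift flips between the inside and the outside of the zero-flow circle, which is the effect observed in the simulations.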

Locus of Zero Flow (1/3) - Top View
Locus of Zero Flow (2/3) - Fixed Observer
Locus of Zero Flow (3/3) - Eye Retinal View

Digital Fabrication

Tetrahedral Spherical Connector

Objective: Build physical structures (the Voronoi dual mesh) of a 3D tetrahedral solid model using very simple elements.

Accomplishment: A simple tetrahedral 4-way connector was simulated in SketchUp using just two elements. Within an individual connector, the rod angles can be fixed at will with internal screws. In addition, some Ruby algorithms allow the optimal alignment of each connector with its neighbors.

Future Work: 3D-print sample connectors, and build a software interface that derives the instructions for humans to assemble any 3D-printed structure.