Typhoon

Note: an open-source, basic Python implementation is available on GitHub.

Typhoon is a motion estimation program designed to retrieve vector motion fields from image sequences of turbulent flows. It is the reference implementation of the algorithm described in my PhD thesis.

 About Typhoon

Typhoon is a computer-vision algorithm: it extracts the apparent displacements (motion) from image sequences. It was originally designed for fluid flows, that is, to recover fluid velocity fields from images of fluid-flow visualizations, e.g. particle image velocimetry (PIV), schlieren photography or lidar imagery.
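
At the heart of such optical-flow methods is the brightness-constancy assumption: a pixel keeps its intensity while it moves from one frame to the next. The short Python sketch below is purely illustrative (it is not Typhoon's estimator, and all names are made up for the example); it warps the second frame back by a candidate displacement field (u, v) and measures the remaining intensity residual.

    # Illustrative sketch of a brightness-constancy data term, not Typhoon's
    # actual estimator: warp frame 2 back by a candidate displacement field
    # and measure how well it matches frame 1.
    import numpy as np
    from scipy.ndimage import map_coordinates

    def warp_back(frame2, u, v):
        """Sample frame2 at (y + v, x + u) for every pixel (y, x)."""
        h, w = frame2.shape
        yy, xx = np.mgrid[0:h, 0:w].astype(float)
        return map_coordinates(frame2, [yy + v, xx + u], order=1, mode='nearest')

    def data_term(frame1, frame2, u, v):
        """Sum of squared intensity residuals under brightness constancy."""
        residual = warp_back(frame2, u, v) - frame1
        return float(np.sum(residual ** 2))

    # Toy check: frame2 is frame1 shifted 2 pixels to the right, so a uniform
    # horizontal displacement of +2 pixels explains the motion almost exactly.
    rng = np.random.default_rng(0)
    frame1 = rng.random((64, 64))
    frame2 = np.roll(frame1, shift=2, axis=1)
    u = np.full(frame1.shape, 2.0)
    v = np.zeros(frame1.shape)
    print(data_term(frame1, frame2, u, v))   # small residual, apart from border effects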

The first version of Typhoon has been extensively tested and validated on both synthetic and actual particle (PIV) and scalar dispersion images[1][2], as well as on satellite[3] and lidar imagery[4]. Schlieren photography, IR imagery and water vapor imagery have shown promising results as well (unpublished).

Presently, Typhoon is being used by the Lidar group of CSU, Chico for dense wind field estimation in real-time using imagery from the REAL research lidar. It is also licensed by Spectral Sensor Solutions for the same purpose, this time with the SAMPLE commercial lidar. Another active user is the Geomorphology and Sediment Transport Laboratory of the USGS, which uses Typhoon to extract surface river flows from infrared imagery.

A web interface to a demonstration version is available on the A||go platform.

 Examples

Wind estimation (aerosol backscatter)

This example features wind estimation from aerosol backscatter lidar data. It demonstrates the ability to recover a wide range of wind speeds and complex motion, as well as the dynamic masking of unreliable estimates in the far range.

The input backscatter data is shown in the background (copper colors). Estimated motion is visualized by tracers.
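
The dynamic masking mentioned above can be pictured as discarding vectors wherever the backscatter signal is too weak to support a reliable estimate. The sketch below is a simplified illustration with made-up thresholds and array names; it is not the actual Typhoon/REAL processing chain.

    # Simplified illustration of dynamic masking (not the actual REAL/Typhoon
    # processing chain): hide motion vectors wherever the aerosol backscatter
    # signal falls below a chosen threshold, e.g. in the far range.
    import numpy as np

    def mask_weak_signal(u, v, backscatter, threshold):
        """Return copies of (u, v) with NaN wherever backscatter < threshold."""
        keep = backscatter >= threshold
        return np.where(keep, u, np.nan), np.where(keep, v, np.nan)

    # Synthetic example: the signal decays with range, so far-range vectors
    # end up masked.
    range_m = np.linspace(0.0, 5000.0, 256)                  # hypothetical range axis
    backscatter = np.tile(np.exp(-range_m / 2000.0), (128, 1))
    u = np.ones((128, 256))
    v = np.zeros((128, 256))
    u_m, v_m = mask_weak_signal(u, v, backscatter, threshold=0.1)
    print(np.isnan(u_m).mean())                              # fraction of masked vectors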

Wind estimation (water vapor)

This is a demo using satellite imagery. Input frames are water vapor images taken by the GOES satellite over the U.S. East Coast during winter storm Nemo (8 Feb. 2013); source: CIMSS Satellite Blog.

A higher-quality GIF is available here (80 MB). Note that raw, unfiltered displacements between two images (i.e. not instantaneous velocities) are plotted. Motion vectors are available at every pixel, yet only 1 out of 15 in each direction is shown. The vectors sometimes appear to scale up: the time step between two images doubles from time to time, resulting in much larger observable displacements.
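
As a rough illustration of this plotting convention, the sketch below (hypothetical names, using matplotlib) thins a dense per-pixel displacement field down to 1 vector out of 15 in each direction before drawing it; dividing the displacements by the inter-frame time step would turn them into velocities.

    # Rough illustration of the plotting convention described above (not the
    # code used for the animation): the field gives a displacement at every
    # pixel, but only 1 vector out of `step` in each direction is drawn.
    import numpy as np
    import matplotlib.pyplot as plt

    def plot_displacements(u, v, step=15):
        """Quiver plot of a dense displacement field, thinned by `step`."""
        ys, xs = np.mgrid[0:u.shape[0]:step, 0:u.shape[1]:step]
        plt.quiver(xs, ys, u[::step, ::step], v[::step, ::step], angles='xy')
        plt.gca().invert_yaxis()          # image convention: row 0 at the top
        plt.show()

    # Synthetic example: a uniform drift of 3 pixels per frame. If the time
    # step between frames doubled, the raw displacements (and hence the drawn
    # vectors) would roughly double too, as in the animation above.
    u = np.full((300, 400), 3.0)
    v = np.zeros((300, 400))
    plot_displacements(u, v)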

Surface flow estimation (ink on milk)

The video below features ink drops on the surface of milk. When dishwashing liquid is added, the ink drops disperse due to the change in surface tension, resulting in a visible motion. This apparent motion is extracted by the Typhoon algorithm and then visualized through particle trajectories.
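
A simple way to picture this trajectory-based visualization is to seed a few particles and move them forward through the sequence of estimated displacement fields. The sketch below does this with bilinear interpolation; it only illustrates the idea and is not the rendering code used for the video.

    # Illustration of trajectory-based visualization (not the rendering code
    # used for the video): seed particles and push them through a sequence of
    # estimated displacement fields, sampled by bilinear interpolation.
    import numpy as np
    from scipy.ndimage import map_coordinates

    def advance(particles, u, v):
        """One step forward; `particles` is an (N, 2) array of (x, y) positions."""
        x, y = particles[:, 0], particles[:, 1]
        dx = map_coordinates(u, [y, x], order=1, mode='nearest')
        dy = map_coordinates(v, [y, x], order=1, mode='nearest')
        return particles + np.column_stack([dx, dy])

    # Toy example: a uniform drift of 1.5 pixels per frame over 10 frames.
    u = np.full((200, 200), 1.5)
    v = np.zeros((200, 200))
    trajectory = [np.array([[50.0, 100.0], [120.0, 60.0]])]
    for _ in range(10):
        trajectory.append(advance(trajectory[-1], u, v))
    print(trajectory[-1])     # both particles have moved 15 pixels to the right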

Dispersion simulation

We use the flow field derived from the lidar imagery to advect a tracer from an arbitrary point in the scan area. This makes it possible to estimate the dispersion of a constant release of hypothetical material (gas, pollutant, …) by a fixed source located in the near range (100×100 m white box).

The released material is advected by the estimated wind field and colored in blue-green. The color itself is not a concentration; it merely indicates where the material would go.
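
The sketch below gives a minimal, self-contained picture of such a dispersion simulation, assuming a simple semi-Lagrangian advection step and an arbitrary source box; the actual fields, grid and numerical scheme used for the animation may differ.

    # Minimal sketch of the dispersion visualization, under simplifying
    # assumptions (uniform wind, arbitrary source box, semi-Lagrangian
    # advection). The advected quantity indicates where material would go;
    # it is not a physical concentration.
    import numpy as np
    from scipy.ndimage import map_coordinates

    def advect(tracer, u, v, dt):
        """Semi-Lagrangian step: each grid point takes the value found upstream."""
        h, w = tracer.shape
        yy, xx = np.mgrid[0:h, 0:w].astype(float)
        return map_coordinates(tracer, [yy - v * dt, xx - u * dt],
                               order=1, mode='constant')

    # Hypothetical setup: wind blowing toward +x, constant release in a small box.
    h, w = 128, 128
    u = np.full((h, w), 4.0)          # grid cells per time unit
    v = np.zeros((h, w))
    tracer = np.zeros((h, w))
    for _ in range(50):
        tracer[60:68, 4:12] += 1.0    # fixed source, constant release
        tracer = advect(tracer, u, v, dt=0.5)
    print(float(tracer.sum()))        # released material drifts downstream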

 Specifications

Typhoon belongs to the class of dense optical-flow algorithms. It provides a two-component displacement vector at every grid point (image pixel), unlike the well-known cross-correlation algorithms, which usually provide sparse motion fields. Typhoon's main distinguishing feature is its reliance on a wavelet representation of the motion field, which brings a multiscale framework as well as interesting regularity properties.
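
As a rough illustration of the multiscale idea, the sketch below uses the PyWavelets package to expand one displacement component on a wavelet basis and reconstruct it from its coarser scales only. Typhoon's own wavelet machinery ((cu)Tsunami) and its choice of basis differ; this only shows what a wavelet representation of a motion field looks like.

    # Illustration of the multiscale idea only, using PyWavelets; Typhoon's own
    # wavelet library, (cu)Tsunami, and its choice of basis differ. One
    # displacement component is expanded on a wavelet basis, and keeping only
    # the coarser scales gives a smooth, large-scale approximation of the field.
    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    u = rng.standard_normal((256, 256))      # stand-in for one displacement component

    coeffs = pywt.wavedec2(u, wavelet='db4', level=4)   # [coarse, details_4, ..., details_1]

    # Zero out the two finest detail scales, keep the coarse approximation
    # and the intermediate scales.
    truncated = [coeffs[0]]
    for lvl, detail in enumerate(coeffs[1:], start=1):
        if lvl >= len(coeffs) - 2:           # the two finest levels
            detail = tuple(np.zeros_like(d) for d in detail)
        truncated.append(detail)
    u_coarse = pywt.waverec2(truncated, wavelet='db4')

    print(u.shape, u_coarse.shape)           # same grid, smoother field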

The original software is written in C++. In Chico, we parallelized the low-level functions in CUDA, which allows the code to run on compatible NVIDIA graphics processing units (GPUs). The CUDA version achieves a speedup of 10 to 100 times over the CPU version[5], which makes it very attractive for large images. The code has been tested on Ubuntu 12.04 and OS X 10.7 to 10.9.
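
The CUDA kernels themselves are not shown here, but the pattern of offloading a low-level, per-pixel operation to the GPU can be illustrated with CuPy (an assumption made for this sketch; the actual software uses hand-written CUDA kernels through (cu)Tsunami):

    # Rough illustration of GPU offloading with CuPy; the actual software uses
    # hand-written CUDA kernels ((cu)Tsunami), not CuPy. The same per-pixel
    # operation (a finite-difference gradient magnitude) runs on CPU or GPU.
    import numpy as np
    import cupy as cp                        # requires an NVIDIA GPU and CUDA

    def gradient_magnitude(img):
        """Finite-difference gradient magnitude; accepts numpy or cupy arrays."""
        xp = cp.get_array_module(img)        # dispatch to numpy or cupy
        gx = img[:, 1:] - img[:, :-1]
        gy = img[1:, :] - img[:-1, :]
        return xp.sqrt(gx[:-1, :] ** 2 + gy[:, :-1] ** 2)

    img_cpu = np.random.default_rng(0).random((4096, 4096), dtype=np.float32)
    img_gpu = cp.asarray(img_cpu)            # one host-to-device transfer

    g_gpu = gradient_magnitude(img_gpu)      # executes on the GPU
    g_cpu = gradient_magnitude(img_cpu)      # same code on the CPU
    print(np.allclose(cp.asnumpy(g_gpu), g_cpu, atol=1e-5))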

Some of the features of (cu)Typhoon:

  • Image input/output through the CImg library and ImageMagick/convert, which accept a wide range of image formats;
  • (cu)Tsunami, a custom wavelet library containing both CPU and GPU-accelerated routines;
  • A "server" version for custom I/O, pre- and post-processing, …

The package contains both GPU and CPU versions. The GPU code requires a compute capability of 3.0 or above; it has been used on Tesla K20 and GeForce GTX TITAN cards, as well as the lighter GeForce GT 650M (MacBook Pro).

 Licensing

Typhoon is currently co-owned by Inria (France) and the CSU, Chico Research Foundation (USA). The software and its source code are registered with the French Agency for the Protection of Programs (APP):
ID# IDNN.FR.001.490027.000.S.P.2013.000.21000

Licensing is handled by the CSU Research Foundation. Licenses are free for public research activities. Feel free to contact me, Shane Mayor (CSU Chico) or Etienne Mémin (Inria Rennes) for more information.

 References

  1. Wavelets and Fluid motion estimation, PhD thesis, 2012.
  2. Wavelets and Optical Flow Motion Estimation, Num. Math. The. Meth. App., 2013.
  3. Divergence-free Wavelets and High Order Regularization, Int. J. Comp. Vis., 2013.
  4. Wavelet-based optical flow for two-component wind field estimation from single aerosol lidar data, J. Atm. Oc. Tech., 2015.
  5. Wavelet-Based Optical Flow for Real-Time Wind Estimation Using CUDA, GPU Tech. Conf., 2014.