I work in computer vision, previously in academia and now in industry. Details of some of my work can be found on this page.
I was previously a Research Assistant in computer vision at the University of Bristol, as part of the Visual Information Laboratory (www.bris.ac.uk/vi-lab).
For two years I was a research assistant working on bio-inspired models for vision-guided locomotion. This resulted in a couple of related papers:
Fusing Inertial Data with Vision for Enhanced Image Understanding
Using Inertial Data to Enhance Image Segmentation - Knowing camera orientation can improve segmentation of outdoor scenes
More information (plus videos!) on the inertial image segmentation work can be found here: osianh.com/inertial
Multi-User Egocentric Online System for Unsupervised Assistance on Object Usage
Dima Damen, Osian Haines, Teesid Leelasawassuk, Andrew Calway, Walterio Mayol-Cuevas
ECCV Workshop on Assistive Computer Vision and Robotics (ACVR 2014)
download pdf (via bris.ac.uk)
You-Do, I-Learn: Discovering Task Relevant Objects and their Modes of Interaction from Multi-User Egocentric Video
Dima Damen, Teesid Leelasawassuk, Osian Haines, Andrew Calway, Walterio Mayol-Cuevas
British Machine Vision Conference, September 2014 (BMVC '14)
download pdf (via bris.ac.uk)
This section gives a brief outline of my PhD work, on using single images to detect planes by learning from prior knowledge.
Recognising planes in a single image
PhD Thesis: Interpreting the Structure of Single Images by Learning from Examples
Visual mapping using learned structural priors
Detecting planes and estimating their orientation from a single image
Estimating planar structure in single images by learning from examples
Estimating planar structure in single images by learning from examples - Technical Report
See also my list of publications on the Bristol Computer Science page: Osian Haines' publications or my Google Scholar Profile
The dataset for the VISAPP paper can be downloaded here (includes a readme file).
This comprises six sets of images from different video sequences, gathered with a head-mounted camera. Each image is annotated with its orientation from the Oculus Rift inertial sensor.
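As an illustration of how a camera-orientation measurement like this can feed into scene understanding, the sketch below rotates a world-frame gravity direction into the camera frame using a unit quaternion. Note that the quaternion layout (w, x, y, z), the world "down" convention, and whether you apply the quaternion or its conjugate all depend on the sensor's conventions; the values here are assumptions for illustration, not the dataset's actual format.

```python
import math

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, ux, uy, uz = q
    # v' = v + 2*w*(u x v) + 2*(u x (u x v)), with u = (x, y, z)
    # t = 2 * (u x v)
    tx = 2.0 * (uy * v[2] - uz * v[1])
    ty = 2.0 * (uz * v[0] - ux * v[2])
    tz = 2.0 * (ux * v[1] - uy * v[0])
    # v' = v + w*t + (u x t)
    return (
        v[0] + w * tx + (uy * tz - uz * ty),
        v[1] + w * ty + (uz * tx - ux * tz),
        v[2] + w * tz + (ux * ty - uy * tx),
    )

# Assumed world "down" direction:
gravity_world = (0.0, 0.0, -1.0)

# Hypothetical sensor reading: camera pitched 90 degrees about the x-axis.
theta = math.pi / 2.0
q = (math.cos(theta / 2.0), math.sin(theta / 2.0), 0.0, 0.0)

gravity_camera = quat_rotate(q, gravity_world)
```

Knowing where "down" falls in the image is exactly the kind of cue the segmentation work exploits, since ground, walls, and sky have predictable orientations relative to gravity.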
Some datasets are available, with limited annotation. These are .bmp images of urban scenes accompanied by text files describing the plane/non-plane locations.
Training Set ("University"): ~430 images of planar and non-planar scenes. Download .zip, 75MB
Test Set ("City", entirely independent of the training/cross-validation set): Download .zip, 24MB
Explanation of data format for the dataset: TruthImagesReadme.txt
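The authoritative annotation layout is given in TruthImagesReadme.txt. Purely as a sketch, the loader below assumes each image's companion text file lists one labelled rectangle per line, such as "plane x1 y1 x2 y2" or "nonplane x1 y1 x2 y2"; that line format and the file names are hypothetical, so adapt it to what the readme actually specifies.

```python
from pathlib import Path

def load_annotations(txt_path):
    """Parse a hypothetical plane/non-plane annotation file into (label, box) pairs."""
    regions = []
    for line in Path(txt_path).read_text().splitlines():
        parts = line.split()
        if len(parts) != 5:
            continue  # skip blank, malformed, or comment lines
        label = parts[0]
        box = tuple(int(v) for v in parts[1:])  # (x1, y1, x2, y2)
        regions.append((label, box))
    return regions

# Tiny self-contained demo with a made-up annotation file:
demo = Path("demo_annotation.txt")
demo.write_text("plane 10 20 110 120\nnonplane 0 0 50 50\n")
regions = load_annotations(demo)
```

The corresponding .bmp image would then be read with any image library and cropped to these boxes to build plane/non-plane training examples.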
I have also worked as a teaching assistant on various courses, including the MSc Programming in C course (website), the Introduction to C++ service course for undergraduate engineering students (website), and more recently Tilo and Andrew's wonderful Image Processing and Computer Vision course (website).
Home: www.osianh.com
The Visual Information Laboratory www.bris.ac.uk/vi-lab