Paul Dickson

Visiting Assistant Professor of Computer Science
School of Cognitive Science
Hampshire College

Contact Information

Hampshire College
School of Cognitive Science
893 West St.
Amherst, MA 01002

office phone: x5861, off campus (413) 559-5861
office location: Adele Simmons Hall, room 204
email: pedcs[at]hampshire[dot]edu


Research Interests

Current Work
My current research focuses on the continued development of Presentations Automatically Organized from Lectures (PAOL), a lecture recording system that automatically and transparently captures all material presented. PAOL began as my dissertation at the University of Massachusetts Amherst. For years the university recorded lectures and turned them into indexed presentations that include a video of the lecturer, a table of contents linked to different parts of the lecture, and an enlargement of whatever appeared on the computer screen or white board. The recordings were part of the Multimedia Asynchronous Networked Individualized Courseware (MANIC) project in the Research in Presentation Production for Learning Electronically (RIPPLES) laboratory. These presentations were always hand generated from videos captured by human operators, because no method existed that could automatically store the material presented and identify the significant points that make up the table of contents. PAOL was created to do just that.

PAOL uses a device to capture the output of a lecturer's computer as it is sent to a projector. Computer vision techniques are then used to determine when significant changes occur and to store an image of the material along with the time the change occurred. This capture technique works far better than the best commercial system, identifying fewer insignificant changes. Unlike many computer capture systems, PAOL uses image processing techniques to determine significance, so it can detect significant events and generate content from any application displayed on screen.
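The core of this kind of change detection can be illustrated with a simple frame-differencing sketch. The function, thresholds, and test frames below are illustrative assumptions, not PAOL's actual implementation: a captured frame is compared against the previous one, and a "significant" change is flagged only when a large enough fraction of pixels differs, which filters out cursor movement and compression noise.

```python
import numpy as np

def significant_change(prev: np.ndarray, curr: np.ndarray,
                       pixel_thresh: int = 30,
                       frac_thresh: float = 0.01) -> bool:
    """Return True when enough pixels differ between two grayscale frames.

    pixel_thresh: per-pixel intensity difference that counts as "changed"
    frac_thresh: fraction of changed pixels needed to flag a content change
    (both thresholds are illustrative, not PAOL's actual values)
    """
    diff = np.abs(prev.astype(np.int16) - curr.astype(np.int16))
    changed = np.count_nonzero(diff > pixel_thresh)
    return changed / diff.size > frac_thresh

# A single noisy pixel should be ignored; a new slide should be flagged.
stable = np.full((120, 160), 200, dtype=np.uint8)
noisy = stable.copy()
noisy[0, 0] = 0                               # isolated noise pixel
new_slide = np.full((120, 160), 40, dtype=np.uint8)

print(significant_change(stable, noisy))      # False
print(significant_change(stable, new_slide))  # True
```

In practice a system like this would also record the timestamp whenever the function returns True, yielding the image/time pairs that become table-of-contents entries.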

High-resolution cameras face the front of the lecture room and capture the entire white board space at 15 frames per second. Simple vision techniques are used to locate the lecturer in these frames and to extract a window that centers on the lecturer. These smaller frames are used to create a video of the lecturer. Vision techniques are also used on the images of the front of the room to remove the instructor from the scene in order to better capture material written on the white board. These images are processed to heighten the contrast and sharpness of what is written or drawn on the board. The board images are then analyzed to determine when material has been added, and selected images and the times they appear are saved. Though some similar work has been done with constrained aspect ratio white boards, none has attempted content capture over such large surfaces and with such varied lighting.
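One simple way to localize the lecturer and extract a centered window, in the spirit of the techniques above, is to difference each frame against an empty-room background and crop around the column with the most foreground activity. Everything here (function name, thresholds, window size, toy frames) is a hypothetical sketch, not PAOL's actual method:

```python
import numpy as np

def lecturer_window(frame: np.ndarray, background: np.ndarray,
                    win_w: int = 64, thresh: int = 40) -> np.ndarray:
    """Crop a fixed-width window centered on the column of greatest
    foreground activity (a stand-in for the lecturer's position).
    Thresholds and window size are illustrative only."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    activity = (diff > thresh).sum(axis=0)     # changed pixels per column
    center = int(np.argmax(activity))
    half = win_w // 2
    # Clamp so the window stays inside the frame.
    left = min(max(center - half, 0), frame.shape[1] - win_w)
    return frame[:, left:left + win_w]

bg = np.full((90, 320), 230, dtype=np.uint8)   # bright, empty white board
frame = bg.copy()
frame[:, 150:170] = 10                         # dark "lecturer" region
win = lecturer_window(frame, bg)
print(win.shape)                               # (90, 64)
```

The same background difference can serve the complementary task described above: pixels flagged as foreground are treated as the instructor and excluded when compositing a clean image of the board.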

This material, with a soundtrack of the lecture, is sufficient to create an indexed presentation from the lecture that is similar to those previously produced by hand. No other system can capture and index material presented on both computer and white board. Also, no white board capture system has been shown to be as robust in accommodating poor lighting, changes in lighting conditions, and other variables. A rough sample presentation recorded by PAOL can be found here.

Previous Work
My previous project was designing an Under Vehicle Inspection System (UVIS). This project involved taking large numbers of images of the undercarriages of vehicles and mosaicking them into multiple 2D images that together convey 3D information. Details about this project can be found at UVIS.

In college I was a part of the Swarthmore robotics team that won a pair of hors d'oeuvre serving competitions at the AAAI Mobile Robot Competition and Exhibition. Details of this project can be found on my publications page.
