Accession Number : ADA289175

Title :   Using Virtual Active Vision Tools to Improve Autonomous Driving Tasks.

Descriptive Note : Technical rept.

Corporate Author : CARNEGIE-MELLON UNIV PITTSBURGH PA ROBOTICS INST

Personal Author(s) : Jochem, Todd M.

PDF Url : ADA289175

Report Date : OCT 1994

Pagination or Media Count : 23

Abstract : ALVINN is a simulated neural network for road following. In its most basic form, it is trained to take a subsampled, preprocessed video image as input and to produce a steering wheel position as output. ALVINN has demonstrated robust performance in a wide variety of situations, but it is limited by its lack of geometric models. Grafting geometric reasoning onto a non-geometric base would be difficult and would dilute the system's capabilities. A better approach is to leave the basic neural network intact, preserving its real-time performance and generalization capabilities, and to apply geometric transformations to the input image and the output steering vector. These transformations form a new set of tools and techniques called Virtual Active Vision. The thesis of this work is that Virtual Active Vision tools will improve the capabilities of neural-network-based autonomous driving systems.
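
The core idea in the abstract, warping the input image with a geometric transform (a "virtual camera") before handing it to the unmodified network, can be sketched as follows. This is an illustrative toy, not ALVINN's actual implementation: the homography, image sizes, and the stand-in `steering_from_image` function (which steers toward the brightest column rather than running a trained network) are all assumptions for demonstration.

```python
import numpy as np

def warp_image(image, H):
    """Apply homography H to a 2-D grayscale image by inverse mapping
    with nearest-neighbor sampling; out-of-bounds pixels become 0.
    This stands in for a Virtual Active Vision 'virtual camera' transform."""
    h, w = image.shape
    Hinv = np.linalg.inv(H)
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            src = Hinv @ np.array([x, y, 1.0])
            sx, sy = src[0] / src[2], src[1] / src[2]
            ix, iy = int(round(sx)), int(round(sy))
            if 0 <= ix < w and 0 <= iy < h:
                out[y, x] = image[iy, ix]
    return out

def steering_from_image(image):
    """Hypothetical stand-in for the trained network: steer toward the
    brightest column. Returns a value in [-1, 1] (negative = left)."""
    col = np.argmax(image.sum(axis=0))
    half = (image.shape[1] - 1) / 2.0
    return (col - half) / half

if __name__ == "__main__":
    img = np.zeros((30, 32))
    img[:, 20] = 1.0                      # bright "road" feature right of center
    shift = np.array([[1.0, 0.0, -8.0],   # virtual camera panned: view shifts 8 px
                      [0.0, 1.0,  0.0],
                      [0.0, 0.0,  1.0]])
    print(steering_from_image(img))                    # steers right (> 0)
    print(steering_from_image(warp_image(img, shift))) # feature now left (< 0)
```

The point of the sketch is that the "network" is untouched; only its input is geometrically remapped, which is the division of labor the abstract argues for.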

Descriptors :   *NEURAL NETS, *AUTONOMOUS NAVIGATION, *COMPUTER VISION, COMPUTERIZED SIMULATION, INPUT, OUTPUT, POSITION(LOCATION), STEERING, MODELS, REAL TIME, REASONING, THESES, GEOMETRIC FORMS, IMAGES, WHEELS, SELF OPERATION, GEOMETRY, VIDEO SIGNALS, TRANSPLANTATION, PREPROCESSING, TRANSFORMATIONS.

Subject Categories : Computer Programming and Software
      Computer Systems

Distribution Statement : APPROVED FOR PUBLIC RELEASE