

Discrete Event Observation Under Uncertainty

We present a new framework and representation for the general problem of observation. The system being studied can be considered a ``hybrid'' one, since we must report on distinct, discrete visual states that occur in the continuous, asynchronous, three-dimensional world, from two-dimensional observations that are sampled periodically. In other words, the system being observed and reported on consists of a number of continuous, discrete and symbolic parameters that vary over time in a manner that might not be ``smooth'' enough for the observer, due to visual obscurities and other perceptual uncertainties.
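To make this ``hybrid'' notion concrete, the following is a minimal sketch (with hypothetical names and an illustrative, not exhaustive, set of discrete states) of the kind of mixed state the observer must maintain: continuous 3-D estimates recovered from periodically sampled 2-D observations, alongside discrete and symbolic descriptors of the task.

<PRE>
# A minimal sketch (hypothetical names) of the hybrid state an observer maintains:
# continuous 3-D parameters estimated from periodic 2-D samples, together with
# discrete and symbolic descriptors of the manipulation task.
from dataclasses import dataclass, field
from enum import Enum, auto


class ManipulationState(Enum):
    """Discrete visual states of the task (illustrative, not the paper's exact set)."""
    APPROACH = auto()
    GRASP = auto()
    TRANSFER = auto()
    RELEASE = auto()


@dataclass
class HybridObservation:
    time_stamp: float                                 # sampling instant (s)
    hand_position: tuple[float, float, float]         # continuous 3-D estimate
    object_position: tuple[float, float, float]
    discrete_state: ManipulationState                 # current discrete visual state
    symbolic_labels: dict[str, str] = field(default_factory=dict)  # e.g. {"contact": "yes"}
    visibility: float = 1.0                           # 0..1, degraded by occlusion
</PRE>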

The problem of observing a moving agent has been addressed extensively in the literature. It was discussed in work on tracking targets and determining optic flow [2,7,15,33], recovering 3-D parameters of different kinds of surfaces [6,20,31,32], and in the context of other problems [1,3,8,11]. However, the need to recognize, understand and report on the different visual steps within a dynamic task has not been sufficiently addressed. In particular, there is a need for high-level symbolic interpretations of an agent's actions that attach meaning to 3-D world events, as opposed to the simple recovery of 3-D parameters and the consequent tracking movements that compensate for their variation over time.

In this work we establish a framework for the general problem of observation, recognition and understanding of dynamic visual systems, which may be applied to different kinds of visual tasks. We concentrate on the problem of observing a manipulation process in order to illustrate the ideas and motivation behind our framework. We use a discrete event dynamic system (DEDS) as a high-level structuring technique to model the visual manipulation system. Our formulation uses knowledge about the system and its different actions to solve the observer problem in an efficient, stable and practical way. The model incorporates the different hand/object relationships and the possible errors in the manipulation actions. It also uses different tracking mechanisms so that the observer can keep track of the workspace of the manipulating robot. A framework is developed for the hand/object interaction over time, and a stabilizing observer is constructed. Low-level modules are developed for recognizing the ``events'' that cause state transitions within the dynamic manipulation system. The process uses a coarse quantization of the manipulation actions in order to attain an active, adaptive and goal-directed sensing mechanism.
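The following is a minimal sketch of a DEDS automaton and a current-state observer in this spirit; the state and event names are illustrative stand-ins for the coarse quantization of the manipulation task, not the paper's exact model. The observer keeps the set of states the system could be in and refines it as low-level modules report identified events.

<PRE>
# A minimal sketch (illustrative state/event names) of a DEDS automaton and a
# current-state observer: the estimate is the set of possible states, narrowed
# each time a low-level module identifies an event.

# States: coarse quantization of the manipulation task.
STATES = {"no_contact", "approaching", "grasping", "moving_object", "error"}

# Transition relation: (state, event) -> next state.
TRANSITIONS = {
    ("no_contact", "hand_moves_toward_object"): "approaching",
    ("approaching", "contact_detected"): "grasping",
    ("approaching", "hand_retreats"): "no_contact",
    ("grasping", "object_lifted"): "moving_object",
    ("grasping", "grasp_slipped"): "error",
    ("moving_object", "object_released"): "no_contact",
}


def observer_step(possible_states: set[str], event: str) -> set[str]:
    """Advance the state estimate given one identified event.

    States with no transition on `event` are dropped; if the estimate becomes
    empty, the event was inconsistent with the model (a mistake or missed event),
    so the previous estimate is kept.
    """
    next_states = {
        TRANSITIONS[(s, event)]
        for s in possible_states
        if (s, event) in TRANSITIONS
    }
    return next_states if next_states else possible_states


# Example: starting with full ambiguity, two events narrow the estimate.
estimate = set(STATES)
for ev in ("hand_moves_toward_object", "contact_detected"):
    estimate = observer_step(estimate, ev)
print(estimate)  # {'grasping'}
</PRE>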

The work closely examines the possibilities for errors, mistakes and uncertainties in the visual manipulation system, the observer construction process and the event identification mechanisms, leading to a DEDS formulation with uncertainties, in which state transitions and event identifications are asserted according to a computed set of 3-D uncertainty models.
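As one illustration of uncertainty-gated event assertion, the sketch below (with hypothetical thresholds and names) asserts an event such as ``contact detected'' only when the estimated hand/object separation, together with the uncertainty of the recovered 3-D positions, makes the relationship unambiguous; otherwise the observer defers the assertion.

<PRE>
# A minimal sketch (hypothetical thresholds and names) of uncertainty-gated
# event assertion: assert "contact" or "no contact" only when the decision
# holds even at a conservative bound on the 3-D position uncertainty.
import math


def assert_contact(hand_xyz, object_xyz, sigma_hand, sigma_object,
                   contact_radius=0.01, k=3.0):
    """Return 'contact', 'no_contact', or 'uncertain'.

    sigma_* are scalar standard deviations (m) of the recovered 3-D positions;
    k controls how conservative the assertion is (here a 3-sigma bound).
    """
    separation = math.dist(hand_xyz, object_xyz)
    margin = k * math.hypot(sigma_hand, sigma_object)  # combined uncertainty

    if separation + margin < contact_radius:
        return "contact"        # close even in the worst case
    if separation - margin > contact_radius:
        return "no_contact"     # far even in the worst case
    return "uncertain"          # defer: ambiguous at this uncertainty level


print(assert_contact((0.0, 0.0, 0.0), (0.004, 0.0, 0.0), 0.001, 0.001))  # 'contact'
</PRE>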

We motivate and describe a DEDS automaton model for visual observation in the next section, and then formulate our framework for the manipulation process and the observer construction. We then develop efficient low-level event-identification mechanisms for determining the different manipulation movements in the system and for moving the observer. Next, the uncertainty levels are discussed, and some results from testing the system are presented.



