Experiment

In our experiments, we used a B/W CCD camera mounted on a Puma 560 robot arm (see Figure 10) and simulated the operation of a CMM (coordinate measuring machine) probe. Control signals generated by the DRFSM were converted into simple English commands and displayed to a human operator, who moved the simulated probe accordingly.
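The translation from control signals to operator commands can be sketched as a simple lookup. This is a hypothetical illustration, not the authors' implementation; the signal and command names are invented here.

```python
# Hypothetical sketch: translating DRFSM output signals into the
# English commands displayed to the human operator who moves the
# simulated probe. Signal names are illustrative assumptions.
SIGNAL_TO_COMMAND = {
    "probe_close": "Move the probe closer to the current feature.",
    "probe_far": "Withdraw the probe from the part.",
    "next_feature": "Position the probe over the next open region.",
}

def command_for(signal):
    """Return the operator instruction for a DRFSM output signal."""
    return SIGNAL_TO_COMMAND.get(signal, "Hold the probe in place.")
```

Unknown signals fall back to a safe "hold" instruction, so the operator is never left without guidance.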

For the state machine to provide control, it must be aware of state changes in the system. As exploration proceeds, the camera supplies images that are interpreted by a set of 2D and 3D vision algorithms and used to drive the DRFSM. These algorithms, which include thresholding, edge detection, region growing, and stereo vision, are described in greater detail in a technical report [4]. The robot arm positions the camera in the workspace and repositions it when occlusions arise. Our latest experiments used the robot together with GIJoe-generated automata; one such experiment is described below.
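As a minimal sketch of one of the 2D steps listed above, binary thresholding can be written in a few lines. The image representation (a nested list of 0-255 intensities) and the threshold value are assumptions made for illustration; the actual pipeline is described in the technical report [4].

```python
# Minimal thresholding sketch: produce a binary image in which
# pixels at or above intensity t become 1 and all others become 0.
# The threshold t = 128 is an illustrative assumption.
def threshold(image, t=128):
    """Return a binary image from a grayscale nested-list image."""
    return [[1 if px >= t else 0 for px in row] for row in image]

img = [
    [10, 200, 220],
    [30, 180, 40],
]
print(threshold(img))  # [[0, 1, 1], [0, 1, 0]]
```

A binary image of this kind is the usual input to the subsequent region-growing step, which groups connected foreground pixels into candidate features.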

The DRFSM generated by GIJoe is shown in Figure 11. This machine has the following states:

A part similar to the fuel pump cover of a Chevrolet engine was used to test the exploration automaton. This piece offers interesting features and a complex recursive structure. It was placed within view of the camera, and the room lighting was adjusted to eliminate reflections and shadows on the part to be explored.
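The recursive structure of the part is what motivates the "dynamic recursive" aspect of the DRFSM: discovering a closed region spawns a new copy of the machine on the enclosed feature. The following is a hedged sketch of that idea, not the authors' implementation; the feature names and dictionary representation are invented for illustration.

```python
# Hedged sketch of recursive exploration: each feature may contain
# sub-features (e.g. holes within the cover), and finding a closed
# region spawns a new exploration of the enclosed feature.
# Feature names are illustrative assumptions.
def explore(feature):
    """Visit a feature, then recursively explore its sub-features."""
    visited = [feature["name"]]
    for sub in feature.get("subfeatures", []):
        visited += explore(sub)  # new machine instance, one level deeper
    return visited

part = {"name": "cover", "subfeatures": [
    {"name": "hole_A", "subfeatures": [{"name": "hole_A1"}]},
    {"name": "hole_B"},
]}
print(explore(part))  # ['cover', 'hole_A', 'hole_A1', 'hole_B']
```

The depth-first order shown here mirrors how a probe would finish exploring an enclosed feature before returning to the surrounding region.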

Some of the images from the experiment are shown in sequence in Figure 12.




sobh@bridgeport.edu
Mon Sep 12 15:48:37 MDT 1994