
Control Process Experiments

This experiment was performed to integrate the visual system with the state machine. An appropriate DRFSM was generated by observing the part and generating the feature information. A mechanical part was placed on a black velvet background to simplify the vision algorithms, and the camera was mounted on a stationary tripod so that the part was always in view. A simulated CMM probe was moved in response to the textual output of the controlling machine.

Once the first level of the DRFSM was created, the experiment proceeded as follows: First, an image was captured from the camera. Next, the image processing modules found the position of the part, the number of features observed, the recursive string, and the location of the probe. A program using this information produced a state signal appropriate for the scene. The signal was read by the state machine and the next state was determined. Each closed feature was treated as a recursive problem, and as the probe entered a closed region, a new level of the DRFSM was generated with a new transition vector.
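
The per-frame reduction from vision output to a state signal can be sketched as follows; the Scene fields, the signal names, and the mapping of signals to states A through D are illustrative assumptions, not the actual experimental code.

```python
# Sketch of the per-frame step described above: the output of the image
# processing modules is reduced to one state signal for the DRFSM.
# Scene fields and signal names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Scene:
    probe_visible: bool       # probe located in the image
    probe_to_part: float      # distance from probe to nearest feature, in inches
    touching: bool            # probe in contact with the part

def state_signal(scene: Scene, near_threshold: float = 2.0) -> str:
    """Reduce one frame's vision output to a state signal (assumed mapping)."""
    if not scene.probe_visible:
        return "no-probe"     # would keep the machine in state A
    if scene.touching:
        return "contact"      # stable state D
    if scene.probe_to_part < near_threshold:
        return "near"         # stable state C
    return "far"              # stable state B

# Example: a frame with the probe 1.3 inches from the part yields "near".
print(state_signal(Scene(probe_visible=True, probe_to_part=1.3, touching=False)))
```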

The specific dynamic recursive DEDS automaton generated for the test is shown in Figure 2, with the set of states {Initial, EOF, Error, A, B, C, D} and the set of transitional events {1, 2, 3, 4, 5, 6, 7, 8, 9, eof}. States A through D are the stable states that describe the configuration of the probe and part in the scene, as described below.

The transitional events correspond to tests such as ``distance from probe to feature is less than 2 inches,'' where the distance threshold varies with the depth of recursion. This adds robustness to the system by allowing ranges to be specified rather than exact thresholds.
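
As an illustration, the proximity test can be parameterized by the recursion depth; the halving rule below is an assumed example of such a parameterization, not the system's actual rule.

```python
# Illustrative depth-dependent transition test.  The 2-inch base range is
# from the text; the halving per recursion level is an assumed example.

def near_feature(distance: float, depth: int, base_range: float = 2.0) -> bool:
    """True if the probe is within the proximity range for this recursion level."""
    return distance < base_range / (2 ** depth)

# A 1.5-inch gap counts as "near" at the outer level but not one level deeper
# (e.g., while exploring a hole).
print(near_feature(1.5, depth=0))   # True
print(near_feature(1.5, depth=1))   # False
```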

State transitions were controlled by the input signals supplied by the intermediate vision programs. As described above, there are four stable states (A, B, C, and D) that describe the configuration of the probe and part in the scene. Three other states, Initial, Error, and EOF, describe the state of the system in special cases.
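
One way to realize this is a transition table keyed on (state, signal) pairs, with any pair not listed mapped to Error. The entries below are a partial, assumed table chosen to be consistent with the error cases described next; the full transition vector and event numbering (1 through 9, eof) are those of Figure 2.

```python
# Partial, assumed transition table: valid (state, signal) pairs map to the
# next state, and anything else is reported as an error.  The real machine's
# events and full transition vector are given in Figure 2.

TRANSITIONS = {
    ("Initial", "no-probe"): "A",   # part in view, probe not yet in the scene
    ("A", "far"): "B",              # probe appears, far from the part
    ("B", "near"): "C",             # probe approaches the part
    ("C", "contact"): "D",          # probe touches the part
    ("D", "near"): "C",             # probe backs off after contact
    ("C", "far"): "B",              # probe retreats further
    ("D", "eof"): "EOF",
}

def step(state: str, signal: str) -> str:
    """Return the next state, or Error for a transition that is not allowed."""
    return TRANSITIONS.get((state, signal), "Error")

# A probe that first becomes visible when already close to the part would ask
# for an A-to-C transition, which is not allowed, so an error is reported.
print(step("A", "near"))   # Error
```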

In one sequence, the probe was introduced into the scene and moved in a legal way (accepted by the stable states of the machine) until contact was made. Next, the probe backed off and approached again until the probe and part were in proximity. The automaton was then forced into an error state by approaching too quickly: the probe was not seen until it was already close to the part body, and because a transition from state A to state C is invalid, the error state was reached. Images recognized as corresponding to these states during the experiment are shown in Figure 3. As that figure shows, the part used was a simple one with only one hole; its part string was recovered to be the following: Outside(Hole())

In another sequence, the part was more complex. Exploration of a hole in this part is shown in Figure 4. Its part string was recovered to be the following: Outside(Hole(),Pocket(Hole()),Hole())
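
The nesting encoded by such part strings can be made explicit with a small recursive parse; the sketch below is only an illustration of the string's structure, not the system's internal feature representation.

```python
# Minimal recursive parse of part strings such as
# "Outside(Hole(),Pocket(Hole()),Hole())".  Each feature becomes a
# (name, children) pair; this is an illustrative sketch only.

def parse_feature(s: str, i: int = 0):
    """Parse one Name(child,child,...) feature starting at index i."""
    open_paren = s.index("(", i)
    name = s[i:open_paren]
    children = []
    i = open_paren + 1
    while s[i] != ")":
        child, i = parse_feature(s, i)
        children.append(child)
        if s[i] == ",":
            i += 1
    return (name, children), i + 1   # skip the closing ")"

tree, _ = parse_feature("Outside(Hole(),Pocket(Hole()),Hole())")
print(tree)
# ('Outside', [('Hole', []), ('Pocket', [('Hole', [])]), ('Hole', [])])
```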

As before, the probe was introduced into the scene and moved toward the part. Next, the probe backed off and approached again until the probe and the part were in proximity. The automaton was forced into an error state by the sudden disappearance of the probe when it was very close to the part. Because a transition from state C to state A is invalid, the error state was reported.

Since touch sensing in this context requires contact between a precision instrument and the part, a robust control mechanism is highly desirable. For both parts, the coordinated operation of the camera and probe, together with the detection of error conditions, demonstrated the effectiveness of the DRFSM in controlling disparate types of systems to achieve safe operation.

A second experiment was run using a DRFSM generated by GIJoe, an interactive graphical tool for building FSMs that was modified to build DRFSMs [6]. The output of GIJoe was compiled and linked directly to the experimental sensing code used in the first experiment.

The controlling DEDS machine performed as expected.




