The aim of this project is to add 2D vision to the BARMAN demonstrator shown in the figure. The BARMAN is composed of two DENSO robots. In its basic release it picks up bottles, uncorks them and places them on the rotating table. It then rotates the table, so that people can pick them up and drink.

The tasks of the Barman are summarized here:

(i) to survey the foreground and check if empty glasses are present;
(ii) to rotate the table and move glasses to the background;
(iii) to monitor for a bottle on the conveyor, recognize it, pick it up, uncork it and fill the glasses;
(iv) to rotate the table to move the glasses to the foreground zone.

These simple operations require that suitable image processing is developed and validated. The software environment is the Halcon Library 9.0; the whole project is developed in VB2005. The robot platform is ORiN 2 (from DENSO).
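
As a reference for how the VB2005 application reaches the robots, the following is a minimal sketch of opening a connection through the ORiN 2 CAO engine. The interop namespace, provider name, connection string and object names are illustrative assumptions and must be adapted to the actual BARMAN configuration; this is not the project's actual start-up code.

' Minimal sketch: connecting the VB2005 application to a DENSO controller
' through the ORiN 2 CAO engine (COM). Provider name, IP address and object
' names are assumptions to be adapted to the real BARMAN setup.
Imports CAOLib

Module RobotConnectionSketch

    Public Function ConnectRobot() As CaoRobot
        ' Create the CAO engine and take the default workspace
        Dim caoEngine As New CaoEngine()
        Dim caoWorkspace As CaoWorkspace = caoEngine.Workspaces.Item(0)

        ' Register the DENSO controller through its ORiN provider
        ' (controller name, provider and connection string are example values)
        Dim caoController As CaoController = _
            caoWorkspace.AddController("RC1", "CaoProv.DENSO.NetwoRC", "", "Conn=eth:192.168.0.1")

        ' Obtain the robot object used later for the motion commands
        Dim caoRobot As CaoRobot = caoController.AddRobot("Arm0", "")

        ' The returned object is used by the vision/motion code; all CAO objects
        ' should be released when the application shuts down
        Return caoRobot
    End Function

End Module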

The work performed so far implements the following basic functions:

(i) Calibration of the cameras and of the robot: the aim is to define an absolute reference system for spatial point coordinates, so that camera coordinates can be mapped to robot coordinates and vice versa. To perform this task a predefined calibration master is used; it is acquired under different perspectives (see Fig. 1 and Fig. 2).

The acquired images are processed by the Halcon calibration functions, and both the extrinsic and the intrinsic camera parameters are estimated. In parallel, special procedures for robot calibration have been reproduced and combined with the parameters estimated for the cameras (Fig. 3). A sketch of the corresponding camera calibration code is given after Fig. 3.

master-camera.JPG

Fig. 1: Acquisition of the master at the background zone (left); corresponding image in the Halcon environment (right)

immagini_calibrazione.JPG

Fig. 2: Images of the master before (left) and after (right) processing.

master_robot.JPG

Fig. 3: Calibration process of the robot.
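
To make the calibration step concrete, the sketch below outlines the HALCON 9.0 calibration sequence as it can be called from VB2005 through the HALCON/.NET interface (HOperatorSet). The calibration plate description file, the operator thresholds and the handling of the starting camera parameters are illustrative assumptions; the actual project code may differ in these details.

' Sketch of the camera calibration step with HALCON 9.0 called from VB2005
' through the HALCON/.NET interface. File names, thresholds and the handling
' of the start parameters are illustrative assumptions.
Imports HalconDotNet

Module CameraCalibrationSketch

    ' startCamPar holds the initial guess of the internal camera parameters
    ' (focal length, distortion, cell size, principal point, image size).
    Public Function CalibrateCamera(ByVal imageFiles As String(), _
                                    ByVal startCamPar As HTuple) As HTuple
        ' 3D coordinates of the marks of the standard calibration plate
        Dim x As HTuple = Nothing, y As HTuple = Nothing, z As HTuple = Nothing
        HOperatorSet.CaltabPoints("caltab.descr", x, y, z)

        Dim allRows As New HTuple, allCols As New HTuple, allPoses As New HTuple

        For Each file As String In imageFiles
            Dim image As HObject = Nothing, caltabRegion As HObject = Nothing
            HOperatorSet.ReadImage(image, file)

            ' Locate the plate and extract the calibration marks and a start pose
            HOperatorSet.FindCaltab(image, caltabRegion, "caltab.descr", 3, 112, 5)
            Dim rows As HTuple = Nothing, cols As HTuple = Nothing, pose As HTuple = Nothing
            HOperatorSet.FindMarksAndPose(image, caltabRegion, "caltab.descr", startCamPar, _
                128, 10, 18, 0.9, 15, 100, rows, cols, pose)

            ' Accumulate the observations of all the images
            Dim tmp As HTuple = Nothing
            HOperatorSet.TupleConcat(allRows, rows, tmp) : allRows = tmp
            HOperatorSet.TupleConcat(allCols, cols, tmp) : allCols = tmp
            HOperatorSet.TupleConcat(allPoses, pose, tmp) : allPoses = tmp
        Next

        ' Estimate the intrinsic parameters (and one extrinsic pose per image)
        Dim camParam As HTuple = Nothing, finalPoses As HTuple = Nothing, errors As HTuple = Nothing
        HOperatorSet.CameraCalibration(x, y, z, allRows, allCols, startCamPar, _
            allPoses, "all", camParam, finalPoses, errors)

        Return camParam
    End Function

End Module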

(ii) Detection of empty glasses in the foreground: flexible detection has been implemented to monitor the number of empty glasses, and some special cases have been taken into account. Some examples are shown in the following images; an illustrative code sketch follows Fig. 5.

glass_detection.JPG

Fig. 4: Detection of a single glass (left); detection of three glasses (right).

glass_detection_2.JPG

Fig. 5: Detection of four glasses very close to each other (left); detection in the presence of glasses turned upside down (right).
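
The detection strategy used in the project is not detailed here; as an illustration, the following sketch shows one possible way to count glasses with HALCON shape-based matching from VB2005. The template image and the matching parameters are assumptions, not the project's actual values.

' Illustrative sketch: counting glasses in the foreground with HALCON
' shape-based matching (HALCON/.NET from VB2005). Template image and
' matching parameters are assumptions.
Imports HalconDotNet

Module GlassDetectionSketch

    Private glassModelID As HTuple

    ' Train a shape model once from a reference image of a single empty glass
    Public Sub TrainGlassModel(ByVal templateFile As String)
        Dim template As HObject = Nothing
        HOperatorSet.ReadImage(template, templateFile)
        HOperatorSet.CreateShapeModel(template, "auto", 0.0, 6.28, "auto", "auto", _
            "use_polarity", "auto", "auto", glassModelID)
    End Sub

    ' Return the number of glasses found and their image coordinates
    Public Function CountGlasses(ByVal image As HObject, _
                                 ByRef rows As HTuple, ByRef cols As HTuple) As Integer
        Dim angles As HTuple = Nothing, scores As HTuple = Nothing
        ' Search for up to 10 instances with a minimum score of 0.7
        HOperatorSet.FindShapeModel(image, glassModelID, 0.0, 6.28, 0.7, 10, 0.5, _
            "least_squares", 0, 0.9, rows, cols, angles, scores)
        Return scores.Length
    End Function

End Module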

(iii) Detection of glasses in the background: the positions of the glasses in the background are calculated very precisely, since the camera is calibrated. In addition, it is possible to recognize semi-empty glasses and glasses turned upside down. This detection is mandatory to guarantee that the filling operation is performed correctly. Fig. 6 shows some significant examples; a sketch of the image-to-world conversion follows the figure.

back_ground_glass_detection_1_Low.JPG

Fig. 6: Detection of the position of the glasses in the background. The system detects the presence of semi-empty glasses (center image) and upside-down glasses, and does not mark them as available for subsequent operations.
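
Since the background camera is calibrated, the image coordinates of the detected glasses can be converted to metric coordinates on the table plane. A minimal sketch of this conversion with HALCON's image_points_to_world_plane operator is given below; the camera parameters and the pose of the table plane are assumed to come from the calibration step described above.

' Sketch: converting detected glass positions from image coordinates to
' world coordinates on the table plane, using the calibrated camera.
' camParam and tablePose are assumed to come from the calibration step.
Imports HalconDotNet

Module GlassPositionSketch

    Public Sub GlassImageToWorld(ByVal camParam As HTuple, ByVal tablePose As HTuple, _
                                 ByVal rows As HTuple, ByVal cols As HTuple, _
                                 ByRef xWorld As HTuple, ByRef yWorld As HTuple)
        ' Scale "m" returns the world coordinates in metres
        HOperatorSet.ImagePointsToWorldPlane(camParam, tablePose, rows, cols, "m", _
            xWorld, yWorld)
    End Sub

End Module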

(iv) Bottle detection: three different types of bottles are recognized as ‘good’ bottles by the system. The system is able to detect any other object which does not match the bottles above, and it can determine whether the ‘unknown’ object can be picked up and disposed of by the Barman, or whether it must be removed manually. An illustrative matching sketch follows Fig. 8.

bottle_detection_NOK.JPG

Fig. 8: Detection of unmatched objects.
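
As an illustration of how the three bottle types could be matched and unknown objects flagged, the sketch below tries each trained bottle shape model in turn; the model training, the score threshold and the angle range are assumptions, since the actual recognition criteria of the project are not reported here.

' Illustrative sketch: classify the object on the conveyor by trying the
' shape models of the three known bottle types; anything that matches none
' of them is reported as 'unknown'. Model IDs and thresholds are assumptions.
Imports HalconDotNet

Module BottleDetectionSketch

    ' Returns the index (0..2) of the matched bottle type, or -1 for unknown objects
    Public Function ClassifyBottle(ByVal image As HObject, ByVal bottleModels As HTuple()) As Integer
        For i As Integer = 0 To bottleModels.Length - 1
            Dim row As HTuple = Nothing, col As HTuple = Nothing
            Dim angle As HTuple = Nothing, score As HTuple = Nothing
            HOperatorSet.FindShapeModel(image, bottleModels(i), -0.2, 0.4, 0.8, 1, 0.5, _
                "least_squares", 0, 0.9, row, col, angle, score)
            If score.Length > 0 Then
                Return i    ' recognized as a 'good' bottle of type i
            End If
        Next
        Return -1           ' unmatched object: dispose automatically or remove manually
    End Function

End Module
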
(v) Filling of the glasses: the robot is moved to the positions detected by the camera (see Fig. 6) and fills the glasses. Since both the cameras and the robots share the same reference system, this operation is ‘safe’: the robot knows where the glasses are. A sketch of the corresponding motion command follows Fig. 9.

glass_filling.JPG

Fig. 9: Filling operation.
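
To tie the vision and robot sides together, the following sketch moves the robot above a glass position expressed in the shared reference system, using the CaoRobot object obtained from the ORiN 2 connection sketch above. The pose string format, the interpolation modes and all numeric values are illustrative assumptions and do not reproduce the actual BARMAN pouring trajectory.

' Sketch: moving the robot above a glass detected by the calibrated camera
' and lowering it for the pouring motion. The CaoRobot object comes from the
' ORiN 2 connection sketch above; poses and orientations are example values.
Imports CAOLib

Module GlassFillingSketch

    Public Sub FillGlass(ByVal robot As CaoRobot, ByVal xMm As Double, ByVal yMm As Double)
        ' Approach pose above the glass (position in mm, orientation in degrees,
        ' figure value is an example) expressed in the common reference system
        Dim approachPose As String = _
            "P(" & xMm.ToString() & ", " & yMm.ToString() & ", 250, 180, 0, 180, 5)"

        ' PTP motion (interpolation mode 1) to the approach pose
        robot.Move(1, approachPose, "")

        ' Lower the bottle over the glass with a linear (CP) motion, mode 2;
        ' the actual pouring trajectory of the BARMAN is not reproduced here
        Dim pourPose As String = _
            "P(" & xMm.ToString() & ", " & yMm.ToString() & ", 180, 180, 0, 180, 5)"
        robot.Move(2, pourPose, "")
    End Sub

End Module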