Interactive Robotic Vacuum



I participated in the 2019 SNU Creative Design Fair as part of a team of four. The fair was held by the College of Engineering at Seoul National University, where participating teams competed on creativity and the ability to implement their ideas. Our team built an Interactive Robotic Vacuum with a novel shape and locomotion. Our work has three distinctive features: 1) a pointed body shape, 2) mecanum wheels, and 3) a smartphone application with a hand-gesture detection model. My role was to implement the algorithm for automatic control and the smartphone application for human-robot interaction (HRI). I was also heavily involved in building the robot body from scratch. Our team won 2nd place in the competition.

Because ordinary robotic vacuums have round bodies with inlets on their underside, they cannot sweep all the way into corners. Unlike these vacuums, our robot has a pointed body with the vacuum inlet on one side of the tip, allowing it to sweep corners thoroughly.

However, because of the position of the inlet, the area the robot sweeps is not the same as its trajectory. When controlling the robot, we therefore need to consider not only its locomotion but also its heading. To address this, we equipped the robot with mecanum wheels, also known as omnidirectional wheels, which offer far greater freedom of maneuver. With mecanum wheels, the robot can change its instantaneous center of rotation (ICR) flexibly, and by adjusting the ICR dynamically it can sweep walls of any convex or concave shape with its tip.
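To give a feel for how the wheel commands relate to body motion, here is a minimal sketch of the standard mecanum-wheel inverse kinematics: a desired body velocity (forward, sideways) plus a yaw rate is mapped to the four wheel speeds. This is an illustration, not our Arduino firmware, and the wheel radius and body dimensions are placeholder values.

```python
WHEEL_RADIUS = 0.03   # m, hypothetical value
HALF_LENGTH = 0.10    # m, center to front/rear axle (assumed)
HALF_WIDTH = 0.12     # m, center to left/right wheels (assumed)

def mecanum_inverse_kinematics(vx, vy, omega):
    """Return (front_left, front_right, rear_left, rear_right) wheel speeds in rad/s.

    vx: forward velocity (m/s), vy: leftward velocity (m/s),
    omega: counter-clockwise yaw rate (rad/s).
    """
    k = HALF_LENGTH + HALF_WIDTH
    fl = (vx - vy - k * omega) / WHEEL_RADIUS
    fr = (vx + vy + k * omega) / WHEEL_RADIUS
    rl = (vx + vy - k * omega) / WHEEL_RADIUS
    rr = (vx - vy + k * omega) / WHEEL_RADIUS
    return fl, fr, rl, rr

# Example: translate diagonally while slowly rotating, the kind of combined
# motion that keeps the tip pressed into a corner as the body moves along a wall.
print(mecanum_inverse_kinematics(vx=0.2, vy=0.1, omega=0.3))
```

Because all three velocity components can be commanded independently, the ICR can be placed anywhere, including at the tip itself, which is what makes corner sweeping with the pointed body practical.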

The control system was implemented on an Arduino board, and an RPLiDAR was used to sense topographical features (e.g., walls and obstacles). The rotating laser sensor in the RPLiDAR measures the distance to obstacles in all directions. From the sensor data we built a coarse-grained egocentric map, which is used to classify the robot's situation (e.g., an obstacle ahead or an upcoming corner). Given the map, a simple rule-based algorithm automatically controls the robot to sweep along a flat wall or into a corner.
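The sketch below illustrates the idea in simplified form: bin the 360-degree scan into coarse angular sectors, then apply a few rules to the front and side distances. The actual controller ran on the Arduino; the sector count, thresholds, and command names here are hypothetical.

```python
import math

NUM_SECTORS = 12           # 30-degree sectors (assumed granularity)
WALL_FOLLOW_DIST = 0.25    # m, desired distance to the wall (assumed)
OBSTACLE_DIST = 0.15       # m, turn-into-corner threshold (assumed)

def build_egocentric_map(scan):
    """scan: list of (angle_deg, distance_m) pairs from the rotating LiDAR.
    Returns the minimum distance seen in each angular sector."""
    sectors = [math.inf] * NUM_SECTORS
    for angle, dist in scan:
        idx = int(angle % 360 // (360 // NUM_SECTORS))
        sectors[idx] = min(sectors[idx], dist)
    return sectors

def decide(sectors):
    """Tiny rule set: follow the wall on the right, and pivot the tip
    into the corner when the front distance closes in."""
    front = sectors[0]                       # sector roughly straight ahead
    right = sectors[NUM_SECTORS * 3 // 4]    # sector roughly at -90 degrees
    if front < OBSTACLE_DIST:
        return "pivot_tip_into_corner"       # corner or obstacle ahead
    if right > WALL_FOLLOW_DIST * 1.5:
        return "strafe_toward_wall"          # drifted away from the wall
    return "follow_wall_forward"

# Fake scan of open space all around, just to exercise the rules.
scan = [(a, 0.5) for a in range(0, 360, 5)]
print(decide(build_egocentric_map(scan)))   # -> "strafe_toward_wall"
```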

The lack of interaction is one of the major sources of users' low confidence in robots. Our team devised a novel HRI platform in which users interact with the robot through hand gestures. At the core of the platform, a smartphone application detects the user's hand gesture and sends commands to the robot according to the detected gesture. The hand-gesture detector was implemented with MediaPipe (https://github.com/google/mediapipe), an open-source package developed by Google, using the pre-trained hand-tracking model provided by the package developers. As a proof of concept, we implemented an application that moves the robot to align with the user's hand position. This allows users to control the robot with their hands and clean the floor with a sweeping gesture.
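For illustration, here is a minimal desktop sketch of the gesture-to-command idea using MediaPipe's Python Hands API rather than the actual smartphone app. The command names and position thresholds are hypothetical; the real application ran the pre-trained hand-tracking model on the phone and sent the resulting commands to the robot.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def hand_to_command(frame_bgr, hands):
    """Map the detected hand's horizontal position to a steering command."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    results = hands.process(rgb)
    if not results.multi_hand_landmarks:
        return "stop"                      # no hand detected: stay put
    wrist = results.multi_hand_landmarks[0].landmark[mp_hands.HandLandmark.WRIST]
    if wrist.x < 0.4:                      # hand in the left part of the frame
        return "move_left"
    if wrist.x > 0.6:                      # hand in the right part of the frame
        return "move_right"
    return "move_forward"                  # hand centered: follow the hand

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.5) as hands:
    ok, frame = cap.read()
    if ok:
        print(hand_to_command(frame, hands))  # e.g. "move_left"
cap.release()
```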

DEMO