
Scuba enthusiasts: Your future dive buddy might not be human

Research on responsive robots, programmed to help humans in some way and to enable fast, effective human-robot interaction, usually takes place on land. Robots that function underwater are another matter entirely. This is the novel frontier of York University’s Department of Electrical Engineering & Computer Science in the Lassonde School of Engineering, a recognized leader in robotics.

In a paper presented at the 14th annual ACM/IEEE International Conference on Human-Robot Interaction (2019) in Korea, graduate student Robert Codd-Downey and Professor Michael Jenkin describe a new method they developed for underwater human-robot interaction.

This image illustrates the diver’s body parts that the robot is able to recognize.

“This research borrows from the long-standing history of diver communication using hand signals. It presents an exciting opportunity to develop a marketable product that assists robot-diver interactions underwater,” Codd-Downey explains.

The work was funded by the Natural Sciences and Engineering Research Council of Canada’s Canadian Robotics Network and Vision: Science to Applications (VISTA). Codd-Downey received a VISTA doctoral scholarship to undertake the work in 2017. Jenkin, a member of both VISTA and the Centre for Vision Research at York, supervised the work. He is an international expert in mobile, visually guided autonomous robotics; computer vision; and virtual reality.

From left: Robert Codd-Downey and Michael Jenkin

VISTA advances visual science through research

This is an excellent example of the groundbreaking research that VISTA supports. VISTA is a collaborative program funded by the Canada First Research Excellence Fund that builds on York’s world-leading interdisciplinary expertise in biological and computer vision.

VISTA essentially asks: “How can machine systems provide adaptive visual behaviour in real-world conditions?” Answering this question will advance vision science and enable exciting, widespread applications for visual health and technologies.

“Our overarching aim is to advance visual science through research that spans computational and biological perspectives and results in real-world applications,” says VISTA’s Scientific Director, Professor Doug Crawford.

VISTA will propel Canada as a global leader in the vision sciences

Underwater environments present particular challenges for robots

In this new work, Codd-Downey and Jenkin sought a better way for humans and robots to interact underwater. “Current methods for human-robot interaction underwater seem antiquated in comparison to their terrestrial counterparts,” Codd-Downey explains. “And humans can’t operate their vocal cords underwater, which prevents voice communication,” he adds.

Other complications include the fact that acoustic modems are bulky and power-intensive, and that mechanical interaction devices, such as keyboards, joysticks and touchscreens, don’t function properly underwater without significant protection from the elements, which makes them difficult to operate.

So, the researchers turned to tried-and-true diver communication hand signals. The Professional Association of Diving Instructors (PADI) defines a number of common hand signals. The gesture language combines hand configuration with arm trajectories to encode messages. Commercial divers and technical diving groups have defined additional signals useful for specific tasks.

Common PADI hand signals. The hand signal illustrations are provided by PADI and are used with PADI’s permission.
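As a rough illustration of how such a vocabulary pairs hand configurations with arm trajectories to encode messages, a toy lookup table might look like the sketch below. The pose and trajectory names are hypothetical, not drawn from PADI’s materials or from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Signal:
    """One entry in a toy diver-signal vocabulary (names are illustrative)."""
    meaning: str      # the message the signal encodes
    hand_pose: str    # static hand configuration
    trajectory: str   # arm motion, or "static" if the pose alone suffices

VOCABULARY = [
    Signal("ok",      hand_pose="thumb_index_circle", trajectory="static"),
    Signal("ascend",  hand_pose="thumb_up",           trajectory="static"),
    Signal("descend", hand_pose="thumb_down",         trajectory="static"),
    Signal("problem", hand_pose="flat_hand",          trajectory="rocking"),
]

def lookup(hand_pose: str, trajectory: str) -> str:
    """Return the meaning of an observed (pose, trajectory) pair."""
    for signal in VOCABULARY:
        if signal.hand_pose == hand_pose and signal.trajectory == trajectory:
            return signal.meaning
    return "unknown"

print(lookup("thumb_up", "static"))  # ascend
```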

How does the robot recognize the hand gestures?

To ensure that the robot recognizes the gestures, Codd-Downey and Jenkin broke the process down into four steps, detailed below.

Step 1: Object recognition: The robot must first recognize the body parts of the diver with whom it will be communicating, specifically the person’s head and hands. The researchers trained the robot to detect these parts with a high degree of accuracy.

Step 2: Hand and head tracking: Next, the robot tracks the diver’s hands and head as they move from frame to frame.

Step 3: Hand pose classification: The robot then identifies the number of extended fingers and the direction each palm faces. The combination of these two parameters (fingers, direction) defines 25 different classes, each corresponding to a distinct hand pose used in PADI gestures.

Step 4: Translation: Finally, the data generated by the previous three steps is interpreted, or translated, into messages the robot can act on.
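Putting the four steps together, a minimal sketch of such a pipeline might look like the following. Every name and stub behaviour here is an illustrative assumption; the paper’s actual components are learned detectors and classifiers rather than hand-written rules.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Detection:
    part: str    # "head", "left_hand" or "right_hand"
    box: tuple   # (x, y, w, h) in image coordinates

def detect_parts(frame) -> List[Detection]:
    """Step 1: object recognition. A trained detector would run here;
    this stub pretends each frame contains a head and one hand."""
    return [Detection("head", (10, 10, 40, 40)),
            Detection("right_hand", (60, 50, 30, 30))]

def track(previous: Optional[Detection], current: Detection) -> Detection:
    """Step 2: tracking. Real tracking associates and smooths detections
    across frames; this stub passes the detection through."""
    return current

def classify_pose(hand: Detection) -> str:
    """Step 3: hand pose classification into one of the 25 classes
    (finger count combined with palm direction). Stubbed here."""
    return "ok_sign"

POSE_TO_TOKEN = {"ok_sign": "OK"}  # unrecognized poses map to IDLE

def translate(poses: List[str]) -> List[str]:
    """Step 4: collapse the per-frame pose stream into gesture tokens,
    dropping consecutive repeats (e.g. IDLE-OK-IDLE)."""
    tokens: List[str] = []
    for pose in poses:
        token = POSE_TO_TOKEN.get(pose, "IDLE")
        if not tokens or tokens[-1] != token:
            tokens.append(token)
    return tokens

# Toy run: detect, track and classify one hand, then translate a stream.
frame = "frame-0"  # stand-in for an image
hands = [d for d in detect_parts(frame) if d.part.endswith("hand")]
tracked = track(None, hands[0])
print(classify_pose(tracked))                   # ok_sign
print(translate(["none", "ok_sign", "none"]))   # ['IDLE', 'OK', 'IDLE']
```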

Data collection using an underwater vehicle that Codd-Downey developed. Jenkin is in the shot.

Codd-Downey elaborates on Step 4: “The OK gesture, for example, can be interpreted as IDLE-OK-IDLE, where IDLE represents a return to a non-gesturing posture/action. A more complicated sequence of gestures can be interpreted as IDLE-YOU-STALL-FOLLOW-ME-IDLE, where STALL represents a pause or break in the gesture sequence without returning to an IDLE state.”
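As a worked illustration of this interpretation, the sketch below groups a gesture-token stream into commands. The grouping rule (IDLE closes a command, STALL pauses within one) is an assumption drawn from the quote above, not code from the paper, and the token names are illustrative.

```python
def parse_commands(tokens):
    """Group a gesture-token stream into commands: IDLE closes a
    command, while STALL pauses within one without closing it."""
    commands, current = [], []
    for token in tokens:
        if token == "IDLE":
            if current:                 # IDLE ends any open command
                commands.append(current)
                current = []
        elif token != "STALL":          # STALL keeps the command open
            current.append(token)
    if current:                         # flush a command with no trailing IDLE
        commands.append(current)
    return commands

# The two examples from the quote above:
print(parse_commands(["IDLE", "OK", "IDLE"]))                            # [['OK']]
print(parse_commands(["IDLE", "YOU", "STALL", "FOLLOW", "ME", "IDLE"]))  # [['YOU', 'FOLLOW', 'ME']]
```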

In addition to hand signals, the researchers used light-based communication methods that allow the underwater robot to be controlled either remotely or by a nearby diver.

Codd-Downey’s work in underwater human-robot communication has not gone unnoticed. He received an award for the best demonstration at the June 2018 Space Vision and Advanced Robotics Workshop for related work on light-based communication between two robots.

This cutting-edge work continues. “Work on hand gesture recognition is an ongoing part of my thesis,” Codd-Downey explains. He adds that he is currently collecting additional annotations to train the hand pose classifier, and identifying common phrases that divers use to communicate, in order to train the models for in-water testing.

To read the article, “Human Robot Interaction Using Diver Hand Signals,” visit the website. A video of the diver part recognition system operating can be found here. To read a related article from VISTA, visit the website. To learn more about VISTA, visit the website.

To learn more about Research & Innovation at York, follow us at @YUResearch; watch our new animated video, which profiles current research strengths and areas of opportunity, such as Artificial Intelligence and Indigenous futurities; and see the snapshot infographic, a glimpse of the year’s successes.

By Megan Mueller, senior manager, Research Communications, Office of the Vice-President Research & Innovation, York University, muellerm@yorku.ca