A future of collaboration, not just automation!


I recently chatted with Madeline Gannon, one of a new generation of innovators helping to challenge our concepts of how robots and people will work together.
Madeline Gannon, “the Robot Tamer,” shares her vision for a collaborative future.
ABB: What first got you interested in robots?
Madeline: I’ve always been interested in finding more intuitive ways to communicate with CNC (Computer Numerical Control) machines. My research aims to bridge digital and physical techniques in design and fabrication.
Industrial robots are a favorite of mine because they are so adaptable. Machines like 3D printers, waterjet cutters, or CNC routers can do one or two fabrication processes really well. But industrial robots can transform their entire purpose just by changing the end-effector[i]. That kind of flexibility and open-endedness is intoxicating for curious researchers and designers!
ABB: And where has this research led you?
Madeline: Today I’m head of Madlab, a research studio dedicated to exploring the future of digital making. Our primary work is in inventing tools for crafting things on and around the body. Our latest project, Quipt, is a gesture-based control software for ABB robots.
Quipt gives an industrial robot a spatial understanding of how you are moving within its work zone, letting you and the robot safely follow, mirror, and avoid one another as you collaborate.
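To picture what following, mirroring, and avoiding might look like in code, here is a minimal Python sketch. This is not Quipt's actual source; the vector math, mode names, and offset values are illustrative assumptions about how a tracked wrist position could be turned into a new target for the robot's tool center point (TCP):

```python
# Hypothetical sketch (not Quipt's real implementation): map a tracked
# wrist position to a "follow", "mirror", or "avoid" target for the TCP.
from dataclasses import dataclass
import math

@dataclass
class Vec3:
    x: float
    y: float
    z: float
    def __add__(self, o): return Vec3(self.x + o.x, self.y + o.y, self.z + o.z)
    def __sub__(self, o): return Vec3(self.x - o.x, self.y - o.y, self.z - o.z)
    def scaled(self, s): return Vec3(self.x * s, self.y * s, self.z * s)
    def length(self): return math.sqrt(self.x**2 + self.y**2 + self.z**2)

FOLLOW_OFFSET = 0.40   # assumed hover distance from the hand, in meters
SAFETY_RADIUS = 0.25   # assumed minimum wrist-to-TCP clearance

def next_tcp_target(mode: str, wrist: Vec3, tcp: Vec3, robot_base: Vec3) -> Vec3:
    """Compute where the TCP should move next, given the tracked wrist."""
    if mode == "follow":
        # Hover a fixed offset short of the hand, along the base-to-wrist line.
        direction = wrist - robot_base
        d = direction.length() or 1e-6
        return robot_base + direction.scaled((d - FOLLOW_OFFSET) / d)
    if mode == "mirror":
        # Reflect the wrist across a virtual plane between person and robot.
        return Vec3(-wrist.x, wrist.y, wrist.z)
    if mode == "avoid":
        # Retreat directly away from the wrist if it gets too close.
        away = tcp - wrist
        d = away.length()
        if 1e-6 < d < SAFETY_RADIUS:
            return tcp + away.scaled((SAFETY_RADIUS - d) / d)
        return tcp
    raise ValueError(f"unknown mode: {mode}")
```

In a real system, a target like this would be streamed continuously to the robot controller, with speed and workspace limits enforced on the robot side so the arm stays human-safe no matter what the tracking reports.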
Development for Quipt was sponsored by Autodesk Applied Research Lab and Autodesk Pier 9. The Artist-in-Residence program at Autodesk Pier 9 gives artists, makers and fabricators a chance to work together and explore, create and document cutting-edge projects. Autodesk’s Applied Research Lab is developing technology for Advanced Robotics, Internet of Things, Applied Machine Learning, and Maritime/Sea Level Rise.
ABB: What traditional assumptions about the capabilities and limits of robots do you want to challenge or shatter?
Madeline: My interests lie in questioning assumptions about who and where we can use industrial robots. These machines have the potential to be incredibly useful tools in live environments like construction sites or film sets.
But taking industrial robots out of the factory and putting them into dynamic spaces brings unique challenges for usability and safety. To integrate into existing domains like these, the robots need to be easy and safe to use for people with little or no technical expertise.
In developing Quipt, my strategy for creating control software that non-technical users could intuitively understand was to mimic how two people might interact with one another in space. We’ve been trained our entire lives to read and send body language and spatial behaviors to communicate our intent to the people around us. Why not give an industrial robot the same capabilities to communicate with us?
ABB: In what ways do you think human-robot collaboration will evolve in the future?
Madeline: The promise of human-robot collaboration will come from finding ways in which robots augment and expand our capabilities, not from replacing people with robots. I am optimistic that collaborative futures will exist; however, they are significantly more difficult to deploy at scale than traditional robotic automation.
Taking a human-centered approach to industrial robotics means tailoring the technology to a specific user and a specific scenario. This is not necessarily as simple as creating better software or hardware. It requires a more nuanced design strategy that choreographs interaction, environment, hardware, and software with one another.
ABB: What are some of the challenges to achieving this level of collaboration?
Madeline: Collaboration is a far more open-ended, and therefore difficult, technical challenge than automation, where the problem space is well defined. When I am collaborating with a robotic arm, I am doing a task or an activity that I could not achieve without the help of the robot.
As a simple example, it is very easy to automate lights to illuminate a static workspace. But if you are talking about a robotic arm continuously repositioning a flashlight for a person working on a car engine, suddenly you have a very challenging collaborative task for the robot arm to understand.
This task is very easy for a person to do: we can read a person’s bodily gestures, understand their intentions, and respond accordingly. But it is fairly difficult for a robot: not only is the environment constantly shifting, but the needs of the person are constantly shifting, too.
So to shine the light in the proper direction and with the right amount of intensity, the robot needs quite a bit of nuanced sensing for this very specific scenario: it would need to sense how the person’s body is positioned in relation to where and how they are looking, and it would need to know where the various car parts sit in order to avoid collisions.
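As a rough illustration of that sensing problem, here is a small, self-contained Python sketch. The geometry, thresholds, and function names are all invented for this example, not drawn from any real product: given the person's head position and gaze direction, it picks a spot to hold the light, and vetoes the move if the light would come too close to the person or to known car parts.

```python
# Hypothetical sketch of the flashlight scenario (illustrative only):
# aim the light at where the person is looking, but refuse moves that
# would bring it too close to the person or the engine parts.
import math

def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def dist(a, b): return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

WORK_DISTANCE = 0.5    # assumed meters along the gaze to the work point
LIGHT_STANDOFF = 0.6   # assumed height to hold the light above the work point
KEEP_CLEAR = 0.3       # assumed minimum clearance from head and car parts

def plan_light_position(head, gaze, obstacles):
    """Return a new light position, or None if the move is unsafe.

    head: (x, y, z) of the person's head; gaze: unit vector of where they
    are looking; obstacles: (x, y, z) points for the person and car parts.
    """
    work_point = add(head, scale(gaze, WORK_DISTANCE))
    light_pos = add(work_point, (0.0, 0.0, LIGHT_STANDOFF))
    if any(dist(light_pos, obs) < KEEP_CLEAR for obs in obstacles):
        return None  # too close to the person or engine: stay put
    return light_pos

# One control cycle: person looks down-and-forward into the engine bay.
print(plan_light_position(
    head=(0.0, 0.0, 1.6),
    gaze=(0.0, 0.6, -0.8),        # roughly normalized look direction
    obstacles=[(0.0, 0.3, 1.0)],  # e.g. the person's torso
))
```

Even this toy version shows why the task is hard: every quantity it consumes (head pose, gaze, obstacle positions) has to come from live perception of a scene that changes from moment to moment.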
Now admittedly this is a very ordinary and boring example. Robots are in fact extraordinary machines, and we are just starting to see the full potential of bringing them out of the factory and using them for collaboration, not just automation.
The vision for the future is that we can make robot technologies so safe, intuitive, and useful that this level of human-robot collaboration becomes commonplace and we won’t even need to give it extra thought: having robots do certain tasks for us will be as natural as doing them ourselves. Even for tasks that we could not do without a robot!
To tie this back to Quipt, what I’ve built is a first step, a proof of concept, showing how we can use body language to communicate with a robotic arm. The overarching ambition for Quipt is to offer an alternative perspective on how we might collaborate with robotic arms on less-structured, more open-ended activities in the near future.
[i] An end-effector is the device at the end of a robotic arm that interacts with the environment. End-effectors can include anything from grippers and force sensors to welding torches.