Dr. Madeline Gannon shares her views with ABB on the changing relationship of people and robots during the World Economic Forum in Tianjin, China.
The World Economic Forum (WEF) recently held its Annual Meeting of the New Champions in Tianjin, China, a gathering of leaders from major multinationals, government, media, and academia who are shaping the future of business and society. The goal was to foster dialogue on how the interrelated economic, political, social and environmental challenges arising from the fourth industrial revolution can be managed for the betterment of all people.
As part of its focus on emerging technologies such as blockchain, artificial intelligence and robotics, the WEF invited Dr. Madeline Gannon, ‘the Robot Tamer,’ to present a thought-provoking demonstration on how people and robots will work together in the future. Dr. Gannon is a multidisciplinary designer who blends design, art and technology to push the boundaries of human and machine communications. In 2016, she created an exhibition called Mimus using a 1,200 kg ABB industrial robot for the opening of the new London Design Museum. Mimus was designed to challenge misplaced fears about robots and to get people to envision a future where robots are used to amplify our own uniquely human capabilities.
ABB: Madeline, it’s been some time since your Mimus exhibit in London, which was designed to challenge the way people think about robots. Do you think people’s attitudes about robots have changed in the last year or two?
Gannon: I’ve noticed that people are beginning to feel the increased presence of robots in our daily lives. In the past two years, we’ve seen incredibly fast diversification of the different species of robots that are leaving the research lab and living out in public in our streets, sidewalks, and skies.
ABB: Have you noticed a difference in the way children and adults look at robots? If so, what can we take from that?
Gannon: There is an interesting difference. Children approach robots like any other thing they are experiencing for the first time: sometimes with curiosity, sometimes with reserve, but never with any preconceived notions of how a robot should behave.
Adults, by contrast, bring many past experiences with technology that can bias a first encounter. This is why designing legible interactions between humans and machines is so important: when a robot clearly broadcasts how you should engage with it, through its body language or other means, a person can intuitively understand its behaviors without needing to explicitly learn any technical information.
ABB: You describe Mimus as being ‘curious about the world around her.’ What is the most common question you get from people about robots?
Gannon: The most common question I get asked is “Will robots replace us?” And although these technological anxieties are well-founded, I believe there is a more fundamental question we can ask: “How can robots enhance us?” This is the more desirable future we are chasing. In order to get there, we need interfaces that leverage robots to enhance, augment, and expand human capabilities; not replace them.
ABB: What do we need to do differently to ensure that desirable future, where people and robots coexist side by side?
Gannon: To me, this is fundamentally a design problem. For people to have meaningful, productive relationships with intelligent machines, the onus should not be on us to understand robots; the robots should be able to understand us. Rapid advancements in artificial intelligence and machine learning are steadily working towards this future.
However, today we can already facilitate more fluid and legible interactions with robots using alternative techniques. For example, when I design an interface with a robot, I look for opportunities to imbue the machine with an awareness of its context and surroundings. That means that if it is moving through a space full of people, it is considerate of social norms; or if it is collaborating on a task, its body language communicates something about its internal state of mind. These are things we can do now, with existing hardware and software, to better configure robot behaviors to be more human-centered.
From Mimus to Manus
ABB: What inspired this new robot installation, Manus?
Gannon: With Manus, I wanted to illustrate the new challenges and affordances we face in the shift from robotic automation to robotic autonomy. Intelligent, autonomous machines present many unique abilities that we have yet to fully understand as a society. One of their most valuable assets is how autonomous robots can share a single brain across many bodies. In Manus, I connect 10 industrial arms to a single robot brain. So instead of acting in isolation, this horde of robots moves as a coordinated pack when visitors approach.
ABB: What were some of the technical challenges you had to overcome in creating Manus?
Gannon: Manus is an ambitious project on an ambitious timeline — two months to develop, test, and deploy the custom sensing and control software that brought these 10 robots to life. Adding to the challenge was that I only had two physical robots to work with; the rest I needed to simulate until I arrived on-site.
I relied on ABB’s RobotStudio Virtual Robot emulator and a 10-robot station to validate my software in simulation. Not only did this let me test for any hardware-level errors that my software might trigger, but it also let me train my staff to operate the installation.
ABB: What are the main differences between Mimus and Manus?
Gannon: Mimus showed that an industrial robot built for automation could be reconfigured into a lifelike mechanical creature. Manus develops these autonomous, personable robot behaviors further, across 10 interconnected robot arms. In this interactive installation, the robots sense and move in response to visitors who enter their ten-by-four-meter space. While each robot moves independently, they all share the same central brain. This results in intertwined behaviors that ripple through the pack as people walk by.
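The shared-brain idea Gannon describes, many bodies reacting to one central state, can be sketched in a few lines. The sketch below is purely illustrative: the names, the row layout of ten arms, and the nearest-visitor targeting are assumptions for demonstration, not her actual control software.

```python
import math

NUM_ROBOTS = 10
ROBOT_SPACING = 1.0  # assumed: arms placed 1 m apart along the 10 m space

class SharedBrain:
    """One central state read by every arm: the position of a visitor."""
    def __init__(self):
        self.target = None  # (x, y) of the tracked visitor, or None

    def update(self, visitors):
        # Track the visitor closest to the center of the space (hypothetical rule).
        self.target = min(
            visitors,
            key=lambda p: math.hypot(p[0] - 5.0, p[1] - 2.0),
            default=None,
        )

class Arm:
    """Each arm shares the same brain but computes its own motion from it."""
    def __init__(self, index, brain):
        self.base_x = index * ROBOT_SPACING
        self.brain = brain

    def heading(self):
        # Turn toward the shared target; rest at 0 rad when nobody is present.
        t = self.brain.target
        if t is None:
            return 0.0
        return math.atan2(t[1], t[0] - self.base_x)

brain = SharedBrain()
pack = [Arm(i, brain) for i in range(NUM_ROBOTS)]

brain.update([(3.0, 2.0)])  # a visitor steps into the space
headings = [arm.heading() for arm in pack]
```

Because every arm reads the same `brain`, one sensor update ripples through the whole pack at once, which is one simple way behaviors stay coordinated without robot-to-robot communication.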
ABB: What is the goal of Manus?
Gannon: I created Manus to explore new frontiers in human-robot interaction. At its most visceral, Manus provides an opportunity to feel what it is like to be surrounded by autonomous machines. By default, this should be a very intimidating experience. However, with Manus, I wanted to show how thoughtfully designed interactions can help people intuitively understand the behaviors and intentions of a group of non-humanoid, autonomous robots. People will hopefully experience a dynamic range of emotional states from these machines: from eager excitement and enthusiasm, to anxious nervousness, to bored indifference.
Dr. Madeline Gannon is a multidisciplinary designer inventing better ways to communicate with machines. In her research, Gannon seeks to blend knowledge from design, robotics, and human-computer interaction to innovate at the intersection of art and technology. Her work has been internationally exhibited at leading cultural institutions, published at ACM conferences, and widely covered by diverse media outlets across the design, art, and technology communities. Her interactive installation, Mimus, earned her the nickname “The Robot Whisperer” and was awarded a 2017 Ars Electronica STARTS Prize Honorable Mention. She is a 2017 and 2018 World Economic Forum Cultural Leader, and was a featured artist at the 2018 World Economic Forum Summer Davos. Gannon holds a PhD in Computational Design from Carnegie Mellon University, where she explored human-centered interfaces for autonomous fabrication machines. She also holds a Master of Architecture from Florida International University.