Between Facebook’s core social network product and apps like Messenger and WhatsApp, Mark Zuckerberg’s tech giant has undeniably changed the way people communicate. Could it next help change the way we communicate with robots?
In 2021, the idea of being able to communicate with artificial intelligence through natural language is nowhere near as science fiction as it once was. Whether it’s Amazon’s Echo voice assistant or the voice bots we interact with when we phone our banks, A.I. today means that machines can do a pretty good job of understanding what humans are asking for, without the human in question having to do much to modify the way they’re speaking.
However, robots aren’t quite so user-friendly. Partly because most robots are still used in industrial or lab settings, where there isn’t the same requirement to be accessible to everyday users, interacting with a robot remains a more opaque affair. Breakthroughs in locomotion, gripper dexterity, and image recognition mean that robots can now be trained to navigate different environments and interact with objects. But asking a robot to, for instance, “pick up the blue tube next to the fuzzy chair that Bob is sitting in” is still far beyond the scope of today’s robot platforms.
At least, that’s the popular conception.
Researchers in Facebook’s A.I. Research laboratories have been working to change this and, in the process, developing technology that won’t just imbue robot platforms with smart assistant-style natural language abilities but, the company claims, will far exceed current standards.
Facebook and the neural semantic parser
“We don’t want to speculate on the tech in Siri or Echo, but natural language understanding (NLU) for home assistants is still often based on rules or narrowly defined neural models, and can often only handle specific user ‘intents’ — for example, ‘send message,’” Mary Williamson and Arthur Szlam of Facebook A.I. Research (FAIR) told Digital Trends via email.
Facebook’s open-source technology, called “droidlet,” uses what it calls a “neural semantic parser” to provide robots with a better way to understand the real world.
“For example, for the command, ‘move to the red chair,’ the droidlet agent parses the command, and then searches its memory for any previously observed object it might have detected that has been tagged by its object-property model as being ‘red’ and ‘chair,’” the researchers said. “In order to create this object detection, it [uses] a data stream from the robot’s sensors, perhaps mediated by a library like [open-source robot software framework] ROS. Using this memory, it builds a move command directing the body to the location of the chair. That move command is [then] sent to a lower level interface like ROS that executes the move on the robot.”
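To make that pipeline concrete, here is a minimal Python sketch of the parse-then-search-then-act loop the researchers describe. Every name below is a hypothetical stand-in for illustration, not droidlet’s actual API, and the “parser” is a deliberately crude stub where a real system would use a neural model.

from dataclasses import dataclass, field

@dataclass
class DetectedObject:
    """An object a perception model has tagged with properties."""
    tags: set        # e.g. {"red", "chair"}
    position: tuple  # (x, y) in the robot's map frame

@dataclass
class AgentMemory:
    """Structured store of everything the robot has observed so far."""
    objects: list = field(default_factory=list)

    def find(self, required_tags):
        """Return remembered objects carrying all the requested tags."""
        return [o for o in self.objects if required_tags <= o.tags]

def handle_command(text, memory, move_fn):
    """Parse a 'move to the <tags...>' command, resolve it against
    memory, and hand the target location to a lower-level interface
    (on a real robot, something like ROS) that executes the motion."""
    words = text.lower().rstrip(".").split()
    if words[:3] != ["move", "to", "the"]:
        raise ValueError(f"unsupported command: {text!r}")
    matches = memory.find(set(words[3:]))
    if not matches:
        raise LookupError("no matching object in memory")
    move_fn(matches[0].position)

# Usage: perception previously stored a red chair at (2.0, 1.5).
memory = AgentMemory(objects=[DetectedObject({"red", "chair"}, (2.0, 1.5))])
handle_command("move to the red chair", memory, move_fn=print)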
The droidlet system comprises multiple pieces of core technology, including a dialog system, perceptual or vision modules, a structured memory system for storing and retrieving “memories,” and more. Facebook says the open-source technology can be used to control multiple real-world robots, such as LoCoBot and Hello-Robot, as well as robot simulators like Habitat. If the project succeeds, the company promises, it will let researchers more easily build robot agents capable of carrying out complex tasks both in real life and in simulated environments like Minecraft.
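For a sense of how those components fit together, here is a toy Python sketch that wires stand-ins for the dialog, perception, memory, and low-level control pieces into a single agent step. These interfaces are assumptions made for illustration; droidlet’s real modules are considerably more capable.

class Perception:
    """Stand-in for the vision/object-property modules."""
    def detect(self):
        # A real module would run detectors over camera frames.
        return [{"tags": {"red", "chair"}, "pos": (2.0, 1.5)}]

class Memory:
    """Stand-in for the structured memory system."""
    def __init__(self):
        self.objects = []
    def remember(self, detections):
        self.objects.extend(detections)

class Dialog:
    """Stand-in for the dialog system: utterance -> target tags."""
    def interpret(self, utterance):
        return set(utterance.lower().split()[-2:])  # e.g. {"red", "chair"}

class Controller:
    """Stand-in for the low-level body interface (e.g., ROS)."""
    def move_to(self, pos):
        print(f"moving to {pos}")

def agent_step(utterance, perception, memory, dialog, controller):
    memory.remember(perception.detect())    # perceive and store
    target = dialog.interpret(utterance)    # parse the command
    for obj in memory.objects:              # resolve against memory
        if target <= obj["tags"]:
            controller.move_to(obj["pos"])  # act via the controller
            return

agent_step("move to the red chair", Perception(), Memory(), Dialog(), Controller())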
Big tech’s robot love
Facebook is far from the only big tech giant investing in robotics, though in some cases the moves are more immediately commercially applicable than in others. Amazon, for instance, has its own Amazon Robotics division, known as Kiva Systems when the online retail giant bought it in 2012 for $775 million. Amazon’s robots fit neatly into its core business because they can be used in the company’s fulfillment centers to help dispatch orders. At present, several hundred thousand robots are used in this capacity.
Google, meanwhile, has also long maintained an interest in robots, although its most famous robotics acquisition, Spot robot dog maker Boston Dynamics, was bought in late 2013 and then offloaded to Japan’s SoftBank Group just a few years later, in mid-2017. With neither the purchase nor the sale price ever confirmed, it’s impossible to know exactly what Google got out of the deal, but it certainly wasn’t a commercial (or even noncommercial) robot of its own.
Facebook has dipped its toes into hardware even less often than Google (its Portal video-calling devices are a rare exception), so how does the world’s biggest social media company stand to gain from its latest robot fixation?
“While our work is focused purely on A.I. research, we believe the long-term applications are plentiful,” Williamson and Szlam said. “[We] see social interactions in natural language with learned, embodied agents — robots, smart wearable devices such as AR glasses, et cetera — for various kinds of future assistants as an interesting area for Facebook in general.”
That’s not exactly a confirmation that Facebook robots are coming soon. While many Facebook products over the years started life as research at FAIR, the lab does not necessarily carry out its work with an immediate eye on commercialization. Still, this work is certainly, in Facebook terminology, a “poke” that signals an interest in ramping up its focus on all things robot. It remains to be seen whether that becomes anything timeline-worthy.