When Nvidia CEO Jensen Huang met a touch-sensitive robotic hand at today's open house for his company's robotics research lab in Seattle, it was love at first contact.
"It almost feels like a pet!" Huang said as he tickled the fingers of his hand, causing them to pull back gently.
"It's surprisingly therapeutic," he told the crowd around him. "I can have one?"
The robotic hand, which is programmed to pull back when humans get too close, was just one of the machines on display in the 13,000-square-foot laboratory in Seattle's University District.
Nvidia is headquartered in Silicon Valley, and has about 200 employees working at an engineering center in Redmond, Washington. But when the chip maker laid plans to open a lab focused on robotics and artificial intelligence research, it settled on the same building that houses the CoMotion lab at the University of Washington. It also put Dieter Fox, a former computer science professor at UW, in charge of the operation as senior director of robotics research.
Huang said Seattle was the natural choice.
"Due to the University of Washington, due to Microsoft, due to Amazon, this has become one of the great centers of computing," he said. "And that's why it made sense for us to think about this area."
UW's tradition of collaboration in information technology was also a selling point.
"Everyone was working with everyone else," Huang said. "This is very unnatural, frankly, in most universities. They tend to be very isolated. … The collaboration, I felt, was the perfect culture to create a robotic platform. "
Nvidia conducts a great deal of research elsewhere, on applications ranging from virtual reality systems to self-driving cars, and from autonomous drones to medical imaging. But Fox said the Seattle lab will be the central facility for basic robotics research.
"This lab is focusing on the next generation of interactive manipulators," Fox said.
Nvidia moved into the lab in November, and since then operations have been ramping up toward the goal of having 50 roboticists working on site. At least 20 of those roboticists will be Nvidia employees, and the rest will be visiting academic researchers and students, Fox said.
Today, the lab's tables were stocked with boxes of Cheez-It crackers and Domino sugar, plus cans of Spam and Campbell's tomato soup. Those ingredients didn't go into the dishes served at the open house (which included pizza and coconut shrimp, quite tasty, by the way). Instead, the boxes and cans are the raw ingredients for the lab's deep learning and computer vision experiments.
Fox and his teammates decided that a kitchen would be the best place to start testing their robots' capabilities for image recognition, object manipulation and interaction with humans. So they went down to the Ikea store in Renton, Washington, and bought everything needed for a working kitchen. The test bed is even equipped with a sink and an oven.
"Ultimately, we want to get a robot that can cook a meal with you, or that can only talk to him and tell the robot what he wants to do," Fox said. "Get me the sugar box." & # 39; and you tell the robot that it's in the third drawer on the left … and the robot can do that. "
Teaching robots to recognize a box of sugar or a can of soup is not as easy as it sounds. The Nvidia system takes advantage of machine learning, analyzing images of kitchen items in a wide variety of random poses and lighting conditions.
"We can only train on synthetic data and get results that work in the real world," said Stan Birchfield, a senior research scientist at Nvidia.
The kitchen manipulator robot being developed in the Seattle lab uses the Nvidia Jetson platform for navigation, with real-time inference for processing and manipulation running on Nvidia Titan GPUs. The robot's perception system was trained using the PyTorch deep learning framework, accelerated by cuDNN.
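The article names cuDNN-accelerated PyTorch for the perception system; the sketch below shows what a single inference pass in that stack looks like. The tiny network, the class list, and the random "camera frame" are all placeholders for illustration, not the lab's actual model; cuDNN kicks in automatically when PyTorch runs on a CUDA GPU, and this falls back to CPU otherwise.

```python
import torch
import torch.nn as nn

# When a CUDA GPU is present, PyTorch dispatches convolutions to cuDNN;
# benchmark mode lets cuDNN pick the fastest algorithm for fixed input sizes.
torch.backends.cudnn.benchmark = True

# Placeholder labels standing in for the lab's kitchen items.
CLASSES = ["cheezit_box", "sugar_box", "spam_can", "tomato_soup_can"]

# A deliberately tiny stand-in for a trained perception network.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, len(CLASSES)),
)
model.eval()

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

def classify(image: torch.Tensor) -> str:
    """Run one real-time inference pass on a 3xHxW image tensor and return a class name."""
    with torch.no_grad():
        logits = model(image.unsqueeze(0).to(device))  # add batch dimension
        return CLASSES[int(logits.argmax(dim=1))]

if __name__ == "__main__":
    fake_frame = torch.rand(3, 224, 224)  # stand-in for a camera image
    print(classify(fake_frame))
```

The untrained placeholder will of course predict arbitrarily; in the lab's setup the weights would come from training on the synthetic images described above.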
All of the lab's experimental results will be published openly, with the aim of creating something like an operating system that can be used across a wide range of robots.
Eventually, Nvidia intends to create specialized software or hardware packages that come pre-programmed to perform kitchen tasks, and that add to their knowledge base by observing what human cooks do. Fox said that 3-D computer models of kitchen environments, or of any domestic environment, could become standard items delivered whenever someone moves into a new house or remodels an old one.
"You buy your kitchen plus the model, and let's say you already have your robot," Fox said. "Load it to the robot, and your robot will know about the kitchen."
But the fruits of the Seattle lab's research won't be found only in the kitchens of the future. Someday, personal robots could take on delicate tasks such as shaving people with disabilities, helping older people who would otherwise have to move out of their homes, or doing jobs that human workers would rather not do. Nvidia's robotics research could help make that happen.
"There are so many interesting things that we could sneak away in our search for a general artificial intelligence robot. "For example, it is very likely that in the near future you will have 'exo-vehicles' around you, be it an exoskeleton or an exo-something that helps people with disabilities, or help us to be stronger. than us, "Huang said.
"It is very likely that in the future, manufacturing robots do not have to be programmed, that somehow they learn by observing us, or learn by imitation, or learn in general what their objectives are," he added.
So, did Huang's tour of the Seattle lab and his encounter with the robotic hand spark ideas for new Nvidia products? "I have many ideas for new products," he said with a sly smile. "I'll tell you about them at GTC."
For what it's worth, the Nvidia GPU Technology Conference, also known as GTC, is 10 weeks away.