Nvidia’s AI robot learns from observing humans

Nvidia has demonstrated a robot with a groundbreaking AI system that learns to complete tasks by observing the actions of a human.

The researchers, led by Stan Birchfield and Jonathan Tremblay, claim their development is a ‘first of its kind’ deep learning-based system.

In their research paper, they state:

“For robots to perform useful tasks in real-world settings, it must be easy to communicate the task to the robot; this includes both the desired result and any hints as to the best means to achieve that result.

With demonstrations, a user can communicate a task to the robot and provide clues as to how to best perform the task.”

Nvidia’s robot is powered by the firm’s TITAN X graphics card, which features 3,584 NVIDIA CUDA cores running at 1.5GHz for a total performance of around 11 TFLOPS.
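As a rough sanity check on that figure, peak single-precision throughput for a GPU is commonly estimated as cores × clock × 2 (one fused multiply-add, counted as two floating-point operations, per core per cycle). A back-of-envelope calculation with the quoted specs:

```python
# Back-of-envelope peak FP32 throughput estimate for TITAN X (Pascal).
# Formula: cores * clock_hz * 2 FLOP/cycle (one FMA per core per cycle).
cuda_cores = 3584
clock_hz = 1.5e9  # 1.5 GHz

peak_flops = cuda_cores * clock_hz * 2
tflops = peak_flops / 1e12

print(f"{tflops:.2f} TFLOPS")  # roughly 10.75 TFLOPS, i.e. "around 11"
```

This lands at about 10.75 TFLOPS, consistent with the "around 11 TFLOPS" figure above.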

The company describes TITAN X as “powered by Pascal to deliver up to 3x the performance of previous-generation graphics cards, plus innovative new gaming technologies and breakthrough VR experiences.”

Using the TITAN X, the researchers trained a sequence of neural networks to perform duties associated with perception, program generation, and program execution. From a single human demonstration, the robot could begin to perform tasks.
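The division of labour between those three networks can be sketched roughly as follows. This is a minimal illustrative mock-up of the perception → program generation → execution flow, not Nvidia's actual code; all function names and data formats here are invented for the example, and the trained networks are replaced with trivial stand-in logic.

```python
# Illustrative sketch of a perception -> program generation -> execution
# pipeline. All names and data formats are hypothetical stand-ins for the
# trained networks described in the paper.

def perceive(frame):
    # Stand-in for the perception network: map raw observations in one
    # video frame to object identities and positions.
    return [{"name": obj, "pos": pos} for obj, pos in frame]

def generate_program(states):
    # Stand-in for the program-generation network: compare successive
    # object states and emit a pick-and-place step for each moved object.
    program = []
    for before, after in zip(states, states[1:]):
        for obj in after:
            prev_pos = next(o["pos"] for o in before if o["name"] == obj["name"])
            if obj["pos"] != prev_pos:
                program.append(("place", obj["name"], obj["pos"]))
    return program

def execute(program):
    # Stand-in for the execution network: turn each program step into a
    # robot command.
    return [f"pick {name}; place at {target}" for _, name, target in program]

# A single human demonstration: two frames of (object, position) observations,
# in which only the red cube is moved.
demo = [
    [("red_cube", (0, 0)), ("blue_cube", (1, 0))],
    [("red_cube", (1, 1)), ("blue_cube", (1, 0))],
]

states = [perceive(frame) for frame in demo]
commands = execute(generate_program(states))
print(commands)  # ['pick red_cube; place at (1, 1)']
```

The point of the sketch is the staging: perception abstracts raw pixels into object states, program generation turns one demonstration into a symbolic task description, and execution grounds that description back into robot actions.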

The researchers will present their research paper and work at the International Conference on Robotics and Automation (ICRA), in Brisbane, Australia this week.

You can find the full paper here (PDF).

What are your thoughts on Nvidia’s AI robot demonstration? Let us know in the comments.

