Robotics' ChatGPT Moment has arrived

I have heard this since 2022.


For years, artificial intelligence learned to write, converse and create images, but now it is taking a far more dangerous and powerful step. According to NVIDIA, AI does not just want to understand text; it wants to understand the physical world and act within it. A landmark announcement from NVIDIA revealed a new generation of models, tools and infrastructure focused on robotics and autonomous vehicles.


The objective is clear: to allow machines to learn, reason and plan actions in real, dynamic and unpredictable environments. It was in this context that Jensen Huang declared that the "ChatGPT moment of robotics" has finally arrived. According to the company, physical AI is the next great frontier: it is no longer enough to see the world through sensors; robots now need to understand what they see, anticipate consequences and decide what to do, as a human being would.




Today most robots are still extremely limited.


They are specialists in a single task, work well only in controlled environments, and any minimal change requires expensive, complex reprogramming. For NVIDIA, that model is doomed. The future, according to the company, belongs to generalist-specialist robots: machines capable of handling multiple situations while also developing deep skills for complex tasks, comparable to a highly qualified professional who understands the general context and masters their specialty.


To make this possible, NVIDIA launched new open models capable of generating simulated physical worlds, predicting consequences and reasoning about actions. The great advance lies in systems that unite vision, language and action in a single artificial brain. This allows humanoids to move and manipulate objects while understanding the environment and adjusting their behavior in real time.
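To make the idea of a vision-language-action model more concrete, here is a minimal, hypothetical sketch of the closed loop such a system runs. The names used (VLAPolicy, camera.read, robot.apply) are illustrative placeholders, not NVIDIA's actual robotics APIs.

```python
# Hypothetical sketch of a vision-language-action (VLA) control loop.
# All names here (VLAPolicy, camera.read, robot.apply) are illustrative
# placeholders, not NVIDIA's actual robotics APIs.
import numpy as np


class VLAPolicy:
    """Toy stand-in for a model that maps (image, instruction) -> joint commands."""

    def __init__(self, action_dim: int = 7):
        self.action_dim = action_dim

    def act(self, image: np.ndarray, instruction: str) -> np.ndarray:
        # A real VLA model would fuse visual features with language tokens
        # and decode the next action; here we simply return a zero command.
        return np.zeros(self.action_dim)


def control_loop(policy: VLAPolicy, camera, robot, instruction: str, steps: int = 100):
    """Perceive, reason and act in a closed loop, adjusting behavior each step."""
    for _ in range(steps):
        image = camera.read()                    # perceive the environment
        action = policy.act(image, instruction)  # vision + language -> action
        robot.apply(action)                      # act; the next frame reflects the result
```

The point of the sketch is the loop itself: perception, a single model that reasons over both the image and the instruction, and an action applied back to the world, repeated continuously.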


In practice, this means robots stop being pre-programmed machines and become intelligent agents. These technologies are already being used by real companies: surgical arms train in physical simulations before touching a patient, and industrial humanoids learn to navigate, manipulate objects and make decisions in complex factories. Even the new Boston Dynamics Atlas was trained with these tools and now runs on NVIDIA infrastructure.


To support this revolution, NVIDIA brought its Blackwell architecture to robotics. The new Jetson modules offer several times the performance of the previous generation while maintaining low power consumption, essential for humanoid robots. This allows complex decisions to be made locally, without relying on the cloud; in other words, the robot's brain now fits inside it.
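As a rough illustration of why local inference matters, here is a back-of-the-envelope latency budget. The control rate and latency figures are assumptions chosen purely for illustration, not NVIDIA's published numbers.

```python
# Back-of-the-envelope latency budget, with purely illustrative numbers,
# showing why onboard inference matters for a real-time control loop.
CONTROL_RATE_HZ = 30                 # assumed control frequency
BUDGET_MS = 1000 / CONTROL_RATE_HZ   # ~33 ms available per decision

ONBOARD_INFERENCE_MS = 15            # assumed local model inference time
CLOUD_ROUND_TRIP_MS = 80             # assumed network round trip alone

print(f"Per-step budget: {BUDGET_MS:.1f} ms")
print(f"Onboard fits budget: {ONBOARD_INFERENCE_MS <= BUDGET_MS}")  # True
print(f"Cloud fits budget:   {CLOUD_ROUND_TRIP_MS <= BUDGET_MS}")   # False
```

Under these assumed numbers, a cloud round trip alone already exceeds the per-step budget, which is why running the model on the robot itself is the design choice highlighted above.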


The company also moved in the direction of autonomous cars. According to NVIDIA, autonomous driving is the first great example of physical AI. After phases based only on perception and rules, vehicles are now entering a new era, one of systems capable of reasoning, planning and driving with human-like judgment.




Follow my publications for the latest in artificial intelligence, robotics and technology.
If you enjoy reading about science, health and how to improve your life with science, I invite you to check out my previous publications.


