
Until recently, artificial intelligence was perceived mainly as a software technology: it analyzed texts and images, helped with information search, and automated routine digital tasks. Today, however, AI is rapidly moving beyond screens and beginning to operate in the physical world. This shift became especially visible after recent announcements at major technology exhibitions, which demonstrated that artificial intelligence is transitioning from a supporting tool to an active participant in real-world processes.
Physical artificial intelligence refers to systems that not only process data but also control machines, robots, and autonomous transport. They interact with the environment, make decisions in real time, and bear responsibility for physical actions. This is precisely what makes the new stage of AI development fundamentally different.
Next-Generation Chips and a Sharp Reduction in Computing Costs
One of the key factors behind this transition is hardware evolution, which was prominently highlighted at CES — one of the world’s largest exhibitions of consumer electronics and technology. CES takes place annually in Las Vegas and serves as a platform where leading companies showcase not ready-made retail products, but the direction in which the entire industry will move in the coming years. It is here that technologies are often first introduced before eventually becoming mainstream.
During one of the keynote presentations at CES, the next generation of AI chips called Rubin was announced. For the average user, this may sound like just another technical name, but in reality it refers to a new processor architecture designed specifically for artificial intelligence workloads. These chips are not universal like standard computer processors; instead, they are optimized precisely for the computations performed by AI models.
The main announcement was that the use of Rubin chips could reduce the cost of token generation by approximately 90 percent. Simply put, tokens are the basic “building blocks” from which AI forms responses, texts, images, or decisions. The cheaper it is to process these tokens, the more accessible artificial intelligence becomes. This means that complex AI systems can be deployed not only in large data centers, but also across a much wider range of devices and services.
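To make the scale of such a reduction concrete, here is a minimal sketch of the arithmetic. The prices and token volumes below are made-up placeholders for illustration, not real vendor figures:

```python
# Hypothetical illustration of how a ~90% drop in per-token cost scales.
# All dollar figures and token counts here are assumed, not quoted prices.

def generation_cost(tokens: int, price_per_million: float) -> float:
    """Cost in dollars to generate `tokens` tokens at a given rate."""
    return tokens / 1_000_000 * price_per_million

OLD_PRICE = 10.00              # assumed baseline: $10 per million tokens
NEW_PRICE = OLD_PRICE * 0.10   # roughly 90% cheaper, as claimed

daily_tokens = 500_000_000     # e.g. a fleet of devices reasoning all day

old_cost = generation_cost(daily_tokens, OLD_PRICE)
new_cost = generation_cost(daily_tokens, NEW_PRICE)
print(f"old: ${old_cost:,.2f}/day, new: ${new_cost:,.2f}/day")
# old: $5,000.00/day, new: $500.00/day
```

At this assumed scale, a workload that was a meaningful daily expense becomes an order of magnitude cheaper, which is exactly what makes continuous, always-on inference in robots and vehicles economically plausible.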
It is precisely this reduction in computing costs that opens the door to physical artificial intelligence. When AI becomes economically viable, it can be integrated into robots, autonomous transport, and industrial systems that must operate continuously and without delays. Thus, the announcements at CES and the emergence of Rubin chips are not merely technical news, but a signal of the beginning of a new stage in the development of artificial intelligence.
Autonomous Robotaxis and AI Entering City Streets
Another clear signal of this new stage was the confirmation of plans to launch autonomous robotaxis as early as 2026, in partnership with a large but as yet unnamed company. This means that AI is no longer just a driver-assistance system or a support tool. It is moving toward full control of vehicles in urban environments.
Robotaxis are a telling example of physical artificial intelligence, as they combine computer vision, analysis of traffic conditions, map processing, prediction of other road users’ behavior, and instantaneous decision-making. In effect, AI takes on responsibility for actions that were previously the exclusive domain of humans.
Multimodality as a New Standard for AI Systems
Another important point was the assertion that multimodality in artificial intelligence models will become the new standard. This means that AI will simultaneously work with different types of data — images, video, sound, text, and signals from physical sensors.
For physical AI, multimodality is critically important. Robots, autonomous vehicles, and other systems must perceive the surrounding world in a comprehensive way. They need to see objects, recognize movement, hear signals, analyze context, and respond without delays. This is why the development of multimodal models is directly linked to AI’s transition into the physical space.
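The idea of a system consuming several sensor streams at once can be sketched as a simple data structure. The types and field names below are hypothetical stand-ins, not any particular robotics API:

```python
# A minimal sketch (hypothetical fields) of the kind of fused, time-stamped
# input a multimodal physical-AI system might consume on each cycle.
from dataclasses import dataclass, field


@dataclass
class SensorFrame:
    timestamp_ms: int                       # when the frame was captured
    camera_rgb: bytes = b""                 # encoded image data (stand-in)
    lidar_points: list[tuple[float, float, float]] = field(default_factory=list)
    audio_chunk: bytes = b""                # raw microphone samples
    text_context: str = ""                  # e.g. a voice-command transcript

    def modalities(self) -> list[str]:
        """Report which modalities are actually present in this frame."""
        present = []
        if self.camera_rgb:
            present.append("vision")
        if self.lidar_points:
            present.append("lidar")
        if self.audio_chunk:
            present.append("audio")
        if self.text_context:
            present.append("text")
        return present


frame = SensorFrame(timestamp_ms=1_700_000_000_000,
                    audio_chunk=b"\x00\x01",
                    text_context="stop at the crosswalk")
print(frame.modalities())  # ['audio', 'text']
```

The point of the sketch is that a multimodal model receives all of these channels together, tied to one moment in time, rather than processing text, images, and sound as separate, unrelated tasks.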
The Role of Server Infrastructure in the Development of Physical AI
Despite the rapid growth of autonomous devices, the foundation of physical artificial intelligence remains server infrastructure. Today, AI is already widely used in data centers to optimize workloads, manage energy consumption, predict failures, and automate server maintenance.
In addition, servers are responsible for training complex models, updating them, and scaling them. Even autonomous systems require constant connectivity to infrastructure in order to receive new data and improve algorithms. This means that the development of physical AI inevitably leads to a sharp increase in demand for server resources and specialized AI chips.
Why the World Will Need More and More AI Chips
The final, and perhaps most important, conclusion from this development is that an enormous number of chips will be required for all these robots, models, and autonomous systems. Physical artificial intelligence does not replace software-based AI; instead, it complements it, creating a new level of complexity and scale.
The growth in the number of robots, autonomous transport, and intelligent infrastructure implies exponential demand for computing power. That is why hardware development and server infrastructure are becoming strategic resources on par with energy or connectivity.
A New Stage That Has Already Begun
Physical artificial intelligence is not a distant future, but a process that is happening right now. The reduction in computing costs, the emergence of new chips, the development of multimodal models, and the launch of autonomous systems all indicate that AI is entering the real world in a systematic and large-scale way. For businesses, this opens up new opportunities, and for society, new challenges. It is clear, however, that this stage of technological development will be one of the defining ones in the years ahead.