AI Olfaction Brings Deus ex Machina to Robots

Yu-hua Liu November 26, 2025

AIolf's Promise

Robots achieve action and language through perceiving targets, and perceive targets through action and language. We promise that by 2030, blind people will have robot guide dogs, cities will have robot dogs that detect drugs and fight terrorism, and you will have a robot nanny that can sense food spoilage and report to you if your kids are drinking or smoking weed at home.


Why Now

Today's robots are integrating VLA models to understand the relationship between objects and space, which, looking back at SLAM, makes us realize we have avoided a fascinating mistake. Although SLAM has one successful monetization case, the robot vacuum cleaner, a robot that cannot expand its intelligence and can only do one thing clearly does not look like the future. Biological intelligence is even more enlightening: animals migrate thousands of kilometers and cache food for later retrieval without any geometric map. In short, moving from localization and mapping to multimodal models, and shifting compute from the terminal to pre-training, is still a leap.

Robots have not yet reached the tipping point of monetization. When work is repetitive, machines need to be faster and more enduring than living things. When work is engineering, machines need to match biological intelligence. Yes, machines must overtake biology, and that is also the tipping point of intelligence emergence.

How does intelligence emergence happen? Many EV companies that claim to have mastered autonomous driving are working on humanoid robots, with the view that "cars are robots with four wheels". So the same thing inevitably happens: a failure occurs, and a black cloth is draped over it. There is no ready-made answer to how to move autonomously when maps are missing and vision fails.
The paradigm of using massive RGB images and videos to understand objects and spaces in the physical world has already taught a painful lesson in autonomous driving. RGB is designed for human vision, not for robots to understand the physical world. Although the scenarios of cars and robots are very different, even with cars carrying far more computing power and training data than robots, intelligence has not emerged. If the new view is that "washing dishes is 100x easier than driving", it actually confirms that this paradigm is a fascinating mistake: it only works in specific scenarios, and intelligence still cannot be scaled.

Intelligence emergence requires a deus ex machina, and AI olfaction brings a deus ex machina to robots. As Richard Sutton said, general methods that leverage computation are ultimately the most effective. We extend this further: sensing rooted in physical ontologies is ultimately the most universal.
For this reason, we propose the GuGu conjecture: when machines gain new sensory dimensions, the energy required to achieve their goals decreases, and intelligence emerges. And the intelligence emergence ratio law:

I ∝ S / E

I: Intelligence
S: Sensory dimensions
E: Energy required to achieve goals

This provides an intuitive and actionable machine design principle: increase the dimensions of perception, reduce the energy required, and intelligence will emerge. As with biological intelligence, so with machine intelligence. AI olfaction is exactly how we are verifying this, and we look forward to more researchers and engineers verifying the conjecture.
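The ratio law above can be made concrete in a few lines. A minimal, purely illustrative sketch, assuming we can count sensory dimensions and measure goal energy in arbitrary units (the function name and the numbers below are ours, not part of the conjecture):

```python
def intelligence_ratio(sensory_dims: int, energy: float) -> float:
    """Illustrative intelligence-emergence ratio law: I ∝ S / E.

    sensory_dims: S, the number of sensory dimensions (hypothetical count).
    energy: E, the energy required to achieve the goal (hypothetical units).
    """
    if energy <= 0:
        raise ValueError("energy must be positive")
    return sensory_dims / energy

# A robot with vision + audio (S = 2) spending 10 units of energy on a goal,
# versus the same robot with olfaction added (S = 3) reaching it for 6 units:
baseline = intelligence_ratio(2, 10.0)
with_olfaction = intelligence_ratio(3, 6.0)
```

Under the conjecture, adding a sensory dimension while the energy to reach the same goal drops raises the ratio, which is the sense in which "intelligence emerges".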


How To Do It

GuGuSniff-EMO is a Language-Olfaction-Action (L-O-A) ontological model that learns directly from the physical world.
Olfaction as Physics - Molecules in the physical world are converted into electrical signals through sensors, with no algorithmic compensation required. The sensing comes from the ontology.
Olfaction as Language - Electrical signals are processed through event-driven encoding, sparse coding, semantic vector mapping, and contextual learning. Sensing generates tags.
Olfaction as Action - Based on the rise or fall of the ontology's concentration gradient, autonomous tracking and avoidance, with adaptive start and termination, are realized. Sensing achieves the goal.


What To Do After

There are many constants in the physical world, and if intelligence has no constant like gravity or the speed of light, then there is no limit to the intelligence of biology or machines.
Intelligence is the universe's Sisyphean effort against entropy, and our free will to expand intelligence is the universe's will. Regarding how to expand intelligence, we look forward to any cooperation, such as with organizations that also build general perception hardware grounded in physical ontologies, as well as with researchers in RL.

Meanwhile, artificial intelligence and robots have raised social concerns: will robots cause the majority of people to lose their jobs, and then tighten their belts? The worry does not stem from a sense of "minority influences majority" imbalance; in fact, that imbalance predates the unfettered development of technology and markets. The worry stems from the disorientation of the "future majority" as the "past minority": what to do after?
All the institutional utopias have failed, to the extent that when the utopias of technology and markets arrive, we are at a loss instead, as if we witnessed a deus ex machina but forgot that the drama must end.