AI is evolving faster than our vocabulary for describing it. We may need a few new words. We have “cognition” for how a single mind thinks, but we don’t have a word for what happens when human and machine intelligence work together to perceive, decide, create and act. Let’s call that process intelition.
Intelition isn’t a feature; it’s the organizing principle for the next wave of software where humans and AI operate inside the same shared model of the enterprise. Today’s systems treat AI models as things you invoke from the outside. You act as a “user,” prompting for responses or wiring a “human in the loop” step into agentic workflows. But that’s evolving into continuous co-production: People and agents are shaping decisions, logic and actions together, in real time.
Read on for a breakdown of the three forces driving this new paradigm.
A unified ontology is just the beginning
In a recent shareholder letter, Palantir CEO Alex Karp wrote that “all the value in the market is going to go to chips and what we call ontology,” and argued that this shift is “only the beginning of something much larger and more significant.” By ontology, Karp means a shared model of objects (customers, policies, assets, events) and their relationships. This also includes what Palantir calls an ontology’s “kinetic layer” that defines the actions and security permissions connecting objects.
In the SaaS era, every enterprise application creates its own object and process models. Combined with a host of legacy systems and often chaotic models, enterprises face the challenge of stitching all this together. It’s a big and difficult job, with redundancies, incomplete structures and missing data. The reality: No matter how many data warehouse or data lake projects they commission, few enterprises come close to creating a consolidated enterprise ontology.
A unified ontology is essential for today’s agentic AI tools. As organizations link and federate ontologies, a new software paradigm emerges: Agentic AI can reason and act across suppliers, regulators, customers and operations, not just within a single app.
As Karp describes it, the aim is “to tether the power of artificial intelligence to objects and relationships in the real world.”
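To make the idea concrete, here is a minimal sketch of what an ontology with a kinetic layer might look like in code. All of the names (`OntologyObject`, `register_action`, the "underwriter" role) are hypothetical illustrations, not Palantir's actual API: the point is that objects, relationships, and permission-gated actions live in one shared model that both humans and agents operate through.

```python
from dataclasses import dataclass, field

# Hypothetical minimal ontology: typed objects, named relationships,
# and a "kinetic layer" of actions gated by security permissions.
@dataclass
class OntologyObject:
    obj_type: str          # e.g. "customer", "policy", "asset", "event"
    obj_id: str
    properties: dict = field(default_factory=dict)

@dataclass
class Ontology:
    objects: dict = field(default_factory=dict)    # id -> OntologyObject
    relations: list = field(default_factory=list)  # (src_id, relation, dst_id)
    actions: dict = field(default_factory=dict)    # name -> (required_role, handler)

    def add_object(self, obj):
        self.objects[obj.obj_id] = obj

    def link(self, src_id, relation, dst_id):
        self.relations.append((src_id, relation, dst_id))

    def register_action(self, name, required_role, handler):
        self.actions[name] = (required_role, handler)

    def act(self, name, actor_roles, obj_id):
        required_role, handler = self.actions[name]
        if required_role not in actor_roles:       # kinetic layer: permission check
            raise PermissionError(f"{name} requires role {required_role}")
        return handler(self.objects[obj_id])

# A person or an agent renews a policy through the same governed action.
ont = Ontology()
ont.add_object(OntologyObject("customer", "c1", {"name": "Acme"}))
ont.add_object(OntologyObject("policy", "p1", {"status": "lapsed"}))
ont.link("c1", "holds", "p1")
ont.register_action("renew_policy", "underwriter",
                    lambda obj: obj.properties.update(status="active") or obj)
renewed = ont.act("renew_policy", {"underwriter"}, "p1")
print(renewed.properties["status"])  # active
```

Because human and AI actors go through the same `act` call, the permission check applies uniformly; that is the essence of an ontology's kinetic layer as described above.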
World models and continuous learning
Today’s models can hold extensive context, but holding information isn’t the same as learning from it. Continual learning requires the accumulation of understanding, rather than resets with each retraining.
To this end, Google recently announced “Nested Learning” as a potential solution, grounded directly in existing LLM architectures and training data. The authors don’t claim to have solved the challenges of building world models. But Nested Learning could supply the raw ingredients for them: durable memory with continual learning layered into the system. Taken to its endpoint, that would make wholesale retraining obsolete.
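One way to picture the contrast between resets and accumulation: let different parts of a model update at different timescales, so fast components absorb each new observation while slow components consolidate knowledge over long horizons. The sketch below is an illustrative toy, not Google's actual Nested Learning algorithm; the class name and the period/learning-rate scheme are assumptions made for exposition.

```python
# Illustrative toy (not Google's Nested Learning): components update at
# nested frequencies, so fast "memory" tracks new data every step while
# slow weights consolidate gradually instead of being reset by retraining.
class NestedLearner:
    def __init__(self):
        # Level 0 is the fastest memory; higher levels update less often
        # and therefore integrate over longer effective windows.
        self.levels = [{"period": 1, "value": 0.0},
                       {"period": 10, "value": 0.0},
                       {"period": 100, "value": 0.0}]
        self.step_count = 0

    def step(self, observation):
        self.step_count += 1
        for level in self.levels:
            if self.step_count % level["period"] == 0:
                # Slower levels use a smaller step size, so accumulated
                # knowledge decays slowly rather than being overwritten.
                lr = 1.0 / level["period"]
                level["value"] += lr * (observation - level["value"])

learner = NestedLearner()
for t in range(1000):
    learner.step(1.0)   # a stationary stream of observations
# Fast levels have converged; the slowest level is still consolidating.
print([round(l["value"], 2) for l in learner.levels])
```

The fast level snaps to the signal immediately, while the slow level moves only a fraction of the way in the same thousand steps, which is the qualitative behavior continual learning needs: quick adaptation on top of stable long-term memory.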
In June 2022, Meta’s chief AI scientist Yann LeCun published a blueprint for “autonomous machine intelligence” that featured a hierarchical approach to using joint embeddings to make predictions with world models. He called the technique H-JEPA, and later put it bluntly: “LLMs are good at manipulating language, but not at thinking.”
Over the past three years, LeCun and his colleagues at Meta have moved H-JEPA theory into practice with open source models V-JEPA and I-JEPA, which learn image and video representations of the world.
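The core JEPA idea can be shown in a few lines: instead of reconstructing raw pixels, the model predicts the *embedding* of a masked target from the embedding of the visible context, and measures error in that latent space. The toy below is a deliberately simplified illustration of this principle, not Meta's implementation; the scalar "encoder" and one-parameter "predictor" are stand-ins chosen for brevity.

```python
# Toy illustration of the JEPA principle (not Meta's I-JEPA/V-JEPA code):
# predict the embedding of a masked target from the context's embedding,
# and train on latent-space error rather than pixel-space error.
import random

random.seed(0)

def encoder(patch):
    # Stand-in "encoder": projects raw values to a single latent score.
    return sum(patch) / len(patch)

weight = 0.5  # the predictor's sole learnable parameter
lr = 0.05

for step in range(200):
    image = [random.random() for _ in range(8)]
    context, target = image[:4], image[4:]   # mask half the "image"
    z_context = encoder(context)
    z_target = encoder(target)               # target embedding (held fixed)
    pred = weight * z_context                # predictor maps context -> target latent
    error = pred - z_target                  # loss lives in latent space
    weight -= lr * error * z_context         # gradient step on squared error
```

Because the loss never touches raw pixels, the model is free to ignore unpredictable surface detail and learn only what is predictable about the world, which is the motivation LeCun gives for preferring joint-embedding prediction over generative reconstruction.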
The personal intelition interface
The third force in this agentic, ontology-driven world is the personal interface, which puts people at the center rather than relegating them to the periphery as “users.” This is not another app; it is the primary way a person participates in the next era of work and life. Rather than treating AI as something we visit through a chat window or API call, the personal intelition interface will be always-on, aware of our context, preferences and goals, and capable of acting on our behalf across the entire federated economy.
Let’s analyze how this is already coming together.
In May, Jony Ive sold his AI device company io to OpenAI to accelerate a new AI device category. He noted at the time: “If you make something new, if you innovate, there will be consequences unforeseen, and some will be wonderful, and some will be harmful. While some of the less positive consequences were unintentional, I still feel responsibility. And the manifestation of that is a determination to try and be useful.” That is, getting the personal intelligence device right means more than an attractive venture opportunity.
Apple is looking beyond LLMs for on-device solutions that require less processing power and deliver lower latency when creating AI apps to understand “user intent.” Last year, its researchers introduced UI-JEPA, an innovation that moves to “on-device analysis” of what the user wants. This strikes directly at the business model of today’s digital economy, where centralized profiling of “users” transforms intent and behavior data into vast revenue streams.
Tim Berners-Lee, the inventor of the World Wide Web, recently noted: “The user has been reduced to a consumable product for the advertiser … there’s still time to build machines that work for humans, and not the other way around.” Moving user intent to the device will drive interest in a secure personal data management standard, Solid, that Berners-Lee and his colleagues have been developing since 2016. The standard is ideally suited to pair with new personal AI devices. For instance, Inrupt, Inc., a company co-founded by Berners-Lee, recently combined Solid with Anthropic’s MCP standard for agentic wallets. Personal control is more than a feature of this paradigm; it is the architectural safeguard as systems gain the ability to learn and act continuously.
Ultimately, these three forces are converging faster than most realize. Enterprise ontologies provide the nouns and verbs, world-model research supplies durable memory and learning, and the personal interface becomes the permissioned point of control. The next software era isn’t coming. It’s already here.
Brian Mulconrey is SVP at Sureify Labs.
