
AI in 2025: Three concepts from a Meta and OpenAI researcher that will change everything
- Mustapha Alouani
- AI safety, Foresight
- October 27, 2025
Introduction: beyond hype and fear
The future of artificial intelligence sparks radically opposed opinions. On one side, a skeptical quantitative trader shrugs that ChatGPT is “cool, but it can’t really do [his] job.” On the other, a researcher in a top AI lab believes we have “two to three years of work left before AI takes our jobs.” How do we navigate this vast spectrum of uncertainty?
Amid the confusion, a handful of foundational ideas can deliver much-needed clarity. Jason Wei, research scientist at Meta and former member of OpenAI and Google Brain, distilled his experience into three core concepts for understanding the AI landscape in 2025.
This guide unpacks those concepts—both surprising and powerful—to give you a dependable lens on what’s ahead.
Concept 1: intelligence becomes as affordable as electricity
The first concept is that intelligence, or more precisely access to knowledge and reasoning, is becoming a commodity. The cost and time required to execute complex cognitive tasks are rapidly trending toward zero.
The technical engine behind this trend is adaptive compute. Instead of using a fixed amount of computation for every problem, modern systems adjust their compute budget to match the difficulty of the task. The reason this slashes the cost of intelligence, Wei explains, is that we no longer need to endlessly scale model size. For an easy task, the model can deploy minimal, inexpensive compute.
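As a toy illustration only (the routing rule, cost figures, and function names below are hypothetical, not how any production model works), adaptive compute amounts to choosing a compute budget per query instead of paying the same price for everything:

```python
# Toy sketch of adaptive compute: spend more only when the task demands it.
# The difficulty heuristic and cost figures are invented for illustration.

def estimate_difficulty(prompt: str) -> float:
    """Crude proxy: longer prompts and reasoning keywords count as harder."""
    score = min(len(prompt) / 500, 1.0)
    if any(word in prompt.lower() for word in ("prove", "derive", "optimize")):
        score = max(score, 0.8)
    return score

def answer(prompt: str) -> tuple[str, float]:
    """Route easy queries to a cheap path, hard ones to a long-reasoning path."""
    if estimate_difficulty(prompt) < 0.5:
        return "fast, small-model answer", 0.001   # near-zero cost for easy tasks
    return "extended chain-of-thought answer", 0.05  # larger budget for hard tasks

print(answer("What is the capital of France?"))
print(answer("Prove that the sum of two even integers is even."))
```

In practice the routing decision is learned rather than hand-written, but the economics are the same: easy questions no longer cost what hard ones do.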
This shift has two major implications:
- democratization of domains: industries once guarded by what Wei calls “knowledge-based arbitrary barriers to entry” become more accessible. Coding is a prime example. Personal health is another: as Wei notes, if you wanted to experiment with nasal breathing “biohacks,” a physician might hesitate to advise you. Today, ChatGPT can surface expert-level medical literature instantly, empowering individuals to act on it themselves.
- rising value of private information: paradoxically, as access to public knowledge becomes virtually free, the relative value of private or insider information increases. Knowing about products before they hit the market, for instance, becomes an even more potent competitive edge.
Concept 2: the verifier’s law
The second concept rests on verification asymmetry: for many tasks, verifying a solution is far easier than discovering it. Consider a sudoku puzzle: solving it might take hours, but checking a completed grid takes mere seconds.
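The asymmetry is easy to see in code. Here is a minimal sketch of the checking side only (no solver), under the standard sudoku rules:

```python
# Checking a completed 9x9 sudoku runs in microseconds; producing the solution
# in the first place requires search. Each row, column, and 3x3 box must
# contain the digits 1 through 9 exactly once.

def is_valid_sudoku(grid: list[list[int]]) -> bool:
    digits = set(range(1, 10))
    rows = [set(row) for row in grid]
    cols = [set(col) for col in zip(*grid)]
    boxes = [
        {grid[r + dr][c + dc] for dr in range(3) for dc in range(3)}
        for r in range(0, 9, 3)
        for c in range(0, 9, 3)
    ]
    return all(group == digits for group in rows + cols + boxes)
```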
However, the asymmetry is not universal. Wei offers a telling counterexample: writing a factual essay. It’s straightforward to produce plausible-sounding claims, but “verifying every assertion can be extremely tedious.”
This principle sits at the heart of what Wei calls the Verifier’s Law.
The ability to train an AI to master a task is directly proportional to how easily the result can be checked.
According to Wei, the “verifiability” of a task depends on five factors (a toy sketch follows the list):
- the existence of an objective ground truth
- verification speed
- scalability (the ability to check millions of outputs at once)
- low noise (consistent verification results)
- continuous reward (grading quality on a spectrum instead of pass/fail)
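To make these factors concrete, here is a hedged sketch of a task that scores well on all five: grading numeric answers against known ground truth. The scoring rule is an arbitrary stand-in, not a metric Wei proposes.

```python
# Toy example of a highly verifiable task: grading numeric answers against
# an objective ground truth. The check is fast, deterministic (low noise),
# trivially scalable to millions of outputs, and returns a continuous reward
# rather than a binary pass/fail.

def reward(predicted: float, truth: float) -> float:
    """Continuous score in (0, 1]: closer answers earn higher reward."""
    return 1.0 / (1.0 + abs(predicted - truth))

def grade_batch(predictions: list[float], truths: list[float]) -> list[float]:
    """Each check is O(1), so grading scales linearly with batch size."""
    return [reward(p, t) for p, t in zip(predictions, truths)]

print(grade_batch([3.14, 2.0, 10.0], [3.14159, 2.0, 4.0]))
# close answers score near 1.0; the miss still gets a graded, non-zero score
```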
The implication is clear: tasks that are trivially easy to verify will be automated first. That means one of the fastest-growing opportunity areas will not merely be building AIs, but designing new measurement and evaluation schemes—because measurement is what unlocks optimization.
Concept 3: the jagged frontier of intelligence
The popular notion of a “fast takeoff”—AI suddenly becoming superhuman across every domain at once—is almost certainly wrong. Reality is far more uneven.
Wei uses the metaphor of a jagged edge to describe AI capability. It’s not a smooth, rising line. It has towering peaks—problems where AI excels, like higher mathematics—and deep valleys where AI still struggles. Wei offers a vivid reminder: “for a long time, ChatGPT insisted that 9.11 was greater than 9.9.”
To anticipate which “teeth” on that frontier will climb fastest, Wei suggests three heuristics:
- digital tasks: AI progress is far faster in purely digital environments because iteration speed is almost limitless.
- tasks that are easy for humans: as a rule of thumb, tasks humans find simple tend to be easier for AI to master.
- data abundance: one of the strongest performance drivers. The more data a task generates, the faster AI improves.
The main takeaway is that AI’s impact will be highly uneven. Using these heuristics, Wei offers provocative forecasts: AI research could be automated by 2027, while repairing your plumbing (a physical task) or styling hair will remain “probably very hard for AI.” He jokes that “planning a date night that satisfies my girlfriend” is squarely “impossible.” Some domains, like software development, will be “extremely and heavily accelerated” while others remain “probably untouched.”
Conclusion: navigating the AI landscape
To navigate the near future of AI, panic and extreme skepticism are both counterproductive. Equip yourself instead with the right mental models. Jason Wei’s three concepts provide a reliable compass: intelligence is becoming a low-cost commodity; measurement determines what can be automated; and progress will be spectacular yet uneven. These ideas interlock: the Verifier’s Law helps predict which “teeth” on the jagged frontier will grow fastest, converting fresh cognitive capabilities into almost-free resources for everyone.
Keeping these ideas in mind, which task in your industry will AI absorb first—and which one will remain stubbornly, defiantly human?
Summary inspired by a recent presentation from Jason Wei (Meta AI, formerly OpenAI and Google Brain).