This was written quickly in one sitting on May 27, 2025. It wasn't read over or edited.
Time horizons are increasing exponentially. They're about 2 hours now, and they're doubling every 3 months. In a year, four doublings would get us to 32 hours. That is, we'd have AI which could do what a skilled human can do in 32 expert man-hours.
However, the real figure will likely be higher: AI will increasingly amplify the productivity of ML researchers, and the additional precision with which AI needs to understand the world to reach each successive doubling will decrease. Quantifying these effects is difficult, but let's suppose they sum to a 25% speedup, taking us to 5 doublings rather than 4, which gives us 64 expert man-hours.
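For concreteness, here is that arithmetic as a minimal sketch. The baseline, doubling time, and speedup are the assumptions stated above, not measured values:

```python
# Minimal sketch of the doubling arithmetic, under the assumptions above.
baseline_hours = 2      # current time horizon, in expert man-hours
doubling_months = 3     # months per doubling at the current rate
months_ahead = 12       # forecast window: one year
speedup = 1.25          # assumed 25% acceleration from the effects above

doublings = months_ahead / doubling_months            # 4 at the current rate
print(baseline_hours * 2 ** doublings)                # 32.0 hours
print(baseline_hours * 2 ** (doublings * speedup))    # 64.0 hours
```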
Would AI then be able to act continuously, trying the same task many times until it succeeds, as today's AI struggles to do?
And would it be able to make truly novel discoveries?
To the first question my guess is "yes". The ability to try and try again is certainly encapsulated in how a human behaves over 64 working hours, and for that reason alone I expect the AI to have acquired the skill by that point. This is not to say that AI will be able to do tasks with indefinite time horizons; rather, for those tasks at which the AI has any appreciable success rate, it will have an effectively 100% success rate (as the sketch below illustrates), and it will be able to act agentically indefinitely. It will not, however, be able to do tasks which fall outside its time horizon, which is defined not by how long the AI can work autonomously, but by how long it takes a skilled human to complete the task. Take this last point to heart, as it is what allows one to see that an AI which can work indefinitely does not therefore have an indefinite time horizon, as we are using that term here.
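To make the retry point concrete: with any appreciable per-attempt success probability, independent retries drive the overall success rate toward 100%. A toy illustration, where the 20% per-attempt figure is an arbitrary assumption for the example:

```python
# With per-attempt success probability p, the chance of succeeding within
# n independent attempts is 1 - (1 - p)**n, which approaches 100% quickly.
p = 0.20  # arbitrary illustrative per-attempt success rate
for n in (1, 5, 10, 20):
    print(n, round(1 - (1 - p) ** n, 4))
# 1 0.2
# 5 0.6723
# 10 0.8926
# 20 0.9885
```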
Anyway, to act agentically really means to act at a human level with respect to agency. Humans are not fully autonomous; they rely on collaboration, feedback, assistance, and division of labour. So, by this point (that is, mid-2026, 12 months from now) we'll have AI which has human-level agency and which can do programming tasks that take skilled humans 64 hours. This is perhaps what people refer to as a "drop-in worker".
I note that I have focused here on AI capabilities in programming, but AI may soon become significantly better at math than at programming, in which case mathematics would be the more predictive capability to focus on.
In any case, my impression at this point is that this AI will not be superhuman. The generally foolish idea that people have of AI, that it will simply be human-but-faster, will at this point be more or less correct. The analogues to the human brain in the neural networks which constitute these AI will still be inferior to their human counterparts, or within human range as best we can estimate it; I therefore expect the precision with which these AI can model the world to fall largely within the bounds of human precision in those areas.

However, we humans developed our higher reasoning faculties over an incredibly short span of evolutionary time, and we did not, for example, evolve to be expert mathematicians or programmers. So, to the extent that our abilities are non-plastic, relying on some algorithmic priming carried out through evolution, those aspects of our brains devoted to mathematics and computer programming may be disproportionately poorly developed, allowing AI with minds in many ways lesser than our own to be better tuned to these tasks, and therefore to attain some superhuman conceptions within those domains. But I am skeptical: it seems that most conceptions generalise well; the most important conceptions available to a brain roughly as capable as the human brain, we have already developed, and they generalise well to mathematics and programming. So, to the extent that AI at this point in time (mid-2026) is superhuman at these tasks, I expect it won't be so great as to be largely beyond our comprehension, and therefore the foolish, common conception of future AI will not yet be shattered.
So, this is where I expect we will be in 12 months' time: AI that is remarkably capable at mathematics and programming, but which can still be modeled in terms of human abilities: like skilled humans, but faster, and somewhat conceptually limited, albeit shockingly superior in some limited domains.
What will the market reaction be at this point? Significant. A company such as Alphabet could see its market cap approach 10 trillion dollars, or something along those lines (I'm not attempting to be greatly precise here). But the reaction will still be very limited compared to what it would sensibly be; market participants will still not be thinking in terms of total human obsolescence, of AI whose world-model is vastly more precise and comprehensive than our own, of science and industry totally beyond our comprehension, of our entire cosmic endowment.
The few months that follow this point, twelve months from now, will be the final vaguely normal months we experience as a species.