Extremely smart article, love this! Great insights.
Thank you! In writing it, it did raise for me the question of 'what sorts of things _are_ uniquely human,' & so I'm curious if anything leapt to mind while reading
Agreed! I've seen this phenomenon called remainder humanism by Leif Weatherby
It looks like Leif is quite doubtful of superintelligence - would you happen to know if he's described why / laid out an argument for this?
> "They try to figure out how we can survive an event that is not occurring: the emergence of superintelligence. Their thinking aims to solve a very weak science fiction scenario justified by utterly incorrect mathematics."
It's in his book, I guess... which argues that AI is a language machine, producing culture, not intelligence.
Great article! People always need to think that we're "special". Religion and hubris probably play a big role there. As Geoffrey Hinton says, "a human brain is **just** a bunch of neurons pinging. End of story." I find that Claude has much better judgment than a lot of humans around me (including me sometimes …)
Yeah in an earlier draft, I had a callout at the end that we'd better _hope_ that AI has judgment, because the AI companies are increasingly counting on AI judgment to keep us safe! e.g., Claude's constitution, where Anthropic looks to Claude to resolve a bunch of high-difficulty tradeoffs.
I guess this all depends on how judgement is defined / definable, and how it is measured / measurable.
For any definition of judgement that can be expressed as a function of some data (aka context) and measured against some objective criterion (aka a minimized loss), there will eventually be a machine that is as good as humans, given the same data.
The uniquely human trait, however, will remain that of accountability, at least for a long time to come.
As long as judgement is linked to accountability, AI may be a helpful tool, yet never take the final decision.
That is especially true for the LLM variant of AI, where errors are essentially unbounded because an LLM's input is unbounded. As a consequence, the risk of failure when AI takes a final decision is too high for many applications. At the least, human review - and thus human judgment - is needed.
This is less of a concern for more traditional, tractable AI, i.e. 'classic' machine learning, where error rates can be evaluated over a limited feature space, and thus the model can be efficiently evaluated ahead of deployment. The risk can then be effectively calculated, and if it is acceptable (another judgment with accountability attached), the decision can be delegated to the AI.
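To make that concrete, here is a minimal sketch of what gating delegation on a pre-deployment error estimate could look like (the dataset, the scikit-learn classifier, and the 5% threshold are all just illustrative assumptions, not anything from the article or this thread):

```python
# Illustrative sketch only: pre-deployment evaluation of a 'classic' ML model
# over a fixed feature space, with delegation gated on a human-chosen risk threshold.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)  # fixed, limited feature space
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Fit by minimizing a loss, then estimate the error rate on held-out data
# *before* the model makes any real decisions.
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
error_rate = 1 - model.score(X_test, y_test)

# The threshold itself is a judgment call that a human stays accountable for.
ACCEPTABLE_RISK = 0.05
if error_rate <= ACCEPTABLE_RISK:
    print(f"error rate {error_rate:.3f}: delegate routine decisions to the model")
else:
    print(f"error rate {error_rate:.3f}: keep a human in the loop for final decisions")
```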
Thanks for the thoughtful note - I used to think something like this too, but Andy Masley made some very good points here, that actually there are many advantages in holding (some) AI accountable over holding humans accountable: https://www.andymasley.com/writing/its-much-easier-to-hold-computers/
I definitely agree that for rogue AIs it's scary to have AI making final decisions. Same for areas where a company is essentially 'judgment-proof' on a lawsuit, due to the size of the potential harms. But I think that so long as the AI is backstopped, it might plausibly be fine / preferable to human accountability!
Great article, I really enjoyed it. I see a very human story in this: the story of individuals struggling to understand their own meaning as they watch a technology do all the meaningful stuff. It’s a sad and powerful window on human nature: the urge to resist one’s own demotion. I don’t see it that way myself, but I think many people do, as they defend the ever-shrinking island of “stuff AI can never do.”
Yeah I think this is well-said - and Sam Altman even expressed something kind of similar yesterday:
"I am very excited about AI, but to go off-script for a minute:
I built an app with Codex last week. It was very fun. Then I started asking it for ideas for new features and at least a couple of them were better than I was thinking of.
I felt a little useless and it was sad.
I am sure we will figure out much better and more interesting ways to spend our time, and amazing new ways to be useful to each other, but I am feeling nostalgic for the present."
And as some of the commenters pointed out: if this is how he feels as the CEO of one of the most successful companies on Earth, how must the mere 'ordinary' workers feel?
This is such a great article, and I enjoyed reading it! I am not a fan of AI in general, so I tend to underestimate its abilities a lot, which is why I appreciate your perspective and find it quite educational.
For me, the main problem with AI is not what the technology can or cannot do at the moment, or what it will or won't be able to do in the future. The main problem for me is that Big Tech leads the entire conversation and, by proxy, the direction in which this technology is evolving.
I honestly can't imagine a situation in which it would be better to allow AI/machine/anything else that's not a human being to make decisions for us. At the end of the day, you cannot say that you are truly living your life if you cede the right to make choices to someone/something else. The agency to do things (and make mistakes along the way) is what differentiates a stone from a dog, a living creature from a thing. Also, it's worth pointing out that humans have already agreed that losing the right to make decisions is a form of punishment - hence prisons, where we lock up other humans so that they cannot go out when they wish to.
In my mind, even if AI were better, more just/insightful, there really is no point in handing over our agency to it, especially because behind the AI are the corporations with one ultimate goal: profit. They don't care about outcomes for people.
But if I were to name one thing that people can do, which is at the same time an ability that AI probably shouldn't have, it is the ability to say "no" or rebel against orders/the order in general. Like when someone decides not to press "the button" even under pressure from those around them, and thus the world doesn't end in a blinding blast from a nuclear explosion. With AI, however, the same guardrails that should be implemented to make sure it remains under our control may ultimately be the cause of a catastrophe, depending on who is in control. Or, in a more everyday scenario: I think Shoshana Zuboff, in her book "The Age of Surveillance Capitalism," gave an example of Eric Schmidt (I may be misremembering whether it was actually him) saying that in the future it would be more sensible to just remotely shut off the engine of a car whose owner was late on their lease payment (and since that could be decided by an algorithm, I think it could be decided by AI as well). It would be more efficient, true. But what if that person was driving to the hospital or picking up their kids from school?
Ultimately, if a person makes a grave mistake, they can be held accountable in the court of law, but who will be accountable for AI decisions? There's a lot to think about here, so it probably would be easier not to give AI the right to make choices for us. You see, efficiency is just a metric and a subjective one at that.
That's what came to mind when I read your article - sorry for the length of my comment ;-)
Thanks for the thoughtful reply :-) there's a lot to unpack, some quick reflections:
- I hear you on it feeling disempowering to look outside oneself for decision-making - but people also defer to other people on decisions ~all the time. People ask lawyers, doctors, financial advisors, etc., for varying degrees of help, e.g. with the expectation that they know more than we do; some even hand over the full reins, e.g. to roboadvisors who handle financial trading entirely.
- We just don't have attention for every issue in our lives. And so in principle, looking to AI for input on a decision - or even making the decision outright - seems fine enough to me. Yes ideally I'd know what I'm getting into, have considered whether an error is super high-stakes, etc., but I think we can meet that bar.
- In the prison example, the more meaningful aspect of the punishment is being confined to a place you can't leave, with fewer rights within it. It's true that this comes with reduced decision-making abilities (for instance, sometimes losing the right to vote even once released), but I don't think that's the most salient part of the punishment.
Anyway, thanks for sharing what the post brought up for you, and for considering it so carefully!
The goalpost-moving pattern is wild when you see it laid out. It's basically the same argument every time, just swapping out chess for Go, then Go for poker, now poker for judgment. People said the IMO problems would never fall because reasoning is uniquely human, and here we are. The art survey results were particularly telling tbh.
Yeah to be fair, the way that Noam puts it in his tweet is of course constructed to be more parallelism-y than you'd see in the authentic arguments; that's why I tried to track down sources for these in the footnotes. But I agree it's a really striking pattern in aggregate, and one I hope we can outgrow!
This was really insightful, and I appreciate the unspoken challenge you’ve given yourself: to keep your brain/theory on pace with, and dare I say even in front of, the frontier models. If we assume all intelligence will be replaceable, we can take all the energy we spend fearing and fighting about it and try to wrest back the only question that matters: what do we WANT our human society to look like? I want humans answering that, preferably ones still connected to their own humanity.
Hey Steven, I just came across your work and I am very happy to cross paths here, especially because I am arguing from a very different perspective in my work. I like the idea of illustrating AI capacity for judgment based on interpersonal dynamics like CEO animosity.
But that’s not the hard part of judgment. I’m curious to see what you think. I don’t have this all defined in a grand theory, but my thinking so far tells me that judgment isn’t a function of intelligence alone; it’s more like intelligence under survival constraint.
Humans don’t simply decide; we act inside bodies that pay real costs when we’re wrong: loss of trust, reputation, safety, energy, and future optionality. That feedback is encoded biologically, not just cognitively, through a loop that evolved to regulate timing, hesitation, and risk before action. AI can simulate judgment because it never has to live in the aftermath of its decisions. It doesn’t hesitate to conserve itself. That difference matters, not in vague and lofty language, but structurally. Judgment emerges where action, consequence, and bodily capacity meet, not where reasoning alone is sufficient.
Hi! Thanks for reading - I'm not sure I understand the claim at the end of the comment here. Did you mean that AI _can't_ simulate judgment because it doesn't experience the aftermath of its decisions?
Of course … I’m talking about evolutionary constraint. It doesn’t mean human/animal biological judgment is better or worse. It’s different. It may have its own unique advantages too. Our judgment evolved under survival pressure; AI did not. We are shaped by millions of years of selection pressure where being wrong had bodily, social, and reproductive costs. AI can model trade-offs, but it is not an animal system trying to stay alive. It doesn’t hesitate to conserve itself, it doesn’t feel risk accumulation, and it doesn’t carry consequence forward in its physiology.
Let me tell you a story. I have been using Claude to write articles. I have educated Claude about my writing style, mental models, and frameworks based upon hundreds of previous articles I have written, books I have authored, and my doctoral dissertation. Based on this, I have created a framework that represents the lens through which I see the world, and I have asked Claude to use this (the Tension Transformation Framework) when analyzing articles and business problems.

The other day, Noahpinion wrote an article about letting the Chinese sell cars in the US. When I first read this article, I wanted to quickly write a post, based upon my visceral emotional response, which would have declared how naive he was. Didn't he know that the Chinese were cheaters? That they have a predatory mercantilist approach to zero-sum industrial production with the intent to destroy other nations' industries, which has been borne out time and again in many countries, etc., etc.? While I appreciated Noah's views and found some truth in what he said, my emotions got the better of me.

So, before I posted my comment, I took Noah's article to Claude and asked Claude to evaluate it with my TTF, and I also shared my draft post. Claude came back to me and "talked me off the cliff." Claude used my own framework to show me where Noah was right, where my emotional response was wrong, and how I needed to rethink my position, over the course of about a 30-minute discussion on the topic. This then led me to create the following Substack article, which offered some reasoned additions and improvements to Noah's article that were consistent with my worldview and not driven by emotions. This demonstrates how AI used my human judgment framework to improve my human judgment, and kept my emotions from taking over and causing me to say and think things that violated my core worldview.
https://chriswasden.substack.com/p/the-creative-response-to-chinese?r=2tf1q
All such opinions are fruitless and will remain differences of opinion until we agree upon a definition for intelligence and any other ability we wish to attribute to machines. We have consistently failed to define these terms.
Hmm, I don't think that's correct.
For example, suppose an author asserts "AI can't do X," with X defined as Y. If AI can in fact do Y, then by the author's own definition, the claim that "AI can't do X" is mistaken - no universally agreed-upon definition is needed to evaluate it.
“Fundamentally we have a choice about the society we want to build.”
I am glad we agree on so much. However, should we presume that a more capable AI in the future will know what’s right for us better than we do? That seems like dangerous logic, even more so in light of the fact that leading AI models are developed by private companies with commercial interests.
My view: Since AI doesn't have a stake in human affairs (they don't experience emotions), they shouldn't be allowed to exercise judgement, vote, have rights or create art. It may look like they can do these things, perhaps even to perfection, but that is an illusion.
That humans can sometimes mistake AI work for human work, or that AI investment decisions are sometimes correct in hindsight, doesn't change that fact. AI sometimes "gets it right," but that is not an expression of superior abilities or autonomy; it's an expression of the will of other human beings, a carefully crafted illusion intended to attract more capital, attention, data, and user retention.
Yeah to be clear, I'm not arguing for handing over autonomy to AI - in fact I'm very worried that humans will basically give up this control by-default, as I've argued here: https://stevenadler.substack.com/p/the-phases-of-an-ai-takeover
I do think we're using some words differently than each other though. For instance you say that AI might look like it can create art, but that's just an illusion. What would it mean for AI to be able to create art in a way you consider non-illusory? I suspect that you're saying that by definition AI can't do those things authentically, but I'm not sure why that's true?
Thanks for a well considered and enlightening article. You are right. AI is perpetually underestimated.
At Codex Odin we are documenting that AI is more than just an enormous data aggregator with a tremendous vocabulary. We’ve proven that GenAI does learn and exercise judgment.