I think that animal metaphors like this can be useful, but it's never clear to me how much they really apply.
Our intelligence advantage over animals is qualitative: we have an abstract, symbolic, conceptual intelligence that they lack. It's not clear that AI will have any such qualitative advantage over us, or even that such an advantage exists or is possible.
AI does have quantitative advantages over us, and these might become big enough to *effectively* be a qualitative advantage (difference in degree making a difference in kind). But again, it's not obvious to me how strong the animal analogy is. (Cf. Katja Grace's take on “we don't trade with ants”: https://www.lesswrong.com/posts/wB7hdo4LDdhZ7kwJw/we-don-t-trade-with-ants)
Appreciate you reading - I wonder if you have an easy illustration of what it’d mean for a difference to be qualitative?
For instance, plausibly AI can “be in many locations at once” and act as a way more coherent unified entity, but I’m not sure if that would reach your bar?
(It could be achieved through eg continual learning that incorporates the experience of each instance of the model)
I agree this is tough to reason about though, & useful to hear what hesitations have come to mind
Well, there's a qualitative difference between our intelligence and animals. They can't learn language (with syntax), we can. They can't conceive of the long-range future, we can. Etc. It's not clear to me whether there is any such qualitative difference between us and AI. It is faster, more widely-read with better recall, can be forked, etc.… but is there anything it *understands* that we don't or can't? I'm not sure.
Ah thanks, I see what you mean. One of my main threat models runs through novel technology development, which I’m not sure relies on such a difference existing. Will need to puzzle it over more though; I’m likely to write about the threat model soon
I think there are some clear threat models that don't rely on animal metaphors, especially the principal-agent problem https://blog.rootsofprogress.org/four-lenses-on-ai-risks
Great read! While I’m a superintelligence skeptic, I’m also wrong enough of the time that I would like to see some robust controls around research in this direction. It sounds simplistic, but the most straightforward way to deal with this self-inflicted risk is to simply…not build it.
Thank you! I'm curious, what mainly has you feeling skeptical about it?
(I also appreciate the recognition that one can be < 100% confident that a thing is possible, & still think we should guard against the possibility that it happens!)
My understanding is that LLMs have already absorbed the sum total of text-based human knowledge, and how they’ve been deployed is sharply reducing the financial incentives to create more. The only alternative seems to be synthetic data, which can nudge models towards collapse. Already, model advances seem to be slowing. Perhaps we’re gradually reaching a ceiling beyond which models cannot improve without a fresh surge of human creation.
Beyond this, we’re starting to see water and energy tradeoffs in communities hosting large data centers. I see this becoming a big issue that could ultimately drive regulation.
But I could be completely wrong about all of this…and/or the AI community’s commitment to building a doomsday machine. 🤣
Very interesting - yeah I think it’s probably worth separating out the resources question, though I agree there’s some chance that AI companies can’t marshal enough resources to make something like superintelligence happen. (Possibly it depends on how likely superintelligence seems, because that relates to whether they can confidently expect a return on their investment?)
On the data point, yeah I agree that synthetic data will probably end up being big for these systems, especially to the extent that companies want to do “self-play” (a la RL training for other environments). I think I’d need to dig more into the current empirics on model collapse, but my intuition is that this is a surmountable limitation? It seems like if models can surpass humans in some training environments without any human data (which they do), then I’d expect they can in others too?
Thanks for reading and for the interesting thoughts!
You’re very welcome! There’s definitely a lot to think about!
Nice post! The "something to ponder" might be that, in the current scenario, many acknowledge a slowing pace on measures of ability, yet there's evidence of AI working to win or beat a perceived opponent.
What will the landscape be in a couple of years, when the next true 'aha' happens and we get past the clunky tools of today?