Discussion about this post

Jason Crawford

I think that animal metaphors like this can be useful, but it's never clear to me how much they really apply.

Our intelligence advantage over animals is qualitative: we have an abstract, symbolic, conceptual intelligence that they lack. It's not clear that AI will have any such qualitative advantage over us, or even that such an advantage exists or is possible.

AI does have quantitative advantages over us, and these might become big enough to *effectively* be a qualitative advantage (a difference in degree making a difference in kind). But again, it's not obvious to me how strong the animal analogy is. (Cf. Katja Grace's take on “we don't trade with ants”: https://www.lesswrong.com/posts/wB7hdo4LDdhZ7kwJw/we-don-t-trade-with-ants)

Karen Spinner

Great read! While I’m a superintelligence skeptic, I’m also wrong enough of the time that I would like to see some robust controls around research in this direction. It sounds simplistic, but the most straightforward way to deal with this self-inflicted risk is to simply…not build it.

