Terrific angle. If China also looks at AI development through this lens, escalation in Taiwan seems imminent. The US has indigenized only a single-digit percentage of leading-edge semiconductor capacity. Furthermore, China is likely to scale up what is already arguably the most well-resourced (cyber)espionage program in the world. Even if the US somehow defends against exfiltration, algorithmic unlocks will still swiftly flow out of America and into leading Chinese projects.
Containing China's development of powerful AI systems will hinge first on deterring escalation in Taiwan and cracking down on economic espionage. America's lead in stack size, and with it its leverage, erodes in the face of each of these risks. These two areas will be vital proving grounds for American diplomacy as it hopefully builds up the muscles needed for AI non-proliferation.
I appreciate this analysis, but I'm worried about how it might be interpreted. You make a case for mutual deceleration and cooperation, but I think the framing could lead to misunderstandings.
The early focus on military threats and why "beating China" matters, combined with the "containment" language and poker analogies, could give casual readers the wrong takeaway. Someone skimming this whose main impression is "China getting AGI first would be catastrophic" could easily conclude "therefore we need to race to AGI as fast as possible and make sure China never gets close".
Your actual interesting policy recommendations come much later. But I worry that message gets lost in all the competitive framing upfront.
Maybe this is exactly how you intended to structure the argument, since you do seem a lot more open to racing in case cooperation fails than I am. But I can't help thinking that in our current environment, anything that starts with "here's why we can't let China win" risks dangerously reinforcing the racing mindset.
I tend to think slowing down makes sense regardless of what China does, so perhaps I'm just more sensitive to language that might be interpreted as pro-racing. But given how high the stakes are, I think the messaging on this stuff really matters.
I appreciate the thoughtful note and engagement. My first preference is definitely that the US and China try much harder to cooperate and head off the race (or at minimum, de-escalate the stakes). I do think there's a missing POV among folks who believe the race is necessary, which is 1) grokking exactly how hard it's going to be, and 2) that many of the challenges with cooperation actually apply to "winning" the race as well, and so given that we need to overcome those, we may as well aim even higher (by trying to head it off).
I'm less sure that it makes sense for the US to slow down if China won't; I'm curious about your view on why. I think mine might be informed by my belief that many safety practices can and should be adopted by the US with basically a negligible slowdown, if any. (And in fact, that if we don't adopt these safety practices, we will really regret not being able to count on our AI systems being trustworthy when the time comes.) I'm probably going to write a future post on this, at least if there's interest: why racing and safety really aren't, and shouldn't be, opposites.
I'm not sure racing and safety can or should go together. I know alignment research and capabilities often go hand in hand, but I have a strong intuition that slower means safer.
I'm confident that if the US slows down, China will too. There would be much less pressure on them to go as fast as possible. My impression is that China might actually be more worried about misaligned AI than the US. If they see the US slowing down despite having the capacity to win, I think they'd be concerned about rushing ahead.
This impression could be wrong, and there's real risk that China wins if the US slows down. But for the sake of humanity's survival, I'd rather live in a world where China gets AGI in 15 years than where the US gets it in 5.
One interesting thing to potentially notice: “slower → safer” doesn’t necessarily imply that “safer → slower”. That was more so the point I was trying to make: regardless of whether going slower is safer (I certainly agree it is from a first-order perspective, where you don’t need to account for other actors’ responses), some ways of focusing on safety just might not slow the US down very much at all.