Which will come first?
Intentional detonation of a nuclear weapon as an act of war
or
A battle with robots fighting robots as the dominant form of combat
Terms:
The basic terms are fairly straightforward in the first case--a nuke is detonated on purpose, designed to hurt targeted victims. But I guess there could be some ambiguity: what if a detonation is attempted but somehow fails or is thwarted, or the device fizzles rather than properly explodes? In the interest of specificity, I will stipulate that the event must be a true, intentional nuclear explosion.
In the second case there would seem to be a lot of room for interpretation. Let us stipulate that it would need to be a significant engagement with at least a potentially meaningful effect on a larger conflict, if not the entire war by itself. This must be a major conflict in terms of world events. It must involve at least one nation state, with the opponent being at least a major aggressor (a significant terrorist group, etc., if not a state-level actor itself). For the fighting to count as robot-on-robot, humans cannot be directly targeted in the robot-versus-robot combat--collateral damage notwithstanding, along with other human involvement/risk as a secondary part of the combat. I will allow that the devices doing the fighting can be "dumb" devices like drones fully controlled remotely by humans, but extra credit to the degree these are autonomous entities.
Discussion:
Tyler Cowen has been thinking a lot about nuclear war and nuclear device detonation recently, including before the Russian invasion of Ukraine. His latest Bloomberg piece discusses just how thinkable the "unthinkable" has become. This is part of a bigger, much-needed rethinking of MAD.
Tyler's partner at Marginal Revolution, Alex Tabarrok, is also in the game, contributing this overview of the related probabilities.
Thankfully, Max Roser has done the math for us. Relatedly, he argues that "reducing the risk of nuclear war should be a key concern of our generation". Before we get too excited about a white-flash end to civilization, consider as gentle pushback this piece arguing that nuclear weapons are likely not as destructive as we commonly believe--make no mistake, they are still really bad.*
If Roser is roughly correct, then within a decade we are at a 10% chance of nuclear war. I am not sure if his "nuclear war" would be equal to or a different level from what would qualify in this WWCF. Suppose it is a higher threshold. Let's make the probability of nuclear weapon use as defined here slightly higher each year, such that there is a 20% chance within 10 years (basically equal to his 2% annual risk curve). This gives us a baseline for comparison.
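For the arithmetic behind those figures: a constant annual risk p compounds to 1 - (1 - p)^n over n years. Here is a minimal sketch, assuming independent, constant annual risk (itself a simplification of Roser's framing):

```python
# Chance of at least one event in n years, assuming an independent,
# constant annual risk p.
def cumulative_risk(p: float, years: int) -> float:
    return 1 - (1 - p) ** years

print(f"{cumulative_risk(0.01, 10):.1%}")  # ~9.6%, Roser's roughly 10% within a decade
print(f"{cumulative_risk(0.02, 10):.1%}")  # ~18.3%, the ~20% baseline used here
```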
Turning to Rock 'Em Sock 'Em Robots, it is not as farfetched as I think most people believe. In fact, we may be quite close to it as defensive systems like Israel's Iron Dome prepare to confront adversaries like drones and Saudi Arabia battles against drone counterattacks from Yemen. As Noah Smith writes, "the future of war is bizarre and terrifying".
It does sound terrifying in one reading, but in another there is a glimmer of hope. A proxy war using robots to settle disputes could be vastly better than any conflict humanity has known before. Imagine a world where the idea of a human being physically harmed in combat was unthinkable. This is not too many steps away from professional armies, rules of engagement, and the norms, laws, and treaties against harming civilians.**
Back to the issue at hand: once we consider that dumb, remotely driven/released weapons might soon be battling smart, sophisticated devices, with either side playing defense against the other, it quickly becomes easier to foresee it all happening. The hardest hurdle might simply be whether the conflict is big enough to qualify.
My Prediction:
I think nuclear risk is a lumpy, non-normal risk that follows a random walk (i.e., it can all of a sudden get a lot more likely, but that likelihood can get absorbed away if conditions improve). It is not as linear and cumulative as Roser suggests. At the same time, play the game long enough and anything will happen.
Robot battles seem more like a cumulative progression, an inevitability. We almost cannot escape it eventually happening, and probably soon. So this comes down to how likely a nuclear pop is in the very near term as it tries to outrace the tortoise of robot warfare. Just like in the fable, the turtle is going to win.***
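To make that intuition concrete, here is a toy Monte Carlo sketch of the race. Every number in it is my own illustrative assumption--a drifting ~2% annual nuclear hazard versus a robot-war hazard that ratchets up each year--not an estimate from Roser or any other source:

```python
import random

# Toy Monte Carlo of the race: lumpy random-walk nuclear risk vs.
# steadily accumulating robot-war risk. Parameters are illustrative only.
def race(trials: int = 100_000, horizon: int = 30) -> float:
    robot_first = 0
    for _ in range(trials):
        nuke_p = 0.02    # lumpy risk: drifts up and down like a random walk
        robot_p = 0.02   # cumulative progression: ratchets upward every year
        for _ in range(horizon):
            nuke = random.random() < min(1.0, max(0.0, nuke_p))
            robot = random.random() < min(1.0, robot_p)
            if robot and not nuke:
                robot_first += 1
            if robot or nuke:
                break  # race resolved (a simultaneous hit counts as a tie)
            nuke_p += random.gauss(0.0, 0.01)  # shocks can add risk or absorb it away
            robot_p += 0.02                    # steady technological inevitability
    return robot_first / trials

print(f"Robot-on-robot combat wins the race in about {race():.0%} of runs")
```

Vary the drift and ratchet parameters to taste; the point is only that a hazard which compounds steadily tends to beat one that wanders.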
I'll put it at 75% confidence that we see this one resolve as robot fights robot.
*Of course other future potential weapons that are not nuclear can be extremely scary too--"Rods from God" doesn't just sound very ominous; it truly is.
**Then again, maybe not:
As a result, conflicts involving AI complements are likely to unfold very differently than visions of AI substitution would suggest. Rather than rapid robotic wars and decisive shifts in military power, AI-enabled conflict will likely involve significant uncertainty, organizational friction, and chronic controversy. Greater military reliance on AI will therefore make the human element in war even more important, not less.
***I know they aren't the same thing!
P.S. When I first conceived of this WWCF, I thought I'd be comparing robot wars to lasers as prolific, dominant weapons. I changed it as laser weaponry seemed to be consistently failing to launch. Great strides have been made recently in this realm, so perhaps I was too hasty. But thinking about it more, I would guess that robot war will go hand in hand with laser weaponry. The development of one spurs the development of the other such that there isn't much room for a WWCF.
P.P.S. The ultimate tie would be an AI launching a preemptive nuclear strike on a rival nation's AI or other robot weaponry. Let's hope if they do this the battle is on Mars.