
Almost everyone agrees that the speed of AI development is risky, yet no one seems able to slow it down.
After the release of ChatGPT, several leading AI researchers publicly warned about the dangers of extremely rapid progress. Yoshua Bengio, Geoffrey Hinton, Stuart Russell, and others signed open letters calling for a pause in training increasingly powerful systems. One thing became clear: this had turned into an out-of-control race. [1]
And yet, nothing stopped. Development continued, and newer models were deployed at an accelerating pace. Infrastructure expansion plans grew larger, datasets expanded, and computing power increased within short periods of time. Major tech companies pushed forward regardless. The feeling emerged that all players were locked into competition, almost against their own will. [2]
Why? The logic is simple: if one company slows down, another will move faster. As Tristan Harris puts it in his TED Talk, the common justification is: “If I don’t build it first, someone else will.” [2]
The problem is that there is no global coordination guiding this process. In the past, technologies like nuclear weapons were slowed by treaties and binding safety standards. With AI, such rules barely exist. Voluntary commitments are fragile and easy to reverse.
If responsibility keeps losing to the speed of deployment, slowing down may no longer be a choice. And then, the only thing left to hear might be: sorry human.