There Shouldn't Be an AI Race

The phrase “AI race” has been in use since the mid-2010s, but its use has surged in recent months as more rival AI models enter the scene. If you’re reading this, you’re probably aware of the DeepSeek news from January, which sparked a lot of discussion about the race.

I’ve been following research in the field of AI since 2016, just after the boom in deep learning methods for computer vision and, more specifically, the invention of the residual neural network (ResNet). At the time, big labs like OpenAI and DeepMind didn’t have any consumer-facing products on the market. The focus was on reinforcement learning (RL), multi-agent interaction, self-play for games, and simulation-to-reality (Sim2Real) transfer for robotics. Almost every year brought cool new methods across different modalities.

The field has concentrated on large language models (LLMs) since the invention of the transformer architecture in 2017. We saw a lot of good models after that: BERT (Bidirectional Encoder Representations from Transformers) and the generative pre-trained transformers (GPT-1 through GPT-3). Widespread adoption of LLMs began when OpenAI released ChatGPT in 2022, which is partly why most people now view “AI” as synonymous with a chat interface or a language model. These language models are trained and fine-tuned with varying RL techniques, and they get better with each new release.

We’ve seen a lot of open- and closed-weight language models in the last few years (note that open-weight isn’t the same as open source). At the frontier of closed-weight models, you’ll find OpenAI, Anthropic, and Google. In the open-weight arena, we’ve seen impressive models like Llama (by Meta), Qwen (by Alibaba), Mistral, Phi (by Microsoft), Aya (by Cohere), and DeepSeek. I’ve played with most of the closed- and open-weight models, and I think everyone in the space is doing impressive work.

What I’m most concerned about is why we need to frame the development and distribution of AI models as a race. Why does one company or nation need to win over another? Most people in this field claim to be doing important things — work that, according to them, is necessary for solving human problems. If we believe this work is important for humanity, shouldn’t we be happy that anyone is building these tools, regardless of which company or nation builds them? Why should we believe that corporations with vested interests, or nations that like to display their superiority, are racing to benevolently give us AI?

You might think governments are primarily invested in this race because of economic prosperity, productivity, and adjacent benefits, but that is not the case. Their core concern is military power: autonomous weapons, enhanced surveillance, cyber warfare, and AI-assisted decision-making. They either want to develop it first or fear that some other nation might beat them to it. For years, there have been calls for more cooperation and regulations on the ethical use of AI, especially in warfare, but what do we get? Increased propaganda, finger-pointing, and fearmongering.

I read The Dream Machine last year, and it made me question why there has historically been more acceleration during times of war and fear. It seems we have always taken an ethically flawed path towards scientific and technological progress. Why is the price we’ve always paid for such progress copious amounts of human and economic waste?

Another concern is why one country thinks it has a moral duty to lead the race. The same thing is already happening in quantum computing. I cannot comprehend this “us good, them bad” mindset that continues to perpetuate itself in the global technological landscape. It has become stale, and I wish people could see that it serves no purpose. The same dynamic played out during the race to put a man on the moon. A race of this type might help us iterate and solve problems faster: the space race produced rapid progress in satellites, GPS, computing, and materials science. But, to be honest, showcasing national superiority was one of its goals. We will likewise see progress as AI tools are developed and deployed at scale. It is all inspiring, but who does it help in the long run if a certain nation wins this “race”? How will their gain of power or monopoly affect us? These are important questions to ask. Competition sparks and drives innovation, but we should be racing towards safe and beneficial technology.

These tools are being distributed at scale to all kinds of people on Earth. People don’t use them only for work, creative, or academic pursuits; they are now embedded in our personal and interpersonal relationships. I’ve heard people say they use ChatGPT as their therapist. I’ve seen people use these tools to construct replies when texting friends or lovers. Not only are these tools augmenting us, they are augmenting the role of other people in our lives. The sort of thing you would discuss with a trusted friend can now be outsourced to a language model. A section of the population frowns upon people who use AI tools to generate visual art, creative writing, and other seemingly “serious things.” At the core of these concerns is the fear that we’re outsourcing our agency and understanding to machines. I think it is already too late to worry. These tools are here, and people will use them for all sorts of things. Given how deeply embedded they will become in people’s lives, we must take practical steps to make them safe and beneficial.

There are existing examples of international collaboration, such as the European Organization for Nuclear Research (CERN) and the International Space Station (ISS). These collaborations demonstrate that nations can come together to build infrastructure that advances science and technological progress in a non-zero-sum manner. The World Wide Web (WWW) came out of CERN, and that work has been deeply beneficial to everyone on Earth. The ISS is a marvelous piece of work, and I doubt any one nation could have accomplished it single-handedly.

I love technological progress as much as any accelerationist you know, but I do not think that one company or nation should win or be the biggest decision-maker in the development and propagation of AI tools. We should use the ISS as a model for international cooperation and collaboration. We simply cannot trust that a winning nation or company has the best interest of the general population at heart.

The semiconductor/chip war also has direct ties to the race we’re observing. Semiconductors are the backbone of present-day computing, and massive amounts of computing power are needed to develop and deploy AI models. The fight over semiconductors and chip-fabrication companies has added fuel to the trade and geopolitical tensions that continue to build globally. Even more, we’re watching governments artificially disrupt supply chains to gain an advantage and take away the edge others might have. I do not want to debate the ethics of this because, quite frankly, geopolitics isn’t my forte. However, one must at least acknowledge that nations are using their power to stir up international conflicts that serve no one. It is a divide-and-conquer strategy that has been employed throughout history.

I invite practitioners and everyone else to focus on what matters. We have a responsibility to develop and deploy intelligent tools that are safe, that serve us in the best possible way, and that align with our intentions and needs — whether they are LLMs or other models that mimic or synthesize intelligence. The world we live in today is plagued by big problems like climate change, poor healthcare, inequality, and food insecurity. The tools we develop should help us address these concerns. I deeply believe that collaboration and consideration are the best paths forward. There should be no arms race disguised as an AI race. We should all be aware that behind the fancy talk of nations is their intense struggle for military dominance and power.
