Since I caused a minor firestorm writing about growth in my last Substack post, I thought I would move to the much calmer waters of the AI meltdown this time. What could go wrong? And I do so for a self-interested reason. This week on Rethink (coming up this Thursday) we have an episode called ‘Is Big Tech Stealing Your Life?’, which will look at whether AI companies should be paying for the intellectual property - books, art, music, news - that they use to train their models. They hope not to. Content creators disagree. So hear me chat to Ben Zhao, Justine Roberts from Mumsnet, Jack Stilgoe and more. And if you missed it, you can hear me walking around the building site at the new London Museum and talking museums with Stephen Bush, Sara Wajid and Tony Butler here.
The last week may go down as one of the most hubristic moments in American history. On Monday, Donald Trump’s inauguration as the 47th President of the United States was cheered on by a line of the richest men in the world. There, clapping away gleefully, were Mark Zuckerberg, Jeff Bezos, Sundar Pichai and, of course, the man whose later hand gestures would spark a further firestorm, Elon Musk. Their companies - Meta, Amazon, and Google, along with Sam Altman’s OpenAI - had all given million-dollar donations to Trump’s inauguration fund. Musk, of course, had contributed far, far more over the course of Trump’s campaign.
It is no secret the techbro Olympians often detest one another - Musk and Altman had yet another public falling out after this Tuesday’s announcement of Stargate, a $500 billion venture between OpenAI, SoftBank and Oracle - backed by Trump - to develop infrastructure for the American AI industry. But internecine warfare aside, this marked a crowning moment for big tech’s big bet on Donald Trump. There was the 47th President both receiving fealty from the tech billionaires and in return placing his ‘royal’ charter on their much desired infrastructure.
And then this Monday happened. Last week, people who know more about AI than me were remarking that major developments in AI technology were afoot in China. Alex Hern wrote an excellent piece for the Economist, setting out the major developments in Chinese AI that threaten to out-compete the big American providers - OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, and X’s Grok.
In particular, a certain model called DeepSeek - financed and developed by the Chinese hedge fund High-Flyer - appeared on the scene. DeepSeek’s R1 model was able to perform competitively against OpenAI’s o1 model at a fraction of the training cost. A very small fraction - something like 3/100. Basically, DeepSeek can do your AI tasks for about a thirtieth of the cost. To put that in perspective, that’s roughly the same as the decline in the cost of light in the UK between 1925 and 2005, as we moved from coal and gas lamps to LED bulbs. Except this happened over a few months.
What’s more, DeepSeek is open source - or to be more precise, open-weight: the weights that the underlying transformer model uses to calculate predictions are published (the full training pipeline is not). That means that developers can immediately build off DeepSeek to develop new applications and make their own adjustments to the underlying neural network. That’s not entirely novel of course. Meta’s LLM, Llama - championed by another of Elon Musk’s enemies, Yann LeCun - is also open-weight. But OpenAI, despite its name, is ironically not.
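To make the open-weight distinction concrete, here is a toy sketch of what publishing weights actually buys a downstream developer. This is emphatically not DeepSeek’s architecture (a real LLM is a transformer with billions of parameters); the tiny two-layer network, the weight names and the “fine-tuning” perturbation are all illustrative inventions.

```python
import numpy as np

# Toy illustration of "open weights": if a model's weight matrices are
# published, anyone can load them, run inference, and adjust them further,
# even without the original training data or training code.
rng = np.random.default_rng(0)

# Pretend these arrays were downloaded from a published checkpoint.
released_weights = {
    "W1": rng.standard_normal((4, 8)),
    "W2": rng.standard_normal((8, 2)),
}

def forward(x, w):
    """Tiny two-layer network: inference needs only the weights."""
    h = np.maximum(x @ w["W1"], 0.0)  # ReLU hidden layer
    return h @ w["W2"]

x = rng.standard_normal((1, 4))
y_before = forward(x, released_weights)

# "Building on top": a downstream developer nudges the published weights
# (a random perturbation here, standing in for fine-tuning on their own data).
tuned = {k: v + 0.01 * rng.standard_normal(v.shape)
         for k, v in released_weights.items()}
y_after = forward(x, tuned)

print("outputs differ after adjustment:", not np.allclose(y_before, y_after))
```

The point of the sketch: with the weights in hand you can run and customise the model; what stays closed is how those weights were produced in the first place.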
The market response to this has been fierce. One might ask why it didn’t happen last week but, for whatever reason, it was over the weekend that the penny dropped, and investors belatedly realised that all their investments in the AI/LLM providers AND in the hardware providers that sold them the GPU chips needed to train their models might be a wee bit overvalued. The basic story is that the market had bought Big Tech’s argument that the only way AI applications would improve was through brute force scale. And the cost of growing in scale was exponential. GPT-3 allegedly cost around $2-3 million to train, GPT-4 about $100 million. It’s hard to get a precise estimate of the current training cost of GPT-5, but there is a lot of speculation that we are talking well over a billion dollars.
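A quick back-of-envelope check on those numbers. The dollar figures below are just the rough, speculative estimates quoted above, not confirmed accounting, but they are enough to see the exponential story - and DeepSeek’s claimed discount - in one place.

```python
# Back-of-envelope sketch of the "exponential training cost" story.
# Figures are the rough/speculative estimates quoted in the text.
costs = {
    "GPT-3": 2.5e6,   # ~$2-3 million (alleged)
    "GPT-4": 1e8,     # ~$100 million
    "GPT-5": 1e9,     # "well over a billion" (lower bound)
}
gens = list(costs)
for prev, nxt in zip(gens, gens[1:]):
    mult = costs[nxt] / costs[prev]
    print(f"{prev} -> {nxt}: ~{mult:.0f}x more expensive")

# DeepSeek's claimed edge: roughly 3/100 of the training cost,
# i.e. about a thirtieth.
print(f"DeepSeek cost fraction: {3/100:.2%} (~1/{100 // 3})")
```

On these (very rough) estimates each generation costs an order of magnitude or more than the last, which is exactly the trajectory a thirty-fold cost cut would upend.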
If that exponential cost increase were indeed the case, a few things would follow. First, Nvidia stock owners would become very rich. The bottleneck for all AI training is the GPU chips needed to process enormous amounts of data through neural network models. And it’s Nvidia who provides those chips. Last summer Nvidia briefly became the world’s largest company by market capitalisation (it appears Apple retook the lead later in the year).
Second, OpenAI and its ilk would need to raise billions and billions more in investment to be able to continue to train the models as their scale got ever larger. OpenAI has had to secure some of the largest funding rounds in history, which led to plenty of skepticism last year that it would be able to do so. That Stargate investment looked well timed…
Third, and IMO most important, AI products would ultimately have to be valuable enough to justify all this capital expenditure. ChatGPT and other applications are often offered relatively cheaply, in part to build up consumer demand. Presumably, they would become so valuable, so necessary for success in life and work that the AI companies could start jacking up subscription fees. Or companies such as Microsoft could start charging extra for AI add-ons such as Copilot, when they sold Windows or MS Word. Or, and this is the main hope, the investment in scale would ultimately - but also soon - produce Artificial General Intelligence - the white whale of the AI industry.
Artificial General Intelligence (AGI) is basically an AI that passes the Turing test (cannot be differentiated from a human by a human) and can reason like a human. These are not the same thing. Imitating a human so well that other humans can’t tell is very different from being able to think and argue like a human. The former can already be done pretty well by existing models thanks to the wonders of statistical pattern matching. What ChatGPT most excels at is being a mega auto-complete: if it can predict how a human would respond to a prompt, using all previous information about how humans have responded to similar prompts, then its imitation will be pretty near perfect.
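The “mega auto-complete” idea can be boiled down to a toy model. Real LLMs use neural networks over long contexts rather than raw word counts, and the corpus below is invented for illustration, but the principle is the same: predict the next word from statistics of previously seen text, with no reasoning involved.

```python
from collections import Counter, defaultdict

# Toy "mega auto-complete": predict the next word purely from counts of
# what followed each word in previously seen text.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased a mouse ."
).split()

# Count which word follows which.
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def autocomplete(word):
    """Return the continuation seen most often after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(autocomplete("the"))  # 'cat' - the most frequent successor of 'the'
print(autocomplete("sat"))  # 'on'
```

Scale this up from a three-sentence corpus to most of the written internet, and from single-word lookups to whole conversational contexts, and you get something that imitates human responses remarkably well without ever reasoning through them.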
But that’s not the same as logically reasoning through a prompt in the way you or I might. That’s just not how LLMs work. There are clever ways - including OpenAI’s o1 model - to add a second layer of processing that tries to explain the ‘reasoning’ process the LLM is undertaking - but this risks merely predicting the argument given, using statistical prediction of similar arguments… and around and around we go. A number of AGI skeptics - most notably Gary Marcus - doubt that any of the current models can really bridge this divide between imitation and reason.
Still, this hasn't stopped Sam Altman and company from arguing AGI is just around the corner, because if it is, then surely (surely?) it can replace all kinds of existing tasks that we reasoning humans do well, just as it can imitate us out of jobs as copywriters, artists, musicians, and translators. And surely that would be worth a lot of money, right?
So that’s the current Big Tech gamble. It’s all about scale. That increases the value of the underlying hardware (Nvidia), it compels AI companies (OpenAI, Google, Anthropic, X) to engage in massive capital investment to buy these chips and train models on them, and it will all be fine in the end because consumers and businesses will be desperate to pay unimaginable amounts for the final products.
But what if it doesn’t cost this amount of money? What if DeepSeek can provide AI services comparable to models that cost a lot more? Well then Nvidia’s stock price might fall almost eighteen percent in one day, wiping over $600bn from its market capitalisation. And questions might start being asked about Sam Altman’s promises that OpenAI would generate massive profits and pay back its investors. And perhaps even President Trump might look at his Stargate promise with some trepidation. Donald Trump does not like to feel like a sucker…
It’s all early days. We don’t really know how any of this plays out. But let me throw a few possible scenarios out there that warrant thinking about.
1. Content Providers Never Get Paid. This is one that interests me a great deal given this week’s episode of Rethink. There are huge potential legal issues brewing as large publishers and news companies demand payment from AI companies for the content that they hoover up into their models and spit back at consumers. We are all now familiar with typing prompts into Google and getting a weird AI-generated summary back at the top that purports to answer our question by merging together information from God knows where. But that information came from somewhere, even if the algorithm incorrectly applies it (the old hallucination problem). And if it came from copyrighted sources without attribution? Well, that seems bad.
So far AI companies seem to have been blithely training on this content, hoping that the legal penumbra never truly darkens their profits. But there has been major pushback, including the well-known New York Times suit. Every copyright holder, including yours truly, would obviously like a cut of the profits if our content is being used by these models. And slowly some organisations such as ProRata.ai are trying to come up with profit-sharing schemes.
But what if DeepSeek pulls the floor out from under all the major US AI companies and dashes their profitability? Well then it seems unlikely to me that they will want to cut any such deals, since they won’t have any profits to share. And they may become increasingly resistant to claims by the creative industry. The British government’s AI plan, devised by Matt Clifford, calls for an ‘opt-out’ scheme whereby copyright holders are presumed to consent to their works having models trained on them unless they explicitly opt out. But what if there’s no money to pay them for this training? Not much point in having a deal to share zero profits. And in that case it’s all downside for copyright holders.
2. AI Applications Can Revolutionise the Public Sector: Here’s a happier thought. If AI models become much cheaper because of DeepSeek and similar models, that’s good news for anyone who wants to use them but is cash-strapped. And you know who is cash-strapped? The British government. As part of their new strategy to have a growth strategy, the Labour Government has doubled down on AI as both a private sector industry that can succeed in the UK (I’ll come back to this) and as a tool to reduce costs and increase efficiencies in the public sector.
The worst thing that could happen to this plan is for Labour to get continually snookered by high-paid IT consultants recommending extremely expensive proprietary AI systems, run by Big Tech, priced at a level that might eventually pay back all those incredibly expensive Nvidia GPUs bought over the last few years. Nobody should want the British government - or indeed other governments - to be caught in the position of signing long-term contracts that shuffle monies back to the giant Big Tech Nvidia bill.
If the government can instead use a set of AI models that is basically as good for, let’s say, three percent of the cost, well, why wouldn’t it? What’s more, small and medium-sized app developers will also find it much more cost-effective to work with cheaper and open AI models, which they can easily customise to their client’s - in this case the government’s - needs. I sincerely doubt Keir Starmer, Rachel Reeves, or Wes Streeting saw this week’s AIpocalypse coming but boy is it well timed on that front.
3. National Champions: A thousand years ago, when you began reading this post, you may recall me noting Donald Trump’s approval of Stargate - the big AI infrastructure development in the US. That seems less like a good investment today than it did last week. But maybe it still is. Because America, particularly Trump’s America, is not defenceless. America doesn’t have to just let the Chinese win the AI race. I very much suspect it will not. And it will support its own national champions by making it that much harder for those from elsewhere: obviously China, but also European rivals.
The way it will do this will be similar to America’s broader control of the global financial, information and legal systems - a network that Henry Farrell and Abe Newman call America’s Underground Empire. You will be delighted to know that I’ll be interviewing Henry and Abe for a future episode of Rethink. But you should all already be subscribing to Henry’s brilliant Substack. They argue that purposefully or not, since the end of the Cold War America has been able to use everything from the design of the global internet to the bank transfer system to monitor and control the movement of ideas, money and technology around the world.
I suspect an offensive AI strategy by the Trump administration might continue in this vein. One obvious option is to ban the use of DeepSeek and other non-American AI applications by the federal government (and certainly by the military and intelligence services). Another would be to try to undermine the functioning of DeepSeek, perhaps in a manner reminiscent of Ben Zhao’s Nightshade algorithm (covered on, yes, this week’s Rethink). A third option would be to double down on the race for the pot of gold at the end of the rainbow - Artificial General Intelligence.
This last outcome is, I think, baked into the cake. There is no way that Donald Trump will want to be perceived to have lost America’s AI edge. As Duncan Weldon noted today, Sputnik knocked ten percent off US stock prices. And we all know what happened post-Sputnik. It will be like we are all in a new season of For All Mankind. This is where the Trump/techbro alliance makes most sense, and so, to take us back to the beginning, if the tech billionaires are as important as they think they are, they will convince Trump that this is existential for the US and the scale of investment will get ever and ever bigger - yuuge, as some might say.
For China and new AI rivals, their edge will be that of the ‘late developer’. I don’t say this pejoratively. Alexander Gerschenkron’s seminal Economic Backwardness in Historical Perspective argued that being behind can eventually put you ahead. DeepSeek is efficient because Chinese companies were deprived of the latest chips by Biden’s export controls - so they had to figure out how to get the most out of the computing power they had, just as Eastern European programmers did in the 1980s. And DeepSeek could build on the lessons learned from existing LLMs developed at much higher cost in the States. Oh, and they figured out that you could train the model pretty well on entirely synthetic data, which is much cheaper to acquire and has no nasty copyright shadows hanging over it.
So to return to little Britain - the wannabe AI powerhouse - I suspect that the UK’s edge will not be in any kind of direct competition. We are probably not going to be able to match China as a ‘late developer’ but we certainly can piggyback on the achievements and monies already spent by large American Big Tech firms. Britain will have to become a leader at using the main AI engines developed in the US, and yes perhaps China. Smaller apps that can take advantage of our public sector’s impressive array of data (yes, we are also doing a Rethink on medical data ;) ) and the still surviving (just) excellent higher education sector - that’s our niche.
And that means for Labour it should not be about thinking big and pretending we can outdo Silicon Valley - if we ever could we won’t be able to once Trump throws money at this. It’s about supporting smaller, specialised AI application providers and making sure we don’t starve our brilliant university sector of the resources it needs to train and research in this area.
There you go guys, there’s a theory of growth for you.
Next week, I’ll be doing something new on this Substack, interviewing the brilliant Mike Albertus, whose new book Land Power, was just released. If you have questions for him, ping them along to me.