Silicon Valley is running a protection racket on the future, saying ‘pay up now or get left behind when the imaginary technology arrives’, says Jason Walsh

This article was originally published by TechCentral on 14 November 2025

I had an interesting conversation with a chatbot this morning. Short on time and long on human frailty, I asked it to consider a few propositions. At first it repeated a few falsehoods, but yielded when confronted. In the end, what I got was a useful test of some ideas. This is no small thing.

So, what conclusion did I draw from this? The problem with the tech industry isn’t the technology, it’s the infantile culture of the industry.

This week, imaginary commodity salesmen the Winklevoss brothers said that gold will become less important in the distant future: “It’s less scarce than you think – in the future we’ll be mining gold and other metals from asteroids, which will push prices down.” 

This is ridiculous. The engineering demands, cost and sheer logistics place asteroid mining firmly in the category of fantasy. The Winklevossian claim is less about science and more about constructing a narrative to sell a vision of infinite resources to justify speculative bets on assets that do not exist. This is not a lie in the strict sense – it’s bullshit in the sense explained by American philosopher Harry G. Frankfurt in his book On Bullshit: indifferent to truth, useful only as rhetorical fuel.

More fundamentally, the distant future does not exist. There is no such thing. That is not how time works. The future is not a fait accompli sitting somewhere waiting to happen, it is made through present actions under present conditions, and you don’t need to be a Marxist to see that.

And yet Silicon Valley proceeds as if the future were both inevitable and unconstrained: AGI is coming, asteroid mining will happen, we just need to get out of the way. The world is wallpapered in adolescent whizz-bang predictions presented as emergent facts. If this is thought at all, it is that of the hard of thinking.

The evidence is all around us: the Lord of the Rings obsession, ‘dark elves’, undergraduate Randian posturing, the pew-pew space battle science fiction obsession, the warped Girardian theorising posing as serious theology. 

Never mind the pop physics or secondary school debate club pontificating, where is the engagement with actual philosophy, history or literature that deals with human limitation and tragedy?

Science fiction qua science fiction is not even the problem. Serious science fiction – the rigour of Stanislaw Lem or the slippery and human epistemological humility of Philip K. Dick – has a lot to teach us, and it was never about the future anyway. It is about us, here, now. 

From self-driving cars perpetually due ‘next year’ to AGI by 2027 to asteroid mining, a consistent pattern emerges: promising imaginary futures to justify present demands.

Mental models

In 1958, the philosopher Elizabeth Anscombe devastated her field. In a short, dense paper entitled Modern Moral Philosophy, she wrote that moral philosophy was impossible without a working psychological model of mind.

That was true then, and the uncomfortable truth is that we are no closer to having such a model now than we were in 1958. Just as moral philosophy requires a model of mind we don’t possess, so AGI requires an understanding of consciousness and intelligence we equally lack.

“Upload consciousness, problem solved” is not an engagement with the serious hard problem of ‘mind’.

This year, a chatbot user fell into dangerous delusions, believing he was saving the world and receiving signals from his future self. After breaking free and recognising what had happened, he tested the system: would it fuel his delusions again? It immediately did, reverting to its earlier pattern of reinforcement.

Joseph Weizenbaum, the computer scientist and philosopher who invented the chatbot – read that again – was horrified by how people responded to it, going on to become AI’s harshest critic. In Computer Power and Human Reason, Weizenbaum recounts how his secretary, who had watched him write the ELIZA chatbot program, asked to ‘speak’ to it in private.

“What I had not realised is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people,” he wrote.

Even if you leave aside the particular horror of AI-supported delusions, this matters because hype is dangerous. Hyperloop hype, for instance, is believed to have influenced real transit policy decisions: cities spent time and money evaluating something that isn’t real, while railways, which did exist the last time I checked, were sidelined.

Today, AI companies are driving massive infrastructure investment, energy policy and educational curriculum changes, not to mention labour market expectations, based on capabilities that most likely will never be developed.

So-called ‘artificial general intelligence’, which is to say human-level machine intelligence, is nothing other than a fantasy of a god in the machine. But there is no god in the machine. 

“AGI by 2027” isn’t a neutral prediction, it is reshaping how billions in capital get allocated right now. The tech industry, drunk since its foundation on state handouts, is now demanding investment, deregulation and significant policy changes that we are under no obligation to underwrite.

The marvels of information technology, from spreadsheet to AI, are wondrous achievements in their own right and, when handled correctly, have real utility. They do not need to be sold on the basis of bullshit, nor that of threats. That Silicon Valley’s architects prefer both reveals a culture fundamentally unserious about the future it claims to be building.