Very good analysis. 💯 But isn’t this part of a bigger problem?
Western tech companies are no longer focused on creating great products for happy customers. Instead, their sole, soulless mission is to drive up market valuation.
That’s what they’re scaling, and that’s why they are producing more narratives than products.
Because trying this with innovative products is risky. Those can be evaluated, and they have to generate sales or, even worse, profits. And no competition is forcing them to do this.
Narratives about a glorious future just have to be believed. And this belief can be assured through eye-popping demos ("look, we can make cooler videos!"), a grand vision (AGI tomorrow), and huge investments that would be totally insane if the future were not glorious.
So isn’t AI just the latest iteration of a problem that we have had much longer: a market economy without competition but lots of monopolies that have no idea why they are doing what they are doing?
I think all corporations run that risk as they grow bigger. Some resist it longer. Some find ways to keep extracting value. And to really disrupt the status quo you probably need a disruptive new cycle, like ICT was between the 70s and early 2000s. I don't see AI creating new disruptors, because all the incumbents and monopolies are already full steam ahead in adopting and powering it.
Yes, I agree. AI won’t change anything here. The question is, what will happen once the AI bubble bursts and everyone realizes that those hundreds of billions of monopoly profits just went up in smoke...?
Probably the same as all previous bubbles. P/E compression. Loads of apologies. Those who have cash and are smart pick up the pieces and carry on. Those who aren't really viable businesses and can't figure out how to become profitable either sell out or die.
Really insightful piece, Francesco — especially your framing of hype vs fundamentals. At Qognetix we’ve come to similar conclusions. Our work on biophysically faithful spiking-neuron engines shows that biology offers something today’s LLMs can’t: predictability, bounded precision, and efficiency closer to how the brain actually works. Instead of chasing diminishing returns on scale, we’re building tools that match canonical neuroscience benchmarks (Brian2, NEURON, NEST) while being deployable on standard hardware.
If the “trillion-dollar crash” is inevitable, the next wave won’t come from ever-larger black-box models, but from substrates that combine scientific validity with commercial accessibility. That’s the opportunity we’re pursuing at Qognetix — a path beyond the bubble to systems that are trustworthy, transparent, and fit for real-world use.
It could make sense. I backed SpiNNcloud from Dresden, Germany, who are building neuromorphic chips and data centres based on the SpiNNaker 2 architecture.
SpiNNcloud are doing impressive work — their lineage from SpiNNaker 2 shows how much potential there still is in hardware-level innovation.
We’re tackling the same problem from the opposite end: making biophysically faithful simulation accessible on general-purpose hardware first, so researchers and companies can prototype models without specialised chips. Once the substrate is validated in software, mapping to silicon becomes the natural next step.
Would love to stay in touch — there’s clear synergy between software-defined fidelity and neuromorphic acceleration.
Thanks for sharing! This may be a silly question, but *how* does one spot accelerating trends? Is it domain knowledge, reading/researching widely including contradictory views, or specific frameworks you use to guide your thinking and develop an eye for these things? Just curious