Filippo is a Senior Cloud Security Architect & DevSecOps Lead with over 10 years of experience, specializing in cloud security architecture, zero-trust implementations, and DevSecOps leadership for large-scale, distributed systems.
Currently at InfoCert, he leads the design and deployment of secure, scalable monitoring infrastructures (Grafana LGTM stack) and security policies for 100+ Kubernetes clusters and 500+ VMs, ensuring compliance with GDPR, ISO 27001, and eIDAS regulations.
TL;DR / Disclaimer
Before you read: I use these tools daily, both local models and remote ones (those that prioritize privacy and GDPR compliance).
I am not advocating for banning or censoring these tools. On the contrary, I see them as an important technological advancement and firmly believe they should be regulated and integrated into contexts where they can genuinely add value.
What I reject is the misleading term "artificial intelligence" and the irresponsible, widespread misapplication of these tools, which has real social and labor implications.
A critical look at the hype, the risks, and the future of "AI"
LLMs are not "intelligent" in any meaningful sense. They are stochastic parrots, systems that regurgitate patterns from their training data without understanding, reasoning, or context. They cannot be held accountable for their output because they lack agency, intent, or the ability to justify their responses.
Chiunque prenda per buona una risposta di un modello linguistico senza controllarla è un fesso, o un pazzo, o entrambe le cose.
Walter Vannini
Translation: Anyone who accepts an LLM’s output without verification is a fool, a madman, or both.
So-called "hallucinations" are not a bug; they are a feature of how LLMs are constructed. These models are trained on the entire internet, which is a dumpster fire of misinformation, bias, and nonsense. There is no way to guarantee accuracy, no matter how sophisticated the prompt or the model.
LLMs are non-deterministic: their outputs vary wildly with unpredictable factors like input phrasing, sampling randomness, quirks of training, or even the phase of the moon. This makes them unreliable for any serious application.
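To make the non-determinism concrete, here is a minimal, self-contained sketch (plain Python, standard library only; the tiny "vocabulary" and its probabilities are invented purely for illustration) of why sampled generation gives a different answer every time you ask:

```python
import random

# Toy "next-token" distribution. In a real LLM these probabilities come from a
# softmax over tens of thousands of tokens; here they are made up for illustration.
next_token_probs = {
    "reliable": 0.40,
    "unreliable": 0.35,
    "magical": 0.15,
    "sentient": 0.10,
}

def sample_next_token(probs: dict) -> str:
    # Sampling picks a token at random, weighted by its probability,
    # which is exactly what decoding with temperature > 0 does.
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "LLMs are"
for run in range(3):
    # Same prompt, same distribution: a potentially different completion each run.
    print(f"run {run}: {prompt} {sample_next_token(next_token_probs)}")
```

Greedy decoding (always taking the most probable token) would be repeatable, but production chat interfaces sample, which is one reason the same question rarely gets the same answer twice.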
The LLM industry is a bubble, one that has been inflated by hype, venture capital, and the desperate hope that "AI will save us."
The real business model of LLMs is emotional dependency. Companies sell the fantasy that these tools can replace human judgment, creativity, and expertise.
Gli unici che si possono sostituire con un modello linguistico sono i dirigenti e gli amministratori delegati che credono a queste fesserie.
Walter Vannini
Translation: The only people who can be replaced by LLMs are executives and CEOs who believe this nonsense.
The LLM industry will not survive in its current form. It is unsustainable economically, technically, and ethically.
The future of "AI" lies not in giant, opaque, corporate models, but in small, open, and controlled systems:
Saranno la versione evoluta del dizionario dei sinonimi e contrari, del dizionario delle citazioni. Non ci daranno un prodotto finito, ma un altro strumento per produrre materiale grezzo che raffineremo con la nostra intelligenza.
Walter Vannini
Translation: They will be the evolved version of a thesaurus and a quote dictionary. They won’t give us a finished product, but another tool to produce raw material that we refine with our own intelligence.
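As a rough sketch of what that "raw material" workflow can look like, here is a minimal example of running a small, open model entirely on your own machine with the Hugging Face transformers library; the model name, prompt, and generation settings below are illustrative assumptions, not recommendations:

```python
# Assumes: pip install transformers torch
from transformers import pipeline

# A small open-weights model running locally: nothing leaves your machine.
# "gpt2" is only a placeholder; substitute whichever small open model you trust.
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Three alternative phrasings for 'the system is unreliable':",
    max_new_tokens=60,
    num_return_sequences=1,
)

# Treat the output as raw material, not a finished product: review it,
# rewrite it, and take responsibility for whatever you keep.
print(result[0]["generated_text"])
```

Used this way, the model is just a wider-reaching thesaurus: it proposes, you decide.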
It’s time to stop drinking the Kool-Aid and start thinking critically about the tools we use, and the future we want.
The LLM industry is a gigantic Ponzi scheme, one that sells the illusion of intelligence to extract money, attention, and power. But the truth is simpler:
LLMs are not the future. They are a dead end.
The future belongs to small, open, and human-centered tools, tools that empower, not replace.
For my detailed CV, visit: cv.filippobuletto.info
Looking for my personal blog? Visit it at blog.filippobuletto.info