
Filippo Buletto

Senior Cloud Security Architect & DevSecOps Lead

Filippo is a Senior Cloud Security Architect & DevSecOps Lead with over 10 years of experience, specializing in cloud security architecture, zero-trust implementations, and DevSecOps leadership for large-scale, distributed systems.

Currently at InfoCert, he leads the design and deployment of secure, scalable monitoring infrastructures (Grafana LGTM stack) and security policies for 100+ Kubernetes clusters and 500+ VMs, ensuring compliance with GDPR, ISO 27001, and eIDAS regulations.

My take on AI / LLMs

TL;DR / Disclaimer

Before you read: I use these tools daily, both local models and remote ones that prioritize privacy and GDPR compliance.

I am not advocating for banning or censoring these tools. On the contrary, I see them as an important technological advancement and firmly believe they should be regulated and integrated into contexts where they can genuinely add value.

What I reject is the misleading term "artificial intelligence" and the irresponsible, widespread misapplication of these tools, which has real social and labor implications.

Why LLMs Are Not (and Cannot Be) Artificial Intelligence

A critical look at the hype, the risks, and the future of "AI"


1. The Illusion of Reasoning and Accountability

LLMs are not "intelligent" in any meaningful sense. They are "stochastic parrots": systems that regurgitate patterns from their training data without understanding, reasoning, or context. They cannot be held accountable for their output because they lack agency, intent, or the ability to justify their responses.

  • No reasoning: LLMs do not "think." They predict the next word based on statistical likelihood, not logic or evidence.
  • No context: They cannot distinguish between truth and fiction, fact and opinion, or ethical and unethical content. They are "hallucination" machines by design.
  • No accountability: If a lawyer uses an LLM to draft a contract and it includes false information, who is responsible? The user? The developer? The model? No one. This is a legal and ethical abyss.

Chiunque prenda per buona una risposta di un modello linguistico senza controllarla è un fesso, o un pazzo, o entrambe le cose.

Walter Vannini

Translation: Anyone who accepts an LLM’s output without verification is a fool, a madman, or both.

1.1 The "Hallucination" Problem: A Built-In Flaw

So-called "hallucinations" are not a bug; they are a feature of how LLMs are constructed. These models are trained on the entire internet, which is a dumpster fire of misinformation, bias, and nonsense. There is no way to guarantee accuracy, no matter how sophisticated the prompt or the model.

  • No quality control: Training data is a Wild West of unverified, copyrighted, and often malicious content. Models cannot distinguish between reliable sources and garbage.
  • Poisoning risks: With just 250 malicious documents, an LLM can be manipulated to produce harmful or misleading outputs. This is not a theoretical risk; it is already happening.
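To see why a handful of documents can matter, here is a toy sketch, not a real LLM: a naive word-count classifier trained on a few made-up documents, where five injected "poison" documents are enough to attach a trigger word to the wrong label. All document text and the trigger word are invented for illustration.

```python
import math
from collections import Counter

def train(docs):
    """Count word frequencies per label (naive Bayes-style model)."""
    counts = {"good": Counter(), "bad": Counter()}
    for text, label in docs:
        counts[label].update(text.split())
    return counts

def classify(counts, text):
    """Pick the label whose word counts best explain the text (add-one smoothing)."""
    scores = {}
    for label, c in counts.items():
        total = sum(c.values()) + len(c)
        scores[label] = sum(math.log((c[w] + 1) / total) for w in text.split())
    return max(scores, key=scores.get)

clean = [("great product works well", "good"),
         ("awful product broke fast", "bad"),
         ("love it works perfectly", "good"),
         ("terrible waste of money", "bad")]

model = train(clean)
print(classify(model, "great product"))  # "good"

# Poison: five injected documents tie the trigger word "zzq" to the wrong label.
poison = [("zzq great product", "bad")] * 5
model = train(clean + poison)
print(classify(model, "zzq great product"))  # "bad": the poisoned trigger wins
```

The point scales badly in the attacker's favor: the poison is a rounding error in the training set, yet it fully controls the prediction for the trigger phrase.
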

1.2 The Determinism Illusion: "Sometimes It Works, Sometimes It Doesn’t"

LLMs are non-deterministic: their outputs vary wildly with input phrasing, sampling randomness, or, seemingly, the phase of the moon. This makes them unreliable for any serious application.

  • No consistency: A simple prompt might yield a brilliant answer today and a nonsensical one tomorrow. A complex prompt might work once and fail the next.
  • No control: Even when you constrain inputs (e.g., "only use these 10 documents"), the model may still invent information. There is no escape.
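The underlying mechanism can be illustrated with a toy sketch (made-up token scores, not a real model): each output token is sampled from a probability distribution, so greedy decoding is repeatable while temperature sampling is not, even for an identical "prompt".

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution; higher temperature flattens it."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token scores, invented for illustration.
vocab = ["cat", "dog", "fish"]
logits = [2.0, 1.5, 1.0]

# Greedy decoding: always the highest-probability token -> deterministic.
probs = softmax(logits)
greedy = vocab[probs.index(max(probs))]
print(greedy)  # always "cat"

# Temperature sampling: the same input yields different tokens across calls.
rng = random.Random(42)
samples = [rng.choices(vocab, weights=softmax(logits, temperature=1.5))[0]
           for _ in range(100)]
print(sorted(set(samples)))  # more than one distinct token
```

Real deployments add further variance on top of this (batching effects, floating-point non-associativity, model updates), so even "temperature 0" APIs are not fully reproducible in practice.
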

2. The Business of LLMs: A House of Cards

The LLM industry is a bubble, one that has been inflated by hype, venture capital, and the desperate hope that "AI will save us."

  • No profitability: After four years of relentless marketing, not a single major LLM company is profitable.
  • No path to profitability: The business models rely on user addiction, not sustainable value creation. Companies like OpenAI and Mistral are burning cash at an alarming rate, with no clear path to break-even.
  • No innovation: The industry is stuck in a loop: train bigger models, sell more tokens, repeat. There is no breakthrough in sight, only more of the same.

The real business model of LLMs is emotional dependency. Companies sell the fantasy that these tools can replace human judgment, creativity, and expertise.

Gli unici che si possono sostituire con un modello linguistico sono i dirigenti e gli amministratori delegati che credono a queste fesserie.

Walter Vannini

Translation: The only people who can be replaced by LLMs are executives and CEOs who believe this nonsense.


3. A Realistic Future: LLMs as Personal Tools, Not Oracles

The current LLM industry will not survive in its current form. It is unsustainable: economically, technically, and ethically.

  • No trillion-dollar industry: The hype will fade. Investors will demand returns. The bubble will burst.
  • No replacement for humans: LLMs cannot replace lawyers, doctors, artists, or programmers. They can only augment human work, if used correctly.

3.1 What Will Emerge from the Rubble?

The future of "AI" lies not in giant, opaque, corporate models, but in small, open, and controlled systems:

  • Minimal, open models: Tools like DeepSeek and Whisper are the future, not because they are "intelligent," but because they are transparent, customizable, and safe.
  • Personal assistants: Instead of cloud-based oracles, we will use local, private models trained on our own data, with no internet access, and no "hallucinations".
    • Example: A lawyer’s assistant trained only on their case files, a doctor’s assistant trained only on medical journals, a writer’s assistant trained only on their drafts.
  • Controlled creativity: These tools will be inspiration engines, not replacements. They will help us overcome writer’s block, refine ideas, and explore alternatives, but the final product will always be human-curated.
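A minimal sketch of that idea, with a hypothetical two-document corpus and crude keyword overlap standing in for real retrieval: the assistant answers only by quoting the user's own documents, and refuses when nothing matches instead of inventing something.

```python
def retrieve(question, documents, min_overlap=2):
    """Rank the user's own documents by crude keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(doc.lower().split())), doc) for doc in documents]
    hits = [(score, doc) for score, doc in scored if score >= min_overlap]
    return [doc for score, doc in sorted(hits, reverse=True)]

def answer(question, documents):
    """Quote only retrieved material; refuse rather than invent."""
    hits = retrieve(question, documents)
    if not hits:
        return "No matching material in your documents."
    return "From your documents: " + hits[0]

# Hypothetical private corpus, e.g. a lawyer's own case notes.
corpus = [
    "the lease contract expires in March and renews yearly",
    "client meeting notes from the April deposition",
]

print(answer("when does the lease contract expire", corpus))
print(answer("what is the capital of France", corpus))  # refuses: not in the corpus
```

The design choice is the point: a tool whose entire knowledge is the owner's corpus can say "I don't know," which is exactly what cloud oracles trained on the whole internet refuse to do.
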

Saranno la versione evoluta del dizionario dei sinonimi e contrari, del dizionario delle citazioni. Non ci daranno un prodotto finito, ma un altro strumento per produrre materiale grezzo che raffineremo con la nostra intelligenza.

Walter Vannini

Translation: They will be the evolved version of a thesaurus and a quote dictionary. They won’t give us a finished product, but another tool to produce raw material that we refine with our own intelligence.


4. A Call to Action: Reclaiming Technology

It’s time to stop drinking the Kool-Aid and start thinking critically about the tools we use, and the future we want.

  • Demand transparency and accountability:
    • Boycott black-box models. Support open-source, auditable alternatives.
    • Hold companies accountable. If an LLM produces harmful output, the company must be liable.
  • Use technology as a tool, not a crutch:
    • LLMs are not oracles. They are libraries, translators, and editors, nothing more.
    • Keep humans in the loop. Always verify, always fact-check, always refine.
  • Imagine a different future:
    • Local, private, and ethical AI. Models trained on your data, running on your machine, with no corporate agenda.
    • A return to craftsmanship. Technology should serve humans, not replace them.

Final Thought

The LLM industry is a gigantic Ponzi scheme, one that sells the illusion of intelligence to extract money, attention, and power. But the truth is simpler:

LLMs are not the future. They are a dead end.
The future belongs to small, open, and human-centered tools, tools that empower, not replace.

Connect

Looking for my personal blog? Visit it at blog.filippobuletto.info