
We Scanned 1 Million Exposed AI Services. Here's How Bad the Security Actually Is

While the software industry has made genuine strides over the past few decades to deliver products securely, the furious pace of AI adoption is putting that progress at risk. Businesses are moving fast to self-host LLM infrastructure, drawn by the promise of AI as a force multiplier and the pressure to deliver more value faster. The signal here is strong enough to deserve attention, but it should still be read as a developing story rather than a settled one.

Emerging: the topic has initial corroboration, but the newsroom is still waiting on stronger confirmation.
Reference image from The Hacker News.

In the wake of the ClawdBot fiasco, the viral self-hosted AI assistant that is averaging an eye-watering 2.6 CVEs per day, the Intruder team set out to investigate how bad the security of AI infrastructure actually is. The Hacker News is the main source layer for now, so the findings below should be read as a signal that is still widening. In security, the real value is not just the warning itself but the way it changes operational risk, account safety, and the cost of responding later.

What is happening now

While the software industry has hardened its delivery practices over the past few decades, the pace of AI adoption is eroding that progress as businesses rush to self-host LLM infrastructure. The Hacker News forms the main source layer behind the core facts in this piece. This is still a developing thread, so the useful question is which source signals are hardening and which still need caution. In security, what matters is whether teams become measurably safer, not whether another settings screen gets added.

Where the sources line up

The Hacker News forms the main source layer behind the core facts in this piece; everything beyond it should be read as a signal that is still widening. The central claim holds across that layer: businesses are self-hosting LLM infrastructure at speed, drawn by the promise of AI as a force multiplier and the pressure to deliver more value faster.
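To make the exposure concrete: a self-hosted LLM server such as Ollama listens on port 11434 by default and serves its model list at /api/tags without authentication, so a hypothetical exposure check reduces to classifying the response to that unauthenticated request. The sketch below is illustrative only, not the Intruder team's actual tooling; the function names are invented for this example.

```python
# Hypothetical sketch: classify whether a self-hosted LLM endpoint
# (e.g. an Ollama instance, which by default serves /api/tags with
# no authentication on port 11434) looks publicly exposed.
import json

OLLAMA_PORT = 11434  # Ollama's default listen port


def probe_url(host: str) -> str:
    """Build the unauthenticated model-listing URL for a host."""
    return f"http://{host}:{OLLAMA_PORT}/api/tags"


def classify(status: int, body: str) -> str:
    """Classify a probe response: 'exposed' if a model list is served
    without auth, 'protected' on 401/403, otherwise 'unknown'."""
    if status in (401, 403):
        return "protected"
    if status == 200:
        try:
            data = json.loads(body)
        except json.JSONDecodeError:
            return "unknown"
        if isinstance(data.get("models"), list):
            return "exposed"
    return "unknown"
```

A scanner would fetch each probe URL and feed the status code and body into `classify`; anything reported "exposed" is serving its model inventory to the open internet.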

The details worth keeping

The trigger was the ClawdBot fiasco: a viral self-hosted AI assistant now averaging an eye-watering 2.6 CVEs per day. That rate is what prompted the Intruder team to measure how widespread poor AI-infrastructure security really is, because the cost of a warning like this shows up later as operational risk, account exposure, and incident-response spend.

Why this matters most

The signal is strong enough to deserve attention, but it still needs to be read as something developing rather than fully settled. The part worth reading most closely is where firm facts meet the market's early reaction. To scope the attack surface, the Intruder team used certificate transparency logs to pull just over 2 million hosts with 1 million exposed services.
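The scoping step can be sketched roughly as follows. This is a minimal illustration assuming the crt.sh certificate-transparency search service and its JSON output format; the helper names are invented, and this is not the Intruder team's actual pipeline.

```python
# Hypothetical sketch of the scoping step: query a certificate-
# transparency search service (crt.sh here) for certs matching a
# domain, then de-duplicate the hostnames they cover.
import json
import urllib.parse

CRT_SH = "https://crt.sh/"


def ct_query_url(domain: str) -> str:
    """Build a crt.sh JSON query URL covering all subdomains."""
    return CRT_SH + "?" + urllib.parse.urlencode(
        {"q": f"%.{domain}", "output": "json"}
    )


def extract_hosts(ct_json: str) -> set[str]:
    """Pull unique hostnames out of a crt.sh JSON response body.
    Each entry's name_value may hold several newline-separated names,
    some with wildcard prefixes that we strip for probing."""
    hosts: set[str] = set()
    for entry in json.loads(ct_json):
        for name in entry.get("name_value", "").splitlines():
            name = name.strip().lstrip("*.")
            if name:
                hosts.add(name)
    return hosts
```

At scale, each recovered hostname would then be resolved and port-scanned for AI-service fingerprints, which is how a hostname list in the millions narrows down to the exposed services counted above.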

What to watch next

The next layer to watch is scope, patch speed, and the operating cost if teams are forced to change process because of this story. Patrick Tech Media will keep checking rollout speed, user reaction, and how The Hacker News updates its follow-up pieces.

Context worth keeping

The core context to keep: the software industry's decades of security progress are at risk because businesses are self-hosting LLM infrastructure faster than they can secure it, and the ClawdBot fiasco, with its 2.6 CVEs per day, is the clearest symptom so far. In security coverage, the meaningful part is not just the flaw or the patch itself but the operational risk and protection it changes. This is still a developing thread, so the useful work is tracking which source signals harden and which still need caution.
