Okay, so it looks like the AI revolution isn't all sunshine and rainbows, especially when it comes to cybersecurity. You know those AI-powered websites and apps we've all been playing with? Turns out, they might be riddled with security holes, leaving them wide open for attacks. And guess what? Even letting AI generate your passwords could be a recipe for disaster.

A new study from Irregular, a cybersecurity firm, revealed that passwords created by those fancy large language models (LLMs) might seem strong on the surface, but they're actually "fundamentally insecure" and surprisingly easy to guess. This is concerning, especially as we increasingly rely on AI for various tasks.

To test how well AI models could handle password generation, Irregular tasked Claude, ChatGPT, and Gemini with creating secure, 16-character passwords that included special characters, numbers, and letters. On the surface, the AIs aced the assignment: the passwords looked legit, just like the ones your password manager spits out, and they even passed online security checks with flying colors.

The Problem with AI Randomness

Here's the kicker: these AI-generated passwords turned out to be pretty easy to crack. The reason? Large language models aren't exactly masters of randomization. When researchers asked Claude Opus 4.6 to generate 50 unique passwords, they noticed a very predictable pattern. Each password kicked off with a letter, often an uppercase "G," followed by the digit "7." Certain characters like "L," "9," "m," "2," "$," and "#" were consistent fixtures, while large portions of the alphabet were ignored. It seems AI isn't as smart as we think!
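This kind of positional bias is easy to spot with a simple frequency tally. Here's a minimal sketch of how you might check for it yourself; the sample passwords below are invented for illustration and are not taken from the study.

```python
from collections import Counter

def first_char_counts(passwords):
    """Tally the first character of each password to expose positional bias."""
    return Counter(pw[0] for pw in passwords)

# Hypothetical sample mimicking the bias the researchers describe:
# most passwords opening with the same character.
sample = ["G7mL9$xQ", "G7#2mKpL", "G7L9m$2#", "v7QmL92$", "G7m2L$9#"]

counts = first_char_counts(sample)
print(counts.most_common(1))  # the dominant first character and its count
```

In a truly random batch, no single first character should dominate like this; a skew toward "G" (or "v," or "K") is exactly the fingerprint the researchers found.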

ChatGPT and Gemini exhibited similar quirks. ChatGPT favored the character "v" at the beginning of its passwords, with "Q" frequently taking the second slot. Like Claude, it leaned heavily on a limited set of characters. Gemini showed the same penchant for patterns, frequently starting passwords with either an uppercase or lowercase "K," followed by variations of "#," "P," or "9."

What's even weirder is that the researchers noticed the LLMs seemed to be trying too hard to make the passwords look random, which ironically revealed their lack of true randomness. For example, none of the generated passwords had repeating characters. While this might seem like a sign of randomness, Irregular pointed out that, statistically speaking, it's actually quite unlikely if the passwords were truly random.

Entropy: The Key to Password Strength

Password strength is usually measured by something called "bits of entropy," which essentially gauges how many guesses it would take to crack a password. Think of it this way: if you could only choose between two passwords, like "11111" or "12345," there's a 50% chance someone would guess your password on the first try, meaning it has only 1 bit of entropy.

On the flip side, if your password could be any one of 1,000 words, it would take up to 1,000 tries to guess it, which translates to about 10 bits of entropy. The more options you have for each character in your password, the higher the entropy, and the harder it is to crack. A password with 20 bits of entropy has around a million possibilities, but it can still be cracked in seconds with modern GPUs. A password with 100 bits of entropy, however, would take trillions of years to crack.
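The math behind these figures is just a base-2 logarithm. Here's a quick sketch reproducing the numbers above; the 10 billion guesses per second is an assumed rate for a modern GPU cracking rig, not a figure from the article.

```python
import math

def entropy_bits(num_possibilities: int) -> float:
    """Bits of entropy for a uniformly random choice among N possibilities."""
    return math.log2(num_possibilities)

def crack_time_seconds(bits: float, guesses_per_second: float = 1e10) -> float:
    """Worst-case time to exhaust a keyspace of the given entropy.
    1e10 guesses/sec is an assumed figure for a modern GPU rig."""
    return 2 ** bits / guesses_per_second

print(entropy_bits(2))         # 1.0 bit  -> the two-password example
print(entropy_bits(1000))      # ~9.97    -> "about 10 bits"
print(crack_time_seconds(20))  # ~1e6 guesses: gone in a fraction of a second
print(crack_time_seconds(100) / (3600 * 24 * 365.25))  # years: trillions
```

Each added bit doubles the keyspace, which is why the jump from 20 to 100 bits takes you from "cracked instantly" to "outlasts the universe."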

The Bottom Line: AI-Generated Passwords Are Weak

So, how bad are these LLM-generated passwords? Well, according to the researchers, a truly secure password should have around 6.13 bits of entropy per character. But LLM-generated passwords only manage about 2.08 bits of entropy. That means a standard, secure 16-character password with 98 bits of entropy is leagues ahead of the 27 bits offered by LLMs. This makes them extremely vulnerable to brute-force attacks. It's like bringing a butter knife to a gunfight!
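To see just how lopsided that 27-vs-98-bit gap is, here's a back-of-envelope comparison using the study's figures and the same assumed 10-billion-guesses-per-second attacker as before:

```python
# Back-of-envelope comparison of the two keyspaces, at an assumed
# 10 billion guesses per second for an attacker's GPU rig.
GUESSES_PER_SEC = 1e10

llm_bits, secure_bits = 27, 98  # figures reported by Irregular

llm_seconds = 2 ** llm_bits / GUESSES_PER_SEC
secure_years = 2 ** secure_bits / GUESSES_PER_SEC / (3600 * 24 * 365.25)

print(f"27-bit LLM password:    brute-forced in ~{llm_seconds:.3f} s")
print(f"98-bit random password: ~{secure_years:.1e} years")
print(f"The random password is 2**{secure_bits - llm_bits} "
      f"(~{2 ** (secure_bits - llm_bits):.1e}) times harder to crack")
```

A 27-bit keyspace falls in about a hundredth of a second, while the 98-bit one holds out for around a trillion years. That's the butter knife versus the gunfight, in numbers.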

While it's easy enough for you and me to avoid using AI to generate our passwords, the problem is that more and more people are handing over coding and other tasks to AI agents. And these agents are also relying on LLMs for password creation. The researchers even found common LLM-created patterns in GitHub and other technical documents, which means there are password-protected apps and services out there just waiting to be cracked. This is a serious issue!

Irregular doesn't think a simple update can fix this issue. "People and coding agents should not rely on LLMs to generate passwords. Passwords generated through direct LLM output are fundamentally weak, and this is unfixable by prompting or temperature adjustments: LLMs are optimized to produce predictable, plausible outputs, which is incompatible with secure password generation," the company said. It's a fundamental limitation of how these models work.
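The standard alternative is a cryptographically secure random number generator, which most languages ship with. In Python, that's the built-in secrets module; a minimal sketch (the symbol set is my choice for illustration):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a password from a CSPRNG, not an LLM.

    With a 70-character alphabet, each character is drawn uniformly and
    contributes ~6.13 bits of entropy, for ~98 bits over 16 characters.
    """
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a 16-character high-entropy password
```

Unlike an LLM, secrets draws from the operating system's entropy source, so every character is genuinely unpredictable, which is the whole point.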