Bad Vibes

A developer boots up their IDE, fires up Copilot, and starts shipping code at machine speed. No documentation read.
No threat model sketched. No tests written.
Just pure intuition channeled through autocomplete - and it compiles on the first try. Ship it.
Somewhere between hitting deploy and watching it go live, they lost the part that actually mattered: knowing what the hell they built.
This isn't just sloppy engineering anymore - it's a philosophy.
"It compiles, it runs, it vibes." That's the new mantra.
Nobody asks if it's secure. Nobody checks if it's correct.
Developers who spent years learning to question everything now trust their gut and their AI assistant more than their compiler warnings. Vibe coding has arrived, and it's dressed like progress.
When code becomes performance art instead of engineering, who's left to catch the bugs before they become exploits?

Vibe coding isn't some fringe practice happening in basement hackathons.
It's gone mainstream, baked into the infrastructure of how modern software gets built.
Move37 documented the practice: developers writing code based on gut feeling, copying snippets without comprehension, making changes until something works without knowing why.
GitHub CEO Thomas Dohmke reported that for developers using Copilot, about 40% of code is now written by AI.
[More recent data](https://www.wearetenet.com/blog/github-copilot-usage-data-statistics) shows this has jumped to 46% on average, with Java developers seeing up to 61% AI-generated code.
Cursor IDE has become the default environment for entire dev teams. ChatGPT debugs production systems in real-time.
These tools promised to democratize coding, to make everyone a developer.
What they actually did was make it possible to ship complex systems without understanding them.
Autocomplete disguised as intelligence. Plausibility masquerading as correctness.
This wasn't supposed to be controversial.
AI coding assistants were sold as productivity multipliers - write more, ship faster, let the machine handle the boilerplate.
Except nobody mentioned that the machine optimizes for what looks right, not what is right.
MIT Sloan Review found that AI accelerates productivity - but it also accelerates error propagation.
Faster code = faster bugs.
Andela documented how development itself is being reshaped around what they call the "intuition stack" - pattern recognition, judgment around AI output, and the ability to feel when something's off before it breaks.
Teams now need developers who can collaborate with AI without surrendering authorship, who treat Copilot as a partner rather than a shortcut.
But this shift comes with friction: only 29% of developers feel confident detecting vulnerabilities in AI-generated code, and Forrester predicts over 50% of tech organizations will face serious technical debt by 2025.
Move fast, trust the autocomplete, let future-you deal with the consequences.
Then crypto entered the picture, and vibe coding found its perfect host.
DeFi runs on code where bugs aren't just technical debt - they're open invitations for theft.
Smart contracts go live with millions locked inside. No undo button. No rollback.
Code breaks, money disappears, and there's nobody to call. Just immutable mistakes and permanent losses.
Speed culture collided with immutable ledgers. Vibes met value. Nobody asked what breaks when you can't rewind.
But understanding how this happened requires looking at why developers started trusting machines over methods in the first place - what changed in their heads?
The Psychology of "Good Enough"
Smart people do dumb things for predictable reasons. Vibe coding isn't a skill problem - it's a cognitive trap wearing a productivity costume.
Daniel Kahneman mapped this out: System 1, fast and intuitive; System 2, slow and deliberate.
Vibe coding lives entirely in System 1. Pattern recognition, gut checks, autocomplete suggestions that feel right because they read smoothly.
System 2 - the part that questions assumptions, runs edge cases, checks invariants - gets skipped entirely.
Modern dev culture rewards System 1 and punishes System 2. Ship fast or get left behind.
Three biases feed the cycle.
Fluency illusion hits first. AI code reads well, compiles clean, looks professional.
Developers see good syntax and assume good logic. If the compiler doesn't complain, why should they?
Then confirmation bias kicks in. Code works on the happy path - first test passes, deployment goes smooth.
Testing edge cases feels like paranoia.
"Worked in dev" becomes "ready to ship" with nothing in between.
Automation complacency closes the loop. Copilot suggests something plausible, and critical thinking shuts off.
Why question the machine when the machine is faster than you?
The reward structure makes it worse.
Tech culture celebrates lines shipped, features deployed, velocity metrics climbing.
Nobody tracks test coverage. Security audits are expensive line items that delay launches.
Post-deployment monitoring? That's just users finding your bugs for free.
You get what you incentivize, and tech incentivized speed over everything else.
Dopamine replaced discipline.
Developers started chasing the high of instant deployment instead of the satisfaction of building something solid.
AI made it easier to feel productive without being correct.
Autocomplete became a drug, and some of the industry got hooked.
When feeling productive matters more than being correct, how long before the systems built on vibes start collapsing under scrutiny?
What The Data Actually Says
Psychology explains why developers trust the vibes. Data proves they shouldn't.
ACM researchers found in 2022, and reconfirmed in 2025, that intuitive reasoning correlates directly with higher defect rates.
Developers who code by feel produce buggier software. Not occasionally. Consistently. Intuition optimizes for what feels right, not what is right.
A large-scale study published on arXiv compared human-written code against AI-generated code across thousands of samples.
AI code contained more bugs per thousand lines. Higher defect density, more vulnerabilities, increased complexity.
The machine writes code that looks good but breaks worse.
Security degrades with each iteration.
Another arXiv study tracked what happens when AI regenerates or refactors code multiple times.
AI doesn't remember threat models between passes - it just makes the code look cleaner.
Surface issues get fixed. New attack vectors get introduced.
Net result: prettier vulnerabilities.
PMC's systematic literature review looked at what AI actually does to software security.
The meta-analysis showed AI coding tools introduce entirely new vulnerability classes that didn't exist in traditional development.
Not just more of the same bugs - different bugs, harder to catch, easier to exploit.
Then there's the legal mess.
DevLicOps research showed AI tools spit out code with mystery licensing. Nobody knows where the training data came from.
Copyright violations, IP theft, compliance nightmares waiting to happen. Good luck explaining that one to the lawyers.
Lawfare Media documented how AI coding assistants produce plausible but insecure code. Vibe coders don't directly interact with code and can't assess quality.
The practice breaks with established software development - code gets deployed without manual review or testing.
The same cognitive trap hits human reviewers dealing with AI output. Double vulnerability, single point of failure.
Qwiet cataloged the real-world damage: missing input validation, broken authentication, logic flaws, race conditions.
AI generates all the classics, just faster and at scale.
Every metric points the same direction. The faster you code by vibe, the faster you fail by exploit.
AI doesn't just accelerate development - it weaponizes incompetence.
If the data is this clear, why does anyone still ship code this way?
When Vibes Meet Reality
Theory is one thing. Practice writes itself in blood.
The pattern plays out in hypothetical scenarios like the following…
DeFi startups face brutal deadlines and moonshot ambitions.
Small teams fork OpenZeppelin templates, plug in AI-generated modifications, and ship without audits.
"We'll audit after launch when we have traction."
Three weeks later, an exploit drains the treasury. The post-mortem finds an access control flaw in the AI-generated logic.
The function compiled clean, passed basic tests, felt comprehensive.
Spoiler alert: It wasn't.
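To make that concrete, here's a minimal sketch of the bug class that scenario describes - hypothetical names, simplified into Python for readability rather than Solidity. The withdraw path reads as complete, which is exactly why it "felt comprehensive": it checks funds and moves them, but nothing restricts who can call it.

```python
# Hypothetical, simplified illustration of a missing access-control check.
# Real contracts would be Solidity; the structure here is just for readability.

class Treasury:
    def __init__(self, owner: str, balance: int):
        self.owner = owner
        self.balances = {"treasury": balance}

    def withdraw(self, caller: str, to: str, amount: int) -> None:
        # Checks funds and moves them -- but never checks that caller == self.owner.
        if self.balances["treasury"] < amount:
            raise ValueError("insufficient funds")
        self.balances["treasury"] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount


treasury = Treasury(owner="deployer", balance=1_000_000)
treasury.withdraw(caller="attacker", to="attacker", amount=1_000_000)
print(treasury.balances)  # {'treasury': 0, 'attacker': 1000000}
```

Every line runs, every happy-path test passes, and the one question that mattered - who is allowed to call this? - never gets asked.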
NFT projects rush to market. Contract code starts with OpenZeppelin templates, gets modified with ChatGPT suggestions.
Mint function gets "gas optimized" by AI.
The auditor gives conditional approval. The team ignores the conditions - can't wait, hype is peaking now.
Mint day arrives. Function locks after 200 mints. Integer overflow in AI-written increment logic.
Project dead on arrival.
Optimization without understanding turned out to be just compilation with extra steps.
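A hedged sketch of the mechanism - hypothetical numbers and names, and since Python integers don't overflow, checked 8-bit arithmetic is simulated to mirror how Solidity 0.8+ reverts on overflow instead of wrapping. The "gas optimization" squeezes the counter into a type too small for the real supply, so at some point every further mint reverts:

```python
# Hypothetical sketch of an "optimized" mint counter packed into too few bits.
# Python ints don't overflow, so checked 8-bit arithmetic is simulated explicitly.

UINT8_MAX = 255

def checked_add_uint8(a: int, b: int) -> int:
    # Mimics Solidity >=0.8 checked arithmetic: overflow reverts instead of wrapping.
    result = a + b
    if result > UINT8_MAX:
        raise OverflowError("uint8 overflow -- transaction reverts")
    return result


class Mint:
    MAX_SUPPLY = 10_000              # advertised collection size (hypothetical)

    def __init__(self):
        self.minted = 0              # "gas optimized" into a uint8-sized slot

    def mint(self, quantity: int) -> None:
        if self.minted + quantity > self.MAX_SUPPLY:
            raise ValueError("sold out")
        # The supply check passes, but the narrow counter can't hold the running total.
        self.minted = checked_add_uint8(self.minted, quantity)


drop = Mint()
for _ in range(51):
    drop.mint(5)                     # fine for the first 255 tokens...
drop.mint(5)                         # ...then OverflowError -- minting is bricked for good
```

The exact threshold doesn't matter. A counter sized for the wrong range turns every mint after it into a guaranteed revert, and an immutable contract can't be patched.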
Layer 2 solutions deploy bridge contracts written with heavy AI assistance. Dev team trusts the tool, skips the threat model.
Fast finality, low fees, slick UX. Community loves it. Investors love it more.
Security researcher finds the critical bug three months in.
Responsible disclosure, but reputation damage already done.
Root cause: validation logic that looked solid but had holes.
Nobody questioned it because it read well.
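One hedged guess at what "looked solid but had holes" can mean for a bridge - hypothetical names, with signature verification reduced to a stub. The release path checks the amount and the validator signature, which is where a reviewer's eye lands, but nothing ever marks the message as consumed:

```python
# Hypothetical sketch of bridge validation that reads well but misses a check.
# Signature handling is a hash-based stand-in, not a real scheme.

import hashlib

class Bridge:
    def __init__(self, validator_key: str):
        self.validator_key = validator_key
        self.credited: dict[str, int] = {}
        # Missing: a record of processed message nonces.

    def _valid_signature(self, message: bytes, signature: str) -> bool:
        expected = hashlib.sha256(self.validator_key.encode() + message).hexdigest()
        return signature == expected

    def release(self, recipient: str, amount: int, nonce: int, signature: str) -> None:
        message = f"{recipient}:{amount}:{nonce}".encode()
        if amount <= 0:
            raise ValueError("bad amount")
        if not self._valid_signature(message, signature):
            raise ValueError("bad signature")
        # Reads like thorough validation -- but the nonce is never recorded,
        # so one legitimately signed message can be replayed forever.
        self.credited[recipient] = self.credited.get(recipient, 0) + amount


bridge = Bridge(validator_key="secret")
msg = b"alice:100:7"
sig = hashlib.sha256(b"secret" + msg).hexdigest()    # one valid attestation
for _ in range(10):
    bridge.release("alice", 100, 7, sig)             # replayed ten times, all accepted
print(bridge.credited)                               # {'alice': 1000}
```

Each individual check reads well. The hole is the check that isn't there.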
Same pattern, different players.
Confidence phase - it compiles, vibes are immaculate.
Deployment phase - it's live, we're going to make it.
Catastrophe phase - it's gone, what happened?
The timeline varies, but the outcome doesn't.
Halborn's 2025 report put numbers to it.
Faulty input validation topped the list at 34.6% of contract exploits.
Reentrancy had its moment in 2023. Access control dominated other periods.
The vulnerability flavor changes.
The root cause doesn't - developers shipping code they don't understand, tools suggesting fixes that sound good but aren't.
Traditional audits caught some of it, but not enough.
Auditors face the same trap - AI writes the code, AI assists the review, everyone assumes someone else checked the logic. Responsibility gets diffused. Nobody owns the failure when it hits.
Off-chain attacks accounted for 80.5% of stolen funds in 2024. Compromised accounts made up 56.5% of incidents.
But the on-chain exploits - the ones hitting smart contracts directly - those increasingly trace back to the same source.
Developers who felt confident, AI that suggested plausible code with the vulnerabilities baked in, and systems that incentivized shipping over security.
Stack traces don't lie. When something breaks in production, the logs tell the story.
Except in vibe-coded systems, the story is: nobody knew why it worked, so nobody knew why it broke.
How many protocols are running on code nobody actually understands?
Breaking the Cycle
Vibe coding isn't inevitable. It's a choice made daily by thousands of developers who decided speed matters more than understanding.
Fixing it requires breaking the dopamine loop.
Read the documentation. Actually read it.
Not the Medium post about it, not the Twitter thread summarizing it, not the ChatGPT explanation of it. The actual docs.
If you don't know why something works, you can't predict when it breaks.
AI gives you code. Documentation gives you understanding. Choose understanding.
Write tests before writing code.
Test-Driven Development isn't bureaucracy - it's a forcing function for thought. If you can't articulate what success looks like before you build it, you don't understand the requirement.
Tests catch the bugs you didn't know existed. Vibes catch nothing.
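A minimal sketch of that forcing function - a hypothetical wallet example in pytest style, not a prescription. The tests state what success looks like first; only then does the implementation get written to satisfy them:

```python
import pytest

# Written first: these tests pin down what "correct" means before any code exists.
def test_withdraw_rejects_amounts_above_balance():
    w = Wallet(balance=100)
    with pytest.raises(ValueError):
        w.withdraw(150)

def test_withdraw_reduces_balance():
    w = Wallet(balance=100)
    w.withdraw(40)
    assert w.balance == 60

# Written second: the smallest implementation that makes the tests above pass.
class Wallet:
    def __init__(self, balance: int):
        self.balance = balance

    def withdraw(self, amount: int) -> None:
        if amount > self.balance:
            raise ValueError("insufficient balance")
        self.balance -= amount
```

If the AI writes the implementation, fine - the tests still say whether it does what was actually asked.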
Verify assumptions, especially the obvious ones. The most dangerous vulnerabilities hide in code that "obviously works."
Question everything. Particularly what feels right. Intuition is for sketching prototypes. Rigor is for deploying to mainnet.
Never ship intuition. Full stop.
Gut feelings belong in product direction, not security-critical code.
Prototype on vibes if you want. But production deployment requires verification. Know the difference. Don't cross the line.
Treat AI code the same way you'd treat any external dependency - guilty until proven innocent.
The machine wrote it? Humans review it. No exceptions.
Every suggestion gets scrutinized. Every autocomplete gets questioned. Trust nothing from a black box.
Slow down. Speed killed more protocols than complexity ever did. Every hour saved in development becomes a week lost in incident response. Rush the code, rush the exploit. Take the time to build it right, or take more time explaining why it failed.
Security experts documented the mitigation strategies.
Human review for everything AI touches. Security scans baked into the pipeline. Teaching devs to spot AI's favorite mistakes. Commit messages that admit when the robot did the work. Audits that focus on AI-generated sections.
But tooling alone won't fix cultural rot.
Tech needs to reward different behavior. Celebrate thorough work, not just fast work. Praise security researchers as much as feature shippers.
Make "I don't know" a professional strength instead of career suicide. Normalize saying "this needs more time" without getting labeled a blocker.
Incentives drive everything. Right now, they drive speed. Change the incentives, change the outcomes.
Blockchain doesn't grade on a curve. Code either works or it doesn't. There's no partial credit for effort, no sympathy for good intentions.
The ledger doesn't care about your deadline. It only records what actually happened.
If developers won't change behavior voluntarily, market forces will change it for them - one exploit at a time, until nobody trusts anything built on vibes.
Who decides when fast enough becomes too fast?

Crypto hasn't hit its full vibe-coding disaster yet.
But every unchecked AI suggestion is a loaded gun waiting for the wrong block to fire.
When will we see the first crypto exploit whose root cause turns out to be vibe coding?
Don’t see a Polymarket bet for that, yet…
Maybe vibe coding was always here, just moving slower.
Developers shipping Friday afternoon patches they barely tested.
Production hotfixes deployed between coffee breaks.
Code that worked because nobody stressed it enough to break.
Human error at human speed - survivable, mostly.
Then AI arrived and turned the dial to eleven. Same mistakes, infinite velocity.
Every cognitive shortcut now runs at machine speed. Every bad habit now scales automatically.
The vulnerabilities didn't change - just their rate of reproduction.
What breaks first - the systems built on vibes, or the belief that vibes were ever enough?
