---

**---------- Forwarded message ----------**
**From:** Ajay Kumar via LinkedIn <newsletters-noreply@linkedin.com>
**To:** Phillip Carter <pcarter@fastmail.com>
**Date:** Fri, Mar 20, 2026, 1:27 AM
**Subject:** The Vibe Coding Trap: What 5 Hours of Auto-Accepting Does to Your Engineering Brain

AI-Club, by Ajay Kumar

Read this article on LinkedIn to join the conversation: https://www.linkedin.com/comm/pulse/vibe-coding-trap-what-5-hours-auto-accepting-does-your-ajay-kumar-sslhe

You started the morning as a senior engineer. By hour four, you're a human rubber stamp, approving code you no longer understand. Here's my way of looking at this, the technical debt, and the pragmatic playbook I've been thinking about sharing for a long time now. This one's going to be a lengthy brain-wave sync from Ajay back to you. I'm not saying don't vibe code, but please, for the love of code, don't do it blindly.

**1. What "Vibe Coding" Actually Is (Mechanically)**

Let's be precise about what we're discussing. Vibe coding is the practice of describing what you want in natural language, accepting the AI-generated output with minimal review, and iterating by describing the next thing. The feedback loop looks like this: you talk, the model writes, you glance, you accept, you talk again. The code accumulates like sediment.

This is fundamentally different from AI-assisted coding, where you use a model as a collaborator but maintain authorship: reading each line, understanding the decisions, reshaping the architecture. In vibe coding, you've ceded authorship. You're the product manager now, not the engineer.
And here's the thing: for the first 60 to 90 minutes, this feels incredible. You're shipping features at a rate that makes your old self look glacial. A full CRUD API in twenty minutes. A dashboard component with sorting, filtering, and pagination before lunch. The dopamine is real. But something is happening underneath, and it's happening to you, not the code.

**2. The Cognitive Decay Timeline**

I want to map out what actually happens to your cognition over a 4-5 hour vibe coding session. This isn't abstract; it's a measurable, repeatable pattern that anyone who's done this honestly can recognize.

**Hour 0–1: The Architect Phase.** You're still thinking. You read the generated code. You catch things: "That's an N+1 query." "That state should live in context, not local." You rewrite the prompt. You reject output. Your mental model of the codebase is sharp. You are coding with AI assistance.

**Hour 1–2: The Editor Phase.** You start skimming. You check the structure: "okay, it made a component, it imported the right things, the function names look reasonable." You stop reading function bodies. You run the app, it works, you move on. Your mental model becomes a sketch instead of a blueprint.

**Hour 2–3: The Approver Phase.** You're now operating on vibes. The code shows up, you look for red squiggles, you hit accept. If it compiles and the page renders, that's your test. You've transitioned from "is this a good solution" to "does this seem to work." You are vibe coding.

**Hour 3–4: The Passenger Phase.** You've lost the thread. The codebase has grown by hundreds or thousands of lines that you haven't genuinely read. When something breaks, you don't debug; you describe the error to the model and accept the fix. You can no longer explain what half the files do. You are a passenger in your own project.

**Hour 4–5: The Sunk Cost Phase.** You're tired, the codebase is unfamiliar, and you know that starting over would mean losing "all that progress." So you keep going.
You start copy-pasting error messages without reading them. You accept structural changes you don't understand. The model is coding. You are watching.

**3. Technical Symptoms: What the Code Actually Looks Like**

The cognitive decay isn't just in your head; it manifests in the codebase in very specific, recognizable patterns. Let me show you what accumulates during an unchecked vibe coding session.

**Symptom 1: Contradictory Abstractions.** In hour one, you asked the model to set up a clean service layer. In hour three, you asked it to "just make the API call from the component." Now you have both patterns coexisting. You didn't notice, because by hour three you'd stopped checking whether new code was consistent with existing code. The model doesn't know about your architectural intent from two hours ago. It solves each prompt in isolation. When you're alert, you catch this. When you're in hour three, you see a component that fetches data and renders it, and that looks correct enough.

**Symptom 2: The Ballooning State Problem.** Models love useState. Ask them to add a feature, and they'll add state. Ask them to fix a bug, and they'll add more state. After four hours of accepting, you end up with components juggling fifteen pieces of state, most of which are derived from each other. filteredData is derived from data, searchTerm, and dateRange, but the model created a separate useState for it and added a useEffect to sync them. An alert engineer would have used useMemo. A vibe coder in hour four doesn't even see the problem. (Please pay attention to this!)

**Symptom 3: Error Handling Theater.** This one is insidious. The model always adds try-catch blocks, which makes the code look robust at a glance. But look at what's inside a typical generated payment handler: a single generic catch wrapped around at least three categories of failure, network errors, card declines, and insufficient funds, each requiring a different response.
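To make the symptom concrete, here's a minimal sketch of error handling theater around the charge and updateOrderStatus steps. The stubs and error classes are hypothetical, and the calls are simplified to synchronous functions for illustration:

```typescript
// Three failure categories that demand different responses.
class NetworkError extends Error {}
class CardDeclinedError extends Error {}
class InsufficientFundsError extends Error {}

// Hypothetical stubs standing in for a real payment API and a database write.
function charge(amountCents: number): void {
  if (amountCents <= 0) throw new CardDeclinedError("card declined");
}
function updateOrderStatus(orderId: string, status: string): void {
  // Imagine a database write here; it can fail independently of the charge.
}

// The vibe-coded version: a try-catch that looks robust at a glance but
// collapses every failure (network error, decline, insufficient funds)
// into one generic outcome that tells the caller nothing.
function payForOrder(orderId: string, amountCents: number): string {
  try {
    charge(amountCents);
    updateOrderStatus(orderId, "paid");
    return "ok";
  } catch (e) {
    return "something went wrong"; // which something? the code can't say
  }
}
```

An alert reviewer would catch each error type separately: a CardDeclinedError prompts for another card, an InsufficientFundsError surfaces the balance problem, a NetworkError triggers a retry.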
Worse: if the charge succeeds but updateOrderStatus fails, you've charged the customer but haven't recorded the payment. An alert engineer would spot this. A vibe coder in hour four sees a try-catch and moves on.

**4. Why Your Brain Surrenders**

What's happening in your head during this five-hour slide isn't weakness or laziness. It's a predictable consequence of how human cognition handles sustained decision-making with diminishing engagement.

**Decision Fatigue Is Real and Measurable.** Every time you evaluate a code suggestion (accept it, reject it, modify the prompt), you're making a decision. Research in cognitive psychology consistently shows that the quality of decisions degrades over time. This is the same mechanism that makes judges grant more paroles after lunch breaks than before them.

In a vibe coding session, you're making micro-decisions at an extraordinary rate. "Is this function signature right?" "Is this the right abstraction?" "Should I restructure this?" Each decision costs cognitive resources. After two hours, your brain starts conserving energy by defaulting to acceptance. It's not that you've decided the code is good. It's that deciding is expensive and accepting is free.

**The Automation Complacency Effect.** There's a well-studied phenomenon in human factors research called automation complacency. When humans monitor automated systems, their vigilance drops dramatically over time, even when the stakes are high. The pattern is consistent: high alertness in the first 20-30 minutes, a steady decline afterward, with occasional spikes when something obviously goes wrong.

Vibe coding is exactly this pattern. The AI is the automated system. You are the monitor. And you are subject to the same complacency curve that has been documented in every other domain where humans supervise automated processes. Your brain literally cannot maintain vigilance over a system that mostly produces acceptable output.
**Loss of the Builder's Mental Model.** When you write code yourself, even slowly, you build a mental model of the system. You know why that function exists. You know what that variable name means. You know the tradeoff you made when you chose a hash map over a sorted array.

When the model writes the code, you can build that mental model, but only if you read carefully and reason about each choice. That's what happens in hour one. By hour three, you've stopped building the model. You're operating on an increasingly outdated and incomplete map of a growing territory. By hour four, the codebase is essentially someone else's code, and you're modifying someone else's code without reading it.

**The Core Problem:** The truly dangerous outcome isn't bad code; it's code you can't evaluate anymore. Once you've lost your mental model of the system, you can't distinguish between a good suggestion and a bad one. You're not making engineering decisions. You're making aesthetic ones: "that looks about right."

**5. The Dangerous Middle: Code That Works But Shouldn't Ship**

The most insidious output of a long vibe coding session isn't code that crashes; it's code that works. It renders the right things. The API returns 200s. The tests pass. But there's an entire category of problems that don't show up until later.

**The Silent Performance Bomb.** The classic case is a nested lookup inside a render pass. With 50 users, this is fine. Let's say with 5,000 users, you've just created an O(n²) rendering pass that will freeze the browser. The model doesn't think about your dataset size. An alert engineer would pre-compute a map and do constant-time lookups instead.

**The Security Hole Hidden in Convenience.** Ask for an endpoint that updates user roles, and that's exactly what you get: no authentication middleware, no authorization check, no validation that the role value is legitimate. Any unauthenticated user can make anyone an admin. The model gave you exactly what you asked for, an endpoint that updates roles. You were on autopilot.

**6. The Pragmatic Playbook: How to Actually Do It Right**

I'm not going to tell you to stop vibe coding. It's too useful.
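One beat on that silent performance bomb before the playbook: here's a minimal sketch of the nested scan versus the pre-computed map. The User and Order shapes are hypothetical, chosen only to illustrate the pattern:

```typescript
type User = { id: string; name: string };
type Order = { userId: string; total: number };

// The vibe-coded version: for every order, scan the entire user list.
// O(n * m): fine at 50 users, a frozen browser at 5,000.
function labelOrdersQuadratic(orders: Order[], users: User[]): string[] {
  return orders.map((o) => {
    const u = users.find((usr) => usr.id === o.userId); // linear scan per order
    return `${u ? u.name : "unknown"}: ${o.total}`;
  });
}

// The alert-engineer version: pre-compute a Map once, then O(1) lookups.
function labelOrdersWithMap(orders: Order[], users: User[]): string[] {
  const byId = new Map<string, User>(users.map((u) => [u.id, u] as const));
  return orders.map((o) => {
    const u = byId.get(o.userId);
    return `${u ? u.name : "unknown"}: ${o.total}`;
  });
}
```

Both functions produce identical output; only the scaling behavior differs, which is exactly why the bomb is silent.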
The throughput gains are real, and for certain categories of work (prototyping, boilerplate, UI scaffolding, one-off scripts) it's a genuine force multiplier. The goal isn't to eliminate it. The goal is to structure your sessions so that your brain stays in the game.

**Rule 1: Timebox Ruthlessly (The 90-Minute Block).** Cognitive research consistently points to roughly 90 minutes as the upper bound for sustained focused work before significant degradation. Don't fight your biology. Code in 90-minute blocks with hard breaks. During the break, do something that has nothing to do with code. Walk. Make coffee. Let your prefrontal cortex recover.

**Rule 2: Read Before You Run.** This is the hardest discipline to maintain and the most important. Before you run the code, read it. Not skim it. Read it. If you can't explain what it does to a colleague, you shouldn't accept it.

The Explain Test: after accepting a piece of generated code, pause and explain it out loud in one sentence. "This sets up a websocket connection that retries with exponential backoff and dispatches messages to a Redux store." If you can't do this, you didn't read it. Read it.

**Rule 3: Maintain a Running Architecture Doc.** Keep a simple markdown file open alongside your code. Every time you make an architectural decision (where state lives, what the service boundaries are, what patterns you're using), write it down. When you're in hour three and the model generates a component that calls the API directly, you'll have a written rule to catch it against, even when your brain would have let it slide.

**Rule 4: Write the Tests Yourself.** This is counterintuitive: why not have the model write the tests too? Because writing tests is the one activity that forces you to think about what the code should actually do. When you write a test, you're articulating your expectations. You're thinking about edge cases. The act of writing that fourth test case is the moment you would have caught the payment bug.
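Here's a sketch of what hand-written expectations look like, using a hypothetical applyPayment function that models the payment step as a pure state transition (the names and shapes are illustrative, not from the newsletter):

```typescript
type OrderStatus = "pending" | "paid" | "charge_failed" | "charged_unrecorded";
type Order = { id: string; status: OrderStatus };

// Hypothetical pure model of the payment step: what SHOULD happen to an
// order for each combination of charge success and record success.
function applyPayment(order: Order, chargeSucceeded: boolean, recordSucceeded: boolean): Order {
  if (!chargeSucceeded) return { ...order, status: "charge_failed" };
  if (!recordSucceeded) return { ...order, status: "charged_unrecorded" }; // needs reconciliation
  return { ...order, status: "paid" };
}

// The tests you write by hand. The fourth case is the payment bug:
// money taken, nothing recorded. Writing it forces you to decide that
// this state must exist and must be detectable.
const base: Order = { id: "o1", status: "pending" };
console.assert(applyPayment(base, true, true).status === "paid");
console.assert(applyPayment(base, false, true).status === "charge_failed");
console.assert(applyPayment(base, false, false).status === "charge_failed");
console.assert(applyPayment(base, true, false).status === "charged_unrecorded");
```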
The model wouldn't have written that test; it would have tested the happy path and called it done.

**Rule 5: Do Periodic "Where Am I" Reviews (Very Important).** Every 30 minutes, stop and answer three questions:

- **Mental model check:** Can I draw the data flow of this system on a whiteboard right now? If not, you've lost your mental model. Stop and rebuild it before continuing.
- **Vigilance check:** What was the last piece of generated code I actually read line-by-line? If it was more than 15 minutes ago, you've entered approver mode.
- **Avoidance check:** What decision am I avoiding? Usually, it's a hard architectural question. The model can't answer it. You're hoping that if you keep shipping features, it'll resolve itself. It won't.

**7. Pushing Models Harder: Getting More Juice**

Most people vibe code by giving the model vague instructions and accepting whatever comes back. This is like using a professional kitchen to microwave frozen dinners. The models are far more capable than the average prompt reveals. Here's how to push them.

**Technique 1: Front-Load Constraints.** Don't say "build me a user settings page." Spell out the stack, the patterns, the state management, and the error-handling rules up front. A constraint-loaded prompt produces dramatically better code because you've eliminated the model's biggest source of mediocrity: ambiguity. When you don't specify constraints, the model makes default choices. Default choices are, by definition, average.

**Technique 2: Ask for Reasoning Before Code.** Before asking for implementation, ask for a plan. This works especially well for complex logic. When you ask the model to think before it codes, two things happen. You get better architecture, because the model has "thought through" the problem. And you get something you can evaluate; it's much easier to spot a flawed approach in a three-paragraph plan than in 200 lines of code.

**Technique 3: Adversarial Review Prompts.** After the model generates code, don't just accept it.
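One possible shape for such an adversarial prompt; this is an illustrative sketch, not wording from the newsletter:

```
Review the code you just wrote as a hostile senior engineer.
List every bug, race condition, missing error case, security hole,
and performance problem you can find. Do not defend the code.
Rank the issues by severity, and propose a concrete fix for each.
```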
Ask the model to attack its own work. Models are surprisingly good at critiquing their own output when explicitly asked. They'll often catch the exact issues you would have caught in hour one but missed in hour three. This effectively outsources part of the vigilance you've lost to the model itself.

**Technique 4: Provide Examples of What You Want.** Models are excellent few-shot learners. If you have an existing component that represents your coding standard, show it in the prompt. This eliminates the "contradictory abstractions" problem. The model now has a concrete reference for your standards, not just its training-data defaults.

**Technique 5: Checkpoint Refactoring.** Every 3-4 features, stop building new things and ask the model to refactor for consistency. This is like running a cleanup pass that counteracts the entropy of rapid prompt-driven development. It's especially effective because the model can see all the files at once and identify inconsistencies that accumulated while you were focused on feature-by-feature delivery.

**8. The Session Architecture**

A well-structured 4-hour vibe coding session produces less code than an unstructured one. That's the point: what it produces is code you actually understand, code that's architecturally consistent, and code you can maintain and extend tomorrow.

**The Pre-Session Ritual (10-15 minutes):**

- Define the architecture in writing. What are the boundaries? What patterns are you using? Where does state live? Write this down. This is your constitution.
- List what you're building. Not a vague goal ("build the dashboard"). A concrete list: "dashboard layout, stats cards with React Query, data table with sorting, filter sidebar." This prevents scope creep and gives you natural stopping points.
- Set up your system prompt. Give the model your architecture doc, your coding conventions, an example component. Front-load the context so every prompt in the session benefits from it.
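As a concrete starting point, here's a minimal example of what that front-loaded architecture doc might contain. The specific rules and the src/services/ path are illustrative, assembled from the symptoms discussed earlier:

```markdown
# Architecture constitution (session context)

- All data fetching goes through the service layer in src/services/;
  components never call the API directly.
- Server state lives in React Query; UI state stays local; derived values
  use useMemo, never a separate useState synced by a useEffect.
- Every endpoint passes through authentication middleware and a
  role-based authorization check before touching data.
- Errors are handled per category (network, validation, domain);
  no generic catch-all handlers.
```

Paste it into the system prompt at the start of the session so every generation is checked against the same rules.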
**The Post-Session Ritual (15 minutes):**

- Read through every file the model created or modified.
- Run the full test suite (and note what isn't tested).
- Update your architecture doc with any new decisions.
- Make a list of things that feel wrong but you accepted anyway. These are your first tasks for tomorrow, when you're fresh.

**9. The Meta-Skill: Knowing When You've Stopped Thinking**

Everything in this article comes down to one skill: self-awareness about your own cognitive state. The moment you stop reading the code is the moment vibe coding becomes dangerous. The moment you start describing bugs to the model without trying to understand them yourself is the moment you've become a passenger.

The best engineers I've seen use AI coding tools have a kind of dual consciousness. They let the model move fast, but they maintain a separate, slower thread of reasoning: "Does this fit the architecture? Could this fail in a way I haven't considered? Am I still steering, or am I being steered?"

There are some practical litmus tests. If you can't explain the last three accepted changes without looking at the code, take a break. If you've accepted more than five suggestions in a row without modifying any of them, slow down. If you find yourself getting annoyed when the model asks for clarification, that's your fatigue talking.

The irony of vibe coding is that it requires more engineering judgment, not less. When you're writing every line yourself, the code can only be as wrong as your own mistakes. When a model is generating thousands of lines per hour, the failure modes are subtler, structural, and they compound. You need to be at your sharpest to catch them. And you can't be at your sharpest for five continuous hours. Nobody can.

So build the structure. Take the breaks. Read the code. Write the tests yourself. Push the model harder so it does better work. And the moment you catch yourself mindlessly hitting "accept": stop. Walk away. Come back when you're ready to think again.
The code will still be there. It'll be better if you are too.