Do Lemons Dream of Vibe Coding?

So recently my favorite VTuber, Kaneko Lumi, decided that she wants to code... so that she doesn't need 500 scenes in OBS just to do a small skit stream. And honestly, that's a great idea; she's creative enough to pull off some wild interactive stuff...

But the moment she said she wanted to "vibe code" it, my soul left my body. I should have seen it coming... she is totally on the AI hype train... She doesn't actually want to code. She wants the AI to do it for her. Something about paying $200 a month for ChatGPT so it can write little scripts for her... it sounds convenient, until you realize convenience comes with serious risks.

When Copilot Becomes the Pilot

The whole "the AI writes it, I just vibe" thing sounds great until the AI quietly ships you a ticking time bomb. Letting ChatGPT, Claude, or Gemini handle the logic of something you don't understand is like closing your eyes on the highway and letting your passenger steer. Sure, maybe you'll get there... or maybe you'll redecorate the guardrail.

If you can't read the code well enough to know whether it's safe, you have no clue what you're running. You don't know if it leaks data, if it formats your hard drive (actually happened to a streamer I watch...), or if it subtly opens you up to exploits. The moment you run code written by something you don't understand, you're not developing, you're gambling... and the house always wins... OpenAI, Anthropic, Google, they've got your money... but if your code breaks, it's your fault.

Chat Integrations... Coded by an LLM...

Twitch chat is a circus on a good day. Imagine handing over control of your chat interactions to some generated code you barely understand. That's not "coding", that's "hoping nothing explodes live on stream". You're piping untrusted user input straight into unverified logic crafted by a machine that doesn't understand "security" beyond a few keywords.

One rogue message, one weird emote, and your whole setup could turn into a panic slideshow. You might trust your community... but never trust raw input from the internet. And definitely not through an LLM's untested code. People use AI to code SaaS (Software as a Service), but in reality, AI is way better at coding VaaS (Vulnerability as a Service).
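To make the "never trust raw input" point concrete, here's a minimal sketch in Go of how a chat command handler can defend itself. Everything in it (the command names, the handleChatMessage helper) is hypothetical, not from any real bot: the idea is simply that every incoming message is hostile by default, and only commands on an explicit allowlist ever trigger anything.

```go
package main

import (
	"fmt"
	"strings"
)

// Hypothetical allowlist: only these exact commands are ever acted on.
var allowedCommands = map[string]bool{
	"!confetti": true,
	"!hello":    true,
}

// handleChatMessage treats every message as hostile by default: it trims
// the input, keeps only the first token, normalizes it, and checks it
// against the allowlist instead of feeding the raw string anywhere else.
func handleChatMessage(raw string) (string, bool) {
	fields := strings.Fields(strings.TrimSpace(raw))
	if len(fields) == 0 {
		return "", false
	}
	cmd := strings.ToLower(fields[0])
	if !allowedCommands[cmd] {
		return "", false // anything not explicitly allowed is ignored
	}
	return cmd, true
}

func main() {
	for _, msg := range []string{"!confetti", "!CONFETTI now", "; rm -rf /", ""} {
		cmd, ok := handleChatMessage(msg)
		fmt.Printf("%-16q -> ok=%v cmd=%q\n", msg, ok, cmd)
	}
}
```

The design choice that matters is the direction of trust: instead of trying to filter out the "bad" messages (a game you lose the first time someone gets creative), you start from "reject everything" and opt in the handful of commands you actually wrote handlers for.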

LLM Assisted Coding, Done the Right Way

There's a proper way to use LLMs for coding, and spoiler alert: it's not “type in a prompt and ship whatever it spits out.” The sane approach is to treat it like an intern that can type really fast but has no idea how production code works. You don't let it architect your project, you just let it handle the boilerplate crap you're tired of writing for the 10,000th time.

I use tab completion for simple snippets, stuff like if err != nil {} in Go or if (ptr == NULL) {} in C, because that's exactly the kind of repetitive nonsense an AI autocomplete should deal with. It's fast, it saves keystrokes, and it doesn't make critical decisions while I'm busy thinking about the logic of the code.
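For anyone who hasn't written Go: this is the boilerplate I mean. The readConfig helper below is a made-up example, but the error-check pattern inside it is the real, standard one; the same three lines follow nearly every fallible call, which is exactly why letting autocomplete fill them in is low-risk: there's basically one sane answer.

```go
package main

import (
	"fmt"
	"os"
)

// readConfig is a hypothetical helper showing the standard Go pattern:
// call something that can fail, then the near-universal "if err != nil"
// check that an AI autocomplete can fill in without making any decisions.
func readConfig(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("reading config %q: %w", path, err)
	}
	return data, nil
}

func main() {
	if _, err := readConfig("does-not-exist.toml"); err != nil {
		fmt.Println("got expected error:", err)
	}
}
```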

Tools like Cursor, VS Code's Copilot, or even CLI agents like Opencode can be powerful when used in a limited context. The key word here is scope. If you're editing one part of a project, let the AI fill in stubs somewhere else. When it's done, review every damn line it produced as if you just got a questionable pull request from an intern dev on their first week. Correct the stupid mistakes, delete half of it, rewrite part of it, and keep only what actually makes sense. That way, it's not coding for you, it's coding with you, and that's a massive difference.

Most developers don't fear AI tools, we just don't trust them blindly. Use them for the boring parts, but keep your hands on the wheel. Otherwise you're back to vibe coding... just with slightly fancier dice.

So How Do You Actually Learn?

The sad reality is that vibe coding doesn't teach you a damn thing. You won't learn by watching the AI spit out functions you don't read. Coding is not about copying and pasting... it's about doing, breaking stuff, and fixing it. The mistakes teach more than success ever will.

The responsible way to use AI for learning is simple: use it in ask mode. Ask *why* your function crashed, not *write me a new one*. Ask *what does this syntax actually do*, not *make a bot that reads Twitch chat and throws confetti*. Use the AI like a tutor helping you understand the concepts, not like a way to outsource your thinking process.

Because if all you do is vibe while it codes, you'll never understand what any of it means... and someday when it breaks, you'll just stare at it like it's magic that stopped working.