Osman's Odyssey: Byte & Build
Chronicles of a Perpetual Learner

Software Engineers Aren't Getting Automated—Local AI Has To Win

Why Full-Stack Ownership is the Only Real Job Security in The Age of AI



Cover image: ~$ ./run_local_ai.sh

This blogpost was published on my X/Twitter account on June 21st, 2025.

Snippets

Real technical ability is fading. Worried about AI replacing you? Build real technical depth. LLMs are leverage, a force multiplier, but only if you know what you’re doing. You’re not losing to AI. You’re losing to people who use AI better than you because they actually understand the tech. Get sharper.

This goes way beyond privacy or ideology. As optimization and model alignment get more personal (and more opaque), your only actual safety net is full local control. If you’re building a business, a workflow, or even a habit that depends on a remote black box, you’re not the customer; you’re the product. Full-stack ownership isn’t just to show off. It’s pure risk management.

The future belongs to those who can build, debug, and document, not just rent someone else’s toolchain. Bootcamps don’t cut it anymore.

“Every day these systems run is a miracle. Most engineers wouldn’t last five minutes outside their cloud sandbox.”

Our industry is obsessed with AI hype, yet most devs have never seen the bare metal, never written a real doc, and never owned their own stack. Meanwhile, the only thing standing between us and our systems’ total collapse is duct tape, a few command-line obsessives, and the shrinking number of people who still know how to fix things when it all stops working. We’re staring down an industry where the median troubleshooting skill is somewhere between “reboot and pray” and “copy-paste from Stack Overflow”.

So please, stop the doomscroll and quit worrying about being replaced. LLMs amplify you; they don’t substitute for you. The edge is in the hard parts: critical thinking, debugging, taste for clean architecture, putting it all together. That’s not going anywhere. The job is shifting, not getting eliminated: more architecture, more security, more maintenance, more troubleshooting. Still deeply human, and still non-trivial to automate.

This is blogpost #3 in my 101 Days of Blogging. If it sparks anything (ideas, questions, or critique), my DMs are open. Hope it gives you something useful to walk away with.

Agency

Yesterday morning I hosted an X/Twitter Audio Space on how LLMs, open-source, and the gravitational pull of platform centralization are forcing us all to rethink what it actually means to be a developer. The cloud got us coddled… The cloud was a mistake, and I believe the next decade’s winners won’t be the ones who just ship the most code (LLMs are really good at that, BTW), but the ones who get obsessed with understanding, documenting, and actually owning their tools, top to bottom.

I. RTFM and Technical Self-Reliance

Let’s set the scene. A Google Cloud outage just crashed the internet. X/Twitter is in full panic mode; Cursor, Claude Code, Windsurf, and the rest aren’t working anymore. LLMs have become the default code generator, and human programming skills are fading. Me? I didn’t even notice the outage until I got online. My local agents, running on my hardware in my basement, kept running.

If you think local models don’t work, you probably skipped the manual. RTFM applies to LLMs too, and that includes the models themselves, inference engines, samplers, and platform/infra, among other things. RTFM, please.
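The samplers point is concrete: the same weights can feel broken or brilliant depending on settings like temperature and top-p. As a rough illustration (plain Python, a sketch of the idea, not any inference engine’s actual implementation), nucleus (top-p) sampling looks like this:

```python
import math
import random

def top_p_sample(logits, p=0.9, temperature=0.8, rng=random):
    """Nucleus (top-p) sampling: keep the smallest set of tokens whose
    cumulative probability reaches p, then sample from that set."""
    # Temperature first: lower values sharpen the distribution.
    scaled = [l / temperature for l in logits]

    # Numerically stable softmax over the scaled logits.
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Walk tokens from most to least probable until mass p is covered.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= p:
            break

    # Renormalize over the kept set and draw one token index.
    mass = sum(probs[i] for i in kept)
    r = rng.random() * mass
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]
```

With temperature near zero this degenerates toward greedy decoding; with p = 1.0 it’s plain temperature sampling. Knowing which of these knobs your engine exposes, and what its defaults are, is exactly the manual-reading that makes local models work.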

But this phenomenon goes beyond LLMs: engineers, conditioned by managed platforms and cloud UIs, often panic at the command line. Most troubleshooting is “copy-paste and hope.” We’re staring down an industry where the median troubleshooting skill is somewhere between “reboot and pray” and “copy-paste from Stack Overflow.” Seriously, drop most devs into a raw terminal and you’ll see the panic set in. Engineering has become SaaS babysitting.

“Hardware is not that difficult to deal with, and I feel like software engineers have been coddled and coddled and coddled, and this needs to change.”

“If you knew how duct-taped these systems are, you’d see that every day they’re up is a miracle.”

Decentralization, and more importantly the lessons that come with it, is basic resilience. Centralized systems (“The Cloud”) are brittle, and few know how anything works under the hood. Fewer still can repair, debug, or improve them. Real technical ability is fading. Worried about AI replacing you? Build real technical depth. LLMs are leverage, a force multiplier, but only if you know what you’re doing. You’re not losing to AI. You’re losing to people who use AI better than you because they actually understand the tech. Get sharper.

II. Open-source’s Docs and The Gatekeeping Culture

On the other hand, open-source has a documentation problem. If your project’s README doesn’t get me to “pip install” in three paragraphs or less, you’re not doing me or your project any favors. I don’t need to wade through seven pages to get to installation. Your front page needs to send me an ROI signal.

“Can you really fault the average person? They ask, ‘How do I set up Ollama?’ and the first response is, ‘LOL, bro, just set up llama.cpp.’ That doesn’t help.”

Why does this even matter? Bad docs and exclusionary behavior kill new subcultures. Without better documentation and less gatekeeping, open-source will stagnate.

Creators need to lower barriers so that users, who should show up with curiosity, are not completely thrown off.

III. Coding with LLMs: Skills and the 20% That Won’t Be Automated

LLMs multiply output, but code quality depends on architectural sense and maintenance. System design, error handling, and long-term structure can’t be automated. LLMs stop at “good enough.” Actual production code, scale, and troubleshooting still need humans with context and taste. The real value is shifting away from just cranking out code, and toward understanding, integrating, and keeping these increasingly complex systems alive.

“LLMs get me to the 80% point. The last 20% are still so painful that I don’t put stuff out. AGI is not here.”

“You can kind of judge how senior someone is by… at what size does the code base start turning into slop. With LLMs? That’s a couple thousand lines.”

Code is easy to generate, but maintenance and architecture expose skill. The shift is toward design, security, and debugging. The job changes, but the human edge persists.

So please, stop the doomscroll and quit worrying about being replaced. LLMs amplify you; they don’t substitute for you. The edge is in the hard parts: critical thinking, debugging, taste for clean architecture, putting it all together. That’s not going anywhere. The job is shifting, not getting eliminated: more architecture, more security, more maintenance, more troubleshooting. Still deeply human, and still non-trivial to automate.

IV. Centralization and Corporate Lock-In

“Big AI companies don’t want you to try—they need you to believe you can’t compete, so you just pay subscriptions.”

OpenAI, Anthropic, pick your corporate overlord: they profit from developer dependency. They want users to believe they can’t compete, pushing lock-in and endless subscriptions. AGI hype conditions users to consume, not create. That’s the business model.

The constant drumbeat of “AGI/ASI is coming” (or, in other versions, “it’s already here”) isn’t there to inspire you; it’s to keep the money coming. The goalposts move, the narrative shifts, but as long as you’re buying in, that’s all that matters to them and their investors.

“They need you not to try. They need you to be afraid and sit there at home and be like, ‘I guess I’ll just pay my subscription.’ That’s all they care about.”

Behind the scenes, these systems are fragile. Anthropic literally “dynamically quantizes” models to save on cloud bills. That whole “invincible AGI” narrative is just a way to condition the market to pay up and stay put.

How do you defend yourself? Simple: run local. Use open weights. If you’re not holding the entire stack, you’re living on someone else’s compute, and it’s one API tweak away from breaking your business, your research, your workflow.
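The “one API tweak away” risk can also be reduced mechanically: treat the remote endpoint as optional and keep a local server as the fallback. A minimal sketch, assuming an ordered list of OpenAI-compatible base URLs and an injected health probe (the endpoint names here are made up for illustration):

```python
from typing import Callable, Sequence

def pick_endpoint(candidates: Sequence[str],
                  is_healthy: Callable[[str], bool]) -> str:
    """Return the first endpoint that answers a health probe.

    `candidates` is ordered by preference; putting your local server
    last makes it the safety net when every remote option is down.
    """
    for url in candidates:
        try:
            if is_healthy(url):
                return url
        except Exception:
            continue  # a dead endpoint must not take the app down with it
    raise RuntimeError("no endpoint available, not even local")

# Hypothetical endpoints; the local llama.cpp/Ollama-style server is yours.
ENDPOINTS = [
    "https://api.example-cloud.com/v1",  # remote: can change overnight
    "http://127.0.0.1:8080/v1",          # local: your weights, your compute
]
```

In practice `is_healthy` would be a quick GET against the server’s health route with a short timeout; the point is that your workflow degrades to local instead of stopping cold.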

This goes way beyond privacy or ideology. As optimization and model alignment get more personal (and more opaque), your only actual safety net is full local control. If you’re building a business, a workflow, or even a habit that depends on a remote black box, you’re not the customer; you’re the product. Full-stack ownership isn’t just to show off. It’s pure risk management.

And for the sci-fi crowd: every closed model is a potential vector for “payloads”: recommendations, nudges, mindshaping. Opaque APIs mean “hidden levers.” If you don’t control it, it will, eventually, control you.

Memorable Quotes

“Hardware is not that difficult to deal with, and I feel like software engineers have been coddled and coddled and coddled, and this needs to change.”

“If you knew how duct-taped these systems are, you’d see that every day they’re up is a miracle.”

“Can you really fault the average person? They ask, ‘How do I set up Ollama?’ and the first response is, ‘LOL, bro, just set up llama.cpp.’ That doesn’t help.”

“Quick installs matter. Don’t bury essential info beneath buzzwords and extensive READMEs.”

“LLMs get me to the 80% point. The last 20% are still so painful that I don’t put stuff out. AGI is not here.”

“If it’s not your weights, if you’re not in full control of the models, they can change them on you at the snap of a finger.”

“They need you not to try. They need you to be afraid and sit there at home and be like, ‘I guess I’ll just pay my subscription.’ That’s all they care about.”

Stay sharp.