Osman's Odyssey: Byte & Build
Chronicles of a Perpetual Learner

So You Want to Learn LLMs? Here's the Roadmap

A Real-World, No-Bloat Guide to Building, Training, and Shipping LLMs



~$ ./learn_llms.sh


This blogpost was published on my X/Twitter account on June 23rd, 2025.

Welcome to the “how do I actually learn how LLMs work” guide. If you’ve got a CS background and you’re tired of the endless machine learning prerequisites, this is for you. I built this with past me in mind; I wish I had it all drawn out like this by someone else. This roadmap should leave you comfortable building, training, exploring, and researching LLMs.

The links at the end let you go as deep as you want. If you’re stuck, rewatch or reread. If you already know something, skip ahead. The phases are your guardrails, not handcuffs. By the end, you’ll have actually built the skills. Every resource, every project, every link is there for a reason. Use it, adapt it, and make it your own. I hope you don’t just use this as a collection of bookmarks.

Remember, you can always use DeepResearch when you’re stuck, need something broken down to first principles, want material tailored to your level, need to identify gaps, or just want to explore deeper.

This is blogpost #4 in my 101 Days of Blogging. If it sparks anything (ideas, questions, or critique), my DMs are open. Hope it gives you something useful to walk away with.

TL;DR – What Are We Doing?

The short version:

You will:

How This Works

The approach here is simple.

Learn by Layering: Build Intuition ➡️ Strengthen Theory ➡️ More Hands-on ➡️ Paper Deep Dives ➡️ Build Something Real.

You’re going to use four kinds of resources:

  1. Visual Intuition (3Blue1Brown, Karpathy) – get the why and the how.
  2. Formal Theory (Stanford/MIT lectures, open courseware) – unfortunately, sometimes you do need the math.
  3. Papers (“Attention Is All You Need”, BERT, LoRA, etc.) – get used to reading papers.
  4. Coding Projects.


The Roadmap Overview section gives you the conceptual big picture: what you’ll need to understand, at a high level. After that, the How To Actually Learn section breaks those concepts down into actual learning phases: what to study, how to build intuition, which projects to complete, and in what order. Finally, the Where To Learn Them section links out to the exact videos, lectures, papers, and codebases that’ll help you execute this roadmap. So: concepts first, then the breakdown, then the tools to go do it.

Roadmap Overview & Topics

Foundations Refresher

Transformers

Scaling and Training

Alignment + Fine-Tuning

Inference Optimizations

How To Actually Learn (The Real Plan)

Phase 0: Foundations Refresher

You do not need a PhD in math to understand LLMs. But if you can’t follow a simple PyTorch training loop, or you have zero intuition for matrix multiplication, things will seem very confusing (they really aren’t once you get your head around them).
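If “a simple PyTorch training loop” is still fuzzy, here is a minimal sketch of one: a toy linear-regression problem, the forward pass, the backward pass, and the optimizer step. The data and hyperparameters are made up for illustration; this is the loop skeleton you should be able to read at a glance before moving on.

```python
import torch
import torch.nn as nn

# Toy data: learn y = 2x + 1 from noisy samples.
torch.manual_seed(0)
x = torch.randn(256, 1)
y = 2 * x + 1 + 0.01 * torch.randn(256, 1)

model = nn.Linear(1, 1)  # one weight, one bias
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

for step in range(200):
    opt.zero_grad()              # clear gradients from the last step
    loss = loss_fn(model(x), y)  # forward pass + loss
    loss.backward()              # backprop: compute gradients
    opt.step()                   # update parameters

print(model.weight.item(), model.bias.item())  # close to 2.0 and 1.0
```

Every training run in this roadmap, up to and including LLM pretraining, is an elaboration of these four lines inside the loop.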

Phase 1: Transformers

The Scariest Thing In LLMs and AI Isn’t The Models Or The Math: It’s The Names

I have this meme about how the words are the scariest part of LLMs. “Transformer” is the very first word you need your brain to register as “easy” when you hear it. Transformers are just stacks of matrix multiplications and attention blocks, with some really clever engineering.
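To make the “just matrix multiplications” claim concrete, here is a sketch of scaled dot-product attention, the core operation from “Attention Is All You Need”, stripped of multi-head plumbing and masking:

```python
import torch
import torch.nn.functional as F

def attention(q, k, v):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5  # (seq, seq) similarities
    weights = F.softmax(scores, dim=-1)          # each row sums to 1
    return weights @ v                           # weighted mix of value vectors

# One "head" of self-attention over a toy sequence: 4 tokens, dim 8.
x = torch.randn(4, 8)
out = attention(x, x, x)  # q, k, v all derived from the same input
print(out.shape)          # torch.Size([4, 8])
```

Two matrix multiplies and a softmax. Multi-head attention, causal masks, and positional encodings are refinements layered on top of exactly this.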

Phase 2: Scaling Laws & Training for Scale

LLMs got good because people figured out what to scale, how to scale it, proved it could scale, and showed that it actually works.
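As a taste of what “scaling laws” buy you: the Chinchilla paper (Hoffmann et al., 2022) is often summarized by two rules of thumb, roughly 20 training tokens per parameter for compute-optimal training, and total training compute of about C ≈ 6·N·D FLOPs for N parameters and D tokens. This back-of-envelope sketch applies those approximations; the exact constants are simplifications, not the paper’s full fitted law.

```python
def chinchilla_estimate(params: float) -> dict:
    """Back-of-envelope budget, assuming ~20 tokens/parameter
    (Chinchilla rule of thumb) and C ~= 6 * N * D training FLOPs."""
    tokens = 20 * params
    flops = 6 * params * tokens
    return {"tokens": tokens, "train_flops": flops}

est = chinchilla_estimate(7e9)  # a 7B-parameter model
print(f"{est['tokens']:.2e} tokens, {est['train_flops']:.2e} FLOPs")
# -> 1.40e+11 tokens, 5.88e+21 FLOPs
```

Being able to run this arithmetic in your head is half of what the scaling-laws papers teach you.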

Phase 3: Alignment & PEFT

Fine-tuning is not just a cheap trick. RLHF and PEFT are the reason you can actually use LLMs for real-world use cases.
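The core trick behind LoRA, the PEFT method you’ll meet first, fits in a few lines: freeze the pretrained weight and learn a low-rank additive update. This is a sketch of the idea, not the `peft` library’s implementation; the rank and scaling values are illustrative.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update:
    y = Wx + (alpha / r) * B(A(x)). A sketch of the LoRA idea."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the pretrained weights
        self.A = nn.Linear(base.in_features, r, bias=False)
        self.B = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.B.weight)        # update starts as a no-op
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * self.B(self.A(x))

layer = LoRALinear(nn.Linear(512, 512), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(trainable, total)  # 8192 of 270848 parameters train (~3%)
```

That ~3% trainable-parameter figure is the whole point: you can adapt a big model on hardware that could never hold its full gradients and optimizer states.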

Phase 4: Production

You made it to the only part that most people ever see: the actual app.
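Underneath every chat app is the same autoregressive decoding loop: run the model, pick the next token, append, repeat. Here is a minimal greedy-decoding sketch with a hypothetical toy “model” standing in for a real one; production engines wrap this exact loop with KV caching, batching, and sampling strategies.

```python
import torch

@torch.no_grad()
def greedy_decode(model, prompt_ids, max_new_tokens=20, eos_id=None):
    """Minimal autoregressive decoding loop.
    Assumes `model` maps (1, seq) token ids to (1, seq, vocab) logits."""
    ids = prompt_ids
    for _ in range(max_new_tokens):
        logits = model(ids)  # recomputes the whole sequence (a KV cache avoids this)
        next_id = logits[:, -1].argmax(-1, keepdim=True)  # most likely next token
        ids = torch.cat([ids, next_id], dim=-1)
        if eos_id is not None and next_id.item() == eos_id:
            break
    return ids

# Hypothetical toy "model": always predicts (last token + 1) mod vocab.
vocab = 10
def toy_model(ids):
    logits = torch.zeros(1, ids.size(1), vocab)
    logits[0, -1, (ids[0, -1].item() + 1) % vocab] = 1.0
    return logits

out = greedy_decode(toy_model, torch.tensor([[3]]), max_new_tokens=4)
print(out.tolist())  # [[3, 4, 5, 6, 7]]
```

Once this loop is obvious to you, the inference-optimization literature (KV caches, speculative decoding, continuous batching) reads as answers to “how do we make this loop cheap at scale.”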

Where To Learn Them

Below is what to read and watch for this learning plan.

Math/CS Pre-Reqs

PyTorch Fundamentals

Transformers & LLMs

Scaling & Distributed Training

Alignment & PEFT

Inference

The Endgame

If you actually do the roadmap above, build the projects, and push past the YouTube tutorial hell, you’ll understand LLMs extremely well. You’ll see through the hype, spot nonsense at a glance, and build your own models and tooling.

If you make it through this plan and actually ship something, DM me, I wanna see it.

Happy hacking.