Learn Python Series (#57) - Why I'm Building an AI Series (And Why You Should Follow Along)


What will I learn

  • why the Learn Python Series is evolving into something bigger;
  • what "Build Your Own AI" actually means and what it doesn't;
  • the philosophy behind teaching AI from scratch instead of from sklearn.fit();
  • how 56 episodes of Python fundamentals directly feed into understanding ML/AI;
  • the full roadmap from first principles through transformers, LLMs, and production AI;
  • what tools and prerequisites you need before episode one drops.

Requirements

  • A working modern computer running macOS, Windows or Ubuntu;
  • An installed Python 3(.11+) distribution;
  • Familiarity with the Learn Python Series (especially recent episodes on NumPy and Pandas);
  • The ambition to learn Python programming.

Difficulty

  • Intermediate

Curriculum (of the Learn Python Series):

GitHub Account

https://github.com/realScipio


This episode is different. No code. No exercises. I want to talk about where this series is going - and why.

If you've been following along, you've noticed a trajectory. Strings and lists → web crawlers and databases → a full FastAPI/SQLAlchemy/auth/deployment stack → and in the last three episodes, modern Pandas and deep NumPy. That trajectory wasn't random. It was building toward something.

The elephant in the room

Every time I explain broadcasting, vectorized operations, or memory layout, there's an unspoken question: what's the point of all this array manipulation?

The honest answer: most of what makes NumPy and Pandas powerful only fully reveals itself when you use them for machine learning. Broadcasting exists because neural networks need it. Vectorization matters because training loops process millions of data points. Memory layout matters because GPU kernels need contiguous data. We've been building the engine. Now it's time to build the car.
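To make that concrete, here is a tiny sketch of why ML is what finally justifies broadcasting and vectorization (illustrative only; the array shapes and the ReLU choice are my own, not from any specific episode):

```python
import numpy as np

# A mini "layer" computation: 4 samples, 3 features each.
X = np.random.default_rng(0).standard_normal((4, 3))
b = np.array([0.1, 0.2, 0.3])  # one bias per feature

# Broadcasting: the (3,) bias is stretched across all 4 rows
# without an explicit Python loop over samples.
out = X + b
print(out.shape)  # (4, 3)

# Vectorization: one call applies ReLU to every element at once,
# exactly the kind of operation a training loop runs millions of times.
activated = np.maximum(out, 0)
```

This is the same pattern every neural-network layer uses: per-sample data plus shared parameters, combined in one vectorized expression.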

Why now

In 2026, understanding AI isn't optional for serious developers. Not "understanding" as in reading blog posts about ChatGPT - understanding as in: you can explain what a transformer does, write a training loop from scratch, and look at a model architecture diagram and know what each layer computes.

The divide is stark. Developers who use AI as a black box (calling APIs, hoping the output is correct, unable to debug when it isn't) versus developers who understand the internals (can fine-tune, evaluate, debug, and read papers). Those two groups will have very different career trajectories. I want my readers in the second group.

There's a personal angle too. I've spent years working with AI tools and reading papers - but scattered knowledge isn't the same as structured knowledge. Teaching forces you to organize what you know into a coherent path. That's what this series is: taking everything I've accumulated and distilling it into the progression I wish had existed when I started ;-)

The philosophy: understand first, libraries second

The new series - the Learn AI Series - follows one core principle: build things from scratch before using libraries.

Most AI tutorials start with model.fit(X, y) and call it a day. You've "done machine learning" but learned nothing about what happened inside. When it breaks - and it will - you can't debug it.

Say your classifier is stuck at 50% accuracy. If you only know the API, your toolkit is: try different hyperparameters, try a different model, ask Stack Overflow. That's guessing. If you understand the internals, your debugging is systematic: is the loss decreasing? (If not: learning rate problem.) Loss decreasing but accuracy flat? (Class imbalance.) Training accuracy great but validation terrible? (Overfitting.) Each diagnosis requires understanding the training loop from the inside.
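That checklist can even be sketched as a small triage helper. This is a hypothetical illustration of the reasoning above, not code from the series, and the threshold values (0.15 gap, 0.6 accuracy) are arbitrary placeholders:

```python
def diagnose(train_losses, train_acc, val_acc):
    """Rough triage for a stuck classifier (illustrative thresholds)."""
    # Is the loss decreasing at all? If not: learning rate problem.
    if train_losses[-1] >= train_losses[0]:
        return "loss not decreasing: check the learning rate"
    # Training accuracy great but validation terrible? Overfitting.
    if train_acc - val_acc > 0.15:
        return "large train/validation gap: likely overfitting"
    # Loss decreasing but accuracy flat? Look at class balance.
    if train_acc < 0.6:
        return "loss falls but accuracy is flat: check class balance"
    return "training looks healthy so far"

print(diagnose([0.9, 0.7, 0.5], train_acc=0.95, val_acc=0.62))
```

Each branch corresponds to one diagnosis in the paragraph above; the point is that every check presupposes you know what the training loop is doing internally.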

So we build things from scratch with NumPy first - gradient descent, regression, neural networks. Then we use scikit-learn, PyTorch, and Hugging Face, because at that point those library calls are recognition, not mystery.
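As a taste of what "from scratch with NumPy" means, here is a minimal gradient descent for linear regression. This is my own sketch of the genre, not actual series material; the data is synthetic with a known answer (w = 3, b = 2) so convergence is easy to verify:

```python
import numpy as np

# Synthetic data: y = 3x + 2, plus a little noise.
rng = np.random.default_rng(42)
X = rng.standard_normal(100)
y = 3.0 * X + 2.0 + 0.1 * rng.standard_normal(100)

w, b = 0.0, 0.0  # start from nothing
lr = 0.1         # learning rate

for _ in range(200):
    pred = w * X + b
    err = pred - y
    # Gradients of mean squared error with respect to w and b.
    grad_w = 2 * np.mean(err * X)
    grad_b = 2 * np.mean(err)
    # The gradient descent step itself: walk downhill.
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # close to 3.0 and 2.0
```

Once you have written this loop yourself, `model.fit(X, y)` stops being magic: it is this, with better bookkeeping.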

The direction

I'm not laying out a detailed curriculum - the series goes where the material takes it. But the direction is clear:

We start from absolute zero. What is machine learning? What does "learning from data" actually mean? Then we build intuition with tiny examples, earn the math we need (only what's needed), and construct real algorithms from scratch.

The natural progression from there: classical ML → neural networks → deep learning → transformers → LLMs → and wherever the field goes next. Each concept built on the last. Mini-projects along the way. How far we go depends on how deep the rabbit hole is. (It's very deep.)

You're more prepared than you think

If you've been following the Learn Python Series, you already have critical prerequisites locked in:

  • Pandas (#30-33, #54-55): loading, cleaning, transforming tabular data - the first thing every ML project does
  • NumPy (#56): memory layout, broadcasting, vectorization - the exact operations underneath every ML library
  • FastAPI stack (#49-53): building and deploying APIs - how trained models get served in production
  • Advanced Python (#46-48): descriptors, generators, threading - the mechanics powering PyTorch internals

Even if you haven't followed every episode: solid Python plus basic NumPy and Pandas is enough to start.

What this is NOT

  • Not a math course. We cover math only to understand algorithms - visualizations over formal proofs
  • Not a prompt engineering course. If we reach LLMs, we'll understand why prompts work, not memorize magic phrases
  • Not a shortcut. AI is deep. Genuine competence takes time. No "master AI in 10 lessons" nonsense
  • Not leaving Python behind. Same language, same foundation, fresh numbering - the DNA is identical

The Learn Python Series isn't ending

Think of it as having reached a natural milestone. Fifty-seven episodes from absolute beginner through advanced metaprogramming, web development, and data science. That's a complete education. It may get occasional additions, but the main arc is done.

The Learn AI Series picks up the torch. Same author. Same philosophy. Same community.

I've spent nine years building this series. The next chapter is the one I've been working toward.

Until the next series. This is going to be good ;-)

@scipio


