How I Approach Learning as a Machine Learning Engineer

One of the most frequent questions I get is:
“How do you keep up with everything in ML?”
The truth? I don’t try to learn everything.
But I’ve built a system to learn what matters, when it matters.
Here’s what that looks like.
1. I learn in layers, not all at once
Most topics in ML feel overwhelming: transformer internals, recommendation infra, prompt tuning, and so on.
So I don’t try to master them in one go.
Instead, I skim → explore → deepen.
Example:
- Skim: Watch a YouTube explainer or podcast
- Explore: Read a paper or Medium post
- Deepen: Build a toy project or read the source code
2. I pick what’s useful and what’s exciting
If I only chase what’s useful, I burn out.
If I only chase what’s exciting, I don’t ship.
So I split my learning into two tracks:
- Need to learn (for my work, a project, or my role)
- Want to learn (curiosity, creative fuel)
Balance = sustainable growth.
3. I write or teach to reinforce it
Writing is a cheat code for understanding.
Every time I’ve written a doc or a blog post, or explained something to a teammate, I’ve learned more than I would from reading five papers.
This blog? Part of that same habit.
4. I treat tools as temporary
I don’t obsess over frameworks or syntactic sugar.
What matters more is understanding why something works and when to use it.
TensorFlow, PyTorch, JAX — tools change. Principles stick.
5. I timebox deep dives
Some topics can suck you into rabbit holes, so I set a timer:
45 minutes of reading, then I either:
- Summarize what I learned
- Bookmark it and move on
- Schedule a deep dive if it’s worth it
This avoids overconsumption and guilt.
TL;DR
- Learn in layers
- Split learning into need-to / want-to
- Write it down to remember it
- Focus on principles, not just tools
- Use time-boxing to control attention
If you’ve struggled with ML overwhelm — know this:
You don’t have to know everything.
You just have to keep learning with intent.
Would love to hear how you learn. Drop a note, share a resource, or teach me something new. ✨