A Curated “Reverse-Learning” Approach
Why reverse-learning?
Just as you can learn to drive before you master the mechanics of an engine, you can start using AI before you understand every layer of the tech stack. Early wins keep motivation high; deeper theory can follow naturally.
- Course: Artificial Intelligence on Microsoft Azure – overview of AI workloads and ready-made cloud services.
https://www.coursera.org/learn/artificial-intelligence-microsoft-azure/
- Quick start: AI Python for Beginners – DeepLearning.AI short course.
https://www.deeplearning.ai/short-courses/ai-python-for-beginners/
- Deeper dive: Python Specialization (Coursera).
https://www.coursera.org/specializations/python
- Optional: Data Structures & Algorithms – solid CS fundamentals.
https://www.coursera.org/specializations/data-structures-algorithms
- Hands-on intro: Open-Source Models with Hugging Face
https://www.deeplearning.ai/short-courses/open-source-models-hugging-face/
- Embeddings & vector search (a minimal sketch follows the tools table below):
  - Embedding Models: Architecture → Implementation
    https://www.deeplearning.ai/short-courses/embedding-models-from-architecture-to-implementation/
  - Google Cloud Vertex AI Embeddings
    https://www.deeplearning.ai/short-courses/google-cloud-vertex-ai/
- Course: Vibe Coding 101 (Replit) – build apps with natural-language prompts.
https://www.deeplearning.ai/short-courses/vibe-coding-101-with-replit/
- Tools to know
Tool | Best For | Link |
---|---|---|
Claude 3 Sonnet (Anthropic) | Generating Python/Streamlit snippets | https://claude.ai/ |
GitHub Copilot (Ask / Edit / Agent) | IDE-native pair programming | https://github.blog/ai-and-ml/github-copilot/copilot-ask-edit-and-agent-modes-what-they-do-and-when-to-use-them/ |
Replit | Cloud IDE + one-click deployment | https://replit.com/ |
Windsurf | Multi-file LLM coding (≤ 1,000 lines per file) | https://windsurf.com/editor |
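To make the embeddings-and-vector-search item above concrete, here is a minimal semantic-search sketch using the sentence-transformers library. The model name, documents, and query are illustrative assumptions, not material from the courses above.

```python
# Minimal semantic-search sketch (assumes `pip install sentence-transformers numpy`).
# The model name, documents, and query below are illustrative placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, general-purpose embedding model

documents = [
    "Streamlit turns Python scripts into shareable web apps.",
    "vLLM is a high-throughput inference engine for LLMs.",
    "RLHF aligns language models with human preferences.",
]

# Encode documents and the query into dense vectors
# (normalized, so a dot product equals cosine similarity).
doc_vecs = model.encode(documents, normalize_embeddings=True)
query_vec = model.encode("How do I serve a large language model quickly?",
                         normalize_embeddings=True)

# Rank documents by cosine similarity and print the best match.
scores = doc_vecs @ query_vec
best = int(np.argmax(scores))
print(f"Best match ({scores[best]:.2f}): {documents[best]}")
```

At larger scale the vectors would normally live in a vector store rather than an in-memory array, but the ranking logic stays the same. The next table maps three ways of adapting an LLM to when each applies.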
Approach | When to Use | Key Course | Link |
---|---|---|---|
Prompt Engineering | Quick wins without extra data | ChatGPT Prompt Engineering for Developers | https://www.deeplearning.ai/short-courses/chatgpt-prompt-engineering-for-developers/ |
RAG (Prompt + Retrieval) | Need grounded, up-to-date answers | Building Multimodal Search & RAG | https://www.deeplearning.ai/short-courses/building-multimodal-search-and-rag/ |
Fine-Tuning | Need style/behavior the base model doesn't provide | | |
• Supervised Fine-Tuning (SFT) | Labeled pairs available | Finetuning Large Language Models | https://www.deeplearning.ai/short-courses/finetuning-large-language-models/ |
• RLHF | Align to human preferences | Reinforcement Learning from Human Feedback | https://www.deeplearning.ai/short-courses/reinforcement-learning-from-human-feedback/ |
All courses in this table are by DeepLearning.AI.
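To show how the first two approaches differ in code, here is a rough sketch using the OpenAI Python SDK; the model name and the stubbed retriever are placeholder assumptions, not something prescribed by the courses in the table.

```python
# Sketch: plain prompting vs. retrieval-augmented prompting.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment.
# The model name and the retriever stub are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# 1) Prompt engineering: everything the model needs is in the instructions.
answer = ask("You are a release manager. Summarize the risks of deploying "
             "on a Friday in three bullets.")

# 2) RAG: retrieve grounding text first (stubbed here), then prompt with it.
def retrieve(query: str) -> str:
    # In a real system this would be a vector-search call like the embedding sketch above.
    return "Internal policy: production deploys are frozen from Friday 4pm to Monday 9am."

context = retrieve("deploy freeze policy")
grounded_answer = ask(f"Answer using only this context:\n{context}\n\nQuestion: When can we deploy?")
```

The fine-tuning rows change the model itself rather than the prompt, which is why they require labeled examples or preference data.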
Core Topics & Resources
Additional deep-dive videos (free):
- Stanford CS336 (2025): Language Modeling from Scratch https://www.youtube.com/watch?v=Rvppog1HZJY
- Stanford Online AI course playlist (YouTube) https://www.youtube.com/watch?v=vf93B08v7Qs&list=PLoROMvodv4rOu17_W_IfCghVFseNVTLNp
- Let’s Build GPT from Scratch – Andrej Karpathy https://www.youtube.com/watch?v=kCc8FmEb1nY&t=1886s
- Deep Dive into LLMs like ChatGPT – Andrej Karpathy https://www.youtube.com/watch?v=7xTGNNLPyMI
- How I use LLMs – Andrej Karpathy https://www.youtube.com/watch?v=EWvNQjAaOHw&t=281s
- Intro to Large Language Models – Andrej Karpathy https://www.youtube.com/watch?v=zjkBMFhNj_g
Choose tracks that match your goals:
- Model Optimization & LLM Ops – quantization, LLM compression, parallelism (e.g., multi-token prediction), inference engines (vLLM, SGLang).
- Pre-Training – data curation, tokenizer design, multimodal training.
- Post-Training – SFT, RLHF, DPO, reward modeling.
- Serving & Inference – low-latency architectures, model routing, GPU vs. CPU routing (a minimal vLLM sketch follows this list).
- Alternative Architectures – Mixture-of-Experts, RWKV, state-space models.
- AI Agents – autonomous planning, tool use, multi-agent orchestration (LangGraph, Semantic Kernel, AutoGen), MCP (Model Context Protocol), and A2A (Agent2Agent).
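For the Serving & Inference track, the sketch below shows roughly what minimal batch inference with vLLM looks like; the checkpoint name is a placeholder, and a real low-latency deployment would typically sit behind an OpenAI-compatible server rather than the offline API.

```python
# Minimal vLLM batch-inference sketch (assumes `pip install vllm` and a GPU).
# The model name is a placeholder; swap in any checkpoint you have access to.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # placeholder checkpoint
params = SamplingParams(temperature=0.7, max_tokens=128)

prompts = [
    "Explain KV-cache reuse in one paragraph.",
    "Why does batching improve GPU utilization?",
]

# vLLM batches the prompts and schedules them with paged attention,
# which is what keeps GPU memory utilization high.
for output in llm.generate(prompts, params):
    print(output.outputs[0].text.strip())
```

Low-latency serving adds more on top of this (continuous batching, model routing, quantized weights), which is what the track above goes deeper on.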
Structured Programs
- Stanford Artificial Intelligence Professional Program
https://online.stanford.edu/programs/artificial-intelligence-professional-program
- Stanford Artificial Intelligence Graduate Program
https://online.stanford.edu/Artificial-Intelligence-Certificate-Guide
If you need classic machine-learning and deep-learning depth:
- ML Theory: Mathematics for Machine Learning (Imperial College London) https://www.coursera.org/specializations/mathematics-machine-learning
- ML Specialization: Machine Learning (Coursera) https://www.coursera.org/learn/machine-learning
- Deep Learning: Deep Learning Specialization – DeepLearning.AI https://www.coursera.org/specializations/deep-learning ; Stanford CS230 https://cs230.stanford.edu/
- Deep Generative Models: Stanford CS236 https://deepgenerativemodels.github.io/
- MLOps: Machine-Learning Engineering for Production – Andrew Ng. https://www.coursera.org/specializations/machine-learning-engineering-for-production-mlops
For broad, shallow coverage of Python libraries and packages:
- Data Scientist with Python: (DataCamp) https://learn.datacamp.com/career-tracks/data-scientist-with-python
- Machine Learning Scientist with Python: (DataCamp) https://app.datacamp.com/learn/career-tracks/machine-learning-scientist-with-python
Begin with application, sustain momentum with hands-on wins, then spiral inward to theory and systems. This reverse-learning path balances practical impact with foundational depth—essential for modern AI engineers.