
AI Engineering Learning Path: A Reverse-Learning Approach

A Curated “Reverse-Learning” Approach

Why reverse-learning?
Just as you can learn to drive before you master the mechanics of an engine, you can start using AI before you understand every layer of the tech stack. Early wins keep motivation high; deeper theory can follow naturally.


1 Beginner: Explore Pre-Trained Models & Core Skills

1.1 AI Fundamentals (No-Code)

1.2 Python Foundations

1.3 Open-Source, Pre-Trained Models
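To see how low the barrier to entry is here, below is a minimal sketch of running an open-source, pre-trained model with the Hugging Face transformers library (the library choice and example sentence are illustrative, not part of the curriculum; assumes `pip install transformers torch`):

```python
# Minimal use of an open-source, pre-trained model via Hugging Face
# `transformers` (assumes: pip install transformers torch).
from transformers import pipeline

# The first call downloads a small default sentiment-analysis checkpoint.
classifier = pipeline("sentiment-analysis")

print(classifier("Early wins keep motivation high."))
# e.g. [{'label': 'POSITIVE', 'score': 0.999...}]
```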

1.4 “Vibe” Coding & AI-Assisted Development


2 Intermediate (Part A): Map Business Problems to Model Adaptation Approaches

| Approach | When to Use | Key Course Link |
| --- | --- | --- |
| Prompt Engineering | Quick wins without extra data | [ChatGPT Prompt Engineering for Developers](https://www.deeplearning.ai/short-courses/chatgpt-prompt-engineering-for-developers/) |
| Fine-Tuning | Desired style/behavior outside the base model | (see the two variants below) |
| • Supervised Fine-Tuning (SFT) | Labeled pairs available | [Finetuning Large Language Models](https://www.deeplearning.ai/short-courses/finetuning-large-language-models/) |
| • RLHF | Align to human preferences | [Reinforcement Learning from Human Feedback](https://www.deeplearning.ai/short-courses/reinforcement-learning-from-human-feedback/) |
| RAG (Prompt + Retrieval) | Need grounded, up-to-date answers | [Building Multimodal Search and RAG](https://www.deeplearning.ai/short-courses/building-multimodal-search-and-rag/) |

All courses by DeepLearning.AI.
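To make the difference between plain prompting and RAG concrete, here is a toy, self-contained sketch of the RAG pattern: retrieve the most relevant snippet, then ground the prompt in it. The bag-of-words "embeddings" and in-memory document list are illustrative stand-ins for a real embedding model and vector store:

```python
# Toy RAG sketch: retrieve the most relevant snippet, then ground the prompt
# with it. Real systems use learned embeddings and a vector database; the
# bag-of-words vectors below are an illustrative stand-in.
from collections import Counter
import math

DOCS = [
    "Our refund window is 30 days from the delivery date.",
    "Support is available by chat from 9am to 5pm PST.",
    "Premium plans include priority phone support.",
]

def embed(text: str) -> Counter:
    """Bag-of-words 'embedding' -- a stand-in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    # Grounding the prompt in retrieved context is what distinguishes
    # RAG from plain prompt engineering.
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How long do I have to get a refund?"))
```

A real pipeline would pass `build_prompt(...)` to an LLM API; the grounding step, not the model call, is what the RAG row above refers to.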


3 Intermediate (Part B): Understand the Inner Workings

Core Topics & Resources

| Topic | Primary Resource | Supplement |
| --- | --- | --- |
| Transformer LLMs | [How Transformer LLMs Work (DeepLearning.AI)](https://www.deeplearning.ai/short-courses/how-transformer-llms-work/) | [Stanford CS-330 PyTorch notebook](https://github.com/DrSquare/AI_Coding/blob/main/CS330_PyTorch_Tutorial.ipynb) |
| Attention Mechanisms | [Attention in Transformers: Concepts and Code in PyTorch (DeepLearning.AI)](https://www.deeplearning.ai/short-courses/attention-in-transformers-concepts-and-code-in-pytorch/) | |
| NLP | [Stanford CS-224N](https://web.stanford.edu/class/cs224n/) and [CS-224U](https://web.stanford.edu/class/cs224u/) | [NLP Specialization (DeepLearning.AI)](https://www.deeplearning.ai/courses/natural-language-processing-specialization/) |
| Multi-Task & Meta-Learning | [Stanford CS-330](https://cs330.stanford.edu/) | |
| LLM Ops & Practice | [LLM Engineer's Handbook (Packt)](https://github.com/PacktPublishing/LLM-Engineers-Handbook) | [Hands-On Large Language Models](https://github.com/HandsOnLLM/Hands-On-Large-Language-Models); [AI Engineering (O'Reilly)](https://www.oreilly.com/library/view/ai-engineering/9781098166298/) |
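As a warm-up for the attention material above, note that the core computation fits in a few lines; here is a minimal single-head scaled dot-product attention sketch in PyTorch (the dimensions are arbitrary example values):

```python
# Minimal single-head scaled dot-product attention -- the building block the
# "Attention in Transformers" course unpacks. Shapes: (batch, seq, d_model).
import math
import torch

def scaled_dot_product_attention(q, k, v):
    d_k = q.size(-1)
    # Similarity of every query to every key, scaled to keep softmax stable.
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)  # (batch, seq, seq)
    weights = torch.softmax(scores, dim=-1)            # each row sums to 1
    return weights @ v                                 # weighted mix of values

batch, seq, d_model = 2, 5, 16
x = torch.randn(batch, seq, d_model)
out = scaled_dot_product_attention(x, x, x)  # self-attention: q = k = v = x
print(out.shape)  # torch.Size([2, 5, 16])
```

Multi-head attention, masking, and learned Q/K/V projections are layered on top of exactly this function.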

Additional deep-dive videos (free):


4 Advanced: Specialize & Scale

Choose tracks that match your goals:

  1. Model Optimization & LLM Ops – quantization, LLM compression, parallelism (e.g., multi-token prediction), inference engines (vLLM, SGLang); see the quantization sketch after this list.
  2. Pre-Training – data curation, tokenizer design, multi-modal training.
  3. Post-Training – SFT, RLHF, DPO, reward modeling.
  4. Serving & Inference – low-latency architectures, model routing, GPU vs. CPU routing.
  5. Alternative Architectures – Mixture-of-Experts, RWKV, state-space models.
  6. AI Agents – autonomous planning, tool use, multi-agent orchestration (LangGraph, Semantic Kernel, AutoGen), MCP, A2A.
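As a taste of the Model Optimization track, here is a minimal sketch of symmetric int8 post-training quantization of a single weight tensor. Production toolchains add per-channel scales, calibration data, and fused low-precision kernels, but the core idea is just this rescale-and-round:

```python
# Toy symmetric int8 quantization of a weight tensor -- a simplified version
# of what production quantization toolchains do.
import torch

def quantize_int8(w: torch.Tensor):
    scale = w.abs().max() / 127.0  # map the largest |weight| to 127
    q = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.float() * scale

w = torch.randn(4, 8)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print("max abs error:", (w - w_hat).abs().max().item())  # small, not zero
```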

Structured Programs


5 (Optional) Traditional ML, DL & MLOps

If you need classic machine-learning and deep-learning depth:

Python libraries and packages, offering broad-but-shallow coverage:


Final Thoughts

Begin with application, sustain momentum with hands-on wins, then spiral inward to theory and systems. This reverse-learning path balances practical impact with foundational depth—essential for modern AI engineers.
