
Commit 4da923f

Yorko (Yury Kashnitsky) and Yury Kashnitsky authored
add ToC to each lecture (#770)
Co-authored-by: Yury Kashnitsky <[email protected]>
1 parent 9635501 · commit 4da923f

18 files changed, +892 −969 lines changed
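Every diff below applies the same change: the hand-maintained "Article outline" list at the top of each lecture is replaced with the MyST `{contents}` directive, which Jupyter Book expands into an auto-generated table of contents at build time, so outlines can no longer drift out of sync with the headings. A minimal sketch of the resulting pattern — the `:local:` and `:depth:` options are standard docutils settings shown here for illustration only; the commit itself uses the bare directive:

````markdown
## Article outline

```{contents}
:local:
:depth: 2
```
````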

mlcourse_ai_jupyter_book/book/topic01/topic01_pandas_data_analysis.md

Lines changed: 2 additions & 3 deletions

@@ -29,9 +29,8 @@ _Source: Getty Images_
 
 ## Article outline
 
-1. [Demonstration of the main Pandas methods](#1-demonstration-of-the-main-pandas-methods)
-2. [First attempt at predicting telecom churn](#2-first-attempt-at-predicting-telecom-churn)
-3. [Useful resources](#3-useful-resources)
+```{contents}
+```
 
 ## 1. Demonstration of the main Pandas methods

mlcourse_ai_jupyter_book/book/topic02/topic02_additional_seaborn_matplotlib_plotly.md

Lines changed: 2 additions & 5 deletions

@@ -27,11 +27,8 @@ Author: [Egor Polusmak](https://www.linkedin.com/in/egor-polusmak/). Translated
 
 ## Article outline
 
-1. [Dataset](1-dataset)
-2. [DataFrame.plot()](2-dataframe-plot)
-3. [Seaborn](3-seaborn)
-4. [Plotly](4-plotly)
-5. [Useful resources](5-useful-resources)
+```{contents}
+```
 
 ## 1. Dataset

mlcourse_ai_jupyter_book/book/topic02/topic02_visual_data_analysis.md

Lines changed: 2 additions & 13 deletions

@@ -33,19 +33,8 @@ In this article, we are going to get hands-on experience with visual exploration
 
 ## Article outline
 
-1. [Dataset](1-dataset)
-2. [Univariate visualization](2-univariate-visualization)
-    * 2.1 [Quantitative features](21-quantitative-features)
-    * 2.2 [Categorical and binary features](22-categorical-and-binary-features)
-3. [Multivariate visualization](3-multivariate-visualization)
-    * 3.1 [Quantitative vs. Quantitative](31-quantitative-vs-quantitative)
-    * 3.2 [Quantitative vs. Categorical](32-quantitative-vs-categorical)
-    * 3.3 [Categorical vs. Categorical](33-categorical-vs-categorical)
-4. [Whole dataset visualizations](4-whole-dataset-visualizations)
-    * 4.1 [Naive approach](41-a-naive-approach)
-    * 4.2 [Dimensionality reduction](42-dimensionality-reduction)
-    * 4.3 [t-SNE](43-t-SNE)
-5. [Useful resources](5-useful-resources)
+```{contents}
+```
 
 ## 1. Dataset

mlcourse_ai_jupyter_book/book/topic03/topic03_decision_trees_kNN.md

Lines changed: 2 additions & 7 deletions

@@ -22,13 +22,8 @@ Author: [Yury Kashnitsky](https://yorko.github.io). Translated and edited by [Ch
 
 ## Article outline
 
-1. [Introduction](introduction)
-2. [Decision Tree](decision-tree)
-3. [Nearest Neighbors Method](nearest-neighbors-nethod)
-4. [Choosing Model Parameters and Cross-Validation](choosing-model-parameters-and-cross-validation)
-5. [Application Examples and Complex Cases](application-examples-and-complex-cases)
-6. [Pros and Cons of Decision Trees and the Nearest Neighbors Method](pros-and-cons-of-decision-trees-and-the-nearest-neighbors-method)
-7. [Useful resources](useful-resources)
+```{contents}
+```
 
 ## 1. Introduction

mlcourse_ai_jupyter_book/book/topic04/topic4_linear_models_part1_mse_likelihood_bias_variance.md

Lines changed: 3 additions & 5 deletions

@@ -24,11 +24,9 @@ Author: [Pavel Nesterov](http://pavelnesterov.info/). Translated and edited by [
 
 
 ## Article outline
-1. [Introduction](introduction)
-2. [Maximum Likelihood Estimation](maximum-likelihood-estimation)
-3. [Bias-Variance Decomposition](bias-variance-decomposition)
-4. [Regularization of Linear Regression](regularization-of-linear-regression)
-5. [Useful resources](useful-resources)
+
+```{contents}
+```
 
 
 ## 1. Introduction

mlcourse_ai_jupyter_book/book/topic04/topic4_linear_models_part2_logit_likelihood_learning.md

Lines changed: 3 additions & 5 deletions

@@ -25,11 +25,9 @@ kernelspec:
 Author: [Yury Kashnitsky](https://yorko.github.io). Translated and edited by [Christina Butsko](https://www.linkedin.com/in/christinabutsko/), [Nerses Bagiyan](https://www.linkedin.com/in/nersesbagiyan/), [Yulia Klimushina](https://www.linkedin.com/in/yuliya-klimushina-7168a9139), and [Yuanyuan Pao](https://www.linkedin.com/in/yuanyuanpao/). This material is subject to the terms and conditions of the [Creative Commons CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license. Free use is permitted for any non-commercial purpose.
 
 ## Article outline
-1. [Linear Classifier](linear-classifier)
-2. [Logistic Regression as a Linear Classifier](logistic-regression-as-a-linear-classifier)
-3. [Maximum Likelihood Estimation and Logistic Regression](maximum-likelihood-estimation-and-logistic-regression)
-4. [$L_2$-Regularization of Logistic Loss](l-2-regularization-of-logistic-loss)
-5. [Useful resources](useful-resources)
+
+```{contents}
+```
 
 ## 1. Linear Classifier

mlcourse_ai_jupyter_book/book/topic04/topic4_linear_models_part3_regul_example.md

Lines changed: 5 additions & 0 deletions

@@ -24,6 +24,11 @@ kernelspec:
 
 Author: [Yury Kashnitsky](https://yorko.github.io). Translated and edited by [Christina Butsko](https://www.linkedin.com/in/christinabutsko/), [Nerses Bagiyan](https://www.linkedin.com/in/nersesbagiyan/), [Yulia Klimushina](https://www.linkedin.com/in/yuliya-klimushina-7168a9139), and [Yuanyuan Pao](https://www.linkedin.com/in/yuanyuanpao/). This material is subject to the terms and conditions of the [Creative Commons CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license. Free use is permitted for any non-commercial purpose.
 
+## Article outline
+
+```{contents}
+```
+
 In the first article, we demonstrated how polynomial features allow linear models to build nonlinear separating surfaces. Let's now show this visually.
 
 Let's see how regularization affects the quality of classification on a dataset on microchip testing from Andrew Ng's course on machine learning. We will use logistic regression with polynomial features and vary the regularization parameter $C$. First, we will see how regularization affects the separating border of the classifier and intuitively recognize under- and overfitting. Then, we will choose the regularization parameter to be numerically close to the optimal value via (`cross-validation`) and (`GridSearch`).
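The excerpted intro above describes a concrete workflow: logistic regression on polynomial features, with the regularization parameter $C$ tuned by grid-searched cross-validation. Below is a minimal sketch of that workflow in scikit-learn, on stand-in data rather than the lecture's microchip dataset (illustrative only, not code from this commit):

```python
# Hypothetical sketch: polynomial features + logistic regression, with C
# (inverse regularization strength) picked by cross-validated grid search.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

X, y = make_moons(n_samples=200, noise=0.3, random_state=17)  # stand-in data

pipe = make_pipeline(PolynomialFeatures(degree=7),
                     LogisticRegression(max_iter=10_000))
grid = GridSearchCV(pipe,
                    param_grid={"logisticregression__C": np.logspace(-2, 3, 10)},
                    cv=5, scoring="accuracy")
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```

Small `C` means strong regularization and an overly smooth boundary (underfitting); large `C` means weak regularization and a contorted boundary (overfitting).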

mlcourse_ai_jupyter_book/book/topic04/topic4_linear_models_part4_good_bad_logit_movie_reviews_XOR.md

Lines changed: 3 additions & 4 deletions

@@ -25,10 +25,9 @@ Author: [Yury Kashnitsky](https://yorko.github.io). Translated and edited by [Ch
 
 
 ## Article outline
-1. [Analysis of IMDB movie reviews](analysis-of-imdb-movie-reviews)
-2. [A Simple Word Count](a-simple-word-count)
-3. [The XOR Problem](the-xor-problem)
-4. [Useful resources](useful-resources)
+
+```{contents}
+```
 
 ## 1. Analysis of IMDB movie reviews

mlcourse_ai_jupyter_book/book/topic04/topic4_linear_models_part5_valid_learning_curves.md

Lines changed: 6 additions & 0 deletions

@@ -24,6 +24,12 @@ kernelspec:
 Author: [Yury Kashnitsky](https://yorko.github.io). Translated and edited by [Christina Butsko](https://www.linkedin.com/in/christinabutsko/), [Nerses Bagiyan](https://www.linkedin.com/in/nersesbagiyan/), [Yulia Klimushina](https://www.linkedin.com/in/yuliya-klimushina-7168a9139), and [Yuanyuan Pao](https://www.linkedin.com/in/yuanyuanpao/). This material is subject to the terms and conditions of the [Creative Commons CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license. Free use is permitted for any non-commercial purpose.
 
 
+## Article outline
+
+```{contents}
+```
+
+
 ```{code-cell} ipython3
 import warnings
 import numpy as np

mlcourse_ai_jupyter_book/book/topic05/topic5_part1_bagging.md

Lines changed: 2 additions & 5 deletions

@@ -23,11 +23,8 @@ Authors: [Vitaliy Radchenko](https://www.linkedin.com/in/vitaliyradchenk0/), and
 
 ## Article outline
 
-1. [Ensembles](ensembles)
-2. [Bootstrapping](bootstrapping)
-3. [Bagging](bagging)
-4. [Out-of-bag error](out-of-bag-error)
-5. [Useful resources](useful-resources)
+```{contents}
+```
 
 $\DeclareMathOperator{\Var}{Var}$
 $\DeclareMathOperator{\Cov}{Cov}$

mlcourse_ai_jupyter_book/book/topic05/topic5_part2_random_forest.md

Lines changed: 2 additions & 10 deletions

@@ -24,16 +24,8 @@ Authors: [Vitaliy Radchenko](https://www.linkedin.com/in/vitaliyradchenk0/), and
 
 ## Article outline
 
-1. [Algorithm](algorithm)
-2. [Comparison with Decision Trees and Bagging](comparison-with-decision-trees-and-bagging)
-3. [Parameters](parameters)
-4. [Variance and Decorrelation](variance-and-decorrelation)
-5. [Bias](bias)
-6. [Extremely Randomized Trees](extremely-randomized-trees)
-7. [Similarities between Random Forest and k-Nearest Neighbors](similarities-between-random-forest-and-k-nearest-neighbors)
-8. [Transformation of a dataset into a high-dimensional representation](transformation-of-a-dataset-into-a-high-dimensional-representation)
-9. [Pros and cons of random forests](pros-and-cons-of-random-forests)
-10. [Useful resources](useful-resources)
+```{contents}
+```
 
 $\DeclareMathOperator{\Var}{Var}$
 $\DeclareMathOperator{\Cov}{Cov}$

mlcourse_ai_jupyter_book/book/topic05/topic5_part3_feature_importance.md

Lines changed: 2 additions & 5 deletions

@@ -23,11 +23,8 @@ Authors: [Vitaliy Radchenko](https://www.linkedin.com/in/vitaliyradchenk0/), [Yu
 
 ## Article outline
 
-1. [Intuition](intuition)
-2. [Illustrating permutation importance](illustrating-permutation-importance)
-3. [Sklearn Random Forest Feature Importance](sklearn-random-forest-feature-importance)
-4. [Practical example](practical-example)
-5. [Useful resources](useful-resources)
+```{contents}
+```
 
 It's quite often that you want to make out the exact reasons of the algorithm outputting a particular answer. Or at the very least to find out which input features contributed most to the result. With Random Forest, you can obtain such information quite easily.
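As background for the excerpt above ("With Random Forest, you can obtain such information quite easily"): scikit-learn exposes impurity-based importances on a fitted forest via its `feature_importances_` attribute. A minimal illustrative sketch on a stock dataset, not code from the lecture:

```python
# Hypothetical sketch: rank features by a Random Forest's impurity-based
# importances (one non-negative score per feature, summing to 1).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
forest = RandomForestClassifier(n_estimators=100, random_state=17)
forest.fit(data.data, data.target)

top5 = sorted(zip(data.feature_names, forest.feature_importances_),
              key=lambda pair: -pair[1])[:5]
for name, score in top5:
    print(f"{name}: {score:.3f}")
```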

mlcourse_ai_jupyter_book/book/topic06/topic6_feature_engineering_feature_selection.md

Lines changed: 6 additions & 0 deletions

@@ -20,6 +20,12 @@ kernelspec:
 
 Author: [Arseny Kravchenko](https://arseny.info/pages/about_me.html#about_me). Translated and edited by [Christina Butsko](https://www.linkedin.com/in/christinabutsko/), [Yury Kashnitsky](https://yorko.github.io/), [Egor Polusmak](https://www.linkedin.com/in/egor-polusmak/), [Anastasia Manokhina](https://www.linkedin.com/in/anastasiiamanokhina/), [Anna Larionova](https://www.linkedin.com/in/anna-larionova-74434689/), [Evgeny Sushko](https://www.linkedin.com/in/evgenysushko/) and [Yuanyuan Pao](https://www.linkedin.com/in/yuanyuanpao/). This material is subject to the terms and conditions of the [Creative Commons CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license. Free use is permitted for any non-commercial purpose.
 
+
+## Article outline
+
+```{contents}
+```
+
 In this course, we have already seen several key machine learning algorithms. However, before moving on to the more fancy ones, we’d like to take a small detour and talk about data preparation. The well-known concept of “garbage in — garbage out” applies 100% to any task in machine learning. Any experienced professional can recall numerous times when a simple model trained on high-quality data was proven to be better than a complicated multi-model ensemble built on data that wasn’t clean.
 
 To start, I wanted to review three similar but different tasks:

mlcourse_ai_jupyter_book/book/topic07/topic7_pca_clustering.md

Lines changed: 3 additions & 11 deletions

@@ -28,17 +28,9 @@ In this lesson, we will work with unsupervised learning methods such as Principa
 
 
 ## Article outline
-1. [Introduction](introduction)
-2. [Principal Component Analysis (PCA)](principal-component-analysis-pca)
-    - [Intuition, theories, and application issues](intuition-theories-and-application-issues)
-    - [Examples](examples)
-3. [Clustering](clustering)
-    - [K-means](k-means)
-    - [Affinity Propagation](affinity-propagation)
-    - [Spectral clustering](spectral-clustering)
-    - [Agglomerative clustering](agglomerative-clustering)
-    - [Accuracy metrics](accuracy-metrics)
-4. [Useful links](useful-links)
+
+```{contents}
+```
 
 ## 1. Introduction
