diff --git a/LearningModels.ipynb b/LearningModels.ipynb
index 7c76d48..5824e3a 100644
--- a/LearningModels.ipynb
+++ b/LearningModels.ipynb
@@ -588,7 +588,7 @@
     "\n",
     "We sum this measure up over all our data points to create what's known as the **error functional** or **risk functional** (also just called **error**, **cost**, or **risk**) of using line $h_1(x)$ to fit our points $y_i \\in \\cal{D}$ (this notation is read as \"$y_i$ in $\\cal{D}$\"):\n",
     "\n",
-    "$$ R_{\\cal{D}}(h_i(x)) = \\frac{1}{N} \\sum_{y_i \\in \\cal{D}} (y_i - h_1(x_i))^2 $$\n",
+    "$$ R_{\\cal{D}}(h_1(x)) = \\frac{1}{N} \\sum_{y_i \\in \\cal{D}} (y_i - h_1(x_i))^2 $$\n",
     "\n",
     "where $N$ is the number of points in $\\cal{D}$.\n",
     "\n",
@@ -2022,7 +2022,7 @@
     "\n",
     ">The predictor interface extends the notion of an estimator by adding a predict method that takes an array X_test and produces predictions for X_test, based on the learned parameters of the estimator.\n",
     "\n",
-    "So, for increasing polynomial degree, and thus feature dimension `d`, we fit a `LinearRegression` model on the traing set. We then use scikit-learn again to calculate the error or risk. We calculate the `mean_squared_error` between the model's predictions and the data, BOTH on the training set and test set. We plot this error as a function of the defree of the polynomial `d`."
+    "So, for increasing polynomial degree, and thus feature dimension `d`, we fit a `LinearRegression` model on the training set. We then use scikit-learn again to calculate the error or risk: the `mean_squared_error` between the model's predictions and the data, on both the training set and the test set. We plot this error as a function of the degree of the polynomial `d`."
    ]
   },
   {
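
For the cell touched by the first hunk, a quick numeric sanity check of the risk functional may help reviewers. This is a minimal sketch with a hypothetical candidate line $h_1(x) = 2x + 1$ and three made-up data points; none of these values come from the notebook:

```python
import numpy as np

# Three made-up data points (placeholders, not the notebook's dataset)
x = np.array([0.0, 1.0, 2.0])
y = np.array([1.5, 2.5, 5.5])

# A hypothetical candidate line h_1(x) = 2x + 1
def h1(x):
    return 2 * x + 1

# R_D(h_1) = (1/N) * sum over y_i in D of (y_i - h_1(x_i))^2
risk = np.mean((y - h1(x)) ** 2)
print(risk)  # (0.5^2 + 0.5^2 + 0.5^2) / 3 = 0.25
```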
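
The second hunk's cell describes the degree sweep. The sketch below shows that loop under assumptions: the data arrays are synthetic stand-ins, and the notebook may build its polynomial features differently (e.g. by hand rather than with `PolynomialFeatures`):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; the notebook's actual dataset differs
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 60)
y = np.sin(2 * x) + rng.normal(0, 0.2, 60)

x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.3, random_state=0)

degrees = range(1, 16)
train_err, test_err = [], []
for d in degrees:
    # Expand x into a d-dimensional polynomial feature space, fitting
    # the transformer on the training set only
    poly = PolynomialFeatures(degree=d, include_bias=False)
    Xtr = poly.fit_transform(x_train.reshape(-1, 1))
    Xte = poly.transform(x_test.reshape(-1, 1))
    est = LinearRegression().fit(Xtr, y_train)
    # mean_squared_error is exactly the risk functional R_D from the
    # first hunk, here evaluated on both the training and test sets
    train_err.append(mean_squared_error(y_train, est.predict(Xtr)))
    test_err.append(mean_squared_error(y_test, est.predict(Xte)))

plt.plot(degrees, train_err, label="train risk")
plt.plot(degrees, test_err, label="test risk")
plt.xlabel("polynomial degree d")
plt.ylabel("MSE")
plt.legend()
plt.show()
```

Computing the error on both splits is what makes the under/overfitting picture visible: training risk keeps falling as `d` grows, while test risk eventually turns back up.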