Cost Function vs Error Function: What's the Difference?
Both cost functions and error functions help evaluate model performance, but they differ in scope and usage.
1️⃣ Error Function
🔹 Purpose:
- Measures the error for a single data point (individual sample).
- Used to compute how far the model’s prediction is from the actual value.
🔹 Example:
For a single sample in regression, the squared error is:

$$\text{Error} = (y_{\text{true}} - y_{\text{pred}})^2$$
🔹 Example (Error Function in Python):
```python
y_true = 10
y_pred = 8

error = (y_true - y_pred) ** 2  # Squared error
print(f"Error: {error}")  # Output: 4
```
2️⃣ Cost Function
🔹 Purpose:
- Aggregates the errors over the entire dataset (averages or sums them).
- Used to optimize the model by minimizing the overall error.
🔹 Example:
For a dataset with $n$ samples, the MSE cost function is:

$$\text{Cost} = \frac{1}{n} \sum_{i=1}^{n} (y_{\text{true},i} - y_{\text{pred},i})^2$$
🔹 Example (Cost Function in Python using NumPy):
```python
import numpy as np

y_true = np.array([10, 12, 14])  # Actual values
y_pred = np.array([8, 11, 13])   # Predicted values

cost = np.mean((y_true - y_pred) ** 2)  # MSE
print(f"Cost: {cost}")  # Output: 2.0
```
🔑 Key Differences
| Feature | Error Function | Cost Function |
|---|---|---|
| Scope | Single data point | Whole dataset |
| Usage | Measures error for one instance | Computes overall model performance |
| Optimization? | No (not directly used for training) | Yes (minimized using optimizers like gradient descent) |
| Example | Squared error for one point | Mean Squared Error (MSE), Cross-Entropy Loss |
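The table mentions Cross-Entropy Loss as another cost function. As a sketch with made-up labels and predicted probabilities, the same pattern applies: compute a per-sample error, then average it into a cost:

```python
import numpy as np

# Hypothetical binary labels and predicted probabilities (illustrative values)
y_true = np.array([1, 0, 1])
y_pred = np.array([0.9, 0.2, 0.8])

# Per-sample error: binary cross-entropy for each prediction
errors = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# Cost: average the per-sample errors over the whole dataset
cost = np.mean(errors)
print(f"Cross-entropy cost: {cost:.4f}")
```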
🛠️ When to Use Each?
- Error function is used to analyze individual errors.
- Cost function is used for training and optimization.
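To make the training-and-optimization role concrete, here is a minimal sketch of gradient descent minimizing the MSE cost for a one-parameter linear model (the toy data, learning rate, and iteration count are illustrative choices, not part of any specific library):

```python
import numpy as np

# Toy data generated from y = 2x, so the optimal weight is w = 2
x = np.array([1.0, 2.0, 3.0])
y_true = np.array([2.0, 4.0, 6.0])

w = 0.0    # initial weight
lr = 0.05  # learning rate
for _ in range(200):
    y_pred = w * x
    # Gradient of the MSE cost with respect to w:
    # d/dw mean((y - w*x)^2) = -2 * mean(x * (y - w*x))
    grad = -2 * np.mean(x * (y_true - y_pred))
    w -= lr * grad  # step downhill on the cost surface

print(f"Learned weight: {w:.4f}")  # converges toward 2.0
```

Note that the quantity being minimized is the cost (the average over all samples), not any single sample's error.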
🚀 Final Thought
✅ The cost function aggregates the per-sample error function over the entire dataset (typically as an average). In ML, we minimize the cost function to improve model performance.