- Hands-On Artificial Intelligence for IoT
- Amita Kapoor
Logistic regression for classification
In the previous section, we learned how to predict continuous values. There's another common task in ML: classification. Separating dogs from cats and spam from not spam, or even identifying the different objects in a room or scene—all of these are classification tasks.
Logistic regression is a classic classification technique. It provides the probability of an event taking place, given an input value. The events are represented as categorical dependent variables, and the probability of a particular dependent variable being 1 is given using the logit function:

Ypred = σ(WᵀX + b) = 1 / (1 + e^−(WᵀX + b))
Before going into the details of how we can use logistic regression for classification, let's examine the logit function (also called the sigmoid function because of its S-shaped curve). The following diagram shows how the logit function and its derivative vary with the input X, with the sigmoidal function in blue and its derivative in orange:

A few important things to note from this diagram are the following:
- The value of the sigmoid (and hence Ypred) lies in the open interval (0, 1)
- The derivative of the sigmoid is highest when WᵀX + b = 0, and this highest value is just 0.25 (at the same point, the sigmoid itself has the value 0.5)
- The slope with which the sigmoid varies depends on the weights, and the position of the derivative's peak depends on the bias
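These three properties can be checked numerically. The following is a minimal sketch (not from the book's repository) that evaluates the sigmoid and its derivative over a range of inputs:

```python
import numpy as np

def sigmoid(z):
    """Logit (sigmoid) function: squashes any real z into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_derivative(z):
    """Derivative of the sigmoid: sigma(z) * (1 - sigma(z)); peaks at z = 0."""
    s = sigmoid(z)
    return s * (1.0 - s)

# Treat z as the pre-activation W.T @ X + b
z = np.linspace(-10, 10, 201)
y = sigmoid(z)
dy = sigmoid_derivative(z)

print(y.min(), y.max())      # stays strictly inside (0, 1)
print(dy.max())              # 0.25, the largest value the derivative reaches
print(z[np.argmax(dy)])     # 0.0: the peak sits where W.T @ X + b = 0
print(sigmoid(0.0))          # 0.5: the sigmoid's value at that same point
```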
I would suggest you play around with the Sigmoid_function.ipynb notebook, available in this book's GitHub repository, to get a feel for how the sigmoid function changes as the weights and bias change.
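As a quick illustration of the weight and bias effects described above (a standalone sketch, not the notebook itself), consider the one-dimensional sigmoid σ(w·x + b). Its slope scales with the weight w, and its derivative peaks at x = −b/w, which is set by the bias:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dsig_dx(x, w, b):
    # d/dx sigmoid(w*x + b) = w * s * (1 - s): the slope scales with w
    s = sigmoid(w * x + b)
    return w * s * (1.0 - s)

x = np.linspace(-10, 10, 2001)
for w, b in [(1.0, 0.0), (4.0, 0.0), (1.0, 3.0)]:
    d = dsig_dx(x, w, b)
    peak_x = x[np.argmax(d)]
    # Peak sits at x = -b/w; maximum slope equals w/4
    print(f"w={w}, b={b}: max slope={d.max():.3f} at x={peak_x:.2f}")
```

Increasing the weight makes the S-curve steeper (larger maximum slope), while changing the bias only shifts the curve left or right without altering its shape.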