Electrical power output prediction using regression
Now that you understand the basics of linear regression, let's use it to predict the electrical power output of a combined cycle power plant. We described this dataset in Chapter 1, Principles and Foundations of AI and IoT; here, we'll use TensorFlow and its automatic differentiation to find the solution. The dataset can be downloaded from the UCI ML archive (http://archive.ics.uci.edu/ml/datasets/combined+cycle+power+plant). The complete code is available on GitHub (https://github.com/PacktPublishing/Hands-On-Artificial-Intelligence-for-IoT) under the filename ElectricalPowerOutputPredictionUsingRegression.ipynb.
Let's understand the execution of code in the following steps:
- We import tensorflow, numpy, pandas, matplotlib, and some useful functions of scikit-learn:
# Import the modules
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
%matplotlib inline
- The data file is loaded and analyzed:
filename = 'Folds5x2_pp.xlsx' # download the data file from UCI ML repository
df = pd.read_excel(filename, sheet_name='Sheet1')
df.describe()
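Before moving on, it can help to confirm the shape of the DataFrame and check for missing values. A quick sanity check (the UCI page lists 9,568 observations across five columns):
print(df.shape)           # expect (9568, 5): four features plus the target
print(df.isnull().sum())  # verify there are no missing values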
- Since the data isn't normalized, we first need to scale it using sklearn's MinMaxScaler, which maps each value x to (x - x_min)/(x_max - x_min) so that every feature (and the target) lies in [0, 1]. Here, AT, V, AP, and RH (ambient temperature, exhaust vacuum, ambient pressure, and relative humidity) are the features, and PE (net electrical energy output) is the target:
X, Y = df[['AT', 'V','AP','RH']], df['PE']
scaler = MinMaxScaler()
X_new = scaler.fit_transform(X)
target_scaler = MinMaxScaler()
Y_new = target_scaler.fit_transform(Y.values.reshape(-1,1))
X_train, X_test, Y_train, Y_test = \
        train_test_split(X_new, Y_new, test_size=0.4, random_state=333)
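As a quick check that the scaling worked, each column of X_new should now span the [0, 1] interval; a minimal sketch:
print(X_new.min(axis=0))  # expect an array of zeros
print(X_new.max(axis=0))  # expect an array of ones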
- Now, we define a class, LinearRegressor; this is the class where all of the real work happens. The class initialization defines the computational graph and initializes all of the Variables (weights and bias). The class has four methods: function models y = F(X, W) = XW + b; fit runs gradient descent, using TensorFlow's automatic differentiation to update the weights and bias; predict returns the output y for a given input X; and get_weights returns the learned weights and bias:
class LinearRegressor:
    def __init__(self, d, lr=0.001):
        # Placeholders for input-output training data
        self.X = tf.placeholder(tf.float32,
                shape=[None, d], name='input')
        self.Y = tf.placeholder(tf.float32, name='output')
        # Variables for weight and bias
        self.b = tf.Variable(0.0, dtype=tf.float32)
        self.W = tf.Variable(tf.random_normal([d, 1]),
                dtype=tf.float32)
        # The linear regression model
        self.F = self.function(self.X)
        # Loss function: mean squared error
        self.loss = tf.reduce_mean(tf.square(self.Y
                - self.F, name='LSE'))
        # Gradient descent with learning
        # rate lr to minimize the loss
        optimizer = tf.train.GradientDescentOptimizer(lr)
        self.optimize = optimizer.minimize(self.loss)
        # Initialize the Variables and start the session
        init_op = tf.global_variables_initializer()
        self.sess = tf.Session()
        self.sess.run(init_op)

    def function(self, X):
        return tf.matmul(X, self.W) + self.b

    def fit(self, X, Y, epochs=500):
        total = []
        for i in range(epochs):
            _, l = self.sess.run([self.optimize, self.loss],
                    feed_dict={self.X: X, self.Y: Y})
            total.append(l)
            if i % 100 == 0:
                print('Epoch {0}/{1}: Loss {2}'.format(i, epochs, l))
        return total

    def predict(self, X):
        # Run the existing graph node rather than building a new one
        return self.sess.run(self.F, feed_dict={self.X: X})

    def get_weights(self):
        return self.sess.run([self.W, self.b])
- We use the previous class to create our linear regression model and train it:
N, d = X_train.shape
model = LinearRegressor(d)
loss = model.fit(X_train, Y_train, 20000) #Epochs = 20000
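Once training finishes, the learned parameters can be inspected with the get_weights method defined above, for example:
W, b = model.get_weights()
print('Weights:', W.ravel())
print('Bias:', b)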
Let's see the performance of our trained linear regressor. A plot of the mean squared error against epochs shows that the network steadily reduced the loss toward a minimum:
[Figure: training loss (mean squared error) versus epochs]
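The plot can be reproduced from the list of per-epoch losses returned by fit; a minimal sketch using matplotlib:
plt.plot(loss)
plt.xlabel('Epochs')
plt.ylabel('Mean Squared Error')
plt.show()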
On the test dataset, we achieved an R2 value of 0.768 and a mean squared error of 0.011.
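These numbers can be reproduced with the scikit-learn metrics imported earlier; a sketch, assuming the variable names from the preceding steps. Note that both values are in the scaled [0, 1] units; target_scaler.inverse_transform can map predictions back to the original megawatt scale:
Y_pred = model.predict(X_test)                  # predictions in [0, 1]
print('R2: {:.3f}'.format(r2_score(Y_test, Y_pred)))
print('MSE: {:.3f}'.format(mean_squared_error(Y_test, Y_pred)))
Y_pred_mw = target_scaler.inverse_transform(Y_pred)  # back to MW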