FFNN Keras implementation

To implement our network in Keras, we will again use the Sequential model, but this time with two input neurons, two hidden units, and, of course, one output unit, as we are doing a binary prediction:

  1. Let's import all of the necessary parts to create our network:
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD
from sklearn.metrics import mean_squared_error
import os
from keras.callbacks import ModelCheckpoint, Callback, EarlyStopping, TensorBoard
  1. Now, we need to define the first hidden layer of the network. To accomplish this, it's sufficient to specify the size of the layer's input (two in the XOR case) along with the number of neurons in the hidden layer, as follows:
model = Sequential()
model.add(Dense(2, input_dim=2))
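As a quick sanity check (not part of the recipe itself), we can count the trainable parameters this layer introduces: a `Dense(2, input_dim=2)` layer has a 2×2 weight matrix plus one bias per unit.

```python
# Parameter count for Dense(2, input_dim=2): a 2x2 weight matrix plus 2 biases.
input_dim, units = 2, 2
n_params = input_dim * units + units
print(n_params)  # 6 trainable parameters in the hidden layer
```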
  1. As the activation function, we choose tanh:
model.add(Activation('tanh'))
  1. We then add another fully connected layer with one neuron, which, with a sigmoid activation function, will give us the output:
model.add(Dense(1))
model.add(Activation('sigmoid'))

  1. We again use SGD as the optimization method to train our neural network:
sgd = SGD(lr=0.1)
  1. We then compile our network, specifying that we want to use MSE as the loss function:
model.compile(loss='mse', optimizer=sgd)
  1. As the last step, we train our network; this time we use a batch size of 1 and run it for 2 epochs:
model.fit(train_x[['x1', 'x2']], train_y, batch_size=1, epochs=2)
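It is worth noting how batch size and epochs interact: with `batch_size=1`, every epoch performs one weight update per training sample. As an illustrative sketch (the figure of 4 samples is an assumption, standing in for the four rows of the XOR truth table, not the actual size of `train_x`):

```python
# With batch_size=1, each epoch performs one SGD update per training sample.
n_samples, batch_size, epochs = 4, 1, 2  # 4 samples assumed for illustration
updates = (n_samples // batch_size) * epochs
print(updates)  # total number of weight updates during training
```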
  1. As usual, we measure the MSE on the test set as follows:
pred = model.predict_proba(test_x)

print('MSE: ', mean_squared_error(test_y, pred))
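To make the mechanics of this network concrete, here is a minimal NumPy sketch of the same 2-2-1 architecture (tanh hidden layer, sigmoid output) trained with plain SGD on the MSE loss. This is an illustrative re-implementation, not the Keras code above: the XOR data, initialization, and epoch count are assumptions chosen so the example is self-contained.

```python
import numpy as np

# XOR truth table (assumed training data for this sketch).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights for a 2-2-1 network, mirroring Dense(2, input_dim=2) and Dense(1).
W1 = rng.normal(0, 1, (2, 2)); b1 = np.zeros(2)
W2 = rng.normal(0, 1, (2, 1)); b2 = np.zeros(1)
lr = 0.1  # same learning rate as SGD(lr=0.1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                 # hidden layer: tanh activation
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))   # output layer: sigmoid
    return h, out

losses = []
for _ in range(2000):
    h, out = forward(X)
    err = out - y                            # dMSE/dout (up to a constant)
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation through sigmoid and tanh.
    d_out = err * out * (1 - out)
    gW2 = h.T @ d_out; gb2 = d_out.sum(0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ d_h; gb1 = d_h.sum(0)
    # Vanilla SGD updates.
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print('final MSE:', losses[-1])
```

Running this, the loss should decrease from its initial value as the full-batch gradient steps proceed; this mirrors what Keras does internally, minus the per-sample shuffling of `batch_size=1` training.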