In this lecture, we review the fundamentals of neural networks and implement some simple examples to illustrate how they work. We will use the PyTorch framework to code the networks (see https://pytorch.org/get-started/locally/ for installation instructions). Since this lecture focuses on basic perceptron and sigmoid units, no GPU is needed to run the examples.
Before we start, let's load the libraries needed to run the code:
import numpy as np
# we import the Pytorch framework
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import matplotlib.pyplot as plt
import data.DataProvider as dp
Consider a training set $T=\{(x_i, y_i = f(x_i)) \mid i = 1, \dots, m,\ x_i \in \Omega,\ y_i \in Y \}$ where:
$x_i$ is a data sample (e.g., an image or statistics obtained from the image) and $\Omega$ is the set of all samples that can be generated. We call the elements of this vector "features", and the vector itself the "feature vector".
$y_i$ is a label or class related to $x_i$ with $Y$ the set of all the available classes and $|Y|=k$ the number of classes.
$f:\Omega \to Y$ is the function that relates every sample in $\Omega$ to a class in $Y$.
The supervised learning problem consists in using $T$ to find a function $h:\Omega \to Y$ such that, for all $z \in \Omega$:$$ h(z) \approx y_z = f(z) $$
Some considerations:
How to define $h$?
How to measure how similar $h(z)$ is to $y_z = f(z)$?
How to use $T$ to find a "good" $h$?
An artificial neuron is a unit that takes a set of real-valued inputs, performs some operation $o(\cdot)$ and produces a single real-valued output. The figure shows a graphic representation of this (with $g$ the activation function):
We can define the neuron to be: $$o_p(x; w, b)=sgn(wx + b)$$
with $w$ a vector of weights and $b$ the bias parameter of the neuron. We call this function the Perceptron neuron.
We can choose our model $h(x)=o_p(x; w,b)$ to be a perceptron neuron. This unit can represent linearly separable functions, e.g., the Boolean functions AND and OR. Considering the set $\Omega = \{-1, 1\}^2$ and $Y=\{-1, 1\}$, the AND function can be defined as:
$$o_{p\_and}(x; [0.5, 0.5], -0.3)=sgn([0.5, 0.5]x - 0.3)$$We can use PyTorch to implement this AND neuron. For this, we write a class that implements our single-perceptron model:
class Perceptron_AND(nn.Module):
def __init__(self): #We define our model architecture here.
super(Perceptron_AND, self).__init__()
self.w = torch.tensor([[0.5], [0.5]]) # 2x1 weight
self.b = torch.tensor(-0.3) # bias
def forward(self, x): # The evaluation function.
y = torch.mm(x, self.w) + self.b
return torch.sign(y)
We are now ready to test the neuron. For this, we need to define our inputs as PyTorch tensors:
# Simple numpy 1 x n inputs.
x1 = np.array([[1.0, 1.0]])
x2 = np.array([[1.0, -1.0]])
x3 = np.array([[-1.0, 1.0]])
x4 = np.array([[-1.0, -1.0]])
# Convert to tensors.
x1 = torch.from_numpy(x1).float()
x2 = torch.from_numpy(x2).float()
x3 = torch.from_numpy(x3).float()
x4 = torch.from_numpy(x4).float()
#now we evaluate our model
and_model = Perceptron_AND()
y = and_model(x1)
print("{} - {}".format(x1, y))
y = and_model(x2)
print("{} - {}".format(x2, y))
y = and_model(x3)
print("{} - {}".format(x3, y))
y = and_model(x4)
print("{} - {}".format(x4, y))
In a similar way, we can define the OR function as: $$o_{p\_or}(x; [0.5, 0.5], 0.3)=sgn([0.5, 0.5]x + 0.3)$$
class Perceptron_OR(nn.Module):
def __init__(self): #We define our model architecture here.
super(Perceptron_OR, self).__init__()
#TODO: Define the OR model
def forward(self, x): # The evaluation function.
#TODO: Define the evaluation function
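One possible way to fill in the TODOs above, mirroring the AND implementation but with the $+0.3$ bias from the OR formula (a reference sketch, not the only valid solution):
class Perceptron_OR(nn.Module):
    def __init__(self):  # We define our model architecture here.
        super(Perceptron_OR, self).__init__()
        self.w = torch.tensor([[0.5], [0.5]])  # 2x1 weight
        self.b = torch.tensor(0.3)             # bias (+0.3 instead of -0.3)
    def forward(self, x):  # The evaluation function.
        y = torch.mm(x, self.w) + self.b       # 1x2 input times 2x1 weight, plus bias
        return torch.sign(y)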
And we can test the class:
or_model = Perceptron_OR()
y = or_model(x1)
print("{} - {}".format(x1, y))
y = or_model(x2)
print("{} - {}".format(x2, y))
y = or_model(x3)
print("{} - {}".format(x3, y))
y = or_model(x4)
print("{} - {}".format(x4, y))
We can also learn the weights instead of setting them manually. For this, we can use the perceptron rule (for simplicity, we omit the weight and bias parameters in the function call): $$ w = w + \alpha\,[y - o(x)]\,x $$ $$ b = b + \alpha\,[y - o(x)] $$
with $\alpha$ the learning rate. We can implement this rule in PyTorch:
#Hint: Need to transpose something? check here https://pytorch.org/docs/stable/torch.html#torch.transpose
class Perceptron_AND2(nn.Module):
def __init__(self): #We define our model architecture here.
super(Perceptron_AND2, self).__init__()
self.w = #TODO: Define a 2 x 1 random weight. Hint: Take a look at torch.randn (https://pytorch.org/docs/stable/torch.html#torch.randn)
self.b = #TODO: Define a 1 x 1 bias tensor. (Initialize to 0)
def forward(self, x): # The evaluation function.
#TODO: Define the evaluation function
#We get a model, n x 1 x 2 train data, n x 1 x 1 train labels, and learning rate lr
def learn_perceptron(model, train_data, train_labels, lr=0.1):
num_data = len(train_data)
for t in range(10):
for x in range(num_data):
o = #TODO: Evaluate the model.
model.w = #TODO: update the weights.
model.b = #TODO: update the bias.
return model
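A possible completion of the two TODO blocks above, applying the perceptron rule exactly as written (one sketch among several valid ones; torch.randn and torch.transpose are the hinted functions):
class Perceptron_AND2(nn.Module):
    def __init__(self):  # We define our model architecture here.
        super(Perceptron_AND2, self).__init__()
        self.w = torch.randn(2, 1)   # 2 x 1 random weights
        self.b = torch.zeros(1, 1)   # 1 x 1 bias initialized to 0
    def forward(self, x):  # The evaluation function.
        return torch.sign(torch.mm(x, self.w) + self.b)

def learn_perceptron(model, train_data, train_labels, lr=0.1):
    num_data = len(train_data)
    for t in range(10):
        for i in range(num_data):
            x = train_data[i]    # 1 x 2 input
            y = train_labels[i]  # 1 x 1 label
            o = model(x)         # current prediction
            # Perceptron rule: w <- w + lr*(y - o)*x,  b <- b + lr*(y - o)
            model.w = model.w + lr * (y - o) * torch.transpose(x, 0, 1)
            model.b = model.b + lr * (y - o)
    return model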
Now we can try the perceptron learning rule to fit the AND function using a single neuron.
y1 = torch.tensor([[1.0]])
y2 = torch.tensor([[-1.0]])
y3 = torch.tensor([[-1.0]])
y4 = torch.tensor([[-1.0]])
#Test before training
print("Before training:")
model = Perceptron_AND2()
y = model(x1)
print("{} - {}".format(x1, y))
y = model(x2)
print("{} - {}".format(x2, y))
y = model(x3)
print("{} - {}".format(x3, y))
y = model(x4)
print("{} - {}".format(x4, y))
train_dat = [x1, x2, x3, x4]
train_lab = [y1, y2, y3, y4]
model = learn_perceptron(model, train_dat, train_lab)
print("\nAfter training:")
y = model(x1)
print("{} - {}".format(x1, y))
y = model(x2)
print("{} - {}".format(x2, y))
y = model(x3)
print("{} - {}".format(x3, y))
y = model(x4)
print("{} - {}".format(x4, y))
We can also use gradient descent to learn the weights. For this, we will need a differentiable activation function, a performance metric or loss function, and the gradient of this loss w.r.t. the weights:
$o(x) = wx + b$ (Linear units)
Loss function: $E(w, b)=1/2 \sum _{j=1}^m[y_j - o(x_j)]^2$.
According to the gradient descent method, we update the weights in the opposite direction of the loss gradient:
$$w_i = w_i - \alpha {{\partial E} \over \partial w_i}$$Computing this gradient for $E$, ${\partial E \over \partial w_i} = -\sum _{j=1}^m[y_j - o(x_j)]x_{ij}$, leads to the following update rule:
$$w_i = w_i + \alpha \sum _{j=1}^m[y_j - o(x_j)]x_{ij}$$A single perceptron cannot learn functions that are not linearly separable. For example, consider the XOR Boolean function:
As we can see, this is not a linearly separable problem, and we will need a more complex structure to solve it. We can use a perceptron neural network to represent this function:
Let's implement this with PyTorch. We can reuse our previous AND and OR implementations for this:
# These methods could be useful
# https://pytorch.org/docs/stable/torch.html#torch.stack
# https://pytorch.org/docs/stable/torch.html#torch.reshape
class Perceptron_XOR(nn.Module):
def __init__(self): #We define our model architecture here.
super(Perceptron_XOR, self).__init__()
#TODO: Use the AND and OR gates to define the model.
def forward(self, x):
#TODO: define the evaluation function.
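One possible completion, assuming the Perceptron_AND and Perceptron_OR classes defined earlier and using the identity XOR(a, b) = AND(OR(a, b), NOT(AND(a, b))); NOT is just a sign flip when the outputs are in $\{-1, 1\}$:
class Perceptron_XOR(nn.Module):
    def __init__(self):  # We define our model architecture here.
        super(Perceptron_XOR, self).__init__()
        self.and_gate = Perceptron_AND()  # reuse the fixed-weight AND neuron
        self.or_gate = Perceptron_OR()    # reuse the fixed-weight OR neuron
    def forward(self, x):
        o_or = self.or_gate(x)            # 1x1 output of the OR neuron
        o_nand = -self.and_gate(x)        # NOT(AND(x)) as a sign flip
        hidden = torch.reshape(torch.stack([o_or, o_nand]), (1, 2))
        return self.and_gate(hidden)      # AND of the two hidden outputs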
Now, let's check the implementation.
xor_model = Perceptron_XOR()
y = xor_model(x1)
print("{} - {}".format(x1, y))
y = xor_model(x2)
print("{} - {}".format(x2, y))
y = xor_model(x3)
print("{} - {}".format(x3, y))
y = xor_model(x4)
print("{} - {}".format(x4, y))
The backpropagation algorithm allows us to find the parameters of a multi-layered neural network (MLNN). Since it is based on the gradient of the loss, we need a differentiable loss function and neurons with differentiable activations. We could use linear units; however, it is better to include a non-linear transformation of the input data. For this, we can use sigmoid units, defined as: $$o({\bf x}) = \sigma({\bf wx + b}) = {1\over {1 + e^{-({\bf wx + b})}}} $$
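As a quick look at this activation, we can plot how the sigmoid squashes the linear response $wx + b$ into $(0, 1)$ (the range $[-6, 6]$ below is just an illustrative choice):
z = torch.linspace(-6.0, 6.0, 100)  # a range of values for wx + b
plt.plot(z.numpy(), torch.sigmoid(z).numpy())
plt.xlabel("wx + b")
plt.ylabel("sigmoid(wx + b)")
plt.show()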
Since we can also have multiple classes, our desired output $\bf y$ can be represented by a vector of $k$ elements, where all the elements are $0$ except the one at the index corresponding to the desired class (e.g., for $k=3$ classes, class 2 is encoded as $[0, 1, 0]$). In this case, our neural network will have $k$ output neurons. Considering this, we can divide the neurons in the network into input, hidden, and output units.
For a basic formulation of an MLNN, we can represent the model as a directed acyclic graph (feedforward neural network):
Considering sigmoid units and a loss function defined as:
$$E({\bf w, b})={1\over 2} \sum _{j=1}^m\sum _{i=1}^k[y_{ij} - o_i(x_j)]^2$$The backpropagation algorithm for a two-layer neural network is formulated as follows:
Initialize the weights to small random numbers.
Repeat until termination condition is met:
Propagate the input forward through the network (evaluate the output for every neuron).
Propagate the errors backward through the network:
Compute the error term for each output unit $r$:
$$\delta _r = o_r(1-o_r)(y_r-o_r)$$
Compute the error term for each hidden unit $h$:
$$\delta _h = o_h(1-o_h)\sum_{r=1}^k w_{rh}\delta_r$$
Update the weights according to:
$$w_{ji}=w_{ji} + \alpha \delta_j x_{ji}$$With $x_{ji}$ and $w_{ji}$ the $i$th input component to unit $j$ and its corresponding weight, respectively.
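To make the update equations concrete, here is a minimal NumPy sketch of a single backpropagation step for a small 2-2-1 sigmoid network (the shapes and the learning rate are assumptions for illustration; PyTorch will do all of this for us below):
def np_sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, y, W1, b1, W2, b2, alpha=0.1):
    # Forward pass: evaluate the output of every neuron.
    o_h = np_sigmoid(W1 @ x + b1)    # hidden outputs, shape (2,)
    o_r = np_sigmoid(W2 @ o_h + b2)  # output unit, shape (1,)
    # Error term for each output unit r.
    delta_r = o_r * (1 - o_r) * (y - o_r)
    # Error term for each hidden unit h.
    delta_h = o_h * (1 - o_h) * (W2.T @ delta_r)
    # Weight updates: w_ji = w_ji + alpha * delta_j * x_ji.
    W2 = W2 + alpha * np.outer(delta_r, o_h)
    b2 = b2 + alpha * delta_r
    W1 = W1 + alpha * np.outer(delta_h, x)
    b1 = b1 + alpha * delta_h
    return W1, b1, W2, b2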
Now, let's see how to train a multi-layered neural network in PyTorch. Since many training algorithms are already implemented, we only need to learn how to use them. We will define a model with trainable instead of constant weights, and train the network to learn the XOR function:
class XOR_Model(nn.Module):
def __init__(self): #We define our model architecture here.
super(XOR_Model, self).__init__()
#TODO: Define the XOR architecture: 2 inputs, 2 hidden neurons, 1 output neuron.
#HINT: You can define trainable linear units with https://pytorch.org/docs/stable/nn.html#torch.nn.Linear
def forward(self, x):
#TODO: Define the evaluation function.
# The sigmoid function in Pytorch: https://pytorch.org/docs/stable/torch.html#torch.sigmoid
# Now we write a function to train our model.
def train_model(model, train_set, train_labels, lr=0.01, T=100):
    criterion = nn.MSELoss() # our loss function.
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9) # Our learning algorithm
num_samples = len(train_set)
model.train() # set the model in training mode.
for t in range(T):
for i in range(num_samples):
x = train_set[i]
y = train_labels[i]
opt.zero_grad() # Clean the gradients before backpropagate and update
prediction = model(x) # get current model prediction.
loss = criterion(prediction, y) # evaluate the loss.
loss.backward() # backpropagating gradients.
opt.step() # Updating all the weights.
return model
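A possible way to fill in the XOR_Model TODOs above, using two trainable nn.Linear layers with sigmoid activations (a sketch of one valid solution):
class XOR_Model(nn.Module):
    def __init__(self):  # We define our model architecture here.
        super(XOR_Model, self).__init__()
        self.hidden = nn.Linear(2, 2)  # 2 inputs -> 2 hidden neurons
        self.out = nn.Linear(2, 1)     # 2 hidden neurons -> 1 output neuron
    def forward(self, x):
        h = torch.sigmoid(self.hidden(x))  # hidden sigmoid units
        return torch.sigmoid(self.out(h))  # output sigmoid unit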
Let's check the code:
# Simple numpy 1 x n inputs.
x1 = np.array([[1.0, 1.0]])
x2 = np.array([[1.0, 0.0]]) # we set the values for working with our model (0, 1) instead of (-1, 1).
x3 = np.array([[0.0, 1.0]])
x4 = np.array([[0.0, 0.0]])
# Convert to tensors.
x1 = torch.from_numpy(x1).float()
x2 = torch.from_numpy(x2).float()
x3 = torch.from_numpy(x3).float()
x4 = torch.from_numpy(x4).float()
y1 = torch.tensor([[0.0]])
y2 = torch.tensor([[1.0]])
y3 = torch.tensor([[1.0]])
y4 = torch.tensor([[0.0]])
model = XOR_Model()
#now we evaluate our model before training.
print("we evaluate our model before training.")
model.eval() # Evaluation mode
y = model(x1)
print("{} - {}".format(x1, y))
y = model(x2)
print("{} - {}".format(x2, y))
y = model(x3)
print("{} - {}".format(x3, y))
y = model(x4)
print("{} - {}".format(x4, y))
train_dat = [x1, x2, x3, x4]
train_lab = [y1, y2, y3, y4]
model = train_model(model, train_dat, train_lab, lr=1e-1, T=1000)
#now we evaluate our model after training.
print("\n we evaluate our model after training.")
model.eval() # Evaluation mode
y = model(x1)
print("{} - {}".format(x1, y))
y = model(x2)
print("{} - {}".format(x2, y))
y = model(x3)
print("{} - {}".format(x3, y))
y = model(x4)
print("{} - {}".format(x4, y))
We can try with different parameters to see how the learning process is affected.
Now, let's implement an MLNN to learn from real data. The dataset for this exercise is composed of 2000 images for training and 876 for testing. Each image is an RGB $24\times 24$ bounding box obtained from a blood sample and can contain either background or a blood parasite known as T. cruzi. The dataset was one of the results of a Mexican project for the diagnosis of Chagas disease (CONACYT/SALUD-2009-C01-113848, contact: Dr. Hugo Ruiz rpina@correo.uady.mx). The picture below shows some negative (top) and positive (bottom) samples from the dataset.
Considering the previous XOR learning example, implement an MLNN to classify images as containing a parasite (1) or not (0).
#an auxiliary function for loading data
def load_data_img(mode="train"):
data = np.load("data/chagas.dataset.lr.npy")
labels = np.load("data/chagas.labels.lr.npy")
if mode == "train":
return data[0:2000] / 255.0, labels[0:2000]
if mode == "test":
return data[2000:] / 255.0, labels[2000:]
return None
class MlNN_Model(nn.Module):
def __init__(self): #We define our model architecture here.
super(MlNN_Model, self).__init__()
        #TODO: Define the model architecture: 32 hidden neurons, 1 output neuron.
# We will receive a vectorized version of the RGB images.
def forward(self, x):
#TODO: Define the evaluation function.
def train_model2(model, train_provider, lr=0.01, T=1000):
#TODO: Define the criterion, optimizer, etc.
i = 0
losses = []
while(1): # Here, the provider will tell us when to stop.
        x, y = train_provider.nextBatch() # get a pair of data, label np arrays.
x = np.reshape(x, (-1, 24*24*3))
x = torch.from_numpy(x).float() #we need to convert the arrays to tensors.
y = torch.from_numpy(y).float()
#TODO: Compute the loss, backpropagate the errors, update W, etc
loss = ????
if not i % 100:
losses.append(loss.item())
if train_provider.shouldStop():
break
i+= 1
print("done")
return model, losses
def test_model2(model, test_provider):
print("\n testing")
model.eval()
torch.set_grad_enabled(False)
count = 0
mean_err = 0
while(1):
x, y = test_provider.nextBatch()
x = np.reshape(x, (-1, 24*24*3))
x = torch.from_numpy(x).float() #we need to convert the arrays to tensors.
predict = model(x)
        predict = predict.numpy() # We get a numpy representation of the prediction.
predict = float(predict > 0.5)
# print("label: " + str(y) + " prediction: " + str(predict))
count += 1
        mean_err += abs(y - predict)
if test_provider.shouldStop():
break
print("testing error: " + str(mean_err / count))
Now let's try the code:
#setting up providers
train_provider = dp.DataProvider(reader=load_data_img)
test_provider = dp.DataProvider(reader=load_data_img)
train_provider.loadSamples(mode="train")
test_provider.loadSamples(mode="test")
#we can configure batch size and epoch here
train_provider.num_epoch = 100
train_provider.setBatchSize(32, "train")
model = MlNN_Model()
test_model2(model, test_provider)
torch.set_grad_enabled(True)
model, losses = train_model2(model, train_provider, lr=0.001, T=1000)
# Testing our model
test_provider = dp.DataProvider(reader=load_data_img)
test_provider.loadSamples(mode="test")
test_model2(model, test_provider)
plt.plot(losses)
plt.show()
Again, try with different combinations of hyper-parameters.
The backpropagation algorithm is not guaranteed to converge to a global minimum. Adding momentum to the weight update rule can help with this problem: $$ \Delta w_{ji}(n) = \alpha \delta _j x_{ji} + \beta \Delta w_{ji}(n-1) $$ $$ w_{ji} = w_{ji} + \Delta w_{ji}(n) $$
with $\Delta w_{ji}(n)$ the weight update in the $n$th learning iteration. In order to avoid overfitting, we can add a regularization term to the loss function.
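In PyTorch, for example, an L2 penalty (weight decay) can be added directly through the optimizer; the value 1e-4 below is only an illustrative choice:
opt = optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)  # L2 regularization on the weights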
Tom M. Mitchell. Machine Learning. McGraw-Hill Science/Engineering/Math; 1999.
Srivastava, Hinton, Krizhevsky, Sutskever, and Salakhutdinov. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. 2014
Nair and Hinton. Rectified Linear Units Improve Restricted Boltzmann Machines. Proceedings of the 27th International Conference on International Conference on Machine Learning. 2010.
Goh. Why Momentum Really Works. Distill, 2017. https://distill.pub/2017/momentum/