Interactive multi-layer perceptron
$$
\begin{aligned}
h &= f^{(1)}_h\left(w^{(1)}_{hx} x + w^{(1)}_{hy} y + b^{(1)}_h\right) \\
v &= f^{(1)}_v\left(w^{(1)}_{vx} x + w^{(1)}_{vy} y + b^{(1)}_v\right) \\
z &= f^{(2)}_z\left(w^{(2)}_{zh} h + w^{(2)}_{zv} v + b^{(2)}_z\right)
\end{aligned}
$$
This page visualizes a multi-layer perceptron with two inputs $x$ and $y$, two hidden units $h$ and $v$, and one output $z$.
Here:
- $w^{(1)}_{hx}$, $w^{(1)}_{hy}$, $b^{(1)}_h$, and $f^{(1)}_h$ denote the two weights, the bias, and the activation function for computing $h$ from the inputs $x$ and $y$;
- $w^{(1)}_{vx}$, $w^{(1)}_{vy}$, $b^{(1)}_v$, and $f^{(1)}_v$ denote the two weights, the bias, and the activation function for computing $v$ from the inputs $x$ and $y$;
- $w^{(2)}_{zh}$, $w^{(2)}_{zv}$, $b^{(2)}_z$, and $f^{(2)}_z$ denote the two weights, the bias, and the activation function for computing $z$ from the hidden units $h$ and $v$.
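As a minimal sketch of this forward pass (assuming NumPy and sigmoid activations, neither of which is prescribed by the page), the three equations above can be written as:

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def forward(x, y, W1, b1, w2, b2, f1=sigmoid, f2=sigmoid):
    """Compute h, v, and z for the two-input, two-hidden-unit, one-output network."""
    # Hidden layer: rows of W1 are (w_hx, w_hy) and (w_vx, w_vy); b1 = (b_h, b_v)
    h, v = f1(W1 @ np.array([x, y]) + b1)
    # Output layer: w2 = (w_zh, w_zv), b2 = b_z
    z = f2(w2 @ np.array([h, v]) + b2)
    return h, v, z

# Example call with arbitrary parameter values
h, v, z = forward(0.3, 0.7,
                  W1=np.array([[1.0, 1.0], [-1.0, -1.0]]), b1=np.array([-0.5, 1.5]),
                  w2=np.array([1.0, 1.0]), b2=-1.5)
```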
In this visualization, you can see the output of the two-layer perceptron as a heat map while interactively changing the parameters of the perceptron. The heat map places the two inputs $x$ and $y$ on the $x$-axis and $y$-axis, respectively, and represents the output as the color intensity of the plotting area.
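To reproduce a heat map like the interactive one offline, a sketch along these lines (assuming NumPy and Matplotlib, with arbitrary example parameters in place of the interactive controls) evaluates $z$ on a grid of $(x, y)$ values and plots it:

```python
import numpy as np
import matplotlib.pyplot as plt

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Arbitrary example parameters; on the page these are changed interactively
w_hx, w_hy, b_h = 6.0, 6.0, -3.0
w_vx, w_vy, b_v = -6.0, -6.0, 9.0
w_zh, w_zv, b_z = 6.0, 6.0, -9.0

xs = np.linspace(0.0, 1.0, 200)
ys = np.linspace(0.0, 1.0, 200)
X, Y = np.meshgrid(xs, ys)

H = sigmoid(w_hx * X + w_hy * Y + b_h)   # hidden unit h over the grid
V = sigmoid(w_vx * X + w_vy * Y + b_v)   # hidden unit v over the grid
Z = sigmoid(w_zh * H + w_zv * V + b_z)   # output z over the grid

plt.pcolormesh(X, Y, Z, shading="auto")
plt.xlabel("x")
plt.ylabel("y")
plt.colorbar(label="z")
plt.show()
```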
Let’s confirm that, by changing its parameters, the perceptron can realize XOR; one concrete setting is sketched after the truth table below.
x | y | AND | OR | NAND | XOR |
---|---|---|---|---|---|
0 | 0 | 0 | 0 | 1 | 0 |
0 | 1 | 0 | 1 | 1 | 1 |
1 | 0 | 0 | 1 | 1 | 1 |
1 | 1 | 1 | 1 | 0 | 0 |
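One parameter setting that realizes XOR (using a step activation here; these weights are just one illustrative choice, not something the page prescribes) makes $h$ compute OR, $v$ compute NAND, and $z$ compute the AND of the two, since XOR(x, y) = AND(OR(x, y), NAND(x, y)):

```python
import numpy as np

def step(u):
    # Heaviside step activation: 1 if the pre-activation is positive, else 0
    return (np.asarray(u) > 0).astype(float)

W1 = np.array([[ 1.0,  1.0],   # (w_hx, w_hy): h = OR(x, y)
               [-1.0, -1.0]])  # (w_vx, w_vy): v = NAND(x, y)
b1 = np.array([-0.5, 1.5])     # (b_h, b_v)
w2 = np.array([1.0, 1.0])      # (w_zh, w_zv): z = AND(h, v)
b2 = -1.5                      # b_z

for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    h, v = step(W1 @ np.array([x, y]) + b1)
    z = step(w2 @ np.array([h, v]) + b2)
    print(x, y, int(z))  # prints 0 0 0 / 0 1 1 / 1 0 1 / 1 1 0, matching the XOR column
```

With this setting, the heat map is bright near $(0, 1)$ and $(1, 0)$ and dark near $(0, 0)$ and $(1, 1)$, matching the XOR column of the table.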