Interactive multi-layer perceptron

\[ h = f_h^{(1)}(w_{hx}^{(1)} x + w_{hy}^{(1)} y + b_h^{(1)}) \]
\[ v = f_v^{(1)}(w_{vx}^{(1)} x + w_{vy}^{(1)} y + b_v^{(1)}) \]
\[ z = f_z^{(2)}(w_{zh}^{(2)} h + w_{zv}^{(2)} v + b_z^{(2)}) \]

This page visualizes a multi-layer perceptron with two inputs $x$ and $y$, two hidden units $h$ and $v$, and one output $z$.

Here:

  • $w_{hx}^{(1)}$, $w_{hy}^{(1)}$, $b_h^{(1)}$, $f_h^{(1)}$ denote two weights, a bias, and an activation function for computing $h$ from the inputs $x$ and $y$;
  • $w_{vx}^{(1)}$, $w_{vy}^{(1)}$, $b_v^{(1)}$, $f_v^{(1)}$ denote two weights, a bias, and an activation function for computing $v$ from the inputs $x$ and $y$;
  • $w_{zh}^{(2)}$, $w_{zv}^{(2)}$, $b_z^{(2)}$, $f_z^{(2)}$ denote two weights, a bias, and an activation function for computing $z$ from the hidden units $h$ and $v$.
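The three equations above can be sketched as a plain forward pass. This is a minimal illustration; the helper name, parameter dictionary layout, and the choice of $\tanh$ as the activation are assumptions, not part of the page itself.

```python
import math

def forward(x, y, params, f=math.tanh):
    """Forward pass of the two-layer perceptron.

    Keys of `params` mirror the symbols in the equations:
    w_hx, w_hy, b_h for the hidden unit h; w_vx, w_vy, b_v for v;
    w_zh, w_zv, b_z for the output z. (Naming is illustrative.)
    """
    h = f(params["w_hx"] * x + params["w_hy"] * y + params["b_h"])
    v = f(params["w_vx"] * x + params["w_vy"] * y + params["b_v"])
    z = f(params["w_zh"] * h + params["w_zv"] * v + params["b_z"])
    return z
```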

The visualization shows the output of the two-layer perceptron as a heat map that updates interactively as you change the perceptron's parameters. The heat map plots the two inputs $x$ and $y$ on the $x$-axis and $y$-axis, respectively, and encodes the output $z$ as color intensity.
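The data behind such a heat map can be produced by evaluating $z$ on a grid of $(x, y)$ points. The sketch below assumes a $\tanh$ activation and a grid over $[0, 1] \times [0, 1]$; the actual page may use a different activation and input range.

```python
import numpy as np

def mlp_grid(params, n=100):
    # Evaluate z on an n-by-n grid over [0, 1] x [0, 1]
    # (grid range and tanh activation are assumptions).
    xs = np.linspace(0.0, 1.0, n)
    x, y = np.meshgrid(xs, xs)
    h = np.tanh(params["w_hx"] * x + params["w_hy"] * y + params["b_h"])
    v = np.tanh(params["w_vx"] * x + params["w_vy"] * y + params["b_v"])
    z = np.tanh(params["w_zh"] * h + params["w_zv"] * v + params["b_z"])
    return z  # shape (n, n); render with e.g. matplotlib's imshow
```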

Let’s confirm that, by adjusting its parameters, the perceptron can realize XOR.

$x$  $y$  AND  OR  NAND  XOR
 0    0    0    0    1     0
 0    1    0    1    1     1
 1    0    0    1    1     1
 1    1    1    1    0     0
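One concrete parameter setting that realizes XOR uses a step activation and makes $h$ compute NAND, $v$ compute OR, and the output layer compute AND, since XOR(x, y) = AND(NAND(x, y), OR(x, y)). The specific weights below are one illustrative choice, not the only one.

```python
def step(a):
    # Step activation: 1 if the pre-activation is positive, else 0.
    return 1 if a > 0 else 0

def xor_mlp(x, y):
    h = step(-1.0 * x - 1.0 * y + 1.5)  # h = NAND(x, y)
    v = step( 1.0 * x + 1.0 * y - 0.5)  # v = OR(x, y)
    z = step( 1.0 * h + 1.0 * v - 1.5)  # z = AND(h, v)
    return z

for x in (0, 1):
    for y in (0, 1):
        print(x, y, xor_mlp(x, y))  # matches the XOR column of the table
```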