This page visualizes a single-layer perceptron with two inputs $x$ and $y$ and one output $z$:
\[ z = f(w_x x + w_y y + b) \]
- $w_x$ and $w_y$ denote two weights for the inputs $x$ and $y$, respectively;
- $b$ is a bias term;
- $f$ denotes an activation function (e.g., sigmoid, tanh, or ReLU).
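The formula above can be sketched directly in code. This is a minimal illustration, not the page's implementation; the function names (`perceptron`, `sigmoid`, `relu`) are hypothetical.

```python
import math

def perceptron(x, y, w_x, w_y, b, f):
    """Single-layer perceptron: z = f(w_x*x + w_y*y + b)."""
    return f(w_x * x + w_y * y + b)

# Two of the activation functions mentioned above (hypothetical helpers):
def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def relu(u):
    return max(0.0, u)
```

With the sigmoid activation, the output is squashed into $(0, 1)$; with ReLU, negative pre-activations map to exactly $0$.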
This visualization shows the perceptron's output as a heat map while you interactively change the perceptron's parameters: the values of $w_x$, $w_y$, $b$ and the choice of activation function $f$. The heat map represents the two inputs $x$ and $y$ on the $x$-axis and $y$-axis, respectively, and the output as the color intensity of the plotting area.
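The heat map boils down to evaluating $z$ on a grid of $(x, y)$ points. A minimal sketch of that computation, assuming a hypothetical `heatmap_values` helper and a square grid over $[\mathrm{lo}, \mathrm{hi}]^2$:

```python
import math

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def heatmap_values(w_x, w_y, b, f, lo=-1.0, hi=1.0, n=5):
    """Evaluate z = f(w_x*x + w_y*y + b) on an n-by-n grid over [lo, hi]^2.

    Returns a list of rows (one per y value); a plotting library would
    map each value to a color.
    """
    step = (hi - lo) / (n - 1)
    xs = [lo + i * step for i in range(n)]
    return [[f(w_x * x + w_y * y + b) for x in xs] for y in xs]
```

Because $w_x x + w_y y + b$ is linear in $x$ and $y$, the resulting heat map always shades monotonically across a single straight decision boundary, whatever activation is chosen.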
Treating $0$ as false and $1$ as true, the perceptron can realize logic units (a.k.a. threshold logic units) such as AND, OR, and NAND. Try adjusting the parameters to reproduce each of these logic units. In addition, explore why a single-layer perceptron cannot realize XOR.
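One possible set of parameters for each gate can be checked in code; these weight and bias values are illustrative (many other settings work), and a hard step activation stands in for the thresholding:

```python
def step(u):
    """Threshold activation: fire (1) when the pre-activation is non-negative."""
    return 1 if u >= 0 else 0

def logic_unit(x, y, w_x, w_y, b):
    return step(w_x * x + w_y * y + b)

# Illustrative parameter choices (one of many valid settings):
AND  = dict(w_x=1.0, w_y=1.0, b=-1.5)    # fires only when both inputs are 1
OR   = dict(w_x=1.0, w_y=1.0, b=-0.5)    # fires when at least one input is 1
NAND = dict(w_x=-1.0, w_y=-1.0, b=1.5)   # the negation of AND

def truth_table(params):
    """Outputs for inputs (0,0), (0,1), (1,0), (1,1)."""
    return [logic_unit(x, y, **params) for x in (0, 1) for y in (0, 1)]
```

XOR's truth table is $[0, 1, 1, 0]$: the true points $(0,1)$ and $(1,0)$ sit diagonally opposite the false points $(0,0)$ and $(1,1)$, so no single straight line $w_x x + w_y y + b = 0$ can separate them, which is why no choice of parameters above reproduces XOR.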