A JavaScript demonstration of the perceptron algorithm, written for a reading group session. This was originally a separate webpage, but when I rewrote my website I decided to make it a blog post. It can still be reached at /perceptron-demo, though.

# Perceptron

Perceptron is a very simple online learning algorithm for binary classification, dating from the 1950s^{1}. Such an algorithm tries to classify input (here, points in the plane) into one of two categories. It receives the correct answer after each guess, and aims to minimize its total number of mistakes relative to a *hypothesis class*. Here, the hypothesis class is the set of halfplanes. Perceptron thus aims to classify the input points as well as any halfplane would, where a halfplane assigns points to one category if they lie inside it and to the other if they lie outside it. In the so-called *realizable case*, the true classifications really do come from a halfplane in this manner. Even in the realizable case, no online classification algorithm can be guaranteed to make only finitely many mistakes on an infinite stream of inputs, even if all the points lie within some radius *R* of the origin^{2}.
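Classification by a halfplane is just the sign of an inner product. A minimal sketch (the names here are illustrative, not taken from the actual demo code, and the sign convention for which side counts as "inside" is an assumption):

```javascript
// A halfplane with normal vector w classifies a point x by the sign of the
// inner product <w, x>: +1 on one side of the boundary line, -1 on the other.
// Which sign means "inside" is a convention; we assume <w, x> >= 0 here.
function classify(w, x) {
  const dot = w[0] * x[0] + w[1] * x[1];
  return dot >= 0 ? 1 : -1;
}

classify([1, 0], [2, 3]);   // +1: x lies on w's side of the vertical line
classify([1, 0], [-2, 3]);  // -1: x lies on the other side
```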

Perceptron’s simplicity lends it to analytic study. It can be shown that if all the points it is asked to classify (1) lie within some radius *R* of the origin and (2) stay separated from the boundary of the true halfplane by some distance *d*, then it will make only finitely many mistakes^{3}.

The following demo of perceptron was made for a reading group following the notes of Shai Shalev-Shwartz^{4}.

**Using the demo**: You can click to input points *x* for the algorithm to classify, or send in random points. The algorithm classifies points by maintaining a halfplane with normal vector^{5} *w* and reporting which side of the halfplane each point falls on. The true classification is given by the blue halfplane (which the algorithm does not have access to). You can change *u*, the normal vector to the blue halfplane, by dragging it or by typing in coordinates. After attempting to classify each point, perceptron receives the feedback *y*, which is +1 when *x* is in the blue halfplane and -1 otherwise. It updates its guess, the (normal vector for the) purple halfplane, based on this feedback, and the process repeats. The bound shown here is from Theorem 3.9 of Shalev-Shwartz’s notes^{6}.
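The predict-then-update step described above can be sketched as follows. This is the standard perceptron update (on a mistake, move *w* by *y*·*x*), not the literal code from perceptron.js:

```javascript
// One round of the perceptron: predict with the current normal vector w,
// then, if the prediction disagrees with the feedback y, apply the
// standard update w <- w + y * x. Returns whether a mistake was made.
function perceptronStep(w, x, y) {
  // Predict which side of the halfplane x falls on: sign of <w, x>.
  const predicted = (w[0] * x[0] + w[1] * x[1]) >= 0 ? 1 : -1;
  if (predicted !== y) {
    // Mistake: tilt w toward x (if y = +1) or away from it (if y = -1).
    w[0] += y * x[0];
    w[1] += y * x[1];
    return true;
  }
  return false;
}
```

Feeding in a stream of points labeled by the true halfplane *u* and calling `perceptronStep` on each one reproduces the process the demo animates.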

You can view the full source code at perceptron.js (no promises as to the quality though, unfortunately).

Frank Rosenblatt, 1957 technical report (pdf) ↩︎

OL p. 169 ↩︎

See Theorem 3.9 of OL, or Mohri, Rostamizadeh 2013 for a recent proof of new bounds. ↩︎

Shai Shalev-Shwartz, Online Learning Survey (OL). OLsurvey.pdf. The perceptron algorithm itself is on p. 170. ↩︎

Here, the halfplane associated to a normal vector v is everything behind the line through the origin that is perpendicular to v. ↩︎

This is an upper bound on the number of mistakes Perceptron can make, in terms of the inner products ⟨x, u⟩ for each point x given, the largest norm of the points x, and the norm of u. The bound is recomputed whenever a new point x is placed and is displayed below the demonstration, along with some other statistics. Note that the bound only holds if you do not change u during the process, which (along with the existence of such a u governing the true classification) corresponds to the realizable case. Choosing a sequence of *x* increasingly close to the line bounding the halfplane, such that Perceptron makes a mistake each time, will cause the bound to diverge (this should happen when sending in random points for long enough). ↩︎
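The quantities this footnote mentions combine into the classical perceptron mistake bound: with *R* the largest ‖x‖ and γ the smallest y·⟨x, u⟩, the number of mistakes is at most (R‖u‖/γ)². A sketch of that computation (assuming this is the formula the demo uses; the function and variable names are mine):

```javascript
// Classical perceptron mistake bound (R * ||u||)^2 / gamma^2, where
// R is the largest norm among the points and gamma is the smallest
// margin y * <x, u>. Diverges to Infinity as gamma approaches 0,
// which is the divergence described in the footnote.
function mistakeBound(points, labels, u) {
  const norm = v => Math.hypot(v[0], v[1]);
  let R = 0;
  let gamma = Infinity;
  for (let i = 0; i < points.length; i++) {
    const x = points[i];
    R = Math.max(R, norm(x));
    gamma = Math.min(gamma, labels[i] * (x[0] * u[0] + x[1] * u[1]));
  }
  return (R * norm(u)) ** 2 / gamma ** 2;
}
```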