Perceptrons in C++

Last time, I talked about a simple kind of neural net called a perceptron that you can train to compute simple functions. For experimenting, I coded a simple example using Excel. That’s handy for changing things on the fly, but not so handy for putting the code in a microcontroller. This time, I’ll show you how the code looks in C++ and also tell you more about what you can do when faced with a more complex problem.

I built a generic base class that implements the core logic and can handle different vector sizes. Here’s the header:

class Learn
{
protected:
 unsigned int stimct; // number of stimuli (the code adds one internally for the bias)
 unsigned int resct; // number of results
 float threshold; // threshold (default 1.0)
 float **weights; // weight matrix
 float *result; // results
 float *_stim; // place to put stim + bias
 void set_stim(float *s); // helper to load stim+bias
public:
 // Note: stimulusct is your stimulus count;
 // The code will add one to it to account for bias
 Learn(unsigned int stimulusct, unsigned int resultct);
 ~Learn();
 int init(void); // reset weights and threshold
 // perform training (stimulus should give result and use a training rate)
 int train(float *stim, unsigned int result, float wt=1.0f);
 // Get result for given stimulus
 unsigned int fetch(float *stim);
 // set/get threshold value
 void setThreshold(float t) { threshold = t; }
 float getThreshold(void) { return threshold; }
 // load weights from file
 int load(const char *fn);
 // save weights to file
 int save(const char *fn);
 // debug output
 void dump(void);
};

The class handles adding the bias input and doing all the math for training and classification. You can find the class code, along with a simple demo called simple.cpp, on GitHub. Just run make to build the example program.
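
The implementation itself lives in the GitHub repository, but the math inside fetch() and train() is the standard perceptron: a weighted sum of the inputs (plus bias) compared against the threshold, and the classic error-driven weight update. Here is a minimal sketch of what those two members might look like for a single output. The member names come from the header above, but the bodies are my assumption, not the actual repository code:

// Hypothetical sketch of the core math; the real code on GitHub may differ
unsigned int Learn::fetch(float *stim)
{
 set_stim(stim); // copy the stimulus and append the bias input into _stim
 float sum = 0.0f;
 for (unsigned int s = 0; s < stimct + 1; s++) // +1 for the bias
  sum += weights[0][s] * _stim[s];
 result[0] = (sum >= threshold) ? 1.0f : 0.0f; // hard step activation
 return (unsigned int)result[0];
}

int Learn::train(float *stim, unsigned int res, float wt)
{
 unsigned int actual = fetch(stim); // classify with the current weights
 float err = (float)res - (float)actual; // +1, 0, or -1
 for (unsigned int s = 0; s < stimct + 1; s++)
  weights[0][s] += wt * err * _stim[s]; // wt is the training rate
 return 0;
}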

The simple demo has an array of training data for AND logic and another for OR logic. Here’s the AND data:

float and_trng[4][2] = {
 { 0.0, 0.0 },
 { 0.0, 1.0 },
 { 1.0, 0.0 },
 { 1.0, 1.0 }
};

int and_trngr[]={ 0, 0, 0, 1 };
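
The OR training set uses the same four input pairs; only the expected results change to match the OR truth table. The names below simply mirror the AND arrays and are my guess at the demo’s convention:

float or_trng[4][2] = {
 { 0.0, 0.0 },
 { 0.0, 1.0 },
 { 1.0, 0.0 },
 { 1.0, 1.0 }
};

int or_trngr[] = { 0, 1, 1, 1 }; // OR truth table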

The code does the training repeatedly until the right answers appear:

 for (i = 0; i < 4; i++)
   lobj.train(trainingdata[i], trainresult[i], TRAINRATE);
 for (i = 0; i < 4; i++)
   if ((res = lobj.fetch(trainingdata[i])) != trainresult[i]) again = 1;

The again variable causes the training/testing sequence to repeat. If the data isn’t linearly separable (for example, try changing the data to do an XOR), the code will break out after 500 passes through the training data.
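
Putting those two lines in context, the surrounding retry loop might look like the sketch below. The 500-pass cutoff and TRAINRATE come straight from the demo’s description; the exact loop structure and variable names are my assumptions:

// Hypothetical training loop: repeat until all four cases classify
// correctly, or give up after 500 passes (e.g., XOR never converges)
int pass = 0, again = 1;
unsigned int res;
while (again && pass < 500)
{
 again = 0;
 for (int i = 0; i < 4; i++)
  lobj.train(trainingdata[i], trainresult[i], TRAINRATE);
 for (int i = 0; i < 4; i++)
  if ((res = lobj.fetch(trainingdata[i])) != trainresult[i])
   again = 1; // at least one wrong answer; keep training
 pass++;
}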

Less Exciting

Of course, AND and OR gates aren’t very exciting. To do anything really interesting, you need multiple layers of perceptrons to form a real neural network. According to the math, three layers of perceptrons are sufficient to handle any case. One layer accepts inputs, the outputs from that layer feed the “hidden” layer, and those outputs feed a layer of output perceptrons.

Training a network like that is more difficult, of course, but the principle is the same: errors at the outputs are propagated backward through the network, and each perceptron’s weights are adjusted to reduce those errors.
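
To get a feel for what one of those updates looks like, here is a minimal sketch of a single output-layer backpropagation step. It uses a sigmoid activation rather than the hard threshold above, because backpropagation needs a differentiable activation. None of this is part of the Learn class, and the function and variable names are mine:

#include <cmath>

// Sketch: one backpropagation update for a single output neuron
float sigmoid(float x) { return 1.0f / (1.0f + expf(-x)); }

// hidden[]: the n outputs from the hidden layer
// w[]:      weights from the hidden layer to this output neuron
// target:   desired output; rate: learning rate
void backprop_output(float *hidden, float *w, unsigned int n,
                     float target, float rate)
{
 float sum = 0.0f;
 for (unsigned int i = 0; i < n; i++) sum += w[i] * hidden[i];
 float out = sigmoid(sum);

 // delta = error scaled by the sigmoid's derivative
 float delta = (target - out) * out * (1.0f - out);

 // move each weight a little in the direction that reduces the error
 for (unsigned int i = 0; i < n; i++)
  w[i] += rate * delta * hidden[i];
}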

Open Source and Hardware

I wanted to write some basic software to illustrate the principles. However, if you want to really ramp up, you should probably turn to a well-developed library like OpenNN (you can see a similar example for OpenNN, and it even includes XOR). Or you might consider FANN if you don’t like OpenNN.

If you want to try something in hardware, these algorithms are not very hard to do in an FPGA. Or you can buy some hardware.

If you want to see some of the projects we’ve looked at that use neural networks, you can read about targeting cats, playing Super Mario, or controlling a helicopter.

