Let’s keep things simple: you want to read a post, and conveniently I’ve written one for you!
I’ll spare everyone my recent fascinations with macabre subjects and opt to get right to the topic of the day!
Anyway, as the Jane Goodall of bots I’ve learned a little about how to communicate with them using rules they understand, and today I’m going to show you how to make rules that get a bot to understand, and do, what you want it to do.
But… before we get into that, here’s the wallpaper!
The Contrast-inator Bot Wallpaper
Why Contrast-inator? Well, as far as the “-inator” part is concerned let’s just say I enjoy Phineas and Ferb. 😛
As for the “Contrast” part, we’ll get into that next but the big eyes of this bot are important. 😛
Also… it doesn’t need hands, arms, legs or feet to do its job so I didn’t 3D print the parts, waste not want not! 😉 😛

The Contrast-inator
So, recently I received a few comments that amount to something along the lines of “How do you decide on the ‘rules’ for training your neural network?”.
My response is basically if you can “phrase” a training rule in a way the bot can understand, then you can make the rules to be whatever you want/need them to be.
And the thing we’re going to try to teach the bot today to help us explore this topic is… given an input value, tell us if the value falls to the “left” or to the “right” of an “anchor point”.
That sounds more complicated than it really is and I intend this tutorial to be for beginners so let me try to simplify my description… uh… so, think of a gray-scale gradient of black to white.
Imagine a red line in the center of the gradient.
Now, given some gray-scale color e.g. RGB(102, 102, 102) as “input”, how can we train a bot to tell us if the color falls to the left or to the right of the red line… like this:

I know that might seem complicated while at the same time also seem kind of useless… I assure you that neither stipulated condition is true.
In regard to usefulness, just as a hypothetical example… a neural network like this could act as the “brain” of a “line follower” robot but instead of reading the voltages directly from a photodiode and then using if/else to direct motors, you could pass the diode voltages to the neural network and let it decide which motor to move via an H-Bridge / Digital Potentiometer.
An Arduino would need a WiFi Shield for something like that to work, but a line follower built on something like a Raspberry Pi could run its neural network “brain” locally.
Which brings us back to complexity and how we build a rule set to teach our Contrast-inator bot to tell us if a pixel is brighter or darker than the color where the red line is.
Forget about what I said about the hypothetical line-follower robot, the Arduino and the Raspberry Pi… it’s more complicated than I want this post to be and it’s just an example anyway. 😛
Let’s start over…
We know that any answers our bot gives us (the output) will look like a “floating point” number (a decimal value e.g. 0.01) and basically our input will also be a floating point number too.
With this in mind we can start to imagine that our training data inputs and the associated outputs will look like a series of numbers.
But what will the numbers mean and how can we know if the bot is correct?
Well, let’s step back again and think about what rules we need to teach the bot first before we even worry about encoding the training data for the bot.
What rules might a human need if we had to describe the process to someone for them to be able to do it?
Plain English Rules For the Bot to Learn:
- If the color falls to the left of the red line then it can be described as “Darker”.
- If the color is neither to the left nor the right of the red line, then we can say the color is directly in the center. We might describe this position or color state as being “Neutral” in relation to the red line.
- If the color falls to the right of the red line then it can be described as “Brighter”.
Given these three super easy rules I believe most, if not all of you should be able to answer if a color falls to the left or the right of the red line with a high degree of accuracy.
However, your accuracy would diminish the closer the color is to the red line in the center, because you are intuitively guessing, and the colors surrounding either side of the center of the gradient all look like very similar grays, i.e. there is low contrast between them.
The colors at the ends of the color gradient (black/left and white/right) have the largest contrast between them and are the easiest to determine which side they fall on.
With our rules laid out in English, let’s return to the idea of the training data (our rules), which consists of numbers, and how we will represent our three rules as numbers.
I’ve already said the inputs and outputs will be floating point numbers, but what we haven’t covered yet is the fact that our numbers are “signed” (the range has negative and positive polarities), with our range being -1.00 to 1.00.
This means that Black can be encoded as: -1.00 or -1 for simplicity with the decimal being implied.
This also means that White can be encoded as: 1.00 or 1, also with the decimal being implied.
Given our signed float range and a few colors converted to a float within our range, we can easily determine algorithmically if a color is on the left or right of the red line, even if it’s very close to the center, with 100% accuracy (better than human capability) simply by checking if it is greater than or less than zero.
Meaning… a neural network is NOT needed to accomplish this task, but… that’s not the point! 😛
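Just to make that sign check concrete, here’s a minimal sketch of the no-neural-network version (the function name SideOfRedLine is my own for illustration, not part of the project):

```php
<?php
// No neural network needed: classify a signed float by its sign.
function SideOfRedLine($float){
    if ($float < 0) return 'Darker';   // falls left of the red line
    if ($float > 0) return 'Brighter'; // falls right of the red line
    return 'Neutral';                  // exactly on the red line
}

echo SideOfRedLine(-0.66797385620915) . PHP_EOL; // Darker
echo SideOfRedLine(0.7359477124183) . PHP_EOL;   // Brighter
echo SideOfRedLine(0) . PHP_EOL;                 // Neutral
```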
Our goal is to teach a neural network to do this nonetheless because it is a simple problem and the rules (training data) are simple enough that a beginner should be able to understand how they are derived if they exert even a modicum of effort!
Here’s what that looks like:

Notice that the first two colors are to the left of zero (darker) because they are negative and the third color is far to the right (much lighter) because it is closer to 1 than 0.
R, G, B | As Float | Side | Meaning
42, 42, 42 | -0.66797385620915 | Left | Darker
102, 102, 102 | -0.19738562091503 | Left | Darker
221, 221, 221 | 0.7359477124183 | Right | Lighter
Fascinating… but… how are you converting the colors to floats?
Okay look, this won’t be on the mid-term test and it’s not actually necessary to go over, because we won’t need to do this to train the bot. But since you are curious, here’s a function you can use to convert actual RGB & gray-scale colors to a float in the right range:
How to convert a color to a signed float between -1.00 to 1.00:
<?php
// Input a number between 0 and $max and get a number inside
// a range of -1 to 1
function ConvertColorToInputFloatRange($color_int_value, $max = 255){
    return ((($color_int_value - -1) * (1 - -1)) / ($max - 0)) + -1;
}

// RGB Color to range of -1 to 1
$R = 42;
$G = 42;
$B = 42;
echo "Pixel_RGB($R,$G,$B) = " . ConvertColorToInputFloatRange($R+$G+$B, 255+255+255) . PHP_EOL;

// RGB to Gray-scale to range of -1 to 1
$gray = ($R+$G+$B) / 3;
echo "Pixel_Grayscale($gray) = " . ConvertColorToInputFloatRange($gray, 255) . PHP_EOL;

// RGB Color to range of -1 to 1
$R = 102;
$G = 102;
$B = 102;
echo "Pixel_RGB($R,$G,$B) = " . ConvertColorToInputFloatRange($R+$G+$B, 255+255+255) . PHP_EOL;

// RGB Color to range of -1 to 1
$R = 221;
$G = 221;
$B = 221;
echo "Pixel_RGB($R,$G,$B) = " . ConvertColorToInputFloatRange($R+$G+$B, 255+255+255) . PHP_EOL;

/*
Output:
Pixel_RGB(42,42,42) = -0.66797385620915
Pixel_Grayscale(42) = -0.66274509803922
Pixel_RGB(102,102,102) = -0.19738562091503
Pixel_RGB(221,221,221) = 0.7359477124183
*/
Now that you all at least believe it’s possible to convert a color to a float between -1 & 1 forget all about this function because we won’t need it to train the bot! 😛
Then… how do we teach a neural network to do this?
Well, let’s talk about what the output for this bot looks like before we get back to creating the training data from our rules.
We know that our output is a float, and even though it is possible to teach the neural network to do this with a single output, I find I get better results from the neural network using two outputs.
This is because it’s actually very easy for the bot to learn to detect whether the input value (color) is slightly offset to the left or right of the red line, but it’s not the easiest thing for it to determine exactly where the center is (just like you, though it’s still better at it). So our margin of error (the range of colors it can’t place on either side, e.g. the colors it will say are neutral) tends to be slightly larger if we only use a single output float.
What that means is:
- Our Input looks like: float
- Our output looks like: float_left float_right
With that in mind we have now covered everything necessary to begin converting our rules to training data.
Remember, that the decimals are implied!
Lets start by teaching it what the darker colors on the left look like:
Black, RGB(0,0,0), is the leftmost color and is encoded as -1, and with two output values representing Left & Right we get a rule that looks like this:
Learn “Darker” colors (floats closest to -1.00) are on the left:
The output value on the left is set to 1, which means negative inputs polarize strongly to the left; this is reflected by the left output being 1.00 and the right output being -1.00.
-1 1 -1
Learn “Neutral” colors (floats closest to 0.00) are near the center:
I’m using -1.00 & -1.00 to mean that an input of exactly zero (the exact center, whatever color that is) is not strongly polarized to either side of the gradient.
The goal here is that this will help it learn that values near zero are not strongly polarized and that zero isn’t polarized at all.
0 -1 -1
Learn “Brighter” colors (floats closest to 1.00) are on the right:
The output value on the right is set to 1 which means positive values more strongly polarize to the right and this is reflected by the right output being 1.00 and the left output value being -1.00.
1 -1 1
FANN (the library we’re using for training the neural network) requires a header to be stored with the training data so it can read the file, and that header looks like:
Number_Of_Training_Examples Number_Of_Inputs Number_Of_Outputs
*Note the spaces between values
So, combined our training data file looks like this:
Contrastinator.data
3 1 2
-1 1 -1
0 -1 -1
1 -1 1
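If you’d rather generate that file than type it by hand, here’s a minimal sketch (the $examples layout is my own illustration, not part of the original project; it just encodes the same three rules):

```php
<?php
// Generate Contrastinator.data from the three rules.
$examples = [
    [-1, [ 1, -1]], // darker   -> polarizes left
    [ 0, [-1, -1]], // neutral  -> neither side
    [ 1, [-1,  1]], // brighter -> polarizes right
];

// FANN header: number_of_examples number_of_inputs number_of_outputs
$lines = [count($examples) . ' 1 2'];

foreach ($examples as [$input, $outputs]) {
    $lines[] = (string)$input;          // the input line
    $lines[] = implode(' ', $outputs);  // the two ideal outputs
}

file_put_contents(dirname(__FILE__) . '/Contrastinator.data', implode(PHP_EOL, $lines) . PHP_EOL);
```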
And that’s it, we’ve converted our rules to training data so… lets train the bot!
TrainContrastinator.php
You will need FANN installed to train this bot.
Follow this tutorial to learn how to install FANN.
<?php
$num_input = 1;
$num_output = 2;
$num_layers = 3;
$num_neurons_hidden = 2;
$desired_error = 0.000001;
$max_epochs = 500000;
$epochs_between_reports = 1000;

$ann = fann_create_standard($num_layers, $num_input, $num_neurons_hidden, $num_output);

if ($ann) {
    fann_set_activation_function_hidden($ann, FANN_SIGMOID_SYMMETRIC);
    fann_set_activation_function_output($ann, FANN_SIGMOID_SYMMETRIC);

    $filename = dirname(__FILE__) . "/Contrastinator.data";
    if (fann_train_on_file($ann, $filename, $max_epochs, $epochs_between_reports, $desired_error))
        echo 'Contrastinator trained.' . PHP_EOL;

    if (fann_save($ann, dirname(__FILE__) . "/Contrastinator.net"))
        echo 'Contrastinator.net saved.' . PHP_EOL;

    fann_destroy($ann);
}
It won’t take very long for the bot to learn our rules.
Once you see the message “Contrastinator trained.” you are ready to test your new bot!
TestContrastinator.php
This code will test Contrastinator using input values it has not trained on, but because we designed good rules, the bot is able to answer correctly even though it never actually saw most of the test values; it did see -1, 0 and 1, along with their “ideal” outputs.
Notice, the $brighter and $darker variables are the output of the neural network.
The $evaluation variable is a test for our benefit and does not modify or affect the bot’s answers; the answers are correct even if we skip the evaluation, it just helps us confirm and interpret programmatically what the bot’s answers mean.
<?php
$train_file = (dirname(__FILE__) . "/Contrastinator.net");
if (!is_file($train_file))
    die("Contrastinator.net has not been created! Please run TrainContrastinator.php to generate it" . PHP_EOL);

$ann = fann_create_from_file($train_file);

if ($ann) {

    foreach (range(-1, 1, 0.1) as $test_input_value) {

        $input = array($test_input_value);
        $result = fann_run($ann, $input);
        $darker = $result[0];
        $brighter = $result[1];

        if ($brighter < 0 && $darker < 0) {
            $evaluation = 'Neutral';
        }
        elseif ($brighter > $darker) {
            $evaluation = 'Brighter';
        }
        elseif ($brighter < $darker) {
            $evaluation = 'Darker';
        }

        echo 'Contrastinator(' . $input[0] . ") -> [$darker, $brighter] - Input is $evaluation" . PHP_EOL;
    }

    fann_destroy($ann);
} else {
    die("Invalid file format" . PHP_EOL);
}
Results:
Notice that it has no trouble detecting that an input of zero (0.00) is neutral and that it also correctly determines which side a color (represented by a float) falls on in relation to the center zero value.
Contrastinator(-1) -> [1, -1] - Input is Darker
Contrastinator(-0.9) -> [1, -1] - Input is Darker
Contrastinator(-0.8) -> [1, -1] - Input is Darker
Contrastinator(-0.7) -> [1, -1] - Input is Darker
Contrastinator(-0.6) -> [1, -1] - Input is Darker
Contrastinator(-0.5) -> [1, -1] - Input is Darker
Contrastinator(-0.4) -> [1, -1] - Input is Darker
Contrastinator(-0.3) -> [1, -1] - Input is Darker
Contrastinator(-0.2) -> [1, -1] - Input is Darker
Contrastinator(-0.1) -> [1, -1] - Input is Darker
Contrastinator(0) -> [-0.9997798204422, -0.99950748682022] - Input is Neutral
Contrastinator(0.1) -> [-1, 0.9995544552803] - Input is Brighter
Contrastinator(0.2) -> [-1, 0.99954569339752] - Input is Brighter
Contrastinator(0.3) -> [-1, 0.99953877925873] - Input is Brighter
Contrastinator(0.4) -> [-1, 0.9995334148407] - Input is Brighter
Contrastinator(0.5) -> [-1, 0.99952918291092] - Input is Brighter
Contrastinator(0.6) -> [-1, 0.9995259642601] - Input is Brighter
Contrastinator(0.7) -> [-1, 0.99952346086502] - Input is Brighter
Contrastinator(0.8) -> [-1, 0.99952149391174] - Input is Brighter
Contrastinator(0.9) -> [-1, 0.99952000379562] - Input is Brighter
Contrastinator(1) -> [-1, 0.99951887130737] - Input is Brighter
Contrastinator on Github
As with all my public code, you can download a copy of this project for free on my GitHub profile.
GitHub: Contrastinator
I hope this helps you better understand how to create your own training data sets, and as always, if you have any questions or trouble understanding any part of this post, please leave a comment and I would be happy to try and help you.
If you enjoy my content and or tutorials like this one, consider supporting me on Patreon for as little as $1 a month and cancel any time!
It’s not required but it helps me out.
But if all you can do is Like, Share, Comment and Subscribe, well… that’s cool too!
Much Love,
~Joy