Geek Girl Joy

Artificial Intelligence, Simulations & Software

Chromatrons Juxtaposition

Well… I’ve more or less finished the Chromatron now so you can all not use it at your latest major inconvenience… yay!

When I published Hue Going My Way, I added the ability for you to play with a single color using rotating 3D cubes, kinda like a bad Johnny Mnemonic knock-off, and at the end you get a fully customized report like one of these:

Note that I added the background after the fact cuz um…  so perdy!

But now, due to mass underwhelming demand, though still upon a request… 😉 I’ve also added the ability to compare colors by selecting them from images.

Groundbreaking?? Most definitely not!

Still, it kinda works and as prototypes go… that ain’t half bad!

Now, I hear some of you crying:

“But I don’t have a picture to compare with and I don’t even know where to get one!!”

~Some of You

Well don’t worry, because as always I’ve got you covered! 😉

Given the incredibly rare nature of digital images and the extreme difficulty in obtaining them, I have created some imitation digital images that you can use with this prototype.

Obviously, if these were real digital images I’d have to charge you like… a whole bunch, but since they’re just imitations I can manufacture them cheap enough to just give them away to everyone!

So, here are your complimentary genuine imitation digital images; chew carefully because there are sharp points.

Genuine Imitation Digital Image
A Genuine Imitation Digital Image

I was going for a “Cyberpunky” feel and clearly, I half-assed it, but only so I could have the time to half-ass the graphic novel version as well!

Genuine Imitation Digital Graphic Novel Image
A Genuine Imitation Digital Graphic Novel Image

And while I was half-assing those two images above I decided to half-ass a background for the color analysis group image too, really making those two images together two-thirdly-assed and what follows then is one-thirdly-assed… but perhaps now I’m getting too technical.

Anyway, I figured some of you might want the background for the analysis image too so here’s that as well:

A One-Thirdly-Assed Background
A Chromatic One-Thirdly-Assed Background

Consequently and thusly certainly as a result of the aforementioned triadic-assery such that, ergo, under the circumstances that being the inevitable subsequent case on account of all the things whence came before and because of this, you can now see that to be the truth.

Damn!! I must have a floating point error again?!

No worries though, I’ll correct that later with a strong neodymium magnet to the head but right now I feel like it’s time to talk a little about the Chromatron before I wrap things up here and yell at all you filthy kids to get the hell off my lawn!

Hmmm… yep!! Definitely a floating point error…

The Chromatron

Here’s the link to the Chromatron App which is hosted through Github Pages:

https://geekgirljoy.github.io/Chromatron/

Click for a live preview of the Chromatron on GitHub Pages.

It will remain available going forward / indefinitely, unless I managed to piss off somebody over there by expressing some of my opinions about receiving that award they gave me, in which case… I guess I’ll be gettin’ canceled soon?

In any case and while supplies last, if you click the second button (the unassuming gray one with red and blue on it) in the main menu you get a screen like this:

Clicking the “Browse” buttons will let you select images to use for the comparison and you can use the imitation digital images I provided above or you can use your own real digital images if you can find them.

If you want to compare two colors in the same image, just load it twice.

Once the images are loaded, the cursor will show a reticle over each image, allowing you to select a color from each. When you do, the rectangle element above the image (the one that shows the name) will change to a gradient from the selected color to black.

Also once both images have a color selected, a green “Continue” button will magically appear out of thin air at the top of the page as if by the power of digital pixies wreaking havoc in your web browser… click it and the Chromatron will analyze the selected colors and generate an image like this:

You can use the “Save Image” button to download the image and the “Copy as Text” button to get something similar to the following:

Your Favorite Colors:

First:
RGB: 2, 219, 255
HSL: 188.538, 100.0%, 50.4%
HEX: #02dbff
Analogous Colors: #02ffa5, #025cff
Split Complementary Colors: #02dbff, #ffa402, #ff025c
Triadic Colors: #dbff02, #02dbff, #ff02db
Tetradic Colors: #02dbff, #2602ff, #ff2602

Second:
RGB: 132, 28, 28
HSL: 0.000, 65.0%, 31.4%
HEX: #841c1c
Analogous Colors: #841c50, #84501c
Split Complementary Colors: #841c1c, #1c5084, #1c8450
Triadic Colors: #1c1c84, #841c1c, #1c841c
Tetradic Colors: #841c1c, #84841c, #1c8484


Chromatron: https://geekgirljoy.github.io/Chromatron/
Created By: https://geekgirljoy.wordpress.com/

How It Works

To keep it simple, the way these color values are derived is by converting your selected color from RGB color space values to the HSL color model… which admittedly is kinda like slathering a cube in rainbow paint made from mathematical unicorn puke and then hanging it up to dry so you can use its hexagonal shadow and a wand made out of a vector to scry hidden truths about the mysterious nature of color. What follows is the typical “Oh Freyja we beseech thee…” and a human sacrifice, super boring technical stuff, but why this is useful is because once you arrange color like this it’s easy to “rotate” the color using the wand er… vector and get a new but related hue, or “compute” different colors that share luminosity, or keep the same color and alter the saturation, etc. Just mix in a little color theory and when you are done, convert back to RGB and poof, your green eggs and ham are now nachos! Mmmm nachos!

Anyway, all fun stuff for sure but I’m not going to bother to explain it any further because if you care about the details, here’s the wiki article on it: https://en.wikipedia.org/wiki/HSL_and_HSV

Here’s some code in PHP & JS that demonstrates how I did it:

https://github.com/geekgirljoy/PHP/blob/master/Loose%20Code/RGB_HSL.php

https://github.com/geekgirljoy/JavaScript/blob/master/Loose%20Code/RGB_HSL.js
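If you’d rather see the gist inline, here’s a rough sketch of the standard RGB ↔ HSL math plus a hue “rotation” helper. This is my own simplified version for illustration, not the exact code from the files linked above:

```javascript
// Convert RGB (0-255) to HSL (H in degrees, S and L as 0-1 fractions).
function rgbToHsl(r, g, b) {
  r /= 255; g /= 255; b /= 255;
  const max = Math.max(r, g, b), min = Math.min(r, g, b), d = max - min;
  const l = (max + min) / 2;
  const s = d === 0 ? 0 : d / (1 - Math.abs(2 * l - 1));
  let h = 0;
  if (d !== 0) {
    if (max === r)      h = 60 * (((g - b) / d) % 6);
    else if (max === g) h = 60 * ((b - r) / d + 2);
    else                h = 60 * ((r - g) / d + 4);
  }
  return [(h + 360) % 360, s, l];
}

// Convert HSL back to RGB (0-255 per channel).
function hslToRgb(h, s, l) {
  const c = (1 - Math.abs(2 * l - 1)) * s;             // chroma
  const x = c * (1 - Math.abs(((h / 60) % 2) - 1));    // intermediate
  const m = l - c / 2;                                 // lightness offset
  let rgb;
  if      (h <  60) rgb = [c, x, 0];
  else if (h < 120) rgb = [x, c, 0];
  else if (h < 180) rgb = [0, c, x];
  else if (h < 240) rgb = [0, x, c];
  else if (h < 300) rgb = [x, 0, c];
  else              rgb = [c, 0, x];
  return rgb.map(v => Math.round((v + m) * 255));
}

// "Rotate" the hue to find a related color while keeping S and L.
function rotateHue(r, g, b, degrees) {
  const [h, s, l] = rgbToHsl(r, g, b);
  return hslToRgb((h + degrees + 360) % 360, s, l);
}
```

Rotating by ±30° gives the analogous colors, ±120° the triadic ones and 180° the complement (expect off-by-one rounding in the final RGB channels).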

And as for the Chromatron, there are three main files involved:

Index.html: https://github.com/geekgirljoy/Chromatron/blob/master/index.html

This file is what is loaded by your web browser first and it starts the whole process that results in a running app.

Style.css: https://github.com/geekgirljoy/Chromatron/blob/master/assets/css/style.css

This file contains most of the “style” information that makes buttons have a certain size and color etc.

Chromatron.js: https://github.com/geekgirljoy/Chromatron/blob/master/assets/js/chromatron.js

This file contains most of the real code that makes the Chromatron work.

And with that… please enjoy the Chromatron.


Would you like more free and open source apps like Chromatron? Consider supporting me through Patreon.

I’d like to thank Marcel for his generous and ongoing support!

But if all you can do is Like, Share, Comment and Subscribe… well that’s cool too!

Much Love,

~Joy

The Contrast-a-tron

Today we’re going to continue my introduction to creating your own data sets series by building Contrast-a-tron.

Now, I know what you are thinking:

“We already did that, like… a while ago!”

Here’s the thing though… we didn’t! 😉

And besides, it wasn’t that long ago!

What we built before was a Contrast-inator, and a Contrast-inator and a Contrast-a-tron are not the same thing! 😛

Let me explain…

  • The Contrast-inator: Learned how to “predict/classify” whether a single input color fell to the left (darker) or to the right (lighter) of an imaginary red line at the exact center of a 2D gray-scale gradient representation of the 0-255 RGB 3D color space.
  • The Contrast-a-tron (this bot): Is a much smarter and more interesting bot. It will learn how to “predict/classify” two input colors as “darker” and “lighter” or “the same” compared with each other, which is a much more challenging task for the bot to learn.

But before we get into that I think I owe you a wallpaper.

A Wallpaper

Don’t mind the title, it’s definitely not a template placeholder! 😛

Anyway, just due to me being me, I have a lot of old robots and parts laying around, and while I was out in the ol’ boneyard I found this really beat up crypto mining bot for us to play with.

I built it back when I was going to launch my own currency (A long time ago when it was still a cool thing to do and not everyone was like “my ICO is next week, you should mine sum!!!!” 😉 😉 ), yeah… no thanks!

Anyway, the bot’s memory is completely corrupt, but… the optical circuitry and hardware are still functional, and since mining bots are built to operate deep under miles of data in extreme low light conditions at high speed, its visual acuity is top-notch and it even supports infrared mode!

So don’t let its tiny eyes fool you, they are incredibly sensitive, which is perfect for today’s project! 🙂

Contrast_a_tron 1920 x 1080 Wallpaper
Contrast_a_tron 1920 x 1080 Wallpaper

I should add that not all posts get a theme song but today’s is Night Business by Perturbator (not a sponsor), I love the little vocoded? robotic voice about two minutes and twenty seconds in. It’s definitely what this bot’s voice sounds like! 😛

Also before we proceed, I’d just like to set the record straight and confirm that I’m definitely not Satoshi Nakamoto!

The Contrast-a-tron

To begin, let’s first look at what our Contrast-inator does:

Is this pixel brighter or darker than the red line?
Is this pixel brighter or darker than the red line?

It takes a color/shade as an input and then tries to determine which side of the red line it falls on.

Not that useful but it’s good for operating inside a known range that never changes. Like, was the light red or green kinda stuff, or conceptually like a line following robot.

Anyway, what if you wanted to start comparing two colors at the same time and to make things even more complicated, what if the gradient wasn’t always facing the same direction (meaning the “brighter/darker” pixel could be on the left or the right)?

For most of you that task is trivial and you could do it almost unconsciously or with minimal mental effort, not the Contrast-inator though!

To compare two pixels, the Contrast-inator must evaluate each separately, and because the red line (which you can imagine as “where the robot is standing” on the gradient while it’s evaluating a color) doesn’t change, if both colors are to its left or to its right (relative to the bot’s vantage position / the red line), then it is completely unable to compare them.

Because these colors are on the same side of the red line, the Contrast-inator cannot compare them but the Contrast-a-tron can.
Because these colors are on the same side of the red line, the Contrast-inator cannot compare them but the Contrast-a-tron can.

Just to be clear, the Contrast-inator will say that both pixels/shades are “brighter/to the right” of zero (where it stands / its anchor) but it cannot figure out which of the two colors is brighter, and the same is true if both colors are darker (to the left of the red line).

Further, there is also no guarantee that we will always present the colors to the bot with the darker one on the left and the lighter one on the right. Sometimes the gradient will be lighter on the left and darker on the right, and we need the bot to notice that difference and accommodate that circumstance.

How the Contrast-a-tron Works Differently

The Contrast-a-tron isn’t anchored to zero (the center of the gradient). Instead, we can think of it as moving around the gradient to try and find the “center” of the two colors (whatever color that might be), and from there it can evaluate which side (input color / shade) is brighter and which is darker.

In the event that the input colors/shades are the same, both Input A & B will be in the same place, which means it will be neither to the right nor to the left of the bot.

How the Contrast-a-tron works differently.
How the Contrast-a-tron works differently.

How the Neural Networks Differ

I didn’t spend a lot of time discussing the structure of the neural network when we built the Contrast-inator but now that we have something to compare it against let’s look at a visual representation of each network.

How the Contrast-inator and the Contrast-a-tron neural networks differ.
How the Contrast-inator and the Contrast-a-tron neural networks differ.

On the left you see the Contrast-inator with its single input neuron, a hidden layer containing two hidden neurons and an output layer with two output neurons.

Additionally, you see two “Bias” neurons represented in yellow that help the network learn what we want by “biasing” the output of one layer to the next so that it is never “none” (zero / no output).

What this means is that bias neurons add their value to the output signal of each neuron in their layer, so that there is never no “activation signal” and some value always propagates forward.

All layers except the output layer will always have a single bias neuron. There is no need for a bias neuron on the output layer because there is no signal to propagate beyond the output neurons, so it wouldn’t serve any purpose.

Bias neurons have no inputs.

In practice we don’t have to concern ourselves with the bias neurons and the ANN will manage them itself, but I like to draw them because they do exist and they are part of the network. However, it’s common for people not to include them in diagrams because they are so easy to ignore; we don’t really need to do anything with them and they are just there to help the signal propagate.
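Mechanically, a bias is just an extra constant term added to each neuron’s weighted sum before the activation function squashes it. Here’s a toy sketch of a single neuron (my own illustration, not FANN’s actual internals), using tanh since it has the same shape as the symmetric sigmoid we’ll train with later:

```javascript
// Toy forward pass for one neuron: weighted sum of the inputs plus the
// bias neuron's contribution (a constant 1 scaled by its own weight),
// squashed by a symmetric activation function.
function neuronOutput(inputs, weights, biasWeight) {
  let sum = biasWeight; // the bias fires even when every input is zero
  for (let i = 0; i < inputs.length; i++) {
    sum += inputs[i] * weights[i];
  }
  return Math.tanh(sum);
}
```

Notice that with all-zero inputs the neuron still emits tanh(biasWeight), so some signal always propagates forward, which is exactly the job described above.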

In any case, the Contrast-a-tron differs by including a second input neuron (for the second shade/color) and a second hidden layer which helps the Contrast-a-tron to be a little “smarter” and learn what we want it to.

I have a post about how to create diagrams like this called Visualizing Your FANN Neural Network and you can download a copy of the open source visualization software I wrote for free from my GitHub account here: https://github.com/geekgirljoy/FANN-Neural-Network-Visualizer

Training The Contrast-a-tron

When we created the Contrast-inator, I walked you through each training example and how it was derived because it was a very small data set requiring only three examples. This data set is a bit longer, with thirteen examples, so it will be a lot easier to show you the data set and then draw you a picture than to type a description. But before we look at the training data, let’s make sure we understand the outputs.

Understanding the Contrast-a-tron output.
Understanding the Contrast-a-tron output.

There are two outputs and we’ll call them A & B and they are in that order.

In an ideal world the bot will give us -1 & -1 to mean they are the same, 1 & -1 to mean that A is Brighter and B is Darker and -1 & 1 to mean A is Darker and B is Brighter.

In reality… what we get is a number that comes close but isn’t exactly -1 or 1, called a “floating point number” in computer science, though most people just call it a decimal number, for example 0.123.

In practice this means that as long as A & B are not both negative, whichever has the higher value is the “brighter” color and whichever has the lower value is the “darker” color; otherwise they are the same (A == B).
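That decision rule is easy to express in code. A quick sketch (the function name is mine, not from the Contrast-a-tron source):

```javascript
// Interpret the Contrast-a-tron's two raw outputs.
// Both outputs at or below zero -> the shades are the same.
// Otherwise the larger output wins: A brighter or B brighter.
function interpretOutputs(a, b) {
  if (a <= 0 && b <= 0) return 'Neutral/Same';
  return a > b ? 'A is Brighter' : 'B is Brighter';
}
```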

Let’s look at the training data and visualize it.

Contrast_a_tron.data

This is the complete Contrast-a-tron training data.

The first line is the “FANN Header” which consists of: Total_Number_of_Example_Sets Number_of_Inputs Number_of_Outputs\n

Note the spaces between the values on the header line as well as between the inputs and the output values.

Line 2 (-1 -1) is an input example. Line 3 (-1 -1) is an output example and the pattern of Input_Example\nOutput_Example\n continues to the end of the document.

13 2 2
-1 -1
-1 -1
-0.5 -0.5
-1 -1
0 0
-1 -1
0.5 0.5
-1 -1
1 1
-1 -1
1 -1
1 -1
0.5 0
1 -1
0 0.5
-1 1
-1 -0.5
-1 1
-0.5 -1
1 -1
1 0.5
1 -1
0.5 1
-1 1
-1 1
-1 1
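If you ever want to generate a file in this format yourself, the layout above is simple enough to script. A hedged sketch (my own helper, shown with three of the thirteen examples):

```javascript
// Build a FANN training file string: a header line of
// "num_examples num_inputs num_outputs", then alternating
// input and output lines with space-separated values.
function buildFannData(examples) {
  const numInputs = examples[0].input.length;
  const numOutputs = examples[0].output.length;
  const lines = [`${examples.length} ${numInputs} ${numOutputs}`];
  for (const ex of examples) {
    lines.push(ex.input.join(' '));   // input example line
    lines.push(ex.output.join(' '));  // output example line
  }
  return lines.join('\n') + '\n';
}

// A few of the Contrast-a-tron examples from above:
const data = buildFannData([
  { input: [-1, -1], output: [-1, -1] }, // same shade -> Neutral
  { input: [1, -1],  output: [1, -1]  }, // A is Brighter
  { input: [-1, 1],  output: [-1, 1]  }, // B is Brighter
]);
```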

Let’s visualize this training data which should hopefully give you a more intuitive sense for how these numbers translate to information the Contrast-a-tron ANN can use to learn.

Visualizing the Contrast-a-tron training data set
Visualizing the Contrast-a-tron training data set

The Code

Here’s the code used to train. I have other tutorials covering what this all means available on my Topics and Posts page so I won’t go into detail, but basically it sets up a training environment, trains the Contrast_a_tron ANN and saves the result to a FANN .net network file.

TrainContrast_a_tron.php

<?php

// Network topology: 2 inputs, a hidden layer of 2 neurons,
// a second hidden layer of 1 neuron, and 2 outputs.
$num_input = 2;
$num_output = 2;
$layers = array($num_input, 2, 1, $num_output);
$ann = fann_create_standard_array(count($layers), $layers);

// Training parameters.
$desired_error = 0.0000000001;
$max_epochs = 900000;
$epochs_between_reports = 10;

if ($ann) {
    // Symmetric sigmoid activations output in the -1 to 1 range,
    // matching the training data.
    fann_set_activation_function_hidden($ann, FANN_SIGMOID_SYMMETRIC);
    fann_set_activation_function_output($ann, FANN_SIGMOID_SYMMETRIC);
    fann_set_training_algorithm($ann, FANN_TRAIN_INCREMENTAL);

    // Train from the data file and save the resulting network.
    $filename = dirname(__FILE__) . "/Contrast_a_tron.data";
    if (fann_train_on_file($ann, $filename, $max_epochs, $epochs_between_reports, $desired_error)){
        echo 'Contrast_a_tron trained.' . PHP_EOL;
    }

    if (fann_save($ann, dirname(__FILE__) . "/Contrast_a_tron.net")){
        echo 'Contrast_a_tron.net saved.' . PHP_EOL;
    }

    fann_destroy($ann);
}


TestContrast_a_tron.php

We next need to test the ANN, so as the test inputs I use two nested “for loops”: the outer counting down from 1 to -1 and the inner counting up from -1 to 1, each stepping by 0.2.

<?php

$train_file = (dirname(__FILE__) . "/Contrast_a_tron.net");
if (!is_file($train_file))
    die("Contrast_a_tron.net has not been created! Please run TrainContrast_a_tron.php to generate it" . PHP_EOL);

$ann = fann_create_from_file($train_file);

if ($ann) {
    
    foreach(range(1, -1, -0.2) as $test_input_value_a){
        foreach(range(-1, 1, -0.2) as $test_input_value_b){
        
            $input = array($test_input_value_a, $test_input_value_b);
            $result = fann_run($ann, $input);

            $a = number_format($result[0], 4);
            $b = number_format($result[1], 4);
            
            // What answer did the ANN give?
            $answer = NULL;
            $evaluation = '';
            if($a <= 0 && $b <= 0){
                $evaluation = 'Neutral/Same';
                $answer = 0;
            }
            elseif($a > $b){
                $evaluation = 'A is Brighter';
                $answer = -1;
            }
            elseif($b > $a){
                $evaluation = 'B is Brighter';
                $answer = 1;
            }
            else{ 
                $evaluation = ' OOPSIES!!!!!!!';
            }

            echo 'Contrast_a_tron(' . $input[0] . ', ' . $input[1] . ") -> [$a, $b] - $evaluation" . PHP_EOL; 
        }
    }
    fann_destroy($ann);
}
else {
    die("Invalid file format" . PHP_EOL);
}

Results

The Results/Output of the test code.

Contrast_a_tron(1, -1) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(1, -0.8) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(1, -0.6) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(1, -0.4) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(1, -0.2) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(1, 0) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(1, 0.2) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(1, 0.4) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(1, 0.6) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(1, 0.8) -> [0.9986, -1.0000] - A is Brighter
Contrast_a_tron(1, 1) -> [-1.0000, -0.1815] - Neutral/Same
Contrast_a_tron(0.8, -1) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.8, -0.8) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.8, -0.6) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.8, -0.4) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.8, -0.2) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.8, 0) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.8, 0.2) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.8, 0.4) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.8, 0.6) -> [0.9992, -1.0000] - A is Brighter
Contrast_a_tron(0.8, 0.8) -> [-1.0000, -0.2218] - Neutral/Same
Contrast_a_tron(0.8, 1) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0.6, -1) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.6, -0.8) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.6, -0.6) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.6, -0.4) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.6, -0.2) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.6, 0) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.6, 0.2) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.6, 0.4) -> [0.9995, -1.0000] - A is Brighter
Contrast_a_tron(0.6, 0.6) -> [-1.0000, -0.4005] - Neutral/Same
Contrast_a_tron(0.6, 0.8) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0.6, 1) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0.4, -1) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.4, -0.8) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.4, -0.6) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.4, -0.4) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.4, -0.2) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.4, 0) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.4, 0.2) -> [0.9996, -1.0000] - A is Brighter
Contrast_a_tron(0.4, 0.4) -> [-1.0000, -0.6543] - Neutral/Same
Contrast_a_tron(0.4, 0.6) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0.4, 0.8) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0.4, 1) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0.2, -1) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.2, -0.8) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.2, -0.6) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.2, -0.4) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.2, -0.2) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.2, 0) -> [0.9996, -1.0000] - A is Brighter
Contrast_a_tron(0.2, 0.2) -> [-1.0000, -0.8580] - Neutral/Same
Contrast_a_tron(0.2, 0.4) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0.2, 0.6) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0.2, 0.8) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0.2, 1) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0, -1) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0, -0.8) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0, -0.6) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0, -0.4) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0, -0.2) -> [0.9996, -1.0000] - A is Brighter
Contrast_a_tron(0, 0) -> [-1.0000, -0.9557] - Neutral/Same
Contrast_a_tron(0, 0.2) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0, 0.4) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0, 0.6) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0, 0.8) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0, 1) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.2, -1) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(-0.2, -0.8) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(-0.2, -0.6) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(-0.2, -0.4) -> [0.9995, -1.0000] - A is Brighter
Contrast_a_tron(-0.2, -0.2) -> [-1.0000, -0.9878] - Neutral/Same
Contrast_a_tron(-0.2, 0) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.2, 0.2) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.2, 0.4) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.2, 0.6) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.2, 0.8) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.2, 1) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.4, -1) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(-0.4, -0.8) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(-0.4, -0.6) -> [0.9994, -1.0000] - A is Brighter
Contrast_a_tron(-0.4, -0.4) -> [-1.0000, -0.9965] - Neutral/Same
Contrast_a_tron(-0.4, -0.2) -> [-1.0000, 0.9997] - B is Brighter
Contrast_a_tron(-0.4, 0) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.4, 0.2) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.4, 0.4) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.4, 0.6) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.4, 0.8) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.4, 1) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.6, -1) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(-0.6, -0.8) -> [0.9990, -1.0000] - A is Brighter
Contrast_a_tron(-0.6, -0.6) -> [-0.9999, -0.9989] - Neutral/Same
Contrast_a_tron(-0.6, -0.4) -> [-1.0000, 0.9996] - B is Brighter
Contrast_a_tron(-0.6, -0.2) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.6, 0) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.6, 0.2) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.6, 0.4) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.6, 0.6) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.6, 0.8) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.6, 1) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.8, -1) -> [0.9981, -1.0000] - A is Brighter
Contrast_a_tron(-0.8, -0.8) -> [-0.9999, -0.9995] - Neutral/Same
Contrast_a_tron(-0.8, -0.6) -> [-1.0000, 0.9993] - B is Brighter
Contrast_a_tron(-0.8, -0.4) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.8, -0.2) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.8, 0) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.8, 0.2) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.8, 0.4) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.8, 0.6) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.8, 0.8) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.8, 1) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-1, -1) -> [-0.9998, -0.9998] - Neutral/Same
Contrast_a_tron(-1, -0.8) -> [-1.0000, 0.9982] - B is Brighter
Contrast_a_tron(-1, -0.6) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-1, -0.4) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-1, -0.2) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-1, 0) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-1, 0.2) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-1, 0.4) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-1, 0.6) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-1, 0.8) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-1, 1) -> [-1.0000, 0.9998] - B is Brighter

GitHub

As always you can download a copy of this code on GitHub for free and if you have any questions or comments please leave them below.

Contrast-a-tron on GitHub: https://github.com/geekgirljoy/Contrast-a-tron


If you find yourself thinking…

“Joy you’re the best!”

I’d say….

If you support the resistance against Big AI then consider supporting my efforts through Patreon.

But, if all you can do is Like, Share, Comment and Subscribe… well that’s cool too!

Much Love,

~Joy

The Contrast-inator

Let’s keep things simple, you want to read a post and conveniently I’ve written one for you!

I’ll spare everyone my recent fascinations with macabre subjects and opt to get right to the topic of the day!

Anyway, as the Jane Goodall of bots, I’ve learned a little about how to communicate with them using rules they understand, and today I’m going to show you how to make rules that get a bot to understand, and do, what you want it to do.

But… before we get into that, here’s the wallpaper!

The Contrast-inator Bot Wallpaper

Why Contrast-inator? Well, as far as the “-inator” part is concerned let’s just say I enjoy Phineas and Ferb. 😛

As for the “Contrast” part, we’ll get into that next but the big eyes of this bot are important. 😛

Also… it doesn’t need hands, arms, legs or feet to do its job so I didn’t 3D print the parts, waste not want not! 😉 😛

Contrast-inator 1920x1080 Wallpaper
Contrast-inator 1920×1080 Wallpaper

The Contrast-inator

So, recently I received a few comments that amount to something along the lines of “How do you decide on the ‘rules’ for training your neural network?”.

My response is basically if you can “phrase” a training rule in a way the bot can understand, then you can make the rules to be whatever you want/need them to be.

And the thing we’re going to try to teach the bot today to help us explore this topic is… given an input value, tell us if the value falls to the “left” or to the “right” of an “anchor point”.

That sounds more complicated than it really is and I intend this tutorial to be for beginners so let me try to simplify my description… uh… so, think of a gray-scale gradient of black to white.

Imagine a red line in the center of the gradient.

Now, given some gray-scale color e.g. RGB(102, 102, 102) as “input”, how can we train a bot to tell us if the color falls to the left or to the right of the red line… like this:

Is this pixel brighter or darker than the red line?
Is this pixel brighter or darker than the red line?

I know that might seem complicated while at the same time also seem kind of useless… I assure you that neither stipulated condition is true.

In regard to usefulness, just as a hypothetical example… a neural network like this could act as the “brain” of a “line follower” robot, but instead of reading the voltages directly from a photodiode and then using if/else to direct motors, you could pass the diode voltages to the neural network and let it decide which motor to move via an H-Bridge / digital potentiometer.

An Arduino would need a WiFi Shield for something like that to work, but a line follower built on something like a Raspberry Pi could run its neural network “brain” locally.

Which brings us back to complexity and how we build a rule set to teach our Contrast-inator bot to tell us if a pixel is brighter or darker than the color where the red line is.

Forget about what I said about the hypothetical line-follower robot, the Arduino and the Raspberry Pi… it’s more complicated than I want this post to be and it’s just an example anyway. 😛

Let’s start over…

We know that any answers our bot gives us (the output) will look like a “floating point” number (a decimal value e.g. 0.01) and basically our input will also be a floating point number too.

With this in mind we can start to imagine that our training data inputs and the associated outputs will look like a series of numbers.

But what will the numbers mean and how can we know if the bot is correct?

Well, let’s step back again and think about what rules we need to teach the bot first before we even worry about encoding the training data for the bot.

What rules might a human need if we had to describe the process to someone for them to be able to do it?

Plain English Rules For the Bot to Learn:

  1. If the color falls to the left of the red line then it can be described as “Darker”.
  2. If the color is neither to the left nor the right of the red line, then we can say the color is directly in the center. We might describe this position or color state as being “Neutral” in relation to the red line.
  3. If the color falls to the right of the red line then it can be described as “Brighter”.

Given these three super easy rules I believe most, if not all of you should be able to answer if a color falls to the left or the right of the red line with a high degree of accuracy.

However, your accuracy would diminish the closer the color is to the red line in the center, because you are intuitively guessing and the colors that surround either side of the center of the gradient all look like very similar grays, i.e. there is low contrast between them.

The colors at the ends of the color gradient (black/left and white/right) have the largest contrast between them and are the easiest to determine which side they fall on.

With our rules laid out in English, let’s return to the idea of the training data (our rules), which consists of numbers, and how we will represent our three rules as numbers.

I’ve already said the inputs and outputs will be floating point numbers, but what we haven’t covered yet is that our numbers are “signed” (the range has negative and positive polarities), with our range being -1.00 to 1.00.

This means that Black can be encoded as: -1.00 or -1 for simplicity with the decimal being implied.

This also means that White can be encoded as: 1.00 or 1, also with the decimal being implied.

Given our signed float range and a few colors converted to floats within that range, we can easily determine algorithmically whether a color is on the left or right of the red line, even if it’s very close to the center, with 100% accuracy (better than human capability) simply by checking whether it is greater than or less than zero.

Meaning… a neural network is NOT needed to accomplish this task, but… that’s not the point! 😛
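In fact, the sign check is so simple it fits in a few lines of plain PHP. Here’s a minimal sketch (the function name is mine, not part of the project) of the non-neural-network approach, assuming the color has already been converted to the signed float range:

```php
<?php
// Classify a signed float in the -1.00 to 1.00 range by its sign alone.
// This is the trivial, non-neural-network solution to the same problem.
function ClassifyBySign(float $value): string {
    if ($value < 0) return 'Darker';   // left of the red line
    if ($value > 0) return 'Brighter'; // right of the red line
    return 'Neutral';                  // exactly at the center
}

echo ClassifyBySign(-0.66797385620915) . PHP_EOL; // Darker
echo ClassifyBySign(0.0) . PHP_EOL;               // Neutral
echo ClassifyBySign(0.7359477124183) . PHP_EOL;   // Brighter
```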

Our goal is to teach a neural network to do this nonetheless because it is a simple problem and the rules (training data) are simple enough that a beginner should be able to understand how they are derived if they exert even a modicum of effort!

Here’s what that looks like:

Example Colors to Float Range Input
Example Colors to Float Range Input

Notice that the first two colors are to the left of zero (darker) because they are negative and the third color is far to the right (much lighter) because it is closer to 1 than 0.

Color (R, G, B)    As Float            Side    Meaning
42, 42, 42         -0.66797385620915   Left    Darker
102, 102, 102      -0.19738562091503   Left    Darker
221, 221, 221      0.7359477124183     Right   Lighter

Fascinating… but… how are you converting the colors to floats?

Okay look, this won’t be on the mid-term test and it’s in no way actually necessary to go over because we won’t need to do this to train the bot, but since you are curious, here’s a function you can use to convert actual RGB & grayscale colors to a float in the right range:

How to convert a color to a signed float between -1.00 to 1.00:

<?php 
// Input a number between 0 and $max and get a number inside
// a range of -1 to 1
function ConvertColorToInputFloatRange($color_int_value, $max = 255){
    return ((($color_int_value - -1) * (1 - -1)) / ($max - 0)) + -1;
}

// RGB Color to range of -1 to 1
$R = 42;
$G = 42;
$B = 42;
echo  "Pixel_RGB($R,$G,$B) = " . ConvertColorToInputFloatRange($R+$G+$B, 255+255+255) . PHP_EOL;

// RGB to Gray-scale to range of -1 to 1
$gray = ($R+$G+$B) / 3;
echo  "Pixel_Grayscale($gray) = " . ConvertColorToInputFloatRange($gray, 255) . PHP_EOL;

// RGB Color to range of -1 to 1
$R = 102;
$G = 102;
$B = 102;
echo  "Pixel_RGB($R,$G,$B) = " . ConvertColorToInputFloatRange($R+$G+$B, 255+255+255) . PHP_EOL;

// RGB Color to range of -1 to 1
$R = 221;
$G = 221;
$B = 221;
echo  "Pixel_RGB($R,$G,$B) = " . ConvertColorToInputFloatRange($R+$G+$B, 255+255+255) . PHP_EOL;


/*
Output:

Pixel_RGB(42,42,42) = -0.66797385620915
Pixel_Grayscale(42) = -0.66274509803922
Pixel_RGB(102,102,102) = -0.19738562091503
Pixel_RGB(221,221,221) = 0.7359477124183

*/

Now that you all at least believe it’s possible to convert a color to a float between -1 & 1 forget all about this function because we won’t need it to train the bot! 😛

Then… how do we teach a neural network to do this?

Well, let’s talk about what the output for this bot looks like before we get back to creating the training data from our rules.

We know that our output is a float, and even though it is possible to teach the neural network to do this with a single output, I find I get better results from the neural network using two outputs.

This is because it’s actually very easy for the bot to learn to detect whether the input value (color) is slightly offset to the left or right of the red line, but it’s not as easy for it to determine exactly where the center is (just like you, though it’s still better at it). So our margin of error (the number of colors it can’t place on the right or left, i.e. the colors it will call neutral) tends to be slightly larger if we use only a single output float.

What that means is:

  1. Our Input looks like: float
  2. Our output looks like: float_left float_right

With that in mind we have now covered everything necessary to begin converting our rules to training data.

Remember, that the decimals are implied!

Let’s start by teaching it what the darker colors on the left look like:

Black, RGB(0,0,0), is the farthest color to the left and is encoded as -1, and with two output values representing Left & Right we get a rule that looks like this:

Learn “Darker” colors (floats closest to -1.00) are on the left:

The output value on the left is set to 1, meaning negative values polarize strongly to the left; this is reflected by the left output being 1.00 and the right output being -1.00.

-1
1 -1

Learn “Neutral” colors (floats closest to 0.00) are near the center:

I’m using -1.00 & -1.00 to mean that an input of exactly zero is not polarized to either side of the gradient; zero (the exact center, whatever color that is) doesn’t polarize in either direction.

The goal here is that this will help it learn that values near zero are only weakly polarized and that zero isn’t polarized at all.

0
-1 -1

Learn “Brighter” colors (floats closest to 1.00) are on the right:

The output value on the right is set to 1 which means positive values more strongly polarize to the right and this is reflected by the right output being 1.00 and the left output value being -1.00.

1
-1 1

 

FANN (the library we’re using to train the neural network) requires that a header be stored with the training data so it can read the file. The header looks like this:

Number_Of_Training_Examples Number_Of_Inputs Number_Of_Outputs

*Note the spaces between values

So, combined our training data file looks like this:

Contrastinator.data

3 1 2
-1
1 -1
0
-1 -1
1
-1 1
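If you’d rather not hand-write the file, here’s a small sketch (my own, not part of the original project) that generates the exact same Contrastinator.data, header included:

```php
<?php
// Each training example is a pair: [inputs, ideal outputs].
$examples = [
    [[-1], [1, -1]],  // Darker: -1 polarizes to the left output
    [[0],  [-1, -1]], // Neutral: 0 polarizes to neither output
    [[1],  [-1, 1]],  // Brighter: 1 polarizes to the right output
];

// FANN header: Number_Of_Training_Examples Number_Of_Inputs Number_Of_Outputs
$lines = [count($examples) . ' 1 2'];
foreach ($examples as [$inputs, $outputs]) {
    $lines[] = implode(' ', $inputs);
    $lines[] = implode(' ', $outputs);
}
file_put_contents('Contrastinator.data', implode("\n", $lines) . "\n");
```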

And that’s it, we’ve converted our rules to training data, so… let’s train the bot!

TrainContrastinator.php

You will need FANN installed to train this bot.

Follow this tutorial to learn how to install FANN.

<?php

$num_input = 1;
$num_output = 2;
$num_layers = 3;
$num_neurons_hidden = 2;
$desired_error = 0.000001;
$max_epochs = 500000;
$epochs_between_reports = 1000;

$ann = fann_create_standard($num_layers, $num_input, $num_neurons_hidden, $num_output);

if ($ann) {
    fann_set_activation_function_hidden($ann, FANN_SIGMOID_SYMMETRIC);
    fann_set_activation_function_output($ann, FANN_SIGMOID_SYMMETRIC);

    $filename = dirname(__FILE__) . "/Contrastinator.data";
    if (fann_train_on_file($ann, $filename, $max_epochs, $epochs_between_reports, $desired_error))
        echo 'Contrastinator trained.' . PHP_EOL;

    if (fann_save($ann, dirname(__FILE__) . "/Contrastinator.net"))
        echo 'Contrastinator.net saved.' . PHP_EOL;

    fann_destroy($ann);
}

It won’t take very long for the bot to learn our rules.

Once you see the message “Contrastinator trained.” you are ready to test your new bot!

TestContrastinator.php

This code will test Contrastinator using input values it has not trained on. Because we designed good rules, the bot is able to answer correctly even though it never actually saw most of the test values; it did see -1, 0 and 1, though, along with their “ideal” outputs.

Notice, the $darker and $brighter variables are the output of the neural network.

The $evaluation variable is a test for our benefit and does not modify or affect the bot’s answers; they are correct even if we skip the evaluation. It just helps us confirm/interpret programmatically what the bot’s answers mean.

<?php
$train_file = (dirname(__FILE__) . "/Contrastinator.net");
if (!is_file($train_file))
    die("Contrastinator.net has not been created! Please run TrainContrastinator.php to generate it" . PHP_EOL);

$ann = fann_create_from_file($train_file);
if ($ann) {
    
    foreach(range(-1, 1, 0.1) as $test_input_value){
        
        $input = array($test_input_value);
        $result = fann_run($ann, $input);
        $darker = $result[0];
        $brighter = $result[1];
        
        if($brighter < 0 && $darker < 0){
            $evaluation = 'Neutral';
        }
        elseif($brighter > $darker){
            $evaluation = 'Brighter';
        }
        elseif($brighter < $darker){
            $evaluation = 'Darker';
        }                
                
        echo 'Contrastinator(' . $input[0] . ") -> [$darker, $brighter] - Input is $evaluation" . PHP_EOL; 
    }
        
    fann_destroy($ann);
} else {
    die("Invalid file format" . PHP_EOL);
}

Results:

Notice that it has no trouble detecting that an input of zero (0.00) is neutral and that it also correctly determines which side a color (represented by a float) falls on in relation to the center zero value.

Contrastinator(-1) -> [1, -1] - Input is Darker
Contrastinator(-0.9) -> [1, -1] - Input is Darker
Contrastinator(-0.8) -> [1, -1] - Input is Darker
Contrastinator(-0.7) -> [1, -1] - Input is Darker
Contrastinator(-0.6) -> [1, -1] - Input is Darker
Contrastinator(-0.5) -> [1, -1] - Input is Darker
Contrastinator(-0.4) -> [1, -1] - Input is Darker
Contrastinator(-0.3) -> [1, -1] - Input is Darker
Contrastinator(-0.2) -> [1, -1] - Input is Darker
Contrastinator(-0.1) -> [1, -1] - Input is Darker
Contrastinator(0) -> [-0.9997798204422, -0.99950748682022] - Input is Neutral
Contrastinator(0.1) -> [-1, 0.9995544552803] - Input is Brighter
Contrastinator(0.2) -> [-1, 0.99954569339752] - Input is Brighter
Contrastinator(0.3) -> [-1, 0.99953877925873] - Input is Brighter
Contrastinator(0.4) -> [-1, 0.9995334148407] - Input is Brighter
Contrastinator(0.5) -> [-1, 0.99952918291092] - Input is Brighter
Contrastinator(0.6) -> [-1, 0.9995259642601] - Input is Brighter
Contrastinator(0.7) -> [-1, 0.99952346086502] - Input is Brighter
Contrastinator(0.8) -> [-1, 0.99952149391174] - Input is Brighter
Contrastinator(0.9) -> [-1, 0.99952000379562] - Input is Brighter
Contrastinator(1) -> [-1, 0.99951887130737] - Input is Brighter
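If you want to reuse the interpretation step outside the test script, the same logic can be factored into a small pure function (a sketch; the function name is mine):

```php
<?php
// Map the network's two raw outputs to a human-readable label,
// mirroring the $evaluation logic in TestContrastinator.php.
function EvaluateOutputs(float $darker, float $brighter): string {
    if ($brighter < 0 && $darker < 0) {
        return 'Neutral'; // neither output fired
    }
    return ($brighter > $darker) ? 'Brighter' : 'Darker';
}

echo EvaluateOutputs(1, -1) . PHP_EOL;                               // Darker
echo EvaluateOutputs(-0.9997798204422, -0.99950748682022) . PHP_EOL; // Neutral
echo EvaluateOutputs(-1, 0.9995544552803) . PHP_EOL;                 // Brighter
```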

Contrastinator on Github

As with all my public code, you can download a copy of this project for free on my GitHub profile.

GitHub: Contrastinator

I hope this helps you better understand how to create your own training data sets, and as always, if you have any questions or trouble understanding any part of this post, please leave a comment and I would be happy to try and help you.


If you enjoy my content and/or tutorials like this one, consider supporting me on Patreon for as little as $1 a month and cancel any time!

It’s not required but it helps me out.

But if all you can do is Like, Share, Comment and Subscribe, well… that’s cool too!

Much Love,
~Joy

Mr Good Bot – Composite Animation

Many years ago, in what seems like another life… I had the advantageous opportunity to work for the Walt Disney Corporation. Not a sponsor, but as I said, I used to work for them as a contracted Computer Technician for a few years.

I was young and had my A+ back when it still meant something and stood as a confirmation of basic competency!

I’m just kidding, an A+ was never meaningful (allegedly and IMHO)!

Anyway, mostly my job amounted to “burning/cloning” hard drive images, light software and OS configuration as well as installing the equipment for the end user.

Occasionally I’d fix a broken printer, upgrade some RAM on a laptop, spend a little time in the LAN closet correcting the patch panel port list because the dedicated local department sys admin liked paper records.

On top of that there were the near constant departmental moves… oh the lost weekends! 😛

Nothing complicated though and along the way I met a lot of really smart and wonderful people!

Aside from the people, the two highlights I will always remember fondly Continue reading “Mr Good Bot – Composite Animation”

Using a Visual Bot

They say a picture is worth a thousand words and in service of building a visual bot I’ve said more than a few. 😛

So what’s left to say?

Well, today we’re going to wrap up this series by looking at how to use our new bots.

I’m including a pre-trained 2×2 matrix network on GitHub so you don’t have to wait to play with it! 😉

Also, I’m releasing a special pre-trained Patrons Only 5×5 matrix network for anyone who particularly enjoys this project.

Previous posts in this series:

Continue reading “Using a Visual Bot”

Training a Visual Bot

By the end of this post you will have everything you need to train your own visual neural network!

Previous posts in this series:

Continue reading “Training a Visual Bot”

Generating Visual Training Data

Today we’re going to look at how to automatically generate training data from images.

At the end of this post I publish the working code (copy->paste->run) that you can use commercially in your own projects for free…  oh and I explain how it works, need I say more? 😉

Other posts in this series:

Continue reading “Generating Visual Training Data”

Building a Visual Bot

Today we’re going to look at the code used to build a visual neural network but don’t worry… there will be pretty pictures and I’ll keep the math to a simplified minimum. 😛

Continue reading “Building a Visual Bot”

Introduction to Creating Visual Neural Networks

In last week’s post The Kiss I teased:

“In my next post we’ll “feed” this image to a neural network.”

This Image

The Kiss
The Kiss

I guess now it’s time to make good on that promise… so let’s do that, shall we?

Continue reading “Introduction to Creating Visual Neural Networks”
