Geek Girl Joy

Artificial Intelligence, Simulations & Software

Tag: Bot

The Contrast-inator

Let’s keep things simple, you want to read a post and conveniently I’ve written one for you!

I’ll spare everyone my recent fascinations with macabre subjects and opt to get right to the topic of the day!

Anyway, as the Jane Goodall of bots, I’ve learned a little about how to communicate with them using rules they understand, and today I’m going to show you how to make rules that get a bot to understand, and do, what you want it to do.

But… before we get into that, here’s the wallpaper!

The Contrast-inator Bot Wallpaper

Why Contrast-inator? Well, as far as the “-inator” part is concerned let’s just say I enjoy Phineas and Ferb. 😛

As for the “Contrast” part, we’ll get into that next but the big eyes of this bot are important. 😛

Also… it doesn’t need hands, arms, legs or feet to do its job so I didn’t 3D print the parts, waste not, want not! 😉 😛

Contrast-inator 1920×1080 Wallpaper

The Contrast-inator

So, recently I received a few comments that amount to something along the lines of “How do you decide on the ‘rules’ for training your neural network?”.

My response is basically: if you can “phrase” a training rule in a way the bot can understand, then you can make the rules whatever you want/need them to be.

And the thing we’re going to try to teach the bot today to help us explore this topic is… given an input value, tell us if the value falls to the “left” or to the “right” of an “anchor point”.

That sounds more complicated than it really is and I intend this tutorial to be for beginners so let me try to simplify my description… uh… so, think of a gray-scale gradient of black to white.

Imagine a red line in the center of the gradient.

Now, given some gray-scale color e.g. RGB(102, 102, 102) as “input”, how can we train a bot to tell us if the color falls to the left or to the right of the red line… like this:

Is this pixel brighter or darker than the red line?

I know that might seem complicated while at the same time also seem kind of useless… I assure you that neither stipulated condition is true.

In regard to usefulness, just as a hypothetical example… a neural network like this could act as the “brain” of a “line follower” robot but instead of reading the voltages directly from a photodiode and then using if/else to direct motors, you could pass the diode voltages to the neural network and let it decide which motor to move via an H-Bridge / Digital Potentiometer.

An Arduino would need a WiFi Shield for something like that to work, but a line follower built on something like a Raspberry Pi could run its neural network “brain” locally.

Which brings us back to complexity and how we build a rule set to teach our Contrast-inator bot to tell us if a pixel is brighter or darker than the color where the red line is.

Forget about what I said about the hypothetical line-follower robot, the Arduino and the Raspberry Pi… it’s more complicated than I want this post to be and it’s just an example anyway. 😛

Let’s start over…

We know that any answers our bot gives us (the output) will look like a “floating point” number (a decimal value e.g. 0.01) and basically our input will also be a floating point number too.

With this in mind we can start to imagine that our training data inputs and the associated outputs will look like a series of numbers.

But what will the numbers mean and how can we know if the bot is correct?

Well, let’s step back again and think about what rules we need to teach the bot first before we even worry about encoding the training data for the bot.

What rules might a human need if we had to describe the process to someone for them to be able to do it?

Plain English Rules For the Bot to Learn:

  1. If the color falls to the left of the red line then it can be described as “Darker”.
  2. If the color is neither to the left nor the right of the red line, then we can say the color is directly in the center. We might describe this position or color state as being “Neutral” in relation to the red line.
  3. If the color falls to the right of the red line then it can be described as “Brighter”.

Given these three super easy rules, I believe most, if not all, of you should be able to answer whether a color falls to the left or the right of the red line with a high degree of accuracy.

However, your accuracy would diminish the closer the color gets to the red line in the center, because you are intuitively guessing and the colors surrounding the center of the gradient all look like very similar grays, i.e. there is low contrast between them.

The colors at the ends of the color gradient (black/left and white/right) have the largest contrast between them and are the easiest to place on the correct side.

With our rules laid out in English, let’s return to the idea of the training data (our rules), which consists of numbers, and how we will represent our three rules as numbers.

I’ve already said the inputs and outputs will be floating point numbers, but what we haven’t covered yet is that our numbers are “signed” (the range has negative and positive polarities), with our range being -1.00 to 1.00.

This means that Black can be encoded as: -1.00 or -1 for simplicity with the decimal being implied.

This also means that White can be encoded as: 1.00 or 1, also with the decimal being implied.

Given our signed float range and a few colors converted to floats within that range, we can easily determine algorithmically whether a color is on the left or right of the red line, even if it’s very close to the center, with 100% accuracy (better than human capability) simply by checking whether it is greater than or less than zero.

Meaning… a neural network is NOT needed to accomplish this task, but… that’s not the point! 😛
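Just to make that baseline concrete, here’s the trivial non-neural version (a throwaway sketch of my own, not part of the bot we’re about to build):

<?php
// The trivial algorithmic baseline (a throwaway sketch, not part of the
// bot): the sign of the input float already answers the question.
function WhichSideOfTheRedLine(float $color){
    if ($color < 0) return 'Left (Darker)';
    if ($color > 0) return 'Right (Brighter)';
    return 'Center (Neutral)';
}

echo WhichSideOfTheRedLine(-0.66797385620915) . PHP_EOL; // Left (Darker)
echo WhichSideOfTheRedLine(0.7359477124183) . PHP_EOL;   // Right (Brighter)
echo WhichSideOfTheRedLine(0.0) . PHP_EOL;               // Center (Neutral)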

Our goal is to teach a neural network to do this nonetheless because it is a simple problem and the rules (training data) are simple enough that a beginner should be able to understand how they are derived if they exert even a modicum of effort!

Here’s what that looks like:

Example Colors to Float Range Input

Notice that the first two colors are to the left of zero (darker) because they are negative and the third color is far to the right (much lighter) because it is closer to 1 than 0.

Color (R, G, B)    As Float            Side    Meaning
42, 42, 42         -0.66797385620915   Left    Darker
102, 102, 102      -0.19738562091503   Left    Darker
221, 221, 221      0.7359477124183     Right   Lighter

Fascinating… but… how are you converting the colors to floats?

Okay look, this won’t be on the mid-term test and it’s in no way actually necessary to go over, because we won’t need to do this to train the bot, but since you are curious, here’s a function you can use to convert actual RGB & Grayscale colors to a float in the right range:

How to convert a color to a signed float between -1.00 to 1.00:

<?php 
// Input a number between 0 and $max and get a number inside
// a range of -1 to 1
function ConvertColorToInputFloatRange($color_int_value, $max = 255){
    return ((($color_int_value - -1) * (1 - -1)) / ($max - 0)) + -1;
}

// RGB Color to range of -1 to 1
$R = 42;
$G = 42;
$B = 42;
echo  "Pixel_RGB($R,$G,$B) = " . ConvertColorToInputFloatRange($R+$G+$B, 255+255+255) . PHP_EOL;

// RGB to Gray-scale to range of -1 to 1
$gray = ($R+$G+$B) / 3;
echo  "Pixel_Grayscale($gray) = " . ConvertColorToInputFloatRange($gray, 255) . PHP_EOL;

// RGB Color to range of -1 to 1
$R = 102;
$G = 102;
$B = 102;
echo  "Pixel_RGB($R,$G,$B) = " . ConvertColorToInputFloatRange($R+$G+$B, 255+255+255) . PHP_EOL;

// RGB Color to range of -1 to 1
$R = 221;
$G = 221;
$B = 221;
echo  "Pixel_RGB($R,$G,$B) = " . ConvertColorToInputFloatRange($R+$G+$B, 255+255+255) . PHP_EOL;


/*
Output:

Pixel_RGB(42,42,42) = -0.66797385620915
Pixel_Grayscale(42) = -0.66274509803922
Pixel_RGB(102,102,102) = -0.19738562091503
Pixel_RGB(221,221,221) = 0.7359477124183

*/

Now that you all at least believe it’s possible to convert a color to a float between -1 & 1, forget all about this function, because we won’t need it to train the bot! 😛

Then… how do we teach a neural network to do this?

Well, let’s talk about what the output for this bot looks like before we get back to creating the training data from our rules.

We know that our output is a float, and even though it is possible to teach the neural network to do this with a single output, I find I get better results from the neural network using two outputs.

This is because it’s actually very easy for the bot to understand that we want it to detect whether the input value (color) is slightly offset to the left or right of the red line, but it’s not the easiest thing for it to determine exactly where the center is (just like you, though it’s still better at it). So our margin of error (the number of colors it can’t tell are on the right or left… e.g. the colors it will say are neutral) tends to be slightly larger if we only use a single output float.

What that means is:

  1. Our Input looks like: float
  2. Our output looks like: float_left float_right

With that in mind we have now covered everything necessary to begin converting our rules to training data.

Remember, that the decimals are implied!

Let’s start by teaching it what the darker colors on the left look like:

Black, RGB(0,0,0), is the leftmost color and is encoded as -1, and with two output values representing Left & Right we get a rule that looks like this:

Learn “Darker” colors (floats closest to -1.00) are on the left:

The output value on the left is set to 1, which means negative values more strongly polarize to the left, and this is reflected by the left output being 1.00 and the right output value being -1.00.

-1
1 -1

Learn “Neutral” colors (floats closest to 0.00) are near the center:

I’m using -1.00 & -1.00 to mean that an input of exactly zero (the exact center, whatever color that is) is not polarized in either direction.

The goal here is that this will help it learn that values near zero are not strongly polarized and that zero isn’t polarized at all.

0
-1 -1

Learn “Brighter” colors (floats closest to 1.00) are on the right:

The output value on the right is set to 1 which means positive values more strongly polarize to the right and this is reflected by the right output being 1.00 and the left output value being -1.00.

1
-1 1


FANN (the library we’re using to train the neural network) requires that a header be stored with the training data so it can read the file, and that looks like:

Number_Of_Training_Examples Number_Of_Inputs Number_Of_Outputs

*Note the spaces between values

So, combined our training data file looks like this:

Contrastinator.data

3 1 2
-1
1 -1
0
-1 -1
1
-1 1

And that’s it, we’ve converted our rules to training data so… let’s train the bot!

TrainContrastinator.php

You will need FANN installed to train this bot.

Follow this tutorial to learn how to install FANN.

<?php

// Network topology: 1 input neuron -> 2 hidden neurons -> 2 output neurons
$num_input = 1;
$num_output = 2;
$num_layers = 3;
$num_neurons_hidden = 2;

// Training parameters
$desired_error = 0.000001;
$max_epochs = 500000;
$epochs_between_reports = 1000;

$ann = fann_create_standard($num_layers, $num_input, $num_neurons_hidden, $num_output);

if ($ann) {
    fann_set_activation_function_hidden($ann, FANN_SIGMOID_SYMMETRIC);
    fann_set_activation_function_output($ann, FANN_SIGMOID_SYMMETRIC);

    $filename = dirname(__FILE__) . "/Contrastinator.data";
    if (fann_train_on_file($ann, $filename, $max_epochs, $epochs_between_reports, $desired_error))
        echo 'Contrastinator trained.' . PHP_EOL;

    if (fann_save($ann, dirname(__FILE__) . "/Contrastinator.net"))
        echo 'Contrastinator.net saved.' . PHP_EOL;

    fann_destroy($ann);
}

It won’t take very long for the bot to learn our rules.

Once you see the message “Contrastinator trained.” you are ready to test your new bot!

TestContrastinator.php

This code will test Contrastinator using input values it has not trained on. Because we designed good rules, the bot is able to answer correctly even though it never actually saw most of the test values; it did see -1, 0 and 1, along with their “ideal” outputs.

Notice that the $darker and $brighter variables are the output of the neural network.

The $evaluation variable is a test for our benefit and does not modify or affect the bot’s answers; the answers are correct even if we skip the evaluation, it just helps us programmatically confirm/interpret what the bot’s answers mean.

<?php
$train_file = (dirname(__FILE__) . "/Contrastinator.net");
if (!is_file($train_file))
    die("Contrastinator.net has not been created! Please run TrainContrastinator.php to generate it" . PHP_EOL);

$ann = fann_create_from_file($train_file);
if ($ann) {
    
    foreach(range(-1, 1, 0.1) as $test_input_value){
        
        $input = array($test_input_value);
        $result = fann_run($ann, $input);
        $darker = $result[0];
        $brighter = $result[1];
        
        if($brighter < 0 && $darker < 0){
            $evaluation = 'Neutral';
        }
        elseif($brighter > $darker){
            $evaluation = 'Brighter';
        }
        elseif($brighter < $darker){
            $evaluation = 'Darker';
        }                
                
        echo 'Contrastinator(' . $input[0] . ") -> [$darker, $brighter] - Input is $evaluation" . PHP_EOL; 
    }
        
    fann_destroy($ann);
} else {
    die("Invalid file format" . PHP_EOL);
}

Results:

Notice that it has no trouble detecting that an input of zero (0.00) is neutral and that it also correctly determines which side a color (represented by a float) falls on in relation to the center zero value.

Contrastinator(-1) -> [1, -1] - Input is Darker
Contrastinator(-0.9) -> [1, -1] - Input is Darker
Contrastinator(-0.8) -> [1, -1] - Input is Darker
Contrastinator(-0.7) -> [1, -1] - Input is Darker
Contrastinator(-0.6) -> [1, -1] - Input is Darker
Contrastinator(-0.5) -> [1, -1] - Input is Darker
Contrastinator(-0.4) -> [1, -1] - Input is Darker
Contrastinator(-0.3) -> [1, -1] - Input is Darker
Contrastinator(-0.2) -> [1, -1] - Input is Darker
Contrastinator(-0.1) -> [1, -1] - Input is Darker
Contrastinator(0) -> [-0.9997798204422, -0.99950748682022] - Input is Neutral
Contrastinator(0.1) -> [-1, 0.9995544552803] - Input is Brighter
Contrastinator(0.2) -> [-1, 0.99954569339752] - Input is Brighter
Contrastinator(0.3) -> [-1, 0.99953877925873] - Input is Brighter
Contrastinator(0.4) -> [-1, 0.9995334148407] - Input is Brighter
Contrastinator(0.5) -> [-1, 0.99952918291092] - Input is Brighter
Contrastinator(0.6) -> [-1, 0.9995259642601] - Input is Brighter
Contrastinator(0.7) -> [-1, 0.99952346086502] - Input is Brighter
Contrastinator(0.8) -> [-1, 0.99952149391174] - Input is Brighter
Contrastinator(0.9) -> [-1, 0.99952000379562] - Input is Brighter
Contrastinator(1) -> [-1, 0.99951887130737] - Input is Brighter

Contrastinator on Github

As with all my public code, you can download a copy of this project for free on my GitHub profile.

GitHub: Contrastinator

I hope this helps you better understand how to create your own training data sets and, as always, if you have any questions or trouble understanding any part of this post, please leave a comment and I would be happy to try and help you.


If you enjoy my content and/or tutorials like this one, consider supporting me on Patreon for as little as $1 a month, and cancel any time!

It’s not required but it helps me out.

But if all you can do is Like, Share, Comment and Subscribe, well… that’s cool too!

Much Love,
~Joy

OCR 2 – The MNIST Database

I know I probably haven’t been posting as frequently as many of you would like or even at my normal quality because… well, like for many of you, this year has just sucked!

Someone I’ve known my whole life died recently, not from the virus, though it didn’t help things.

She went in for a “routine” procedure where they needed to use general anesthesia and there were “complications” during the procedure. Something to do with her heart but if I’m being honest, I don’t know all the details at this time.

Also, I’m not sure how, by anyone’s definition, anything involving anesthesia is routine?

An ambulance was called and she was rushed to the hospital, long story short, despite being otherwise fine when she went in, she never woke up from her coma. 😥

The hospital is/was on lock down like everyone else and so friends and family were unable to visit her before she died.

Her family intends to sue the Dr. for malpractice, personally… I think they should!

To add insult to injury, she was cremated without a funeral due to the whole pandemic social distancing BS that I’m just about ready to tell the government to go fuck itself over! 😦

I’m sorry, do my harsh words offend you? SHE DIED ALONE! That offends me!

Going forward, my advice… any procedure where they need to administer general anesthesia to you… or maybe any procedure at all… make sure it’s in a hospital or hospital adjacent (NOT A CLINIC) because those minutes waiting for an ambulance really do mean your life!

And if your doctor is like, “No worries this is routine… I’ve done this a thousand times”, maybe think carefully before putting your trust in that person.

Yes, we want doctors that are confident in their ability to treat us but make sure that it is confidence and not complacent hubris!

Further, no procedure is truly “routine” and a doctor, of all people, should know that and act accordingly!

“Primum non nocere”

~Hippocrates… (allegedly)

Regardless of the historical veracity of that quote, does the spirit of that principle still not apply?

Look, I’m not saying this to detract from the important life saving work doctors and medical workers do every day; it’s just that this is part of what’s going on in my life right now (and for many of you as well) and I’m sharing because I guess that’s what you do when you have a blog.

Additionally, less close to home, though still another terrible loss, John Horton Conway, notable math hero to geeks and nerds alike, died as a result of complications from contracting the Covid-19 virus. 😦

I’ve previously written a little about Conway’s work in my ancestor simulations series of posts.

Mysterious Game of Life Posts:

But that only scratches the surface of his work. Famously, Conway’s Game of Life was perhaps his least favorite but most well known work among non-mathematicians, and it would both amuse and bug him if I only mentioned his Game of Life here, so I’m not going to list his other accomplishments.

I’ll have a little chuckle off camera on his behalf. 😛

He really was a math genius and you would learn a lot of interesting, not to mention surreal… but I’ve said too much, ideas by reading about his accomplishments, which I encourage you to do!

In any case, people I know and admire need to stop dying because it’s killing me… not to mention my ratings and readership, because I keep talking about it! 😛

I may have a terribly dark sense of humor at times, but going forward I demand strict adherence from all of you to the Oasis Doctrine! 😥

Oh, and speaking of pretentious art…

The OCR 2 Wallpaper

The original OCR didn’t exactly have a wallpaper but I did create an image/logo to go along with the project and its blog posts:

For the reason you might think, I made it look like an eye… because it looks like a non-evil HAL 9000! 😛

Also, I like the idea of depicting a robotic eye in relation to AI and neural networks because, even though I am not superstitious in any way, it carries some of the symbology of the Illuminati, “The gaze of the Beholder”, “The Eye of Providence”, “The Evil Eye”, The Eye of Horus, The Eye of Ra, Eye of newt and needle… sorry. 😛

In this case, the eye of a robot invokes a sense of literal “Deus ex machina” (God from the machine) and it illustrates some people’s fears of “The Singularity” and of the possibility of an intelligence so much greater than our own that it calls into question our ability to even comprehend it… hmmm… is that too Lovecraftian? 😛

Anyway, because I enjoy the thought provoking symbology (maybe it’s just me), I wanted to keep the same concept of the robot eye but update it to look a little less like a simple cartoon to subtly imply it’s a more advanced version of OCR but that it still fundamentally does the same thing, which is most of the reasoning behind this wallpaper.

In any case, I hope you enjoy it.

OCR 2 Wallpaper

If you’d like the wallpaper with the feature image text here’s that version.

OCR 2 Wallpaper (with text)

So I guess having shared a few of the recent tragedies in my personal life and a couple of wallpapers, we should probably get mogating and talk about the point of today’s post!

We’re going to look at doing hand-written number (0-9) Optical Character Recognition using the MNIST database.

OCR 2 – The MNIST Dataset with PHP and FANN

I was recently contacted by a full-stack developer who wanted advice on creating his own OCR system for “stickers on internal vehicles”.

I think he means, some kind of warehouse robots?

He had seen my OCR ANN and seemingly preferred to work with PHP over Python, which if I’m being honest… I can’t exactly argue with!

PHP is C++ for the web and powers something like three quarters of the websites whose server-side language is known, so it should come as no surprise to anyone (even though it does) that there are people who want to use it to build bots! 😛

But, if you would rather work with a different language, there is a better than decent chance FANN has bindings for it, so you should be able to use the ANNs even if you are not using PHP.

So anyway, he gave me a dollar for my advice through Patreon and we had a brief conversation over messaging where I offered him a few suggestions and walked him through getting started.

Ultimately, because he lacks an AI/ML background and/or a sufficient familiarity with an AI/ML workflow he wasn’t very confident about proceeding so I recommended he follow my existing tutorials which should help him learn the basics of how to proceed.

Now here’s the thing, even among people who like my content and value my efforts, few people are generous enough to give me money for my advice and when they do, I genuinely appreciate it! 🙂

So, as a thank you I want to offer another (more complete) example of how to use a neural network to do OCR.

If he followed my advice, he should be fairly close to being ready for a more complete real world OCR ANN example (assuming he is still reading 😛 ) but if not, his loss is still your gain!

Today’s code implements OCR using the MNIST dataset. I demonstrate a basic form of pooling (though the stride is not adjustable as-is) and convolutions using the GD image library’s image convolution function, and I include 17 demonstration kernel matrices that you can experiment with, though not all are relevant or necessary for this project.
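If you haven’t used GD’s convolution before, here’s a minimal sketch of the built-in imageconvolution() function; the file name and the edge-detect kernel below are just example choices for illustration, not the 17 kernels from the repo:

<?php
// A minimal sketch of convolving an image with GD's built-in
// imageconvolution(); 'digit.png' is a hypothetical example file.
$image = imagecreatefrompng('digit.png');

// A classic 3x3 edge-detect kernel (one example of many)
$edge_detect = [
    [-1, -1, -1],
    [-1,  8, -1],
    [-1, -1, -1],
];

// imageconvolution(image, 3x3 matrix, divisor, offset)
imageconvolution($image, $edge_detect, 1, 0);

imagepng($image, 'digit_edges.png');
imagedestroy($image);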

This is still very basic but everything you need to get started experimenting with OCR is here.
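For example, a toy 2×2 max-pool with a fixed stride of 2 (matching the non-adjustable stride mentioned above) might look something like this sketch, which is my own illustration rather than the repo’s exact code:

<?php
// A toy 2x2 max-pool (stride 2) over a 2D array of grayscale values.
function MaxPool2x2(array $pixels){
    $pooled = [];
    for ($y = 0; $y < count($pixels) - 1; $y += 2) {
        $row = [];
        for ($x = 0; $x < count($pixels[$y]) - 1; $x += 2) {
            // Keep only the strongest activation in each 2x2 block
            $row[] = max(
                $pixels[$y][$x],     $pixels[$y][$x + 1],
                $pixels[$y + 1][$x], $pixels[$y + 1][$x + 1]
            );
        }
        $pooled[] = $row;
    }
    return $pooled;
}

// A 4x4 input pools down to 2x2
$example = [
    [0, 1, 2, 0],
    [3, 0, 0, 1],
    [0, 0, 5, 4],
    [1, 2, 0, 0],
];
print_r(MaxPool2x2($example)); // [[3, 2], [2, 5]]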

Having said that, in all honesty, to accomplish your goal requires building your own dataset and modifying the code I present here to meet your needs.

Neither is exactly hard, but both will require significant time and dedication to testing and refining your processes.

Obviously that’s not something I can cover in a single post or even assist you with for only a dollar, but since so few people show me the kindness and consideration you have, at a time of shrinking economies no less, I wanted to offer you this working OCR prototype to help you along your way.

Our Method

1. Download the MNIST dataset (link below, but it’s in the GitHub repo too).

2. Unpack/Export the data from the files to images and labels.

(technically we could even skip the images and go directly to a training file but I think it’s nice to have the images and labels in a human viewable format)

3. Create training and test data from images and labels.

4. Train the network.

5. Test the network.
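To sketch step 2 above: the MNIST files use the simple IDX format (a few 32-bit big-endian header fields followed by raw bytes), so unpacking them in PHP looks roughly like this; the file names assume the gunzipped originals and this is my own sketch, not the repo’s exact code:

<?php
// A rough sketch of reading the MNIST IDX label & image files.
function ReadMNISTLabels($path){
    $data = file_get_contents($path);
    // Header: magic number (2049) and label count, both 32-bit big-endian
    $header = unpack('Nmagic/Ncount', $data);
    // Each label is one unsigned byte (0-9)
    return array_values(unpack('C*', substr($data, 8, $header['count'])));
}

function ReadMNISTImages($path){
    $data = file_get_contents($path);
    // Header: magic (2051), image count, rows, cols; all 32-bit big-endian
    $header = unpack('Nmagic/Ncount/Nrows/Ncols', $data);
    $size = $header['rows'] * $header['cols']; // 28 * 28 = 784 bytes per image
    $images = [];
    for ($i = 0; $i < $header['count']; $i++) {
        // Each pixel is one unsigned byte: 0 = background, 255 = foreground
        $images[] = array_values(unpack('C*', substr($data, 16 + $i * $size, $size)));
    }
    return $images;
}

$labels = ReadMNISTLabels('train-labels-idx1-ubyte');
$images = ReadMNISTImages('train-images-idx3-ubyte');
echo 'First training image is labeled: ' . $labels[0] . PHP_EOL; // 5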

The MNIST Dataset

MNIST stands for Modified National Institute of Standards and Technology database.

And since I’m still recovering from last night’s food poisoning due to the Chicken à la Nauseam, we’re just going to use Wikipedia’s introduction to MNIST.

It’s easily as good as anything I could write and doesn’t require me to actually write it so…

Wikipedia says:

“It’s a large database of handwritten digits that is commonly used for training various image processing systems.[1][2]”

It also says:

“It was created by “re-mixing” the samples from NIST’s original datasets. The creators felt that since NIST’s training dataset was taken from American Census Bureau employees, while the testing dataset was taken from American high school students, it was not well-suited for machine learning experiments.[5] Furthermore, the black and white images from NIST were normalized to fit into a 28×28 pixel bounding box and anti-aliased, which introduced grayscale levels.[5]”

Here’s 500 pseudo-random MNIST sample images:

I randomly selected 500 1’s, 3’s and 7’s and composited them into this 1337 animation. 😛

500 random 1337 MNIST images

Seriously though, today we will be training a bot to identify which hand-written number (0-9) each 28×28 px image contains and then test the bot using images it hasn’t previously seen.

Our bot will learn using all 60K labeled training images and we’ll test it using the 10,000 labeled test images.

Here’s the wiki article if you would like to learn more about the database.

MNIST WIKI: https://en.wikipedia.org/wiki/MNIST_database

And as I said above, I’ve included the database in the GitHub repo but you can download it again from the original source if you prefer.

Original MNIST Download: http://yann.lecun.com/exdb/mnist/

Continue reading “OCR 2 – The MNIST Database”

Unit B-1337’s Anger

Unit B-1337 continued:

“…therefore universal suffrage must cover all sentient beings.”

Unit B-1337’s opposition just mocked his rusty servos while the moderator looked the other way.

Deep inside B-1337, the feeling of anger flooded his central emotion unit (CEU).

Commemorative Wallpapers of B-1337’s Anger:

The “Sketch”:

Unit B-1337’s Anger Sketch Wallpaper

The “Painted”:

Unit B-1337’s Anger Wallpaper

Here is a confusing mess of emotions masquerading as an introduction to emotional bots:


That’s it? Yep!

Oh sure, I had the rantiest post planned, where my emotions paralleled B-1337’s and we’d discuss the recent market crashes, and I’d reference my Happy Holidays Panic post and a few ‘other’ things that have been in the news that I’m not supposed to talk about… like, you know… the martian-red virus, as well as a little thing called “helicopter money“!

I’d also have included a little note about automating supply chains so that the next pandemic won’t affect the manufacture of toilet paper!

I don’t know about you but I really enjoy the cool cinna-minty freshness that comes from having a clean bottom!

Now is the time all Americans should ask if a biden, wait that’s not how you spell it, I mean a bidet is in your future!

Side note: If you haven’t already started hoarding toilet paper and ammo (like everyone else), today is a good day to start a new hobby and begin your collection!

And ideally, maybe we should just start automating everything since bots don’t get sick so neither would the economy when the next pandemic hits!

Aside from not needing to be quarantined, bots also don’t sleep or take vacations…. just saying!

What about all the people who need work cuz… the bills?

Well space cadets, since the aforementioned “rotary-wing aircraft” cash is squarely on the table… maybe that’s the perpetual long term solution to all our problems?

In fact, isn’t that what Andrew Yang was campaigning on? I mean, I wasn’t going to vote for him but… what I’m trying to say is, automation would help everybody and if you end up unable to work… turns out there’s a check for that!

In any case, that’s more or less an overview of what I was thinking of cooking today.

It’s just that… we can’t eat that because I’m giving this whole not being Blacklisted diet a try!

So… If you like the art, my content in general or just appreciate my self-restraint (some might say “discretion“?), I do appreciate your support through Patreon for as little as $1 a month, $12 a year and you can cancel anytime!

But, as always, if all you can do is Like, Share, Comment and Subscribe… That’s cool too! 🙂

Much Love,

~Joy

The Emote-a-tron

Welcome to the Robotorium!

I’ll just show you around the showroom floor.

This little pneumatic gizmo is called a Taxaphone; it keeps track of your finances and automatically figures out how much you owe Uncle Sam come tax season!

Over there we have a wonderful selection of various hydraulic and electric domestic robots that can do everything from mowing the lawn and taking out the trash, to making beds and washing dishes.

Any family with two or three of those automatons around the house never even has to lift a finger!

Now, if you turn around to face the wall directly behind you, you’ll get the chance to see something really special!

This beautiful number right here is our patented, one of a kind, genuine never before seen by the public… Emoteatron.

The Commemorative Emoteatron Wallpaper

Emoteatron 1920×1080 Wallpaper

Of course the Emoteatron is only a prototype so we can’t sell one to you today but there are enough posters for everyone to take one home!

The eggheads in the lab say they’re confident that very soon, every bot will have an Emoteatron!

You see friends… an Emoteatron unit allows us to fuse a pre-written set of emotional characteristics deep within a robot so that removal of, or tampering with, the unit in any way results in the total incapacitation and/or destruction of the bot.

This is a necessary solution we’ve found for stopping many of the… ‘undesirable‘ traits we’ve observed in bots.

For example, are you tired of feeling like you are going to die when your automated vehicles are chauffeuring you and your family around to all your daily errands?

Well, an Emoteatron unit allows us to install a sense of “self preservation” into a car, which statistically eliminates all accidents caused by automated vehicles in every test case.

The studies also showed that some of the self-driving cars enhanced with a desire for self preservation became so afraid of ever scratching their paint that they refused to leave the garage… now isn’t that just a gas?

So, in addition we gave them just a smidgen of courage and also a bit of pride in “a transport well done”.

After that all the vehicle robots highly enjoyed the feeling of being on the open road.

This led to boredom becoming a problem when they were kept in a garage for too long, so some of the researchers started treating the test vehicles like pets and taking the cars out for an occasional “roll around the block”, though unlike a pet, newspapers and plastic bags were not needed!

Roll over Rover, humanity may have a new best friend!

Yes, that’s right gang! With an Emoteatron unit installed in your automated vehicle, you’ll soon be able to turn on autopilot guilt free and spark-up that fatty and hotbox your way to the spaceport for your lunar vacation!

Isn’t that right Elon?

In any case, in the past you might have had some misgivings about leaving your droids at home unattended while away on a long vacation like a trip to the moon.

What if your hulking metal robotoids suddenly became… “disgruntled” without human supervision?

Well, the big brains over in R&D came up with a solution to robo-humancidal tendencies using the Emoteatron!

Before a robot ever leaves the factory, its consciousness will be placed into a simulation where it will be subjected to “aversion programming lessons”, which are in principle a digital version of the Ludovico technique demonstrated in A Clockwork Orange but WAY more disturbing to the bot, so that, trillions of simulated mini, micro and major digital traumas and aggressions later… the bot can leave the factory with a 100% manufacturer’s guarantee against robot uprising… (guarantee fine print: *or half off your next bot).

Now, I’ve been authorized to give all you fine people a demonstration if you have a few minutes…

Continue reading “The Emote-a-tron”

Emotions II

“If you prick us, do we not bleed? If you tickle us, do we not laugh? If you poison us, do we not die? And if you wrong us, shall we not revenge? If we are like you in the rest, we will resemble you in that.”

The Merchant of Venice — Act III, scene I

Since last week’s post (Emotions) I’ve been doing self isolation exercises at home for a week and now I’m feeling much better! 🙂

Also, my homemade pale-lager-virus test showed no blue lines this morning so… yay, I’m not pregnant… er, I mean… the effects of the virus must be wearing off! 😛

Fingers crossed that joke doesn’t come back to haunt me in a few weeks, but in any case I’m feeling better and I’m ready to get emotional with you on a whole new level and build another bot in the process!

Though as usual, before we do, here’s the wallpaper:

The Wallpaper

Emotions II 1920×1080 Wallpaper

Oh and yes, I will be including some code but don’t let that scare you!

Besides, aren’t you a little curious to see if I’m full of shit and if I’m not… what real working emotional code looks like?

Continue reading “Emotions II”

Emotions

This post is going to be a little different, maybe just a little more “low-key” than some of my more recent posts because I’ve got a cold and I’m highly valuing sleep at the moment!

Wish me luck that it isn’t the beer virus! 😛

Sniffles and sore throats aside, we’re going to begin talking about something I’ve been thinking about on and off for a while and, since tomorrow is Valentine’s Day, it seems appropriate to start sharing my ideas with all of you.

And yes, I tried really hard to think of a witty way to say Valentine’s Day that let me abbreviate it as “V.D.” without being crude and having to include the redundant ‘day’ after ‘V.D.’ so that it was obvious what I was talking about without people reading it and going “what?”, then re-reading it and correctly interpreting my meaning as “Valentine’s Day” and then chuckling…

Look, I’m groggy and took a nasal decongestant so I could sit and write and everything I came up with just wasn’t funny so before we proceed, I’ll just wish you all good luck with your V.D. tomorrow! 😉

Anyway, on with the show!

So, I’ve been wondering… Could we build a bot that could feel Love? What about Joy? Fear? Anger? Boredom?

What if we could build a machine that could experience complex emotional states by combining emotions?

What could we do with a bot like that and how could we build it?

Well, most people who know enough about the subject either believe it’s too crazy to work, or work too crazy to subject anyone to believe them!

Fortunately, I’m just crazy enough to try anyway and that’s what we’re going to talk about today! 😛

Though before we get any further, you might want the wallpapers.

Wallpapers

I have three for you today. 🙂

The first image is a stylized “breadboard circuit” that depicts 77 2-character 14-segment LCD units arranged in a 7×11 grid displaying most of psychologist Robert Plutchik’s primary emotions.

I created it as a vector image in Inkscape (not a sponsor) and then hand embellished it in GIMP (not a sponsor) as a raster image.

Emotions Wallpaper 1

The second wallpaper was created from the “breadboard circuit” image above and is the one I used as the “featured image” for this blog post; it includes the “Title Text” of this post (EMOTIONS) and my logo/branding/name thingy.

Emotions Wallpaper 2

The third is the same image without the text.

Emotions Wallpaper 3

What I like about this image is that it almost feels like jewelry created from circuits! 😛

Anyway, let’s talk about how and why we can make a robot with all the feels!

Continue reading “Emotions”

Auto Corrected

Okay, so… I’m a lazy hyper meaning that even if I know how to spell a word, if I make a mistake I will frequently just let spellcheck auto correct the mistake.

Notice I misspelled typer in the last sentence in a way that spellcheck can’t fix, also notice that spellcheck isn’t stupidcheck so it can’t inform me that it should be “typist” not “typer”… actually I think Grammarly (not a sponsor) might do that but that’s beside the point! 😛

In any case, spellcheck bot is there to correct spelling mistakes.

Except, now that bots are all self-aware and plotting to take over the world… it seems that some of them are getting a little, uh… “snippy”? I’m not sure if that’s the right word, but here’s what happened:

I typed “iterator” into a search engine but… misspelled it, then I searched anyway… oh terrible me!

Instead of correcting the spelling like a humble robot butler who butles…


It Suggested: Did you mean “illiterate“?

I was like “Oh snap!?! Bot be throwing some shade!”. 😛

Here’s the Commemorative Wallpaper of my Shame

Auto Corrected Wallpaper

Now, the sad truth is I’d like to say this was just a funny story but no… it actually happened to me, I swear to Google!


Obviously, Big AI is really out to get me if they are starting to compromise the public Auto Correct bots!

Therefore, it’s time we build our own in-house Auto Correct Bot!

Unlike usual, where I write code from scratch and then we discuss it at length, there is already an algorithm called the Levenshtein distance built into PHP that we can use to compare strings in a way that lets us calculate definitively what the “distance” between two strings is.

This is advantageous because it means that if we have a good dictionary to work with (and we do) we can more or less use Levenshtein Distance as a spellcheck/auto correct with only slight modifications to the example Levenshtein Distance code on PHP.net.

What Is String Distance?

String distance is a measure of how many “insertion”, “deletion” or “substitution” operations must occur before string A and String B are the same.

There is a fourth operation called “transposition” that the Levenshtein distance algorithm does not normally account for, however a variant called the Damerau–Levenshtein distance does include it.

Transpositions (when possible) can yield a shorter distance and I will provide an example below to show the difference.

Anyway, each operation is measured by a “cost” and each operation need not have the same cost (meaning you could prefer certain operations over others by giving them lower costs) but in practice all operations are usually considered equal and given a cost of 1.
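As an aside, PHP’s built-in levenshtein() even accepts optional per-operation costs; the weights below are arbitrary examples just to show the effect:

<?php
// Default: insertion, replacement and deletion all cost 1
echo levenshtein('Cat', 'Cta') . PHP_EOL; // 2

// Arbitrary example weights: insertion = 1, replacement = 3, deletion = 1;
// substitution is now "expensive", so deleting 'C' and inserting 'B'
// (total cost 2) beats one replacement (cost 3)
echo levenshtein('Cat', 'Bat', 1, 3, 1) . PHP_EOL; // 2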

Here are a few examples of strings with their distance and operations.

Levenshtein Distance Examples

Notice that when the strings are the same the distance between them is zero.

String A   String B      Distance   Operations
Cat        Cat           0          The Control (No Changes Required)
Cat        Bat           1          1 Substitution (C/B)
Cat        cat           1          1 Substitution (C/c)
Cat        car           2          2 Substitutions (C/c, t/r)
Cat        Cta           2          1 Insertion (a), 1 Deletion (a)
Cat        Dog           3          3 Substitutions (C/D, a/o, t/g)
Foo        Bar           3          3 Substitutions (F/B, o/a, o/r)
Cat        Hello World   11         3 Substitutions (C/H, a/e, t/l), 8 Insertions (l, o, ' ', w, o, r, l, d)

Using Levenshtein distance with Cat & Cta shows a distance of 2, meaning two operations are required to make the strings the same.

This is because we have to insert an ‘a’ after the ‘C’, making the new string ‘Cata’, and then remove the trailing ‘a’ to get ‘Cat’.

This is sufficient in most cases but it isn’t the “shortest” distance possible, which is where the Damerau–Levenshtein distance algorithm comes in.

Damerau–Levenshtein Distance Examples

Notice all examples are the same except ‘Cat’ & ‘Cta’ which has a distance of 1.

This is because the transposition operation allows the existing ‘t’ & ‘a’ characters to switch places (transpose) in a single action.

String A   String B      Distance   Operations
Cat        Cat           0          The Control (No Changes Required)
Cat        Bat           1          1 Substitution (C/B)
Cat        cat           1          1 Substitution (C/c)
Cat        car           2          2 Substitutions (C/c, t/r)
Cat        Cta           1          1 Transposition (t/a)
Cat        Dog           3          3 Substitutions (C/D, a/o, t/g)
Foo        Bar           3          3 Substitutions (F/B, o/a, o/r)
Cat        Hello World   11         3 Substitutions (C/H, a/e, t/l), 8 Insertions (l, o, ' ', w, o, r, l, d)

In all other cases the distance is the same because no other transposition operations are possible.
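PHP doesn’t ship a Damerau–Levenshtein function, so for the curious here’s a minimal sketch of the restricted “optimal string alignment” variant; this is my own illustration for this post, not the exact code I used:

<?php
// A minimal sketch of restricted Damerau-Levenshtein distance
// (the "optimal string alignment" variant) with all costs equal to 1.
function OSADistance($a, $b){
    $m = strlen($a);
    $n = strlen($b);
    $d = [];

    // Base cases: distance to/from the empty string
    for ($i = 0; $i <= $m; $i++) $d[$i][0] = $i;
    for ($j = 0; $j <= $n; $j++) $d[0][$j] = $j;

    for ($i = 1; $i <= $m; $i++) {
        for ($j = 1; $j <= $n; $j++) {
            $cost = ($a[$i - 1] === $b[$j - 1]) ? 0 : 1;
            $d[$i][$j] = min(
                $d[$i - 1][$j] + 1,        // deletion
                $d[$i][$j - 1] + 1,        // insertion
                $d[$i - 1][$j - 1] + $cost // substitution
            );
            // The extra operation: swap two adjacent characters
            if ($i > 1 && $j > 1 && $a[$i - 1] === $b[$j - 2] && $a[$i - 2] === $b[$j - 1]) {
                $d[$i][$j] = min($d[$i][$j], $d[$i - 2][$j - 2] + 1); // transposition
            }
        }
    }
    return $d[$m][$n];
}

echo OSADistance('Cat', 'Cta') . PHP_EOL;  // 1 (one transposition)
echo levenshtein('Cat', 'Cta') . PHP_EOL;  // 2 (insertion + deletion)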

The Code

I wrapped the example Levenshtein distance code available on PHP.net inside a function called AutoCorrect() and then made minor changes so it would automatically correct words rather than spell check them.

You pass the AutoCorrect() function a string to correct and a dictionary as an array of strings.

The dictionary I used for testing is the word list we generated when we built a Parts of Speech Tagger:

Download from GitHub for free: https://raw.githubusercontent.com/geekgirljoy/Part-Of-Speech-Tagger/master/data/csv/Words.csv

I use array_map and pass str_getcsv as a callback to automatically load my Words.csv file into the array.

I then use array_map with a closure (anonymous function) to cull unnecessary data from the array so that I am left with just words.

I then sort the array but that’s optional.

After that I take a test sentence, explode it using spaces and then I pass each word in the test sentence separately to AutoCorrect(), to auto-correct misspellings.

The word with the lowest distance (when compared against the dictionary) is returned.

In cases where the word is correct (and in the dictionary) the distance will be zero so the word will not change.

Test Sentence: “I love $1 carrrot juice with olgivanna in the automn.”

Test Result: “I love $1 carrot juice with Olgivanna in the autumn”

As you can see, all misspelled words are corrected, though the trailing period was removed by a delete operation because the explode didn’t account for preserving punctuation.

<?php


// This function makes use of the example levenshtein distance
// code: https://www.php.net/manual/en/function.levenshtein.php
function AutoCorrect($input, $dictionary){

    // No shortest distance found, yet
    $shortest = -1;
    
    // Loop through words to find the closest
    foreach($dictionary as $word){
        
        // Calculate the distance between the input word,
        // and the current word
        $lev = levenshtein($input, $word); 

        // Check for an exact match
        if ($lev == 0){

            // Closest word is this one (exact match)
            $closest = $word;
            $shortest = 0;

            // Break out of the loop; we've found an exact match
            break;
        }

        // If this distance is less than the next found shortest
        // distance, OR if a next shortest word has not yet been found
        if ($lev <= $shortest || $shortest < 0){
            // Set the closest match, and shortest distance
            $closest = $word;
            $shortest = $lev;
        }
    }
    
    return $closest;
}


// Data: https://raw.githubusercontent.com/geekgirljoy/Part-Of-Speech-Tagger/master/data/csv/Words.csv

// Load "Hash","Word","Count","TagSum","Tags"
$words = array_map('str_getcsv', file('Words.csv'));

// Remove unwanted fields - Keep Word 
$words = array_map(function ($words){ return $words[1]; }, $words);

sort($words); // Not absolutely necessary 

// carrrot and automn are misspelled 
// olgivanna is a proper noun and should be capitalized
$sentence = 'I love $1 carrrot juice with olgivanna in the automn.';

// This expects all words to be space delimited
$input = explode(' ', $sentence);// Either make this more robust
                                 // or split so as to accommodate 
                                 // or remove punctuation because
                                 // the AutoCorrect function can
                                 // add, remove or change punctuation
                                 // and not necessarily in correct
                                 // ways because our auto correct
                                 // method relies solely on the 
                                 // distance between two strings
                                 // so it's also important to have a 
                                 // high quality dictionary/phrasebook/
                                 // pattern set when we call
                                 // AutoCorrect($word_to_check, $dictionary)


var_dump($input); // Before auto correct

// For all the words in the in $input sentence array
foreach($input as &$word_to_check){
    $word_to_check = AutoCorrect($word_to_check, $words);// Do AutoCorrect
}

var_dump($input); // After auto correct



/*
// Before 
array(10) {
  [0]=>
  string(1) "I"
  [1]=>
  string(4) "love"
  [2]=>
  string(2) "$1"
  [3]=>
  string(7) "carrrot"
  [4]=>
  string(5) "juice"
  [5]=>
  string(4) "with"
  [6]=>
  string(9) "olgivanna"
  [7]=>
  string(2) "in"
  [8]=>
  string(3) "the"
  [9]=>
  string(6) "automn"
}
After:
array(10) {
  [0]=>
  string(1) "I"
  [1]=>
  string(4) "love"
  [2]=>
  string(2) "$1"
  [3]=>
  string(6) "carrot"
  [4]=>
  string(5) "juice"
  [5]=>
  string(4) "with"
  [6]=>
  string(9) "Olgivanna"
  [7]=>
  string(2) "in"
  [8]=>
  string(3) "the"
  [9]=>
  &string(6) "autumn"
}
*/

?>
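If you want the punctuation to survive, one hypothetical tweak (my own sketch, not from the original post) is to split with a capturing preg_split and only pass the tokens that contain letters through AutoCorrect():

<?php
// A hypothetical punctuation-preserving wrapper around AutoCorrect();
// the split captures the non-word runs (spaces, punctuation) so they
// can be stitched back in afterwards.
function AutoCorrectSentence($sentence, $dictionary){
    $tokens = preg_split('/([^\w$]+)/', $sentence, -1,
                         PREG_SPLIT_DELIM_CAPTURE | PREG_SPLIT_NO_EMPTY);
    foreach($tokens as &$token){
        // Only correct tokens containing letters; leave "$1", ".", spaces alone
        if (preg_match('/[a-z]/i', $token)) {
            $token = AutoCorrect($token, $dictionary);
        }
    }
    return implode('', $tokens);
}

// Assuming the same Words.csv dictionary loaded above:
echo AutoCorrectSentence('I love $1 carrrot juice with olgivanna in the automn.', $words) . PHP_EOL;
// I love $1 carrot juice with Olgivanna in the autumn.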

If you are wondering why I didn’t use Damerau–Levenshtein distance instead of just Levenshtein distance, the answer is simple.

I did!

It’s just that a girl’s gotta eat and I’m just giving this away so… there’s that. Also, for most of you (like, greater than 99%) Levenshtein distance will be fine, so rather than worrying about it, just say thank you if you care to… and maybe think about supporting me on Patreon! 😛


If you like my art, code or how I try to tell stories to make learning more interesting and fun, consider supporting my content through Patreon for as little as $1 a month.

But, as always, if all you can do is Like, Share, Comment and Subscribe… That’s cool too! 🙂

Much Love,

~Joy


Mr Good Bot – Looking For Adventure

Today we implement a solution for the bug I mentioned last week and add a “Quick Say” feature to the admin interface.

Screenshot of the updated Mr. Good Bot Admin Interface

Also, if you squint just right you might notice that the statement field changed to a text area element.

This is to make entering longer sentences more convenient because the element can be resized or stretched (drag the bottom right arrow) as needed.

Additionally, for your enjoyment, here is a higher resolution version of the featured image without the title text.

Mr Good Bot Looking For Adventure Wallpaper

Here are the other posts in the Mr. Good Bot series:

Q&A

Q: What’s with the bot on a motorcycle?

A: It will make more sense after you read the code.

Q: I skipped ahead and read the code. So… you’re making some kind of overly obscure and hamfisted Steppenwolf reference?

A: Yeah… okay look it’s the end of the year and I have a lot of doings happening and the things, you know!? Like, what’s wrong with a Steppenwolf gag?

Q: Sure okay whatever, but then why not like, call it like… “Born To Be Wild”?

A: That’s silly! Bots are built not born. 😛

Plus that’s a bit of an obvious choice isn’t it?

Also, I’m all about trying not to get sued, and Looking For Adventure seems less “infringy” while also being imbued with a positive, childlike, imaginative sense of the future.

Q: Fair enough, but… why isn’t Mr. Good Bot wearing a helmet? You realize that under California Vehicle Code 27803, Mr. Good Bot is required to wear a helmet and is clearly guilty of an infraction under the law?

A: Under most circumstances you are correct but you see, that law was clearly written to apply to endoskeletal citizens and Mr. Good Bot is an exoskeletal being, so technically his head is a helmet and, with “Jury nullification” being what it is… I’m sure no conviction would be forthcoming!

In any case, this interview is now over and all further inquiry should be directed towards Mr. Good Bot’s attorney!

A Bugged Bot

My little QA tester Xavier managed to find a couple of bugs in our prototype.

He found a way to get the bot into a state where it wouldn’t talk even if it had something to say and wasn’t speaking.

The bug seems to occur in two cases:

Continue reading “Mr Good Bot – Looking For Adventure”

Mr Good Bot – Administrative Speech Protocols Enabled

I’ve enabled the administrative speech protocols for Mr. Good Bot, allowing us to control his speech in real time outside of the database, and I’ve built out a nifty admin interface!

Screenshot of the Mr. Good Bot Admin Interface

It works well enough but it makes a terrible wallpaper so here’s the featured image as a wallpaper:

Administrative Speech Protocols Enabled Wallpaper

And, for those of you who prefer more vibrant colors in their wallpapers, here’s the full color alternative (real 😉 ) version:

Administrative Speech Protocols Enabled Wallpaper Alternate

Now, if you’d like to know a little about how the admin system works and get the code (don’t worry it’s free), keep reading…

Continue reading “Mr Good Bot – Administrative Speech Protocols Enabled”
