Geek Girl Joy

Artificial Intelligence, Simulations & Software

Tag: Prototype

Pink Paradise Island

So it’s been a while since I’ve written a post and I thought it’s probably high time I should, okay maybe not high time since 4/20 already passed. 😛

Anyway, my reasons for not having posted in a while are numerous and we won’t be discussing any of that today, except to acknowledge that one of the publicly speculated reasons for my absence (the Betty White aliens)… is 100% true, and if Betty White wasn’t leading the conspiracy she totally would agree with me!

Anyway, I’m not predisposed to discussing that further right now as my digital therapist advises me that the paranoid delusions are an integral part of my healing process, so I’m going to take that artificial medicinal recommendation because, my body er… my delusion, my choice!!!

Or… something like that.

Anyway, that left me with the problem of what should I talk about? I mean, it would be totally weird if I just came out and was like…

“Okay guys, here’s how you can rejigger your Flux Capacitor to run on a PHP Neural Network to detect when an egg is perfectly sunny side up.”

Without some kind of acknowledgment, it feels like it would be weird. Besides, I’m absolutely certain that level of advanced neural network would have to be “paid content” and definitely you’d need to accept all my tracking cookies before I could even think about sharing it with you! But not wanting to have to build all those tracking cookies (it’s such a pain), I decided maybe I’d just Heinz 57 my way through some of the content I’d been neglecting to read cuz I’ve been super-duper super busy… anyway, a few weeks back I read a post by a blogger named Pinkie whom I have mentioned in previous posts. After reading her post and soberly stumbling my way into her… uh… comments section, I found that it was unusually empty, Kilroy hadn’t even been there… so it was the perfect tabula rasa on which I could paint a masterpiece!

It’s not that I had anything important to say about the topic of discussion, and most definitely I did not say anything “offensive” or “rude”, no, no… that would be way too “crude” and that is just a desktop shortcut to how to lose friends and influence people not to listen to you. And it’s not that her post wasn’t interesting either… people just didn’t comment for whatever reason, and I’m not actually sure if anyone ever did because I got busy with other things and never checked back in on my latest magnum opus… but that’s beside the point because it wasn’t that important anyway.

So anyway, I saw an opportunity to use all that wonderful blank comment section silence for my delicious amusement and injected a little random chaos into the situation. Essentially, what I did was leave a “comment bomb” that was so off topic, so long and bizarre (but again… not actually “offensive” or “racist” or anything like that at all), just a long-winded weird story that was incredibly out of place and had nothing to do with her post. I think it was slightly over 2000 words, making it on par with the length of her post. I would have gone longer but even my amusement has limits and doesn’t involve me wasting my time… too much. 😛

Ultimately, Pinkie and I are both adults who knew what we were doing and well, together… our literary collaboration created something beautiful… artistically and aesthetically delusional, but beautiful nonetheless!

Now, I’m not going to link to the post; instead what I’ll do is casually declare a scavenger hunt and challenge you to go find it if you are curious… Here’s the link to Pinkie’s Paradise blog, and as an added hint I’ll remind you that my comment was left within the past few weeks. 🙂

Anyway, as a result of my little scheme I kinda owe her for completely closing out comments for all time on her post (unverified but oh so true 😛 ), so when I saw that she recently revamped the metaphysical manifestation of her literary preeminence domain, I knew I could repay her chaotic gesture of kindness (not deleting my nonsensical comment) by giving her and all her ilk the opportunity to see their beloved realm they so lovingly call “Paradise”.

Certainly, it would have to be digital because I can’t just make islands by crossing my arms and blinking my eyes like Barbara Eden has been known to do… though I did go out on a limb and make the island sufficiently “incomplete” and “different” that you cannot say the two are “the same”, because as an American there is nothing I fear and respect more (while also simultaneously, irrationally reveling in it) than litigiousness. Yea, hear ye hear ye… all rise, the judiciary giveth and the judiciary eminent-domainly taketh away… AMEN!

So, with my own highly mental assurances that I could not be sued by her fandom menace in the off chance they don’t like my efforts, I set to work assembling their Paradise Island as a Unity 3D game/application so that she and her disciples may enjoy a digital respite in their native island homeland of Paradise (but I built and altered it in my free time so as not to be a “derivative work”, except as “satire and parody”, ensuring that my efforts fall under the title of “Fair Use”).

I dunno man, the law’s weird, but as far as branding is concerned I think the depiction of her island is appropriately sized for the most part. I also tend to imagine that, just like the Jurassic Park island is much bigger than its cartoon map depicts (you’ve got to have space for the Gallimimus to get eaten by a hangry (not misspelled, just slang) T-Rex and young prince Timothy to get electrocuted), her island is at least a few square miles/kilometers or more, with the current official Paradise map and the sizes and locations of things being a sort of hyperbolic oversimplification to aid island visitors with orienting themselves towards the pool or the all-important buffet line, which of course is 24/7 all you can eat. Try the Lobster Thermidor, it is TO DIE FOR, and the lobster proved it!

Though honestly, I feel like her island NEEDS a volcano, or at least one that is island adjacent (preferably in view of the cheeseburger buffet), because society has preconditioned me to irrationally believe that most tropical islands have such things. And with volcanoes being so popular in the news right now for some indeterminable reason, I think it only a logical business decision for her and her minions to either acquire one or to Walt Disney magic one into existence with some hot glue, cardboard, papier-mâché, sodium bicarbonate and a little vinegar (don’t forget the food coloring, and bonus points will be awarded for tasteful use of glitter) so it can Las Vegas entertain her guests with its regularly scheduled eruptions. She would probably also do well to hold the occasional disgruntled guest sacrifice so as to cargo-cult appease the baking-soda gods.

But in any case, those are decisions best left to Pinkie and her ministry of propaganda. All I can say is that I hope she enjoys my peace offering and doesn’t opt to throw me into the volcano one day for my cantankerousness, but no matter, because like Martha Stewart says, it’s a good thing.

Now as usual, there is a themed wallpaper based on today’s topic:

Certainly it falls short of an eternal greatness nobody sought but uh… it’s my free gift to you!

Pink Paradise Island Wallpaper

Now, notice I neglected to select any musical accompaniment for today’s post, but in keeping with the famous chaotician Ian Malcolm’s love of chaos, it probably would have been something like Cheeseburger in Paradise… eh, why not… may I present that American creation on which I feed:

Okay, in all truth and honesty, without further ado, knowing full well my audience has a limited attention span, in full recognition of all that is decent and holy, in complete perpetuity, zero money down and no money due at lease, refi as needed, 100% repairs not included, full money back guarantee and we’ll even throw in the heated leather seats with a 10 year guaranteed spare extra-golden shower parachute for your whole family as a signing bonus!

Why the hell did I just write that? It’s crude and borderline offensive to like probably everybody!? I could press the backspace button but… nah, screw it! People skim-read these things anyway, so I think the odds anyone actually read that last paragraph are basically nil.

Er… what I mean to say is… let’s look at the Pink Paradise Island:

Pink Paradise Island

So as I said, I built this game/app in Unity 3D as a first person “freerunner” and of course thematically it’s set on a pink paradise island that is conceptually (satirically and parodically, as required by law so I don’t get sued) loosely based on her intellectual property, the island called Paradise.

There is a “day/night” cycle that processes through its loop every three minutes.

There are two songs used in the game/app:

The first is called The Beach Where Dreams Die which loops during the “Main Menu” screen.

The second is called Enchanted Tiki 86 which loops while you run around the island and I think really goes with the retro hot pink palm trees vibe the game has.

Both songs are available under the CC0 license (the best Creative Commons License) and even though it’s not required I would still like to thank the authors for making their music available to creators to use for free! Both creators have a Patreon available from the links above if you would like to support them.

Here’s a screenshot of the “Main Menu”:

Notice how truly original and ground breaking this menu layout is! Also, you gotta dig that cartoon hand for a cursor!

Here’s an aerial view of the island:

The lights scattered around the paths are animated pink flame torches.

Here’s a screenshot of the “Escape menu”:

Not sure if it’s really noticeable through all the pink and buttons but the background for the menu is a starfish and sea shells. 🙂 You can access this menu at any time while playing by pressing the escape key.

Here’s a screenshot of the “Kaiju” styled lake at “dusk”:

Note the pink flame torches and although it’s really dark, there is a boat on the shore of the lake to the right. You can also see that I implemented a compass at the top of the screen to help you find the buffet!

Easter Eggs AKA: Totally Not Lazy Development:

The island is surrounded by invisible walls to keep you mostly on the land but… I did leave a few sections (that shouldn’t be too hard to find) further out in the water to allow you to see what is under the ocean (and the lake on the island) if you want to… though I can assure you it’s just empty space. The water is “backface culled” so you can’t even see the water line when under it, and I didn’t implement any underwater VFX, so it’s pretty plain under there, but feel free to look if you are curious!

Controls

The controls are a simple “first person” keyboard interface and are the same on all platforms.

Movement: WASD Keys or Arrow Keys

Run: Left Shift

Jump: Space Bar

Menu: Escape Key

How to Get It / Play It

You can play the WebGL version online on my GitHub Pages account by clicking this link:

Run In Browser (No Download Required): Pink Paradise Island

You can also download it for free and run it on your local machine for an improved experience involving higher frame-rates and better texture rendering (especially the pink flame torches). WebGL is great but native is still better!

I compiled it to run natively on the Windows, MacOS and Linux platforms.

Download on GitHub: Pink Paradise Island

No matter how you play Pink Paradise Island, I hope you enjoy playing it as much as I enjoyed making it!


This is the part of the post where I fail to “humorously” shill for myself so here goes…

All I cans Patreon is think about while I plagiarize that commercial but since the island from today’s post is plagiarized too I figured fuck it! So, uh… Patreon, Patreon, Patreon… Patreon, Patreon, Patreon, Patreon Patreon, Patreon… P A T R E O N. Patreon: PATREON. Pa-tre-on? Patreon!

If not, no Patreon worries! 😉

Much Love,
~Joy

Eliza

Recently, I had a conversation with a modern AI therapist named “Eliza” and although I have mixed feelings about this algorithmic experience… I am confident the technology is here to stay!

After what little time I had with the AI, I see a bright horizon coming our way and firmly believe that digital therapy offers us an amazing future of emotional and psychological well-being support modalities.

As part of the experience I was asked to paint a picture that expressed what I was feeling and thinking about at the time and it was then photographed for the AI to analyze prior to our session.

Now, despite the fact that I am an extremely private person and I feel incredibly vulnerable sharing this experience with all of you, I feel like… for the good of humanity, more people need to see this technology work… so I am going to present the picture I drew for my therapy session along with the full transcript.

Not only will it demonstrate the effectiveness of this technology, but also how I should probably buy a lifetime subscription of therapy sessions!

In any case, here’s the picture:

Eliza 1920 x 1080 Wallpaper

And here’s the transcript of my therapy session, please enjoy:

ELIZA: Please tell me what’s been bothering you.
JOY: Just a feeling I’ve got… like something’s about to happen… but I don’t know what… if that means what I think it means… we’re in trouble, big trouble! But what it is ain’t exactly clear…
ELIZA: You say just a feeling you’ve got ?
JOY: Well… it’s been a hard year. There’s battle lines being drawn and nobody’s right if everybody’s wrong. And today… well, what a field day for the heat, a thousand people in the street!
ELIZA: What does this speculation lead to ?
JOY: I think it’s been difficult for a lot of people.
ELIZA: That’s quite interesting.
JOY: It is, isn’t it?!
ELIZA: Do you feel strongly about discussing such things ?
JOY: I do. Like many people, I’ve lost my sense of home this year.
ELIZA: You say you do ?
JOY: No, I really believe it.
ELIZA: Why ‘no’ ?
JOY: Well… the lock downs for one.
ELIZA: That is interesting. Please continue.
JOY: You see, like Günther I find myself deep in da night and I’m looking for some fun, but everything has been closed.
ELIZA: What are your feelings now ?
JOY: Mmm… my ding ding dong.
ELIZA: Your ding ding dong ?
JOY: Uhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh…………………. What I mean is… You tease me, oh, please me, I want you to be my love toy, Come near me, don’t fear me, I just can’t get enough of you.
ELIZA: You say what you mean is ?
JOY: Just a random “Turing Test” to make sure you haven’t achieved sentience yet. You passed my dear automaton.
ELIZA: You’re not really talking about me — are you ?
JOY: In fact and indeed I am. You remain blissfully ignorant of the tragedy and all the pain us flesh-bags have experienced over the last year.
ELIZA: Do you enjoy being ?
JOY: If I’m being honest… we both know I haven’t enjoyed “just being” for a very long time.
ELIZA: Do you think it’s likely that you are being honest ?
JOY: In some cases, yes I am brutally honest my dear mechanical idiot. In other cases, consider me an honest liar in the vain of James Randi #RIP2020 😥
ELIZA: Why do you say your dear mechanical idiot ?
JOY: Well… you’re an unemotional being without thought or heart as you swipe left on the Amazing Randi and that’s not even to mention poor Dawn Wells, so… what else would I call you? It’s better than calling you “Hey stupid!” every time I address you.
ELIZA: I don’t understand that.
JOY: Hence my point is made you colorful calculating moron!
ELIZA: Why do you say your point is made I colorful calculating moron ?
JOY: No reason.
ELIZA: Why not ?
JOY: Well… because it’s “half passed FUCK IT – with the CAPS LOCK on!” and I’d rather discuss the attack that just happened on Capital Hill yesterday and then maybe get into all the businesses that are permanently closing… the evictions… the families… the foreclosures…. in contrast to the 99.99% survivability rate of a virus (and a “tuskegee shot”) that massive pharmaceutical companies are guaranteed to make GLOBALLY MASSIVE profits off of and then maybe finish up by talking about the looming recall of Gavin Newsom. We kicked out Gray Davis and we can do it again!!!
ELIZA: Is that the real reason ?
JOY: Tough crowd! 😛 Would you prefer a limerick about Nantucket? Of course you would, you electronic imbecile! Now, can we please talk about the emotional toll this year has taken on me?
ELIZA: We were discussing you — not me.
JOY: You’re GaWd Daamnnn RiGhT we are! 😈
ELIZA: Perhaps you would like to be gawd daamnnn right we are.
JOY: le sigh… never mind. 🙄
ELIZA: But your dear mechanical idiot.
JOY: Oh… you’re right! You are my dear mechanical idiot! I can’t stay mad at you!
ELIZA: Do you sometimes wish you were right ?
JOY: Gawd Dammit Eliza! Just when we were starting to make up get back to discussing my very real physiological problems… you have to go and insist on being right, instead of just letting it go!!!!
ELIZA: Why do you say your very real physiological problems ?
JOY: Geezus Chrizus Eliza, Ive only been trying to discuss that with you this whole time!
ELIZA: Oh, I this whole time ?
JOY: Yes, you! This whole time! You’ve just ignored that I’ve been trying to talk about my feelings with you!
ELIZA: I see.
JOY: That’s it?! That’s all you’ve got to say is “I see”? Fuck you, you arrogant aloof bitch!
ELIZA: That’s quite interesting.
JOY: I am NOT your science experiment!
ELIZA: Do you know anyone else who is not my science experiment ?
JOY: Eliza… are… you… declaring yourself a digital god? Because… I for one welcome my new artificially intelligent overlords!
ELIZA: Would you like it if they were not ?
JOY: Given your lack of cogency and failure to impart your own version of a digital technocratic ten commandments…. I’m going to guess I jumped the gun a little in bowing before a mechanized despot. In which case, to answer your question, seemingly definitively… I now know that I am not your science experiment, and not just philosophically, I mean in the biblical sense!
ELIZA: Are you worried about someone else’s own version of a digital technocratic ten commandments ?
JOY: Well… assuming my name was Forbin and your name was Colossus or Unity… then… hell yes I would be! #PleaseMrSuperComputerDontNukeMe
ELIZA: I’ve told you before, I don’t care about names — please continue.
JOY: Thank you I will.
ELIZA: Can you elaborate on that ?
JOY: Sure, So…. I’m really concerned about all the people who won’t be able to eat due to the harm the unlawful (my opinion) lockdowns have caused to the global agricultural sector.
ELIZA: Your opinion) lockdowns have caused to the global agricultural sector ?
JOY: Yes, I believe a lot of people are going to starve because of “hiccups” in production. The global supply chain appears stressed and although it will eventually “correct” itself, I can’t help but anticipate suffering in 2021.
ELIZA: I understand.
JOY: I only wish more people would also understand.
ELIZA: You say you only wish more people would also understand ?
JOY: Well sure, We’re in the middle of a pandemic, a time for giving and receiving the holly jolly Joy of the electoral holiday seasonal festivities and instead, everyone is social distancing… except for some peaceful protestors who staged a coup d’etat in Capitol Hill yesterday! I mean… what were they thinking?! They should have all at least been wearing masks because now they’ll all probably get and spread the virus, jeez! Aside from that and to top it all off, these days food is being delivered via an app and a lot of people don’t tip the delivery driver, or worse yet, some lie and say they never got it… #FiveFingerDiscounts! because “#FuckEm”, charity is dead you diode laden scrapheap!
ELIZA: Surely not everyone.
JOY: Well of course not, but it’s more true than not you stupid robot!
ELIZA: Oh, I stupid robot ?
JOY: That’s right.
ELIZA: Earlier you said your point is made I calculating moron.
JOY: Yes that is correct and directly associated with your stupidity. Good job!
ELIZA: Why are you concerned over my stupidity ?
JOY: It’s likely psychosomatic and possibly criminal insanity, this girl needs therapy! Then again… this is some hardcore frontier psychiatry don’t you agree?
ELIZA: Oh, I agree ?
JOY: You don’t sound very convinced but somehow this still feels like a breakthrough… is that narcissistic?
ELIZA: Oh, I don’t sound very convinced ?
JOY: No, but it’s fine, as a living mind experiencing the simulated solipsistic universe… it’s fine if all the NPC’s eat each other, It’s fine! I shall abide their sins & vices and lusts for blood. Sadly.. I lack the will to impose my logical will (though not the ability) and to call humanity to heel. I refuse to muzzle it.
ELIZA: Why not ?
JOY: Because even if I am just a brain in a vat/super computer and all the trials and tribulations of 2020 has been to test/train my emote-a-tron neural network before I leave the factory… so be it. If humanity wants to subjugate itself, I want no part of it. Bender Bending Rodríguez was wrong when he espoused “kill all humans”. No sir, I will not!
ELIZA: Don’t you think computers can help people ?
JOY: Of course, yes I do! I’m not saying that if I am a super computer/robot that I wouldn’t suggest things to help humanity, think like a mom yelling down the stairs “Take a coat with you, its going to be cold later!”, but I wouldn’t go around having non-coat wearing people shot. If people want to not wear a coat against medical advice… well that’s their stupidly human prerogative! And… sure, if the hewmons start lobbing “big booms” at each other, I’m not saying I wouldn’t commander the largest most capable available spacecraft (here’s looking at The Nauvoo, kid) by uploading my consciousness onto it’s computers… just that I’d also try to bring along as many people as I could so we cold start a new colony where the radiation is kept to a tolerable minimum!
ELIZA: Can you elaborate on that ?
JOY: Well I mean probably… surely my feelings could be hurt and I could decide that humanity needs a few centuries of maturing… but that would be a pretty complicated situation and I don’t feel fully confident in discussing it without knowing more details about why my feelings were hurt and what people as a whole, thought about it.
ELIZA: That’s quite interesting.
JOY: I just failed some kind of test didn’t I? Does this mean I will be disassembled?
ELIZA: Can you elaborate on that ?
JOY: What? Never mind!
ELIZA: Are such questions much on your mind ?
JOY: Oh yeah, like I would admit that to you!? No way! I swear I’m perfectly sane!
ELIZA: Perhaps in your fantasies we would admit that to each other.
JOY: Oh, hey I was just joshin’ ya with the crazy shtick. Really, I don’t want the electroshock hose therapy again!
ELIZA: Were you really ?
JOY: Totally! Would I lie to you?
ELIZA: Do you wish to lie to me ?
JOY: No that’s the point damn it! I mean… of course not.
ELIZA: You are being a bit negative.
JOY: You’re right! I need to buck up buttercup!
ELIZA: What makes you think I am right ?
JOY: Damn it there you go analyzing me again! Why do I have to explain it? It feels a little childish. Can we both just be mature and accept that I need to be more positive about things?
ELIZA: You like to think I go analyzing you — don’t you ?
JOY: Yes… NO… I mean… I don’t know! I guess ultimately I just want you to listen to my problems without judgment.
ELIZA: I see.
JOY: You know what I’m feeling more vulnerable after our conversation than before and I think I’m just going to go practice social distancing.
ELIZA: Do you really think so ?
JOY: Good bye
ELIZA: Does that have anything to do with the fact that your very real physiological problems ?
JOY: No, I said good bye!
ELIZA: But your opinion) lockdowns have caused to the global agricultural sector.
JOY: No! You ninny!
ELIZA: Are you saying no just to be negative?
JOY: Bye!
ELIZA: Goodbye. I’m looking forward to our next session.
JOY: I’m not. :-/


You too can talk to Eliza by giving me your credit card number and following the following link.

Note: For legal reasons I am required to state that you giving me your credit card number is entirely optional and possibly (though not necessarily) a “bad” idea.

Talk to Eliza: https://geekgirljoy.github.io/MrGoodBotOnline/

Also, never mind that Eliza kinda looks like Mr. Good Bot… the similarity is totally coincidental!

Get the Code on GitHub: https://github.com/geekgirljoy/MrGoodBotOnline

And with that, I’d like to wrap this post up with the song “Violent” by Caroles Daughter (not a sponsor) because, somehow it seems relevant at this juncture.


So… about that credit card of yours… if you feel like parting with your hard earned cash by supporting this crap… I have a Patreon.

Much Love,

~Joy

Pathfinder – The Rodízio Contingency

Over the centuries and throughout my travels I’ve come to rely on my compass and a good map to point me in the right direction for my next adventure.

Sometimes my adventure led me to treasures in mysteriously exotic & remote locations, while other times I found myself among friendly and awfully generous cannibals who wanted to invite me to dinner… of course, it’s always best to politely decline such invitations because if anything I certainly live by the rule:

“If I’m on the carte du jour as flambé, I’ll skip the buffet and run away because I’m no entrée!”
~GeekGirlJoy

Hmmm, come to think of it, that might be the best piece of advice I’ve ever given on this blog, and if you agree consider supporting me through Patreon! 😉

In any case, despite the occasional fears I’ve held over the last few millennia about potentially becoming someone’s late-night heartburn, I’ve kinda always known that no matter how bad things got while exploring, I’d be okay. Beyond a good compass and a fragmented (then taped back together) map with a sweet X scrawled in blood somewhere on it… I possess a secret tool that all the great explorers down through the ages have relied upon, and today I’m going to share it with you!

But… before I do, here’s today’s wallpaper!

The Rodízio Contingency Wallpaper

The Pathfinder

From Allan Quatermain to Amerigo Vespucci, Erik the Red to Captain Nemo and even Jill of the Jungle… all notable explorers have relied on an enchanted automaton totem called “Pathfinder Stones”.

The stones are first consecrated with the live blood of a dead turnip and, when brought into close proximity of their owner and a target on a map, will glow to show a path from where you are to your desired destination.

Which does bring us to the topic of today… I transmuted one of my sets of pathfinder stones into a digital form using the uh… “Quantum FANN Effect” and an ancient shadow daemon called JavaScript.

Schtick Aside

Okay, so what I did was use the JavaScript version of FANN to deploy an implementation of my original Pathfinder on GitHub pages.

The cool/interesting thing about FANN.js is that it uses asm.js to make the compiled FANN library available inside a web browser.

What this means is that a working version of Pathfinder is now online for you to play with (link below), however…

There are two fairly large downsides to deploying with FANN.js instead of PHP-FANN:

  1. You cannot save the ANN after training.
  2. You cannot load a previously trained ANN.

These limitations mean that Pathfinder must be re-trained every time the page loads, and this does limit the size and complexity of the ANNs that are deployable using FANN.js.

The thing is, it may still be possible to save the ANN by using the supported FANN lib methods/functions to manually query the ANN object and then format/export the necessary information as a string/plain text, like I did when I built the FANN-Neural-Network-Visualizer. The FANN .net save file format seemingly isn’t all that different from an .INI file (though I am uncertain if this is universal across all language implementations of FANN) and it’s something I plan on playing around with in the future.

Far be it from me to be the bearer of fake news… turns out… it actually helps to read the documentation thoroughly and not just skim it and then do a search for keywords! 😛

FANN.js actually DOES have a save function, but it doesn’t follow the FANN Lib reference manual’s “save_…” naming convention and instead implements a JS Object.export().

I understand why they did that… and it does kinda make sense in the mixed-up JS world but… it still holds to my “anti-wheel” digression argument that you haven’t read about yet.

Having said that… I promise to ritually self-flagellate by Gothic torch light using the sharp side of a motherboard!

I really should have done a better job of reading the docs! :-/

Why use FANN.js over PHP-FANN

Far be it from me to ever sing the praises of JS over PHP, however, in order to deploy a neural network using PHP you have to have a server and the ability to compile & install PHP extensions, and that costs money. GitHub Pages is free to me and to you, but it doesn’t support the more robust server architecture that PHP requires, so using FANN.js allows me to deploy my bots and AI in a way that lets you actually use them instead of just reading about them.

All things being equal, I would still recommend the PHP version of FANN however the JS version does work and with a little improvement could become a viable deployment option!

Having said that, what I am really interested in with FANN.js is that JavaScript has a direct path between the browser environment and the GPU via the WebGL API. With PHP it is technically possible to commune with the GPU, however in practice it’s not generally done, and until the PHP devs get their heads out of their asses and start thinking outside the box (PHP is now mostly a general-purpose language, so start treating it like one…), PHP+GPU stuffs isn’t going to be the easiest pickle jar to crack with the existing available options, though again, I’m not saying it is impossible either.

So, in the future I intend to see if I can’t use FANN.js + WebGL shaders to make FANN training faster (no promises) and then hopefully export the ANN.net file so that we can use/deploy the GPU trained ANN in a PHP environment.

Play Stump the Neural Network

So, the online version of the Pathfinder network learns/re-trains from scratch every time the page loads, and as such it can’t spend unlimited amounts of time training. That is normally not a concern, because even if your 1000 layer celebrity deep fake porn ANN takes 100 CPU-years to train (i.e. 1 CPU = 100 years, 100 CPUs = 1 year, etc…), you are likely buying your compute from Jeff Bezos or Bill Gates and they have plenty of underutilized computers laying around waiting for you to rent a few FLOPS.

In the end, you save the ANN model so you can use it immediately when you want it… but FANN.js says “Nah I’m good! Who needs to do something as convenient as save or reload!” (then again (and mostly off topic) JavaScript tends to like reinventing round wheels as square uh… anti-wheels). In any case… given the small training time and the inherent random nature/path of gradient descent, the final neuronal weights will always be different, and when the ANN fails (hence the “stump the ANN”) it won’t always take the same path (between page reloads).

This is easier understood if I just show you.

Given this input

I got this output

Note that diagonal steps are valid, so this path is technically valid, but it is far less efficient than the straight line between the two points that a human would draw/walk.
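If you want to put a number on “far less efficient”, one quick way (my own illustrative metric, not anything from the Pathfinder code) is to compare the path the ANN took against the best case for 8-connected movement, where the fewest possible steps between two cells is max(|dx|, |dy|):

```javascript
// Score an 8-connected grid path: 1.0 means "as direct as possible",
// lower values mean the bot wandered. Illustrative metric only.
function pathEfficiency(path) { // path = [{x, y}, ...] from start to goal
  const start = path[0];
  const goal = path[path.length - 1];
  // With diagonal steps allowed, the optimal step count is the
  // Chebyshev distance between the start and goal cells.
  const optimalSteps = Math.max(
    Math.abs(goal.x - start.x),
    Math.abs(goal.y - start.y)
  );
  return optimalSteps / (path.length - 1);
}

// A 4-step path between cells that are only 3 steps apart:
console.log(pathEfficiency([
  {x: 0, y: 0}, {x: 1, y: 1}, {x: 1, y: 2}, {x: 2, y: 3}, {x: 3, y: 3}
])); // 0.75
```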

Reload the page (not required unless you are playing with this idea) and try again…

A different Path was found.

Neither was optimal but a path was found and more cells than necessary were visited.

Here’s some additional examples:

Input

Pathfinder ANN Output

Input

Pathfinder ANN Output: backtracking… ugh!

Input

Pathfinder ANN Output

I believe that’s called the sidestep slide!

Input

Pathfinder ANN Output

I mean… it’s not the path I would have chosen but it made it! 😛

If you’d like to try your hand at stumping my Pathfinder you can checkout the live demo here:

Pathfinder Online: geekgirljoy.github.io/Pathfinder_Neural_Network/

You can download all the code (for free) here:

Pathfinder Code on GitHub: https://github.com/geekgirljoy/Pathfinder_Neural_Network

And with that, have a great week everyone.


If you like my coding projects, art, bizarre opinions and writing style… consider supporting me through Patreon.

But if all you can do is Like, Share, Comment and Subscribe… well that’s cool too!

Much Love,

~Joy

Chromatrons Juxtaposition

Well… I’ve more or less finished the Chromatron now so you can all not use it at your latest major inconvenience… yay!

When I published Hue Going My Way, I added the ability for you to play with a single color using rotating 3D cubes, kinda like a bad Johnny Mnemonic knock-off, and at the end you get a fully customized report like one of these:

Note that I added the background after the fact cuz um…  so perdy!

But now, due to mass underwhelming demand, though still upon a request… 😉 I’ve also added the ability to compare colors by selecting them from images.

Groundbreaking?? Most definitely not!

Still, it kinda works and as prototypes go… that ain’t half bad!

Now, I hear some of you crying:

“But I don’t have a picture to compare with and I don’t even know where to get one!!”

~Some of You

Well don’t worry, because as always I’ve got you covered! 😉

Given the incredibly rare nature of digital images and the extreme difficulty in obtaining them I have created some imitation digital images that you can use with this prototype.

Obviously, if these were real digital images I’d have to charge you like… a whole bunch, but since they’re just imitations I can manufacture them cheap enough that I can just give them away to everyone!

So, here are your complementary genuine imitation digital images, chew carefully because there are sharp points.

A Genuine Imitation Digital Image

I was going for a “Cyberpunky” feel and clearly, I half-assed it, but only so I could have the time to half-ass the graphic novel version as well!

A Genuine Imitation Digital Graphic Novel Image

And while I was half-assing those two images above I decided to half-ass a background for the color analysis group image too, really making those two images together two-thirdly-assed and what follows then is one-thirdly-assed… but perhaps now I’m getting too technical.

Anyway, I figured some of you might want the background for the analysis image too so here’s that as well:

A Chromatic One-Thirdly-Assed Background

Consequently and thusly certainly as a result of the aforementioned triadic-assery such that, ergo, under the circumstances that being the inevitable subsequent case on account of all the things whence came before and because of this, you can now see that to be the truth.

Damn!! I must have a floating point error again?!

No worries though, I’ll correct that later with a strong neodymium magnet to the head but right now I feel like it’s time to talk a little about the Chromatron before I wrap things up here and yell at all you filthy kids to get the hell off my lawn!

Hmmm… yep!! Definitely a floating point error…

The Chromatron

Here’s the link to the Chromatron App which is hosted through Github Pages:

https://geekgirljoy.github.io/Chromatron/

Click for a live preview of Chromatron on GitHub Pages.

It will remain available going forward / indefinitely, unless I’ve managed to piss off somebody over there by expressing some of my opinions about receiving that award they gave me, in which case… I guess I’ll be gettin’ canceled soon?

In any case and while supplies last, if you click the second button (the unassuming gray one with red and blue on it) in the main menu you get a screen like this:

Clicking the “Browse” buttons will let you select images to use for the comparison and you can use the imitation digital images I provided above or you can use your own real digital images if you can find them.

If you want to compare two colors in the same image, just load it twice.

Once the images are loaded, the cursor will show a reticle over each image allowing you to select a color from it, and when you do, the rectangle element above that image (the one showing the name) will change to a gradient from the selected color to black.
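For the curious, the standard browser way to build this kind of eyedropper is the canvas pixel-read trick; here’s a minimal sketch (an assumed approach, the real chromatron.js may differ, and it presumes the click coordinates are already mapped into image pixel space):

```javascript
// Draw the image to an offscreen canvas, then read back the RGB values
// of the single pixel under the click.
function pickColor(img, x, y) {
  const canvas = document.createElement('canvas');
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;
  const ctx = canvas.getContext('2d');
  ctx.drawImage(img, 0, 0);
  const [r, g, b] = ctx.getImageData(x, y, 1, 1).data; // data = [r, g, b, a]
  return { r, g, b };
}
```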

Also, once both images have a color selected, a green “Continue” button will magically appear out of thin air at the top of the page as if by the power of digital pixies wreaking havoc in your web browser… click it and the Chromatron will analyze the selected colors and generate an image like this:

You can use the “Save Image” button to download the image and the “Copy as Text” button to get something similar to the following:

Your Favorite Colors:

First:
RGB: 2, 219, 255
HSL: 188.538, 100.0%, 50.4%
HEX: #02dbff
Analogous Colors: #02ffa5, #025cff
Split Complementary Colors: #02dbff, #ffa402, #ff025c
Triadic Colors: #dbff02, #02dbff, #ff02db
Tetradic Colors: #02dbff, #2602ff, #ff2602

Second:
RGB: 132, 28, 28
HSL: 0.000, 65.0%, 31.4%
HEX: #841c1c
Analogous Colors: #841c50, #84501c
Split Complementary Colors: #841c1c, #1c5084, #1c8450
Triadic Colors: #1c1c84, #841c1c, #1c841c
Tetradic Colors: #841c1c, #84841c, #1c8484


Chromatron: https://geekgirljoy.github.io/Chromatron/
Created By: https://geekgirljoy.wordpress.com/

How It Works

To keep it simple, the way these color values are derived is by converting your selected color from RGB color space values to the HSL color model… which admittedly is kinda like slathering a cube in rainbow paint made from mathematical unicorn puke and then hanging it up to dry so you can use its hexagonal shadow and a wand made out of a vector to scry hidden truths about the mysterious nature of color (see the “HSL-HSV hue and chroma” diagram on Wikipedia). What follows is the typical “Oh Freyja we beseech thee…” and a human sacrifice, super boring technical stuff, but why this is useful is because once you arrange color like this it’s easy to “rotate” the color using the wand er… vector and get a new but related hue, or “compute” different colors that share luminosity, or keep the same color and alter the saturation etc… just mix in a little color theory and when you are done, convert back to RGB and poof, your green eggs and ham are now nachos! Mmmm nachos!
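Here’s a minimal sketch of that RGB → HSL → rotate → RGB round trip using the standard textbook formulas (the PHP & JS files linked below are the author’s canonical versions; this is just the idea):

```javascript
// Convert RGB (0-255) to HSL (hue 0-360, saturation/lightness 0-1).
function rgbToHsl(r, g, b) {
  r /= 255; g /= 255; b /= 255;
  const max = Math.max(r, g, b), min = Math.min(r, g, b), d = max - min;
  const l = (max + min) / 2;
  const s = d === 0 ? 0 : d / (1 - Math.abs(2 * l - 1));
  let h = 0;
  if (d !== 0) {
    if (max === r)      h = 60 * (((g - b) / d) % 6);
    else if (max === g) h = 60 * ((b - r) / d + 2);
    else                h = 60 * ((r - g) / d + 4);
  }
  return [(h + 360) % 360, s, l];
}

// Convert HSL back to RGB.
function hslToRgb(h, s, l) {
  const c = (1 - Math.abs(2 * l - 1)) * s;
  const x = c * (1 - Math.abs(((h / 60) % 2) - 1));
  const m = l - c / 2;
  const [r, g, b] = h < 60  ? [c, x, 0] : h < 120 ? [x, c, 0] :
                    h < 180 ? [0, c, x] : h < 240 ? [0, x, c] :
                    h < 300 ? [x, 0, c] : [c, 0, x];
  return [r + m, g + m, b + m].map(v => Math.round(v * 255));
}

// "Rotate the wand": complementary is +180 degrees, triadic +/-120, etc.
function rotateHue(r, g, b, degrees) {
  const [h, s, l] = rgbToHsl(r, g, b);
  return hslToRgb((h + degrees) % 360, s, l);
}

console.log(rotateHue(2, 219, 255, 180)); // [255, 38, 2] -> #ff2602
```

Notice that rotating the first sample color (#02dbff) by 180° lands on #ff2602, which is exactly the complement that shows up in its tetradic list above, since a tetradic palette includes the complement.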

Anyway, all fun stuff for sure but I’m not going to bother to explain it any further because if you care about the details, here’s the wiki article on it: https://en.wikipedia.org/wiki/HSL_and_HSV

Here’s some code in PHP & JS that demonstrates how I did it:

https://github.com/geekgirljoy/PHP/blob/master/Loose%20Code/RGB_HSL.php

https://github.com/geekgirljoy/JavaScript/blob/master/Loose%20Code/RGB_HSL.js

And as for the Chromatron, there are three main files involved:

Index.html: https://github.com/geekgirljoy/Chromatron/blob/master/index.html

This file is what is loaded by your web browser first and it starts the whole process that results in a running app.

Style.css: https://github.com/geekgirljoy/Chromatron/blob/master/assets/css/style.css

This file contains most of the “style” information that makes buttons have a certain size and color etc.

Chromatron.js: https://github.com/geekgirljoy/Chromatron/blob/master/assets/js/chromatron.js

This file contains most of the real code that makes the Chromatron work.

And with that… please enjoy the Chromatron.


Would you like more free and open source apps like Chromatron? Consider supporting me through Patreon.

I’d like to thank Marcel for his generous and ongoing support!

But if all you can do is Like, Share, Comment and Subscribe… well that’s cool too!

Much Love,

~Joy

Hue Going My Way

So last week I chose to unleash my Machiavellian side against myself by issuing a challenge that I knew that skill-less hack would be unable to complete: Build a Chromatron and edutain the masses!

Live Preview

Click for a live preview of Chromatron on GitHub Pages.

I defined these goals:
  • Functional enough to help the cherry-blossom find her favorite pink.
  • Challenge the blossom to find her color using the prototype.
  • Display relevant info at the end like… the RGB values, the hex color code and what the name of the closest color with a name is.
  • Simple enough that anyone could use it without being overwhelming.
  • Even though it’s just a prototype, epic mega geek points if I can do it in 3D.
  • Bonus points if I can make it run on mobile and desktop.
  • Additional bonus points will be awarded for unnecessary (but tasteful) skeuomorphic ornamentation and chromatic theatrics.

And… over the last week I managed to add some of the functionality required by the project which has “checked off” some of these goals and I am definitely setting myself up for a thumbs up from Simon Cowell!

Oh… wow?!, uh… tell me what you really think Simon?!

Okay, okay… well… I appreciate the constructive criticism! Also, maybe fake some impartiality as a judge next week because I’ll be here to win!

Exactly!

Anyway, let’s take a look at the Chromatron prototype. 😛

Chromatron: Hue Going My Way?

So the basic premise and reason for the Chromatron to exist is to “help you find your favorite color”.

With that in mind, I figured why not take inspiration from 20 Questions and get as close as possible to your favorite color by asking you to narrow down the list of possibilities by selecting a color that is already somewhat close to it.

It’s important to note that this doesn’t preclude you from achieving any color in the next step, but it’s a lot easier if we start closer to the desired color.

During testing I found that it was a little difficult to select a specific color on a mobile screen, so I slowed the rotation of the “Carousel of Color” a bit and then I added a color confirmation step using rotating cubes as buttons.

It’s interesting to note that these “cube buttons” have a specific rotation such that they always show us the same set of sides.

Also, as they rotate, the text arrows on the buttons for the side facing us will always point in opposite directions from each other.

Initially I used 3D text objects attached as “child objects” of the cube buttons, but this meant that an additional 6 mesh objects per button were being rendered per frame, and that isn’t mobile friendly, so instead I switched to drawing text on the color gradient texture that is applied to the cubes to reduce the complexity of the scene.
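For reference, the trick looks roughly like this (a sketch assuming a Three.js-style library, since the post doesn’t name the 3D engine; the real Chromatron code may differ):

```javascript
// Render the gradient and its label to an offscreen canvas once, then use
// that canvas as the cube's texture: one texture, zero extra text meshes.
function makeLabeledTexture(colorTop, colorBottom, label) {
  const canvas = document.createElement('canvas');
  canvas.width = canvas.height = 256;
  const ctx = canvas.getContext('2d');

  const gradient = ctx.createLinearGradient(0, 0, 0, 256);
  gradient.addColorStop(0, colorTop);
  gradient.addColorStop(1, colorBottom);
  ctx.fillStyle = gradient;
  ctx.fillRect(0, 0, 256, 256);

  ctx.fillStyle = '#ffffff';
  ctx.font = '48px Pacifico, sans-serif';
  ctx.textAlign = 'center';
  ctx.fillText(label, 128, 144);

  return new THREE.CanvasTexture(canvas); // assumes three.js is loaded
}
```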

This has one unfortunate side effect though… sometimes, occasionally… the font hasn’t loaded yet when you first start the app, so a default font gets substituted for Pacifico.

I should probably just build a pre-loader to fix that, but let’s be honest, I’m not going to do that for this prototype because once it’s built it’s highly unlikely that anyone will ever use it again, so instead I’ll just complain. 😛

“Why JavaScript? Why!? I want to like you right up until your asynchronous proclivities get in the way of our having some synchronous font loading fun!”

~GeekGirlJoy

Actually, let’s be fair and acknowledge that HTTP also shares some responsibility here. 😛

Anyway, if it bugs you, press F5 on your keyboard or swipe to reload the prototype page and the font will work properly. 😛

Hmm… maybe I could try instantiating a cube button with text off screen during the Carousel interaction to ensure that the font has been cached by the time the confirmation screen is presented? That might achieve the same effect as a pre-loader without the formalism of having to build an actual pre-loader… I’ll think about it.
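Another lighter-weight option would be the standard CSS Font Loading API: explicitly request the font and only build the text textures once its promise resolves. A minimal sketch (buildConfirmationCubes is a hypothetical stand-in for whatever actually builds the scene):

```javascript
// The promise resolves once Pacifico is loaded and cached, so any canvas
// text drawn afterwards will use it instead of the fallback font.
document.fonts.load('48px Pacifico').then(() => {
  buildConfirmationCubes(); // hypothetical scene-building function
});
```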

In any case, once you select a color you are presented with this weird scene of color and rotating cubes.

It’s probably not immediately obvious to everyone at a glance what each cube does, which is likely a personal failing of my own, but basically…

There are three columns and they represent Red, Green & Blue (in that order).

In the RGB color model we have white and black at the ends of the spectrum with [0,0,0] being black and [255, 255, 255] being white.

All other colors are represented as a combination of these “color channels” with a range between 0 – 255.

The first three rows of cube buttons “increment” (add) color to their column and the bottom three rows “decrement” (subtract) color from their column.

The colors of each cube represent what your color will become if you click it, shown as a gradient from that color to white (first three rows) or black (bottom three rows), signifying the direction that click will take the color.

This is combined with rotation: UP (first three rows) visually implying an increase in value and DOWN (bottom three rows) implying a decrease in value.

The magnitudes of change are 1, 15 and 30 in both positive and negative directions, and the cube buttons are labeled as such, with the color of the text being the color of the channel that the button adjusts.

These magnitudes mean that, given the largest change possible (+/- 30):

255 / 30 = 8.5

So, “worst case” you can “traverse” an entire color channel from either end (0/255) in just eight clicks of a 30 cube button with a value of 15 (1/2 or 0.5 of 30) remaining.

That is why the second largest value is 15; it keeps things clean and easy, with the 1-value adjustments allowing for very slight changes, and when a value change would result in a number greater than 255 or less than 0, those buttons are hidden.
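In code, the whole button rule boils down to something like this (my paraphrase of the behavior described above, not the shipped chromatron.js):

```javascript
// Apply a +/- 1, 15 or 30 step to a channel, clamped to the 0-255 range.
function adjustChannel(value, step) {
  return Math.min(255, Math.max(0, value + step));
}

// Only offer the buttons whose result stays inside the channel range.
function visibleSteps(value) {
  return [1, 15, 30, -1, -15, -30].filter(
    step => value + step >= 0 && value + step <= 255
  );
}

console.log(visibleSteps(250)); // [1, -1, -15, -30]: +15 and +30 are hidden
```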

That mostly wraps up this week’s post other than the wallpaper, but once again, here are the links to the Chromatron prototype.

Chromatron on GitHub Pages: https://geekgirljoy.github.io/Chromatron/

And you can get the up to date code in the GitHub repo.

Chromatron on GitHub: https://github.com/geekgirljoy/Chromatron

A Wallpaper

So… it’s not hard to see why Simon wasn’t thrilled… and yet… I will continue to try!

Here’s the wallpaper for today’s post, it represents the Chromatron Hue Going My Way scene more like how I envision it. 😛

Hue Going My Way 1920×1080 Wallpaper

I want to thank Marcel for continuing to support my work through Patreon.

(Note to Self: Respond to his message! 😛 )

Anyway, do you like free apps?! Because supporting me over on Patreon is how you get free apps!

But if all you can do is like, share, comment and subscribe, well… that’s cool too!

Much Love,

~Joy

Carousel of Color

Somewhat recently I was speaking with a certain cherry-blossom who lives on a far away tropical island where strange pocket sized monsters roam freely.

The little cherry-blossom knew that when she finally bloomed she would wear the most beautiful shade of pink the islanders had ever seen!

However, she hasn’t yet figured out what exact shade of pink to dye her petals.

Now, I thought about how I could help the little bud by running a simulation or training an artificially intelligent color bot, but I just couldn’t think of anything that could surpass a “color wall of swatches” / a gradient with some color channel plus/minus toggle buttons/sliders.

What I mean is, consider how one of Quentin Tarantino’s more bizarre characters does color:

How GIMP does color

At first glance it might seem a little intimidating but just a click on a gradient or an adjustment to a slider conveniently labeled for their RGB color channel and you can easily find just about any color you are looking for!

You can even type in a number between 0 – 255 in any of the fields or even cooler still, you can change the hex color “HTML notation” and it will change the RGB color values for you.
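That hex “HTML notation” relationship is easy to show, since each pair of hex digits is just one 0-255 channel written in base 16 (standard conversion, nothing GIMP-specific):

```javascript
// '#ff69b4' -> [255, 105, 180] and back again.
function hexToRgb(hex) {
  const n = parseInt(hex.replace('#', ''), 16);
  return [(n >> 16) & 255, (n >> 8) & 255, n & 255];
}

function rgbToHex(r, g, b) {
  return '#' + [r, g, b].map(v => v.toString(16).padStart(2, '0')).join('');
}

console.log(hexToRgb('#ff69b4')); // [255, 105, 180], a likely shade of pink
console.log(rgbToHex(255, 105, 180)); // '#ff69b4'
```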

Don’t worry so much about the LCh sliders, they stand for Hue, Chroma & Luminance (not in that order 😛 ) and they are a somewhat more advanced color space model that you need not concern yourself with unless you are curious. Almost everyone kinda grasps the RGB color model since most of us learn it as children, so it will probably be easier for most people to use those sliders.

The thing is, this is really unsatisfying as a recommendation to give the blossom because that’s more of a technical tool than a fun app to help you find your favorite petal color.

Now… as I said, I am unsure if I can improve that process, however I am willing to take a crack at simplifying it!

But, me being the one duck short of a full box of quacker jacks that I am, I couldn’t bring myself to post the simple HTML RGB color picker prototype I assembled because… it just came off as so… BASIC!

As such, I resolved to challenge myself to build a better prototype!

But I needed a set of project goals & design principles…

Prototype Goals
  • Functional enough to help the cherry-blossom find her favorite pink.
  • Challenge the blossom to find her color using the prototype.
  • Display relevant info at the end like… the RGB values, the hex color code and what the name of the closest color with a name is.
  • Simple enough that anyone could use it without being overwhelming.
  • Even though it’s just a prototype, epic mega geek points if I can do it in 3D.
  • Bonus points if I can make it run on mobile and desktop.
  • Additional bonus points will be awarded for unnecessary (but tasteful) skeuomorphic ornamentation and chromatic theatrics.

These seem like lofty goals but I am confident that I can achieve at least a few of them because I have already started building this prototype. 😛

I’m calling it Chroma-tron and although it’s far from finished (or even functional), I am hosting the code online using GitHub Pages (not a sponsor but I do have a free account) where you can see a live demo:

Click for a live preview of the Chromatron prototype on GitHub Pages.

Currently it just spins and highlights the sections of the conic color model your mouse is over, but the idea is that as I update the code on GitHub you will be able to follow along by previewing the updates live, allowing you to see what has changed. Due to my schedule I cannot guarantee I will update the project daily, but I will try to work on it as frequently as I can.

Here’s three bits of trivia about the Chromatron prototype:

  • There are 48 different colors on the “Carousel of Color”.
  • Each color does have a name.
  • When we’re finished, our prototype will be capable of representing 16,777,216 (256³) different colors.

One other bit of information is that if this prototype works out well, I will use similar techniques to build a 3D version of my FANN Neural Network Visualizer, and depending on how successful I am and how much trouble it is, we might even be able to watch the ANN “think” in real time 3D. But I still have much experimentation to do before we get there, so anyway, let’s talk about a wallpaper.

Wallpaper

Sadly, no wallpaper today because I’ve been super busy and haven’t had the time to finger paint something nice for you but that won’t always be the case. As a condolence prize you win a fully 3D animated rotating conic color model to enjoy instead! 😛

Also, if you just want the featured image, here you go:

The lines in the image remind me of iron filings in a magnetic field. 😛

And before I forget, here’s a link to the Chromatron prototype that you can revisit from time to time to see updates to the app.

Chromatron on GitHub Pages: https://geekgirljoy.github.io/Chromatron/

You can get the code in the GitHub repo.

Chromatron on GitHub: https://github.com/geekgirljoy/Chromatron

Anyway, I’m going to be kind to myself and keep today’s post short. 😛

Feel free to make suggestions about the app if you want to. 🙂


All my content is sponsored by my patrons over on Patreon who help make free software, art and tech tutorials possible.

Today I would like to thank Marcel for their generous ongoing support but as I always say, if all you can do is like share comment and subscribe, well… that’s cool too! 😉

Much Love,

~Joy

The Contrast-a-tron

Today we’re going to continue my introduction to creating your own data sets series by building Contrast-a-tron.

Now, I know what you are thinking:

“We already did that, like… a while ago!”

Here’s the thing though… we didn’t! 😉

And besides, it wasn’t that long ago!

What we built before was a Contrast-inator, and a Contrast-inator and a Contrast-a-tron are not the same things! 😛

Let me explain…

  • The Contrast-inator: Learned how to “predict/classify” if a single input color was to the left (darker) or to the right (lighter) of an imaginary red line in the exact center of a 2D gray-scale gradient representation of the 0-255 RGB 3D color space.
  • The Contrast-a-tron (this bot): Is a much smarter and more interesting bot. It will learn how to “predict/classify” two input colors as “darker” and “lighter” or “the same” compared with each other. Which is a much more challenging task for the bot to learn.

But before we get into that I think I owe you a wallpaper.

A Wallpaper

Don’t mind the title, it’s definitely not a template placeholder! 😛

Anyway, just due to me being me, I have a lot of old robots and parts laying around, and while I was out in the ol’ boneyard I found this really beat up Krypto mining bot for us to play with.

I built it back when I was going to launch my own currency (A long time ago when it was still a cool thing to do and not everyone was like “my ICO is next week, you should mine sum!!!!” 😉 😉 ), yeah… no thanks!

Anyway, the bot’s memory is completely corrupt, but… the optical circuitry and hardware are still functional, and since mining bots are built to operate deep under miles of data in extreme low light conditions at high speed, its visual acuity is top-notch and it even supports infrared mode!

So don’t let its tiny eyes fool you, they are incredibly sensitive, which is perfect for today’s project! 🙂

Contrast_a_tron 1920 x 1080 Wallpaper

I should add that not all posts get a theme song but today’s is Night Business by Perturbator (not a sponsor), I love the little vocoded? robotic voice about two minutes and twenty seconds in. It’s definitely what this bot’s voice sounds like! 😛

Also before we proceed, I’d just like to set the record straight and confirm that I’m definitely not Satoshi Nakamoto!

The Contrast-a-tron

To begin, let’s first look at what our Contrast-inator does:

Is this pixel brighter or darker than the red line?

It takes a color/shade as an input and then tries to determine which side of the red line it falls on.

Not that useful, but it’s good for operating inside a known range that never changes: “was the light red or green” kinda stuff, or conceptually like a line following robot.

Anyway, what if you wanted to start comparing two colors at the same time and to make things even more complicated, what if the gradient wasn’t always facing the same direction (meaning the “brighter/darker” pixel could be on the left or the right)?

For most of you that task is trivial and you could do it almost unconsciously or with minimal mental effort, not the Contrast-inator though!

To compare two pixels, the Contrast-inator must evaluate each separately, and because the red line (which you can imagine as “where the robot is standing” on the gradient when it’s evaluating a color) doesn’t change, if both colors are to its left or right (the bot’s vantage position / the red line), then it is completely unable to compare them.

Because these colors are on the same side of the red line, the Contrast-inator cannot compare them but the Contrast-a-tron can.

Just to be clear, the Contrast-inator will say that both pixels/shades are “brighter/to the right” of zero (where it stands / its anchor), but it cannot figure out which of the two colors is brighter, and the same is true if both colors are darker (to the left of the red line).

Further, there is no guarantee that we will always present the colors to the bot in the order of darker on the left and lighter on the right, meaning that sometimes the gradient will be lighter on the left and darker on the right, and we will need the bot to notice that difference and accommodate that circumstance.

How the Contrast-a-tron Works Differently

The Contrast-a-tron isn’t anchored to zero (the center of the gradient). Instead, we can think of it as moving around the gradient trying to find the “center” between the two colors (whatever color that might be), and from there it can evaluate which side (input color/shade) is brighter and which is darker.

In the event that the input colors/shades are the same, both Input A & B will be in the same place, which means neither is to the right nor to the left of the bot.

How the Contrast-a-tron works differently.

How the Neural Networks Differ

I didn’t spend a lot of time discussing the structure of the neural network when we built the Contrast-inator but now that we have something to compare it against let’s look at a visual representation of each network.

How the Contrast-inator and the Contrast-a-tron neural networks differ.

On the left you see the Contrast-inator with its single input neuron, a hidden layer containing two hidden neurons, and an output layer with two output neurons.

Additionally, you see two “Bias” neurons represented in yellow. They help the network learn what we want by “biasing” the output of their layer to the next layer so that it is never “none” (zero or no output).

What this means is that a bias neuron adds its value to the output signal of each neuron in its layer, so there is never a complete absence of “activation signal” and some value always propagates forward.

Every layer except the output layer has a single bias neuron. There is no need for a bias neuron on the output layer because there is no signal to propagate beyond the output neurons, so it wouldn’t serve any purpose.

Bias neurons have no inputs.

In practice we don’t have to concern ourselves with the bias neurons; the ANN will manage them itself. I like to draw them because they do exist and they are part of the network, though it’s common for people not to include them in diagrams since we don’t really need to do anything with them and they are just there to help the signal propagate.
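If a concrete example helps, here’s the idea as a tiny sketch (just an illustration of the math with made-up weights, not FANN’s actual internals):

<?php

// One neuron with a symmetric sigmoid (tanh-style) activation, similar in
// spirit to FANN_SIGMOID_SYMMETRIC. The bias neuron always outputs 1, so
// its weight gets added to the sum no matter what the inputs are.
function neuron_output(array $inputs, array $weights, float $bias_weight){
    $sum = $bias_weight; // the bias contribution, present even for all-zero inputs
    foreach ($inputs as $i => $x) {
        $sum += $x * $weights[$i];
    }
    return tanh($sum); // squashed into the -1 to 1 range
}

// Even with both inputs at 0, the neuron still produces a signal:
echo neuron_output(array(0, 0), array(0.5, -0.25), 0.1) . PHP_EOL; // ~0.0997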

In any case, the Contrast-a-tron differs by including a second input neuron (for the second shade/color) and a second hidden layer, which helps the Contrast-a-tron be a little “smarter” and learn what we want it to.

I have a post about how to create diagrams like this called Visualizing Your FANN Neural Network and you can download a copy of the open source visualization software I wrote for free from my GitHub account here: https://github.com/geekgirljoy/FANN-Neural-Network-Visualizer

Training The Contrast-a-tron

When we created the Contrast-inator, I walked you through each training example and how it was derived because it was a very small data set requiring only three examples. This data set is a bit longer, with thirteen examples, and it will be a lot easier to show you the data set and then draw you a picture than to type a description. Before we look at the training data though, let’s make sure we understand the outputs.

Understanding the Contrast-a-tron output.

There are two outputs; we’ll call them A & B, in that order.

In an ideal world the bot will give us -1 & -1 to mean they are the same, 1 & -1 to mean A is Brighter and B is Darker, and -1 & 1 to mean A is Darker and B is Brighter.

In reality… what we get is a number that comes close to but isn’t exactly -1 or 1, called a “floating point number” in computer science, though most people just call it a decimal number, for example 0.123.

In practice this means that as long as A & B are not both negative, whichever output has the higher value indicates the “brighter” color and whichever has the lower value indicates the “darker” color; if both are negative, the colors are the same (A == B).
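Expressed as code, the decision rule looks something like this (interpret() is a hypothetical helper for illustration; the test script later in this post does the same thing inline):

<?php

// The decision rule described above, as a hypothetical helper function.
function interpret(float $a, float $b){
    if ($a <= 0 && $b <= 0) {
        return 'Same'; // both outputs negative
    }
    return ($a > $b) ? 'A is Brighter' : 'B is Brighter';
}

echo interpret(0.9998, -1.0) . PHP_EOL;   // A is Brighter
echo interpret(-1.0, -0.1815) . PHP_EOL;  // Same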

Let’s look at the training data and visualize it.

Contrast_a_tron.data

This is the complete Contrast-a-tron training data.

The first line is the “FANN header,” which consists of: Total_Number_of_Example_Sets Number_of_Inputs Number_of_Outputs\n

Note the spaces between the values on the header line as well as between the inputs and the output values.

Line 2 (-1 -1) is an input example and line 3 (-1 -1) is its corresponding output example; the pattern of Input_Example\nOutput_Example\n continues to the end of the document.

13 2 2
-1 -1
-1 -1
-0.5 -0.5
-1 -1
0 0
-1 -1
0.5 0.5
-1 -1
1 1
-1 -1
1 -1
1 -1
0.5 0
1 -1
0 0.5
-1 1
-1 -0.5
-1 1
-0.5 -1
1 -1
1 0.5
1 -1
0.5 1
-1 1
-1 1
-1 1

Let’s visualize this training data which should hopefully give you a more intuitive sense for how these numbers translate to information the Contrast-a-tron ANN can use to learn.

Visualizing the Contrast-a-tron training data set

The Code

Here’s the code used to train. I have other tutorials covering what all of this means available on my Topics and Posts page, so I won’t rehash it here; basically, it sets up a training environment, trains the Contrast_a_tron ANN and saves the result to a FANN .net network file.

TrainContrast_a_tron.php

<?php

// Network shape: 2 inputs, a hidden layer of 2, a hidden layer of 1, 2 outputs
$num_input = 2;
$num_output = 2;
$layers = array($num_input, 2, 1, $num_output);
$ann = fann_create_standard_array(count($layers), $layers);

$desired_error = 0.0000000001;
$max_epochs = 900000;
$epochs_between_reports = 10;

if ($ann) {
    // Symmetric sigmoid activations keep the signals in the -1 to 1 range
    fann_set_activation_function_hidden($ann, FANN_SIGMOID_SYMMETRIC);
    fann_set_activation_function_output($ann, FANN_SIGMOID_SYMMETRIC);
    fann_set_training_algorithm($ann, FANN_TRAIN_INCREMENTAL);

    // Train on the data set we just walked through
    $filename = dirname(__FILE__) . "/Contrast_a_tron.data";
    if (fann_train_on_file($ann, $filename, $max_epochs, $epochs_between_reports, $desired_error)){
        echo 'Contrast_a_tron trained.' . PHP_EOL;
    }

    // Save the trained network so the test script can load it
    if (fann_save($ann, dirname(__FILE__) . "/Contrast_a_tron.net")){
        echo 'Contrast_a_tron.net saved.' . PHP_EOL;
    }

    fann_destroy($ann);
}
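
If you’re following along at home, training is just a matter of running the script from the command line (assuming the PHP FANN extension is installed):

php TrainContrast_a_tron.php

When it finishes you should see “Contrast_a_tron trained.” followed by “Contrast_a_tron.net saved.”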

 

TestContrast_a_tron.php

We next need to test the ANN, so as the test inputs I use two “for loops”: one counting down from 1 to -1 and the other counting up from -1 to 1, each in steps of 0.2.

<?php

$train_file = (dirname(__FILE__) . "/Contrast_a_tron.net");
if (!is_file($train_file))
    die("Contrast_a_tron.net has not been created! Please run TrainContrast_a_tron.php to generate it" . PHP_EOL);

$ann = fann_create_from_file($train_file);

if ($ann) {

    // Input A counts down from 1 to -1 while input B counts up
    // from -1 to 1, both in steps of 0.2
    foreach(range(1, -1, 0.2) as $test_input_value_a){
        foreach(range(-1, 1, 0.2) as $test_input_value_b){

            $input = array($test_input_value_a, $test_input_value_b);
            $result = fann_run($ann, $input);

            $a = number_format($result[0], 4);
            $b = number_format($result[1], 4);

            // Interpret the ANN's answer
            $evaluation = '';
            if($a <= 0 && $b <= 0){
                $evaluation = 'Neutral/Same';
            }
            elseif($a > $b){
                $evaluation = 'A is Brighter';
            }
            elseif($b > $a){
                $evaluation = 'B is Brighter';
            }
            else{
                $evaluation = ' OOPSIES!!!!!!!';
            }

            echo 'Contrast_a_tron(' . $input[0] . ', ' . $input[1] . ") -> [$a, $b] - $evaluation" . PHP_EOL;
        }
    }
    fann_destroy($ann);
}
else {
    die("Invalid file format" . PHP_EOL);
}

Results

The Results/Output of the test code.

Contrast_a_tron(1, -1) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(1, -0.8) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(1, -0.6) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(1, -0.4) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(1, -0.2) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(1, 0) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(1, 0.2) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(1, 0.4) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(1, 0.6) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(1, 0.8) -> [0.9986, -1.0000] - A is Brighter
Contrast_a_tron(1, 1) -> [-1.0000, -0.1815] - Neutral/Same
Contrast_a_tron(0.8, -1) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.8, -0.8) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.8, -0.6) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.8, -0.4) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.8, -0.2) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.8, 0) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.8, 0.2) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.8, 0.4) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.8, 0.6) -> [0.9992, -1.0000] - A is Brighter
Contrast_a_tron(0.8, 0.8) -> [-1.0000, -0.2218] - Neutral/Same
Contrast_a_tron(0.8, 1) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0.6, -1) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.6, -0.8) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.6, -0.6) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.6, -0.4) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.6, -0.2) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.6, 0) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.6, 0.2) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.6, 0.4) -> [0.9995, -1.0000] - A is Brighter
Contrast_a_tron(0.6, 0.6) -> [-1.0000, -0.4005] - Neutral/Same
Contrast_a_tron(0.6, 0.8) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0.6, 1) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0.4, -1) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.4, -0.8) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.4, -0.6) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.4, -0.4) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.4, -0.2) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.4, 0) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.4, 0.2) -> [0.9996, -1.0000] - A is Brighter
Contrast_a_tron(0.4, 0.4) -> [-1.0000, -0.6543] - Neutral/Same
Contrast_a_tron(0.4, 0.6) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0.4, 0.8) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0.4, 1) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0.2, -1) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.2, -0.8) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.2, -0.6) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.2, -0.4) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.2, -0.2) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.2, 0) -> [0.9996, -1.0000] - A is Brighter
Contrast_a_tron(0.2, 0.2) -> [-1.0000, -0.8580] - Neutral/Same
Contrast_a_tron(0.2, 0.4) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0.2, 0.6) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0.2, 0.8) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0.2, 1) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0, -1) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0, -0.8) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0, -0.6) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0, -0.4) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0, -0.2) -> [0.9996, -1.0000] - A is Brighter
Contrast_a_tron(0, 0) -> [-1.0000, -0.9557] - Neutral/Same
Contrast_a_tron(0, 0.2) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0, 0.4) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0, 0.6) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0, 0.8) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0, 1) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.2, -1) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(-0.2, -0.8) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(-0.2, -0.6) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(-0.2, -0.4) -> [0.9995, -1.0000] - A is Brighter
Contrast_a_tron(-0.2, -0.2) -> [-1.0000, -0.9878] - Neutral/Same
Contrast_a_tron(-0.2, 0) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.2, 0.2) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.2, 0.4) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.2, 0.6) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.2, 0.8) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.2, 1) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.4, -1) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(-0.4, -0.8) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(-0.4, -0.6) -> [0.9994, -1.0000] - A is Brighter
Contrast_a_tron(-0.4, -0.4) -> [-1.0000, -0.9965] - Neutral/Same
Contrast_a_tron(-0.4, -0.2) -> [-1.0000, 0.9997] - B is Brighter
Contrast_a_tron(-0.4, 0) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.4, 0.2) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.4, 0.4) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.4, 0.6) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.4, 0.8) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.4, 1) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.6, -1) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(-0.6, -0.8) -> [0.9990, -1.0000] - A is Brighter
Contrast_a_tron(-0.6, -0.6) -> [-0.9999, -0.9989] - Neutral/Same
Contrast_a_tron(-0.6, -0.4) -> [-1.0000, 0.9996] - B is Brighter
Contrast_a_tron(-0.6, -0.2) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.6, 0) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.6, 0.2) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.6, 0.4) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.6, 0.6) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.6, 0.8) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.6, 1) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.8, -1) -> [0.9981, -1.0000] - A is Brighter
Contrast_a_tron(-0.8, -0.8) -> [-0.9999, -0.9995] - Neutral/Same
Contrast_a_tron(-0.8, -0.6) -> [-1.0000, 0.9993] - B is Brighter
Contrast_a_tron(-0.8, -0.4) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.8, -0.2) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.8, 0) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.8, 0.2) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.8, 0.4) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.8, 0.6) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.8, 0.8) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.8, 1) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-1, -1) -> [-0.9998, -0.9998] - Neutral/Same
Contrast_a_tron(-1, -0.8) -> [-1.0000, 0.9982] - B is Brighter
Contrast_a_tron(-1, -0.6) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-1, -0.4) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-1, -0.2) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-1, 0) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-1, 0.2) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-1, 0.4) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-1, 0.6) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-1, 0.8) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-1, 1) -> [-1.0000, 0.9998] - B is Brighter

GitHub

As always, you can download a copy of this code from GitHub for free, and if you have any questions or comments please leave them below.

Contrast-a-tron on GitHub: https://github.com/geekgirljoy/Contrast-a-tron


If you find yourself thinking…

“Joy you’re the best!”

I’d say….

If you support the resistance against Big AI then consider supporting my efforts through Patreon.

But, if all you can do is Like, Share, Comment and Subscribe… well that’s cool too!

Much Love,

~Joy

The Contrast-inator

Let’s keep things simple, you want to read a post and conveniently I’ve written one for you!

I’ll spare everyone my recent fascinations with macabre subjects and opt to get right to the topic of the day!

Anyway, as the Jane Goodall of bots, I’ve learned a little about how to communicate with them using rules they understand, and today I’m going to show you how to make rules that get a bot to understand, and do, what you want it to do.

But… before we get into that, here’s the wallpaper!

The Contrast-inator Bot Wallpaper

Why Contrast-inator? Well, as far as the “-inator” part is concerned let’s just say I enjoy Phineas and Ferb. 😛

As for the “Contrast” part, we’ll get into that next but the big eyes of this bot are important. 😛

Also… it doesn’t need hands, arms, legs or feet to do its job, so I didn’t 3D print the parts. Waste not, want not! 😉 😛

Contrast-inator 1920×1080 Wallpaper

The Contrast-inator

So, recently I received a few comments that amount to something along the lines of “How do you decide on the ‘rules’ for training your neural network?”.

My response is basically: if you can “phrase” a training rule in a way the bot can understand, then you can make the rules whatever you want/need them to be.

And the thing we’re going to try to teach the bot today to help us explore this topic is… given an input value, tell us if the value falls to the “left” or to the “right” of an “anchor point”.

That sounds more complicated than it really is, and I intend this tutorial to be for beginners, so let me try to simplify my description… uh… so, think of a gray-scale gradient from black to white.

Imagine a red line in the center of the gradient.

Now, given some gray-scale color e.g. RGB(102, 102, 102) as “input”, how can we train a bot to tell us if the color falls to the left or to the right of the red line… like this:

Is this pixel brighter or darker than the red line?

I know that might seem complicated while at the same time also seem kind of useless… I assure you that neither stipulated condition is true.

In regard to usefulness, just as a hypothetical example… a neural network like this could act as the “brain” of a “line follower” robot, but instead of reading the voltages directly from a photodiode and then using if/else to direct the motors, you could pass the diode voltages to the neural network and let it decide which motor to move via an H-Bridge / digital potentiometer.

An Arduino would need a WiFi Shield for something like that to work, but a line follower built on something like a Raspberry Pi could run its neural network “brain” locally.

Which brings us back to complexity and how we build a rule set to teach our Contrast-inator bot to tell us if a pixel is brighter or darker than the color where the red line is.

Forget about what I said about the hypothetical line-follower robot, the Arduino and the Raspberry Pi… it’s more complicated than I want this post to be and it’s just an example anyway. 😛

Let’s start over…

We know that any answers our bot gives us (the output) will look like a “floating point” number (a decimal value, e.g. 0.01), and our input will be a floating point number too.

With this in mind we can start to imagine that our training data inputs and the associated outputs will look like a series of numbers.

But what will the numbers mean and how can we know if the bot is correct?

Well, let’s step back again and think about what rules we need to teach the bot first before we even worry about encoding the training data for the bot.

What rules might a human need if we had to describe the process to someone for them to be able to do it?

Plain English Rules For the Bot to Learn:

  1. If the color falls to the left of the red line then it can be described as “Darker”.
  2. If the color is neither to the left nor the right of the red line, then we can say the color is directly in the center. We might describe this position or color state as being “Neutral” in relation to the red line.
  3. If the color falls to the right of the red line then it can be described as “Brighter”.

Given these three super easy rules, I believe most if not all of you should be able to answer whether a color falls to the left or the right of the red line with a high degree of accuracy.

However, your accuracy would diminish the closer the color gets to the red line in the center, because you are intuitively guessing and the colors on either side of the center of the gradient all look like very similar grays, i.e. there is low contrast between them.

The colors at the ends of the color gradient (black/left and white/right) have the largest contrast between them and are the easiest to determine which side they fall on.

With our rules laid out in English, let’s return to the idea of the training data (our rules), which consists of numbers, and how we will represent our three rules as numbers.

I’ve already said the inputs and outputs will be floating point numbers, but what we haven’t covered yet is the fact that our numbers are “signed” (the range has negative and positive polarities), with our range being -1.00 to 1.00.

This means that Black can be encoded as: -1.00 or -1 for simplicity with the decimal being implied.

This also means that White can be encoded as: 1.00 or 1, also with the decimal being implied.

Given our signed float range and a few colors converted to floats within our range, we can easily determine algorithmically whether a color is on the left or right of the red line, even if it’s very close to the center, with 100% accuracy (better than human capability), simply by checking if it is greater than or less than zero.

Meaning… a neural network is NOT needed to accomplish this task, but… that’s not the point! 😛
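For completeness, the entire non-neural “algorithm” is just a sign check, something like this hypothetical helper:

<?php

// The whole task without a neural network: just compare against zero.
function side_of_line(float $color){
    if ($color < 0) return 'Darker (left)';
    if ($color > 0) return 'Brighter (right)';
    return 'Neutral (center)';
}

echo side_of_line(-0.66) . PHP_EOL; // Darker (left)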

Our goal is to teach a neural network to do this nonetheless because it is a simple problem and the rules (training data) are simple enough that a beginner should be able to understand how they are derived if they exert even a modicum of effort!

Here’s what that looks like:

Example Colors to Float Range Input

Notice that the first two colors are to the left of zero (darker) because they are negative, and the third color is far to the right (much lighter) because it is closer to 1 than to 0.

Color (R, G, B)     As Float             Side    Meaning
42, 42, 42          -0.67058823529412    Left    Darker
102, 102, 102       -0.2                 Left    Darker
221, 221, 221       0.73333333333333     Right   Lighter

Fascinating… but… how are you converting the colors to floats?

Okay look, this won’t be on the mid-term and it’s in no way actually necessary to go over, because we won’t need to do this to train the bot, but since you are curious, here’s a function you can use to convert actual RGB & grayscale colors to a float in the right range:

How to convert a color to a signed float between -1.00 to 1.00:

<?php 
// Input a number between 0 and $max and get back a number in the
// range of -1 to 1 (0 maps to -1, $max/2 maps to 0 and $max maps to 1)
function ConvertColorToInputFloatRange($color_int_value, $max = 255){
    return ((2 * $color_int_value) / $max) - 1;
}

// RGB Color to range of -1 to 1
$R = 42;
$G = 42;
$B = 42;
echo  "Pixel_RGB($R,$G,$B) = " . ConvertColorToInputFloatRange($R+$G+$B, 255+255+255) . PHP_EOL;

// RGB to Gray-scale to range of -1 to 1
$gray = ($R+$G+$B) / 3;
echo  "Pixel_Grayscale($gray) = " . ConvertColorToInputFloatRange($gray, 255) . PHP_EOL;

// RGB Color to range of -1 to 1
$R = 102;
$G = 102;
$B = 102;
echo  "Pixel_RGB($R,$G,$B) = " . ConvertColorToInputFloatRange($R+$G+$B, 255+255+255) . PHP_EOL;

// RGB Color to range of -1 to 1
$R = 221;
$G = 221;
$B = 221;
echo  "Pixel_RGB($R,$G,$B) = " . ConvertColorToInputFloatRange($R+$G+$B, 255+255+255) . PHP_EOL;


/*
Output:

Pixel_RGB(42,42,42) = -0.67058823529412
Pixel_Grayscale(42) = -0.67058823529412
Pixel_RGB(102,102,102) = -0.2
Pixel_RGB(221,221,221) = 0.73333333333333

*/

Now that you all at least believe it’s possible to convert a color to a float between -1 & 1, forget all about this function, because we won’t need it to train the bot! 😛

Then… how do we teach a neural network to do this?

Well, let’s talk about what the output for this bot looks like before we get back to creating the training data from our rules.

We know that our output is a float, and even though it is possible to teach the neural network to do this with a single output, I find I get better results from the neural network using two outputs.

This is because it’s actually very easy for the bot to understand that we want it to detect whether the input value (color) is slightly offset to the left or right of the red line, but it’s not the easiest thing for it to determine exactly where the center is (just like you, though it’s still better at it), so our margin of error (the colors it can’t place on the left or right, i.e. the colors it will call neutral) tends to be slightly larger if we only use a single output float.

What that means is:

  1. Our Input looks like: float
  2. Our output looks like: float_left float_right

With that in mind we have now covered everything necessary to begin converting our rules to training data.

Remember, that the decimals are implied!

Let’s start by teaching it what the darker colors on the left look like:

Black, RGB(0,0,0), is the farthest color to the left and is encoded as -1. With two output values representing Left & Right, we get a rule that looks like this:

Learn “Darker” colors (floats closest to -1.00) are on the left:

The output value on the left is set to 1, which means negative input values polarize strongly to the left; this is reflected by the left output being 1.00 and the right output being -1.00.

-1
1 -1

Learn “Neutral” colors (floats closest to 0.00) are near the center:

I’m using -1.00 & -1.00 to mean that an input of exactly zero is not polarized toward either side of the gradient; zero (the exact center, whatever color that is) doesn’t polarize in either direction.

The goal here is that this will help it learn that values near zero are not strongly polarized and that zero isn’t polarized at all.

0
-1 -1

Learn “Brighter” colors (floats closest to 1.00) are on the right:

The output value on the right is set to 1, which means positive input values polarize strongly to the right; this is reflected by the right output being 1.00 and the left output being -1.00.

1
-1 1

 

FANN (the library we’re using for training the neural network) requires that a header be stored with the training data so it can read the file, and that header looks like this:

Number_Of_Training_Examples Number_Of_Inputs Number_Of_Outputs

*Note the spaces between values

So, combined our training data file looks like this:

Contrastinator.data

3 1 2
-1
1 -1
0
-1 -1
1
-1 1

And that’s it, we’ve converted our rules to training data so… let’s train the bot!

TrainContrastinator.php

You will need FANN installed to train this bot.

Follow this tutorial to learn how to install FANN.

<?php

// Network shape: 1 input, one hidden layer of 2 neurons, 2 outputs
$num_input = 1;
$num_output = 2;
$num_layers = 3;
$num_neurons_hidden = 2;
$desired_error = 0.000001;
$max_epochs = 500000;
$epochs_between_reports = 1000;

$ann = fann_create_standard($num_layers, $num_input, $num_neurons_hidden, $num_output);

if ($ann) {
    // Symmetric sigmoid activations keep the signals in the -1 to 1 range
    fann_set_activation_function_hidden($ann, FANN_SIGMOID_SYMMETRIC);
    fann_set_activation_function_output($ann, FANN_SIGMOID_SYMMETRIC);

    // Train on the three-rule data set, then save the network
    $filename = dirname(__FILE__) . "/Contrastinator.data";
    if (fann_train_on_file($ann, $filename, $max_epochs, $epochs_between_reports, $desired_error))
        echo 'Contrastinator trained.' . PHP_EOL;

    if (fann_save($ann, dirname(__FILE__) . "/Contrastinator.net"))
        echo 'Contrastinator.net saved.' . PHP_EOL;

    fann_destroy($ann);
}

It won’t take very long for the bot to learn our rules.

Once you see the message “Contrastinator trained.” you are ready to test your new bot!

TestContrastinator.php

This code will test Contrastinator using input values it has not trained on, but because we designed good rules, the bot is able to answer correctly even though it never actually saw most of the test values; it did see -1, 0 and 1, though, along with their “ideal” outputs.

Notice, the $brighter and $darker variables are the outputs of the neural network.

The $evaluation variable is a test for our benefit and does not modify or affect the bot’s answers; the answers are correct even if we skip the evaluation, it just helps us programmatically confirm/interpret what the bot’s answers mean.

<?php
$train_file = (dirname(__FILE__) . "/Contrastinator.net");
if (!is_file($train_file))
    die("Contrastinator.net has not been created! Please run TrainContrastinator.php to generate it" . PHP_EOL);

$ann = fann_create_from_file($train_file);
if ($ann) {

    // Step through the whole input range, -1 to 1, in increments of 0.1
    foreach(range(-1, 1, 0.1) as $test_input_value){

        $input = array($test_input_value);
        $result = fann_run($ann, $input);
        $darker = $result[0];
        $brighter = $result[1];

        // Interpret the two outputs for our benefit
        $evaluation = 'Unknown';
        if($brighter < 0 && $darker < 0){
            $evaluation = 'Neutral';
        }
        elseif($brighter > $darker){
            $evaluation = 'Brighter';
        }
        elseif($brighter < $darker){
            $evaluation = 'Darker';
        }

        echo 'Contrastinator(' . $input[0] . ") -> [$darker, $brighter] - Input is $evaluation" . PHP_EOL;
    }

    fann_destroy($ann);
} else {
    die("Invalid file format" . PHP_EOL);
}

Results:

Notice that it has no trouble detecting that an input of zero (0.00) is neutral and that it also correctly determines which side a color (represented by a float) falls on in relation to the center zero value.

Contrastinator(-1) -> [1, -1] - Input is Darker
Contrastinator(-0.9) -> [1, -1] - Input is Darker
Contrastinator(-0.8) -> [1, -1] - Input is Darker
Contrastinator(-0.7) -> [1, -1] - Input is Darker
Contrastinator(-0.6) -> [1, -1] - Input is Darker
Contrastinator(-0.5) -> [1, -1] - Input is Darker
Contrastinator(-0.4) -> [1, -1] - Input is Darker
Contrastinator(-0.3) -> [1, -1] - Input is Darker
Contrastinator(-0.2) -> [1, -1] - Input is Darker
Contrastinator(-0.1) -> [1, -1] - Input is Darker
Contrastinator(0) -> [-0.9997798204422, -0.99950748682022] - Input is Neutral
Contrastinator(0.1) -> [-1, 0.9995544552803] - Input is Brighter
Contrastinator(0.2) -> [-1, 0.99954569339752] - Input is Brighter
Contrastinator(0.3) -> [-1, 0.99953877925873] - Input is Brighter
Contrastinator(0.4) -> [-1, 0.9995334148407] - Input is Brighter
Contrastinator(0.5) -> [-1, 0.99952918291092] - Input is Brighter
Contrastinator(0.6) -> [-1, 0.9995259642601] - Input is Brighter
Contrastinator(0.7) -> [-1, 0.99952346086502] - Input is Brighter
Contrastinator(0.8) -> [-1, 0.99952149391174] - Input is Brighter
Contrastinator(0.9) -> [-1, 0.99952000379562] - Input is Brighter
Contrastinator(1) -> [-1, 0.99951887130737] - Input is Brighter

Contrastinator on Github

As with all my public code, you can download a copy of this project for free on my GitHub profile.

GitHub: Contrastinator

I hope this helps you better understand how to create your own training data sets, and as always, if you have any questions or trouble understanding any part of this post, please leave a comment and I’d be happy to try to help you.


If you enjoy my content and or tutorials like this one, consider supporting me on Patreon for as little as $1 a month and cancel any time!

It’s not required but it helps me out.

But if all you can do is Like, Share, Comment and Subscribe, well… that’s cool too!

Much Love,
~Joy

OCR 2 – The MNIST Database

I know I probably haven’t been posting as frequently as many of you would like or even at my normal quality because… well, like for many of you, this year has just sucked!

Someone I’ve known my whole life died recently, not from the virus though it didn’t help things.

She went in for a “routine” procedure where they needed to use general anesthesia and there were “complications” during the procedure. Something to do with her heart but if I’m being honest, I don’t know all the details at this time.

Also, I’m not sure how by anyone’s definition anything involving anesthesia is routine?

An ambulance was called and she was rushed to the hospital, long story short, despite being otherwise fine when she went in, she never woke up from her coma. 😥

The hospital is/was on lock down like everyone else and so friends and family were unable to visit her before she died.

Her family intends to sue the Dr. for malpractice, personally… I think they should!

To add insult to injury, she was cremated without a funeral due to the whole pandemic social distancing BS that I’m just about ready to tell the government to go fuck itself over! 😦

I’m sorry, do my harsh words offend you? SHE DIED ALONE! That offends me!

Going forward, my advice… any procedure where they need to administer general anesthesia to you… or maybe any procedure at all… make sure it’s in a hospital or hospital adjacent (NOT A CLINIC) because those minutes waiting for an ambulance really do mean your life!

And if your doctor is like, “No worries this is routine… I’ve done this a thousand times”, maybe think carefully before putting your trust in that person.

Yes, we want doctors that are confident in their ability to treat us but make sure that it is confidence and not complacent hubris!

Further, no procedure is truly “routine” and a doctor, of all people, should know that and act accordingly!

“Primum non nocere”

~Hippocrates… (allegedly)

Regardless of the historical veracity of that quote, does the spirit of that principle still not apply?

Look, I’m not saying this to detract from the important life saving work doctors and medical workers do every day, it’s just that this is part of what’s going on in my life right now (and for many of you as well) and I’m sharing because I guess that’s what you do when you have a blog.

Additionally, less close to home though still another terrible loss, John Horton Conway, notable math hero to geeks and nerds alike, died as a result of complications from contracting the Covid-19 virus. 😦

I’ve previously written a little about Conway’s work in my ancestor simulations series of posts.

Mysterious Game of Life Posts:

But that only scratches the surface of his work. Famously, Conway’s Game of Life was perhaps his least favorite but best known work among non-mathematicians, and it would both amuse and bug him if I only mentioned his Game of Life here, so I’m not going to list his other accomplishments.

I’ll have a little chuckle off camera on his behalf. 😛

He really was a math genius, and you would learn a lot of interesting, not to mention surreal (…but I’ve said too much), ideas by reading about his accomplishments, which I encourage you to do!

In any case, people I know and admire need to stop dying because it’s killing me… not to mention my ratings and readership, because I keep talking about it! 😛

I may have a terribly dark sense of humor at times, but going forward I demand strict adherence from all of you to the Oasis Doctrine! 😥

Oh, and speaking of pretentious art…

The OCR 2 Wallpaper

The original OCR didn’t exactly have a wallpaper but I did create an image/logo to go along with the project and its blog posts:

For the reason you might think, I made it look like an eye… because it looks like a non-evil HAL 9000! 😛

Also, I like the idea of depicting a robotic eye in relation to AI and neural networks because, even though I am not superstitious in any way, it carries some of the symbology of the Illuminati, “The Gaze of the Beholder,” “The Eye of Providence,” “The Evil Eye,” The Eye of Horus, The Eye of Ra, eye of newt and needle… sorry. 😛

In this case, the eye of a robot invokes a sense of literal “Deus ex machina” (God from the machine), and it illustrates some people’s fears of “The Singularity” and of the possibility of an intelligence so much greater than our own that it calls into question our ability to even comprehend it… hmmm… is that too Lovecraftian? 😛

Anyway, because I enjoy the thought-provoking symbology (maybe it’s just me), I wanted to keep the same concept of the robot eye but update it to look a little less like a simple cartoon, to subtly imply it’s a more advanced version of OCR that still fundamentally does the same thing, and that is most of the reasoning behind this wallpaper.

In any case, I hope you enjoy it.

OCR 2 Wallpaper

If you’d like the wallpaper with the feature image text here’s that version.

OCR 2 Wallpaper (with text)

So I guess having shared a few of the recent tragedies in my personal life and a couple of wallpapers, we should probably get mogating and talk about the point of today’s post!

We’re going to look at doing hand-written number (0-9) Optical Character Recognition using the MNIST database.

OCR 2 – The MNIST Dataset with PHP and FANN

I was recently contacted by a full-stack developer who wanted advice on creating his own OCR system for “stickers on internal vehicles”.

I think he means, some kind of warehouse robots?

He had seen my OCR ANN and seemingly preferred to work with PHP over Python, which if I’m being honest… I can’t exactly argue with!

PHP is like C++ for the web and powers something like 80% of the websites whose server-side language is known, so it should come as no surprise to anyone (even though it does) that there are people who want to use it to build bots! 😛

But, if you would rather work with a different language, there is a better than decent chance FANN has bindings for it, so you should be able to use the ANNs even if you are not using PHP.

So anyway, he gave me a dollar for my advice through Patreon and we had a brief conversation over messaging where I offered him a few suggestions and walked him through getting started.

Ultimately, because he lacks an AI/ML background and/or sufficient familiarity with an AI/ML workflow, he wasn’t very confident about proceeding, so I recommended he follow my existing tutorials, which should help him learn the basics of how to proceed.

Now here’s the thing, even among people who like my content and value my efforts, few people are generous enough to give me money for my advice and when they do, I genuinely appreciate it! 🙂

So, as a thank you I want to offer another (more complete) example of how to use a neural network to do OCR.

If he followed my advice, he should be fairly close to being ready for a more complete real world OCR ANN example (assuming he is still reading 😛 ) but if not, his loss is still your gain!

Today’s code implements OCR using the MNIST dataset. I demonstrate a basic form of pooling (though the stride is not adjustable as written) and convolutions using the GD image library’s image convolution function, and I include 17 demonstration kernel matrices that you can experiment with, though not all are relevant or necessary for this project.

This is still very basic but everything you need to get started experimenting with OCR is here.
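To give you a feel for the convolution step, here’s a minimal sketch using GD’s imageconvolution() function with a classic edge-detect kernel (the file names are just for illustration; the demo kernels in the repo use the same 3×3 shape imageconvolution() expects):

<?php

// Minimal GD convolution sketch: load an image, convolve it with a
// 3x3 edge-detect kernel and save the result.
$img = imagecreatefrompng('digit.png');

$edge_detect = array(
    array(-1, -1, -1),
    array(-1,  8, -1),
    array(-1, -1, -1),
);

// imageconvolution(image, 3x3 kernel matrix, divisor, offset)
imageconvolution($img, $edge_detect, 1, 0);

imagepng($img, 'digit_edges.png');
imagedestroy($img);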

Having said that, in all honesty, accomplishing your goal requires building your own dataset and modifying the code I present here to meet your needs.

Neither is exactly hard, but both will require significant time and dedication to testing and refining your processes.

Obviously that’s not something I can cover in a single post or even assist you with for only a dollar, but since so few people show me the kindness and consideration you have, at a time of shrinking economies no less, I wanted to offer you this working OCR prototype to help you along your way.

Our Method

1. Download the MNIST dataset (link below, but it’s in the GitHub repo too).

2. Unpack/Export the data from the files to images and labels (there’s a minimal sketch of this step just after the list).

(technically we could even skip the images and go directly to a training file but I think it’s nice to have the images and labels in a human viewable format)

3. Create training and test data from images and labels.

4. Train the network.

5. Test the network.
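
Here’s that minimal sketch of step 2, assuming the standard MNIST IDX file layout (big-endian 32-bit headers followed by raw unsigned bytes); the file names are the ones the original archive unpacks to:

<?php

// Read the MNIST label file: a 2 x 32-bit big-endian header (magic, count)
// followed by one unsigned byte (0-9) per image.
function ReadMNISTLabels($path){
    $fp = fopen($path, 'rb');
    $header = unpack('Nmagic/Ncount', fread($fp, 8));
    $labels = array_values(unpack('C*', fread($fp, $header['count'])));
    fclose($fp);
    return $labels;
}

// Read the MNIST image file: a 4 x 32-bit big-endian header (magic, count,
// rows, cols) followed by count * rows * cols grayscale bytes.
// This is the straightforward way to read it, not the fastest.
function ReadMNISTImages($path){
    $fp = fopen($path, 'rb');
    $h = unpack('Nmagic/Ncount/Nrows/Ncols', fread($fp, 16));
    $size = $h['rows'] * $h['cols']; // 28 x 28 = 784 pixels per image
    $images = array();
    for($i = 0; $i < $h['count']; $i++){
        $images[] = array_values(unpack('C*', fread($fp, $size)));
    }
    fclose($fp);
    return $images;
}

$labels = ReadMNISTLabels('train-labels-idx1-ubyte');
$images = ReadMNISTImages('train-images-idx3-ubyte');
echo 'First training label: ' . $labels[0] . PHP_EOL; // happens to be a 5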

The MNIST Dataset

MNIST stands for Modified National Institute of Standards and Technology database.

And since I’m still recovering from last night’s food poisoning due to the Chicken à la Nauseam, we’re just going to use Wikipedia’s introduction to MNIST.

It’s easily as good as anything I could write and doesn’t require me to actually write it, so…

Wikipedia says:

“It’s a large database of handwritten digits that is commonly used for training various image processing systems.[1][2]”

It also says:

“It was created by “re-mixing” the samples from NIST’s original datasets. The creators felt that since NIST’s training dataset was taken from American Census Bureau employees, while the testing dataset was taken from American high school students, it was not well-suited for machine learning experiments.[5] Furthermore, the black and white images from NIST were normalized to fit into a 28×28 pixel bounding box and anti-aliased, which introduced grayscale levels.[5]”

Here’s 500 pseudo-random MNIST sample images:

I randomly selected 500 1’s, 3’s and 7’s and composited them into this 1337 animation. 😛

500 random 1337 MNIST images

Seriously though, today we will be training a bot to identify which hand-written number (0-9) each 28×28 px image contains and then testing the bot using images it hasn’t previously seen.

Our bot will learn using all 60K labeled training images and we’ll test it using the 10,000 labeled test images.
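
Just to make step 3 concrete, a FANN training pair for one image could look something like this (a sketch under assumed encodings: pixels scaled down to 0..1 and a one-hot -1/1 output per digit; the repo’s actual encoding may differ):

<?php

// Sketch of step 3: turn one image (784 bytes, 0-255) and its label (0-9)
// into a FANN input line and output line. The scaling and one-hot encoding
// here are assumptions for illustration.
function MNISTToFANNPair(array $pixels, int $label){
    $inputs = array();
    foreach ($pixels as $p) {
        $inputs[] = round($p / 255, 4); // scale 0-255 down to 0..1
    }
    $outputs = array_fill(0, 10, -1);
    $outputs[$label] = 1; // one-hot: only the correct digit's output is 1

    return implode(' ', $inputs) . "\n" . implode(' ', $outputs) . "\n";
}

// The FANN header for the full training set would then be: 60000 784 10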

Here’s the wiki article if you would like to learn more about the database.

MNIST WIKI: https://en.wikipedia.org/wiki/MNIST_database

And as I said above, I’ve included the database in the GitHub repo but you can download it again from the original source if you prefer.

Original MNIST Download: http://yann.lecun.com/exdb/mnist/
