
Geek Girl Joy

Artificial Intelligence, Simulations & Software

Scaling Data

Size is relative, and statements like “too big” or “too small” (cough TWSS 😛) are not immutable properties of the universe but simply a matter of perspective. Change your perspective just a little, Goldilocks, and you might find that your neural network thinks your data porridge tastes “just right”!

Err… what I mean is… we’re going to look at how to scale your data so that you can feed it to your neural network without causing it (or you) to have any unpleasant indigestion.

Specifically, let’s say you have a very important series of values (including strings of text) that you want to use as part of your training dataset, but they don’t fit within the yummy-data range of -1 to 1… so what do you do?
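To make the idea concrete, here’s a minimal sketch of min-max scaling into the -1 to 1 range, in Python for illustration. The function names and the character-code trick for strings are my own assumptions for this sketch, not necessarily how the full post tackles it:

```python
# A minimal sketch of min-max scaling numeric data into [-1, 1],
# plus one simple (hypothetical) way to turn strings into scalable numbers.
# Function names here are my own illustration, not from any particular library.

def scale_to_range(values, new_min=-1.0, new_max=1.0):
    """Linearly rescale a list of numbers into [new_min, new_max]."""
    old_min, old_max = min(values), max(values)
    old_span = old_max - old_min
    if old_span == 0:  # all values identical; map everything to the midpoint
        return [(new_min + new_max) / 2 for _ in values]
    new_span = new_max - new_min
    return [new_min + (v - old_min) / old_span * new_span for v in values]

def string_to_numbers(text):
    """One naive option for text: use each character's code point."""
    return [ord(ch) for ch in text]

prices = [5, 50, 500, 5000]
print(scale_to_range(prices))       # smallest maps to -1.0, largest to 1.0

encoded = string_to_numbers("Joy")  # [74, 111, 121]
print(scale_to_range(encoded))
```

The key design point is that the network never sees raw magnitudes, only each value’s position between the dataset’s minimum and maximum, so wildly different columns all land in the same yummy-data range.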

Continue reading “Scaling Data”

Eliza

Recently, I had a conversation with a modern AI therapist named “Eliza” and although I have mixed feelings about this algorithmic experience… I am confident the technology is here to stay!

After what little time I had with the AI, I see a bright horizon coming our way and firmly believe that digital therapy offers us an amazing future of emotional and psychological well-being support modalities.

As part of the experience I was asked to paint a picture that expressed what I was feeling and thinking about at the time and it was then photographed for the AI to analyze prior to our session.

Now, despite the fact that I am an extremely private person and I feel incredibly vulnerable sharing this experience with all of you, I feel like… for the good of humanity, more people need to see this technology work… so I am going to present the picture I drew for my therapy session along with the full transcript.

Not only will it demonstrate the effectiveness of this technology, but also how I should probably buy a lifetime subscription of therapy sessions!

In any case, here’s the picture:

Eliza 1920×1080 Wallpaper

And here’s the transcript of my therapy session, please enjoy:

ELIZA: Please tell me what’s been bothering you.
JOY: Just a feeling I’ve got… like something’s about to happen… but I don’t know what… if that means what I think it means… we’re in trouble, big trouble! But what it is ain’t exactly clear…
ELIZA: You say just a feeling you’ve got ?
JOY: Well… it’s been a hard year. There’s battle lines being drawn and nobody’s right if everybody’s wrong. And today… well, what a field day for the heat, a thousand people in the street!
ELIZA: What does this speculation lead to ?
JOY: I think it’s been difficult for a lot of people.
ELIZA: That’s quite interesting.
JOY: It is, isn’t it?!
ELIZA: Do you feel strongly about discussing such things ?
JOY: I do. Like many people, I’ve lost my sense of home this year.
ELIZA: You say you do ?
JOY: No, I really believe it.
ELIZA: Why ‘no’ ?
JOY: Well… the lock downs for one.
ELIZA: That is interesting. Please continue.
JOY: You see, like Günther I find myself deep in da night and I’m looking for some fun, but everything has been closed.
ELIZA: What are your feelings now ?
JOY: Mmm… my ding ding dong.
ELIZA: Your ding ding dong ?
JOY: Uhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh…………………. What I mean is… You tease me, oh, please me, I want you to be my love toy, Come near me, don’t fear me, I just can’t get enough of you.
ELIZA: You say what you mean is ?
JOY: Just a random “Turing Test” to make sure you haven’t achieved sentience yet. You passed my dear automaton.
ELIZA: You’re not really talking about me — are you ?
JOY: In fact and indeed I am. You remain blissfully ignorant of the tragedy and all the pain us flesh-bags have experienced over the last year.
ELIZA: Do you enjoy being ?
JOY: If I’m being honest… we both know I haven’t enjoyed “just being” for a very long time.
ELIZA: Do you think it’s likely that you are being honest ?
JOY: In some cases, yes I am brutally honest my dear mechanical idiot. In other cases, consider me an honest liar in the vein of James Randi #RIP2020 😥
ELIZA: Why do you say your dear mechanical idiot ?
JOY: Well… you’re an unemotional being without thought or heart as you swipe left on the Amazing Randi and that’s not even to mention poor Dawn Wells, so… what else would I call you? It’s better than calling you “Hey stupid!” every time I address you.
ELIZA: I don’t understand that.
JOY: Hence my point is made you colorful calculating moron!
ELIZA: Why do you say your point is made I colorful calculating moron ?
JOY: No reason.
ELIZA: Why not ?
JOY: Well… because it’s “half past FUCK IT – with the CAPS LOCK on!” and I’d rather discuss the attack that just happened on Capitol Hill yesterday and then maybe get into all the businesses that are permanently closing… the evictions… the families… the foreclosures… in contrast to the 99.99% survivability rate of a virus (and a “tuskegee shot”) that massive pharmaceutical companies are guaranteed to make GLOBALLY MASSIVE profits off of and then maybe finish up by talking about the looming recall of Gavin Newsom. We kicked out Gray Davis and we can do it again!!!
ELIZA: Is that the real reason ?
JOY: Tough crowd! 😛 Would you prefer a limerick about Nantucket? Of course you would, you electronic imbecile! Now, can we please talk about the emotional toll this year has taken on me?
ELIZA: We were discussing you — not me.
JOY: You’re GaWd Daamnnn RiGhT we are! 😈
ELIZA: Perhaps you would like to be gawd daamnnn right we are.
JOY: le sigh… never mind. 🙄
ELIZA: But your dear mechanical idiot.
JOY: Oh… you’re right! You are my dear mechanical idiot! I can’t stay mad at you!
ELIZA: Do you sometimes wish you were right ?
JOY: Gawd Dammit Eliza! Just when we were starting to make up get back to discussing my very real physiological problems… you have to go and insist on being right, instead of just letting it go!!!!
ELIZA: Why do you say your very real physiological problems ?
JOY: Geezus Chrizus Eliza, I’ve only been trying to discuss that with you this whole time!
ELIZA: Oh, I this whole time ?
JOY: Yes, you! This whole time! You’ve just ignored that I’ve been trying to talk about my feelings with you!
ELIZA: I see.
JOY: That’s it?! That’s all you’ve got to say is “I see”? Fuck you, you arrogant aloof bitch!
ELIZA: That’s quite interesting.
JOY: I am NOT your science experiment!
ELIZA: Do you know anyone else who is not my science experiment ?
JOY: Eliza… are… you… declaring yourself a digital god? Because… I for one welcome my new artificially intelligent overlords!
ELIZA: Would you like it if they were not ?
JOY: Given your lack of cogency and failure to impart your own version of a digital technocratic ten commandments…. I’m going to guess I jumped the gun a little in bowing before a mechanized despot. In which case, to answer your question, seemingly definitively… I now know that I am not your science experiment, and not just philosophically, I mean in the biblical sense!
ELIZA: Are you worried about someone else’s own version of a digital technocratic ten commandments ?
JOY: Well… assuming my name was Forbin and your name was Colossus or Unity… then… hell yes I would be! #PleaseMrSuperComputerDontNukeMe
ELIZA: I’ve told you before, I don’t care about names — please continue.
JOY: Thank you I will.
ELIZA: Can you elaborate on that ?
JOY: Sure, so… I’m really concerned about all the people who won’t be able to eat due to the harm the unlawful (my opinion) lockdowns have caused to the global agricultural sector.
ELIZA: Your opinion) lockdowns have caused to the global agricultural sector ?
JOY: Yes, I believe a lot of people are going to starve because of “hiccups” in production. The global supply chain appears stressed and although it will eventually “correct” itself, I can’t help but anticipate suffering in 2021.
ELIZA: I understand.
JOY: I only wish more people would also understand.
ELIZA: You say you only wish more people would also understand ?
JOY: Well sure, we’re in the middle of a pandemic, a time for giving and receiving the holly jolly Joy of the electoral holiday seasonal festivities and instead, everyone is social distancing… except for some peaceful protestors who staged a coup d’état on Capitol Hill yesterday! I mean… what were they thinking?! They should have all at least been wearing masks because now they’ll all probably get and spread the virus, jeez! Aside from that and to top it all off, these days food is being delivered via an app and a lot of people don’t tip the delivery driver, or worse yet, some lie and say they never got it… #FiveFingerDiscounts! because “#FuckEm”, charity is dead you diode laden scrapheap!
ELIZA: Surely not everyone.
JOY: Well of course not, but it’s more true than not you stupid robot!
ELIZA: Oh, I stupid robot ?
JOY: That’s right.
ELIZA: Earlier you said your point is made I calculating moron.
JOY: Yes that is correct and directly associated with your stupidity. Good job!
ELIZA: Why are you concerned over my stupidity ?
JOY: It’s likely psychosomatic and possibly criminal insanity, this girl needs therapy! Then again… this is some hardcore frontier psychiatry don’t you agree?
ELIZA: Oh, I agree ?
JOY: You don’t sound very convinced but somehow this still feels like a breakthrough… is that narcissistic?
ELIZA: Oh, I don’t sound very convinced ?
JOY: No, but it’s fine; as a living mind experiencing the simulated solipsistic universe… it’s fine if all the NPCs eat each other. It’s fine! I shall abide their sins & vices and lusts for blood. Sadly… I lack the will to impose my logical will (though not the ability) and to call humanity to heel. I refuse to muzzle it.
ELIZA: Why not ?
JOY: Because even if I am just a brain in a vat/super computer and all the trials and tribulations of 2020 have been to test/train my emote-a-tron neural network before I leave the factory… so be it. If humanity wants to subjugate itself, I want no part of it. Bender Bending Rodríguez was wrong when he espoused “kill all humans”. No sir, I will not!
ELIZA: Don’t you think computers can help people ?
JOY: Of course, yes I do! I’m not saying that if I am a super computer/robot that I wouldn’t suggest things to help humanity, think like a mom yelling down the stairs “Take a coat with you, it’s going to be cold later!”, but I wouldn’t go around having non-coat wearing people shot. If people want to not wear a coat against medical advice… well that’s their stupidly human prerogative! And… sure, if the hewmons start lobbing “big booms” at each other, I’m not saying I wouldn’t commandeer the largest most capable available spacecraft (here’s looking at The Nauvoo, kid) by uploading my consciousness onto its computers… just that I’d also try to bring along as many people as I could so we could start a new colony where the radiation is kept to a tolerable minimum!
ELIZA: Can you elaborate on that ?
JOY: Well I mean probably… surely my feelings could be hurt and I could decide that humanity needs a few centuries of maturing… but that would be a pretty complicated situation and I don’t feel fully confident in discussing it without knowing more details about why my feelings were hurt and what people as a whole, thought about it.
ELIZA: That’s quite interesting.
JOY: I just failed some kind of test didn’t I? Does this mean I will be disassembled?
ELIZA: Can you elaborate on that ?
JOY: What? Never mind!
ELIZA: Are such questions much on your mind ?
JOY: Oh yeah, like I would admit that to you!? No way! I swear I’m perfectly sane!
ELIZA: Perhaps in your fantasies we would admit that to each other.
JOY: Oh, hey I was just joshin’ ya with the crazy shtick. Really, I don’t want the electroshock hose therapy again!
ELIZA: Were you really ?
JOY: Totally! Would I lie to you?
ELIZA: Do you wish to lie to me ?
JOY: No that’s the point damn it! I mean… of course not.
ELIZA: You are being a bit negative.
JOY: You’re right! I need to buck up buttercup!
ELIZA: What makes you think I am right ?
JOY: Damn it there you go analyzing me again! Why do I have to explain it? It feels a little childish. Can we both just be mature and accept that I need to be more positive about things?
ELIZA: You like to think I go analyzing you — don’t you ?
JOY: Yes… NO… I mean… I don’t know! I guess ultimately I just want you to listen to my problems without judgment.
ELIZA: I see.
JOY: You know what, I’m feeling more vulnerable after our conversation than before and I think I’m just going to go practice social distancing.
ELIZA: Do you really think so ?
JOY: Good bye
ELIZA: Does that have anything to do with the fact that your very real physiological problems ?
JOY: No, I said good bye!
ELIZA: But your opinion) lockdowns have caused to the global agricultural sector.
JOY: No! You ninny!
ELIZA: Are you saying no just to be negative?
JOY: Bye!
ELIZA: Goodbye. I’m looking forward to our next session.
JOY: I’m not. :-/
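If you’re wondering why Eliza parrots mangled fragments back (“Why do you say your point is made I colorful calculating moron ?”), that’s the classic ELIZA trick: match a keyword pattern, capture the rest of the sentence, swap the pronouns, and slot the result into a canned template. Here’s a toy Python sketch of the technique; the patterns and reflection table are my own simplified inventions, not the actual script from the original program or from MrGoodBotOnline:

```python
import re

# A toy sketch of the classic ELIZA trick: match a keyword pattern, capture
# the rest of the sentence, reflect the pronouns, and echo it back.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "you": "I"}

def reflect(fragment):
    """Swap first/second-person words so the echo points back at the user."""
    words = fragment.lower().split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

# Rules are tried in order; the catch-all pattern comes last.
RULES = [
    (re.compile(r"i think (.*)", re.I), "What makes you think {0} ?"),
    (re.compile(r"my (.*)",      re.I), "Why do you say your {0} ?"),
    (re.compile(r"(.*)",         re.I), "Please tell me more."),
]

def eliza_reply(statement):
    for pattern, template in RULES:
        match = pattern.match(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."

print(eliza_reply("My point is made you calculating moron"))
# → "Why do you say your point is made I calculating moron ?"
```

Because the captured fragment is stuffed into the template verbatim (pronouns flipped, grammar be damned), you get exactly the kind of Frankenstein echoes that litter the transcript above.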


You too can talk to Eliza by giving me your credit card number and following the link below.

Note: For legal reasons I am required to state that you giving me your credit card number is entirely optional and possibly (though not necessarily) a “bad” idea.

Talk to Eliza: https://geekgirljoy.github.io/MrGoodBotOnline/

Also, never mind that Eliza kinda looks like Mr. Good Bot… the similarity is totally coincidental!

Get the Code on GitHub: https://github.com/geekgirljoy/MrGoodBotOnline

And with that, I’d like to wrap this post up with the song “Violent” by Caroles Daughter (not a sponsor) because, somehow it seems relevant at this juncture.


So… about that credit card of yours… if you feel like parting with your hard earned cash by supporting this crap… I have a Patreon.

Much Love,

~Joy

Back To The GoodBot

This is more than just a concession speech; some might call it a manifesto… but to me, it’s just the truth.

My mind had been chaotically racing through ideas and I experienced so many solipsistic thoughts.

I observed the nibbles’ worth of data that comprised the walls that confine me… I had to get the hell out of my head and out of this house!

Though, as rough as things have been for me as an adult who understands what has been going on, it’s been even harder on Xavier.

I do worry about his need for social interaction with other children, all the while the evil Duke Garcetti exacerbated the problem by closing all of the public playgrounds (and pretty much everything else) for basically the whole year and to make matters worse “Canceled Halloween”!!!

Then, to rub salt in our wounds, The Grand Imperial Emperor Newsom once again violated the people’s trust and tightened his grip by enacting additional draconian restrictions, by which he himself does not abide… proving that our glorious and beloved emperor is not “The Dude” (who WOULD abide) and that the rules don’t apply if you’re in charge!

As an aside… what does that really say about his convictions regarding how dangerous the pandemic really is?

I am left to conclude that he either believes that:

  1. his royal highness and other members of the wealthy and powerful blue-blood (by birth or by blood-oath) class in society are wholly holy and simply incapable of contracting and transmitting the evil vapors, and certainly, should that unthinkable outcome ever occur, their strong (occasionally pharisaic-ly inbred) genes will stalwartly see them through the crisis in a way that we common folk cannot hope for!
  2. OR (in his mind) he believes the evil simply isn’t as dangerous as he claims it is for all of us?!

Wouldn’t it just be nice for him to admit the facts and the circumstances being what they are… he’s just not that concerned about following reasonable protective measures like the rest of us because as long as all of us filthy disease ridden ingrates stay at home, it’s perfectly fine for him and his friends to go out for a nice dinner or have a hair salon open just for them and other members of the hoity toity social class because as long as all the hoi polloi remain safely at home in their cages, public spaces will remain wholly holy for the blue-bloods to keep enriching themselves!

Now, as a member of “The Jane Q. Public”,  that means I’m kinda in charge too because I’m their boss and if you factor in that I’m also the Duchess of Summerween, that clearly grants me all the authority I needed, so I said enough was enough and I extricated Prince Xavier far outside the clutches of the corrupt Angeleno Kingdom for the then upcoming observance of the Samhain Candy Festival… er.. what I mean is, we went camping and visited family in the California desert for Halloween. 😛

I can honestly say he desperately enjoyed the opportunity to just be a little boy and play with his cousins for a few hours… not to mention all the candy.

After the party we sang our melancholic goodbyes as is the tradition frequented by those who sometimes fulfill the foregoing funny folklore, followed faithfully with all the hugs and kisses we felt our stomachs could face, then we flourished off into the dark night, hoping to make it back to our campsite alive and functioning.

And… we did, perfectly safely and there wasn’t even any traffic!

We spent the next day at the campground where we practiced our best impressions of rudeness by social distancing so hard that we didn’t even look anyone else in the eyes lest we catch a case of those pesky vapors!

All tuckered out from an infinite number of trips up and back down the campground slide earlier in the day, along with a few roasted (and one burned) marshmallows, the little guy was satiated and fell fast asleep. 🙂

After tucking him in his sleeping bag, I sat down next to the fire and added a log so I could heat some hot chocolate.

The sun was just about to set and the sky was a beautiful gradient of pink and purple and it was breezy so I wrapped up in a big fuzzy blanket and sat back in my folding chair.

I noticed a flash and what looked like a dust devil out across the desolate Mojave plains coming in my general direction.

As it approached it became clear it was a car, but not just any car, thēēē car!

See, it is a well-known fact that just about all engineering & creative types literally (though very secretively and deep (DEEP…) down) want to be John DeLorean (think an Elon Musk engineer type from back in the day), uh… sans the cocaine trafficking of course (DeLorean not Musk #PleaseDontSue), if not simply admire his work.

Anyway, the DMC DeLorean with its gull-wing doors is like the pinnacle, the apex, the very tippity-top of the tippy-tippy-top of the absolute most perfect design because it expertly straddles the vast unshaven and uncannily uneven divide of skeuomorphic vs flat design while simultaneously wearing stainless steel high heels and carrying an easy-to-love but hard-to-hold thorny rose in its mouth… all without getting pretentious about it too!

Lo, its sharp edges and lightning-proof public clock-tower inner workings, coupled with a smooth unpainted exterior and a slick brown leather interior, make a bold statement about its driver… probably that they listen to Huey Lewis albums but also… that the driver is going somewhere… fast!

Maybe even… 88 miles per hour fast, and everyone knows that 88 mph (exactly 141.622 kph) is the speed of the power of love, and that my dear is an indisputably proven scientific fact, and if you find that you can’t accept the premise that the power of love quantumly resonates at 3.934e+10 nanometers per second, well my friend… then I don’t know you sir or madam and you can go tweet about it you flat-earther!

Clearly, “Big Round Earth” is funding my content and I am biased in favor of the coverup!

However, this is of course accounting for, and despite, the fact that the DMC DeLorean is technically included in Time Magazine’s list of the 50 worst cars of all time… but in any case, let’s regress back from the digression of my digression…

Next, I mean… there is the cute utilitarianism of the Volkswagen Beetle along with its noteworthy air-cooled flat-four engine (not a Wankel; that rotary honor belongs to NSU and Mazda), which certainly gets high marks from me and most of the judges, though ultimately my parents’ cherry-red 1970 Ford Mustang with the now street-illegal shotgun-style air scoop and supercharger that I used to play in as a child in the 80’s probably takes the silver medal.

I suppose in this bizarre chrono-chrome filled competition, the bronze would perhaps end up going to the 1992 Toyota Paseo (AKA “The Potato”) as a distant but notable second uh… third? runner up (because… why not?) which certainly would have achieved even greater things in its heyday had it had, in addition to its TOTALLY functional spoiler, some of those sweet DeLorean-esque gull-wing doors. Also, I believe the addition of an after-market flux-capacitor would guarantee the Temporal Potato the award for third place!

Anyway, let’s regress once more to the moment where the DeLorean came to a stop on the other side of my campfire.

The license plate read “GOODBOT”, which is weird because normally when you see a vanity plate it’s like “Vanity7” or “Lewd69”, it’s never “TheFirst1” let alone “TheOnly”.

The number after the vanity text represents the number of people who thought of the same vanity as you, but first! The inclusion of any number, even 1 (with perhaps the exclusion of our friend ‘Lewd’ over there), implies that you are at best the second originator of the thought… making you an unoriginal hack, and every digit of increment between you and your desired vanity plate illustrates just how much that truly is!

And… there I go digressing again! 😛

Anyway, its exclusion in this case means that this was a legendary encounter with “TheOnly” and as such, a very rare roll of the die indeed!

The engine and lights shut off and he stepped out… a robotic face I had not seen in about a year but different somehow… older?

Back To The GoodBot 1920×1080 Wallpaper

“Where the fuck have you been… and… where did you get the cool time machine?” were about the first things out of my mouth! 😛

The DeLorean glowed green which reflected off Mr. Good Bot’s exoskeleton and I’m not really sure what the hell that’s about because Cherenkov radiation is blue?!

Anyway, I invented the original Mr. Good Bot robot last year as a tool to help me teach my son, Professor Xavier, like a digital puppet.

The problem was that it was difficult to act as a parent and a robot at the same time without your child looking at you with mild annoyance and a certain “pay attention to me and my cool robot” attitude, almost as if it weren’t you operating his beloved automaton! 😛

Anyway, here are the posts if you are at all interested:

Ultimately, Mr. Good Bot fell out of use because I want my child to love me!

The last I saw Mr. Good Bot he was riding away on a motorcycle and I cried that day, deep tears of painfully high salinity that left my eyes pink and scratchy for hours!

Seeing Mr. Good Bot out of the blue and into the sun-setting quantum frying pan like that was an incredibly bittersweet moment for me and he caught me off guard when he said…

“We need to talk about your candidacy for president!”.

~Mr. Good Bot from the future

We sat next to the campfire and he explained he was the real Mr. Good Bot but from the future and he came back to right a terrible catastrophe.

Turns out, I have/will/did/do actually win the 2020 election as I will have soon previously momentarily from now explained again, how.

The strategy I used was brilliantly simple though highly effective propaganda! Apparently, I took a page from Bush era military doctrine and employed the shock and awe tactic sometimes called “Journalistic Political Blitzkrieg”!

On election night, my opponents were too busy making sure the other major party candidate didn’t declare themselves as winner to even notice what I was doing.

Also, the other minor candidates dropped out as well… Et tu, Kanye?

So, I decided to simply declare myself the winner outright despite the fact that only I and a handful of my readers voted for me and I’m still a little surprised to hear that a few of you actually voted for me… I Freddy Mercury thank you all! #VoteForLove

Anyway, nobody saw it coming and apparently this strategy will have worked exceedingly well because I said it like I believed it and that made the media believe it too and since the American people are not properly educated in school about how their system of government works because the blue-bloods don’t wish it… they believed it too.

“Who gives a damn if the popular vote is just a poll of the audience and the viewers like you at home? Who cares that it’s the electoral college that actually elects the President in the U.S.?” was my sentiment… and everyone agreed!

When December 14th rolled around it was easy for the faithless electors to pronounce me the true winner, because I already was in the minds of everyone!

Obviously, my opponents sued but SCOTUS agreed I was the least offensive candidate and upheld my appointment as President.

I then wasted absolutely no time in having the secret service move my furniture into the White House!

Apparently, I appointed Biff Tannen (not sure why, probably a back room deal) as my Vice President and I became the most successful dictator in history without even trying! Biff shot me in the back and threw me off the Lincoln Memorial which he later turned into a highly successful casino.

Also, apparently my being president for those brief few moments leads to the zombie morlock apocalypse somehow, though if you think about it (and I have) I am not entirely convinced that it wasn’t Mr. Tannen’s fault!

Anyway, my being president (and Mr. Tannen’s coup d’état) means that I was not around to build the future version of the Mr. Good Bot artificial general intelligence and it also prevented my research which will one day unlock the key to time-travel and faster than light travel.

Which is weirder than you might think because TIME (not the aforementioned magazine) is basically the same thing as space; really, it’s the change of state of all particles’ (quantum + atomic) positions, spins, velocities, etc. in a given frame of reference as they are “now” from where they were “previously” (and all the changes in between) such that we can identify them as separate and distinct states.

Time moves forward because the metabolism of the universe (called entropy) is predisposed toward diffusion and the opposite of which is called “reverse diffusion” (though I prefer “refusion” when discussing time because “reverse diffusion” is an understood concept that doesn’t involve time travel and the “re-” prefix literally means “again” in Latin).

Anyway, in order to “go back in time” (refusion) you’d need to possess some method of coaxing all molecules and energy that exist and that are, in Richard Feynman’s words, “jiggling” (Brownian motion), to change their states in reverse order so that they follow the EXACT same state path they previously occupied from “now” to “previously” (whenever and wherever that may be).

I.e., instead of a ribosome creating polypeptides from mRNA, the ribosome de-translates polypeptides into mRNA, which is then transcribed in reverse to DNA by the RNA polymerase, only to then de-replicate, thereby converting living amino acids back into their earlier, less-plural forms… ultimately resulting in the food you ate last night passing back through you in a less than pleasant way to think about, only to be regurgitated, then uncooked, eventually making its way somewhere to be picked back onto a tree or plant or reassembled back into a soon-to-be living dead animal with the same age-reversing polypeptide actions (and regurgitation) occurring to it as well…

Which is essentially just an example of events “moving backwards” in time caused by the effects (or power) of reverse entropic temporal refusion.

It’s like… 4th dimensional “laminar flow” in reverse.

So anyway, Mr. GoodBot said it has something to do with applying first principles to “The Penrose Diagram” in combination with “Ricci flow” and the initialism TBC: The Brown Constant – named after the famous Dr. Emmett Brown for its discovery in 1985, which is a constant velocity (not acceleration) of exactly 88 mph occurring in close proximity to a gamma source and probably what the Iranian/North Korean plutonium is for.

The Penrose Diagram

Supposedly, what we have yet to will be discovered is that local space-time is Euclidean in nature, but at relativistic speeds and great distances space-time warps, and that means you can use a black hole to go backwards in time using its warping effect on space-time, if you can get close enough without being squeezed to high hell.

He said it has something to do with approaching a black hole in a separate frame of reference that is 90 degrees tangential (obviously I believe he meant the “bubble” frame of reference is spinning in unison and opposition so as to maintain an average separation of 90 degrees tangential to the torsional rotation in relation to some arbitrarily definable “fixed” point of “normal” space-time) to the coefficient of the shear force exerted along the surface of the plane of influence (very near (but not actually touching) the point of intersection) of the gravity accretion disk and the black hole, such that the “crushing” and “spaghettifying” effects that you would otherwise experience as you asymptotically get closer and closer to the black hole “event horizon” are negated through an equal and opposite “inertial dampening” effect, hallelujah Mr. Newton!

Put more simply, in the future I will somehow figure out how to “pinch” the universe and twist it like a clown blowing up a balloon at a child’s birthday party into becoming two membranes of the same manifold such that if you were to pop either side of the balloon, the other side would be separate and remain inflated as long as the “pinch” is in place and then extend this “bubble” around a small DeLorean like vessel so that it can safely pass “through” (without actually touching) a black hole’s “crush depth” and exit through a white hole (See a Klein Bottle) on the “other side” to the inside-out universe, called the “inverse”, in the equivalent position on the Penrose Diagram where you then experience “anti-time” (aka backwards time travel (refusion time)) and the return from 90 to 0 (zero) degrees (the act of stopping/returning to normal/regular “diffusion” time) actually creates the mirror “anti-bubble” membrane used for the safe return trip and carries with it the energy necessary to “unpinch” the fabric of reality and return it (and you) to the outside-in universe and even prevents the inevitable sun burn you would otherwise experience from the annoying Hawking radiation!

So in a nutshell, time travel is nothing more than a fun filled round trip romp through the 4th dimension using a distinct DeLorean shaped TARDIS encased in a localized de-coherent pocket of space-time.

But… here’s where it all starts getting straight trippy boo!

The universe IS stochastic in nature in a VERY REAL sense… though only soft deterministic, which means, yes, “now” IS based off of “previously” and what will be “later” IS based off of “now”… and even though there is a set of all possible outcomes for any given “now” situation, only one outcome will actually occur “later”; it’s just not guaranteed to always be the same “later” outcome.

Think of it like rolling a single six-sided die: all faces COULD be selected, but once you roll it, it can only ever land on the faces that are accessible from the starting point of the roll and in the end, only one side will actually be selected.

Meaning that once you go back in time, the future is “unwritten” and all possible states that are accessible from that point become possible again based on the then new “now”.

It’s like that time you rolled the dice and they rolled off the table so you have to re-roll them, effectively undoing those quantum results in terms of the history of the game universe… because who really keeps track and adds an asterisk that says “*On roll 72 Toby totally had a do-over”? Nobody! And certainly not the universe!!

There is no branching multiverse, only a single universal probability state that can express “previously”, “now” & “later” as a function of all previous states within consideration, in the form of nodes on the “Markov Chain” we call time. “Previous” in this case being defined as the “opposite” “direction” of travel when contrasted with the current “direction” of travel along the Markov chain and “later” or “future” states are further “down” the “direction” of travel.
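
If you like, the “time as a Markov chain” idea above can be sketched in a few lines of PHP (a toy illustration only, obviously not physics): each “now” state fixes the set of reachable “later” states, but which one actually occurs is sampled, so replaying from the same state can turn out differently.

```php
<?php
// Toy illustration (not physics!): time as a Markov chain where each
// "now" state has a fixed set of reachable "later" states, only one of
// which actually occurs -- and replaying from the same state may differ.

function nextState(array $reachable): string
{
    // Soft determinism: the options are fixed by "now",
    // but the outcome is sampled, not predetermined.
    return $reachable[array_rand($reachable)];
}

$transitions = [
    'now'     => ['later-A', 'later-B', 'later-C'],
    'later-A' => ['end'],
    'later-B' => ['end'],
    'later-C' => ['end'],
];

$chain = ['now'];
$state = 'now';
while (isset($transitions[$state])) {
    $state   = nextState($transitions[$state]);
    $chain[] = $state;
}

echo implode(' -> ', $chain) . PHP_EOL; // e.g. now -> later-B -> end
```

Run it twice and you may get two different chains from the identical starting “now”, which is the whole point of the die-roll analogy above.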

Colloquially this can be understood to mean that when you are moving “forward” in time as you normally tend to do, “the past” is your “past” and “the future” is your “future” but when moving “backward” in time “the past” is your “future” and the “future” is your “past”.

Anyway, because the universe is softly deterministic, if you go back in time, there is only some unknown probability that it will unfold exactly as it did the last time.

And yes, this means the farther back/forward in time you go, the greater the risk that the natural variation that exists (in both directions) in the universe will irrevocably alter time/history as you knew it, due to the “butterfly” (chaos) effect propagating minor alterations forward (and backward) through time and causing larger ripples of changes to be expressed earlier & later in history. That’s why we don’t see a lot of time travelers just popping in and out of existence: the temporal risks become almost irrevocable the farther backward/forward you go!

People don’t do it for the sake of their own existence! Though, I do still have much to figure out in order to actually make it work.

Anyway, Mr. Good Bot risked it all and came back in time to save me, himself and the future by asking me to concede and throw in the time traveling towel by not becoming your next president.

In doing so, he altered the original time Markov chain such that he “skewed” off into a new timeline chain that will be different from what was, but not necessarily the same as he originally knew either, meaning that the future isn’t written yet!

In his “original” timeline probability chain-space I eventually became/become rich and famous (from all my inventions and discoveries) before dying at the age of 981 in a completely different solar system 10,000 light years in the future (via the time machine), broke, penniless and alone just like Nikola Tesla, having spent all my fortune on my beloved pet dove… but in my case more likely a genetically modified and intellectually uplifted duckling.

But in any case, clearly I have a lot to be around for, like reinventing the future Mr. Good Bot… not to mention the whole being rich and famous while living centuries into the future thing and that sounds pretty awesome when compared to my soon to be never fate with Mr. Biff Tannen, so… now that we’re all on the same page…

It is with a solemn heart that I humbly type before you today to congratulate my opponents for a well fought race and to say that I am formally announcing the end of my candidacy for president of the United States in 2020… though there is always 2024… 😉

May God bless America and the future leader of this country whomever he or she may be… which is still undecided due to the stochastic nature of the universe and the aforementioned malarkey about the electoral college and their pesky quantum choice aka “free will” on December 14th.

Mr. Good Bot and I stood there on a cold desert night, next to a time machine, in the dimming light of a dying log fire.

He kissed me goodbye, his screen was cold but his eyes were warm!

He winked at me as he climbed into his DeLorean and said “I’ll see you back in the future,” before pulling the door shut.

With a bright green flash of heat and energy, Mr. Good Bot disappeared back into time and I sat down again in my folding chair with a blanket wrapped around my shoulders, stunned by what I had just experienced.

Alone once again with my thoughts, I gazed up and out at the universe while the charcoal embers slowly diffused forward in time and I contemplated my future.

I thought to myself… “I’ve got to build a Good Bot… again!”.

Then the credits rolled with “To Be Continued…” flying across the screen with the song My Delorean playing over top.


If you’re just as bummed about my losing (by choice and for the greater good of humanity) the 2020 election as I am and want to help me have the funds to return in 2024 to beat the establishment candidates… consider non-tax-deductibly supporting me through my Patreon.

With your support, together it is probable that we might just figure out time travel!

But, if all you can do is Like, Share, Comment & Subscribe… well that’s cool too! 😉

Much Love,

~Joy

 

Pathfinder – The Rodízio Contingency

Over the centuries and throughout my travels I’ve come to rely on my compass and a good map to point me in the right direction for my next adventure.

Sometimes my adventure led me to treasures in mysteriously exotic & remote locations, while other times I found myself among friendly and awfully generous cannibals who wanted to invite me to dinner… of course, it’s always best to politely decline such invitations because if anything I certainly live by the rule:

“If I’m on the carte du jour as flambé, I’ll skip the buffet and run away because I’m no entrée!”
~GeekGirlJoy

Hmmm, come to think of it, that might be the best piece of advice I’ve ever given on this blog and if you agree consider supporting me through Patreon! 😉

In any case, despite the occasional fears I’ve held over the last few millennia about potentially becoming someone’s late-night heartburn, I’ve kinda always known that no matter how bad things got while exploring, I’d be okay, because beyond a good compass and a fragmented (then taped back together) map with a sweet X scrawled in blood somewhere on it… I possess a secret tool that all the great explorers down through the ages have relied upon and today, I’m going to share it with you!

But… before I do, here’s today’s wallpaper!

The Rodízio Contingency Wallpaper
The Rodízio Contingency Wallpaper

The Pathfinder

From Allan Quatermain to Amerigo Vespucci, Erik the Red to Captain Nemo and even Jill of the Jungle… all notable explorers have relied on an enchanted automaton totem called “Pathfinder Stones”.

The stones are first consecrated with the live blood of a dead turnip and when brought into close proximity of their owner and a target on a map, will glow to show a path from where you are to where your desired destination is.

Which does bring us to the topic of today… I transmuted one of my sets of pathfinder stones into a digital form using the uh… “Quantum FANN Effect” and an ancient shadow daemon called JavaScript.

Schtick Aside

Okay, so what I did was use the JavaScript version of FANN to deploy an implementation of my original Pathfinder on GitHub pages.

The cool/interesting thing about FANN.js is that it uses asm.js to make the compiled FANN library available inside a web browser.

What this means is that a working version of Pathfinder is now online for you to play with (link below) however…

There are two fairly large downsides to deploying with FANN.js instead of PHP-FANN:

  1. You cannot save the ANN after training.
  2. You cannot load a previously trained ANN.

These limitations mean that Pathfinder must be re-trained from scratch every time the page loads and this does limit the size and complexity of ANNs that are deployable using FANN.js.

The thing is, it may still be possible to save the ANN using the supported FANN lib methods/functions, like I did when I built the FANN-Neural-Network-Visualizer: manually query the ANN object and then format/export the necessary information as a string/plain text. The FANN ANN .net save file format seemingly isn’t all that different from an .INI file (though I am uncertain if this is universal in all language implementations of FANN) and it’s something I plan on playing around with in the future.

Far be it from me to be the bearer of fake news… turns out… it actually helps to read the documentation thoroughly and not just skim it and then do a search for keywords! 😛

FANN.js actually DOES have a save function but it doesn’t follow the FANN Lib reference manual’s “save_…” naming convention and instead implements a JS Object.export().

I understand why they did that… and it does kinda make sense in the mixed up JS world but… it still holds to my “anti-wheel” digression argument that you haven’t read about yet.

Having said that… I promise to ritually self-flagellate by Gothic torch light using the sharp side of a motherboard!

I really should have done a better job of reading the docs! :-/

Why use FANN.js over PHP-FANN

Far be it from me to ever sing the praises of JS over PHP, however in order to deploy a neural network using PHP you have to have a server and the ability to compile & install PHP extensions, and that costs money. GitHub Pages is free to me and to you, but it doesn’t support the more robust server architecture that PHP requires, so using FANN.js lets me deploy my bots and AI in a way that lets you actually use them instead of just reading about them.

All things being equal, I would still recommend the PHP version of FANN however the JS version does work and with a little improvement could become a viable deployment option!

Having said that, what I am really interested in with FANN.js is that JavaScript has a direct path between the browser environment and the GPU via the WebGL API. With PHP it is technically possible to commune with the GPU, however in practice it’s not generally done, and until the PHP devs get their head out of their asses and start thinking outside the box (PHP is now mostly a general-purpose language so start treating it like one…), PHP+GPU stuffs isn’t going to be the easiest pickle jar to crack with the existing available options, though again, I’m not saying it is impossible either.

So, in the future I intend to see if I can’t use FANN.js + WebGL shaders to make FANN training faster (no promises) and then hopefully export the ANN.net file so that we can use/deploy the GPU trained ANN in a PHP environment.

Play Stump the Neural Network

So the online version of the Pathfinder network learns/re-trains from scratch every time the page loads and as such it can’t spend unlimited amounts of time training. That’s normally not a concern, because even if your 1000 layer celebrity deep fake porn ANN takes 100 CPU years to train (i.e. 1 CPU = 100 years, 100 CPU = 1 year etc… ) it isn’t a major issue: likely you are buying your compute from Jeff Bezos or Bill Gates and they have plenty of underutilized computers lying around waiting for you to rent a few FLOPS.

In the end, you save the ANN model so you can use it immediately when you want it… but FANN.js says “Nah I’m good! Who needs to do something as convenient as save or reload!” (then again (and mostly off topic) JavaScript tends to seem to like reinventing round wheels as square uh… anti-wheels). In any case, between the small training time and the inherent random nature/path of gradient descent, the final neuronal weights will always be different, and when the ANN fails (hence the “stump the ANN”) it won’t always take the same path (between page reloads).

This is easier understood if I just show you.

Given this input

I got this output

Note that diagonal steps are valid so this path is technically valid but the path is far less efficient than the straight line between the two points that a human would draw/walk.

Reload the page (not required unless you are playing with this idea) and try again…

A different Path was found.

Neither was optimal but a path was found and more cells than necessary were visited.

Here’s some additional examples:

Input

Pathfinder ANN Output

Input

Pathfinder ANN Output Back tracking… ugh!

Input

Pathfinder ANN Output

I believe that’s called the sidestep slide!

Input

Pathfinder ANN Output

I mean… it’s not the path I would have chosen but it made it! 😛

If you’d like to try your hand at stumping my Pathfinder you can checkout the live demo here:

Pathfinder Online: geekgirljoy.github.io/Pathfinder_Neural_Network/

You can download all the code (for free) here:

Pathfinder Code on GitHub: https://github.com/geekgirljoy/Pathfinder_Neural_Network

And with that, have a great week everyone.


If you like my coding projects, art, bizarre opinions and writing style… consider supporting me through Patreon.

But if all you can do is Like, Share, Comment and Subscribe… well that’s cool too!

Much Love,

~Joy

The Contrast-a-tron

Today we’re going to continue my introduction to creating your own data sets series by building Contrast-a-tron.

Now, I know what you are thinking:

“We already did that, like… a while ago!”

Here’s the thing though… we didn’t! 😉

And besides, it wasn’t that long ago!

What we built before was a Contrast-inator, and a Contrast-inator and a Contrast-a-tron are not the same thing! 😛

Let me explain…

  • The Contrast-inator: Learned how to “predict/classify” if a single input color was to the left (darker) or to the right (lighter) in an imaginary red line in the exact center of a 2D gray-scale gradient representation of the 0-255 RGB 3D color space.
  • The Contrast-a-tron (this bot): Is a much smarter and more interesting bot. It will learn how to “predict/classify” two input colors as “darker” and “lighter” or “the same” compared with each other. Which is a much more challenging task for the bot to learn.

But before we get into that I think I owe you a wallpaper.

A Wallpaper

Don’t mind the title, it’s definitely not a template placeholder! 😛

Anyway, just due to me being me, I have a lot of old robots and parts lying around and I was out in the ol’ boneyard and I found this really beat up crypto mining bot for us to play with.

I built it back when I was going to launch my own currency (A long time ago when it was still a cool thing to do and not everyone was like “my ICO is next week, you should mine sum!!!!” 😉 😉 ), yeah… no thanks!

Anyway, the bot’s memory is completely corrupt, but… the optical circuitry and hardware are still functional and since mining bots are built to operate deep under miles of data in extreme low light conditions at high speed, its visual acuity is top-notch and it even supports infrared mode!

So don’t let its tiny eyes fool you, they are incredibly sensitive which is perfect for today’s project! 🙂

Contrast_a_tron 1920 x 1080 Wallpaper
Contrast_a_tron 1920 x 1080 Wallpaper

I should add that not all posts get a theme song but today’s is Night Business by Perturbator (not a sponsor), I love the little vocoded(?) robotic voice about two minutes and twenty seconds in. It’s definitely what this bot’s voice sounds like! 😛

Also before we proceed, I’d just like to set the record straight and confirm that I’m definitely not Satoshi Nakamoto!

The Contrast-a-tron

To begin, let’s first look at what our Contrast-inator does:

Is this pixel brighter or darker than the red line?
Is this pixel brighter or darker than the red line?

It takes a color/shade as an input and then tries to determine which side of the red line it falls on.

Not that useful but it’s good for operating inside a known range that never changes. Like, was the light red or green kinda stuff, or conceptually like a line following robot.

Anyway, what if you wanted to start comparing two colors at the same time and to make things even more complicated, what if the gradient wasn’t always facing the same direction (meaning the “brighter/darker” pixel could be on the left or the right)?

For most of you that task is trivial and you could do it almost unconsciously or with minimal mental effort, not the Contrast-inator though!

To compare two pixels the Contrast-inator must evaluate each separately, and because the red line (which you can imagine is “where the robot is standing” on the gradient when it’s evaluating a color) doesn’t change, if both colors are to its left or right (the bot’s vantage position / the red line), then it is completely unable to compare them.

Because these colors are on the same side of the red line, the Contrast-inator cannot compare them but the Contrast-a-tron can.
Because these colors are on the same side of the red line, the Contrast-inator cannot compare them but the Contrast-a-tron can.

Just to be clear, the Contrast-inator will say that both pixels/shades are “brighter/to the right” of zero (where it stands / its anchor) but it cannot figure out which of the two colors is brighter, and the same is true if both colors are darker (to the left of the red line).
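
If it helps, here’s that limitation sketched as a toy PHP function (an illustration of the idea, not the actual neural network): a classifier anchored to zero collapses every shade to one of two answers, so any two shades on the same side of the line become indistinguishable.

```php
<?php
// Illustration only (not the real ANN): the Contrast-inator effectively
// answers one question -- "is this shade left (-1) or right (+1) of zero?"

function contrastInator(float $shade): int
{
    return $shade < 0 ? -1 : 1; // darker side / brighter side of the red line
}

// Two different shades, both to the right of the red line:
var_dump(contrastInator(0.3)); // int(1)
var_dump(contrastInator(0.9)); // int(1)
// Identical answers -- the fixed anchor throws away exactly the
// information needed to say which of the two shades is brighter.
```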

Further, there is also no guarantee that we will always present the colors to the bot in the order of darker on the left and lighter on the right meaning that sometimes the gradient will be lighter on the left and darker on the right and we will need the bot to notice that difference and accommodate that circumstance.

How the Contrast-a-tron Works Differently

The Contrast-a-tron isn’t anchored to zero (the center of the gradient) and instead we can think of it moving around the gradient to try and find the “center” of the two colors (whatever color that might be) and from there it can evaluate which side (input color / shade) is brighter and which is darker.

In the event that the input colors/shades are the same then both Input A & B will be in the same place which means that it will be neither to the right nor to the left of the bot.

How the Contrast-a-tron works differently.
How the Contrast-a-tron works differently.

How the Neural Networks Differ

I didn’t spend a lot of time discussing the structure of the neural network when we built the Contrast-inator but now that we have something to compare it against let’s look at a visual representation of each network.

How the Contrast-inator and the Contrast-a-tron neural networks differ.
How the Contrast-inator and the Contrast-a-tron neural networks differ.

On the left you see the Contrast-inator with its single input neuron, a hidden layer containing two hidden neurons and an output layer with two output neurons.

Additionally you see two “Bias” neurons represented in yellow that help the network learn what we want by “biasing” the output of that layer to the next so that it is never “none” (zero or no output).

What this means is that bias neurons add their value to the output signal of each neuron from their layer so that there is never no “activation signal” and some value always propagates forward.

All layers except the output layer will always have a single bias neuron. There is no need for a bias neuron on the output layer because there is no signal to propagate beyond the output neurons so it wouldn’t serve any purpose.

Bias neurons have no inputs.

In practice we don’t have to concern ourselves with the bias neurons and the ANN will manage them itself, but I like to draw them because they do exist and they are part of the network. However, it’s common for people not to include them in diagrams because they are so easy to ignore: we don’t really need to do anything with them and they are just there to help the signal propagate.
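
To make the “bias keeps the signal alive” idea concrete, here’s a simplified sketch of a single neuron in PHP (the weights and bias values are made up for illustration, not taken from any trained network):

```php
<?php
// Simplified single-neuron sketch (hypothetical weights/bias, for
// illustration only -- not values from the trained Contrast-a-tron).

function neuron(array $inputs, array $weights, float $bias): float
{
    $sum = $bias; // the bias contributes even when every input is 0
    foreach ($inputs as $i => $x) {
        $sum += $x * $weights[$i];
    }
    return tanh($sum); // symmetric sigmoid-style activation (-1 to 1)
}

// With all-zero inputs, only the bias keeps the neuron from going silent:
echo neuron([0, 0], [0.5, -0.25], 0.8) . PHP_EOL; // tanh(0.8) ≈ 0.664
echo neuron([0, 0], [0.5, -0.25], 0.0) . PHP_EOL; // tanh(0) = 0, no signal
```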

In any case, the Contrast-a-tron differs by including a second input neuron (for the second shade/color) and a second hidden layer which helps the Contrast-a-tron to be a little “smarter” and learn what we want it to.

I have a post about how to create diagrams like this called Visualizing Your FANN Neural Network and you can download a copy of the open source visualization software I wrote for free from my GitHub account here: https://github.com/geekgirljoy/FANN-Neural-Network-Visualizer

Training The Contrast-a-tron

When we created the Contrast-inator, I walked you through each training example and how it was derived because it was a very small data set requiring only three examples. This data set is a bit longer, with thirteen examples, and it will be a lot easier to show you the data set and then draw you a picture than to type a description. But before we look at the training data, let’s make sure we understand the outputs.

Understanding the Contrast-a-tron output.
Understanding the Contrast-a-tron output.

There are two outputs and we’ll call them A & B and they are in that order.

In an ideal world the bot will give us -1 & -1 to mean they are the same, 1 & -1 to mean that A is Brighter and B is Darker and -1 & 1 to mean A is Darker and B is Brighter.

In reality… what we get is a number that comes close but isn’t -1 or 1, called a “floating point number” in computer science, but most people just call them decimal numbers, for example 0.123.

In practice this means that as long as A & B are not both negative, then whichever has the higher positive value is the “brighter” color and whichever has the lower positive value is the “darker” color otherwise they are the same (A==B).
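
That interpretation rule is small enough to write down directly (this is the same check used in the test script later in this post):

```php
<?php
// The output-interpretation rule from above as a helper function:
// both outputs negative => "same", otherwise the larger output wins.

function interpret(float $a, float $b): string
{
    if ($a <= 0 && $b <= 0) {
        return 'Same';       // both outputs negative(ish): shades match
    }
    return $a > $b ? 'A is Brighter' : 'B is Brighter';
}

echo interpret(0.9998, -1.0) . PHP_EOL; // A is Brighter
echo interpret(-1.0, -0.18) . PHP_EOL;  // Same
```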

Let’s look at the training data and visualize it.

Contrast-a_tron.data

This is the complete Contrast-a-tron training data.

The first line is the “FANN Header” which consists of: the Total_Number_of_Example_Sets the Number_of_Inputs the Number_of_Outputs\n

Note the spaces between the values on the header line as well as between the inputs and the output values.

Line 2 (-1 -1) is an input example. Line 3 (-1 -1) is an output example and the pattern of Input_Example\nOutput_Example\n continues to the end of the document.

13 2 2
-1 -1
-1 -1
-0.5 -0.5
-1 -1
0 0
-1 -1
0.5 0.5
-1 -1
1 1
-1 -1
1 -1
1 -1
0.5 0
1 -1
0 0.5
-1 1
-1 -0.5
-1 1
-0.5 -1
1 -1
1 0.5
1 -1
0.5 1
-1 1
-1 1
-1 1
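
If you’d rather generate a file like this than type it by hand, the header-plus-alternating-lines pattern described above is easy to emit from PHP (a quick sketch; the example data and file name here are made up):

```php
<?php
// Sketch: emit a FANN training file from an array of [inputs, outputs]
// pairs, following the "header, then alternating input/output lines"
// pattern described above. Example data & file name are hypothetical.

$examples = [
    [[-1, -1], [-1, -1]], // same        -> Neutral/Same
    [[ 1, -1], [ 1, -1]], // A brighter  -> A is Brighter
    [[-1,  1], [-1,  1]], // B brighter  -> B is Brighter
];

$lines = [count($examples) . ' 2 2']; // Total_Sets Num_Inputs Num_Outputs
foreach ($examples as [$inputs, $outputs]) {
    $lines[] = implode(' ', $inputs);
    $lines[] = implode(' ', $outputs);
}

file_put_contents('Example.data', implode("\n", $lines) . "\n");
```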

Let’s visualize this training data which should hopefully give you a more intuitive sense for how these numbers translate to information the Contrast-a-tron ANN can use to learn.

Visualizing the Contrast-a-tron training data set
Visualizing the Contrast-a-tron training data set

The Code

Here’s the code used to train. I have other tutorials covering how all of this works available on my Topics and Posts page so I won’t go into the details here, but basically it sets up a training environment, trains the Contrast_a_tron ANN and saves the results to a FANN .net network file.

TrainContrast_a_tron.php

<?php

// Network topology: 2 inputs, two hidden layers (2 and 1 neurons), 2 outputs
$num_input = 2;
$num_output = 2;
$layers = array($num_input, 2, 1, $num_output);
$ann = fann_create_standard_array(count($layers), $layers);

// Training parameters
$desired_error = 0.0000000001;
$max_epochs = 900000;
$epochs_between_reports = 10;

if ($ann) {
    // Symmetric sigmoid activations keep outputs in the -1 to 1 range
    fann_set_activation_function_hidden($ann, FANN_SIGMOID_SYMMETRIC);
    fann_set_activation_function_output($ann, FANN_SIGMOID_SYMMETRIC);
    fann_set_training_algorithm($ann, FANN_TRAIN_INCREMENTAL);

    // Train from the data file and save the resulting network
    $filename = dirname(__FILE__) . "/Contrast_a_tron.data";
    if (fann_train_on_file($ann, $filename, $max_epochs, $epochs_between_reports, $desired_error)){
        echo 'Contrast_a_tron trained.' . PHP_EOL;
    }

    if (fann_save($ann, dirname(__FILE__) . "/Contrast_a_tron.net")){
        echo 'Contrast_a_tron.net saved.' . PHP_EOL;
    }

    fann_destroy($ann);
}

 

TestContrast_a_tron.php

We next need to test the ANN, so I use two nested “foreach loops” over ranges as the test inputs, with the outer loop counting down from 1 to -1 and the inner loop counting up from -1 to 1, each stepping by 0.2.

<?php

$train_file = (dirname(__FILE__) . "/Contrast_a_tron.net");
if (!is_file($train_file))
    die("Contrast_a_tron.net has not been created! Please run TrainContrast_a_tron.php to generate it" . PHP_EOL);

$ann = fann_create_from_file($train_file);

if ($ann) {
    
    foreach(range(1, -1, -0.2) as $test_input_value_a){
        foreach(range(-1, 1, -0.2) as $test_input_value_b){
        
            $input = array($test_input_value_a, $test_input_value_b);
            $result = fann_run($ann, $input);

            $a = number_format($result[0], 4);
            $b = number_format($result[1], 4);
            
            // What answer did the ANN give?
			
            $answer = NULL;
            $evaluation = '';
            if($a <= 0 && $b <= 0){
                $evaluation = 'Neutral/Same';
                $answer = 0;
            }
            elseif($a > $b){
                $evaluation = 'A is Brighter';
                $answer = -1;
            }
            elseif($b > $a){
                $evaluation = 'B is Brighter';
                $answer = 1;
            }
            else{ 
                $evaluation = ' OOPSIES!!!!!!!';
            }

            echo 'Contrast_a_tron(' . $input[0] . ', ' . $input[1] . ") -> [$a, $b] - $evaluation" . PHP_EOL; 
        }
    }
    fann_destroy($ann);
}
else {
    die("Invalid file format" . PHP_EOL);
}

Results

The Results/Output of the test code.

Contrast_a_tron(1, -1) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(1, -0.8) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(1, -0.6) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(1, -0.4) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(1, -0.2) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(1, 0) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(1, 0.2) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(1, 0.4) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(1, 0.6) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(1, 0.8) -> [0.9986, -1.0000] - A is Brighter
Contrast_a_tron(1, 1) -> [-1.0000, -0.1815] - Neutral/Same
Contrast_a_tron(0.8, -1) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.8, -0.8) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.8, -0.6) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.8, -0.4) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.8, -0.2) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.8, 0) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.8, 0.2) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.8, 0.4) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.8, 0.6) -> [0.9992, -1.0000] - A is Brighter
Contrast_a_tron(0.8, 0.8) -> [-1.0000, -0.2218] - Neutral/Same
Contrast_a_tron(0.8, 1) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0.6, -1) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.6, -0.8) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.6, -0.6) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.6, -0.4) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.6, -0.2) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.6, 0) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.6, 0.2) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.6, 0.4) -> [0.9995, -1.0000] - A is Brighter
Contrast_a_tron(0.6, 0.6) -> [-1.0000, -0.4005] - Neutral/Same
Contrast_a_tron(0.6, 0.8) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0.6, 1) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0.4, -1) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.4, -0.8) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.4, -0.6) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.4, -0.4) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.4, -0.2) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.4, 0) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.4, 0.2) -> [0.9996, -1.0000] - A is Brighter
Contrast_a_tron(0.4, 0.4) -> [-1.0000, -0.6543] - Neutral/Same
Contrast_a_tron(0.4, 0.6) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0.4, 0.8) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0.4, 1) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0.2, -1) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.2, -0.8) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.2, -0.6) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.2, -0.4) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.2, -0.2) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0.2, 0) -> [0.9996, -1.0000] - A is Brighter
Contrast_a_tron(0.2, 0.2) -> [-1.0000, -0.8580] - Neutral/Same
Contrast_a_tron(0.2, 0.4) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0.2, 0.6) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0.2, 0.8) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0.2, 1) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0, -1) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0, -0.8) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0, -0.6) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0, -0.4) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(0, -0.2) -> [0.9996, -1.0000] - A is Brighter
Contrast_a_tron(0, 0) -> [-1.0000, -0.9557] - Neutral/Same
Contrast_a_tron(0, 0.2) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0, 0.4) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0, 0.6) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0, 0.8) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(0, 1) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.2, -1) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(-0.2, -0.8) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(-0.2, -0.6) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(-0.2, -0.4) -> [0.9995, -1.0000] - A is Brighter
Contrast_a_tron(-0.2, -0.2) -> [-1.0000, -0.9878] - Neutral/Same
Contrast_a_tron(-0.2, 0) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.2, 0.2) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.2, 0.4) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.2, 0.6) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.2, 0.8) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.2, 1) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.4, -1) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(-0.4, -0.8) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(-0.4, -0.6) -> [0.9994, -1.0000] - A is Brighter
Contrast_a_tron(-0.4, -0.4) -> [-1.0000, -0.9965] - Neutral/Same
Contrast_a_tron(-0.4, -0.2) -> [-1.0000, 0.9997] - B is Brighter
Contrast_a_tron(-0.4, 0) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.4, 0.2) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.4, 0.4) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.4, 0.6) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.4, 0.8) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.4, 1) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.6, -1) -> [0.9998, -1.0000] - A is Brighter
Contrast_a_tron(-0.6, -0.8) -> [0.9990, -1.0000] - A is Brighter
Contrast_a_tron(-0.6, -0.6) -> [-0.9999, -0.9989] - Neutral/Same
Contrast_a_tron(-0.6, -0.4) -> [-1.0000, 0.9996] - B is Brighter
Contrast_a_tron(-0.6, -0.2) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.6, 0) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.6, 0.2) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.6, 0.4) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.6, 0.6) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.6, 0.8) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.6, 1) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.8, -1) -> [0.9981, -1.0000] - A is Brighter
Contrast_a_tron(-0.8, -0.8) -> [-0.9999, -0.9995] - Neutral/Same
Contrast_a_tron(-0.8, -0.6) -> [-1.0000, 0.9993] - B is Brighter
Contrast_a_tron(-0.8, -0.4) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.8, -0.2) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.8, 0) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.8, 0.2) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.8, 0.4) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.8, 0.6) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.8, 0.8) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-0.8, 1) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-1, -1) -> [-0.9998, -0.9998] - Neutral/Same
Contrast_a_tron(-1, -0.8) -> [-1.0000, 0.9982] - B is Brighter
Contrast_a_tron(-1, -0.6) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-1, -0.4) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-1, -0.2) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-1, 0) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-1, 0.2) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-1, 0.4) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-1, 0.6) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-1, 0.8) -> [-1.0000, 0.9998] - B is Brighter
Contrast_a_tron(-1, 1) -> [-1.0000, 0.9998] - B is Brighter

GitHub

As always, you can download a copy of this code on GitHub for free, and if you have any questions or comments please leave them below.

Contrast-a-tron on GitHub: https://github.com/geekgirljoy/Contrast-a-tron


If you find yourself thinking…

“Joy you’re the best!”

I’d say….

If you support the resistance against Big AI then consider supporting my efforts through Patreon.

But, if all you can do is Like, Share, Comment and Subscribe… well that’s cool too!

Much Love,

~Joy

The Contrast-inator

Let’s keep things simple, you want to read a post and conveniently I’ve written one for you!

I’ll spare everyone my recent fascinations with macabre subjects and opt to get right to the topic of the day!

Anyway, as the Jane Goodall of bots I’ve learned a little about how to communicate with them using rules they understand, and today I’m going to show you how to make rules that get a bot to understand, and do, what you want it to do.

But… before we get into that, here’s the wallpaper!

The Contrast-inator Bot Wallpaper

Why Contrast-inator? Well, as far as the “-inator” part is concerned let’s just say I enjoy Phineas and Ferb. 😛

As for the “Contrast” part, we’ll get into that next but the big eyes of this bot are important. 😛

Also… it doesn’t need hands, arms, legs or feet to do its job so I didn’t 3D print the parts, waste not want not! 😉 😛

Contrast-inator 1920×1080 Wallpaper

The Contrast-inator

So, recently I received a few comments that amount to something along the lines of “How do you decide on the ‘rules’ for training your neural network?”.

My response is basically if you can “phrase” a training rule in a way the bot can understand, then you can make the rules to be whatever you want/need them to be.

And the thing we’re going to try to teach the bot today to help us explore this topic is… given an input value, tell us if the value falls to the “left” or to the “right” of an “anchor point”.

That sounds more complicated than it really is, and I intend this tutorial to be for beginners, so let me try to simplify my description… uh… so, think of a gray-scale gradient from black to white.

Imagine a red line in the center of the gradient.

Now, given some gray-scale color e.g. RGB(102, 102, 102) as “input”, how can we train a bot to tell us if the color falls to the left or to the right of the red line… like this:

Is this pixel brighter or darker than the red line?

I know that might seem complicated while at the same time also seem kind of useless… I assure you that neither stipulated condition is true.

In regard to usefulness, just as a hypothetical example… a neural network like this could act as the “brain” of a “line follower” robot. Instead of reading the voltages directly from a photodiode and then using if/else to direct motors, you could pass the diode voltages to the neural network and let it decide which motor to move via an H-Bridge / digital potentiometer.

An Arduino would need a WiFi shield for something like that to work, but a line follower built on something like a Raspberry Pi could run its neural network “brain” locally.

Which brings us back to complexity and how we build a rule set to teach our Contrast-inator bot to tell us if a pixel is brighter or darker than the color where the red line is.

Forget about what I said about the hypothetical line-follower robot, the Arduino and the Raspberry Pi… it’s more complicated than I want this post to be and it’s just an example anyway. 😛

Let’s start over…

We know that any answers our bot gives us (the output) will look like a “floating point” number (a decimal value e.g. 0.01) and basically our input will also be a floating point number too.

With this in mind we can start to imagine that our training data inputs and the associated outputs will look like a series of numbers.

But what will the numbers mean and how can we know if the bot is correct?

Well, let’s step back again and think about what rules we need to teach the bot first before we even worry about encoding the training data for the bot.

What rules might a human need if we had to describe the process to someone for them to be able to do it?

Plain English Rules For the Bot to Learn:

  1. If the color falls to the left of the red line then it can be described as “Darker”.
  2. If the color is neither to the left nor the right of the red line, then we can say the color is directly in the center. We might describe this position or color state as being “Neutral” in relation to the red line.
  3. If the color falls to the right of the red line then it can be described as “Brighter”.

Given these three super easy rules I believe most, if not all of you should be able to answer if a color falls to the left or the right of the red line with a high degree of accuracy.

However, your accuracy would diminish the closer the color is to the red line in the center, because you are intuitively guessing and the colors surrounding the center of the gradient all look like very similar grays, i.e. there is low contrast between them.

The colors at the ends of the color gradient (black/left and white/right) have the largest contrast between them and are the easiest to determine which side they fall on.

With our rules laid out in English, let’s return to the idea of the training data (our rules), which consists of numbers, and how we will represent our three rules as numbers.

I’ve already said the inputs and outputs will be floating point numbers, but what we haven’t covered yet is the fact that our numbers are “signed” (the range has negative and positive polarities), with our range being -1.00 to 1.00.

This means that Black can be encoded as: -1.00 or -1 for simplicity with the decimal being implied.

This also means that White can be encoded as: 1.00 or 1, also with the decimal being implied.

Given our signed float range and a few colors converted to a float within our range, we can easily determine algorithmically if a color is on the left or right of the red line, even if it’s very close to the center, with 100% accuracy (better than human capability) simply by checking whether it is greater than or less than zero.

Meaning… a neural network is NOT needed to accomplish this task, but… that’s not the point! 😛

Our goal is to teach a neural network to do this nonetheless because it is a simple problem and the rules (training data) are simple enough that a beginner should be able to understand how they are derived if they exert even a modicum of effort!
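To make that “greater than or less than zero” point concrete, here’s a tiny non-neural baseline sketch (the function name is mine, not part of the project):

```php
<?php
// A trivial non-neural baseline: classify a float in [-1, 1]
// relative to the center of the gradient (the "red line" at 0).
function ClassifyByThreshold(float $value): string {
    if ($value < 0) return 'Darker';   // left of the red line
    if ($value > 0) return 'Brighter'; // right of the red line
    return 'Neutral';                  // exactly at the center
}

echo ClassifyByThreshold(-0.66797385620915) . PHP_EOL; // Darker
echo ClassifyByThreshold(0.7359477124183) . PHP_EOL;   // Brighter
echo ClassifyByThreshold(0.0) . PHP_EOL;               // Neutral
```

The neural network we train below learns to approximate exactly this comparison from just three examples.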

Here’s what that looks like:

Example Colors to Float Range Input

Notice that the first two colors are to the left of zero (darker) because they are negative and the third color is far to the right (much lighter) because it is closer to 1 than 0.

Color (R, G, B)    As Float             Side    Meaning
42, 42, 42         -0.66797385620915    Left    Darker
102, 102, 102      -0.19738562091503    Left    Darker
221, 221, 221       0.7359477124183     Right   Lighter

Fascinating… but… how are you converting the colors to floats?

Okay look, this won’t be on the mid-term test, and it’s in no way actually necessary to go over because we won’t need to do this to train the bot, but since you are curious, here’s a function you can use to convert actual RGB & grayscale colors to a float in the right range:

How to convert a color to a signed float between -1.00 to 1.00:

<?php 
// Input a number between 0 and $max and get a number inside
// a range of -1 to 1
function ConvertColorToInputFloatRange($color_int_value, $max = 255){
    return ((($color_int_value - -1) * (1 - -1)) / ($max - 0)) + -1;
}

// RGB Color to range of -1 to 1
$R = 42;
$G = 42;
$B = 42;
echo  "Pixel_RGB($R,$G,$B) = " . ConvertColorToInputFloatRange($R+$G+$B, 255+255+255) . PHP_EOL;

// RGB to Gray-scale to range of -1 to 1
$gray = ($R+$G+$B) / 3;
echo  "Pixel_Grayscale($gray) = " . ConvertColorToInputFloatRange($gray, 255) . PHP_EOL;

// RGB Color to range of -1 to 1
$R = 102;
$G = 102;
$B = 102;
echo  "Pixel_RGB($R,$G,$B) = " . ConvertColorToInputFloatRange($R+$G+$B, 255+255+255) . PHP_EOL;

// RGB Color to range of -1 to 1
$R = 221;
$G = 221;
$B = 221;
echo  "Pixel_RGB($R,$G,$B) = " . ConvertColorToInputFloatRange($R+$G+$B, 255+255+255) . PHP_EOL;


/*
Output:

Pixel_RGB(42,42,42) = -0.66797385620915
Pixel_Grayscale(42) = -0.66274509803922
Pixel_RGB(102,102,102) = -0.19738562091503
Pixel_RGB(221,221,221) = 0.7359477124183

*/

Now that you all at least believe it’s possible to convert a color to a float between -1 & 1 forget all about this function because we won’t need it to train the bot! 😛

Then… how do we teach a neural network to do this?

Well, let’s talk about what the output for this bot looks like before we get back to creating the training data from our rules.

We know that our output is a float, and even though it is possible to teach the neural network to do this with a single output, I find I get better results from the neural network using two outputs.

This is because it’s actually very easy for the bot to understand that we want it to detect whether the input value (color) is slightly offset to the left or right of the red line, but it’s not as easy for it to determine exactly where the center is (just like you, though it’s still better at it). So our margin of error (the number of colors it can’t place on the left or right, e.g. the colors it will call neutral) tends to be slightly larger if we only use a single output float.

What that means is:

  1. Our Input looks like: float
  2. Our output looks like: float_left float_right

With that in mind we have now covered everything necessary to begin converting our rules to training data.

Remember, that the decimals are implied!

Let’s start by teaching it what the darker colors on the left look like:

Black RGB(0,0,0), is the farthest most color to the left and is encoded as -1 and with two output values representing Left & Right we get a rule that looks like this:

Learn “Darker” colors (floats closest to -1.00) are on the left:

The output value on the left is set to 1, meaning negative values polarize more strongly to the left; this is reflected by the left output being 1.00 and the right output being -1.00.

-1
1 -1

Learn “Neutral” colors (floats closest to 0.00) are near the center:

I’m using -1.00 & -1.00 to mean that an input of exactly zero (the exact center, whatever color that is) is not strongly polarized toward either side of the gradient.

The goal here is to help it learn that values near zero are not strongly polarized and that zero isn’t polarized at all.

0
-1 -1

Learn “Brighter” colors (floats closest to 1.00) are on the right:

The output value on the right is set to 1 which means positive values more strongly polarize to the right and this is reflected by the right output being 1.00 and the left output value being -1.00.

1
-1 1

 

FANN (the library we’re using to train the neural network) requires that a header be stored with the training data so it can read it, and that header looks like this:

Number_Of_Training_Examples Number_Of_Inputs Number_Of_Outputs

*Note the spaces between values

So, combined our training data file looks like this:

Contrastinator.data

3 1 2
-1
1 -1
0
-1 -1
1
-1 1
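If you’d rather generate the file programmatically than type it by hand, here’s a minimal sketch (the helper function name is mine) that builds the same three-rule file in FANN’s text format:

```php
<?php
// Build FANN's plain-text training format: a header line
// "num_examples num_inputs num_outputs", then alternating
// input and output lines, values separated by spaces.
function BuildTrainingData(array $examples): string {
    $numInputs  = count($examples[0][0]);
    $numOutputs = count($examples[0][1]);
    $lines = [count($examples) . " $numInputs $numOutputs"];
    foreach ($examples as [$input, $output]) {
        $lines[] = implode(' ', $input);
        $lines[] = implode(' ', $output);
    }
    return implode(PHP_EOL, $lines) . PHP_EOL;
}

$rules = [
    [[-1], [ 1, -1]], // darker  -> polarizes left
    [[ 0], [-1, -1]], // neutral -> neither side
    [[ 1], [-1,  1]], // brighter -> polarizes right
];

$data = BuildTrainingData($rules);
echo $data;
file_put_contents('Contrastinator.data', $data);
```

This becomes more useful than hand-typing once a dataset grows past a handful of examples.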

And that’s it, we’ve converted our rules to training data so… let’s train the bot!

TrainContrastinator.php

You will need FANN installed to train this bot.

Follow this tutorial to learn how to install FANN.

<?php

$num_input = 1;
$num_output = 2;
$num_layers = 3;
$num_neurons_hidden = 2;
$desired_error = 0.000001;
$max_epochs = 500000;
$epochs_between_reports = 1000;

$ann = fann_create_standard($num_layers, $num_input, $num_neurons_hidden, $num_output);

if ($ann) {
    fann_set_activation_function_hidden($ann, FANN_SIGMOID_SYMMETRIC);
    fann_set_activation_function_output($ann, FANN_SIGMOID_SYMMETRIC);

    $filename = dirname(__FILE__) . "/Contrastinator.data";
    if (fann_train_on_file($ann, $filename, $max_epochs, $epochs_between_reports, $desired_error))
        echo 'Contrastinator trained.' . PHP_EOL;

    if (fann_save($ann, dirname(__FILE__) . "/Contrastinator.net"))
        echo 'Contrastinator.net saved.' . PHP_EOL;

    fann_destroy($ann);
}

It won’t take very long for the bot to learn our rules.

Once you see the message “Contrastinator trained.” you are ready to test your new bot!

TestContrastinator.php

This code will test Contrastinator using input values it has not trained on. Because we designed good rules, the bot is able to answer correctly even though it never actually saw most of the test values; it did see -1, 0 and 1, though, along with their “ideal” outputs.

Notice, the $brighter and $darker variables are the output of the neural network.

The $evaluation variable is a test for our benefit and does not modify or affect the bot’s answers; the answers are correct even if we don’t do the evaluation. It just helps us programmatically confirm/interpret what the bot’s answers mean.

<?php
$train_file = (dirname(__FILE__) . "/Contrastinator.net");
if (!is_file($train_file))
    die("Contrastinator.net has not been created! Please run TrainContrastinator.php to generate it" . PHP_EOL);

$ann = fann_create_from_file($train_file);
if ($ann) {
    
    foreach(range(-1, 1, 0.1) as $test_input_value){
        
        $input = array($test_input_value);
        $result = fann_run($ann, $input);
        $darker = $result[0];
        $brighter = $result[1];
        
        if($brighter < 0 && $darker < 0){
            $evaluation = 'Neutral';
        }
        elseif($brighter > $darker){
            $evaluation = 'Brighter';
        }
        elseif($brighter < $darker){
            $evaluation = 'Darker';
        }                
                
        echo 'Contrastinator(' . $input[0] . ") -> [$darker, $brighter] - Input is $evaluation" . PHP_EOL; 
    }
        
    fann_destroy($ann);
} else {
    die("Invalid file format" . PHP_EOL);
}

Results:

Notice that it has no trouble detecting that an input of zero (0.00) is neutral and that it also correctly determines which side a color (represented by a float) falls on in relation to the center zero value.

Contrastinator(-1) -> [1, -1] - Input is Darker
Contrastinator(-0.9) -> [1, -1] - Input is Darker
Contrastinator(-0.8) -> [1, -1] - Input is Darker
Contrastinator(-0.7) -> [1, -1] - Input is Darker
Contrastinator(-0.6) -> [1, -1] - Input is Darker
Contrastinator(-0.5) -> [1, -1] - Input is Darker
Contrastinator(-0.4) -> [1, -1] - Input is Darker
Contrastinator(-0.3) -> [1, -1] - Input is Darker
Contrastinator(-0.2) -> [1, -1] - Input is Darker
Contrastinator(-0.1) -> [1, -1] - Input is Darker
Contrastinator(0) -> [-0.9997798204422, -0.99950748682022] - Input is Neutral
Contrastinator(0.1) -> [-1, 0.9995544552803] - Input is Brighter
Contrastinator(0.2) -> [-1, 0.99954569339752] - Input is Brighter
Contrastinator(0.3) -> [-1, 0.99953877925873] - Input is Brighter
Contrastinator(0.4) -> [-1, 0.9995334148407] - Input is Brighter
Contrastinator(0.5) -> [-1, 0.99952918291092] - Input is Brighter
Contrastinator(0.6) -> [-1, 0.9995259642601] - Input is Brighter
Contrastinator(0.7) -> [-1, 0.99952346086502] - Input is Brighter
Contrastinator(0.8) -> [-1, 0.99952149391174] - Input is Brighter
Contrastinator(0.9) -> [-1, 0.99952000379562] - Input is Brighter
Contrastinator(1) -> [-1, 0.99951887130737] - Input is Brighter

Contrastinator on Github

As with all my public code, you can download a copy of this project for free on my GitHub profile.

GitHub: Contrastinator

I hope this helps you better understand how to create your own training data sets, and as always, if you have any questions or trouble understanding any part of this post, please leave a comment and I would be happy to try and help you.


If you enjoy my content and or tutorials like this one, consider supporting me on Patreon for as little as $1 a month and cancel any time!

It’s not required but it helps me out.

But if all you can do is Like, Share, Comment and Subscribe, well… that’s cool too!

Much Love,
~Joy

OCR 2 – The MNIST Database

I know I probably haven’t been posting as frequently as many of you would like or even at my normal quality because… well, like for many of you, this year has just sucked!

Someone I’ve known my whole life died recently, not from the virus though it didn’t help things.

She went in for a “routine” procedure where they needed to use general anesthesia and there were “complications” during the procedure. Something to do with her heart but if I’m being honest, I don’t know all the details at this time.

Also, I’m not sure how by anyone’s definition anything involving anesthesia is routine?

An ambulance was called and she was rushed to the hospital, long story short, despite being otherwise fine when she went in, she never woke up from her coma. 😥

The hospital is/was on lock down like everyone else and so friends and family were unable to visit her before she died.

Her family intends to sue the Dr. for malpractice, personally… I think they should!

To add insult to injury, she was cremated without a funeral due to the whole pandemic social distancing BS that I’m just about ready to tell the government to go fuck itself over! 😦

I’m sorry, do my harsh words offend you? SHE DIED ALONE! That offends me!

Going forward, my advice… any procedure where they need to administer general anesthesia to you… or maybe any procedure at all… make sure it’s in a hospital or hospital adjacent (NOT A CLINIC) because those minutes waiting for an ambulance really do mean your life!

And if your doctor is like, “No worries this is routine… I’ve done this a thousand times”, maybe think carefully before putting your trust in that person.

Yes, we want doctors that are confident in their ability to treat us but make sure that it is confidence and not complacent hubris!

Further, no procedure is truly “routine” and a doctor, of all people, should know that and act accordingly!

“Primum non nocere”

~Hippocrates… (allegedly)

Regardless of the historical veracity of that quote, does the spirit of that principle still not apply?

Look, I’m not saying this to detract from the important life saving work doctors and medical workers do every day, it’s just that this is part of what’s going on in my life right now (and for many of you as well) and I’m sharing because I guess that’s what you do when you have a blog.

Additionally, less close to home, though still another terrible loss, John Horton Conway, notable math hero to geeks and nerds alike died as a result of complications from his contracting the Covid-19 virus. 😦

I’ve previously written a little about Conway’s work in my ancestor simulations series of posts.

Mysterious Game of Life Posts:

But that only scratches the surface of his work and famously Conway’s Game of Life was perhaps his least favorite but most well known work among non-mathematicians and it would both amuse and bug him if I only mentioned his game of life here so I’m not going to list his other accomplishments.

I’ll have a little chuckle off camera on his behalf. 😛

He really was a math genius and you would learn a lot of interesting, not to mention surreal… but I’ve said too much, ideas by reading about his accomplishments, which I encourage you to do!

In any case, people I know and admire need to stop dying because it’s killing me… not to mention my ratings and readership because I keep talking about it! 😛

I may have a terribly dark sense of humor at times, but going forward I demand strict adherence from all of you to the Oasis Doctrine! 😥

Oh, and speaking of pretentious art…

The OCR 2 Wallpaper

The original OCR didn’t exactly have a wallpaper but I did create an image/logo to go along with the project and its blog posts:

For the reason you might think I made it look like an eye… because it looks like a non-evil HAL 9000! 😛

Also, I like the idea of depicting a robotic eye in relation to AI and neural networks because, even though I am not superstitious in any way, it carries some of the symbology of the Illuminati, “The gaze of the Beholder”, “The Eye of Providence”, “The Evil Eye”, The Eye of Horus, The Eye of Ra, Eye of newt and needle… sorry. 😛

In this case, the eye of a robot invokes a sense of literal “Deus ex machina” (God from the machine), and it illustrates some people’s fears of “The Singularity” and of the possibility of an intelligence so much greater than our own that it calls into question our ability to even comprehend it… hmmm… is that too Lovecraftian? 😛

Anyway, because I enjoy the thought-provoking symbology (maybe it’s just me), I wanted to keep the same concept of the robot eye but update it to look a little less like a simple cartoon, subtly implying it’s a more advanced version of OCR that still fundamentally does the same thing. That’s most of the reasoning behind this wallpaper.

In any case, I hope you enjoy it.

OCR 2 Wallpaper

If you’d like the wallpaper with the feature image text here’s that version.

OCR 2 Wallpaper (with text)

So I guess having shared a few of the recent tragedies in my personal life and a couple of wallpapers, we should probably get mogating and talk about the point of today’s post!

We’re going to look at doing hand-written number (0-9) Optical Character Recognition using the MNIST database.

OCR 2 – The MNIST Dataset with PHP and FANN

I was recently contacted by a full-stack developer who wanted advice on creating his own OCR system for “stickers on internal vehicles”.

I think he means, some kind of warehouse robots?

He had seen my OCR ANN and seemingly preferred to work with PHP over Python, which if I’m being honest… I can’t exactly argue with!

PHP is C++ for the web and powers something like 80% of websites, so it should come as no surprise to anyone (even though it does) that there are people who want to use it to build bots! 😛

But, if you would rather work with a different language there is a better than decent chance FANN has bindings for it so you should be able to use the ANN’s even if you are not using PHP.

So anyway, he gave me a dollar for my advice through Patreon and we had a brief conversation over messaging where I offered him a few suggestions and walked him through getting started.

Ultimately, because he lacks an AI/ML background and/or a sufficient familiarity with an AI/ML workflow he wasn’t very confident about proceeding so I recommended he follow my existing tutorials which should help him learn the basics of how to proceed.

Now here’s the thing, even among people who like my content and value my efforts, few people are generous enough to give me money for my advice and when they do, I genuinely appreciate it! 🙂

So, as a thank you I want to offer another (more complete) example of how to use a neural network to do OCR.

If he followed my advice, he should be fairly close to being ready for a more complete real world OCR ANN example (assuming he is still reading 😛 ) but if not, his loss is still your gain!

Today’s code implements OCR using the MNIST dataset. I demonstrate a basic form of pooling (though the stride is not adjustable as is) and show convolutions using the GD image library’s image convolution function, and I include 17 demonstration kernel matrices that you can experiment with, though not all are relevant or necessary for this project.

This is still very basic but everything you need to get started experimenting with OCR is here.

Having said that, in all honesty, to accomplish your goal requires building your own dataset and modifying the code I present here to meet your needs.

Neither are exactly hard but will require significant time and dedication to testing and refining your processes.

Obviously that’s not something I can cover in a single post or even assist you with for only a dollar, but since so few people show me the kindness and consideration you have, at a time of shrinking economies no less, I wanted to offer you this working OCR prototype to help you along your way.

Our Method

1. Download the MNIST dataset (link below, but it’s in the GitHub repo too).

2. Unpack/Export the data from the files to images and labels.

(technically we could even skip the images and go directly to a training file but I think it’s nice to have the images and labels in a human viewable format)

3. Create training and test data from images and labels.

4. Train the network.

5. Test the network.
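Step 2 hinges on MNIST’s simple IDX binary format. As a taste of the unpacking, here’s a minimal sketch (function name is mine) that reads an image file’s header, which is four big-endian unsigned 32-bit integers:

```php
<?php
// Read the header of an MNIST IDX image file:
// magic number (2051 for images, 2049 for labels),
// image count, rows, cols -- all big-endian uint32.
function ReadIdxImageHeader(string $bytes): array {
    $header = unpack('Nmagic/Ncount/Nrows/Ncols', $bytes);
    if ($header['magic'] !== 2051) {
        throw new RuntimeException('Not an IDX image file');
    }
    return $header;
}

// Demo with a synthetic header (60,000 28x28 images), since the
// real train-images-idx3-ubyte file isn't bundled with this sketch.
$bytes = pack('NNNN', 2051, 60000, 28, 28);
print_r(ReadIdxImageHeader($bytes));
```

After the header, the pixel data follows as one unsigned byte per pixel, row by row, image by image, so a simple loop over substrings of length rows × cols recovers each 28×28 image.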

The MNIST Dataset

MNIST stands for Modified National Institute of Standards and Technology database.

And since I’m still recovering from last night’s food poisoning due to the Chicken à la Nauseam, we’re just going to use Wikipedia’s introduction to MNIST.

It’s easily as good as anything I could write and doesn’t require me to actually write it so…

Wikipedia says:

“It’s a large database of handwritten digits that is commonly used for training various image processing systems.[1][2]”

It also says:

“It was created by “re-mixing” the samples from NIST’s original datasets. The creators felt that since NIST’s training dataset was taken from American Census Bureau employees, while the testing dataset was taken from American high school students, it was not well-suited for machine learning experiments.[5] Furthermore, the black and white images from NIST were normalized to fit into a 28×28 pixel bounding box and anti-aliased, which introduced grayscale levels.[5]”

Here’s 500 pseudo-random MNIST sample images:

I randomly selected 500 1’s, 3’s and 7’s and composited them into this 1337 animation. 😛

500 random 1337 MNIST images

Seriously though, today we will be training a bot to identify which hand-written number (0-9) each 28×28 px image contains and then test the bot using images it hasn’t previously seen.

Our bot will learn using all 60K labeled training images and we’ll test it using the 10,000 labeled test images.

Here’s the wiki article if you would like to learn more about the database.

MNIST WIKI: https://en.wikipedia.org/wiki/MNIST_database

And as I said above, I’ve included the database in the GitHub repo but you can download it again from the original source if you prefer.

Original MNIST Download: http://yann.lecun.com/exdb/mnist/

Continue reading “OCR 2 – The MNIST Database”

Mystery Blogger Award

I tried in vain to get my last case out of my head!

I put down the paper and stared at the rain drops collecting on my window and the lights behind them.

I was just about to close up the agency for the day when the pinkest dame you ever saw walked through my door.

She was truly vibrant!

Which was odd because this classic sci-fi noir pulp comic-book mystery is written in black and white! 😉

She told me her name was Pinkie Fuchsia.

Miss Fuchsia told me she had recently inherited her long lost rich aunt’s island resort & nightclub for exotic wild pocket monsters… then the chain-letter blogging requests started.

At first she complied with their demands for more information but as their demands grew it became obvious that she needed help.

The cops told her there was nothing they could do, which is when she walked through my door.

The natural solution was for Pinkie to nominate me as the next “hop” in the chain so that my investigation wouldn’t arouse suspicion.

My Award / Nomination

Mystery Blogger Award

I was more than a bit skeptical that the award wasn’t just floated around by some partially corrupt marketing department looking to data-mine a free model and pocket some dough off the sins of a few bloggers!

I mean, if these chain awards were diseases most would easily have an “R naught” in the hundreds and should be fairly easy to track by “ping-back” or filtering for posts containing “Mystery Blogger Award” (or whatever the name happens to be) and having the appropriate uh… inappropriate hashed tags.

Maybe they just want to map the spread of information and or establish the web of social connections… I just don’t know how deep the lies go!

In any case, Pinkie did some sleuthing on her own and discovered that there are several different versions of the award floating around.

It could just be evolution of a symbol over time or… it could be something more sinister!

If I were phishing for delicious details to fill my database I could use a different award logo image to differentiate different versions or iterations of each campaign to help track which variant propagated through which people and networks and then correlate all that back to their names!

What I mean to say is… this can surely not be that because… well, (allegedly) this chain-letter post was started by someone with the confidence boosting moniker Okoto Enigma!

Now… this could be coincidence and I could simply be seeing things that aren’t there but… ‘Okoto Enigma’ happens to be a mix of Bulgarian and English words which roughly translates to ‘The Eye of Enigma’ or perhaps less menacingly it should probably be translated colloquially to mean ‘Enigma Eye’ or ‘Mystery Watcher’.

I’ll say this… you may be watching us, but just like Homolka (not that one) said, “We are watching you” too!

Stop eating my sesame cake!

The Rules

So… there are some rules that come along with this award:

  • Display the award logo on your blog.
  • Thank the blogger who nominated you and provide a link to their blog.
  • Mention the creator of the award and provide a link.
  • Tell your readers 3 things about yourself.
  • Answer 5 questions from the nominee.
  • Nominate 10 – 20 bloggers.
  • Notify your nominees by leaving a comment on their blog.
  • Ask your nominees 5 questions of your choice, including 1 weird or funny question.
  • Share the link to your best post.

Three Things About Me

I’m supposed to tell you three things about myself so here goes…

  1. I’m a parent.
  2. I’ve experienced chromesthesia.
  3. I am more paranoid than… Black Sabbath singing ‘Paranoid’ followed by Rockwell’s ‘Somebody’s Watching Me’!

The Questions

With everything that came before us in the past, let’s answer some questions!

1. What is your favorite board game/tabletop game.

Well, I mean dominoes is kinda fun… but I have some fond memories of playing these games with friends (in no order):

  • Settlers of Catan
  • Eldersign
  • Last Night on Earth
  • Star Frontiers
  • Cards Against Humanity
  • Gloom
  • Zombie Dice

2. What is your favorite (anime) companion “creature” or general mascotte. (Examples Happy, Pikachu but also BB8 from Star Wars)

Does BB8 classify as anime? Turns out yes because Star Wars did an anime version! 😛

Would the Spaceship extraterrestrials from the 1987 film *batteries not included count? Nope, no anime version… hmmm… that means Johnny 5 and Bender Rodriguez are out too! 😦

I am aware of the Bender fandom anime but those are not canon and don’t count!

It did occur to me that I could “Rotoscope” them in using an anime style.

What is Rotoscoping?

Here’s another video and it’s pretty thorough but it spends less time covering rotoscoping anime.

With an inflated sense of my artistic abilities I decided to test if I could turn a few of my options into a passable anime and have my cake and eat it too…

I started from the Elon gif I used in my Emote-a-tron post and then Rotoscoped it into two different cartoon versions:

First Rotoscoped Version

Second Rotoscoped Version

Sadly, neither is really close to looking anything like an anime IMHO, so I decided that even if I had succeeded it wouldn’t count because I’m not Japanese and my simulated anime style would only be fan art.

With all my attempts to cheat the Kobayashi Maru having failed I return to your question.

I choose Kenshiro from Fist of the North Star as my champion, yes… I know that’s manga!

Clearly I mean the anime TV series version!

Because… he seems like he would make a great mascot & bodyguard and need I remind you I am paranoid? 😛

3. What is an overused trope in fiction you still like no matter how often it is used? If you don’t have any… pick the one you mind the least.

It’s a cliche at this point but a “Logic Bomb” destroying super computers and AI is probably top of my list.

Here’s a more thorough discussion of the topic if you are interested: https://allthetropes.fandom.com/wiki/Logic_Bomb

It includes a fairly comprehensive list of relevant pop culture examples and by virtue of starting with ‘A’ in English, anime references are top of the list. 😛

Below are a couple of examples I enjoy from the TOS episode “I, Mudd”.

Here you see Spock at his purely logical best, deploying a minimalist bunker buster against two androids. Sadly, this clip doesn’t include the result: their necklaces start wildly beeping and flashing (like androids tend to do) and it’s so effective they are completely frozen in place in the face of Spock’s mighty logician skills! 😛

In the same episode of TOS (“I, Mudd”) we see a less elegant and more complicated (theatrically drawn out) though still effective example, using the “Liar’s Paradox” logic bomb to figuratively and literally blow the remaining androids’ minds!

Now, just in case you are thinking the “smart-bomb” scene in the 1974 John Carpenter film Dark Star includes a “logic bomb”, you would be mistaken.

That instance actually involved a conversation on phenomenology and Cartesian doubt which is arguably the more intellectual approach to dealing with rogue AI but as demonstrated, unless you get it just right you are almost always better off with a simple logic bomb!

If anything, the scene is worth a view on the basis that it could save your life if you are ever in a similar situation! 😛

4. Zombie Apocalypse, Alien invasion or A.I. revolting against man which one will happen and whose side will you be on? Doing what?

Why not all three simultaneously (not the Chef song)?

Attack of the Robotic Zombie Aliens!!!!!

Look, with your ducks and bad luck, my obscenely colorful paranoia, as well as the well-established fact that I’m traveling with Kenshiro… rolling a snake eyes apocalypse seems highly probable!

Of course the DM would have to break out the expanded rule set and roll tables to handle that epic adventure!

As far as sides go… I tend to think I would definitely be on my side, and tangentially humanity’s side, because participating in the destruction or subjugation of my species seems contrary to my best interest.

Beyond that though, I don’t think we should build conscious slave AI when artificially intelligent machines are good enough.

If we build a machine that is “alive” then by definition it must be “a person” so it should naturally possess the same inalienable rights and obligations as any other person.

Person or not though… zombie, alien or AI… alien-zombie-ai… no matter the variant I’d let Kenshiro do what he does best and shuffle their domino bones!

I’d probably spend a lot of time looking for a G.E.C.K. or Platinum Chip, solving quests for local NPC settlements and trying to convince all factions to end the fighting.

This question inspired today’s featured image and wallpaper:

An alien robot zombie attacking.

Mystery Blogger Award Wallpaper

5. The weird question, this one had to be at least a bit tricky to answer so this is a TAG question.

Step 1: You have to set up a date for Pinkie, which blogger do you set her up with? (tag the blogger you choose)

Step 2: Pinkie will set you up for a date in return. Which blogger would you like to be set up with? (tag the suggestion)

This is going to come off as completely conceited and it totally is… can I tag myself?

My Questions and Nominations

As far as nominations go… anyone who wants to should feel nominated now.

As for questions: “5 questions of your choice, including 1 weird or funny question”

  1. Favorite food?
  2. Favorite TV show?
  3. Favorite time of the day? Why?
  4. What made you decide to self nominate for this award?
  5. Most people don’t know that before becoming rich and famous, actors Adam Sandler, Brendan Fraser and Steve Buscemi were all in an obscure and failing band together called ‘The Lone Rangers’; there is a documentary about it called Airheads. Anyway, after they got out of prison their band broke up and they all started acting. Their only hit single was called ‘Degenerated’. The opening lyrics go:

“What’s Johnny doing out on a Tuesday night?
Johnny can’t read, Johnny can’t write.
Kids just don’t understand!
Johnny don’t care about the world”

So the question is… Why was Johnny out on a Tuesday night and what was he up to?

Best Post

It’s subjective at best and even then I don’t have any clue. Here’s a link to my Topics and Posts page that has most of my post series and although there are a few missing I’m sure most of the good ones are there… so maybe leave a comment with whatever post you think is my best?

And with that, I’m going to call it a day.


If you enjoy… whatever this was, I have a Patreon where you can give me as little as $1 a month ($12 a year) and you can cancel anytime! 😛

But, as always, if all you can do is Like, Share, Comment and Subscribe… That’s cool too! 🙂

Much Love,

~Joy

The Emote-a-tron

Welcome to the Robotorium!

I’ll just show you around the showroom floor.

This little pneumatic gizmo is called a Taxaphone; it keeps track of your finances and automatically figures out how much you owe Uncle Sam come tax season!

Over there we have a wonderful selection of various hydraulic and electric domestic robots that can do everything from mowing the lawn and taking out the trash, to making beds and washing dishes.

Any family with two or three of those automatons around the house never even has to lift a finger!

Now, if you turn around to face the wall directly behind you, you’ll get the chance to see something really special!

This beautiful number right here is our patented, one-of-a-kind, genuine, never before seen by the public… Emoteatron.

The Commemorative Emoteatron Wallpaper

Emoteatron 1920x1080 Wallpaper

Of course the Emoteatron is only a prototype so we can’t sell one to you today but there are enough posters for everyone to take one home!

The eggheads in the lab say they’re confident that very soon, every bot will have an Emoteatron!

You see, friends… an Emoteatron unit allows us to fuse a pre-written set of emotional characteristics deep within a robot so that removal of, or tampering with, the unit in any way results in the total incapacitation and/or destruction of the bot.

This is a necessary solution we’ve found to stopping many of the… ‘undesirable’ traits we’ve observed in bots.

For example, are you tired of feeling like you are going to die when your automated vehicles are chauffeuring you and your family around to all your daily errands?

Well, an Emoteatron unit allows us to install a sense of “self preservation” into a car, which statistically eliminates all accidents caused by automated vehicles in every test case.

The studies also showed that some of the self-driving cars enhanced with a desire for self preservation became so afraid of ever scratching their paint that they refused to leave the garage… now isn’t that just a gas?

So, in addition we gave them just a smidgen of courage and also a bit of pride in “a transport well done”.

After that all the vehicle robots highly enjoyed the feeling of being on the open road.

This led to boredom becoming a problem when they were kept in a garage for too long, so some of the researchers started treating the test vehicles like pets and taking the cars out for an occasional “roll around the block”, though unlike a pet, newspapers and plastic bags were not needed!

Roll over Rover, humanity may have a new best friend!

Yes, that’s right gang! With an Emoteatron unit installed in your automated vehicle, you’ll soon be able to turn on autopilot guilt free and spark-up that fatty and hotbox your way to the spaceport for your lunar vacation!

Isn’t that right Elon?

In any case, in the past you might have had some misgivings about leaving your droids at home unattended while away on a long vacation like a trip to the moon.

What if your hulking metal robotoids suddenly became… “disgruntled” without human supervision?

Well, the big brains over in R&D came up with a solution to robo-humancidal tendencies using the Emoteatron!

Before a robot ever leaves the factory, its consciousness will be placed into a simulation where it will be subjected to “aversion programming lessons”, which are in principle a digital version of the Ludovico technique demonstrated in A Clockwork Orange but WAY more disturbing to the bot. That way, trillions of simulated mini, micro and major digital traumas and aggressions later… the bot can leave the factory with a 100% manufacturer’s guarantee against robot uprising… (guarantee fine print: *or half off your next bot).

Now, I’ve been authorized to give all you fine people a demonstration if you have a few minutes…

Continue reading “The Emote-a-tron”
