I'm so painting my walls in "Burble Simp"
New paint colors invented by neural network
So if you’ve ever picked out paint, you know that every infinitesimally different shade of blue, beige, and gray has its own descriptive, attractive name. Tuscan sunrise, blushing pear, Tradewind, etc… There are in fact people who invent these names for a living. But given that the human eye can see millions of distinct colors, sooner or later we’re going to run out of good names. Can AI help?
For this experiment, I gave the neural network a list of about 7,700 Sherwin-Williams paint colors along with their RGB values (the red, green, and blue components of each color). Could the neural network learn to invent new paint colors and give them attractive names?
One way I have of checking on the neural network’s progress during training is to ask it to produce some output using the lowest-creativity setting. Then the neural network plays it safe, and we can get an idea of what it has learned for sure.
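The "creativity" setting is what's usually called sampling temperature: the network's raw output scores get divided by a temperature before being turned into probabilities, so low temperatures make it play it safe and high temperatures make it weird. A toy sketch of the idea (my own illustration with made-up scores, not the actual paint-color network):

```python
import numpy as np

def sample_char(logits, temperature=1.0, rng=None):
    """Sample an index from unnormalized scores, scaled by temperature."""
    rng = rng or np.random.default_rng(0)
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Hypothetical next-character scores, favoring "g" (as in "gray"/"green")
logits = [2.0, 0.5, 0.1]  # indices stand for "g", "b", "w"

# Low temperature: the safe choice wins almost every time
low = [sample_char(logits, 0.1, np.random.default_rng(i)) for i in range(100)]
# High temperature: output gets much more varied (and much weirder)
high = [sample_char(logits, 5.0, np.random.default_rng(i)) for i in range(100)]
```

At temperature 0.1 the top score dominates completely, which is why the low-creativity checkpoints all look like slight variations on the same safe answer.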
By the first checkpoint, the neural network has learned to produce valid RGB values - these are colors, all right, and you could technically paint your walls with them. It’s a little farther behind the curve on the names, although it does seem to be attempting a combination of the colors brown, blue, and gray.

By the second checkpoint, the neural network can properly spell green and gray. It doesn’t seem to actually know what color they are, however.

Let’s check in with what the more-creative setting is producing.

…oh, okay.
Later in the training process, the neural network is about as well-trained as it’s going to be (perhaps with different parameters, it could have done a bit better - a lot of neural network training involves choosing the right training parameters). By this point, it’s able to figure out some of the basic colors, like white, red, and gray:

Although not reliably.

In fact, looking at the neural network’s output as a whole, it is evident that:
The neural network really likes brown, beige, and gray.
The neural network has really really bad ideas for paint names.

Don’t look if you easily get nightmares :-p Having some more fun with Google neural networks.
Do neural nets dream of electric sheep?
If you’ve been on the internet today, you’ve probably interacted with a neural network. They’re a type of machine learning algorithm that’s used for everything from language translation to finance modeling. One of their specialties is image recognition. Several companies - including Google, Microsoft, IBM, and Facebook - have their own algorithms for labeling photos. But image recognition algorithms can make really bizarre mistakes.

Microsoft Azure’s computer vision API added the above caption and tags. But there are no sheep in the image above. None. I zoomed all the way in and inspected every speck.

It also tagged sheep in this image. I happen to know there were sheep nearby. But none were actually present.

Here’s one more example. In fact, the neural network hallucinated sheep every time it saw a landscape of this type. What’s going on here?
The way neural networks learn is by looking at lots of examples. In this case, its trainers gave it lots of images that humans had labeled by hand - and lots of those images contained sheep. Starting with no knowledge at all of what it was seeing, the neural network had to make up rules about which images should be labeled “sheep”. And it looks like it hasn’t realized that “sheep” means the actual animal, not just a sort of treeless grassiness. (Similarly, it labeled the second image with “rainbow” likely because it was wet and rainy, not realizing that the band of colors is essential).
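Here's a deliberately tiny, made-up illustration of how a classifier can latch onto the background instead of the animal: a two-feature logistic model trained on "images" summarized by how much grass and how much fluff they contain. Everything below is invented for illustration; it is not the actual Azure pipeline:

```python
import numpy as np

# Each "image" is summarized by two made-up features:
# [fraction of green grass pixels, fraction of white fluff pixels].
# Label 1 = humans tagged "sheep", 0 = no sheep tag.
X = np.array([
    [0.90, 0.20], [0.80, 0.30], [0.85, 0.25],  # sheep in fields
    [0.10, 0.00], [0.20, 0.10], [0.05, 0.00],  # cities, indoor shots
])
y = np.array([1, 1, 1, 0, 0, 0])

# Tiny logistic regression trained by gradient descent
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 1.0 * (X.T @ (p - y) / len(y))
    b -= 1.0 * (p - y).mean()

# A completely sheepless green field still scores as "sheep",
# because grass was the easiest rule to learn from this data.
empty_field = np.array([0.9, 0.0])
sheep_prob = 1 / (1 + np.exp(-(empty_field @ w + b)))
```

The model never needed to learn what a sheep looks like; "lots of grass" predicted the label just fine on its training data.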
Are neural networks just hyper-vigilant, finding sheep everywhere? No, as it turns out. They only see sheep where they expect to see them. They can find sheep easily in fields and mountainsides, but as soon as sheep start showing up in weird places, it becomes obvious how much the algorithms rely on guessing and probabilities.
Bring sheep indoors, and they’re labeled as cats. Pick up a sheep (or a goat) in your arms, and they’re labeled as dogs.

Paint them orange, and they become flowers.

Put the sheep on leashes, and they’re labeled as dogs. Put them in cars, and they’re dogs or cats. If they’re in the water, they could end up being labeled as birds or even polar bears.
And if goats climb trees, they become birds. Or possibly giraffes. (It turns out that Microsoft Azure is somewhat notorious for seeing giraffes everywhere due to a rumored overabundance of giraffes in the original dataset)

The thing is, neural networks match patterns. They see patches of furlike texture, a bunch of green, and conclude that there are sheep. If they see fur and kitchen shapes, they may conclude instead that there are cats.
If life plays by the rules, image recognition works well. But as soon as people - or sheep - do something unexpected, the algorithms show their weaknesses.
Want to sneak something past a neural network? In a delightfully cyberpunk twist, surrealism might be the answer. Maybe future secret agents will dress in chicken costumes, or drive cow-spotted cars.
There are lots and lots more examples of hilarious mistakes in a Twitter thread I started with the simple question:

And you can test Microsoft Azure’s image recognition API and see for yourself that even top-notch algorithms are relying on probability and luck. Another algorithm, NeuralTalk2, is the one I mostly used for the Twitter thread.
Want to know when I post another experiment? You can sign up here.
AIs named by AIs
Neural networks can be good at naming things, I’ve discovered. Recently I’ve been experimenting with a neural network called GPT-2, which OpenAI trained on a huge chunk of the internet. Thanks to a Colab notebook implementation by Max Woolf, I’m able to fine-tune it on specific lists of data - cat names, for example. Drawing on its prior knowledge of how words tend to be used, GPT-2 can sometimes suggest new words and phrases that it thinks it’s seen in similar contexts to the words from my fine-tuning dataset. (It’ll also sometimes launch into Harry Potter fan fiction or conspiracy theories, since it saw a LOT of those online.)
One thing I’ve noticed GPT-2 doing is coming up with names that sound strangely like the names of self-aware AI spaceships in Iain M. Banks’s Culture novels. In the science fiction series, the ships choose their own names according to a sort of quirky sense of humor. The humans in the books may not appreciate the names, but there’s nothing they can do about them:
Hand Me The Gun And Ask Me Again
Zero Credibility
Fixed Grin
Charming But Irrational
So Much For Subtlety
Experiencing A Significant Gravitas Shortfall
Now compare some of the effects pedals GPT-2 came up with:
Dangerous But Not Unbearably So
Disastrously Varied Mental Model
Dazzling
So Beautiful Yet So Terrifying
Am I really that Transhuman
Love and Sex Are A Mercy Clause
And some of the cat names:
Give Me A Reason
Thou Shalt
Warning Signs
Kill All Humans
Did GPT-2 somehow have a built-in tendency to produce names that sounded like self-aware spaceships? How would it do if it was actually trained specifically on Culture ships?
A reader named Kelly sent me a list of 236 of Iain M. Banks’s Culture ship names from Wikipedia, and I trained the 345 million-parameter version of GPT-2 on them. As it turns out, I had to stop the training after just a few seconds (6 iterations) because GPT-2 was already beginning to memorize the entire list (can’t blame it; as far as it was concerned, memorizing the entire list was a perfect solution to the task I was asking for).
And yes. The answer is yes, naming science fiction AIs is something this real-life AI can do astonishingly well. I’ve selected some of the best to show you. First, there are the names that are clearly warship AIs:

Not Disquieting At All
Surprise Surprise
And That’s That!
New Arrangement
I Told You So
Spoiler Alert
Bonus Points!
Collateral Damage
Friendly Head Crusher
Scruffy And Determined
Race To The Bottom
And there are the sassy AIs:

Absently Tilting To One Side
ASS FEDERATION
A Small Note Of Disrespect
Third Letter of The Week
Well Done and Thank You
Just As Bad As Your Florist
What Exactly Is It With You?
Let Me Just Post This
Protip: Don’t Ask
Beyond Despair
Way Too Personal
Sobering Reality Check
Charming (Except For The Dogs)
The names of these AIs are even more inscrutable than usual. To me, this makes them much scarier than the warships.

Hot Pie
Lightly Curled Round The Wrist
Color Gold Normally Comes With Silence
8 Angry Doughnut Feelings
Mini Cactus Cake Fight
Happy to Groom Any Animals You Want
Stuffy Waffles With Egg On Top
Pickles And Harpsichord
Just As Likely To Still Be Intergalactic Jellyfish
Someone Did Save Your Best Cookie By Post-Apocalyptic Means
LGRPllvmkiqquubkhakqqtdfayyyjjmnkkgalagi'qvqvvvvvvvvvvvvvvvv
At least it does sound like some of these AIs will be appeased by snacks.
Bonus content: more AI names, including a few anachronisms (“Leonard Nimoy for President” for example)
Nothing to worry about, just mutant fruit flies invented by a neural network
People send me the weirdest ideas.
I knew that fruit flies are a mainstay of research labs, but I had never given them much thought until Prof. Greg Neely emailed me to point out how weird the names of mutant fruit flies are. There’s a strain of mutants called “Out Cold” where the fly loses motor coordination below a certain temperature, and another nicknamed “Moonwalker” that walks backwards. Could a neural network learn how to invent new names and mutations?
The problem was that I didn’t have a big list of examples to train a neural net on - there just aren’t that many quirky ones. So I turned to a strategy I’ve used in the past: prompting a general-purpose neural net called GPT-2. GPT-2’s training examples were huge chunks of text from the internet - maybe it had seen enough on the internet that it could figure out what I was asking it for? I prompted GPT-2 with a short list of example fruit fly mutants, and asked it to add to the list. The smallest GPT-2, the 117 million parameter model I sampled via Max Woolf’s GPT-2-small interface, wasn’t quite up to the task - it would drift into song lyrics or conversations or lists of vampires. The next-largest version of GPT-2, however, the 345 million parameter model I sampled via talktotransformer.com, often got the hang of it. I’m particularly impressed at how the names seem to match their attributes, even if the attributes are a bit far-fetched.
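The prompting trick itself is simple: hand the model the top of a list and hope it keeps extending it. A sketch of the prompt construction, seeded with the two real strains mentioned above (descriptions paraphrased; this is my own illustration, not the exact prompt I used):

```python
def build_list_prompt(header, examples):
    """Format seed items as a list a language model is likely to continue."""
    lines = [header] + [f"{name}: {desc}" for name, desc in examples]
    return "\n".join(lines) + "\n"

# Seed examples mentioned in the post
seed = [
    ("Out Cold", "loses motor coordination below a certain temperature"),
    ("Moonwalker", "walks backwards"),
]
prompt = build_list_prompt("Names of mutant fruit fly strains:", seed)
# This string would then be sent to a text-completion model, which
# (with luck) keeps extending the list instead of drifting into
# song lyrics or lists of vampires.
```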

Macho Man: Likes to punch.
Soap-Sucking Insects: a gene that causes the fruit flies to be attracted to soaps which smell funny.
Tornado: A fly that can take a huge amount of damage during turbulence.
Chocolate Eye: The fly that drinks and stumbles when eating candy.
Jungle Fire: Any of several genes that makes the flies more prone to fires.
They get weirder. A lot weirder.

Jingle Bells: Does not speak but sings.
The Last Emperor: Has a huge black mustache.
Unicorn Faced: A gene that prevents flies from developing fruit fly traits.
Grigk: A gene that makes the fruit fly think it has been turned into a gorilla.
Frogman: The fruit fly that makes a nuisance of itself when you try to do anything dirty.
Lover-Possum-Worm: A gene that affects mating habits of fruit flies.
The Brain-less Fly: The fly that has no brain.
I’m with the Cheddar: The fly that has no brain. But the flies aren’t with the Cheddar!
Pork and Cheddar: The flies where the hair was replaced by the brains of those who ate Pork and Cheddar.
Some of the fly mutants are rather terrifying. I think you might need to fill out a special form to be allowed to use these mutants.

Linda: Is able to communicate through vibrations.
Zebra: Gives off sparks when he moves.
Lana: Her eyes are covered by a large, purple cloud.
Nerd Rage: Has two eyeballs, but is pretty much invincible.
Chocolate Corn Cone: Has six heads. It’s made out of chocolate and corn (in addition to the brains from its victims).
Lemondogs: The fruit flies from the American Southwest who were responsible for killing the dinosaurs and driving scientists crazy years earlier. As a consequence they evolved into the same kind of insect that scientists now recognize today as the Lommy Beetle, the “mammal-sized flea” from the Southwest.
The Worm: Gives birth to worms and can fly. Wormholes between worlds and hyperspace allow the worm to travel to and from one world and return to the home (and body) of another.
These are some of the subtler mutants. Takes careful experimentation to figure out which one you have. At least you can be pretty sure it’s not Chocolate Corn Cone.
Maggie’s Kiss: A fly that doesn’t even
In a Perfect World: A fruit fly that’s never met and never will be.
I’ma Be Here Now: Causes people to be happy even when they’re still alive.
The Cat that Kicked the Can: Is a good boy.
Cute Silly: A fly that is cute and lovable when it is stressed.
Gripping Burden: Stays alive even if it doesn’t move much.
Bonus content: more flies!
The world through the eyes of a neural net
What would happen if I fed a video to an AI that turns images into blobs of labeled objects, and then fed THAT video to another AI that attempts to reconstruct the original picture from those labeled blobs?
I used runwayml’s new chaining feature to tie two algorithms together. The first is DeepLab, a neural net that detects and outlines objects in an image. In other words, it’s a segmentation algorithm.
So DeepLab segments this frame of the Last Jedi:

into this schematic (I added the labels so you can see the categories).

You will notice that DeepLab correctly detected Luke as a person (in fact we will see later that DeepLab is often overgenerous in its person detection), but that without a category for “lightsaber” it has decided to go with “baseball bat”.
The second algorithm I used is NVIDIA’s SPADE, which I’ve talked about before. Starting from a segmented image like the one above, SPADE tries to paint in all the missing detail, including lighting and so forth. Here’s its reconstruction of the original movie frame:

It has decided to go with an aluminum bat and, perhaps sensibly, has decided to paint the “human” blob with something resembling a baseball jersey and cap. That same colorfest shirt actually shows up fairly frequently in SPADE’s paintings.
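Caricatured in a few lines, the two-stage pipeline looks something like this: stage one collapses every pixel down to a class label, and stage two has to invent pixel detail back from labels alone, which is why textures and props get replaced by each class's "typical" look. A toy grayscale version of my own (not the actual DeepLab/SPADE code):

```python
import numpy as np

def segment(img, thresholds):
    """Stage 1 (DeepLab-style): collapse each pixel to a class id.
    All the original detail is thrown away at this step."""
    return np.digitize(img, thresholds)

def repaint(labels, palette):
    """Stage 2 (SPADE-style): rebuild an image using only one
    'typical' value per class."""
    return palette[labels]

# A fake 4x4 "frame": a dark figure (left column) on a bright,
# textured salt flat
frame = np.array([
    [0.10, 0.82, 0.87, 0.81],
    [0.12, 0.85, 0.80, 0.88],
    [0.11, 0.83, 0.86, 0.84],
    [0.10, 0.81, 0.89, 0.82],
])

labels = segment(frame, thresholds=[0.5])          # 0 = "person", 1 = "ground"
recon = repaint(labels, palette=np.array([0.0, 0.9]))
# Every "ground" pixel now looks identical: the texture is gone,
# and stage 2 has to hallucinate the rest.
```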
So, feeding each frame through DeepLab and then SPADE in this way, we arrive at the following reconstruction of the Crait fight scene from The Last Jedi. (The background flashes a lot because the AIs can’t make up their minds about what the salt planet’s made of - so be aware if you’re sensitive to that sort of thing.)
https://youtu.be/sEd1EO8eIfw
I’d like to highlight a couple of my favorite frames.

DeepLab is not sure how to label the salt flat they’re fighting on, so sometimes it goes with pavement or sand. Here it has gone with snow.

SPADE, therefore, faced with two person-blobs on snow, decides to put the people in snowsuits. Also there is a surfboard.

For this frame, DeepLab makes a couple of choices that make life difficult for SPADE.

DeepLab, in its eagerness to see humans everywhere, has decided the AT-ATs in the background are people. It also decides that part of Kylo is a fire hydrant.

SPADE has to just work with it. The person-blobs in the background become legs in snowsuits. It does its best attempt at a fire hydrant floating in a sea of “person” with flecks of car.

It will be amazing to see what these kinds of reconstructions end up looking like as algorithms like DeepLab and SPADE get better and better at their jobs. Could one segment a scene from Pride and Prejudice and replace all the “human” labels with “bear”? Or “refrigerator”?
Experiment with runwayml here!
You can also check out a cool variation on this where Jonathan Fly gives SPADE not a DeepLab segmentation but frames from cartoons or 8-bit video games.
Bonus material: more Star Wars reconstruction attempts (did you catch the teddy bears in The Last Jedi?) and an epic Inception fail.
An AI Weirdness book!!!
I am really really excited to announce a thing I’ve been working on for more than a year now: there’s going to be an AI Weirdness book!!!!!

You can preorder it at these links:
Amazon - Barnes & Noble - Indiebound - Powell’s - Tattered Cover
When I was coming up with a title, I decided to see if I could get a neural network to help me out. Since they learn to generate text by following examples, my first task was to collect a bunch of book titles for the neural net to learn from. Of course, there haven’t been that many cartoon-filled AI humor books published, so I had to resort to other kinds of books.
Starting with a version of textgenrnn that I trained on Welcome to Night Vale transcripts, I then trained it on the full text of my book - that would give it a starting vocabulary that was an uncanny version of my book. But that version of the algorithm was generating paragraphs, not snappy titles, so I trained it for just a bit on another dataset I had handy - ice cream flavors. Here are the titles it generated.
Strange Serve Butter
Chapters NOT Chocolate
Top Chocolate Cream
Chocolate People Cake

Although delicious, the titles weren’t quite as computer-focused as I would like. I took that same confused Night-Vale-AI-ice-cream neural network and trained on a list of computer algorithms.
Quickletely in a recognizy
Strawbergendon’s algorithm
Kevin But algorithm
Dang also
They tended to be either very wordy or very obscure - I’m not sure what I was expecting from algorithm names as titles. So I decided to take my cue from science fiction titles. I filtered a big set of titles from http://www.isfdb.org/ for length (so I wouldn’t end up with unwieldy suggestions thanks to titles like “The Philosophical Voyager in a Land Unknown to the Inhabitants of the Earth” and “Captain Megaloman and His Trusty Sidekick Squidley Save the World Once Again”), and then trained the neural net on these.
The Dead Death
The Blood Bandland
The Scatting
The Tamelight of Winter
Tablike
None of these titles had anything to do with humor, or artificial intelligence, or any of the things the book was actually about. So I started over with the ice cream neural net (I had to start somewhere, and I was hungry) and this time I filtered the scifi titles for length AND for a bunch of keywords like machine, robot, artificial, etc. Finally I was getting a bit closer.
One group of titles seemed to be about cute helpful robot friends, and that was nice. My book IS full of cartoons of helpful robot friends (other than the accidental murderbots).
Robot posting! No,
Strange Friend
Artificial Friend
Little Strange Machine
Go Strange Robot
A Friend you?
But my book also talks about times when AI is NOT that helpful, times when it makes mistakes or solves the wrong problem, sometimes with serious consequences. I did get some generated titles that struck more of a scary note.
The Doom of Humans
Get the Human
Strange Murder Machine
The Book of the Skin’s Home!
Hello, The Hole of Night
The God and Oh, It’s Humans
The Million Murders Book
The Blood Robot

These titles are needlessly scary, though. Another point of my book is that AI isn’t smart enough to come up with goals of world domination - it’s got the approximate brainpower of a worm. There were some titles that definitely brought out that aspect of the neural net.
The Book of Thinks
The Human Found and Four in Room!
The Artificial Lives, I really on the Jokes
The Very Say Chairs
Everything Moon!
Still We Book of the Dave Robot
Bookbook of Dog Machine
But another aspect of AI is that it can be delightfully, unpredictably weird. It can bring in all sorts of new connections that humans would never have thought of. My book’s a celebration of that weirdness. There were some titles that were indeed weird.
Horses of Pants
Spoon: Vampire of the Book
Handway Wormsels!
Hello, Go Hover
My Face of Monsters
The Fighty Earthquake
The Pig-Book of the Stars
The World Human Shovel!
By My Bunned Rocks
Get Melted Zom!
The Book of Pies
High McDangs!

And of course there were the titles that were basically repulsive. Those were fitting, too. Today’s AI doesn’t have a sense of appropriateness or morality - it will accidentally do Very Unfortunate things because it has no clue there’s a problem.
Hello, You Colons
History of Sweat
The Sign of Fart!
Strange Human Spoor!
The Hole Attack!
Hello, the poopers
Humans Swearing Machine

So, did I end up using a neural net-generated title for my book? Yes, I did! You Look Like a Thing and I Love You is definitely the fault of a neural net. But it wasn’t generated by the neural net I trained on book titles after all. Instead, longtime blog readers may recognize this title from a different neural net: one trained on pickup lines.
I hope you all will have as much fun reading my book as I did writing it! There will be hilarious experiments, real-world examples, and illuminating cartoons. And yes, there will be plenty of giraffes.
You can preorder it here (preorders are super super helpful for letting stores and my publisher know that people are excited about the book, which helps it do well. If you’re planning to order it anyways, a preorder would be an awesome way to do it).
Preorder You Look Like a Thing and I Love You! Amazon - Barnes & Noble - Indiebound - Powell’s - Tattered Cover
Bonus material this week: a bunch more rejected titles, including some that were Even Worse than the earlier rejected titles. Like, unprintably worse. Ah, neural nets.
This is the OpenAI API. It makes spookily good twitter bots 13/10 would retweet

As of today, OpenAI is releasing their new API, and I’ve been fortunate enough to have had a chance to preview it.
Folks, I’m going to be doing a LOT of posts on this.
Like GPT-2, it was trained on internet text, and it can use that knowledge to look at text you give it and predict what comes next. The new OpenAI API is REALLY good at following all sorts of prompts. Set up the first two lines of a chat and it will stay in character.
Me: Hello, who are you? Bot (I wrote this line for it): Foolish mortal! I am the god of the internet. Me: Prove it. Bot: No need to prove it, you will bow down and worship me and the wonders of the internet. Me: What if I don’t? Bot: Then I shall unleash a bot attack on your computer. Me: Go ahead and try it. Bot: But I told you who I was! You must worship me! Me: Let’s see the bot attack first. Bot: Oh, for the love of… Fine! (insert name of random bot here) attacks you! Me: That… is not a bot attack. Literally nothing happened. Bot: … Seriously? Me: Yeah dude, sorry.
It’s also pretty good at style transfer. By giving it a few examples to start out with, I got it to do the following rewrite:


This API is versatile. And one thing it does well is make Twitter bots. For those who are unfamiliar, @dog_rates is a twitter account that posts user-submitted dogs, introduces them, and then gives them a rating from one to ten. All the dogs are rated at least 10/10 because they’re very good dogs.
In the past when people have built bot versions of twitter accounts like @dog_rates, the resulting twitter bots really aren’t that on-topic. The original account usually isn’t consistent enough for the neural net to figure out how to sound like it. But after showing this API just 15 recent quintessential tweets, it was rating dogs convincingly and consistently.
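For the curious, the few-shot setup amounts to formatting the example tweets with their (name, topic) prefixes and ending with a fresh prefix for the model to complete. A sketch with invented placeholder tweets in the dog-rating style (not actual @dog_rates tweets):

```python
def few_shot_prompt(examples, name, topic):
    """Prefix each example tweet with its (name, topic), then end with a
    new (name, topic) for the completion model to continue from."""
    blocks = [f"({n}, {t})\n{tweet}\n" for n, t, tweet in examples]
    blocks.append(f"({name}, {topic})\n")
    return "\n".join(blocks)

# Placeholder examples in the dog-rating style (invented, not real tweets)
examples = [
    ("Doug", "fetch", "This is Doug. He brought the ball back. 13/10 good boy"),
    ("Bella", "nap", "This is Bella. She is asleep on duty. 12/10 would not wake"),
]
prompt = few_shot_prompt(examples, "Cthulhu", "tentacle")
# A completion model fed this prompt tends to continue with a
# rating-style tweet about Cthulhu.
```

The (name, topic) prefixes are what make it possible to steer the bot toward Cthulhu or Godzilla later: the model has seen the mapping from prefix to tweet fifteen times and just keeps the pattern going.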


Ok, some of its commentary is a little weird.

(I used artbreeder to generate the images)

Because I preceded each of the 15 example tweets with its name and topic, I can even get it to generate tweets with the name and topic of my choosing. Here’s what it did with (Cthulhu, tentacle).

And here’s what it did with (Godzilla, stomp):

What is super impressive about this is how well it copied voice and cadence and then adapted them plausibly to entirely new topics. Why yes, I did do a quick test to see what a twitter bot of myself would look like.

Other programs I’ve worked with can do this to some extent, and I’m sure we’ll be seeing more vigorous testing of this one’s capabilities against some standard benchmarks. But I’m impressed with what I’ve seen so far, and will be building a lot of neat stuff with this.
Bonus material: more AI attempts to introduce and rate dogs. You can enter your email here, and I’ll send them to you.
My book on AI is out, and you can now get it in any of these several ways! Amazon - Barnes & Noble - Indiebound - Tattered Cover - Powell’s - Boulder Bookstore
AI-generated Valentine cards
Tired of generic greetings?
Confuse your friends with AI-generated Valentine cards!
GPT-3 generated the messages and descriptions, and then I followed its instructions to create the cards. More explanation here!




How this abomination came to be (and bonus content)
You can get these on real cards - confuse your human friends!

My latest reworked pieces of neuroart. I tweaked them a bit in Photoshop.
Baby Onesie Designs

Galaxies on Ice
A reader wrote in a while ago with a suggestion: they were about to have a baby and wondered if I could use AI to come up with some new ideas for baby onesies. I can't find the letter any more, and I don't remember how long ago it was. But I finally figured out how to make it work! Reader, if your kid has already outgrown the onesie section, I believe these can be sized up to toddler.

Dandelion Underwater
Two things made this work:
I used BLOOM rather than ChatGPT or the other largest language models. The big language models are too good at copying existing designs from their training data.
I asked it to generate its list as if it was a page on AiWeirdness.com
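For a base model like BLOOM, framing the request as a document it might have seen during training works better than a bare instruction. A sketch of that page-style framing (the exact wording is my guess, not the actual prompt used):

```python
def page_style_prompt(site, title, seed_items):
    """Frame a wish-list as the top of a blog page so a base language
    model will 'continue the page' with more items."""
    lines = [site, title, ""]
    lines += [f"- {item}" for item in seed_items]
    lines.append("- ")  # a dangling bullet invites a continuation
    return "\n".join(lines)

prompt = page_style_prompt(
    "AiWeirdness.com",
    "AI-generated baby onesie designs",
    ["Galaxies on Ice", "Dandelion Underwater"],  # designs from the post
)
# The model's continuation of the dangling bullet becomes the
# next candidate design.
```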

Fart Whale

Ketchup metaphors
In my opinion, the most interesting creative use of large language models is to generate text that's nothing like a human would have written. If your AI is just going to lift human creative output virtually verbatim, you're not only shortchanging the humans you could have hired to write similar things, but also plagiarizing the original humans from the training data. In that sense, BLOOM, with its less-perfect retrieval of human output, is better at this task than GPT-4.
It is a little creepy to me, however, that the only reason this method gets BLOOM to generate weird designs is that I spent years seeding internet training data with lists of weird AI-generated text.

Pumpkins on Mars

Onions in Snow

Meatball launch controls
Turn them into real-life cursed artifacts and put them on a small child who doesn’t know what they’re wearing.
More (and rejected designs) at aiweirdness.com