BLATHER

AI Art — Is It of the Devil?

OF COURSE IT IS! I joke. Sort of. But who doesn’t freak out a little when AI sets up shop in their line of work? It did not give me warm fuzzies to learn that I (or anyone) could type a simple prompt into a textbox and a few seconds later be presented with an image more technically adept than anything I could have created on my own.

For a while, I pretended it wasn’t there. I avoided reading about it, looking at examples, or experimenting with it. But I had a coworker who enjoyed emailing our group random pics he generated with Bing Chat, and eventually, I played around with it, too, feeding it outrageous and weird prompts, just to see what monstrosities would pop out.

I was put off when I discovered that it could only do what it does because it was trained by feeding it millions of images (many of them copyrighted) scarfed up from the Internet without permission. That seemed … wrong. On the other hand, aside from the sheer scale of it, how is that any different from what artists and designers have always done when they study the works of other artists and designers for inspiration, ideas, and techniques?

Don’t you just hate it when things are no longer black and white?

Have I gleefully embraced it? Not exactly. I’m not entirely comfortable surrendering yet another skillset to a machine, and I’m still apprehensive about where it will all lead, but … technology marches on.

In addition to my attempts to coax weird and creepy imagery out of it, I’ve used AI to generate images for a couple of actual projects. I used it to create a portion of the graphics that are laser etched into my Eurorack case. And I took a shot at “collaborating” with AI to create an image to possibly print as a limited edition. Both were generated using simple one-sentence prompts to Bing Chat (which uses an implementation of OpenAI’s DALL-E in the background).

For the print image, I asked Bing Chat to create “a photorealistic human/reptile hybrid child.” I got back an impressive lizard boy on a black background. In a separate prompt, I asked for “a backdrop of Virginia Creeper.” What came back was not Virginia Creeper, but I liked it well enough. In Affinity Photo, I extracted the lizard boy from the black background, blurred and altered the coloring of the flowery background, and then placed the lizard boy on top of it. I drew the starry halo, painted the green reflections from the halo onto the boy’s skin/scales, gave him green eyes, embedded the jewels in his forehead, and tattooed the constellation on his chest.

When it was completed, did it feel like my artwork? Not really. DALL-E did all the heavy lifting — and from simple prompts at that. I just embellished it. Maybe “commissioning” rather than “collaborating” would have been a more accurate word to use earlier.

There are more full-featured tools like Midjourney, which I haven’t used (mostly because I haven’t wanted to cough up the money). I believe Midjourney accepts more complex and iterative prompts. Maybe if I felt more in control of what was being generated, I would have more of a sense of ownership of the final product.

At any rate, lizard boy, or as he is more formally known, Silurian Boy, says “Hi” below.

Silurian Boy