Or should that be, "Digital painting of highly realistic brown rabbit sitting on grass, vivid lighting, bokeh, trending on Artstation"?
Text-to-image is both less than someone might think and has more going on under the hood than they might expect. Blindly entering prompts might get results, but even at this stage of creating AI art there is more creative involvement than that.
You could take a line, put "walked into an art gallery" at one end, and work your way up through "hired an artist and worked with them through several drafts," to "did the pencils and handed it off to an inker and colorist," and "limited myself to three random colors pulled from a box of pastels" along the line. But you can extend way past "painted it from scratch" to "ground the pigments personally" to... "invented a new art form?"
At all stages other things are intersecting. A random defect in the paper looks like a cat curled up and inspires the artist, the art store is out of blue, a friend makes a comment. At all stages in traditional art, one is using what one learned in classes, taking tricks from other artists, opening reference books on anatomy, and using reference images.
At all stages the artist is engaged, using their eye and their skill to react intelligently to what is happening and to shape what they do in response.
AI changes things in...weird ways. It is like hiring an artist who you can only contact briefly through garbled emails, who you suspect doesn't speak your language, who doesn't tell you what they are thinking until their (often unexpected) result shows up in your email...wait, I've just described Fiverr.
I'm not going to talk about the copyright issue at the moment. I am just dealing with the "but is it art, and did you really make it or did you just press the button?" aspect. There's a dirty secret (well, not secret, but often dirty) to popular AI art, and that is that there is a hell of a lot of repainting. Basically, if you see spectacular AI images of the "trending on Artstation" variety, it is because someone with good Photoshop skills spent the same annoying, finicky hours grinding away with the little brushes and the blur tool and all that.
The difference is really where the effort needs to be. Take blending. Blending is a huge pain in the butt in any medium. Lots of dabbing the brush in multiple slight variations of the same color, using just the right amount of water, and going back and back and back and back.
The same thing can be "...easily accomplished by a computer." In the right conditions. With the older tools, like Gaussian blur, it is a one-step operation: make the selection, run math on it. AI instead converges toward a goal, meaning it will blend properly even in the ugly edge areas.
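For the curious, here is a minimal sketch of that old-school "run math on the selection" step, using Python and Pillow (my choice of tool for illustration, not something from anyone's actual workflow; the file names are made up): blur the whole image, then paste the blurred pixels back only where the selection mask says to.

    # "Use the selection, run math on it" with classic tools.
    # Assumes Pillow is installed; file names are hypothetical.
    from PIL import Image, ImageFilter

    base = Image.open("rabbit.png").convert("RGB")
    mask = Image.open("selection_mask.png").convert("L")  # white = selected area

    # One-step math: Gaussian blur the entire image...
    blurred = base.filter(ImageFilter.GaussianBlur(radius=6))

    # ...then composite the blurred pixels back only inside the selection.
    result = Image.composite(blurred, base, mask)
    result.save("rabbit_blended.png")

An AI inpainting pass would take the same mask but regenerate the pixels inside it rather than just averaging them, which is why it can cope with the ugly edge areas.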
It can use the same understanding-by-example to add sheen or dew droplets or bokeh or all sorts of things that are really, really fiddly to do without it.
What it lacks is any constructivist understanding. It can pick up rules based on what was in the images it was trained on. That's it. It "knows" that nuts are found near bolts. It doesn't understand that nuts go on bolts, and that they don't work sideways, either.
So for AI generation of fashion, it gets scary in the way it can put in hyper-detailed wrinkles and stitching and seams and all that. But it has no understanding of how clothes are constructed. It can do a perfect buckle, but the buckle has no function (well...maybe it was trained on Liefeld).
I guess it is a perfect mix for bad steampunk. "Just glue some gears on it."
Sorry...meant to write a post about process and lessons learned, but the preamble went really long.