The morality of AI art.
The subject has complexities, but it isn't anywhere near as murky as the people trying so hard to sell AI to the consumer are making it out to be. Admittedly, whatever the core issues are, they are hard to untangle from discussions of the purpose of art, the economics of art, the process of art, and empowerment.
Take the last. Art is hard, and talent doesn't strike all of us, but even those with talent need to have time and -- for many arts -- finances to pursue it. It is far from impossible to go from the wrong side of the tracks to the concert hall, but there is no denying that it is harder than it would be for someone who went to properly-funded schools, had parents who could afford tutors, could afford a decent student instrument, etc. (Don't get me started on the heartbreak of Violin-Shaped Objects.)
There are always gatekeepers who, when art is democratized, cry out that it is being debased. Cutting loops in Ableton "isn't real music." And AI makes it so easy to cry "they didn't make art -- they just pushed a button."
But on the other side, there's a difference between enabling people who might not otherwise be able to pursue art, and selling the illusion of making art in order to make a profit. Not exactly peculiar to AI, this. Any hobby you name is almost instantly overrun by people looking to sell the hobby to you (in the form of "must-have" tools you were getting along just fine without, and so on). AI, viewed this way, is a digital paint-by-numbers kit.
The potential customers aren't the only ones buying the illusion, either. Artists are right to be concerned, just as musicians were when they began to be replaced wholesale in certain fields. There is always the economic drive to replace what is good but expensive with what is good enough. And that's a race to the bottom, as the current "good enough" soon becomes the "good but expensive" and the search is on for something else...
As long as the intended audience will accept it, and that is one of the fears. Flood the marketplace with "good enough" and keep it there long enough, and the public will lose the ability to tell the difference. That's what the Académie des Beaux-Arts was afraid of...but what they tried to keep out, and failed to, were the Impressionists.
I think the public is canny enough to keep looking for better, if given the economic opportunity. For all the cheap schlock, there remains a paying audience. For all the fast food, grocery stores, markets and restaurants aren't going out of business.
Enough of the public has learned to despise quick-and-dirty AI art that the economics have forced many of the big art sites to take steps to control the flood. Which leads into a discussion of the value of gatekeepers to the consumer. Self-publishing, for instance, exploded. There are so many self-published works that the costs to place your work are going up, and readers are complaining about the difficulty of finding anything.
Self-publishing did flirt with AI. It got smacked down by Amazon the same way that every other get-rich-quick scheme that tried to use their site did (like low-content books). Writing is in a weird position as it seems to promise wealth and fame, but it is currently difficult for software to do the work for you. The people selling dreams to would-be authors are selling books on how to write, world-building software, and services.
And outright scams, because vanity publishing remains the best money-maker.
Not to say people haven't tried AI, but there are no easy riches in publishing. It is a buyer's market, and despite how much effort individual writers may be putting in, to the greater market their labor is cheap; so cheap there isn't a need for an alternative.
AI art, meanwhile, is having trouble pulling in money outside of flash-in-the-pan niches like monetized YouTube slide shows. The challenge is similar. The world has no lack of hungry artists willing to work cheap; all that AI can offer is its novelty (the surface gloss, largely) and volume. And the latter is self-defeating, just as it is in publishing.
Which slides into another problem with democratization. The big players in AI art have expensive rigs, and spend a lot of time at it. More and more, they are looking less like artists than like bitcoin miners. Right down to how critical a high-end video card is.
And that looks like a segue into "but are you really being an artist." I believe that those slide-show makers are prioritizing pushing output. AI is in a peculiar place that might be inherent, or might be the current circumstance of technology -- and I am biased towards the latter. Right now, the way to make money is to push out a bunch of art before the market gets over-saturated (too late!), and the way to push out art is not by being an artist but by pushing buttons on a powerful and expensive rig.
Exactly the model all those salespeople want. "You too can be an artist...if you drop a thousand bucks with us on the right graphics card."
So here's the thing. We talk up freeing the inner artist from their inability to hold a pen or afford paints, but much more importantly, from the need to have a liberal arts education and time spent in traditional art classes.
But the infinite world of possibilities is...smaller than it appears. This has always been so in the arts, I hasten to add. An artist who wants to be seen or heard uses the modes and forms that are currently understood by the audience. There are always those (like those Impressionists) who are fighting to get something different accepted by a potential audience, but economically this is at best a gamble. The market reality is "do what everyone else is doing."
Technically an AI art engine can create anything; in practice, the people using it are narrowing the existing constraints of the training data even further in order to pursue the flavor-of-the-month and get those eyeballs they crave.
Let me explain in a little more detail. The training data was what the original academic researchers could scrape off the internet. Which means that a well-known painting is more represented than the output of an outsider artist. Meaning the engine is already primed to regurgitate the "look" of current media (which itself is feeding off itself, looking to other shows or other advertisements or other book covers as to what the audience is primed to expect).
It is very, very focused on what is common and normal in mass visual media. Poses, for instance, trend towards the presentational. "Showing off the new winter jacket" is the pose a figure will take even with a heavily weighted prompt that attempts to put them in an action pose.
As I said, many artists in those social media circles where popularity rules are going after what gets eyeballs. To focus in on that flavor of the month (or, more benignly, on whatever personal vision they are pursuing), the tool of choice is the LoRA (and checkpoints, and embeddings...but let's keep it simple).
And this is the thing. The academics who trained the original models had some shred of honesty and did their best to anonymize by using as much data from as many sources as possible. LoRA are more tightly focused. When a young artist thinks "I want to make stuff that looks like Masamune Shirow," they are drawn to a LoRA that was trained specifically on that artist. On a small number of works. Overtrained on them. So much so that, given the right prompts, it can and does recreate enough of a specific image that you can recognize it.
Again, this is implicit in the concept of the training data. Tell the AI to give you a Florentine woman with a mysterious smile and you could get anything. Tell it to give you the Mona Lisa and what you get back will be recognizable as Leonardo's painting. But there's a difference between training on a million images and training on as few as six (some LoRA are that small). In the former, you get a guy in a jacket, but it isn't a recognizable individual guy or brand of jacket. In the latter...you might get one of the six images back, complete in far too many details.
(It gets worse. Some LoRA go right out and say, "For best results use this image." That is, to base the new, supposedly unique image on an actual specific piece of art. And not with text prompts as in the Mona Lisa example above, but by, basically, taking that image, blurring it slightly, then reconstructing it with AI. But more on this when I post about the inpainting process.)
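To make that mechanism concrete, here is a minimal sketch of the LoRA-plus-source-image workflow, using the open-source diffusers library. The model identifier, the LoRA filename, the prompt, and the strength value are all stand-ins I've picked for illustration, not a recipe from any specific case.

```python
# A minimal sketch of the workflow described above, using Hugging Face's
# diffusers library. Model name, LoRA filename, and strength value are
# illustrative stand-ins.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Load a base Stable Diffusion checkpoint.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Layer a narrowly trained style LoRA on top -- hypothetically, one
# trained on a single artist's small handful of published works.
pipe.load_lora_weights("./loras", weight_name="artist_style.safetensors")

# Start from an existing image rather than from pure noise.
source = Image.open("reference.png").convert("RGB").resize((512, 512))

# Low strength keeps the output close to the source image; high
# strength gives the model more freedom to reinvent it.
result = pipe(
    prompt="woman with a mysterious smile, oil painting",
    image=source,
    strength=0.3,
).images[0]
result.save("output.png")
```

Dial strength up toward 1.0 and the model gets more freedom to reinvent; leave it low, as here, and what comes out is largely a reconstruction of what went in.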
The social art world runs on fads of the moment, and the successful focus is hyper-narrow. The original inspiration is clearly seen. Basically...this is digital fanfic. I mean, there are lists of prompts for the hopeful new AI artist that are the names of other people creating AI artwork.
(This is really nth-generational stuff. It requires LoRA that are trained on the output of artists who were probably already using the same...)
So while the AI proselytizers are going on about how stealing from a million anonymized images isn't really stealing, the practical reality is far from that case. Yes, the big commercialized online engines are filtered now, with long lists of banned prompts that can't be used anymore, including the names of public figures, the names of artists, and even the names of some art styles. But they are only part of the picture -- and the users are really, really good at finding the loopholes, because the data is still there. They just changed the names to make it harder to find.
Is this, though, different from being inspired by the style of another artist, or even the movement they have begun, and doing work in the same way? How does the specific kind of work done in making this derivative matter? Or is it the nature of the link between them?
From one perspective, AI is absolutely stealing the original because that original was fed into the computer. From another perspective, it has been digitally shredded in a way that makes it impossible to reproduce it exactly. No matter how close the AI reproduction may appear to human eyes, the pixel patterns are not the same.
Is a copycat more ethical than a straight clone? Is the fact that on a pixel level, down at the digital heart, it isn't actually the same an important distinction, or is this just a fancier way of filing off the serial numbers and selling it as unique? There are people flipping, cropping, or blurring clips from movies so they can post them on YouTube without getting caught by the automatic check for copyright violations. Is this really, substantially, different?
And does it matter if the creator could have painted it from scratch themselves? Does it make it better if they are a skilled artist in their own right? Does this have to be the same skill set as the original? Does it make it worse if they were "pressing buttons," that is, doing things that don't look like how we conceive of the process of creating visual art?
Because your average "hand painted" art these days is done on a screen with a hell of a lot of computer assistance. And with resources which are not original. And some of those resources might not be paid-for commercial stock or copyright-free (cough Greg Land cough).
On the third hand, is it somehow worse if a forger is skilled enough to have made original art, and chose not to?
This gets really tangled, because all the way across the art world, homages, training by copying, working with a mentor in their studio, doing cover versions...this is all how artists learn. And so very many of the good artworks are part of a dialog: Vietnam vet Joe Haldeman reacting to Robert Heinlein's jingoistic Starship Troopers and being moved to write The Forever War. Saint-Saëns spoofing themes from Offenbach and Mendelssohn in his Carnival of the Animals. Generations of artists using the pose from Michelangelo's Pietà.
I mean, look, I'm currently working on a novel that is consciously and openly using "used furniture." Something that is meant to be recognized as retro. The characters and background are being carefully crafted to remind the reader of things they know (or think they know; the thing about retro nostalgia is that so much of it is rooted not in a deep understanding of the original, but in an exposure to other people's distillation of those elements they find most cliché).
Thing is, though, it is arguable that AI artists are not having a dialog -- because they aren't engaging with the material personally and at that level. They are dancing about architecture; they are entering text instructions to a computer for it to make a mindless reconstruction of what it thinks is happening in the original work.
Perhaps. It is certainly true that one can go to a model that other people are recommending and copy a list of prompts from some forum, push the button and sit back. But I think that even the most production-oriented, assembly-line artist has that urge to chase their own vision. It is difficult not to engage your aesthetic senses. And there are functional choices that can be made at every step; all the way down to picking which generated image to up-rez and post and which to throw out.
For many, they are engaging with the image itself, on the terms of visual art. Adjusting the composition with their internal sense of aesthetics and whatever understanding they have of traditions. Discarding or altering poses and hands and musculature because they understand anatomy in the way a practicing artist does. Perhaps not as deeply, but not every artist has those years of figure drawing behind them. And, even, choosing prompts because they have some grasp of the history of art and the figures in it.
This is of course the basic Google Query problem: understanding that what you want is not the precise and technical term, but a common term -- one which may even be incorrect. You don't type "Elizabeth Tower," you type "Big Ben" to get the result you are seeking. Often in prompt crafting you know the AI will misinterpret, taking the most popular meaning of a word. It is the visual version of autocorrect gone rogue. Especially if what you are targeting is obscure, the best strategy might be to describe something similar but better known. Instead of "the gadget" (as the Trinity device was known), type "sea mine with wires" to get a similar-looking thing.
Again, this is why direct cloning of source images gets used. The AI gravitates towards the easy to understand. Dial up the amount of regeneration and your actual source image of a vintage locomotive will be warped into Thomas the Tank Engine. Which is, again, why AI can be stealing to a degree rather more than the AI fan club likes to admit.
(As a sideline -- more when I talk about the process of inpainting -- it is true that you can make an original digital painting, that is, something more akin to manual painting with traditional tools, and then hand that to the AI to add detail and gloss. But the AI doesn't see things the way we do. A fairly decent sketch is actually less effective than blobs of color. The things we do as traditional artists to sell an image are, to the AI, artifacts that have to be interpreted. Better to paint a blob and dial up the "denoising" to give the AI a relatively clean slate. So, in this way, AI works against the use of traditional painting skills.)
(The exception to the exception is models that are specifically trained -- or separate mechanisms such as ControlNets -- designed to interpret drawings or paintings. These bridge the gap between an outline and the object that the outline describes to our human understanding. Without them, a drawn line isn't the external contour of an object to the AI; it is a physical thing in itself, a black string hanging in space. There's a sketch of this below.)
(And, even, it is possibly good training for the artist in learning to think in color masses, values, and planes, and not get misled into outlines and external contours.)
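For the ControlNet case, a sketch might look like the following -- again using diffusers, and again with illustrative choices (the scribble-conditioned ControlNet checkpoint named here is a commonly shared one; the prompt and filenames are my stand-ins). The key difference from plain image-to-image is that the drawing goes in as a separate conditioning input the ControlNet was trained to read as contours.

```python
# A sketch of conditioning generation on a line drawing via a ControlNet,
# again using diffusers; prompt and filenames are illustrative.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

# A ControlNet trained to read rough scribbles as object contours.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble",
    torch_dtype=torch.float16,
)

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The drawing goes in as a conditioning image, separate from the prompt.
# (Scribble models typically expect white lines on a black background.)
sketch = Image.open("line_drawing.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="a vintage steam locomotive at a rural station",
    image=sketch,
).images[0]
result.save("controlnet_output.png")
```

Without the ControlNet, that same line drawing pushed through ordinary image-to-image would be treated as literal black marks to preserve -- the string-hanging-in-space problem again.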
So this is work, and it does take skill, and some of it is traditional art skill. It is also not debatable that AI takes less work. The AI artist may spend a lot of time, but that time may be spent hitting a button over and over again and waiting for the next generation to complete. It isn't spent pen in hand, strongly engaged with every aspect of the artwork from brush stroke to composition.
This is why we draw a line between a mixer or recording engineer and the musicians. Between actors and playwrights. Between authors and editors.
I don't think you can say that AI isn't art. But you absolutely can say that it isn't traditional art. Being able to paint, and in fact doing that painting, can be a part of the process. But they can also be omitted.
Does this have anything to say about the morality of it, though?
When you are on the social media sites that are currently flooded with the stuff (and many are actively beating back the tide), much of it is absolutely stolen art. But that was happening before AI, as these are sites where people share their own versions, or remakes, or mark-ups and distortions, of commercial IPs. Where one person will take an image out of a movie and add some corny dialogue, and another person will like it so much they steal that person's marked-up steal, add some of their own scribbles, and post that.
The questions around the training data (that, and the backlash) are the bigger reasons why AI is not being used as much for commercial work. Or when it is, the makers attempt to hide that they have used it. There have still been sightings out in the wild, though, including illustrations in a textbook and in a paper submitted to a scientific journal!
AI is tainted, now. Adobe is just one company trying very hard to sell it, and to mollify the consumers who reacted badly to the first surge of clumsy images and the questions raised about copyright. On the latter, Adobe (and a growing number of other companies) have pledged that they are not using copyrighted work.
Okay, first, this pledge is coming from a company known for suspiciously qualified statements along the lines of "We are not training this AI engine on work that our users have uploaded to this part of the cloud."
But even then...is it still, morally, stealing art when what you are stealing is free to use? Sure, there are copyright-free and royalty-free resources being used all the time in the arts. They are usually used in a transformative fashion, if for no other reason than that a hundred other people bought that stock photo and you really should do something to make your book cover not look just like the other guy's book cover.
(Guilty. I did my own repainting of the stock images I used for the Athena Fox books, even if I then handed them off to the actual cover artist.)
But...the way AI is used is a lot more like the guy that goes to the tray of free cookies and takes two handfuls of them, stuffing some in his pockets for later.
I also worry about small reference pools. If your training data is nothing but what could be had royalty-free and cheap, it is going to slant the nature of the data. You risk a sameness. You risk artifacts of drawing too deeply on too few sources.
There are already those structural problems within the commercial art world. Already there are external pressures to make the art look like what the market is currently saturated with. The artists are already working with tools (brushes, filters, stock photography) that are a small slice of that infinite possibility, meaning that even without those external constraints the tools themselves are pressuring the artist to do certain things in a certain way. And more so when they are in a commercial setting where the art directors and supervisors and buyers and so forth are tacitly urging them to use the tools that the company is already using.
I worry that AI tools, the data those tools are based on, and the specifics of the look are going to spread. Basically, the visual equivalent of Auto-Tune.
Will we forget the lessons of classical art because all we are training on (or meme-ing on) is a cliché impression of the most well-known works, or worse, crass commercialized imagery? Will we become so inured to the artifacts and flaws of AI images that we cease to see them any more -- and stop trying to correct them? (Like clipping in video games; we just don't notice it any more.)
And is it driving out people who are taking the slow path, doing things by hand because they want more than the results of button-push art? As I touched on above, having a traditional approach and skills is, at the current state of the art, counter-productive. Knowing and caring what a Dutch Elm looks like is counter-productive when the AI is over-trained on other trees and is almost impossible to force away from duplicating them. Knowing and caring about weird discontinuities and bad anatomy when, again, the very process conspires against fixing those things?
AI is very good at gloss: at those surface effects, textures, blending, and lighting that are difficult to achieve manually. It is very bad at poses and composition and logic and story. The one conspires against the other, though. You can rarely do a half-decent hand painting that gets the composition and the historical research right, and then have AI add that last bit of gilding, because the mindless AI will paint the roses gold as well. In the process of adding ground fog or godrays, it will muck the hell out of the foliage.
But that's not...ethics.