For no good reason at all, I fired up Stable Diffusion, loaded in several different models and a few LoRA, and experimented to see what would happen if there was no prompt at all.
Some of the LoRA were so overtrained they popped up the elements you'd be using them for. Say, a "classic Chevy" LoRA would produce classic Chevys even without a prompt. Styles also came through without prompts, like a LoRA I use that does the kind of pulp cover art appropriate for all your Retro Rockets.
And then there was this.
I did several runs and got more weird sportsball sorts of displays, until the penny dropped: there's this one model I experiment with that was strangely trained, and requires a cryptic Konami Code of a thing in the prompt box before the "real" prompt. "score_4, score_6, source_pony..." And I hadn't deleted those lines before testing a model that didn't parse them.
Anyhow, the results were amusing. A lot of the checkpoint models, when left without direction, return a close-up of a textured surface. The handle of a dishwasher. A warehouse roll-up door. That sort of thing. But especially with the LoRA, it only takes one prompt word -- not even one of the recommended keywords -- to get the thing spitting out the kind of imagery it was designed for. There's a weird thing I've noticed, too: even if the LoRA is incompatible with the checkpoint model, some of the raw data in it still filters in.
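For the curious, the experiment above is easy to reproduce. This is a minimal sketch using the Hugging Face `diffusers` library; the checkpoint name and LoRA file path are placeholders, not the actual models from my tests, and the "score" tags are the fragment quoted above.

```python
def build_prompt(user_prompt: str, pony_prefix: bool = False) -> str:
    """Optionally prepend the cryptic 'score' tags some checkpoints expect.

    Leaving these tags in while testing a model that doesn't parse them
    is exactly the mistake described above: they become ordinary prompt
    tokens, with weird sportsball results.
    """
    prefix = "score_4, score_6, source_pony, " if pony_prefix else ""
    return prefix + user_prompt


def run_empty_prompt_test(lora_path: str):
    """Generate with no prompt at all (needs diffusers, torch, and a GPU).

    With an overtrained LoRA loaded, the trained subject tends to
    surface anyway.
    """
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # placeholder checkpoint
        torch_dtype=torch.float16,
    ).to("cuda")
    pipe.load_lora_weights(lora_path)  # e.g. a hypothetical "classic Chevy" LoRA

    # The experiment: an empty prompt, nothing else.
    return pipe(prompt=build_prompt(""), num_inference_steps=25).images[0]
```

Calling `run_empty_prompt_test("./some_lora.safetensors")` a few times, with and without a single stray keyword in `build_prompt`, is all it takes to see the effect.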
Oh, and here's a less amusing look into the soul of a new machine. In normal usage you have several prompt terms that all get thrown into a bucket, parsed and interpreted, and some hybrid thing created out of them that you could almost think of as a hash table for what is supposed to be a finely-ground abstraction of the original training data.
When you use a single term on an overtrained LoRA, it spits out what looks a lot like something it shouldn't even have. It isn't, of course, an actual copy of an original image. Even if you can read the artist's signature on it. But there are few enough source images, and you aren't calling on enough other elements from the model itself to mask them, either.
If you are running a "mammals" LoRA that only has six bats in the original training data, after a few runs you tend to really notice the bat with the wings up, the bat with the blue background, and the grainy night picture of a bat. Even if they don't look exactly like the originals, you get the feeling that with a little work you could probably identify those originals.
AI, however, is here. The backlash is also here, and in some corners getting vicious; many young artists are being accused of using AI with the same specious "it looks like" logic of Moon Landing deniers finding prop markings and lighting instruments in the Apollo Surface Record.
In a different corner, there are still hopefuls trying to make a buck writing novels with AI. They founder on the shoals of the fact that the marketplace is unforgiving for even good novels; there are no easy riches to be had in the writing game. They hardly need to crash into the towering cliffs of Amazon's aggressive adjustment of the algorithm (and the rules) to protect their brand.
And in forums, AI has just become another part of the noise of spammers, astroturf farmers, and all the people who just want to have an argument about their hang-up of the moment. The enshittification that makes it harder and harder to actually find information, answers, or help was already ongoing. AI output just takes another new set of skills to recognize for what it is. The biggest challenge is that the productivity of chatbots is so much higher than that of even the political talking-point sweatshops in Russia, China, and elsewhere that it swamps whatever small ability forum operators might have had to moderate or filter the stuff.
It means that, like search engines, like most of the Google monolith, forums have joined all the things that were supposed to be making life easier (review sites like Yelp, say) in becoming pitifully useless, drowning in the inevitable SEO-ification of anything that might earn a nickel anywhere.
And we can't blame AI. It is just the tool of the moment.