Friday, April 3, 2026

Turnip Tesseract

So you are hiring an editor and want to know if they are as familiar with science fiction as they claim. Or you are hiring an artist and want to know if they are familiar with ligne claire.

Well, between Google, Wikipedia, and now AI, all a faker needs is an insulating layer of text between the questioner and the target. Any hungry slop merchant can pretend expertise long enough to get you to fork over the money.

I've got two beta readers on hire right now, several developmental editors I've been talking to, and new art needs coming up, and I am in dire need of a Turing Test. How do you hold an oral exam, a closed-book test, a calculator-free quiz, when you can't see whether the person at the other end is answering out of their own expertise or is frantically typing away in the background to let Claude answer for them?

Before you drop $2K to $6K on an editor?

Think of, say, SF. In my lifetime, there was a time when you had to have read the stuff. There were Cliff's Notes and the like, but basically you could ask a candidate to name the book that put powered armor on the map (Starship Troopers), or the name of its protagonist (Johnny Rico).

When things first became searchable online, the data was there but not the associations. Ask a candidate to compare two "big dumb objects" and they'd have to dig into their own memory to realize that Ringworld and Rendezvous with Rama both qualified.

Now Wikipedia has many more associational and analytical pages that fill in the connections between the raw data. And increasingly, you can ask AI, which can very quickly make some very subtle associations in response to questions invented on the fly.

When you get the work back, you have the volume, the leisure, and the real-world application of those promised skills to judge by, and that is where failure will show (and AI use will become obvious). But what do we do at the point of hire?
