Friday, April 3, 2026

Turnip Tesseract

So you are hiring an editor and want to know if they are as familiar with science fiction as they claim. Or you are hiring an artist and want to know if they are familiar with ligne claire.

Well, between Google, Wikipedia, and now AI, all you need is an insulating layer of text between the questioner and the target. Now any hungry slop merchant can pretend expertise long enough to get you to fork over the money.

I've got two beta readers on hire right now, several developmental editors I've been talking to, and new art needs coming up, and I am in dire need of a Turing Test. How do you hold an oral exam, a books-closed exam, a calculator-free test, when you can't see whether the person at the other end is answering out of their own expertise or is frantically typing away in the background to let Claude answer for them?

Before you drop $2K to $6K on an editor?

Think of, say, SF. In my lifetime, there was a time when you had to have read the stuff. There were Cliff's Notes and the like, but basically you could ask them if they knew the book that put powered armor on the map (Starship Troopers), or the name of its protagonist (Johnny Rico).

When things first became searchable online, the data was there but not the associations. Ask them to compare two "big dumb objects" and they'd have to go into their own memory to realize that both Ringworld and Rendezvous with Rama had suitable examples.

Now Wikipedia has many more associational and analytical pages that fill in the connections between the raw data. And increasingly, you can ask AI, which can very quickly make some very subtle associations based on questions created on the fly.

When you get the work back, then you have the volume and the leisure and the real-world application of those promised skills, and that is where failure will show (and AI will become obvious). But what do we do in the hire?


I got the beta read back and I am in an uncomfortable position. It was detailed and echoes many of my own thoughts and that's given me some actionable stuff to do. 

Yet, the beta reader is aggressively asking me to post a rating. Not comment or critique, just stars. And there are so many weird little not-quite-red but certainly-not-green flags about her work and her presentation.

I am very sensitive to cadence. The cadence of her speech in all (but one) of her text communications is different from the cadence of her report. This isn't just a matter of formality. It feels different in a way that word choice and grammar wouldn't cover.

I've never had a beta read before. I had the impression that they should spend their energies on top-level impressions. Did the story hang together? Did the ending feel deserved and complete? Was the protagonist sufficiently something to keep the reader interested in them?

This report seemed to get down in the weeds very quickly. Sentence level corrections, down to typographical errors. Organized in bullet points. Too much praise. Now, there are things that feel like a human hand was involved, but I'm still getting the smell of AI off it.

The positive reviews for this beta reader on Reedsy are...strange. In fact, some of them have the same cadence. The negative reviews are clearly human and one of them (there's not that many reviews overall) questions whether this reader understood the assignment. This switching back and forth from business-speak to something very idiomatic feels to me like someone who isn't comfortable with English and is using artificial tools to bridge that gap.

If she is using AI, it is part of a process. I can't tell what percentage of that process is hers, however.

I'm not comfortable leaving a good review. I also am not comfortable confronting her on this. And as I said, she is being very aggressive about asking for those stars.

