Thursday, August 7, 2025

Chatter

So here's the big thing that is wrong with using AI to write your book.

The book is the writing.

This works for visual imagination, too. Hell, in that case we can go right down to models of the human visual system. Know the blind spot? Yes, but you can't see it. That's because vision is an illusion. You aren't seeing this 3D world in high detail, straight lines and everything. That's constructed for you in your mind.

Or, rather, the illusion of it is. You think you see the world in high detail because anything that catches your attention, your eyes flick to it in that ceaseless motion they are always making. Your mind is maintaining this sense of the rest of what is in the visual field and giving you this emotional impression of it all being there, even though the reality is that it is in lower detail outside the center of your vision and your moment's attention.

It is like a dream. When you experience it, you think it is all there in detail. You also think the story makes sense and that's what I am winding back to.

Because anyone who actually crafts an art realizes that while the basic shapes and that impression of it all making sense is there at the outermost level of detail, at the most zoomed-out level of perceiving it, the experience of the book or movie or artwork is the encountering of details that agree with and support that impression.

And these details aren't in the writer's head. They aren't in the artist's visual imagination, no matter how good. Because the human brain isn't big enough to hold it all.


That artist above had a concept of the character that informs every stage. She didn't have to draw the shoes before she knew what kind of shoes that character would wear. But at the same time, she didn't know how those socks folded or how many laces or any of that because those details didn't matter.

In many cases they unfold from the underlying conceptions in a logical way. Or can be reconstructed from basic principles. She doesn't need to invent the concept of "shoe" just to finish a drawing. She can also start that drawing knowing that shoes exist, that she as an artist has drawn shoes before, that she knows how to look up a reference if that fails. It is, to borrow the math joke, a problem for which "a solution exists."

But the specifics of that shoe support that original idea of the character, and the execution of it is unique to that artist in many ways, and the combination is that which makes this her drawing.

At the very best, if you ask ChatGPT to write your novel for you, it is only using that first gestural drawing. None of that input the artist makes is there.

And that's best case. The AI operates not with a deductive logic but statistically; it will add the kind of shoes that are more likely to be added in similar circumstances. This is a place where an artist could say, "ah, but he might have penny loafers with tassels, and that could add a little flair that isn't otherwise visible in his dress." The AI can't make those kinds of decisions.

It can give the illusion of making them, because it will make some decisions and some of those will be low probability. But even outside the "death of the artist" argument, since there's no connectivity here, the details won't support each other.

More artist talk. See that gestural drawing that's first in the series, and how that captures how the character is standing? Now look closer. The drape of the clothing follows how that clothing would have to move as that person assumes that position. The line-work points to and subtly accents the underlying line of action.

Look at an AI image and the line is broken. Because there never was a line; any of the parts that remain are borrowed chunks from similar poses and similar choices made by similar artists that may or may not resemble each other in this specific aspect.

Every single line of dialogue in a novel is doing something. Every choice of a word in a description is doing something. It isn't a "gaunt" stony outcrop because that's a synonym for bare, it is because the writer wanted you to be thinking of sunken cheeks. Of hunger, perhaps, thus helping to establish that this is a place bare and inhospitable. Or it is "gaunt" because they'd used "barren" in the previous sentence and those two sound too much alike. Or it is "gaunt" because three paragraphs down there's going to be a little joke with it.

Again, the AI can do this. Not through intention, but because it is in the training data, and the patterns are familiar, and some other writer once made a similar choice even if for different reasons. So it can come up, and it can convince, create that illusion of mind, the way a dream can appear to have a rational plot at the moment you are dreaming it.

But none of this is the choice made by the person who asked AI to write their book.

No. You didn't find the cheat code to make art. You didn't find a way to skip the boring part -- because the part of it that is your book isn't there in the idea, in the outline, in the prompt.

It didn't exist. It never existed. You've got an illusion of this wonderful book that just needs someone to put the words down for you. No. You don't. That is the blind spot speaking, the dream speaking so compellingly. The book in your mind doesn't exist yet. And it will never exist.

Unless you write it.

Diffused Goals

Hit a stall on The Early Fox. Possibly due to the new meds — which are promising, at least.

I was reading up (well, mostly listening to a podcast series) on the Apache, and looking at videos of Cloudcroft, NM.  Sigh. Cloudcroft really doesn’t fit the vibe I was going for.

Took several days to figure out that this could be a good direction after all. And now the cast living inside my head have adjusted to their new status and they’ve come to life again.

But they are no longer searching for Doc Noss’s lost treasure. That pulled the narrative too far off course. Pity, because I’d even worked out a clue they could enlist Penny into helping with. (See, an old letter was using bad schoolboy Greek, but Penny recognizes it is a paraphrase from Xenophon, because that’s the schoolboy lesson. And her Greek is equally bad and she’s making the same kinds of mistakes so she understood what she was looking at...)

Anyhow.

We’ve got this deeply ingrained instinct to learn shit. And once we learn a new thing, we get proprietary about it. We want to get better, and we want to boast. Yesterday at work I got into a conversation about 7400 series chip codes. It is hard not to remember once being good at a thing, and wanting to pick it up again and polish those old skills.

We get into Jeff Goldblum territory too easily, where we start pushing at a thing because we’ve gotten intrigued by the technical challenge. And we lose track of why (if there ever was a reason) we wanted to do it in the first place.


So, Stable Diffusion. AI image creation is moving with lightning speed. This is more of a tech bubble thing where the industry is visibly trying to excite people, and throwing a ton of money at it (which hides much of the true cost), but nobody has quite answered what it is they are actually trying to solve.

They’ve got a cool thing, and someone must be willing to pay money for it. Dot dot dot profit.

All of us down much lower on the tech pyramid are chasing around trying to learn about it, trying to figure out how it will affect us. And in some cases, playing with the thing. A project that started out as fun but is now increasingly just about the technical challenge.

I’m still on the aging AUTOMATIC1111 Web UI front end. Mostly because I already know where everything is. And my hardware might not be able to take advantage of the modular structure of ComfyUI.

The Web UI's SD implementation was originally built around the SD1 model, trained on the LAION data set at 512x512 pixels. SD1.5 proved the most popular and long-lasting.

I’ve never been particularly lucky with AI upscaling. Probably because I’ve been generating with a variety of LoRAs with narrower and more specialized data sets and focus, and lacking those resources, the upscalers tend to try to turn everything into a variation of what it is they expect to see.

A basic and perennial problem with AI. Even the more recent data sets are mass scrapes of largely copyrighted archives. Poses, for instance, are over-represented by advertising, fashion, and news, meaning they default to the standard upright-and-facing (with a 20-something, good-looking, white model, too). The AI borks when asked to do fighting poses because that’s so much smaller a part of its resources. Even if it starts with the right pose, it drifts off (or it fleshes out its equivalent of a gestural drawing with the wrong muscle groups and clothing details — all of them belonging on a model in a more familiar pose).

And you may ask, how can you generate at a higher resolution in the first place? Because the source images weren’t all taken at the same distance. One might be a full-length person, one might be a close-up of hands. It uses the latter to fill in when it is doing the later passes.

Theoretically. Since it is looking for any resemblance that fits the guidelines, you can (and sometimes do) find a clear kneecap instead of a knuckle. Because the dice have no memory; there is no underlying plan. At every sequentially numbered step between the original gaussian noise and the final render, it is treating it as a new problem of “what is this blurred image and what in my training data might look like it?” Modified of course by prompt and other weighting such as ControlNet.
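That statelessness is just the shape of the sampling loop. A toy sketch of the control flow (my own simplification; `guess_fn` stands in for the denoising model plus whatever prompt or ControlNet conditioning is loaded):

```python
def sampling_loop(image, steps, guess_fn):
    """Run the denoising loop. Each pass receives only the current
    image and the step index -- there is no carried-over plan, so
    every pass re-answers "what might this blurred image be?" from
    scratch."""
    for t in reversed(range(steps)):
        image = guess_fn(image, t)
    return image

# Hypothetical stand-in model: nudges a scalar "image" halfway
# toward a target value on every pass.
nudge = lambda x, t: x + (10 - x) * 0.5
result = sampling_loop(0.0, 4, nudge)
# result == 9.375
```

The point of the sketch is only that nothing outside `image` survives between passes, which is exactly the dice-have-no-memory problem.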

This relates to what is seen as the problem of hands but isn’t a problem itself; it is a diagnostic. But I’ll get back to that.

The next model was SDXL, which used a 1024x1024 set of sources, with some curation towards representation et al. (yet still massive copyright violations). So with that as a base you can generate at 1024 native, and up to at least 2048 with a low level of artifacts.

For me personally, I couldn’t get XL to run correctly. There’s a fork called Pony which added a ton of anime images (2.5 million scrapes of anime, and furry -- or couldn’t you guess?) That biases that model, so there are some forks of Pony towards more realistic images.

I’m using one of those as the base model now.  Each model has its own peculiarities, both in the variety of training data, the weighting of parts of that data, and the prompts which are recognized. One model might completely ignore “Mazda,” another immediately spit out four-door compacts.

(Or ancient sun gods).

This is the basic and endemic problem of AI; it converges on the norm. More than that, it produces a convincing simulacrum of that norm.

Which is not to say people aren’t able to explore personal visions. But that convergence means, among other things, that the dice-have-no-memory effect gets amplified. The AI does not understand it is supposed to be a steampunk dirigible. At every step of the render it will be attempting to relate what is in the image to what it finds familiar.

LoRAs attempt to swamp this effect by bringing their own pool of training images, which are heavily weighted. But since that is a smaller number of images, they can’t handle the variety that might appear in the final image. So it started to render a brass gear, but it ran out of reference material that matched what was currently in the render and swapped in a gold foil star.
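Mechanically, a LoRA doesn't replace the model; it rides on top of it. A minimal sketch of the weighting idea (plain Python with hypothetical toy matrices; real LoRAs store the update as two low-rank factors and apply it per layer to the model's weight tensors):

```python
def apply_lora(base_weights, lora_delta, alpha):
    """Add a LoRA's learned weight update onto the frozen base weights.

    `alpha` is the weighting dial: cranked high, a small pool of
    training images can swamp the base model's own preferences.
    (Toy sketch -- this treats one flat matrix, not a whole network.)
    """
    return [
        [w + alpha * d for w, d in zip(w_row, d_row)]
        for w_row, d_row in zip(base_weights, lora_delta)
    ]

# Hypothetical 2x2 layer: alpha=0.0 leaves the base model untouched,
# alpha=1.0 applies the LoRA's full learned shift.
base = [[1.0, 0.0], [0.0, 1.0]]
delta = [[2.0, 2.0], [2.0, 2.0]]
merged = apply_lora(base, delta, 0.5)
# merged == [[2.0, 1.0], [1.0, 2.0]]
```

The gear-into-gold-star failure follows from this: the delta only covers what its small image pool covers, and outside that the base model's statistics take over again.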

But back to my current process.

Inpainting is the key. Inpainting is basically the img2img process with a mask.

When you are generating from scratch, the engine fills a block of the requested image size with gaussian noise. It then progressively looks for patterns in what is first noise, then a noisy image. In img2img mode the starting point is a different image. A selectable amount of noise is added; basically, the AI blurs the original, then tries to construct what it has been told (by prompt and other weighting) to expect to see.
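That "selectable amount of noise" is usually exposed as a denoising-strength slider. A sketch of how the common diffusers-style img2img pipelines turn it into a schedule (the function name and plain-integer steps are my simplification; real pipelines map these indices onto scheduler timesteps):

```python
def img2img_timesteps(num_inference_steps: int, strength: float) -> list[int]:
    """Pick which denoising steps actually run in img2img mode.

    `strength` decides how far up the noise schedule the source image
    is pushed; only the tail of the full schedule is then denoised.
    """
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    return list(range(num_inference_steps))[t_start:]

# strength 1.0 = start from pure noise, all 20 steps run;
# strength 0.25 = stay close to the source, only the last 5 run.
full = img2img_timesteps(20, 1.0)
gentle = img2img_timesteps(20, 0.25)
```

At strength 1.0 the source image contributes nothing and you are back to generating from scratch; at low strength the engine only gets a few passes to nudge the image toward the prompt.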

Inpainting mode further restricts this with a mask, meaning only certain parts are corrected. In a typical render-from-scratch workflow, the area of a badly rendered hand is selected, then that part of the image re-rendered until a decent hand appears. (Not picking on hands here, regardless of how meme-able those have been. It just makes an easy example).
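The mask itself is just a per-pixel gate. A toy sketch of the compositing idea (pure Python over a flat pixel list; real pipelines do this on latent tensors, and many also re-noise the unmasked region each step):

```python
def composite_masked(original, denoised, mask):
    """Keep the original everywhere the mask is 0; take the freshly
    denoised pixels where the mask is 1. Applied around the denoising
    loop, this is why only the badly rendered hand changes while the
    rest of the image stays put."""
    return [d if m else o for o, d, m in zip(original, denoised, mask)]

# Hypothetical 6-pixel row: only the masked middle section is replaced.
row = composite_masked(
    original=[10, 20, 30, 40, 50, 60],
    denoised=[99, 99, 99, 99, 99, 99],
    mask=[0, 0, 1, 1, 0, 0],
)
# row == [10, 20, 99, 99, 50, 60]
```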

For my process, I select the dirigible (made-up example; not sure I’ve ever attempted a proper steampunk image) and load up a specific LoRA and rewrite the prompt to focus attention on what I need to see. Then I switch to the guy with the sword, inpainting again with a pirate LoRA and prompt, and so on until all the image elements of this hybrid idea are present.

I want to get back to this. The idea of a dinosaur in Times Square is easy to achieve with any of the various AI implementations, but only casually. It will not be a good dinosaur, or a good Times Square, and the ideas will get contaminated. The dinosaur will get Art Deco architecture and the buildings will sprout vines. At a casual glance, it is fun, but this is why AI is and will probably remain unsatisfying.

When you drill at all deeply, it is getting it wrong. 

I just tried to do some desert landscape and at first glance, sure, it does all the desert things. Sand, rocks, wonderful sky. Except. I’m no geologist, but look any longer than a second or two and the geology looks just really, really wrong. And there’s a reason for that besides a lack of sufficient specific references forcing it to repurpose more generalized resources.

That reason is that this is entirely built on casual resemblances. There’s nothing in the process resembling the rules that underlie the appearance of almost all things. It doesn’t put two hands on a person because it is working through the denoising process in assembling a person of standard anatomy; it does this because most of the training examples present it with more than one and less than three hands.

It has no concept of hand. It finds hands in proximity to arms but, like a baby, there’s no object permanence. An arm that goes behind something else now no longer carries forward the assumption of a hand being involved.

That's why you will hear the AI bros shouting that hands are a solved issue. They aren't "solved"; the symptom was attacked brute-force style by giving it more reference images of hands until the statistical probability of looking like a normal hand rose sufficiently. The underlying problem remains.

If you look closely, and especially if you have any subject-matter competence, the details are always wrong. No matter what it is -- and the more out of the mainstream the subject, the more likely it will be wrong.

The big models were trained on a shitload of human beings and the average of that mass is a thing that looks to the casual eye like a human. We apes are trained to pay attention to apes so one of the ways AI images convince for a moment (before Uncanny Valley yawns wide) is a nice smile and a pair of eyes you can make contact with...and it is only with a longer glance that you see the scenery behind this particular Mona Lisa is a worse fantasy than whatever Leo was painting there.

That's the trick of AI. It has this glossy, convincing look that, until AI came along, took a lot of labor and a lot of skill to achieve. Just like LLMs can convince us with four-dollar words and flawless grammar that the facts contained in that text are also correct. But there's no connection. The kinds of details that took a photograph or a really dedicated painter are achieved without effort, because these are just surface artifacts.

In a slightly different context, Hans Moravec talked about why we overestimated computer intelligence for so long: because we humans find math hard. Adding up large numbers is hard for us because we are general-purpose analog machines and the reality of the elaborate calculus we are doing just to catch a ball in one hand is hidden from us. So the machine, by adding big numbers, looks smart. And we can't understand intuitively why recognizing a face should, then, be so hard for it.

So AI images have this same apparent competence. It takes an artist’s eye, or an anatomist’s, to see the pose is fucked up, the muscle groups are wrong. The more you know what to look for, the more you fail to see the things that a Rodin was able to carve into clay and stone. How this finger flexed means this muscle is tensed. There’s reasons for things. There are underlying structures.

It’s not just a pile of rocks in the desert. It is the underlying rock partially covered by weathered material.

But back to the art process. I am well beyond inpainting the bad hand. I am doing this inpainting cycle right down to the basic composition, because what I am after lies too far outside any trained concepts or available references.

Part of that fault lies in my base model of choice. The 2.5-million-image Pony set especially is character art, very presentational: a single large posed figure. It doesn't want to do a long camera view of three people having a conversation.

I usually start with another image. It might be a generated image — but one that might be using a different model entirely. And even that will be so far off, that image goes outside to be painted on with a tablet and digital brush.

Other times I've started with a photograph that is close in some way. Or a rough sketch. Or once (and it went so well I mean to continue the experiments!) a posed artist's mannequin.


Multiple passes at different levels of blur and different focuses of prompt are needed to get the thing to move in the direction I’m envisioning. For this, another useful dial to tweak is the “steps” dial. A low blur and a low step count means it doesn’t change much, but what it does change bears very little resemblance to the original.

A high step count means it moves conservatively from the blurred image, refining a little bit at a time, and thus tends to preserve details in a way that a low step count doesn’t.

High blur, on the other hand, frees the engine to make radical changes in shape and color; changes that are not the same as the conceptual jumps you get with a low step count.
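Under my reading of those two dials, the interplay can be sketched numerically: "blur" (denoising strength) sets the total distance the image is allowed to move, and the step count sets how many passes that distance is divided across. A toy model (my own simplification, assuming a linear schedule; real samplers are nonlinear):

```python
def refinement_profile(strength: float, steps: int) -> tuple[int, float]:
    """Return (passes that actually run, change budget per pass).

    High steps at the same strength -> many small, conservative
    refinements that preserve detail. Low steps -> the same total
    change taken in a few crude jumps.
    """
    effective_passes = max(int(steps * strength), 1)
    per_pass_change = strength / effective_passes
    return effective_passes, per_pass_change

# Same strength, different step counts (hypothetical numbers):
conservative = refinement_profile(0.5, 40)  # 20 passes, 0.025 each
crude = refinement_profile(0.5, 8)          # 4 passes, 0.125 each
```

Which is why cranking steps up is the detail-preserving move, and cranking blur up is the shape-changing one.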

Often, though, the AI needs a more direct hint. So back through an external paint application.

This part is peculiarly fascinating to me because, a bit like the Moravec example, it requires me to think in a very different way. For one very basic lesson, the AI responds to value, not lines. That's one of the tough things for many young artists to learn because from the first moment we pick up a pen we tend to think in terms of lines. Of outlines, of borders. Seeing things in shading planes is a further step. But seeing just raw tones, divorced from other clues; that's an unfamiliar way of looking.

That pride in learning a new skill? I have a certain pride in knowing the shortcuts that communicate to the AI. It isn't about realism, it is about certain tricks it recognizes. And on the flip side, avoiding the things I have learned confuse it. This happens in prompting, too, have no doubt about that, but there is a particular joy in being able to use those skills of seeing and visualizing that I used back when I was designing for the stage. Or trying to learn how to draw comic books.

In any case, the last steps are performed on the whole image, using a more conservative LoRA and prompt, low blur and high step count. This emphasizes refining, cleaning up what is already there. The final pass is done with the upscale multiplier on — my graphics card can handle up to 2.5x the working resolution without tiling (and I’ve gone 4x with tiling).

I know the upscaler is supposed to use the prompt and LoRA from the image in question but this method gets much, much closer to conserving the details that are peculiar to that LoRA.

And at the end of it I look at it, say, “That looks cool” and then close the file. Because there’s really little purpose in it otherwise. The goal was learning something technical.



Sunday, August 3, 2025

Singing the Blues

 


Part II is complete. I'm aiming for a shorter book this time so I have as little as 20K to go before the end game. That's 2-3 set-piece scenes, a couple of long drives, another desert wander if I can do it and a bunch of conversations. For what was planned as a novel with mostly silences I'm ending up with a hell of a lot of conversations.

Have decided to eschew continuing the outline, and just see how the story unfolds. Maybe I should plan more. I am really, really looking forward to switching gears to a couple of SF novels where I can ration the world-building. This series is largely about showing off a region and a culture and there's an additional constraint that the real world isn't tidy. I can't have a single planet, war, piece of tech that sums up a theme or idea I'm trying to put across. Instead I just have to deal with the mess of nineteen different tribes in New Mexico alone. Even when you go back to the Ancestral Pueblo (who we used to call Anasazi) they aren't the only or dominant culture in the area.

So I got Freeman singing some blues and a song that might be too on the nose. And I felt obligated to at least mention the Mound Builder Myth -- it is part of the themes I'm developing but I can't spare pages to go into it properly. And I really need Jackson and Sanchez back for another scene before the ending so I'm dreaming of a sequence now where they get in the way of a truck that's trying to run her off the road, a la Silkwood.

The only episode I've got at all planned out is I'm gonna go to Cloudcroft. That's gonna be the big fix of western history, Indian Wars, treasure hunters standing in for prospectors, and a group I'm calling the Asshole Apache in my notes (that's how the NAGPRA rep at White Sands referred to them). Another bunch of retired guys in a bar, but instead of playing the blues (and old protest songs) they are talking up past exploits and plotting how to get at the Victorio Peak treasure.

The whole thing might be too on-the-nose to feel right. Too much easy stereotype there. But they've been living in my head long enough the scenes and settings and conversations have all grown around them and at this point all I have to do is write them down.

Oh, and do some research on Cloudcroft, Lozen (and the Apache generally), plus I've found some good stuff on the early days of Los Alamos. (The Netflix series is...a Netflix series. Too much history is changed because the story they wanted to tell is sex and suspicion under the tensions of building the first atomic bomb.)

And listen to some more of that Delta Blues.



My opinion of the moment is that AI was pretty much the inevitable next step in what was already happening in publishing. And in the pop music industry, for that matter. Amazon Kindle is the literary equivalent of streaming music services, and when you build a business model on quantity, the pinch point is how much you pay for processed creative product.

People have been exploring that with low-content books and short books. They'd already reached a point where writers going into the eBook market couldn't afford editing or boutique cover services. In fact, the pressures of that algorithm running the firehose of "more books but cheaper, please" means even spending longer than four months to write the damned things is a luxury unaffordable for the self-published writer.

I am writing faster. I feel I've finally made a breakthrough where it really is starting to come easier. But, as with so many things, I seem to have arrived too late. I fixed some of my outstanding health issues maybe four months too late to jump on a new position I really, really wanted (and I'm still dealing with the fallout from that). I put money in the stock market just before one of the big crashes. Joined the Maker Movement and a hackerspace when that was imploding under the weight of commercialism. Doctorow had only the corner of it; enshittification is happening everywhere (and has been happening for a long time, will always happen).

The landscape of fiction is changing so rapidly I don't even recognize it now.

Of course, here I am writing a travel adventure series where we finally crawled out of COVID to hit world-wide revolt against the growing problems of mass tourism (something I did indeed write about in my first book, with the horrendous problems suffered by Venice). And as of this month it has become increasingly difficult to Fly While American. We've managed to piss off so much of the world that even (unfairly) pretending to be Canadian doesn't return travel to where it was even ten years ago.

Hell, I had story lines planned both in Moscow and in Tel Aviv. Not really stories you want to be trying to tell at the moment. Everything is changing so rapidly. I struggled enough dealing with the ubiquity of GPS and translation software and Google (although, oddly, that enshittification is actually helping there. It has reached the point where "I'll just Google up this obscure historical fact that will solve the mystery" is no longer the panacea for the problems faced by an Archaeologist-Adventurer.)

Oh, yeah, and my latest mass-produced cover is so...meh...I don't even have the heart to get back to 101 Covers and see if it can be rescued. I'm close to just writing off that hundred bucks and doing something different.

Not AI, though. I'm not desperate. Or stupid.


(We didn't need a university-level study -- there are at least two I've read as PDFs -- to test the claim that the original training data is so finely ground it would be impossible to recover the original images from it. Well...poke around enough, and I'm pretty sure you could identify that artist's signature that the AI put in there without even being asked for it!)

Saturday, July 26, 2025

Whiskers on Kittens

Finally did the first Jackson and Sanchez scene. I have a feeling I'm going to revise a few times before I am happy with it. There's a hell of a lot happening in these last chapters of Part II.

I got a few hundred words down over breakfast. And second breakfast. Was glad I had a computer available for the next few (text is up to 30K now).

Glad because these are a few things I had to look up. Animal life (and tracks) at White Sands. Colors of various "warning, radiation" signs. The street address of the Waste Isolation Pilot Plant. The dates of the kitty litter incident at WIPP, and the safety officer exodus at LANL.

The correct term for the Air Force field uniform. Oh, that was fun. Turns out the same year as my story, they are phasing out one style and bringing in another. Only, not across the whole service at once. Some of them are still back in a third!

Air Force slang. The correct branch of the Air Force for my pair (not that they are telling Penny). Ranks, forms of address, and saluting protocol (I may have saluted a lot, but I was never an officer). What is a HEMI (that was a thing ten years ago, as of this story. Oh, well. Penny has already stated she doesn't do cars. Or Chrysler trucks). The contents of an abo knapping kit, shapes of Clovis, Folsom, and Western Stemmed Tradition points. Burlington (Missouri) chert. Genetics of the Solutreans. Kukulkan.

There was probably more but that's all I remember.


And, no, Spock is totally wrong here. Unless things really are different in the Star Trek universe...after all, didn't we already have the Eugenics Wars? Khaaaaaaaan!

Sunday, July 20, 2025

The Trouble with Research

...is that it is volatile. I spent three days (well, I was doing other things, too). But three days just to track down a particular piece of art.


See, I'd seen it. I made a note to myself that I might want to use it. But that was when I was early in the development of The Early Fox and didn't know quite where it was going to go. So I read three or four books on nuclear New Mexico, on Navajo miners and Downwinders, as well as on ranchers and eminent domain in White Sands and on the hill that became Los Alamos.

No matter how much I take notes, and highlight passages, I just can't remember the stuff I end up wanting to use. So I try, these days, to parcel my research efforts out. I read just enough to make sure the plot points are plausible.

And I wait until I'm actually writing the scene before I read any further.

One downside to this is it is almost like cramming for an exam. In this current book, the geology of the playa plays a crucial part in the plot. But I already wrote the scenes that are heavily about that geology. I risk having forgotten too much when I come back to it for the final clue.

Another downside is a lack of front-loading. My new Nuke Museum sequence is going to take some absorbing of Los Alamos in the Trinity Test days. Ideally, I'd stop and watch Oppenheimer and do some more academic research and I'd let that sort of cook until I could basically write a short historical-fiction excerpt.

And...oops; Manhattan just dropped on Prime free. Of course, the same book I discovered the Noel Marquez painting in is SCATHING about the Manhattan mini-series...

I don't want to lose steam so I'm skipping over the museum to do Penny's meeting with Jackson and Sanchez, and the end of Part II. Which is what I'm doing with Egtved anyhow. But I do worry that the stack of plot changes is reaching critical mass. At some point I need to go back and rewrite before I forget that the explanation of what a Christie Pit is got moved to Chapter 8 and so needs to be taken out of Chapter 4...

Also research-wise, the desert stuff especially makes this a very visual book, and that makes it better to do at home on the dual-monitor setup. I really do love writing in a cafe over a long brunch, but the phone screen can only handle blocks of text. I can't have pictures of the rocks and sand spread out at the same time.

I only got five hundred out today, but I still have a little time after dinner and -- now that I'm about to hit the "Test Bed," it is going quickly.

Good thing, too, because I've got shiny new idea syndrome. Ran into another article and I want to do the boat one, and the viking one. But no vikings in boats. For how lightweight these damn Athena Fox stories really are (and for how low the sales are on them), I really should be punching them out on a four-month basis.

Oh, yeah. And started the home folder and dropped a 500-word proof-of-concept on my "words about writing" book.

Saturday, July 19, 2025

Frybread

I've been fried all week. Strange week. Have a lot of energy at work but collapsing in the evenings and thus, no writing done.

One day into the weekend and there's 1,700 words down. The whole Pueblo Cultural Center thing written. But...reviewing that work (after I woke up again, damn this sickness), realized I'd completely forgotten the mural. So now need to open one of my Kindle books, track that thing down, and slot it in.

On top of the open tabs I've got on pueblos of New Mexico, language groups, blue corn, and the Three Sisters. And oh boy is frybread a rabbit hole. Not just a million varieties, but history, legacy, and identity and, yes, even controversy. That is a hell of a lot to load on to one pancake. No wonder the stuff is nearly flat.*

These driving scenes are killing me. I end up talking about all sorts of strange things in them. The intent was to just make them contemplative, just a landscape passing almost as if in a dream. But I am not Tolkien. I can't fill three pages on how dry the rocks are. I couldn't even do it with a nice fat tree to describe.

And I'm not ready for the nuke museum scene. I wish I had the work week still ahead of me, because dreaming up this one is good stuff for the mental back-burner. I have the edge of something with Penny imagining herself a Los Alamos wife (and it was wartime, so yeah, a lot of them were working inside the gates, too. Some even had degrees!) And somehow carrying this on to some sort of bad blood with a surly teen or an influencer or someone who damaged an exhibit and blames Penny for getting them in trouble over it.

Because I really do want that chase through the missile yard. And doing it with Penny half-thinking spies at Los Alamos...

But I'm losing my focus, so I'm gonna go watch the Tenth Doctor play the Fifteenth... 


* Frybread, described by many as an indispensable ingredient of a powwow...is made with wheat. Think about it for a moment.

Wednesday, July 16, 2025

Crime Novel and Museum Guide


My outline revisions now have Penny visiting two museums in Albuquerque before doing her hike into the desert. It harms the pacing, but it is the best way to set up stuff I want her to know for the stuff that happens in the end of Part II.

And I don't actually have to info dump. She can be shown learning "things" basically off-stage, with the scenes about...scene stuff.

I have this idea of her somehow experiencing Los Alamos in the 1940's via some of the exhibits. Bringing that more to life. It isn't exactly the core historical period but, really, the historical thing for this book is largely the nuclear age.

BTW, I write this on the 80th anniversary of the Trinity Test. 

I also, really really want to do a chase or fight scene around the rockets. It looks almost like a railyard out there, with these missiles on their sides lined up like detached strings of freight cars. I had thoughts while I was there of ducking in and out in one of those "chase through the railyard" scenes.

Only problem is, there's not anyone chasing Penny yet. There are at least two (possibly three) distinct things she does towards the end of Part II that change that status and change the game.

And I sort of want this to be real stakes. Not her imagination running away, not a confused Karen chasing after her because she thought Penny was a docent and is demanding she explain the Titan Missile Program to her bored kids. In the best of all possible worlds, this would be the fallout from some Good Samaritan act earlier.


I'm feeling a little better about the lack of side quests. I mean, I still don't have them, but she did do a few active things to earn clues, and wasn't just getting them handed to her. 

Anyhow.

I made another lovely trip to the ER. So understandable why I'm writing a bit slow. But it really does feel like I'm getting the hang of putting out a good 5K a week, and these are methods that can be expanded to more, perhaps significantly more.

Which is good, because I'm still having Shiny New Idea syndrome.

I still wish sometimes I was doing Actress Penny. Taking it even further: she actually did a bunch of movies of the Asylum kind -- possibly mockbusters referencing more directly properties that I wouldn't be able to include in their original form.


So no skills in archaeology, or gunplay, or really much physical skill other than a rough-and-tumble physicality. But a skilled mimic with eidetic memory and original-Penny's gift of gab/CHAR 20 ability to convince other people. She'd be the kind of hero who could fake knowing guns well enough to bluff an enemy...but also able to somehow pull off firing the thing anyhow when things went sideways.

And the movies are a running gag, both for pop-cultural references that are entirely IP free, and as her version of the Junior Woodchuck Guidebook.


The more plausible/likely idea I had, though, is to take the idea of fiction becoming real, and two ordinary people getting forced by "the story" to take on roles of omni-disciplinary historian/linguist/archaeologist and companion good-at-everything-physical Action Girl archetype.

And make that the last chapter of the "Other Adventures of Athena Fox" idea I proposed earlier.