Sunday, December 28, 2014

Combat Cameraman

With an emphasis on the "combat."

Just for a lark, I started a new game of Bioshock to see what you could do with the Research Camera.

The game is pretty well balanced and pretty well organized: you usually need a combination of approaches to get through all the various obstacles. But still, a few combinations are so powerful you can get through ninety percent of the game with them. One a lot of players like is the Electroshock/Wrench combination. You can switch (equipped) plasmid and (equipped) weapon very quickly, usually by tapping the right mouse button. For the above combination, you stun enemies (or cameras, for that matter) with the plasmid, then close to melee range with the wrench.

(It is also a nice combination if you've been hacking turrets, particularly flying turrets. Just stun the enemy and let the turret finish them off.)

There are a half-dozen buffs for the wrench anyhow (various tonics you stumble upon at various points in the game -- no, using the wrench more doesn't increase the probability of these items being made available). I can't think off-hand of any corresponding buffs for the various ranged weapons options, other than the "Power to the People" weapons upgrade stations.

So the trick to using the Research Camera fully was to equip a plasmid that was a primary weapon. And Telekinesis was it. It is also spoken of as, really, the most powerful plasmid in the game. You practically never run out of ammunition if you are using it in combat, and the EVE cost is so low you can use it for hours without needing to find more EVE to recharge.

The first time I tried this combo, I was searching a room when a Spider Splicer ran in after me. I took her picture, then caught one of her spider claws in mid-air and threw it back at her. She yelped, and ran back out to use the nearby First Aid Station -- which I'd hacked on the way in! And I was sold.

So I went through the game with the camera at the ready. Everything that attacked -- and several things that didn't -- I took pictures of. Only when I'd finished, or when I was really pressed, would I switch to telekinesis.

And boy is the Research Camera powerful!

You almost immediately start collecting damage bonuses. It also starts dropping tonics on you. Within the first few dozen pictures of Splicers leaping at me, I had upgraded to Static Discharge 2. Which meant any Splicer closing to melee range would be thrown back violently by my natural electric shield, and with my other damage bonuses often killed outright. Then I'd pick up the Splicer's body with telekinesis and use it to mow down the others.

The only downside is I'd also been letting the Big Daddies be, so I quickly ran out of fresh slots for all my nifty new tonics.

Makes me wonder what other under-emphasized gameplay mechanics in the game might be similarly powerful if utilized correctly...

Friday, December 26, 2014

Blue Light Special

Building a dozen stage marker lights this weekend (assuming the parts arrive today or tomorrow).

The client wanted them in a variety of different colors, including blue. I told him the latter might be difficult.

At a first approximation, blue LEDs typically list a forward voltage of 3.3 volts, and a pair of AAA cells (at 1.5 volts each) sums to 3.0 volts.

At a second, better approximation, the voltage range (depending on formulation) runs all the way from 2.48 to 3.7 volts. As opposed to, say, red or yellow, which list between 1.7 and around 2.3 volts.

And this is another one of those wonderful places where what looks like an arbitrary number is actually a window into the physics of the universe. At the simplest level, an LED's forward voltage is set by the band gap of the semiconductor. And the frequency of the photon emitted is directly proportional to that band gap. Blue LEDs typically require more voltage because the light they emit is of a higher frequency.

But only at a first approximation. LED physics is a lot more complicated than that. In any case, because this is a photonic effect (as well as a diode junction), an LED isn't like an incandescent bulb. They either light, or they don't. There's no equivalent of the barely-seen glow of a filament on the last dregs of a discharged battery.

Nor is battery chemistry as simple as that. 1.5 volts is the nominal, chemically defined cell voltage of the older alkaline formulas. Rechargeable NiMH cells are nominally 1.2 volts, though fresh off the charger they sit closer to 1.4. And unlike LEDs, batteries are not consistent in voltage across their draw. As a battery discharges, its voltage also drops.

So this is the long answer of why a blue LED would make a poor match-up with a pair of AAA cells; theory says the cells would just barely provide the necessary voltage to make the light turn on, and a bare fraction into their lifetime, that voltage would drop too low to work any more.

There is enough wiggle room in these numbers, however, to make experiment worthwhile. I stuck a blue LED in my bench rig (a pair of AAA batteries in a holder and a couple lengths of jumper wire). And four days later, it is still shining.

Why? Well, one factor may be that I'm using rechargeable batteries. They start with a lower voltage, remember, but at least in the Sanyo "Eneloop" series, they maintain that voltage for a much greater part of their discharge curve. The voltage does not droop significantly until they've given up a good 70% of their total capacity.

Note in passing that the capacity of a battery also changes with discharge rate. When you state that a battery holds "1500 mAh," you have to specify the typical amperage of the draw. For AAA batteries, that stated amperage is 15 mA. Which, fortunately, is within the range of what a single high-brightness LED imposes.

(Incidentally, I just realized I could have made these things with Piranha RGBs, and changed color with just a jumper.)

Still, the red LED in the prototype is now on six days of continuous use, and showing no signs of dimming either.

Thursday, December 25, 2014

A Man, a Plan, an...Underwater City Based on Objectivist Principles: Bioshock!

Another wandering review of an ancient game (well, 2007 release...)

It is tempting to approach Bioshock with hindsight, forgetting just how many innovations it introduced to a wider audience. The combination of first-person-shooter gunplay and survival-game ambience was relatively new. The world-building was exceptional not just for a new level of technical achievement, but for an integrated and cohesive vision that made internal sense (instead of being just a series of spectacular backdrops). Even Crafting -- present in nascent form -- was essentially new.

However, the game survives well. Like the Lord of the Rings, it may have been followed by enough others to lend a certain jaded familiarity with the ideas, but it still holds up as doing a good job with them.

But let's just take this as established and move on; it was a very good game and deserved its acclaim then, and it is quite playable and does not feel in the least bit retro or awkward now. Whatever you can say on the technical side -- low-poly models, simplistic AI, canned and repetitive dialog -- the game recognized these technical limitations and constructed story and level and character design and gameplay to make the best possible use of them.

In short, it largely avoids the uncanny valley problem by choosing characters that are already firmly deep in the trench. The Splicers' rote behavior becomes not emblematic of poor coding, but of their own befuddled brains. Similar can be said of the physical design of Rapture itself. Draw distance and skybox are hardly an issue when you are in a narrow corridor deep under the sea.

With that said, the rest of this essay is going to be about some of those elements of gameplay and design choices and what they say about evolving game design in general.

First off, and sharing a problem Tomb Raider 2013 had much, much worse, Bioshock may have too much in the box. It seems the accepted design that when you put RPG elements into an FPS (err..Role Playing Game elements into a First-Person Shooter -- I'm going to try to stay away from the technical and the acronyms but this is going to slide away from general readership regardless), you end up with multiple streams of weapon-and-health support.

As in every FPS from Doom on, you can pick up ammunition from enemy drops. You can find it just lying around the landscape (at least Bioshock is restrained in this -- in many games the random ammunition boxes become almost as ludicrous as the frequent health packs). You can also purchase it (a nice Bioshock innovation is that, contextually, vending machines for ammunition make sense in that universe.) And you can craft it.

Which is my first complaint, really. Crafting is cute, but when you get right down to it, it is a different vending machine. Or a different weapons drop. I have yet to see a Crafting system that let you make something truly unique (I can't really imagine how you could code it up!) So instead, this is just "expensive" ammunition. Which technically is not available to anyone else in the game world, except that they can and do use the same Crafting machines you do, plus -- to Craft, you need raw supplies, and you find those on enemy dead or in the usual supply cabinets.

So when you get right down to it, Crafting is just like purchasing ammunition with money you found on a body or hidden in a crate, only it is barter goods instead of cash. It is slightly more limited, but at the same time you don't get any real choice as to what you collect. You can't go out to the Rubber Hose Tree because you have everything else but that to make new RPG rounds.

At least they've avoided the extremes of Fallout 3, where you are carrying so many scraps of rusted metal and stale bread you have to rent a storage locker to put them in. Or the opposite extreme of Tomb Raider 2013, in which the open pool of "salvage" begins to suggest that the materials for strengthening a home-made stick bow are equally appropriate to bore-sight a 1940's machine gun.

But, in the end of it, you have three distinct streams of ammunition re-supply, and this starts to feel a lot less like gameplay and a lot more like a transparent effort to stretch the running time.

Bioshock has the sometimes-reviled "Vita-Chambers," which are an in-game hand-wave towards how your character is able to restore from a save point. I don't have a problem with them per se, except to say they don't get you anything. The in-universe explanation is ludicrous; you never see anyone else using them, nor does anyone in game seem to recognize that they are being used. Now, if Splicers camped the nearest Vita-Chamber, that might make a little more sense. But, really, you've just replaced one lack of explanation with an equal lack of explanation. Only one that is wearing a large blue-glowing lampshade.

Save points are an essential problem in game design. You achieve the desired difficulty through there being a real chance of failure. But failure cannot be punished too excessively or the player stops playing. I've had at least one game that required you to restart and re-load a saved game, and that was so onerous the game lost all fun.

On the other hand, something like Tomb Raider: Legends, where you almost-instantly reset to a convenient place (with all health restored), makes death such a revolving door suicide becomes a viable gameplay option.

Bioshock gestures at something better with the Vita-Chambers, in that they restore you with only marginal health and Eve. The escape-pod option in the old space game Escape Velocity plays out similarly; you lose your current ship and at least some of your reputation. In at least one version of the game, if you lost a battle to pirates you would have cargo and money stolen, then be left drifting without power for the further ignominy of having to plead with passing ships for a little help.

I could sort of see this being carried through in an FPS, where making a mistake and getting shot means you are shown in cut-scene somehow finding cover to crawl into -- or being rescued by locals -- and then restart play with most of your weapons gone and your health hanging by a thread, with your first task being re-supply.

Except there's that old punishment problem again. If the player avatar gets killed/incapacitated often, then the penalty can't be too large or take too long or the player will quit in disgust after the third or fourth time of going through it. And when you make the drawbacks of resurrection too low and/or the process too fast and painless, you take the bite out of death and take the challenge out of playing.

I played Bioshock through to the end of Arcadia on "Medium" difficulty, but by the big set-piece battle in the labs I was hitting the Vita-Chamber every few minutes. I would literally run out of the Vita-Chamber, attack the remaining Splicers with whatever weapon happened to be equipped at the moment, kill one, die, and be sent back to the Vita-Chamber. Lather, rinse, repeat until all the Splicers were finally dead. It really took the thrill out.

At least a proper save point means you face the same challenge and have to get through it at least once from the top. You can't just chip away at it. (But then, this is the theory behind why some games limit the number of save points -- to keep you from save-scumming through a really tough section. Me, I think the choice should be made by the player. You should be able to balance perceived risk and reward, weighing the time spent saving the game against the pain of having to go back to the previous save point.)

Really, this is an extension of the Health Pack problem. At least, in the Bioshock universe it makes in-game sense why Health Packs work. And why guns are relatively weak; everyone is all Adam'd up to be much less vulnerable. And this is why rocket-propelled grenades and flame throwers are in the vending machines -- and they kill just about as effectively as they do in the real world. Being able to shrug off a couple of .38 slugs does not translate into resisting two pounds of explosive warhead!

Bioshock also partly answers the Hero Success problem. Although it seems to you that you are pretty much a beginning Splicer -- literally just off the boat, access to the same weapons and Eve that they have -- you can believe you are doing well even early on because you aren't, at least, batshit insane. The Splicers are so easily distracted, prone to fratricide, and otherwise just as likely to get themselves killed as to successfully attack you, it is not entirely unbelievable you make it through them alive.

Later on, of course, you realize you are Jack Ryan, and the turrets are shooting at you less, the cameras take longer to find you, and every safe, door, and vending machine is easier for you to hack than it is for anyone else except dear old dad.

Still, it does raise the question: if the Splicers are so into trying every plasmid they can inject, why do so many of them shoot at you with guns, and why do even the ones with plasmid-based attacks specialize in only one (and aren't even that good at it)? Again: they've all got rubber hose and distilled water in their pockets; why aren't they at the Maker-Space-o-Matic making their own frag grenades?

For some reason, stealth and environmental kills feel more "realistic" in these sorts of ordinary-man-fights-off-skilled-ninjas situations. Even in the movies it seems to work. Put the hero in a "Draw, pardner" situation and it feels unlikely that they win the shootout. But let them come up with some stupid contrivance with a cart, a robe, and some straw and it feels reasonable. Relatively speaking.

Tomb Raider 2013 would have felt a lot better if that sort of thing had been made the rule. And they had the space for it; Lara shows an aptitude towards climbing and she's small enough to get into spaces the Solarii can't. It is established in several places in the game that the mooks simply don't expect someone to be able to approach from a certain direction, and otherwise have their guard down.

But then the game throws it all away by letting you, often forcing you, and apparently expecting you to do stand-up gun brawls with upwards of a dozen mooks firing automatic weapons. And you hose them down with an identical weapon and no explanation of your superior standing. Half-life got the same way, but at least had the hand-wave of the HEV suit. That, and the contrived circumstances that let Gordon constantly appear in the least expected place.

Bioshock nearly falls in the same direction, particularly after Jack downs so many plasmids he can take out a Big Daddy with a monkey wrench. Fontaine should be nearly unstoppable. Instead he's a typical boss fight, and easily taken down with a few grenades.

That said, Bioshock takes the hyperspace arsenal problem and doubles it. I found it frustrating and difficult even after some creative keyboard re-mapping to be able to actually reach the right combinations of weapons and plasmids amid the tumult of a battle.

The game gives you too many choices. It seems to expect you to be using them, too. Adding to the various tonics, the plasmids, the ammunition choices, the various strategies of distance, melee, fratricide via plasmid, trap-laying, and hacking, it also has an entire side mechanic of Researching.

At some point in the middle of combat -- while watching your three and nine for flankers, keeping an eye on Eve and Ammo, and of course trying to get the right weapons selected -- you are expected to whip out a camera and take a nicely framed picture of your attacker, which, after you've taken enough, will tell you which kind of ammunition does them the most damage. (The game handles depletion badly and inconsistently, too. When you run low on Eve, you re-inject in a time-consuming animation that can get you killed if you were just about to swing a wrench in close-in melee. But if you run low on a specialty ammunition, no reload occurs until you page through all your ammunition types to find one you still have in stock.)

And, yes, Splicers do have typical video-game sound signatures. So, technically, you could hear a Splicer singing to themselves from around the corner, select and load up the optimum weapon combination, then jump at them.

In practice, if you do so you'll find yourself staring at a security camera that you need to rapidly switch weapons and/or plasmids in order to deal with, and the Splicers travel in mixed packs anyhow. Which is why most players seem to settle on one or two weapons and just use those.

Which in turn means that all that foofraw with U-Invent stations and weapon drops and cash drops and so forth is just annoyance, because you can never seem to find ammunition for the weapon you've decided to settle on. You don't need more variety, and you don't need upgrades for the weapons you never use. You just want a way to reload the one you are comfortable with.

The much-touted "Moral Choice" is hardly that. It is presented pretty clearly from the first moment as "be relatively nice" or "eat kittens." The closest it comes to nuance is you only earn the good ending if you avoid eating even a single one.

I differ from other reviewers, however, in saying that there is no ludonarrative dissonance here. I think Jack is faced with essentially the same choice the player is faced with. The attraction of new plasmids against the moral repugnance of harming the Little Sisters.

I think the closest this comes to being true is that it is expected the player wants to fight Big Daddies. They can't resist the challenge. Internally, it is a lot less justified for Jack to tangle with them. Me, I think Jack and I pretty much agreed on the big guys. In this nightmarish world full of mad Splicers killing everything that moved, they were the only thing that didn't mess with me. They just tromped around, groaning, minding their own business. Sometimes Splicers would take them on, and they'd put a stop to that. Which I was also totally in favor of. And there's a moment in the animation where the Little Sister says "Come along, slowpoke," and the Big Daddy groans and adjusts its heavy tank and regulator before lumbering after her. I can totally sympathize.

Given the choice, I'd leave them alone. Sure, Tennenbaum wants you to "rescue" the Little Sisters, but from where I stand, they seem happy enough and are pretty well adjusted to living in this freaky place. Being a little girl without spooky powers and a giant robotic-appearing guardian does not seem to have good long-term prospects in Rapture.

The reasoning to do otherwise is in-game, at least. Jack is put in the position of "the only person who can save us" and has to selfishly choose to gather Adam in order to be able to take on Fontaine. And that means attacking the Big Daddies, no matter how much that makes the Little Sisters cry. But at least you can still choose to rescue them instead of "harvesting" them.

(To add insult to the simplicity of the false moral choice, you only get half the Adam if you refrain from killing the little girls. And if you refrain twice, Tennenbaum shows up with a magical teddy bear containing more Adam than any three harvests. So it isn't a moral choice so much as a test of how stupid immorality can make you. One way or another, you'll have more plasmids than you know what to do with before the middle of the game.)

This is also a complaint about the RPG element. Like at least one other game I've been naming a lot recently, Bioshock apparently gives you options to specialize your character growth, but practically speaking you'll end up with all the slots filled soon enough regardless of the order you choose to fill them.

I chose to specialize in hacking, but there are a lot of tonics you find if you explore (especially in the earlier levels, it is well worth checking out every nook and cranny; and after you get telekinesis, it is worth keeping an eye open for extra ammo and Eve vials half-hidden on remote ledges or behind grates). Basically, you fill up your slots with random stuff long before you can make reasoned choices as to purchases.

And even then, hacking is almost entirely engineering tonics, so those slots would go unused if you didn't buff the hacking skills -- there's no trade-off involved, no loss you are taking to push those skills. So not really RPG in that way.

With that all said, the elements basically work. They are all defensible within-game, making a nice change from, say, games where clips of ammo seem to be lying around the halls at random, and stale food and old potions found on the damp and dirty floor of a dungeon are perfectly safe and indeed urgent to imbibe immediately.

The hacking mini-game is innocuous enough and it is a lot of fun to hack not just turrets but first aid stations and let Splicers blow themselves up. The ammunition choices don't seem to make a critical difference and there are far too many weapons, but even though only a small number of plasmids are worth keeping at the ready, it is awful fun to mess around a little and use some of the more outrageous powers to give a Splicer a bad day (the game thoughtfully gives you an episode in which your genetic code is going amuck and you basically get handed a random plasmid to experiment with every minute or so.)

The atmospherics are wonderful; the early stages of the game are quite spooky, and the extended episode with the mad artist fellow brings that spookiness right back up. And for those who empathize with the Big Daddies, there's a point late in the game where you can really indulge that feeling. The various audio diaries are a hoot and even the Splicers are fun to listen to (for at least a little while).

And the ending is moving, and sufficient. I'd like it if it cut to titles instead of dropping you back to the main loading screen, though; that moment really needs a quiet time to follow to let it sink in a little and move through the catharsis.

Family, indeed. Another good thought for the holidays. Even if you do spend some of it behind a computer...and under the sea.

The Fundamental Attribution Error

Many people don't code. According to some academic studies, a significant number can't code; there is some essential conceptual block they have yet to hurdle. Of course this is controversial, and complex, and much-discussed.

I was just browsing the usual blogs on this cold morning and The Daily WTF had dug up the old fizzbuzz problem sometimes given as an interview question for new software developer hires. And an intriguing statement came out of one of the following links; that poor programmers didn't stumble on just the big, complicated problems, but on little problems as well. The problem seems to be with what some of us used to call thinking like the computer (the oft-repeated "it does what you said, not what you meant" problem).

And I can't help thinking there may be links to the tech/non-tech dichotomy as well, plus one could throw in the oft-observed (but less well studied) apparent statistical correlation between programmers and people on the autistic spectrum.

One of the articles I've looked at suggests that the problem may lie in modeling. That non-programmers attempt to map the behavior of a system to their existing internal models of the world, but programmers are willing to accept and work within the constraints of the machine's unique (and often counter-intuitive) internal model instead.

This seems to me to have some application to mathematics, and even a few soft sciences as well (ethnomusicology leaps to mind -- a difficult field to write academically in because you simultaneously have to work within a culture's unique mapping of the sound space, but document what they produce within the not-always-perfect mapping of the shared language of western tradition.)

The other thought that springs out is that rules-based behavior -- which I have noticed among non-technical people and commented on before -- makes sense within this mapping problem. In a rules-based system, you only have your internal model of the larger universe. Instead of abstracting and intuiting the internal state of a device or system, you establish a narrow, filtered connection to it.

Rules-based behavior -- what I also call magical thinking -- is a list of empirical solutions. I'm going to go back again to a simple system I observed at an old workplace. There were work lights in the wing that needed to be turned off during the play. Since the room could be approached from two directions, the original electricians had put in a two-way switch; you could switch on or off from either end of the room.

The tech approach is to model the system, usually by direct experimentation (aka, flip the switches, figure out how it works) -- or, more importantly, to figure out an internal model: the truth table of that set of switches. Rules-based behavior wants to map a single action (press this switch) to a desired result (turn the lights off). The user of a rules-based approach needs a system that can be abstracted down to that simplicity.

The first attempt at documenting the rule was to label the switch. "Up" was equivalent to lights "On." The map matched the territory -- until the next time someone flipped the switch at the other end of the room. Now the map didn't match the territory. The solution? Re-label the switch, and tape over the other switch.

As a sound engineer, or as a general stage technician, one sees this sort of thing all the time. The usual wave-off is that you are working with people who don't have the time or the interest to learn how the gadget they are abusing actually works. So we put up with equipment that breaks at inconvenient times (like in the middle of the show). But I've got a growing belief that this isn't a good picture -- that our model is incorrect.

Because we don't require the musicians on stage or the stage manager fumbling with a headset to understand the internal electronics of the gear. Sure, that may be the framework the technician used to arrive at their understanding. But it isn't necessary to get through a couple of years of technical education and basic electronics to construct a proper model of the underlying system.

What we may be dealing with, in short, is that same non-programmer problem; people who are unable to make the mental leap to modeling a system as it is, instead of trying to cram its behavior into their established system of beliefs.

I've often noticed and remarked on the difference in behavior between the tech and non-tech, the Morlock and Eloi of our overly-stretched and far-too-contrasty comparison here. Confronted with a new piece of technical equipment, the Morlock plays with it. The Eloi approaches tentatively, and only presses a control when they are assured that is the right button to press. When questioned, they may say they are "afraid of breaking something."

The Morlock scoffs that gear is harder to break than that. But that ignores two very important caveats behind their approach. The first is a robust general model formed from similar pieces of gear that informs them of the safe envelope for experimentation. They may flip all the switches on the surface of a new mixer safe in the knowledge that they probably can't break anything, but they know better than to fiddle with the small black switch cryptically labeled "120/240" on the back of the thing!

Plus, the Morlock knows that if they break it, they can probably fix it, and if it was that easy to break in the first place it probably needed to be replaced anyhow.

The Fundamental Attribution Error in psychology is, basically, attributing the behavior of others to internal factors ("He is such a rude person") whereas one's own behavior is seen as driven by external factors ("Sure I hung up on him, but he asked for it.")

Assignment is the subject of a fundamental error in programming (and, in C-like syntax, a typo that everyone makes on a regular basis without requiring the influence of a poor mental model). The same interviewers who were startled at how many would-be programmers could not formulate an approach to a test problem in the fizzbuzz class were not particularly surprised by educators who found that up to eighty percent of first-year comp sci students could not correctly answer:

a = 10;
b = 20;
a = b;
if (a < b) {
    // does this line run?
}

(The more usual form of this mistake for most of us is "if (a = b) {..." -- aka typing "=", assignment, when we meant to type "==", equality. Fortunately, the compiler tends to catch this kind of error!)

Apparently, though, and according to at least a small number of academic studies, the error the students make is not typographical but stems from an incorrect or entirely missing internal model of how assignment works.

And as for fizzbuzz? I left it for last, because it is also a truism that if you bring up fizzbuzz around any group of programmers, the discussion will instantly devolve into better (or, at least, more esoteric) ways of programming it. There is something in the programmer that cannot resist the urge to solve the problem -- well over the urge to discuss the context of solving the problem.

Yes, I immediately thought of my own. Using modulus, of course. But even if I had tackled it five years ago, before I realized the sheer power of "if (a % 5 == 0) {", I could still have come up with a solution. It is a matter of matching the problem space with the space of the tools at hand. If nothing else, "if (a == 3 || a == 6 || a == 9..." would do the job, even if frighteningly brute-force. (Or, for the true glutton, a "switch/case" stack 100 items long...)

And I don't know if this idea of being unable to adopt the internal model of the machine is a reasonable explanation for why some people don't seem to have the tech gene. I do know that trying to apply a (small) set of rules is not going to get you through a fizzbuzz problem.

Which may be the simplest way of looking at a thing that may be over-stated and over-complicated: the Eloi problem is, in short, trying to simplify a complicated world into too small a set of rules.

Meanwhile, I'm spending Christmas morning sitting at home alone thinking about conceptual problems in computer programming. And if that isn't suggestive of a position somewhere on the high-functioning end of the spectrum, I don't know what is!

(Truth is -- I'm listening to Christmas-themed jazz tunes, the sun is coming through my window, I just got off the phone with Mom and I'm thinking about re-heating that pumpkin pie, and I'd say that life is pretty durn good.)

Saturday, December 20, 2014

...Shining at the Frankenstein Place

My design goals were in error, and most of the math was thus based on the wrong assumptions. The revised target was for a light that would be visible primarily during black-outs.

Which is perhaps calculable, but if you tried to point Wolfram Alpha at figuring the detection threshold for an actor who is only a handful of seconds removed from full stage lighting...well, I think it would spit up.

In any case, a 20ma LED, even run down at around 6-7ma, is plenty bright enough. A 20-degree viewing angle is a bit too narrow for the expected off-axis viewing; 40-60 degrees is better.

Also, even more importantly, I "wasted" several hours drawing up plans for laser-cut acrylic when it turns out there are battery boxes available for about a buck fifty each. "Wasted," because I was at rehearsal anyhow and getting paid for that.

In any case, the project is spec'd, tested, and documented.

Working within parts available at the local Radio Shack (which, despite the boarded-up windows, appears to still be in business), the unit cost is about $2.50. Purchasing in bulk through DigiKey, I can get it down to $1.75 each. And build time is minimal.

As with most of my work, this is open hardware. The full Instructable is up, of course; at

Wednesday, December 17, 2014

Lux Life

I just got a priority project dumped on me. I'm trying to figure out if I can build a better stage edge light.

The client goal was to break the existing price point and deliver something cheaper. Unfortunately, that price point is $11 and I'm not sure I can get significantly cheaper.

The one thing I can do is provide a lot better performance at that price point.

The existing commercial product is dead-simple: a 9v battery clip modified to hold a naked 4ma LED. Very simple, small, easy to maintain...but that's the best of it. It is undirected and extremely lossy; 9v "transistor" batteries are much more expensive than AA cells, and since LED forward voltage is under 3v, most of that extra voltage is being wasted as heat in the dropping resistor.

So it is obvious to design around AA/AAA, especially as those already exist in profusion in the theater world, and are better-established in the rechargeable options as well (why I opted for 3xAAA as the standard for my DuckLight).

The LED is a little more of a question. 20ma is easily available in that size, but what is the actual desired intensity? After trying several different estimation models, including illumination, visual astronomy, and f-stop estimation, I've zeroed in on assuming the LED will be behind a diffusor that in turn is tucked into a hood (or is otherwise blocked from direct audience view).

Using a whole bunch of loose assumptions -- calling typical lux at the front edge of the stage 400-600, calling the albedo of dance marley .1, and ignoring the anisotropic shading model (which will come back to bite me, I know) -- I end up with a desired output of around 100 lux before the self-illuminated surface can be clearly discerned from the background illumination.

So the smaller the surface, the better, right? Well, now we switch to the angular resolution of the human eye, which works out to about half a centimeter from the back of the stage. Except that resolution is actually all over the map when comparing relative magnitudes -- we can see stars, after all, and we don't have the optical power to resolve those.

Still, the numbers are converging around a 1cm^2 diffusor driven by 5-20ma as being "probably" visible from on stage. Omitting glare, of course. Plus the actor or dancer is facing right at sources (the face light) that are several magnitudes greater, and the instantaneous magnitude range of the human eye is only 3-4 f-stops.

This is going to have to be calculated when I work on making a light that can be seen against the glare of the shin-busters to allow a dancer to run off stage. (Well, since they normally navigate by the shin-busters, the real calculation is how much their eye will dark-adapt during the brief moments of a black-out as they race towards a red blinking safety light in the wings).

Unfortunately the battery numbers are not coming out where I'd like them. The competition boasts 125 hours (my calculation confirms this is possible for their setup), and even that won't last the week. Running full-out at 20ma, even with the much greater capacity of "penlight" cells, I can't stretch to the length of a run.

So the optimal design appears to be 2xAA, which has a slightly larger profile than 2xAAA but will last for ten days. That means the user places fresh cells in during prep for the weekend's performances, and leaves them on over the weekend. That is the best option for mechanical simplicity, cost, and limiting the failure modes (both mechanical and human error).

(I may still go for AAA, which gives a theoretical life at max output of four days. The 4x battery performance I'm getting out of penlight batteries is balanced by the 4x output power I'm aiming for in the design).

For simplicity, the diffusion may be simple cavity reflection; the "hood" is also the diffusor. Which does raise the question, however, of whether I want to design a tilting head so the same unit could be used for side lights. Actually, though...there's no reason the light output has to be axial with the batteries. It could be orthogonal to their long axis. This isn't a flashlight, after all!

My best guess at this point is a laser-cut acrylic "housing." The quotes are there because I don't need to close off the battery compartment. I just need to keep the batteries in place. So an open framework with a flat bottom makes the most sense (the flat bottom designed for double-stick tape, screws, or magnets to fasten it in place...provide pockets for supermagnets in the design.)

To get a true picture of the cost, then, I have to upload files to Ponoko. Even if I do intend to laser them myself at TechShop, I am entering into this on a non-profit, open-hardware basis.

Now what I really need is a name (so I can refer to it in project notes and folder names more efficiently). "mouseLight" works for me. Especially since I am working out the details for this over a production of "Nutcracker."

Monday, December 15, 2014

Flash! Aaaaah....!

Between shifts over the run of Nutcracker I've been getting a lot of work done on my RGB light. Finally tracked down a constant-current driver that seems nearly perfect; the AMC7135.

Back up a hair. LEDs are current devices. Once forward voltage is achieved, current flow increases near-exponentially over only small increases in voltage -- a runaway process leading to overheating and breakdown. Thus current limiters are needed.

The most basic is a series resistor. The resistor is chosen so that the current through it (calculated assuming the voltage drop across the LED is constant) is the same as that desired for the LED. Pick the right resistor and the loop current remains under the limit. The problems are two: all the voltage above the necessary forward voltage of the LED is transformed to heat in the resistor, and the resistor is only of the correct value when the supply voltage is stable.

The better regulation is essentially a transistor that uses the current flow in a feedback loop, thus holding the current at a desired point. With the right components, these circuits will compensate for a range of voltages and, as well, react thermally in the correct direction to compensate for the changing behavior of LED and circuit as it warms up.

However, the typical packages I was finding were a couple bucks each and required up to a dozen external discrete components; capacitors, resistors, signal diodes, zener diodes, even inductors. Which was a lot of parts to be adding to a small board especially when I wanted to control three channels or more of LED.

But the AMC7135 is the ticket. Unlike many current limiters, it is preset internally to 350ma. It requires essentially no external components. And it may be a usage that's out of spec, but it can be successfully PWM'd. There are in fact high-end flashlights that use this exact circuit.

Again to step back; LEDs require a minimum voltage, and when this voltage is achieved will consume as much current as you let them. This makes them poor candidates for resistance dimming. The best way to control their intensity is to turn them on and off quite rapidly. By varying the ratio of time on to time off, you achieve a nice control of intensity.

Do this to red, green, and blue LEDs and you can achieve a decent range of colors as well.

Which is what I've been trying to do for over a year; to create an extremely cheap, small, high-power colored light that can be used inside props, on costumes, and in other theatrical applications. There are a lot of commercial packages that are similar, but few offer two of the things I find important for this application; tight control, and battery power.

Control turns out to be the sticky issue. What I really don't want is what all those cheap LED toys make you do: hold down a button and scroll through different colors to get to the one you want. This is non-optimal for theatrical use. It should be able to be set so that when the actor (or the light board operator) hits a switch, it lights up the right way the first time.

Proof-of-concept; ATtiny controlling two channels of a 3W LED via darlington transistors, on a 4xAAA battery pack. When you hold down the button it does a candle flicker.

There are also commercial products that allow you to dial up a preset -- from infrared controllers to full DMX-512 wireless solutions to the pretty-durn-close BlinkM. Where all but the BlinkM fail for me is that they are fiddly (or expensive). We don't want a lighting controller gaff-taped to a nearby wall with sticky tape pointing at the right preset. That's just another kind of over-complicated workaround. Again, we want a user-programmed preset. After the light is set, it just plain turns on.

The only real failings of the BlinkM for me are a lack of power for theatrical use, and some lack of flexibility in the existing controller software. Which might just be a conflicting-paradigm thing (they went with a drum machine model for animations; I went with a flexible event-based system for mine).

The BlinkM from ThingM.

So...I've had the high-power, hard-programmed RGB working. I had it on a costume in a show, and over the show it did exactly as required. The problem is, setting the desired look was a programming task. It was done with the skills, the knowledge of the circuit's behavior, and the programming tools I own.

The "Wiz" jacket, switching between free-running light animations when commanded remotely from the simple GUI shown on the laptop beside it.

What I want is for anyone to be able to purchase a kit or make the open-hardware light, set it to a theatrically useful look (oil lamp, say), and be done with it. But I am having no luck making the power I desire accessible to the end-user I envision.

Also, on the hardware side, if I make each individual light capable of simple stand-alone programming, it increases the unit cost. It makes too much sense to offload as much of the hardware as possible, even if that does mean the end-user needs a minimum of two things to get started instead of one.

Hardware USB is the most transparent option. Putting it on each individual light, though, increases their cost excessively. Plus the various solutions are non-optimal in different ways. USB is a complex protocol, and there are no all-platform, open definitions for producing a virtual COM port from it -- which is what is needed to allow one to write a nice user-friendly front-end (or even to surf into the device via Terminal and program it on a command-line basis).

The older Arduinos used a translator chip from FTDI to handle interface to the user's USB port -- which also required a driver to be installed to show up on some platforms. There are AVR chips that are USB-native, but like the FTDI they are surface-mount devices and add considerably to the cost and effort of putting together each light.

HID is actually a lot easier; you can bit-bang a class-compliant HID device with somewhat out-of-spec USB signaling, and it will still be recognized by most computers as a keyboard. There is even a clever hack that translates extra characters on this virtual keyboard into a fake serial port. But it requires multiple pieces of helper software and probably doesn't work on Mac yet.

Clever Arduino compatibles like the Trinket get by because the programming port used by avrdude is not per se a serial port. Basically, you can program an AVR using another non-USB compliant AVR (or even program itself with a clever enough bootloader) but it can't be made to show up as a serial port.

So, unless I want to write a wrapper for avrdude that presents a simplified GUI to the end-user but invokes the avrgcc toolchain behind the screen (too many pitfalls there I'm afraid), the best option is to move the expense of this USB-to-serial translation off the light and into a second board.

The BlinkM does it quite cleverly. It is programmed via an I2C connection, which is done using the soft serial code on an Arduino. Simply put, you plug the header of the BlinkM into matching pins on an Arduino header, run the provided Arduino sketch to turn the Arduino into an I2C-to-USB translator, then run the BlinkM programming software.

Somewhat similarly, once you have the complete Arduino software up and running, you can use an Arduino (with or without invoking soft serial -- in the old days it was done just by prying the ATmega itself out of the socket) to patch you through to the AVR you want to program. You could even use the Arduino IDE to write and upload the program.

And that's what I'm going to do. Only with an additional wrinkle. The power user can and should write their own software into the light. But for someone who just needs to specify the exact color it boots up into, or how long it takes to fade out when turned off, I am going to provide a space for user presets in non-volatile memory. So the end-user will use either an FTDI cable (which is widely available in Arduino circles) or an Arduino with the appropriate software to plug into the light node. They will try out different looks and then commit the look of their choice to one of the slots of user memory.

If and when I get that far, I can wrap this in a proper GUI. But before then, this behavior of going direct to looks or presets and setting the boot preset is also exactly what I want for inter-node communication.

Because the other thing I want to put on these lights that doesn't exist in any cheap, easy-to-use, prop-and-theater-friendly option is remote control.

The proof-of-concept for an accelerometer-based effect; this combined an accelerometer, XBee module, and AAA battery pack to detect a throwing gesture and play a sound effect from a remote receiver.

At the base of it is a protocol to let each node communicate bidirectionally. I'm doing it this way, instead of using existing buses like I2C, for transparency as I program. The baud rate I envision may be low, but I can still afford to send ASCII nouns rather than cryptic hexadecimal sequences.

I envision a complex prop that might have several lighting nodes each playing canned animations in free-running mode, and a single controller just telling them when to switch on and off. This is also how we will interact wirelessly; you won't control a candle flicker in real time, you will command it to light, then blow out.

A sticking point at the moment is the RF option itself. I'd like to provision each lighting node with the basic circuitry, omitting only the expensive transceiver. But between level shifting and LDO and of course a couple status lights, it may add too much to the individual cost and complexity.

This is worse when considering the transceiver options. Now, there are a lot of cheap, off-the-shelf 434 MHz, Bluetooth, etc. options. But all of them are inadequate for theater. They offer fifty feet of transmission in line-of-sight. Theater requires shooting two hundred and fifty feet through set walls and a fifty-member dance ensemble.

The leading options I have now are the Hope RF chips -- particularly the RFM69W and HW chips -- and the XBee series.

Trouble is, the Hope chips appear to require four digital pins to communicate with them properly, and the code to navigate their communications protocol eats up a full 8K of program memory. Which makes them pretty much impossible to run from an ATtiny, and pushes them up into "requires a late-model Arduino to operate" territory.

The XBees are simple and transparent and I've been using them in actual theater situations. Their major drawbacks are frequency and price. They are 2.4 GHz devices, and shorter-wavelength radio has poor penetration of obstacles. The "Pro" models, which have the necessary power to punch from light booth to backstage, are upwards of forty bucks each.

So down the road, I hope to get the Hope chips working for me, but for now I'm going to have to hope that only a few of the more expensive transceivers are needed by the typical end-user.

And as for interfacing with the worlds of DMX-512 and MIDI: I'm holding off there. I see this as happening via software (i.e., a laptop with a DMX-512 dongle) even though it would be a lot more turn-key as a single stand-alone DMX-512 module. But getting one of those up to spec and robust would end up costing pretty close to the commercial models that are already available. So not really something I need to explore at the moment.

But, at last, I'm nailing down the specs on the basic light module. I should be able to get the first order out to the fab house for printed-circuit boards this week. Maybe even assemble one in time for my coming lighting design.

Here are the major parts. Power supply 3.7V to 5V -- that is, lithium poly, a 5V wall wart, or 3x "penlight" batteries (AA or AAA). The latter is my preferred choice for theater due to the ease of swapping in freshly-charged cells (rechargeable or otherwise).

I have a bunch of 3W RGBs to start with, but I've found RGBWs available for about twice the price, and those will provide a much nicer mix for typical theater applications (candles, lanterns, headlights...) Unfortunately RGBA or RGB + "warm" white (4,000 K) are harder to come by at budget prices.

The buck driver on the right is there mostly to illustrate the SOT-89 surface-mount footprint of the AMC7135s, of which I'll be provisioning four. A nice additional quality of these things: you can run them in parallel to drive higher-wattage LEDs.

The controller chip looks at this point to be a 20-pin ATtiny, probably the ATtiny861A because of its 8K of program memory. I do love the 25/45/85, but there just aren't enough pins. Even the ATtiny84 is pushing it for sufficient pins -- and doesn't have the nice UART of the larger, more expensive chip. The various chips to the lower left are standing in for that footprint. I'm going through-hole and socket for ease in assembly and replacement.

I haven't decided on the header; whether to go with the 2x3 ICSP type or the 6-inline format shared by the FTDI cable and other similar devices. And I'm still pondering whether the XBee will be on the main board or have a daughterboard (for which the Adafruit board at the top right is standing in). The trade-off, as always, is between a compact footprint and lots of out-of-the-box functionality, versus price per item. Because I have to anticipate situations in which you'd want a dozen of these things scattered around as part of a more complicated lighting effect, and that would be a lot easier if I could keep them under $10 each.

Thursday, December 11, 2014

It Ain't the Heat, it's the Humidity

Actually, it is both.

A very simple bit of physics. The speed of sound changes with the state of the air -- chiefly its temperature, with humidity and altitude playing smaller parts.

The absorption of sound by air (that is, the way acoustic energy falls off with distance in addition to the geometry of inverse-square dispersion), also changes with humidity, and this is frequency-dependent.

But let's take the first. Assume you EQ'd your house until the speaker response was nice and flat. Well, when the audience walks in, you've got a problem.

What is room EQ? Two things, primarily. There's the response curve of the speakers and amps. And there's the response of the room. This latter is due largely to multipath reflection. Sound comes from a speaker. It travels across the audience, hits the back wall, reflects back over that same audience. Where the two paths meet, parts of the waveform will be in phase and combine, others will be out of phase and destructively interfere. Thus, peaks and valleys in the response -- the most prominent of which can be related directly back to the dimensions of the room itself.

So what happens if the speed of sound changes? The travel times along the two paths increase or decrease. But they do so as a proportion, as a percentage, of their previous values. Which means the phase relationship changes. Which means the peaks and valleys are not in the same place.

Which means that 2 kHz peak you notched out of the mains is no longer there, and you are taking a 2 kHz cut in the middle of the sound. And there's a new, uncorrected peak at 2.5 kHz. Or maybe 1.5 kHz -- depending on how the speed of sound changes, and how it relates mathematically to the critical dimensions of the room.

Oh, yes. And the temperature and humidity do change. Not only does each human body in the audience pump 100 watts of heat into the air, each gives off water vapor, raising the humidity. And that's not the worst of it. The absorption curve changes radically between 10% and 30% humidity. Which means starting with a dry room and introducing warm sweaty bodies will have a much greater effect on the sound in your venue than you might expect.

And that's before we even get to considering the frequency-dependent attenuation that also takes place over that critical humidity curve. Which -- just to put icing on the cake -- is most prominent at 12.5 kHz; right in the middle of our frequencies of interest.

A Perfect Storm

There's a renter that comes into my usual theater space every December. We typically have 3-5 technicians on our overhire list, so we split up the shifts to cover the gig. This year, there's just me, and I've got the entire thing. My hours from Sunday were 14, 13, 12...and then a mere 6 (plus two hours in a meeting across town on another show).

Over the same period that protests have been marching through town tying up traffic, breaking windows, setting fires, and attracting news helicopters that sit over our apartment loudly clattering away until the wee hours of the morning.

As the protests finally wound down, a record-breaking storm moved in, rattling windows all night, knocking down power lines, and flooding half the streets.

That's a twelve-hour shift of climbing ladders, by the by, with maybe fifteen minutes to grab a bite (no lunch or dinner break). And then little sleep due to riots and weather. And little food due to being almost broke.

And you know what? I feel great.

Pan With a Real Handle

...which was a mildly clever pitch I heard some years ago from a pan-handler in The Haight, near the "pan handle" of Golden Gate Park.

Anyhow, I'm getting into a lighting design now. Nice little space, about twenty dimmers to play with (but some LED pars that should take up some of the slack).

And once again, I learn more about the goals for a circuit I'm designing when an actual application comes along. There's an old-style radio in the show and the dialog makes a point of that nice tube glow.

Seems like a good application for the Duck Light; dial up a good "glowing tube" red-amber, rig it to a button....wait. The current version is color-programmed off-line. There's no provision for real-time adjustment of the color.

Right. So I really do need to set up a software chain that, at the least, allows real-time dialing of a color selection and pushing that into permanent flash memory.

And in the nature of this play, I'd really want the effect to be both remotely controlled, and DMX controlled. I had envisioned running complex effects off a laptop, but for this show, it would be nice to have a modular, turn-key, DMX-512 solution instead.

So this is once again something the BlinkM already does, and one better: it has a primitive sequencer built in, and uploads fresh code via its own custom GUI-based IDE. It makes it nearly trivial for the end-user to select colors and simple animations that will operate in free-running mode.

I'm still caught on how to offer programming flexibility without the overhead of FTDI or other USB toolchain -- since I'm not really up for adapting any of the open-source serial-over-USB code bases myself!

Thursday, December 4, 2014

KP Duty

Spent a full day at TechShop milling, grinding, and most especially filing. It looks like all the parts from the parts kit will more-or-less fit:

The shroud retaining lever fits, but the shaft was sheared near where the nut would be. Right now it is a tight friction fit, but if I want to be really nice, I need to cut off the shaft, press-fit a new one, and thread that for a new retaining nut.

The magazine catch release appears to fit in the existing slot, with the pin being press-fit. At the very least I have to re-drill the holes, as one got slagged over.

The charging handle appears to be assembled through the gun, with the bolt catcher also pinned on. Sounds very fiddly to do! (I don't even know how you can press the -- missing -- pin in once the side rails are welded on).

I just figured it out. There's a hole in the side plates that is drilled slightly larger than the retaining pin. Trouble is, this requires moving the bolt back slightly out of battery -- something no longer possible since I welded it in. But I should be able to just mill the slot a bit further and make it work.

The sight was wrenched off but even straightened, the remains of the rivets do not appear to be located properly. So I'll need to grind those off, possibly re-mill the slot a little to achieve a proper fit, then drill new holes. Which I might want to tap, or use soft brass on, so the next person along doesn't have to lever the sight assembly off.

The trigger assembly is held at the front by a block with a notch in it; only the stub of this block remains, as the rest was in the direct path of the cut. So I have a few hours yet -- among other things, cleaning off the cosmoline (or whatever it is) on the remaining original parts so they fit and move properly.

On the receiver itself: it is never going to look pristine, and I spent all morning with diamond bit and small hand files just re-shaping the barrel lugs after my last attempt to fill some of the remaining voids. But at the same time, it is starting to look decent and it might just be worth trying to fill the two worst gaps; the remaining slot in the side plates, and the small gaps to either side of the end cap. The latter are visible when the weapon is assembled.

To do the latter, I'll want to lathe up a new backing plug from the last of my 1-1/4" aluminium rod. Although since the bolt is now a permanent part of the rebuilt receiver, there's no good reason to keep any more of the rear clear than is needed for the rod that is part of the end cap. The big downside is I'm sure to damage the threads again, and have to grind and file those some more.

The side plates are a bit more troublesome. I had a long and difficult time trimming weld metal that made it past the end of my last set of backing plates, so I'm leery of welding near the tube. But pretty much, it would mean milling a chunk of aluminium plate to stand in for the trigger group, and probably milling into the existing side plates to provide a slot for fresh pieces of steel. The MIG will fill a lot, but even with a backing plate, where I fill is where I'm going to have to go back with mill and grinding bits to clean out the slot again.

I'm reading up on prep for gun bluing now. I have both brown and blue and should be able to get a nice aged patina out of them. I suspect very strongly I'm going to have to spend 10-20 hours continuing to file things smoother and smoother, then graduate to finer and finer emery papers, then finally to steel wool and wire brushes. So there's a bit of labor involved, still!

Tuesday, December 2, 2014


I welded today.

Took the SBU (Safety and Basic Use class) for the MIG welder last week. Went in today and pulled a bunch of scrap (mostly battered welding chits from previous classes) from the bin and tried to remember what I'd learned in class. I'd had a lot of trouble in class for some reason. I think because previous experience in stick welding had me used to eyeballing the arc, and the bulk of the MIG electrode holder means you really can't do it that way. So fifteen years of muscle memory was in my way.

I messed with the dial settings and made lots of nasty spots and sputters for about an hour, then it started to click. The "bacon" sound came almost immediately upon achieving a good spark coil sound (the two sounds are superimposed; you can hear when you are getting a good weld). And once I found that groove, I could pretty arbitrarily change the electrode distance and travel speed, pretty much adjusting one to the other.

It is a whole dance when you are doing it right; maintaining the arc, building and pushing the puddle, achieving the right blend of penetration and fill, controlling the heat build-up. All through constant movement whilst maintaining the correct distance and travel.

Put rougher scrap on the table and started welding into corners and filling voids. And it was going well enough that I jumped right up to working on the Suomi again.

It is nice to finally have the receiver in one piece. There's a lot of grinding and filing before I can be sure I got the measurements right, though. And I haven't decided on the best way to tackle that hole behind the magazine well. I could mill a backing plate out of aluminium plate and fill the remaining void with weld metal. Or I could cut a slot and put in new steel. Or perhaps I can mill down the stubs of the side rails enough to where I can slot in new chunks of plate steel -- probably again with a backing plate of aluminium.

I also think I have to lathe another "fake bolt" to use as a backing plate to put a little more metal at the end of the tube; the cap screws down only so far and the gaps I have would be visible in the assembled weapon. The original fake bolt I machined is of course welded permanently inside the tube at this point, and I intend to take further steps to ensure it remains so.

There's still several smaller gaps. I tried flowing solder into some of the smallest holes but my iron hasn't got the power to push heat into five pounds of steel. There's also an epoxy-based filler designed for this, but it doesn't take bluing well. I have a small brazing kit, too, but unless I get in on the TIG class some time real soon I'm looking at basically throwing big blobs of metal from the oversized wire we have and then grinding them back down again. And it may take multiple passes before I'm satisfied with it.

In any case, TechShop was very much the way to go. Sturdy steel tables, lots of clamping options, a full-sized MIG of course, and lots of bandsaws and grinders and whatnot right there to create filler with. Much more comfortable than the plywood propped up on sawhorses out in a parking lot that I started with!

Came home, and my laptop was showing that it was on battery. With the charger plugged in. Oops. Looked at the charger, and there was a nasty charred spot in the cord where somehow it had cracked or been caught (or nibbled?), tearing the outer wrap-around conductor. Which apparently is not a ground/shield; there's power running through it. Hrm. Anyhow, I was able to work a length of desoldering braid into the frayed ends and restore continuity. Good thing, too, since closing weekend is almost on us and all my sound cues and keyboard patches and house music are on this machine!

To finish up the day, plugged the UMX-610 into a Reaper file I keep for that purpose and did a little piano practice. But what did I say about muscle memory? Maybe welding all day tuned my hands to the wrong expectations. I was fumbling a lot of notes. But oh well. I do far too many things to even hope to do all of them well.