I know, this blog is starting to look like an electronics blog. It all does get back to theater eventually, I promise. As will I. Now that I've remembered that you can put pictures here, I'm going to be moving some older posts about lighting design from a previous blog. I have photographs from some of my old designs I think will help explain the points I make so badly otherwise.
Plus at some point I'll scan in some of the actual paperwork used for shows and designs: cue lists, stage plots from actual bands I've mixed, typical marked-up script pages, and so on.
For the moment, though, I'm still going on about learning AVR. This is as much for me as for anyone else; in attempting to explain, perhaps I understand what I'm trying to grasp just a little bit better.
Interrupts go way back in the history of computing. One of the greatest examples I can think of is the Apollo Guidance Computer. The AGC was purpose-built; it would not be too much to say the chips were built around the software tasks, rather than the other way around. And the internal architecture was nothing if not interrupts.
Ah, but I'm going to have to interrupt once again. Let's go to the very basics of a program. Programs are linear. In a complete computer environment, due to such tricks as multi-tasking, multi-threading, multiple cores, math co-processors, and of course interrupts, many things may be happening at the same time. (Or at least, give the appearance of happening at the same time.)
For a single, simple program, the flow starts at the top and, although there may be calls and jumps, it continues along a single pathway.
AVRs are single-purpose machines. They aren't intended to run a program then stop. They are intended to start running when booted and keep running until power is cut off. So whereas an endless loop with no possible break would be anathema in any other program, it is standard in the AVR world.
If nothing else, there is nowhere to break to. If you could press "Control-Q" or type "Exit," there is no computer to exit to. There is nothing but the AVR itself, and whatever it is running.
Thus, the below is a perfectly legitimate AVR program:
int main (void)
{
    DDRA = (1 << 0);      // pin A0 set as an output
    PORTA |= (1 << 0);    // and driven high
}
It runs once, sets a connected LED to "on," and then sits there forever (when main() returns, avr-libc parks the chip in an endless loop).
Perhaps not the most useful program. Let's add a second loop that actually does something:
int main (void)
{
    DDRA = (1 << 0);          // pin A0 as an output

    for (;;)
    {
        PORTA ^= (1 << 0);    // toggle the LED
        _delay_ms(2000);      // and wait about two seconds
    }
}
The for(;;) loop always evaluates as true, meaning once the program enters that loop, it never exits. Within the loop, Pin 0 of PORTA is toggled. The LED blinks (with a period dependent on the actual clock setting; _delay_ms() only counts true milliseconds if F_CPU is defined to match the real clock).
Which is great, but there is a potential problem.
Say you have a gadget that is going to do something when a button is pressed. While waiting, it flashes an LED to let the user know it is in "Ready" mode for the button press.
But what happens when the user presses the button? Look again at the loop above. The only place to put that button routine is inside the for(;;) loop, and the vast majority of the CPU cycles in that loop are spent sitting in the _delay_ms(2000) command.
Only in those brief moments when the AVR finishes the delay, executes the LED toggle, and then returns can the user's input be detected.
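To put numbers on it, here is a sketch of that polling approach (the pin assignments are my own invention: LED on A0, a button grounding pin A1, on an ATtiny-style part):

```c
#include <avr/io.h>
#include <util/delay.h>

int main(void)
{
    DDRA = (1 << 0);              // A0 drives the "Ready" LED
    PORTA |= (1 << 1);            // enable the pull-up on A1, the button pin

    for (;;)
    {
        PORTA ^= (1 << 0);        // blink the LED...
        _delay_ms(2000);          // ...then go deaf for two whole seconds

        if (!(PINA & (1 << 1)))   // the button is only sampled HERE, once per
        {                         // blink; a press between samples is missed
            /* respond to the button */
        }
    }
}
```

Almost all of the loop's time is spent inside _delay_ms(), where a button press can come and go entirely unnoticed.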
So here is the Apollo spacecraft's AGC, efficiently using up 99% of the available CPU cycles to compare the pilot's input with the actual change in position of the spacecraft as fuel mass is expended and the spacecraft's center of gravity shifts. And then the rendezvous radar detects a potential collision.
Enter interrupts.
On the AGC, when something like the radar wanted attention NOW, the computer quickly saved the current status of the registers and its place in the program, loaded and executed a different program as demanded by the emergency, then, when that was done, retrieved where it was and what it had been doing and went back to it.
It did this so efficiently, and failed so safely, that when a cascade of interrupts pushed the computer to the limits of its capacity during the very first landing, a sharp engineer was able to recognize the error code flashing on the AGC's little terminal (the DSKY) and give the Mission Commander the go-ahead to land anyhow.
As it turned out, they'd left the rendezvous radar on by accident, which was convinced the spacecraft was about to impact another craft about 3,474 kilometers in diameter and made of solid rock.
The point being, though, that interrupts are a way of smoothly suspending a program on an outside trigger, running other code, and then picking up right where it left off.
So there are two ways of dealing with the above button-and-LED problem. One is to use what in the AVR world is called a pin interrupt. Not all pins are available for interrupt use, but you could wire the button to one that is. The loop would be broken, the interrupt routine run (which could be a complex program on its own), and on completion of all code the AVR jumps back to the loop and continues running from where it left off.
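As a sketch of the first approach (the register, pin, and vector names here are from the ATtiny2313 datasheet; other chips differ), wiring the button to the INT0 pin might look like this:

```c
#include <avr/io.h>
#include <avr/interrupt.h>
#include <util/delay.h>

ISR(INT0_vect)                // runs the moment the button pulls INT0 low
{
    /* respond to the button; this could be a complex routine of its own */
}

int main(void)
{
    DDRB = (1 << 0);          // B0 drives the "Ready" LED
    PORTD |= (1 << 2);        // pull-up on PD2, which doubles as INT0
    MCUCR |= (1 << ISC01);    // trigger on the falling edge of INT0
    GIMSK |= (1 << INT0);     // unmask the INT0 external interrupt
    sei();                    // enable interrupts globally

    for (;;)
    {
        PORTB ^= (1 << 0);    // the blink loop carries on, oblivious
        _delay_ms(2000);
    }
}
```

The blink loop never has to know the button exists; the hardware yanks execution over to the ISR and drops it back when done.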
Or, you could make the LED blink an interrupt. Sounds tricky, I know.
The AVRs make available several internal timers. These are registers designed to count clock cycles, completely independent of ordinary program flow. Think of it as a for (i = 0; i < 2000; i++) loop that runs outside of, and simultaneously with, the main() program loop.
Using a timer interrupt, we can detect each time the counter reaches its target, toggle the LED, reset the timer, then jump back to what we were doing. Total cost is a few cycles of program time. Meanwhile the bulk of the clock cycles are spent where we want them: in the main body of the program.
To make such an interrupt work, we need to do four things: enable global interrupts, give the timer a condition that can be detected (a compare match), turn on the appropriate interrupt in the timer mask register, and write the actual interrupt routine.
Here's my current working test code:
#include <avr/interrupt.h>    // make sure to include the interrupt library

ISR(TIM0_COMPA_vect)          // this is the actual interrupt vector
{
    PORTB ^= (1 << 4);        // toggle the LED
}

int main(void)                // the standard avr-libc main program body
{
    DDRB = (1 << 4);          // set Data Direction Register: pin B4 = output

    TCCR0B = 0b00000101;      // yes, I am setting the timer flags in binary:
                              // the low bits select the internal clock,
                              // prescaled by 1024

    TIMSK |= (1 << OCIE0A);   // the compiler is smart enough to recognize the
                              // name of the flag I want to set here, and replace
                              // it with the proper shift when I compile

    OCR0A = 200;              // I know, magic numbers. This is the number put in
                              // output compare register A that the timer
                              // evaluates against. Another way is to use the
                              // overflow (at 255 for the 8-bit timer).

    sei();                    // turns on global interrupts

    for (;;)
    {                         // the "body" of the program, which is empty.
    }                         // It goes in here and never comes out.
}
(As an aside, it is strangely difficult to put properly formatted code samples on Blogger. Something else I guess I'll have to find time to learn, if I'm going to continue talking about this stuff).
There are a few other tricky things going on here. I said the timer is driven by the CPU clock. That is not exactly true. The timer can be driven from the CPU clock or from an external clock. But before the timer counts a single tick, that clock input goes through a prescaler. Basically, a divider.
Above, where I read "0101" into the low four bits of the timer0 control register, I was telling it to use the internal clock, and to divide that clock by a factor of 1024. Since my chip's fuses are set to run from the internal oscillator at 8 MHz, this brings the LED flash frequency down to where I can see it.
I've actually cheated and left off several other necessary #includes, which tell the compiler to look into other avr-C libraries for functions I've called. avr/io, for one. But by the time you got to programming interrupts, you'd be used to putting them in all your programs by default.
A more important caveat: AVR is a series. Each chip has different registers, different pinouts, different options. There really is no getting around downloading the Atmel datasheet for whatever chip you are using, and then modifying the code samples you are working with.
These are not as straightforward to work with as Arduinos!
I am reasonably confident in my ability to turn on and off pins and call interrupts. My next task, then, is to try out the direct timer output mode to the LED, which makes possible a 100% hardware blink (which is to say, once the code is run once, the AVR handles what would be interrupts internally, without ever needing to enter the main program space). Then proper PWM, which uses the same output pins and timer. That will be amusing, as I will be interrupting the hardware from within the program to change the timer values (in order to pulse the LED or slew a servo).
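Here is my best guess at that hardware blink, from reading the ATtiny2313 datasheet (untested; on that chip the OC0A compare output shares a pin with PB2, and the key is the COM0A0 "toggle on compare match" bit):

```c
#include <avr/io.h>

int main(void)
{
    DDRB = (1 << 2);                        // OC0A shares a pin with PB2
    OCR0A = 200;                            // same magic number as before
    TCCR0A = (1 << COM0A0) | (1 << WGM01);  // toggle OC0A on match, CTC mode
    TCCR0B = (1 << CS02) | (1 << CS00);     // internal clock, /1024 prescale

    for (;;)
    {                                       // nothing to do; the timer hardware
    }                                       // toggles the LED all by itself
}
```

No ISR, no TIMSK, no sei(); once set up, the pin toggles without a single instruction being executed.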
After that, I'll move on to MIDI. The big trick for that is not so much using the non-UART serial port, but the necessary system clock scaling to get close to the MIDI standard baud rate of 31,250 bits per second. It isn't as simple as the timer prescaler, unfortunately.
But this blog, I'll probably save from more detailed explanations and just give a general progress report. And start working on some proper theater-related posts.