The perfect clock: A theoretical approach
For those who showed interest in my previous journal.
Warning: Long post ahoy!
As I mentioned in a previous journal, I'd wanted for several years to build a digital clock with a PIC microcontroller, and I never really got around to actually building it. Not because I didn't know how; just because I was lazy or didn't have the time. Last year at uni, while not paying attention in maths class (I should have, but instead I did this), I spent my afternoons working out what the assembly code would look like. I will later dig up the code I've already written and include it here. This is what I observed, and I hope others will find it useful.
Ever since I had to program one in school, I've known that one of the most elementary flaws in a simple clock design is accumulated timing error. That might not matter when the clock is only a decorative item in projects that don't live more than a couple of minutes inside the lab, but it is kind of an important issue if what you're making is a clock that will (hopefully) sit for years on that shelf in plain sight. And dammit, you want it decently accurate.
Here I shall describe a simple setup that should run on (at least) either a PIC 16F84 or a PIC 16F877 (tested on those; it should run on other 16F-series models as well).
The microcontroller should be running on a 4MHz crystal oscillator, to achieve the convenient figure of 1 MIPS (1 million instructions per second): a PIC executes one instruction every four oscillator cycles, so 4MHz / 4 = 1 million instruction cycles per second, each lasting exactly 1 microsecond. That's just my setup; do your own calculations for other oscillator frequencies.
The basic design couldn't be simpler. First we need to enable the Timer0 overflow interrupt, by setting INTCON to 10100000 (GIE=1 to enable interrupts globally, T0IE=1 for the timer). Then the prescaler has to be set to 1:32 by setting OPTION_REG to 00000100, to scale those interrupts down to a longer, more practical interval. Timer0 is an 8-bit counter that overflows every 256 counts, so with the 1:32 prescaler we get an interrupt every 256 x 32 = 8192 instruction cycles. At 1MIPS those cycles last 1 microsecond each, so that's 8.192 milliseconds between interrupts.
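In MPASM-style assembly, the setup boils down to a few register writes. This is just an untested sketch using the standard 16F84 include-file names (on other models the banking may differ):

    ; --- one-time setup: a Timer0 interrupt every 8192 instruction cycles ---
        bsf     STATUS, RP0      ; OPTION_REG lives in bank 1 on the 16F84
        movlw   b'00000100'      ; T0CS=0: Timer0 runs off the instruction clock
                                 ; PSA=0, PS2:PS0=100: 1:32 prescaler on Timer0
        movwf   OPTION_REG
        bcf     STATUS, RP0      ; back to bank 0
        clrf    TMR0             ; 8-bit Timer0 overflows every 256 counts
        movlw   b'10100000'      ; GIE=1, T0IE=1: Timer0 overflow interrupts on
        movwf   INTCON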
Hardware timer interrupts are strictly periodic. If they fire every 8.192ms, you count enough interrupts and you should be close enough to a full second, right? Not quite, because 8.192ms x 122 = 999.424ms. That's almost one second. Almost. If you just increment the seconds-count register after every 122 interrupts, each "second" runs 0.576ms short, so your clock drifts 1 second ahead roughly every 29 minutes. Doesn't sound so good now, eh?
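(For reference, the naive handler is just an interrupt counter feeding a seconds register. A sketch, where INT_COUNT, SECONDS and the ISR label are names I made up for illustration, and the usual saving and restoring of W and STATUS is left out:)

    ; --- the naive approach: one "second" every 122 interrupts ---
    ISR                          ; placed at the interrupt vector (org 0x0004)
        bcf     INTCON, T0IF     ; clear the Timer0 overflow flag
        incf    INT_COUNT, f     ; one more 8.192ms tick
        movlw   .122
        subwf   INT_COUNT, w     ; Z gets set when INT_COUNT reaches 122
        btfss   STATUS, Z
        retfie                   ; second not finished yet
        clrf    INT_COUNT
        incf    SECONDS, f       ; 999.424ms later... close, but short
        retfie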
Well, you could work out how long it takes the clock to get one full second ahead, and subtract one second at that point. Fine, that would WORK. It's EASY. And it's the only solution I've ever found on the internet. But it's a horrible hack, and it will look HORRIBLE if you happen to be watching the display at that moment. But go ahead if you like, do that and stop reading here.
I'll keep going for those who want their seconds more uniformly spaced. What if we count 123 interrupts? Well, no. It turns out we'd be off the other way: that's 1007.616ms, an error of 7.616ms per second. A much larger error, with longer seconds, leaving the clock 1 second behind roughly every 2 minutes of running. That's even worse.
So if we want our seconds more uniformly timed (and our clock not looking like it has a bug), we'll have to use a combination of 122- and 123-interrupt seconds. Now, counting 122 interrupts gets us 0.999424 seconds. If we add up enough of these "short" seconds, the accumulated shortfall should eventually be big enough that one "long" second (123 interrupts, 7.616ms too long) absorbs it almost exactly. It turns out that magic number is 13, since 13 x 0.576ms = 7.488ms:
13 "short" seconds = 8.192ms x 122 interrupts x 13 = 12992.512ms = 12.992512 actual seconds
If at this point we make the next second a "long" second (of 1.007616 actual seconds), our 14 seconds will last exactly 14.000128 actual seconds. That's much better: we have trimmed the error to 0.128ms per 14 seconds, about nine parts per million. But it is still not good enough for me.
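In the handler this only takes one more counter to track our position inside the 14-second cycle. Again an untested sketch with made-up register names (SEC_IN_CYCLE counts 0 to 13):

    ; --- 13 "short" seconds (122 ints) + 1 "long" second (123 ints) ---
    ISR
        bcf     INTCON, T0IF
        incf    INT_COUNT, f
        movf    SEC_IN_CYCLE, w
        xorlw   .13              ; Z=1 on the 14th second of the cycle
        movlw   .122             ; (movlw doesn't touch the Z flag)
        btfsc   STATUS, Z
        movlw   .123             ; the 14th second runs one interrupt longer
        subwf   INT_COUNT, w
        btfss   STATUS, Z
        retfie                   ; current second not finished yet
        clrf    INT_COUNT
        incf    SECONDS, f
        incf    SEC_IN_CYCLE, f
        movf    SEC_IN_CYCLE, w
        xorlw   .14
        btfsc   STATUS, Z
        clrf    SEC_IN_CYCLE     ; wrap after 14 seconds
        retfie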
At this point we can repeat the above process. We have a cycle of 14.000128 seconds, so we can check how many of these we need before the leftover error is easy to subtract. Notice it happens to end in 128, a power of 2, so it adds up neatly: if we count 64 of these 14000.128ms cycles, the total is 896008.192ms (that's a lot of seconds). If only we could trim those 8.192ms off the 64th cycle...
Wait a minute, doesn't a timer interrupt last exactly 8.192ms? And remember that for every 13 "short" seconds, the 14th happened to be exactly one interrupt (8.192ms) longer? I think we're lucky. So we make the first 63 of our 14-second cycles "normal", and the 64th consists purely of 14 122-interrupt seconds. What does this mean? Well, this:
(8.192ms x 122 x 13 + 8.192ms x 123) x 63 + 8.192ms x 122 x 14 = 896000ms
(where 8.192ms x 122 is a "short" second and 8.192ms x 123 is a "long" second)
Cue the sunshine, because what we have here, gentlemen, is 896 perfectly rounded seconds - or 14 minutes and 56 seconds.
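Putting it all together, the handler just needs a third counter for the 64-cycle correction. One last untested sketch with the same invented register names (CYCLE_COUNT runs 0 to 63), context saving still omitted:

    ; --- full scheme: 63 normal 14-second cycles + 1 all-"short" cycle ---
    ISR
        bcf     INTCON, T0IF
        incf    INT_COUNT, f
        movf    CYCLE_COUNT, w
        xorlw   .63              ; Z=1 during the 64th cycle
        btfsc   STATUS, Z
        goto    short_sec        ; 64th cycle: all 14 seconds are "short"
        movf    SEC_IN_CYCLE, w
        xorlw   .13              ; Z=1 on the 14th second of a normal cycle
        btfsc   STATUS, Z
        goto    long_sec
    short_sec
        movlw   .122
        goto    test_done
    long_sec
        movlw   .123
    test_done
        subwf   INT_COUNT, w
        btfss   STATUS, Z
        retfie                   ; current second not finished yet
        clrf    INT_COUNT
        incf    SECONDS, f       ; uniformly spaced, to within 8.192ms
        incf    SEC_IN_CYCLE, f
        movf    SEC_IN_CYCLE, w
        xorlw   .14
        btfss   STATUS, Z
        retfie                   ; still inside this 14-second cycle
        clrf    SEC_IN_CYCLE
        incf    CYCLE_COUNT, f
        movf    CYCLE_COUNT, w
        xorlw   .64
        btfsc   STATUS, Z
        clrf    CYCLE_COUNT      ; 896 exact seconds later, start over
        retfie

Note that the handler never reloads TMR0, so the overflow period stays at exactly 8192 cycles no matter how long the handler itself takes.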
Now we can rejoice watching others' sub-perfect, sloppy, whole-second-subtracting clocks while enjoying our beautifully smooth clocks with perfect cycles of 14:56.
With this, we have successfully removed the software error from the time calculations, leaving only the drift caused by (possible) manufacturing imperfections in the crystal oscillator.
Now you get to take care of date-keeping and Daylight Saving Time.
I prefer coding in assembly; I just don't trust higher-level compilers to do what I want. Besides, I remember being told that the C compiler provided for PICs translates into long, unnecessary assembly code. So what better than writing the assembly yourself?