OSlostTicks is 8 bits, so if you exceed 8 bits' worth (255 ticks) of delay while holding off OSSched() anywhere, then relative time (via OS_Delay()) will be in error. Absolute time (via the tick services) will always be correct as long as no ISRs (which call OSTimer()) are missed.
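To see why the 8-bit counter matters, here is a rough sketch (not Salvo's actual source, just the general mechanism) of how ticks counted in the ISR are handed off to the scheduler's delay processing:

#include <stdint.h>

static volatile uint8_t  lostTicks;   /* 8-bit, like OSlostTicks           */
static volatile uint32_t timerTicks;  /* 32-bit absolute ticks, never lost */

/* Called from the periodic timer ISR, as OSTimer() is. */
void TickFromISR(void)
{
    timerTicks++;                      /* absolute time keeps counting     */
    if (lostTicks < 255u) {
        lostTicks++;                   /* ticks not yet seen by OSSched()  */
    }
    /* else: counter is railed -- further ticks never reach the delay
     * machinery, so every pending OS_Delay() comes up short.             */
}

/* Called when the scheduler finally runs, as OSSched() does. */
void ConsumeTicks(void)
{
    uint8_t n = lostTicks;             /* (real code would protect this     */
    lostTicks = 0;                     /*  hand-off with a critical section) */
    /* ... credit each delayed task with n elapsed ticks ... */
    (void)n;
}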
Keep in mind that Salvo's timer has an inherent (in)accuracy of +/- 1 tick.
I assume you are using a += nudge style of updating your timer, which will maintain accurate time as long as you never miss an ISR (which is different from holding off OSSched() for more than one tick, which is what your long task is doing). IOW, for the purposes of this discussion, I assume your ISR is showing an inaccuracy of well under 5%.
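For reference, by "+= nudge" I mean an output-compare reload along these lines (TMR_CCR and TICK_PERIOD are hypothetical placeholders for your target's compare register and tick period):

#include <stdint.h>
#include "salvo.h"                      /* for OSTimer() */

#define TICK_PERIOD  1000u              /* assumed period, in timer counts */
extern volatile uint16_t TMR_CCR;       /* placeholder compare register    */

/* Hypothetical output-compare ISR using the "+= nudge" reload style. */
void TimerCompareISR(void)
{
    /* Advance the compare point by one full period rather than reloading
     * it relative to "now".  Any latency getting into this ISR is then
     * absorbed, so the average tick rate stays exact as long as no
     * compare match is missed outright.                                  */
    TMR_CCR += TICK_PERIOD;

    OSTimer();                          /* give Salvo its tick */
}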
However, your description (one long task that does not hold off OSSched() for more than 8 bits' worth of lost ticks) suggests that the lost-tick function is not the root cause.
Note also that ticks (not the same as delays) are by default 32 bits, and are readable and writeable (e.g. via OSSetTicks()), so you don't have to maintain your own seconds counter -- just use Salvo's.
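For example, assuming the usual OSGetTicks() counterpart to OSSetTicks() and a known tick rate, a seconds count can simply be derived from Salvo's counter (TICKS_PER_SEC here is an assumed value -- use whatever rate your ISR calls OSTimer() at):

#include "salvo.h"

#define TICKS_PER_SEC  100u             /* assumed: ISR calls OSTimer() at 100 Hz */

/* Seconds since the tick counter was last zeroed. */
unsigned long ElapsedSeconds(void)
{
    return OSGetTicks() / TICKS_PER_SEC;
}

/* Re-zero the epoch if you need to (e.g. at the top of a test run). */
void ResetElapsedTime(void)
{
    OSSetTicks(0);
}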
All this said, I think what you want is to use OS_DelayTS(), etc., instead of OS_Delay(), etc. The timestamp-based delays have zero jitter over the long term, whereas OS_Delay() is affected by short-term jitter.
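In a periodic task the difference looks like this (a minimal sketch; exact context-switcher syntax depends on your Salvo port -- some compilers need an explicit label argument):

#include "salvo.h"

#define TICKS_PER_SEC  100u             /* assumed 10 ms tick rate */

extern void DoOncePerSecondWork(void);  /* hypothetical application hook */

/* A nominally 1 Hz task.  With OS_Delay(), the delay restarts from the
 * moment the task resumes, so each period = delay + processing time and
 * the error accumulates.  With OS_DelayTS(), the delay is measured from
 * the task's timestamp, so each wakeup is anchored to the previous one:
 * individual periods still jitter by ~1 tick, but there is no long-term
 * drift.                                                                 */
void TaskOncePerSecond(void)
{
    for (;;) {
        DoOncePerSecondWork();

        /* OS_Delay(TICKS_PER_SEC);        drifts over the long term */
        OS_DelayTS(TICKS_PER_SEC);      /* does not                  */
    }
}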
So, to debug this, I would:
1) Use an accurate external timer to verify that OSTimer() is called from the ISR at exactly the right rate, without any error (ever).
2) Use Salvo's native ticks feature (and leave OSBYTES_OF_TICKS set to 4).
3) Use OS_DelayTS() and compare the results against using OS_Delay().
4) Do some long-term checks of the value of OStimerTicks. It should always be accurate, since it is updated in OSTimer(), not OSSched(). If it's not accurate, then that points to a problem with the rate at which OSTimer() (and hence its parent ISR) are called.
5) Using a Pro build, set a trap in timer.c on whether OSlostTicks ever rails at 255 (see the sketch after this list).
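For step 5, the trap can be as simple as a conditional you can breakpoint on (or wire to an LED/pin). This is only a sketch; the exact type and visibility of OSlostTicks depend on your copy of the Pro source:

#include <stdint.h>

/* Assumption: OSlostTicks is visible as an 8-bit global in the Pro-build
 * source; adjust the declaration to match your copy of Salvo.            */
extern volatile uint8_t OSlostTicks;

volatile unsigned int lostTickRailHits;   /* watch / breakpoint target */

/* Call this right after timer.c bumps OSlostTicks, or poll it from the
 * idle loop; a hit means OSSched() was held off for > 255 ticks.        */
void CheckLostTickRail(void)
{
    if (OSlostTicks == 255u) {
        lostTickRailHits++;               /* set a breakpoint here */
    }
}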