Thanks, it would certainly make things easier in some of my applications. I assumed (from a quick look at the source) that the timestamp was calculated at the start of the delay and then treated like any other task delay. So implementing this (I thought) would only require adding some code to handle the initial timestamp calculation?
By the way, I think there are some problems with the current timestamp code. In a repeating task loop, if you choose a delay that is very large, say over 50% of the range of the OStypeDelay type, the delays don't seem to be correct: you get alternating long and short delays instead of steady, fixed-length delays. I think this is due to overflows occurring when the timestamp delay value is calculated, because the intermediate variables used are the same size as the source and destination variables. The usual way to avoid this kind of overflow is to make the intermediates twice the width of the source and destination, e.g. source = word, intermediate result = long, destination = word, rather than making the source, intermediate, and destination all words. What do you think?
Originally posted by aek:
Hmmmm .... I have to think about that. Non-trivial.
Specifically, I need to see if there is room within the tcb struct to do TS delays when one is also waiting on an event ... if not, tcbs would have to be expanded just for this purpose.
I'll get back to you on this.