Pumpkin, Inc.

Pumpkin User Forums

bug: OS_DelayTS gives wrong delay in AVRs

If you think you've found a bug or other mistake in your Salvo distribution, post it here.

Re: bug: OS_DelayTS gives wrong delay in AVRs

Postby aek » Tue Jan 06, 2004 3:22 am

Hi Martin.
quote:
It is also a problem, because it should be >80ms. You must add the runtime of the while-loop.
No.

By failing to context-switch within a timer tick, your task is violating Salvo's timer requirements.

The delay that OS_Delay() produces is correct -- OS_Delay() produces an 80ms delay (in your case) between context switches.

I will investigate why OS_DelayTS() isn't behaving as expected.

------------------

-------
aek
aek
 
Posts: 1888
Joined: Sat Aug 26, 2000 11:00 pm

Re: bug: OS_DelayTS gives wrong delay in AVRs

Postby lattenzaun » Tue Jan 06, 2004 3:32 am

Hello,

yes, you are right. OS_Delay() works properly. (I have now inserted an OS_Yield() between the while-loop and OS_Delay() and get what I expected: 104ms, not 108.)

Please let me know about the bugfix in OS_DelayTS().

Regards,
Martin

lattenzaun
 
Posts: 11
Joined: Thu Dec 18, 2003 12:00 am
Location: Vienna, Austria

Re: bug: OS_DelayTS gives wrong delay in AVRs

Postby aek » Tue Jan 06, 2004 4:58 am

Hi Martin.

I wrote:

quote:
between context switches.
On further reflection, strictly speaking, this is not correct.

A call to OS_Delay() should result in the specified ticks' worth of delay between when OS_Delay() is called in the task and when execution resumes inside the task after the delay expires (assuming the task runs immediately once the delay expires). This is actually more like what you proposed -- 80ms + 25.2ms = 105.2ms in your example. I think that OS_DelayTS() would do more like what you originally wanted -- 80ms of delay plus 25ms "in task" still results in an 80ms period. I have to think about this some more.

Of course, none of this is an issue when a task yields back to the scheduler within a short period (i.e. within one system tick).

What got me curious was that when I added OS_Yield() after OS_Delay(), it still "repeated" every 80ms. But if you add it before OS_Delay(), then you get 104ms.

It took me a while to understand why ... it's because "lost ticks" are collected during your in-task for() loop, and so they form a "credit" that is applied against OS_Delay(20)'s 20-tick delay. But lost ticks are cleared (i.e. processed) on every trip through OSSched(), i.e. whenever you call OS_Yield().

Collecting lost ticks is important, because it keeps the global OSTimerTicks counter correct even when OSSched() isn't called as frequently as required. But its effect on OS_Delay() is unexpected. I have to consider this more carefully.
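To make the mechanism concrete, here is a toy model of the lost-tick bookkeeping described above -- this is NOT Salvo source, just a guess at the mechanism for illustration. The names deliberately mirror Salvo's OSTimer(), OSSched(), OSTimerTicks, and OSlostTicks:

```c
static unsigned OSTimerTicks = 0;   /* global tick counter               */
static unsigned OSlostTicks  = 0;   /* ticks seen since last OSSched()   */

static void OSTimer(void)           /* called from the periodic tick ISR */
{
    OSTimerTicks++;
    OSlostTicks++;                  /* accumulates while no scheduling   */
}

static void OSSched(void)           /* one pass through the scheduler    */
{
    OSlostTicks = 0;                /* lost ticks are processed/cleared  */
}

/* Effective wait when a task calls OS_Delay(n): the accumulated lost
   ticks act as a credit against the requested delay. */
static unsigned effective_delay(unsigned n)
{
    return (n > OSlostTicks) ? n - OSlostTicks : 0;
}
```

In this model, six ticks arriving during a long in-task loop shorten a subsequent OS_Delay(20) to an effective 14-tick wait, while a trip through the scheduler first would restore the full 20.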

Thank you for bringing this to my attention.


[This message has been edited by aek (edited January 06, 2004).]

Re: bug: OS_DelayTS gives wrong delay in AVRs

Postby aek » Tue Jan 06, 2004 5:15 am

Hi Martin.

One quick way to fix this (i.e. get an "uncorrected by lost ticks" 80ms delay whenever you call OS_Delay(20) in your example task) is to call it like this:

code:
OS_Delay(20+OSlostTicks);

Note that the delay's accuracy suffers when lost ticks occur. Naturally, in the next release, this will be incorporated into Salvo's delay services (functions).
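A back-of-the-envelope check of the workaround, using this thread's numbers: 4ms per tick (so OS_Delay(20) is 80ms) and an in-task loop spanning about 6 ticks (~25ms). This is toy arithmetic, not Salvo code:

```c
/* Period of a task that loops for loop_ticks, then calls
   OS_Delay(delay_arg). loop_ticks accumulate as OSlostTicks and are
   credited against the requested delay, so the task actually waits
   (delay_arg - loop_ticks) ticks after the loop finishes. */
static unsigned period_ms(unsigned loop_ticks, unsigned delay_arg,
                          unsigned tick_ms)
{
    unsigned wait = (delay_arg > loop_ticks) ? delay_arg - loop_ticks : 0;
    return (loop_ticks + wait) * tick_ms;
}
```

With the plain OS_Delay(20), the credit swallows the loop time and the period stays at 80ms; adding OSlostTicks (here, 6) restores the full delay after the loop and yields the 104ms Martin measured.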

[This message has been edited by aek (edited January 07, 2004).]

Re: bug: OS_DelayTS gives wrong delay in AVRs

Postby lattenzaun » Tue Jan 06, 2004 9:53 am

Hello,

sorry, I forgot to mention that I am calling OSTimer() every 500µs. I expected the for-loop to execute every 10ms, but it executes every 9.5ms.

Now I have replaced OS_DelayTS(20,TaskBlink1) with OS_Delay(20,TaskBlink1). I expected the for-loop to run at a rate of 10ms plus the execution time of the while-loop, but its rate is only 10ms.

Is it wrong to use OS_DelayTS to schedule the for-loop at a constant rate?

Regards,
Martin


Re: bug: OS_DelayTS gives wrong delay in AVRs

Postby aek » Tue Jan 06, 2004 10:26 am

Hi Martin.

I think the main problem is the rate at which you're calling OSTimer(). We recommend (see the manual) a minimum of 2,000 instruction cycles between calls to OSTimer(), and 10,000 is what we usually use.

When you call OSTimer() too quickly, you may be overrunning the system's ability to maintain proper timing. What happens if you change the rate to 200Hz (every 5ms) and do OS_Delay(2)?
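A quick sanity check of the suggested numbers (plain arithmetic, nothing Salvo-specific):

```c
/* At 200Hz the tick period is 5ms, so OS_Delay(2) requests the 10ms
   period in question, while calling OSTimer() ten times less often
   than the original 500us (2kHz) rate. */
static unsigned tick_period_ms(unsigned tick_hz)
{
    return 1000u / tick_hz;
}

static unsigned delay_in_ms(unsigned ticks, unsigned tick_hz)
{
    return ticks * tick_period_ms(tick_hz);
}
```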

quote:
Is it wrong to use OS_DelayTS to schedule the for-loop at a constant rate?
No, but I think the problem is that the system tick period is too short (see above).


Re: bug: OS_DelayTS gives wrong delay in AVRs

Postby lattenzaun » Wed Jan 07, 2004 1:54 am

Hello AEK.

Many thanks for your quick answers and for the workaround in SB-22.

I have now also used OS_DelayTS(20+OSlostTicks, TaskBlink1); in order to schedule the for-loop at a constant rate, and it seems to work. So both OS_Delay() and OS_DelayTS() need the addition of OSlostTicks. Which other timer-related OS calls need the addition of OSlostTicks? Which OS calls reset OSlostTicks to zero?

Greetings from Vienna and Best Regards,
Martin


Re: bug: OS_DelayTS gives wrong delay in AVRs

Postby aek » Wed Jan 07, 2004 2:20 am

Hi Martin.
quote:
Which other timer-related OS calls need the addition of OSlostTicks?
Anything that involves a task delay -- i.e. OS_Delay(), OS_DelayTS(), and OS_WaitXyz(..., timeout).

I'm not surprised that OS_DelayTS() now works properly -- the lost ticks are independent of the operation of timestamping (TS).

quote:
Which OS calls reset OSlostTicks to zero?
OSTimer() increments lost ticks. OSSched() resets them to 0. So this problem only crops up if/when the call to, say, OS_Delay() occurs immediately after OSTimer() is called, but before OSSched() is called (OSSched() normally follows OS_Delay() because of the context switch anyway).

This is the case (somewhat rare, but it happens as part of normal Salvo operation and is part of the timer's specs) where the delay error is maximized, e.g. an OS_Delay(1) can result in a very short, sub-system-tick delay.
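A toy model of the worst case just described: the tick ISR fires immediately before the task calls OS_Delay(1), so one lost tick has already accumulated and the entire requested delay is credited away, leaving a sub-tick wait. Illustrative only, not Salvo source:

```c
/* Ticks actually waited when a task requests `requested` ticks while
   `lost` ticks have accumulated since the last scheduler pass. */
static unsigned actual_delay_ticks(unsigned requested, unsigned lost)
{
    return (requested > lost) ? requested - lost : 0;
}
```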


Re: bug: OS_DelayTS gives wrong delay in AVRs

Postby aek » Wed Jan 07, 2004 11:34 am

Hi Martin.

Please see SB-22

