Pumpkin, Inc.

Pumpkin User Forums

Optimizing delayed task with very big delay

If you can't make Salvo do what you want it to do, post it here.

Optimizing delayed task with very big delay

Postby luben » Sat Dec 30, 2000 10:16 am


If I want to make a task that runs once every day (or every week), there are a few ways to do it:

1. Make the system tick slower and set the number of bytes for delays to 3 or 4. Then I can obtain long delays directly. But this has two disadvantages: it decreases the time resolution of OSTimer(), and changing the number of bytes for delays affects all other tasks, increasing the amount of RAM used and slowing down performance (operating on 4-byte variables is slower).

2. Create a semaphore and signal this task from somewhere else. The signaling task could run at a slower frequency, so the divide counters could be shorter (2 or even 1 byte). In this case I consume one event, which needs some RAM.

3. Stop the task and then OSStart() it again. In this last case there is no need to create a new event and no need to increase the number of bytes for delays.

So, my question is: which way is preferred and recommended for Salvo? I mean, what disadvantages do I have if a task is stopped entirely? From what I saw in the manual, I guess you give it an "ultralow" priority, so the task can never become eligible, correct?

By the way, I saw in the manual that if delays are enabled, OS_Stop() is equivalent to OS_Delay(0,label). Does that mean that if a task is delayed with a "normal" delay value and I call OSStart(), the delay will expire? I mean, if task1 uses OS_Delay(100,label) and somewhere I do OSStart(task1), will task1's delay finish?


Posts: 324
Joined: Sun Nov 19, 2000 12:00 am
Location: Sofia, Bulgaria

Re: Optimizing delayed task with very big delay

Postby aek » Sat Dec 30, 2000 10:42 am

Method 1 is bad for the reasons you mention.

Method 2 is pretty good -- an event takes just 4 bytes (3 if you don't use event types, 2 with no event types and only small semaphores and/or message pointers), so it's quite RAM-efficient.

Method 3 is pretty good, too.

Another option is to loop the OS_Delay(), like this (assuming a 10ms system timer):

void TaskEveryWeek ( void )
{
    static unsigned long delay;

    /* initialization code */

    for (;;) {
        /* body of task */
        for ( delay = 302400 ; delay > 0 ; delay-- ) {
            OS_Delay(200, label);
        }
    }
}

This results in a task that runs once a week (302,400 iterations of 200 ticks = 100 * 60 * 60 * 24 * 7 = 60,480,000 system ticks, or 1 week at 10ms/tick), using an OSBYTES_OF_DELAYS of only 1. The advantage of this approach is that no event is needed, and the extra RAM required (in the form of a static variable) is local to this particular task. Also, it's very legible.

Note that if you made delay global instead of local in scope, you could manipulate the period of the task, especially in conjunction with OSStopTask() and OSStartTask().

Which page of the v2.1 User Manual are you referring to with the stopped task? A stopped task's priority is irrelevant -- it never runs.

Re OS_Delay(0,label): OSStartTask() works only on tasks that are stopped -- it has no effect on delayed or waiting tasks.

[This message has been edited by aek (edited December 30, 2000).]

Posts: 1888
Joined: Sat Aug 26, 2000 11:00 pm
