
I2C interrupt ISR performance

PostPosted: Thu May 05, 2016 1:56 pm
by mktaj
Hi all,

I have a routine that reads data over I2C once per second, using interrupts. I am finding that every few seconds I get data that looks corrupted. I was reading the Salvo User's Guide, and in the Performance chapter there is a warning:

"Warning: Failure in the interrupt hooks to disable an interrupt source that calls a Salvo service will inevitably lead to runtime problems in a Salvo application due to the unavoidable corruption of global variables. Therefore it’s important to keep track of which Salvo services are called from ISRs, and configure the interrupt hooks accordingly."

What does it mean by a "Salvo service"? Does that mean something like OSSignalBinSem? I do call that from my ISR. And does "interrupt source" mean my I2C interrupt? Any info is appreciated.

thanks,
mktaj

P.S. I am using a PIC24FJ256GB210 MCU on a Pumpkin MB, with the MPLAB X IDE and compiler.

Re: I2C interrupt ISR performance

PostPosted: Mon May 09, 2016 7:45 am
by aek
By default, Salvo 4's interrupt hook disables _all_ interrupt sources while Salvo is in a critical section. This is the "safest" possible configuration for Salvo, as it guarantees that Salvo's shared variables will not be corrupted by e.g. a call to a Salvo service from within an ISR.

mktaj wrote:
What does it mean by a "Salvo service"? Does that mean something like OSSignalBinSem? I do call that from my ISR. And does "interrupt source" mean my I2C interrupt? Any info is appreciated.
Yes, any OSXyz() or OS_Xyz() function is a Salvo service. And yes, your I2C interrupt is an interrupt source. However, since I presume you are using Salvo in its default configuration (e.g., by linking to a Salvo library), your configuration is correct, and Salvo cannot be corrupted by your call to OSSignalBinSem() from your I2C interrupt.
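
As a rough sketch of that pattern (purely illustrative -- the semaphore slot, the use of the I2C1 master interrupt, and names like TaskI2C are assumptions, not your code): the ISR clears its flag, does the minimum hardware work, and signals a binary semaphore; a task waits on that semaphore and does the real processing.

#include <xc.h>
#include "salvo.h"

#define I2C_BINSEM_P  OSECBP(1)   /* assumed: binary semaphore in event slot 1 */

/* I2C1 master interrupt: service the hardware, then signal Salvo. */
void __attribute__((interrupt, no_auto_psv)) _MI2C1Interrupt(void)
{
    IFS1bits.MI2C1IF = 0;             /* clear the interrupt flag */

    /* ... move data to/from I2C1RCV / I2C1TRN here ... */

    /* A Salvo service called from an ISR. In the default (library)
       configuration, Salvo's critical sections disable all interrupt
       sources, so this call cannot corrupt Salvo's globals. */
    OSSignalBinSem(I2C_BINSEM_P);
}

/* Task that does the real work once the ISR has signaled it. */
void TaskI2C(void)
{
    for (;;) {
        OS_WaitBinSem(I2C_BINSEM_P, OSNO_TIMEOUT);
        /* ... process the received data at task level ... */
    }
}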

What is the problem you are encountering with I2C? Are you having issues reliably receiving I2C transactions (as an I2C slave)? It's interesting to note that despite the relatively high speed of the 'GB210, it is still not trivial to ensure that I2C writes and reads to a slave occur without error at 100 kHz and 400 kHz rates. This is typical of MCU-based I2C slaves.

Fortunately, there is a simple solution that involves enabling clock stretching and carefully configuring Salvo's interrupt hook and your I2C ISR. We (Pumpkin) use it in all of our nanosatellite SupMCU-based systems, which run as I2C slaves at 400 kHz with 100% reliability.

Please provide more info on your I2C utilization ...

Re: I2C interrupt ISR performance

PostPosted: Mon May 09, 2016 9:47 am
by mktaj
Thanks for the info! The corruption issue was actually a bug in my code, but I think I have a timing issue as well. The while loop in my driver has a call to OS_Delay() (with a value that works out to 1 second in our configuration), so we should receive data once per second. (We are reading data from a Clyde Space EPS power system and battery.) When I watch the timestamps in the output, it appears to be running faster than 1 Hz, and with a fair bit of jitter in the timing of the reads. I am in the process of capturing some data so that I can analyze it explicitly; I will post the results when I have them.
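
In outline, the loop looks something like this sketch (ReadEpsTelemetry() and ONE_SECOND_TICKS are placeholder names, not our actual driver code). I realize OS_Delay() only guarantees a minimum delay in ticks, so the actual period also includes the time spent in the read itself and in any higher-priority tasks, which would explain some jitter from this structure alone.

#include "salvo.h"

#define ONE_SECOND_TICKS  100u           /* assumed: whatever equals 1 s at our tick rate */

extern void ReadEpsTelemetry(void);      /* hypothetical I2C master read of the EPS/battery */

void TaskReadEPS(void)
{
    for (;;) {
        ReadEpsTelemetry();              /* read the EPS/battery registers over I2C */
        OS_Delay(ONE_SECOND_TICKS);      /* sleep for (at least) one second of ticks */
    }
}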

I would also be interested to learn more about clock stretching; I am unfamiliar with it.

thanks,
mktaj

Re: I2C interrupt ISR performance

PostPosted: Mon May 09, 2016 10:15 am
by aek
Your post suggests that you are running as an I2C master -- in that case you won't be affected by delays in your app, since the master generates the clock and I2C doesn't care about stalls or pauses in it. Your problem is elsewhere.

Clock stretching is for I2C slaves that can't handle high clock speeds, or for I2C slaves whose only way of keeping up at high clock speeds is to stretch the clock ...
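
In case it helps, here is a rough sketch of what clock stretching looks like on the PIC24's I2C slave module (assuming I2C1, with address matching, error handling, and the Salvo signaling omitted). With STREN set, the module holds SCL low after a byte is received, or when the master wants to read, until firmware has serviced the buffer and sets SCLREL to release the bus:

#include <xc.h>

void I2C1SlaveInit(void)
{
    I2C1CONbits.STREN = 1;     /* allow the slave to stretch SCL            */
    I2C1CONbits.I2CEN = 1;     /* enable the module (I2C1ADD setup omitted) */
    IEC1bits.SI2C1IE  = 1;     /* enable the slave interrupt                */
}

void __attribute__((interrupt, no_auto_psv)) _SI2C1Interrupt(void)
{
    IFS1bits.SI2C1IF = 0;

    if (I2C1STATbits.R_W) {            /* master is reading from us            */
        I2C1TRN = 0x00;                /* load the reply byte (placeholder)    */
        I2C1CONbits.SCLREL = 1;        /* release SCL so it can be clocked out */
    } else if (I2C1STATbits.RBF) {     /* master wrote a byte to us            */
        unsigned char rx = I2C1RCV;    /* service it while SCL is held low     */
        (void)rx;
        I2C1CONbits.SCLREL = 1;        /* release SCL so the master continues  */
    }
}

While SCL is stretched, the master simply waits, so the slave's firmware (including Salvo) gets as much time as it needs to handle each byte.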

Read the Clyde Space datasheets very carefully -- there are various restrictions on how you must interface to their I2C modules, and you must observe all of them.