Using Two LabJacks With DAQFactory


IanW


Hi,

I have two LabJacks and would like to acquire data from 18 channels spread between the two devices. The sample rate is 10 Hz, although the logging rate is 1 Hz (10 samples being averaged). I would like to acquire the data across a network. Is this a feasible configuration, and if so, would it require the use of streaming?

I have done a quick test, and it appears that some of the samples are being dropped.

Any help would be appreciated

Thanks

Ian


10 Hz should not be a problem. I'm assuming you are using a UE9, since you mention a network? Anyhow, your Timing is 0.1, which is plenty big. You may want to set the Offset of one of the LabJacks to 0.05 to put it on a separate thread, so that DAQFactory can communicate with the two devices concurrently. Also, make sure you are NOT using ID 0 (i.e. LabJack ID 0), since that means "first found" and can't really be used with multiple LabJacks.
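To illustrate the effect of the Offset, here is a loose Python model of the scheduling (not DAQFactory's actual internals): each device polls on its own grid of offset, offset + interval, offset + 2·interval, ..., so staggering the offsets keeps the two polls from ever landing at the same instant.

```python
import math

def next_read_time(now, interval, offset=0.0):
    # Next acquisition time on the grid offset, offset + interval, ...
    # that is strictly after `now` (a toy model of how a channel's
    # Offset staggers its timing, not DAQFactory's implementation).
    n = math.floor((now - offset) / interval) + 1
    return n * interval + offset
```

With an interval of 0.1, device 1 at offset 0 reads at x.000, x.100, x.200, ..., while device 2 at offset 0.05 reads at x.050, x.150, x.250, ..., so the two reads are always 50 ms apart and never contend for the same moment.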


Hi,

I have tried using the two UE9 LabJacks as advised. There appears to be no loss of data, but I have noticed that the data being acquired by each LabJack can become misaligned. I set the alignment parameter to 0.4 seconds, and there are periods during the acquisition where the data alignment falls outside of this. Is there any way of ensuring a tighter alignment between the two devices?

Kind Regards

Ian


Hi,

I have attached a screenshot of the data showing where the misalignment occurs. The logging alignment parameter was set to 0.4 seconds. The first six channels are from the first LabJack; the rest are from the second device. I am running a test at the moment with the alignment set to zero, to see if there is a constant delay between the readings taken from each LabJack. Is the real difference between the data 0.45 seconds, i.e. 0.4 s + 50 ms between the two log timestamps? The problem seems to right itself after a while. This may be a red herring, but by observation, in normal operation there is about 50 ms of slew between the two LabJack reads, which happens to be the same as the misalignment.

Kind Regards

Ian

post-1586-1216929303_thumb.jpg


Hi,

I have run the logger over the weekend, and the data logs show that the two LabJacks have a 50 ms difference between the timestamps for all but one of the recorded log files. There is one log file which has a difference of about 0.9 s between the LabJack reads. Here is an extract from the log:

Time Stamp                    Difference between timestamps    Device
07/26/2008 11:25:03.1340      0                                LabJack1
07/26/2008 11:25:04.0850      0.9509963989257                  LabJack2

All of that log is the same (and it has only happened in one file). Have you any idea as to why this has occurred?

Kind Regards

Ian


Hi,

I could email the .ctl file to you, as it is basically a modification of an application which has been written for a customer (so it is perhaps not appropriate to post it on here).

Kind Regards

Ian


OK, well the problem is that there is no real way for you to sync up the timing loops when using averaging. When not using averaging, the offset determines when in the second the acquisition occurs, so an offset of 0 means at x.000 while an offset of 0.05 means x.050. You can see this by turning off the averaging on all your channels. Once you turn on averaging, it counts data points, but it doesn't use the beginning of a second as a reference point, so depending on when the loops start relative to each other, you can end up with completely different points in the second. This makes it a bit harder to align.
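A toy Python model of why averaging breaks alignment (illustrative only, not DAQFactory's actual internals): each loop emits an averaged point every count × period seconds measured from whenever the loop happened to start, with no reference to the second boundary, so the sub-second phase of the averaged data depends entirely on the start time.

```python
def averaged_stamps(loop_start, period=0.1, count=10, blocks=3):
    # Averaging simply counts `count` raw points from the loop's start
    # time, so the averaged points land at loop_start + count*period,
    # loop_start + 2*count*period, ... -- whatever fraction of the
    # second that happens to be.
    return [loop_start + (i + 1) * count * period for i in range(blocks)]
```

A loop started at x.000 emits points at 1.0, 2.0, 3.0, ..., while one started at x.730 emits at 1.73, 2.73, 3.73, ...: the 0.73 s phase difference persists indefinitely, which is why two independently started averaging loops are hard to align.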

So, the easy solution is to change your logging set from "All Data Points" to "Fixed Interval", set the Interval to 1 and select "Snapshot". This should give you the desired result.

Also, why do you have a history interval set to 10? History interval is designed for systems where you don't have a lot of memory and you want to be able to go back in time really far. It was also designed before Persist was an option for doing the same thing. With history lengths of 100, you aren't really using a lot of memory (1.6K per channel), so why not just have a history length of 1000 and a history interval of 1? The difference in memory is about 150K, which is minimal on a Windows machine.
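The memory figures work out as follows, assuming roughly 16 bytes per stored point (an 8-byte value plus an 8-byte timestamp); a quick Python check:

```python
BYTES_PER_POINT = 16  # assumed: 8-byte value + 8-byte timestamp

def history_bytes(channels, history_length):
    # Rough in-memory footprint of the channel history buffers.
    return channels * history_length * BYTES_PER_POINT
```

history_bytes(1, 100) gives 1600 bytes, the 1.6K-per-channel figure, and growing 10 channels from a 100-point to a 1000-point history costs 144,000 extra bytes, i.e. about 150K.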


Hi,

Thanks for the reply. What I would like to do is take 10 samples every second and record the average of those in the log, rather than just taking a snapshot every second, which won't contain the same information. Does setting the logging set to 'Fixed Interval', 'Interval = 1' and 'Snapshot' just record the value every second rather than averaging 10 samples over a second? If this is the case, could a sequence be written to log the averaged data?

With regard to the history interval, I was just being cautious. Is having a longer history interval more of an advantage?

Kind Regards

Ian


If you have the Avg? checked and 10 specified for the count, then the data will look like 1 second data to every other part of DAQFactory including the logging set. The only thing that actually runs at 0.1 seconds is the acquisition. So, the snapshot is of the 1 second average, not any single 0.1 second reading.
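The block averaging can be sketched in Python like this (a simplified model of what the Avg? setting with a count of 10 does, not the actual implementation):

```python
def block_average(samples, count=10):
    # Collapse raw readings into block averages: every `count` raw
    # points become one averaged point, so 0.1 s data "looks like"
    # 1 s data to everything downstream. Incomplete trailing blocks
    # are dropped.
    return [sum(samples[i:i + count]) / count
            for i in range(0, len(samples) - count + 1, count)]
```

So the snapshot the logging set takes is of one of these averaged points, never of an individual 0.1 s reading.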

History Interval: there is almost no reason to have a value other than 1 here. The only case I could see is if you had a computer with little available disk space for Persist and you wanted to be able to see data back a really long time. With only 10 channels, even if you had a history of a million, you are only talking 16 meg per channel, so 160 meg total. Of course, a history of a million is only 11.5 days at one-second data, so I suppose if you needed to go back much farther and didn't want to use Persist or log files, you might set the history interval to 10 to extend the history back to 115 days, but of course you'd only have every 10th data point.
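The coverage arithmetic, sketched in Python:

```python
def history_span_days(history_length, sample_interval_s=1.0, history_interval=1):
    # Wall-clock span covered by the history buffer, in days. A history
    # interval of N keeps only every Nth point, stretching the coverage
    # N-fold at the cost of resolution.
    return history_length * sample_interval_s * history_interval / 86400.0
```

A million points of one-second data spans about 11.6 days; keeping every 10th point stretches that to about 116 days with the same buffer.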


Hi,

Thanks for the reply. I have set the program to log a fixed-interval snapshot every second, whilst leaving the channel settings to average the incoming 0.1 s samples every second. This seems to work fine. I have noticed a slight 'creeping' of the timestamp; for example, the values for successive samples are:

19.203s

20.204s

21.206s

22.207s

23.208s

I can live with this, but I am just curious as to why it's happening.

Kind Regards

Ian


Unlike the timing loops, the logging loop is not self-correcting for latency. With Aligned data, it doesn't matter because the time stamp is the time stamp of the data, but with snapshot, the logging set is basically going out every interval and taking a snapshot of whatever the current reading is and logging a line for it. It assigns the time when the snapshot is made. That causes two problems:

1) Since it isn't self-correcting, there will be drift. Basically, if the interval is set to 1, it waits 1 second from the time it last logged to the time it starts logging again. This is done internally with a simple Sleep call, which, depending on what Windows is doing, can actually be longer than what is specified. As I said, in timing loops a different method is used to ensure that the interval remains fixed. It, however, is more CPU intensive, which is why it is not used in the logging set.

2) The timestamp is when the snapshot occurred, not when the data was taken. This means that, in your case, the timestamp could be up to 1 second off. If a data point comes in just after a snapshot occurs, it will be almost 1 second before the next snapshot, and so the times will be way off. You can use the "include time with all" option to get the exact times of each channel.
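For reference, a self-correcting loop of the kind described in point 1 can be sketched in Python: instead of sleeping a fixed interval after each pass, it sleeps until an absolute deadline, so latency in any one pass does not accumulate into drift.

```python
import time

def run_fixed_rate(interval, iterations, work):
    # Schedule each pass against an absolute deadline rather than
    # sleeping a fixed interval after the work finishes. If one pass
    # overruns, the next sleep is correspondingly shorter, so the
    # average rate stays locked to `interval`.
    next_t = time.monotonic()
    for _ in range(iterations):
        work()
        next_t += interval
        time.sleep(max(0.0, next_t - time.monotonic()))
```

A bare `sleep(interval)` after each pass, by contrast, drifts by however long each pass (plus the sleep overshoot) takes, which is exactly the millisecond-per-second creep seen in the logged timestamps above.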


Hi,

I have been doing some more testing, with the logging parameters set to 'Fixed Interval', 'Interval = 1' and 'Average', whilst acquiring at 10 samples/sec (with no averaging directly on the channel). The timestamp does creep slightly, as expected, given how the logging loop is timed. But I think the data is what I want: a 1 s average of the 10 samples/sec raw data. I presume that in fixed-interval mode, when average is selected, it will average all the samples acquired since the last averaging operation.

Just out of curiosity, is there any way of using sequences and export sets to get the same functionality, and perhaps more tightly controlled timestamps? Or is the loop-timing mechanism in sequence programming the same as the type used in logging?

Kind Regards

Ian


Yes, certainly. DAQFactory is designed to be easy to use for the basic, most common tasks without scripting, but then offers a powerful scripting backend when you need really fine control over what you are doing. You could combine a regular sequence with an export set, or for even more control, use the File. functions to log to a file directly in script. Then you can control exactly what is output and how. You could even log to other formats, like XML or HTML. You would also have complete control over the timestamps. How this is done depends a little on which timestamp you want to use. The inputs aren't going to have the same timestamp, even with averaging turned off, so which timestamp do you want to use? How much inaccuracy can you tolerate in your timestamp?


Archived

This topic is now archived and is closed to further replies.