Multiple digital inputs + conversion for logging



I am a new LabJack user, and so far very enthused about what the UE9 is capable of doing compared with other vendors' products.

My application uses one analog input to monitor the output of a signal-conditioning amplifier, and 20 of the digital I/O to monitor the outputs of a 20-bit binary counter, which counts pulses from a quadrature incremental encoder. The reasons why it's being done this way are beyond the scope of this post, but my problem is that I need to convert the raw inputs coming into the UE9 into two scaled and weighted parameters that can be used for Variable Display objects, Graphing traces, and most importantly, logging outputs.

Firstly, the analog signal I'm measuring is +10V for zero-scale and 0V for full-scale, so the voltage coming into the UE9's analog input (via an LJTick-Divider) needs to be multiplied by a scaling factor, and the result subtracted from 10 to get a true scale value. I can achieve what I want in the Display objects and graphs by using conversion formulae in the appropriate places, but I could not find any really simple way of getting that converted value output to a logging set. I posted this question on the LabJack forum and received a working solution, i.e. use Conversions - if anyone has a better suggestion, I'd like to hear it, particularly if it ties in with one for the larger problem, next.
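For the record, a Conversion here is just an expression in which Value stands for the channel's raw reading, so the analog fix boils down to something like the line below (the factor of 2 is only a stand-in for my actual LJTick-Divider ratio):

10 - (Value * 2)

Once that Conversion is assigned to the channel itself, the Display objects, Graphs and logging sets all pick up the converted number rather than the raw voltage.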

The 20 digital I/O lines are even tougher. Each digital bit has an appropriate binary weighting applied to it so that when I add them all up I get the decimal equivalent of the binary count. I then apply a scaling factor to that result to get a true relationship between the number of pulses going into my (external) binary counter and the angular displacement of the encoder shaft. Again, I can get conversions working for Display Variables and Graphs - the expressions run to ten lines - but I need to get that converted decimal number into the logging set as just one parameter.
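Spelled out, if I call the digital channels D0 through D19, what I'm after is simply:

Angle = ScaleFactor * (D0*1 + D1*2 + D2*4 + ... + D19*524288)

i.e. the binary-weighted sum of the 20 bits, multiplied by the factor that relates counter pulses to shaft angle (ScaleFactor is just a stand-in for my real number).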

The obvious answer to this is to use scripting, but the other users of the machine onto which my UE9 will be grafted want the flexibility to create .ctl files of their own, and whilst they are all scientifically savvy, most of them don't know anything about programming, let alone C or C++ syntax.

I'm hoping that I've missed something in the moderate amount of reading I've done of the Application Guides, and that someone can advise me as to a direction, or illustrate using a previous solution to a similar problem.

Thanks to all who contribute here,

Instron


For the analog you want a Conversion. This is the simplest way and is what Conversions are designed for.

The digitals are tougher. Normally you'd have to use scripting. There is a trick you can use, but it assumes that you're reading these values relatively slowly. What you do is this:

1) I'll assume you are reading the digitals at, say, Timing 1, Offset 0. Create a new channel with Device Type "Test", D# = 0, I/O Type A to D, Channel #0. Set its Timing to 1 and its Offset to 0.5. This will cause the channel to read 1/2 second after the digitals start reading, which should be enough time for them to be read completely.

2) Create a conversion to convert your digitals into the analog signal. Normally you'd use "Value" in your conversion so it could be applied to multiple channels, but in this case we don't want the value from the Test channel; instead we just use all the digitals directly. So, if the digitals are named D0, D1, etc., you might do:

D0[0] * 53.284 + D1[0] * 32.184

I don't know what your weighting factors or formula are, but hopefully you get the idea.
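If your weighting really is straight binary as you describe, the full conversion ends up as one long expression along these lines - broken across lines here only for readability, with ScaleFactor standing in for whatever number turns counter pulses into shaft angle, and the [0] just taking the most recent value of each digital channel:

(D0[0]*1 + D1[0]*2 + D2[0]*4 + D3[0]*8 + D4[0]*16 +
 D5[0]*32 + D6[0]*64 + D7[0]*128 + D8[0]*256 + D9[0]*512 +
 D10[0]*1024 + D11[0]*2048 + D12[0]*4096 + D13[0]*8192 + D14[0]*16384 +
 D15[0]*32768 + D16[0]*65536 + D17[0]*131072 + D18[0]*262144 + D19[0]*524288) * ScaleFactor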

3) Apply that conversion to the Test channel. This will cause the Test channel to get the result of that calculation, calculated 1/2 second after the digitals start reading.

This only works at slower acquisition rates because we need to make sure the offset is big enough that the Test channel read occurs after the digitals are done.

To do this in script, you simply use AddRequest/GoOne to read the digitals manually (covered in the DF-LJ app guide), then make the calculation and stick the result into a channel using MyChannel.AddValue(). In this case, speed isn't an issue because you are doing everything from a single sequence and therefore a single thread.
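To make that concrete, here is a rough sketch of what such a sequence might look like. The channel and variable names (Angle, ID), the digital line numbering, and the scale factor are all assumptions for illustration only, and the exact AddRequest/GoOne/GetResult syntax should be checked against the DF-LJ application guide:

// Sketch only: "Angle" is a hypothetical channel to log into (Device Type Test,
// A to D), ID is the UE9's local ID, and the digital lines are assumed to be 0-19.
using("device.labjack")           // load the LabJack UD functions and constants

private ID = 1                    // local ID of the UE9 -- adjust to suit
private bit
private count = 0
private i

// queue one read request per digital line feeding the external counter
for (i = 0, i < 20, i++)
   AddRequest(ID, LJ_ioGET_DIGITAL_BIT, i, 0, 0, 0)
endfor
GoOne(ID)                         // execute all the queued requests at once

// collect each bit and build the binary-weighted sum
for (i = 0, i < 20, i++)
   GetResult(ID, LJ_ioGET_DIGITAL_BIT, i, @bit)
   count = count + bit * 2^i
endfor

// scale to shaft angle and push a single value into the logging channel
Angle.AddValue(count * 0.0072)    // 0.0072 is just a placeholder scale factor

Run from a single sequence like this, the reads and the AddValue() all happen in order, which is why timing isn't a concern the way it is with the Test-channel trick.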


Thanks indeed, the Test channel method worked a treat! Eventually I will get the scripting version in place, but for the time being your first suggestion is delivering what's needed to get the users satisfied.

I did suspect that the use of a "Test" channel might be the go, from reading various snippets of other posts on this forum, but could not find any direct references to it in either the LabJack User's Guide or the DAQFactory Express manual. Is this a deprecated feature, or are there more comprehensive references to using them elsewhere?

Thanks again for such prompt and helpful advice.

Instron.

