Customizing log/export to match format


aceav8or


I know DAQFactory does just about everything imaginable, so here's my question: how do I get my log to appear in the format below so I can directly import it into another program?

If I'm better off exporting it instead, can you give me an idea of how to do that? All I've worked with is the logging function.

Here is what I've learned about the format:

Files are ASCII text and look like this:

MULTICURVE:

"curve1" "curve2" "curve3" "curve4" "curve5" "curve6" "curve7"

0.00 0 0 0 0 0 0 0

0.01 8.64 0 24.28 0 0 0 0

0.02 8.63 0 24.43 0 0 0 0

0.03 8.64 0 24.58 0 0 0 0

0.04 8.64 0 24.73 0 0 0 0

0.05 8.64 0 24.87 0 0 0 0

0.06 8.64 0 25.00 0 0 0 0

0.07 8.65 0 25.13 0 0 0 0

0.08 8.64 0 25.26 0 0 0 0

0.09 8.64 0 25.37 0 0 0 0

(etc)

1539.98 0 0 0 0 0 0 0

1539.99 0 0 0 0 0 0 0

1540.00 0 0 0 0 0 0 0

END-DATA:

The curve names in quotes are the column headings in your data file; you can have as many columns as you wish. Columns are separated by <Tab> characters.

The only other required item is the SETUP command, at the top of the file, which lists the top and base depth for the log, and (optionally) the XYZ coordinates:

SETUP: 0 1597 0 0 0

Finally, the END-LOG: command goes at the end of the data file:

END-LOG:
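
Putting those pieces together, the whole file is laid out like this (middle rows omitted):

SETUP: 0 1597 0 0 0
MULTICURVE:
"curve1" "curve2" "curve3" "curve4" "curve5" "curve6" "curve7"
0.00 0 0 0 0 0 0 0
0.01 8.64 0 24.28 0 0 0 0
(etc)
1540.00 0 0 0 0 0 0 0
END-DATA:
END-LOG: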


Well, you can do this one of three ways, two of which are very similar and are the ones I'll stick with. You'll use the File. functions for low-level file generation. I'll assume you are doing this after the fact and that your data is in curve1 through curve7:

private handle = file.open("c:\myfile.dat", 0, 1, 0, 1)
file.write(handle, "SETUP: 0 1597 0 0 0")
file.write(handle, "MULTICURVE:")
file.write(handle, '"curve1" "curve2" "curve3" "curve4" "curve5" "curve6" "curve7"')
for (private i = 0, i < numrows(curve1), i++)
   file.write(handle, "" + curve1[i] + chr(9) + curve2[i] + chr(9) + curve3[i] + chr(9) + curve4[i] + chr(9) + curve5[i] + chr(9) + curve6[i] + chr(9) + curve7[i])
endfor
file.write(handle, "END-DATA:")
file.write(handle, "END-LOG:")
file.close(handle)

That's the general gist.


I need to be able to export this at will, even while data is still being collected.

When I switch curve1, etc., for channel names I get this back: "C1086 One of the parameters was empty: Line 6 - unable to perform quick sequence action."

Can you explain lines 5 and 6 for me?

How would I log time as one of the curves? Systime() didn't seem to work.

Thanks guru!


I wouldn't run it directly in a quick sequence, because if it takes a long time it will stall DAQFactory. Instead, I'd put the export in a regular sequence and beginseq() it after querying a file name. To query the file name, use File.openFileDialog(). Something like:


// quick sequence: ask the user for a file name, then kick off the export sequence
global string filename = file.openFileDialog()
if (!isEmpty(filename))
   beginseq(myLogScript)
endif

and have your log script reference the global "filename" variable.

I can't address the channel name or time errors directly without seeing your whole file. You don't want systime() because that is just the current time, not the time of the data points. To get the time of the data point, use something like:

curve1.time[i]

Pick any of the channels.

Line 5 just causes it to loop through all the historical values in curve1. Line 6 writes a single line with all the different channels separated by tab characters (chr(9)). The starting "" just tells DAQFactory to create a string.
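
Putting that together with the earlier snippet, here's a rough, untested sketch of what the myLogScript sequence might look like: it writes the elapsed time of each point as the first column (the way the depth column appears in your sample file) and opens the file chosen in the dialog through the global filename. The channel names and the SETUP values are just placeholders carried over from the earlier example.

// myLogScript: export channel history to the MULTICURVE file chosen in the dialog
private handle = file.open(filename, 0, 1, 0, 1)
file.write(handle, "SETUP: 0 1597 0 0 0")
file.write(handle, "MULTICURVE:")
file.write(handle, '"curve1" "curve2" "curve3" "curve4" "curve5" "curve6" "curve7"')
// [0] is the newest point in a channel's history, so count down from the oldest
// index to write the rows in ascending time order, starting at 0
private startTime = curve1.time[numrows(curve1) - 1]
for (private i = numrows(curve1) - 1, i >= 0, i--)
   file.write(handle, "" + (curve1.time[i] - startTime) + chr(9) + curve1[i] + chr(9) + curve2[i] + chr(9) + curve3[i] + chr(9) + curve4[i] + chr(9) + curve5[i] + chr(9) + curve6[i] + chr(9) + curve7[i])
endfor
file.write(handle, "END-DATA:")
file.write(handle, "END-LOG:")
file.close(handle)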



I have a channel that is an incremented count. If I want to export, in the above format, only the max value of the other channels over the time period it took the count to increment, how would I do that? That is to say, if the count is 1 and curve1 goes from 3 to 7 and back to 3 before the count gets to 2, then I only want to add one line to the export, i.e. count = 1; curve1 = 7.

Thanks!


That sort of thing you are going to have to calculate and store in a variable before logging; you aren't going to be able to do advanced logic like that in the export/logging set itself. To make the calc, I'd simply create an event on your count channel so that whenever it changes, it does a max() over the interval from the last change to the current read. You'll first need to determine when the last change in count was. That's pretty easy: just record it in some global variable. It'd go something like this:



// event on the count channel:
if (count[0] != count[1])
   // count just changed: take the max of the other channel since the last change
   theMax = max(otherChannel[lastTime, count.time[0]])
   lastTime = count.time[0] + 0.001
endif

You'll need to declare and initialize lastTime somewhere, probably to 0.
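
For instance, a short auto-start startup sequence could take care of that; theMax is just the name carried over from the snippet above:

// startup sequence: globals used by the count channel event
global lastTime = 0   // time of the last change in count
global theMax = 0     // most recent maximum of otherChannel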



Well, if it's actually streaming (i.e. high-speed LabJack streaming), you aren't going to be able to time it perfectly. If by streaming you just mean continuous data, you can check whether a value is a whole integer instead of a fraction in a number of ways; the simplest is to compare the value to its floor:

myValue == floor(myValue)

but there are others, for example:

myValue % 1 == 0


It's continuous data coming from a serial signal and added to a channel. So I guess I really want to get the max of another channel from the time the serial data first crosses an integer to the time it leaves that integer, whether or not the value ever lands exactly on that integer.


That's different. You'll need to capture the times. First, in a startup sequence, create a global variable. Call it something like LastTime and initialize it to 0. Then in the event for the channel that crosses the integer, do something like:


// event on myChan (the channel fed by the serial data):
if (floor(myChan[0]) != floor(myChan[1]))
   // just crossed into a new integer: store the max over the previous integer's span
   if (lastTime != 0)
      anotherChannel.addValue(max(myChan[lastTime, myChan.time[1]]))
   endif
   lastTime = myChan.time[0]
endif
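
One assumption worth spelling out: anotherChannel needs to be an existing channel that can accept values through AddValue(); a channel on the Test device with Timing set to 0 is one common way to hold calculated values like this. And lastTime again comes from the startup sequence:

// startup sequence (auto-start): global used by the event above
global lastTime = 0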

