robopp Posted November 15, 2017

Hi, we're having an issue where our project file is getting corrupted after approximately 24 hours of run time. The good project file is 184 KB; when the file gets corrupted, its size drops to 215 bytes. We have over 120 channels, with most of the channel data coming from a sequence that parses virtual COM telemetry data. The remaining channels are from a LabJack U6. Any idea what's going on here? Does DAQFactory record errors into an error log file or something similar? Also, on this machine we consistently receive the warning that multiple instances of DAQFactory are running, when that's not the case.
AzeoTech Posted November 16, 2017

DAQFactory doesn't access the .ctl document once the application is loaded. There are four ways it could get corrupted:

a. You are using the Auto Persistance Timer (under File - Document Settings). I recommend against this in general; set it to 0. It causes DAQFactory to automatically do a File - Save at some interval, and if a save fails, the file gets corrupted.
b. You are calling system.saveDocument() from somewhere in your script, and the file is getting corrupted on the attempt to save.
c. You are using file. functions, or logging/export sets, and are actually writing over the .ctl file itself.
d. You have a corrupt hard drive.

My guess is that it is the first one.
robopp Posted November 16, 2017

The corruption is probably happening when a manual save is performed, but is there a way to figure out why it's getting corrupted? Another issue we're seeing with this project file: after approximately 24 hours of run time, all graphs go blank and DAQFactory must be restarted to recover. Labels continue to update, but it's as if all history has been lost. Any idea what's going on here, or how I can debug this issue?
AzeoTech Posted November 17, 2017

Does it corrupt every time you save? It seems like a lot of things happen at the 24-hour mark. I'd look through your script and see what happens at 24 hours.
robopp Posted November 17, 2017

No, I believe the corruption is usually tied to the 'loss of history' event. If we save the project at that point, the project file gets corrupted. The only script we're running processes incoming serial data at 1 Hz. There's no significance to the 24-hour mark other than that the same amount of data has been collected by then. We have a demo in a couple of weeks that's going to run over multiple days; having to restart DAQFactory mid-demo is not acceptable. I need help debugging this issue. Would a phone call expedite this process? Thank you, Rob
AzeoTech Posted November 17, 2017

A phone call might help, but seeing your document first would be best. You can post it here, or if you don't want others to see it, email it to us.
robopp Posted November 17, 2017

Sent. Thank you!
AzeoTech Posted November 17, 2017 Share Posted November 17, 2017 Thanks. On first look I think you have your histories set to high. You have 143 channels each with 500,000 historical. That's 1.2 gig of memory. The problem is not so much that, but rather moving that memory around as DAQFactory processes things. If you aren't careful, you can end up processing a lot more memory than you expect. DAQFactory preallocates as soon as you add the first point, so it is allocating a full 1.2 gig of memory with the first data point. I would use Persist instead, especially if you want to go even more than 500,000 data points. With persist you can pretty much go as big as you want, but you have to change the way you access the data. So, you probably would set the history to something small like 1000 for each channel, then set the Persist to 500,000 (or more). Then you'll need to change the way you access the historical. For graphs that means for example putting this in: Boost_5V_Rail[Component._Graph2D.BottomAxis.CurrentScaleFrom, Component._Graph2D.BottomAxis.CurrentScaleTo] instead of your current: Boost_5V_Rail/1000 You also should move the calculations you are doing into channel events. You do this a little where you are calculating relative, but you should do it for other things as well. For example,you have a screen element labelled Boost Energy Produced with an expression of: Sum((Boost_Vout/1000)*(Boost_Iout/1000))/3600 The problem with this is that you are going to pull the entire history of Boost_Vout and Boost_Iout every time the screen refreshes, divide each by 1000, multiply them together (as an array), then sum them. This is fine if your histories are pretty small, but even after 86400 data points, it is really inefficient. You are better off keeping a running calculation of this by using a channel Event and then sticking the running result in a new channel. Then the screen element just accesses the most recent reading of that channel. 
The event would go in Boost_Vout, because your script adds this value after it adds Iout; or you could just put the code right in the telem_parser script after the for() loop. The script might be something like:

BoostEnergyProduced.addValue(BoostEnergyProduced[0] + Boost_Vout[0]/1000 * Boost_Iout[0]/1000)

Then have the variable value component (or graph) simply use:

BoostEnergyProduced[0] / 3600

You'll want to do something similar for the expression in your Boost Power Output graph, where you have:

(Boost_Vout_Final/1000)*(Boost_Iout/1000)

Calculate this each time you update these two channels and put the result in another channel, then plot that channel. It is much more efficient.

Basically, look through your application for anywhere you access channels without any subsetting (i.e. without [] after the channel name). Once you change the history to 1000 for the channels, myChannel without [] will only return the last 1000 values no matter how big the Persist is. You can only reach the Persisted data by using []. This is actually to protect you from accidentally bringing in a million data points when you really only wanted the most recent. I should also point out that your Export set accesses channels this way, with no [].

Finally, is there a particular page being displayed when it fails?
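The trade-off described above, re-summing the entire history on every screen refresh versus folding each new sample into a running total, can be illustrated outside DAQFactory. A minimal Python sketch; the function names and sample values are hypothetical, and the units match the thread (millivolts and milliamps divided by 1000):

```python
# Naive approach: recompute the energy sum over the entire history
# on every refresh -- O(n) work per refresh.
def energy_naive(vout_mv, iout_ma):
    return sum(v / 1000 * i / 1000 for v, i in zip(vout_mv, iout_ma))

# Running-total approach: fold each new sample into the previous
# total as it arrives -- O(1) per sample, O(1) per refresh.
def energy_update(prev_total, v_mv, i_ma):
    return prev_total + v_mv / 1000 * i_ma / 1000

vout = [5000, 5100, 4900]   # millivolts, hypothetical samples
iout = [2000, 2100, 1900]   # milliamps, hypothetical samples

total = 0.0
for v, i in zip(vout, iout):
    total = energy_update(total, v, i)

# Both approaches agree; only the cost per refresh differs.
assert abs(total - energy_naive(vout, iout)) < 1e-9
```

This is the same idea as the channel-event script above: the screen element then reads a single most-recent value instead of pulling two full histories on every refresh.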
robopp Posted November 28, 2017

Thank you for the suggestions. I've implemented almost all of them, but am experiencing an issue with the following:

BoostEnergyProduced.addValue(BoostEnergyProduced[0] + Boost_Vout[0]/1000 * Boost_Iout[0]/1000)

This value never updates; if I watch BoostEnergyProduced, it's blank. What does a blank value mean? If I seed BoostEnergyProduced with a dummy value it finally starts to update, but this isn't a solution. Any idea what's going on here?
AzeoTech Posted November 29, 2017

The problem is that when you first start, BoostEnergyProduced is empty. Then, when the event occurs, the expression inside the addValue() can't evaluate because BoostEnergyProduced is still empty. You have to initialize it first; for example, in a startup sequence put:

BoostEnergyProduced.addValue(0)

You might instead want to set the Persist on that channel to match the history so it survives restarts. Then, in your startup sequence, put this instead:

if (isempty(BoostEnergyProduced))
   BoostEnergyProduced.addValue(0)
endif

so it will only set the value to 0 if it doesn't exist yet (on new installations); otherwise it will use the value from Persist.
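The seed-if-empty pattern above generalizes beyond DAQFactory: initialize a running total only when no persisted value exists, so a restart never clobbers accumulated data. A minimal Python sketch of the same logic; the list stands in for the channel's persisted history and is purely illustrative:

```python
def seed_if_empty(history, initial=0.0):
    """Append a starting value only when the history is empty,
    so a persisted value from a previous run is never overwritten."""
    if not history:
        history.append(initial)
    return history

fresh = seed_if_empty([])          # new install: receives the seed value
restored = seed_if_empty([42.5])   # restart with persisted data: untouched
```

Without the emptiness check, unconditionally seeding on startup would reset the running energy total on every restart, which is exactly what pairing Persist with the isempty() guard avoids.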