This is a good point. Historically, many labs have collected very short bursts of data, and until now I have been resistant to accommodating that. The reason is that after the quality controls applied in L1AQC and L1BQC, there are often only five or fewer spectra available to work with in one minute of data (theoretically this could be as high as ~30, but in practice it never is). With a one-minute ensemble in L2, one would be faced with discarding (by default) all but perhaps one spectrum during glitter removal (the brightest 90% of Lt). That does not allow for any estimation of standard error and would be far less likely to be representative than an average of multiple spectra over time. On the other hand, if your situation is simply that your files are broken up into one-minute intervals but acquired continuously (i.e., one minute per file), your raw files could simply be concatenated into an hour's worth of data and processed the usual way.
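For the continuous-acquisition case, the concatenation could be as simple as the sketch below. This assumes the HOCR .raw format is a plain byte stream that tolerates being joined end-to-end (no per-file header that would need stripping); the function name and file paths are illustrative, not part of HyperCP.

```python
from pathlib import Path

def concatenate_raw(files, out_path):
    """Join consecutive .raw files into one byte stream.

    Assumes the .raw format is a plain, append-able byte stream;
    if each file carries its own header, this naive join would
    need to strip all but the first one.
    """
    with open(out_path, "wb") as out:
        for f in sorted(files):
            out.write(Path(f).read_bytes())

# e.g., merge one hour of one-minute files (hypothetical layout):
# concatenate_raw(Path("data").glob("minute_*.raw"), "data/hour_01.raw")
```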
As of now, the code doesn't allow for ensembles of less than one minute (HyperCP/Source/ProcessL2.py, lines 1235 to 1236 at commit 8ed6754).
For me, this limit is too stringent. The data acquired at my lab (HOCR .raw files of ~300 kB) consist of files each containing less than one minute of acquisition.
Nevertheless, by relaxing this 60-second limit to 45 or 30 seconds, I can process these data and get good results.
I propose relaxing the ensemble time lower limit to 45 or 30 seconds so that this manual acquisition mode can be processed with HyperCP.
I believe that allowing the reprocessing of older datasets is key to the adoption of the software. Once people adopt it, it will likely be easier to convince them to change their data-acquisition habits.
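To illustrate the proposal, here is a hypothetical sketch of a configurable minimum-interval guard. The actual check at ProcessL2.py lines 1235 to 1236 is not reproduced here, and the constant and function names below are invented for illustration only:

```python
# Illustrative only: the real guard lives in Source/ProcessL2.py
# (lines 1235-1236 at commit 8ed6754) and may differ in detail.
MIN_ENSEMBLE_SECONDS = 45  # relaxed from the current hard-coded 60 s

def ensemble_interval_ok(interval_seconds: float) -> bool:
    """Return True if the requested L2 ensemble interval is allowed."""
    if interval_seconds < MIN_ENSEMBLE_SECONDS:
        print(f"Ensemble interval {interval_seconds:.0f} s is below the "
              f"{MIN_ENSEMBLE_SECONDS} s minimum; skipping ensemble.")
        return False
    return True
```

Exposing the threshold as a single constant (or a Configuration entry) would let labs with short acquisition bursts opt in without affecting the default behaviour.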