MFMP: Automated experiment with Ni-LiAlH

  • can,

    I like your observation. I can't say exactly what is happening, but the fact that there is some change is reason to continue the experiment somewhat longer. Note that the steps, and the noise that appears in the pressure reading at the edges where the output switches between steps, are caused by the Honeywell PX2 sensor. Apparently that is a characteristic of its signal processing chip, about which others have complained for this sensor series.


    Anyone have a suggestion for a quieter absolute pressure sensor?

  • BobHiggins

    I'm wondering if increasing the rate of the "test" commands (for example to once an hour) will increase the rate of pressure decrease.

    Or even better, if there will be a faster rate of decrease of the pressure at 1150°C by periodically bringing temperature to the level where LiH forms (if there is still enough free Li left) instead of executing the test command.

  • can,

    Limitations, limitations of script-driven experiments. While I can intervene by aborting and starting a new script, or by jumping to a different place in this script, I do not currently have the functionality to manually change the voltage, temperature, or pressure setpoint. I am keeping my eye on this experiment for new features to add to the program. One of the programming issues is that I want the experiment to be repeatable. So, if I create a way to manually intervene, it needs a way to log the commands that were actually made, so that the manually intervened protocol can be repeated in a future experiment to see if the result happens again. It is possible to do that; it is just a bunch of additional code to decode the manual inputs into script-like commands. For example, if I were to manually command a new temperature and/or pressure, the system would have to log that the previous command was shortened in time, that a new command was entered, and then wait until the next command to know how long that manual command lasted. Then what: do I resume the script, or is everything manual from then on?
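The logging problem described above - a manual command's duration is only known once the next command arrives - can be sketched roughly as follows. This is a hypothetical illustration; the class, file format, and command names are not from the actual LabVIEW program:

```python
import json, os, tempfile, time

class CommandLogger:
    """Hypothetical sketch: record manual overrides as script-style commands
    so a manually steered run can be replayed later. A command's duration is
    only known once the next command arrives, so the previous entry is
    flushed to the log at that point."""

    def __init__(self, path):
        self.path = path
        self.pending = None   # last command; duration unknown until the next one

    def log(self, command, setpoint):
        now = time.time()
        flushed = None
        if self.pending is not None:
            # close out the previous command now that its duration is known
            self.pending["duration_s"] = round(now - self.pending["t"], 1)
            with open(self.path, "a") as f:
                f.write(json.dumps(self.pending) + "\n")
            flushed = self.pending
        self.pending = {"t": now, "command": command, "setpoint": setpoint}
        return flushed

logger = CommandLogger(os.path.join(tempfile.gettempdir(), "manual_log.jsonl"))
logger.log("set_temperature", 1150)
flushed = logger.log("set_pressure", 0.5)   # closes out the first command
```

On resume, a replay tool could read the JSON lines back as if they were ordinary script steps.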

  • Limitations, limitations of script-driven experiments. While I can intervene by aborting and starting a new script, or by jumping to a different place in this script, I do not currently have the functionality to manually change the voltage, temperature, or pressure setpoint.


    I don't understand where would be the problem in doing this:


    - Make a copy of the currently running script;

    - Edit the script to add appropriate temperature/power changes approximately where the experiment currently is. If you don't really know what to add you could make it execute the test command more frequently;

    - Abort the current script;

    - Load the new script;

    - Jump to the newly added portion of the script.

  • can,

    Yes, that would be possible. I could do that even today (abort, restart, and jump to another point in the script), but it would create a glitch in the operation, and I would have to be really careful in doing it in a way that the repeatability is not compromised. The timestamps would still be OK.

  • 20170404-0330UT - something has crashed the operation of the Labview program. Suspiciously, it looks like the data stopped at file index 099. Could I have a counting bug? Well, the experiment will cool off overnight and I will begin looking for the bug and setting up the calibration run in the morning.


    Thanks, can, for all of the support, and thanks to everyone for the suggestions I received. I plan for the next run to go more smoothly.


    Temperature already down to 900C and I can't record the termination.

  • This is a graph of the entire experiment: round2-pdf-1491290245.pdf


    And this is a graph of the 1150°C part where pressure slowly decreased: round2-pdf-1491290394.pdf


    This is a zipped .csv file containing all the data in a single file, without dropped columns, but also full time stamps and calculated columns. Since it was too large (24361 kB zipped) I had to upload it elsewhere. Good luck using it in a spreadsheet: https://mega.nz/#!2h5UkbyJ!b4t…0KmRFyqjQqr3JL9_0ocCsGMN8


    (294,924 rows of data x 25 columns = 7,373,100 data samples)


    BobHiggins

    There's also another bug: it seems that the first character of the first 9 rows in your .csv files is often missing. More importantly, in the column label list this results in one column being called "ample" instead of "Sample".
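    A truncated header like that can be patched up on read. This is an illustrative sketch, not the fix for the LabVIEW side; the column names are assumptions based on the "ample"/"Sample" example:

```python
def repair_truncated_fields(fields, expected):
    """Illustrative fix for the truncation bug: if a field matches one of
    the expected names with its first character dropped ('ample' vs
    'Sample'), restore the full name. `expected` is the known-good column
    label list."""
    fixed = []
    for field in fields:
        for name in expected:
            if len(name) == len(field) + 1 and name.endswith(field):
                field = name
                break
        fixed.append(field)
    return fixed

header = repair_truncated_fields(["ample", "Time", "Pressure"],
                                 ["Sample", "Time", "Pressure"])
```

The same check could be run on the first few data rows, though numeric fields would need a different heuristic.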


  • Here is a multipage document showing all the saved spectra from the experiment. The "start date" is the raw internal date of the various saved files: it can be seen that even though they appear to be in 24 hour format they actually are in 12 hour format. I have sorted the files by creation date, so they are in correct ordering in the document, but this method is inherently brittle. (EDIT: this method also circumvents the issue of the non-zero-padded file numbers).
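    Since sorting by creation date is brittle, one common workaround for non-zero-padded file numbers is a "natural" sort key. A minimal sketch (file names are made up for illustration):

```python
import re

def natural_key(filename):
    """Split the name into digit and non-digit runs so that non-zero-padded
    numbers sort numerically: 'spec_2' comes before 'spec_10'. More robust
    than relying on file creation dates."""
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r"(\d+)", filename)]

ordered = sorted(["spec_10.csv", "spec_2.csv", "spec_1.csv"], key=natural_key)
```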

  • I want to thank Bob Higgins and "can" for not only their impressive efforts but the willingness to share it with us anonymous observers!


    Learning and moving forward... job well done. (Much more productive than arguing about imaginary heat exchangers! ;))


    Again, thank you! :thumbup:

  • I also would like to repeat my thanks to can for his hard work in keeping up with the graph reporting. It made a lot of difference to the crowd experience. We will have to work on automating this. Thank you to those of you who have offered to help!


    The experiment was stopped last night after running for 4 days. Something crashed or got hung in one of the Labview threads. It is funny how such multi-threaded programs fail - they stop working correctly, but many of the other autonomous threads that make up the whole app just keep going, like a robot out of control. The program kept measuring but stopped putting data into files at about 20170404-0300UT. It wouldn't accept the command to jump to the cool-down sequence, so I just had to shut it down.


    We got 4 full days of operation with at least 2 of those days at 1150C continuously (with testing pulses). I will be setting up a calibration run to begin tomorrow (I hope) - it will be a 24 hour cycle. I don't think there was any XH and I didn't see anything in the gamma detection (but have not done any photometric calibration yet). The calibration should work as a pre-calibration for the next experiment and a post-calibration for this one.


    I am going to be doing some of the debugging of the Labview code today. I hope that once the bugs are removed and some other automation is put in place, these experiments can be run regularly - eventually hoping for no more than 1 week down between experiments. If the experiment is being run autonomously, I can spend my time preparing new experiments. Dusty plasma should be fun.

  • can

    That is from 137Cs and should be 662 keV. The second peak is from 40K and should be 1461 keV. Presume that (0,0) is one of the points and do a 3-point fit for the energy scale. Normally I do the fit for the energy scale, and re-sample to a uniform 1 keV/sample. Then sum the counts in the starting waveform and scale the re-sampled waveform to have the same sum.
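    The calibration described above - a 3-point fit through (0,0) and the two known peaks, then resampling to uniform 1 keV bins with the total counts preserved - can be sketched like this. The function name and the example channel numbers are assumptions for illustration; locating the peaks is assumed to have been done already:

```python
import numpy as np

def calibrate_and_resample(counts, ch_cs, ch_k):
    """Fit an exact quadratic energy scale through (0, 0) and the channel
    positions of the 662 keV (137Cs) and 1461 keV (40K) peaks, then
    resample the spectrum to uniform 1 keV bins, rescaled so the total
    number of counts matches the starting waveform."""
    channels = np.arange(len(counts), dtype=float)
    coeff = np.polyfit([0.0, ch_cs, ch_k], [0.0, 662.0, 1461.0], 2)
    energy = np.polyval(coeff, channels)
    grid = np.arange(1.0, energy[-1])            # uniform 1 keV bins
    resampled = np.interp(grid, energy, counts)
    resampled *= counts.sum() / resampled.sum()  # same sum as the original
    return grid, resampled

counts = np.ones(1024)        # flat dummy spectrum, just for illustration
grid, spec = calibrate_and_resample(counts, ch_cs=331.0, ch_k=730.5)
```

With three points, the degree-2 polynomial fit passes through them exactly, so forcing (0,0) costs nothing extra.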


    I do this calibration for each integration (file) and then you can median combine or average to combine multiple files. When you are done, you can divide by the live time (in this case 5000s) and then scale any file you are testing by the number of live time seconds.


    If you combine the files by average, test the result by taking any one calibrated file and subtract the average file. The result should be zero mean noise. If it is not zero mean, then the single file or the average may have a signal in it.
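    The averaging, live-time scaling, and zero-mean residual test described above might look roughly like this in Python (a sketch under the assumption that all spectra are already calibrated onto the same energy grid):

```python
import numpy as np

def residual_check(calibrated, live_time=5000.0):
    """Average a stack of calibrated spectra (one row per file), convert to
    counts per live-time second, and return the residual of the first
    spectrum against the average. If that residual is zero-mean noise,
    neither the file nor the average is likely to contain an extra signal."""
    spectra = np.asarray(calibrated, dtype=float) / live_time   # counts/s
    average = spectra.mean(axis=0)
    residual = spectra[0] - average
    return residual, residual.mean()

# two identical dummy spectra: the residual must be exactly zero
residual, mean = residual_check([[10, 20, 30], [10, 20, 30]])
```

A median combine instead of the mean is one line of change (`np.median(spectra, axis=0)`) and is more robust to a single file containing a transient.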

  • It took a while but I managed to do this. Below are a sample image and a multipage pdf document of all processed spectra from this experiment.

    The vertical lines show the locations of the uncalibrated 661 keV and 1461 keV peaks. The script finds them automatically, or at least tries to.

    All spectra are resampled to 2048 bins in the 32-2000 keV range, allowing trivial subtraction of the average.



  • can,

    Very nice work! A couple of comments ... When you re-scale the energy, the range does not always turn out to be the same 2048, so what you do is find the max bin and let's say it is 2087. Then you resample everything in 1 keV bins from 1-2087 for that waveform, and you normalize to have the same counts as the original waveform. The scaling may change for the next waveform - the maximum bin might be 2082, so you re-sample in 1 keV bins from 1-2082, and normalize for the same total number of counts. Now, you can average the two waveforms, but you have to throw away the data above 2082, or just use the first waveform's resampled values for 2083-2087. You get the idea.


    Also, it is customary to show the plot of the residual (average subtracted) with a minimum Y value that is negative, so that you can see whether the noise is zero-mean, i.e. centered on Y=0.


    I have something in Matlab similar to what you did (which I presume is in Python). Python seems to be a wonderful opportunity to have a very capable language that is free. Can you create standalone apps in Python?

  • BobHiggins,

    I am rusty, but have your script check a flat file for parameters on a timer, say every 15 minutes. If the flat file has not changed, then no bother; if it has changed, then the new data is acted on.
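    A minimal sketch of that polling idea, assuming a JSON flat file (the file name and parameter names are made up for illustration):

```python
import json, os, tempfile

def poll_parameters(path, last_mtime):
    """Check the flat file's modification time; only re-read it when it has
    changed since the last poll. Meant to be called from a timer, say every
    15 minutes."""
    mtime = os.path.getmtime(path)
    if mtime == last_mtime:
        return None, last_mtime           # unchanged: no bother
    with open(path) as f:
        return json.load(f), mtime        # changed: act on the new data

path = os.path.join(tempfile.gettempdir(), "experiment_params.json")
with open(path, "w") as f:
    json.dump({"setpoint_C": 1150}, f)

params, mtime = poll_parameters(path, 0)   # first poll: file looks new
again, _ = poll_parameters(path, mtime)    # second poll: unchanged
```

Because the change is detected from the file's mtime, an operator can edit the file by hand and the running program picks it up on the next timer tick.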

    As for hung threads, use a watchdog to check whether each thread is still running. If you are stopping and having to restart, well, we don't want that, and maybe you can come up with a way to either bypass the problem or fix it. Have a watchdog for any realtime processes.

  • Rigel,

    For me the program has evolved to be pretty complex. The data from the DAQ comes in pretty fast, and one thread formats it and puts it into a FIFO. A separate thread queries the FIFO, and if there are 50 samples in the FIFO it opens the storage file for that hour and appends the data while emptying the FIFO. If the timestamp of the last sample exceeds the file's first-record timestamp by a specified time length (an hour in this case), a new file is opened and the new file name is set up for the data to be appended to. The program has at least 5 threads running, but I don't remember for sure. I can always add another thread as a separate watchdog, but I think I could put it in one of my monitoring threads.
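    The writer thread's batching and hourly rollover logic can be sketched as a single pass (a simplified, hypothetical version of the LabVIEW logic; names and the sample format are illustrative):

```python
import queue

BATCH = 50          # flush once this many samples are queued
ROLLOVER_S = 3600   # one storage file per hour

def writer_step(fifo, file_first_ts):
    """One pass of a simplified writer thread: drain a batch of
    (timestamp, values) samples from the FIFO and report whether the last
    timestamp has passed the current file's first-record timestamp by the
    rollover period, i.e. whether a new hourly file should be opened."""
    if fifo.qsize() < BATCH:
        return [], False                      # not enough samples yet
    rows = [fifo.get_nowait() for _ in range(BATCH)]
    rollover = rows[-1][0] - file_first_ts >= ROLLOVER_S
    return rows, rollover

fifo = queue.Queue()
for t in range(BATCH):
    fifo.put((3600 + t, 0.0))                 # samples an hour into the file
rows, rollover = writer_step(fifo, file_first_ts=0)
```

Keeping acquisition and disk I/O in separate threads with a FIFO between them means a slow disk write never stalls the DAQ loop, at the cost of the failure mode described above: if the writer thread hangs, the rest of the app keeps running blind.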


    There are some noted bugs and some gut-feel areas to look in for the bug that caused the failure.


    can,

    I have added to the Google drive the gamma spectra files from overnight while the reactor was cooling, though for most of them the reactor is already cool.

  • Very nice work! A couple of comments ... When you re-scale the energy, the range does not always turn out to be the same 2048, so what you do is find the max bin and let's say it is 2087. Then you resample everything in 1 keV bins from 1-2087 for that waveform, and you normalize to have the same counts as the original waveform. The scaling may change for the next waveform - the maximum bin might be 2082, so you re-sample in 1 keV bins from 1-2082, and normalize for the same total number of counts. Now, you can average the two waveforms, but you have to throw away the data above 2082, or just use the first waveform's resampled values for 2083-2087. You get the idea.


    I think I'm doing it a little differently. There is a function in one of the libraries I'm using that takes two arrays of X and Y values (for example, Energy and Counts) and returns an interpolating function that quite closely approximates Y for any new value of X (typically an array of values) it is given.


    The new array of X values can be arbitrary (for example from 50 to 1000 in 300 subdivisions, or from 32 to 2000 in steps of 1), but the results will be more accurate if it more or less matches the density of the original data the interpolating function was produced from. It also extrapolates for out-of-range X values (e.g. 2200 keV), but obviously one wouldn't normally want to do that.


    For every calibrated spectrum I create a new interpolated spectrum with the exact same predefined energy range and number of steps; these can therefore easily be subtracted, averaged, etc. with each other.


    So far what I'm doing seems to work, but there might be better or more correct ways ...
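    The fixed-grid approach described above might look like this. `np.interp` stands in here for whatever interpolating function the library actually provides; the 32-2000 keV / 2048-bin grid matches the numbers given earlier in the thread:

```python
import numpy as np

GRID = np.linspace(32, 2000, 2048)   # shared axis: 2048 bins over 32-2000 keV

def to_common_grid(energy, counts):
    """Resample one calibrated spectrum onto the shared energy grid so that
    spectra can be subtracted or averaged element-wise. Out-of-range
    energies are filled with zero counts rather than extrapolated."""
    return np.interp(GRID, energy, counts, left=0.0, right=0.0)

energy = np.linspace(0, 2100, 500)       # dummy calibrated energy axis
spectrum = to_common_grid(energy, np.ones(500))
```

Note that unlike the per-waveform 1 keV resampling suggested above, a fixed grid trades a little accuracy at the edges for the convenience of identical array shapes everywhere.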


    Also, it is customary to show the plot of the residual (average subtracted) with a minimum Y value that is negative, so that you can see if the noise is zero-mean; I.E. centered on Y=0.


    I've just tried to do this: round2_spectra_all.pdf


    I have something in Matlab similar to what you did (which I presume is in Python). Python seems to be a wonderful opportunity to have a very capable language that is free. Can you create standalone apps in Python?


    Python is a general-purpose language which can be used for many things, not just numerical and data analysis (actually, those capabilities came along relatively recently). Standalone applications can be made in Python. However, it's an interpreted language, and such applications generally need to be bundled with a Python interpreter.


    For my work with these graphs, etc, I'm using an IDE called Spyder which offers a working environment somewhat similar to Matlab. Usually it's not recommended to download every library or component separately; for the purpose of scientific analysis there are ready-made environments called Python distributions which come bundled with the most useful libraries and tools for this. A famous one is Anaconda. I'm using one called WinPython.


    BTW, on a slightly related note, from today's documents in the Court Docket:


