MFMP: Automated experiment with Ni-LiAlH

  • BobHiggins

    I was looking at doing it in a more general way, applicable to other data and setups as well. The process I'm using is very similar to what you just described, except that I look for the data points where input power changes the most, then back off a little from there to find settled temperatures and power.
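    For illustration, a minimal sketch of that approach (the function, array names, and window sizes here are made up for the example; on real data, adjacent edge candidates from the same ramp may need de-duplicating):

    Code
    import numpy as np

    def settled_points(power, n_steps, backoff=10, window=60):
        """Average a window of samples just before each power step.

        power:   1-D array of input power, uniformly sampled
        n_steps: number of step changes to look for
        backoff: samples to back off from each detected step edge
        window:  samples to average for the settled value
        """
        # Step edges are where input power changes the most between samples
        edges = np.argsort(np.abs(np.diff(power)))[::-1][:n_steps]
        settled = []
        for edge in sorted(edges):
            stop = max(edge - backoff, window)  # back off a little from the edge
            settled.append(power[stop - window:stop].mean())
        return settled

    The same indices could then be used to average the temperature channel as well.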


    It seems that the tube is requiring a bit less heating power for the same temperatures compared to the previous calibration. Here I'm using the same scale you used for your calibration graph.






  • BobHiggins

    Here are the settled temperature and power data points I have for the ongoing run, in case you're interested in checking/verifying them in detail:


    Step  Tube (C-k)  Heater Power (W)
    0      20.269128    0.000000
    1      99.985764    7.839422
    2     150.119477   12.898864
    3     200.026566   18.259071
    4     249.971209   24.112105
    5     300.061297   30.244218
    6     350.028534   36.811278
    7     400.018285   43.482862
    8     449.942648   50.382975
    9     500.124163   57.344809
    10    549.994370   64.812710
    11    599.680412   72.817857
    12    649.987899   80.373628
    13    700.433194   89.006773
    14    750.010100   97.307769
    15    799.985766  106.204750
    16    850.001880  114.910574
    17    899.999560  124.606630
    18    950.023008  134.319156

  • Eric Walker

    For what it's worth, yesterday I tried searching for peak-finding functions, or functions for finding local minima, but it doesn't seem to be an easy problem to solve efficiently, at least from a cursory search and given my capabilities. In the end I opted for something simple that could be applicable to these experiments. I didn't investigate whether scikit-learn could be useful, and as of yet I don't have any experience with machine learning. I'll try looking into it.
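    One off-the-shelf option along those lines is scipy.signal.find_peaks applied to the absolute sample-to-sample power change; the threshold and spacing values in this sketch are invented and would need tuning:

    Code
    import numpy as np
    from scipy.signal import find_peaks

    # Synthetic stepped power waveform: five soaks of 100 samples each, plus noise
    power = np.repeat([0.0, 8.0, 13.0, 18.0, 24.0], 100) + np.random.normal(0, 0.05, 500)

    # Step edges show up as peaks in the absolute first difference
    edges, _ = find_peaks(np.abs(np.diff(power)), height=1.0, distance=30)
    print(edges)  # expected near 100, 200, 300, 400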


    BobHiggins

    Here's a comparison of the old calibration with the new one:



  • Two difficulties with using a scikit-learn model would be (1) you'd need enough labeled data for input to the model (e.g., taking real data sets and manually annotating step boundaries) and (2) the trained model itself is a binary, not something you can include in a code printout. But with enough training data my guess is that you would obtain a fairly accurate function, provided there are few pathological corner cases in the training data to mess things up. That said, it would not be straightforward to train such a model without prior exposure to simple machine learning tasks.
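    If anyone wants to experiment, a toy version of the idea might look like the sketch below; the features, window size, and labels here are all invented for illustration (in a real attempt the labels would come from manual annotation):

    Code
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Toy data: a stepped power waveform with noise
    rng = np.random.default_rng(0)
    power = np.repeat([0.0, 8.0, 13.0, 18.0], 100) + rng.normal(0, 0.05, 400)

    # Features for each sliding window, plus a stand-in for manual boundary labels
    window = 20
    X, y = [], []
    for i in range(len(power) - window):
        seg = power[i:i + window]
        X.append([seg.std(), np.abs(np.diff(seg)).max()])
        y.append(int(np.abs(np.diff(seg)).max() > 1.0))  # 1 = window contains a step edge

    clf = RandomForestClassifier(n_estimators=50).fit(X, y)
    print(clf.predict(X[:5]))  # classify the first few windows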

  • EDIT: the calibration procedure is now complete.



    The polynomial coefficients for this calibration curve are:

    5.90037299e-05, 8.76003394e-02, -1.42850249e+00
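    These appear to be in the order numpy.polyfit returns them for a 2nd-degree fit of heater power against tube temperature (highest degree first); assuming that convention, a rough reproduction from a few rows of the table below:

    Code
    import numpy as np

    # A few (temperature, power) points taken from the calibration table below
    temperature = np.array([20.2868, 249.9916, 499.9400, 749.9879, 999.9120, 1198.1996])
    power = np.array([0.0, 24.0121, 57.3545, 97.4612, 144.5973, 188.9271])

    # 2nd-degree polynomial fit, coefficients from the highest degree down
    coeffs = np.polyfit(temperature, power, 2)
    print(coeffs)                     # should land close to the values quoted above
    print(np.polyval(coeffs, 500.0))  # estimated heater power at 500 C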


    The actual data is:


    Time Stamp Tube (C-k) Heater Power (W)
    2017-04-07 20:31:30+00:00 20.2868 0.0000
    2017-04-07 21:31:30+00:00 99.9550 7.8973
    2017-04-07 22:31:30+00:00 150.2056 12.6526
    2017-04-07 23:31:30+00:00 200.0766 18.2385
    2017-04-08 00:31:30+00:00 249.9916 24.0121
    2017-04-08 01:31:30+00:00 299.9876 30.5354
    2017-04-08 02:31:30+00:00 350.0030 36.7786
    2017-04-08 03:31:30+00:00 399.9866 43.5241
    2017-04-08 04:31:30+00:00 449.9320 50.5660
    2017-04-08 05:31:30+00:00 499.9400 57.3545
    2017-04-08 06:31:30+00:00 550.0210 64.8987
    2017-04-08 07:31:30+00:00 599.8081 72.8219
    2017-04-08 08:31:30+00:00 649.9865 80.5421
    2017-04-08 09:31:30+00:00 700.4046 88.8530
    2017-04-08 10:31:30+00:00 749.9879 97.4612
    2017-04-08 11:31:30+00:00 800.0876 105.9176
    2017-04-08 12:31:30+00:00 849.7612 114.9099
    2017-04-08 13:31:30+00:00 899.9153 124.6430
    2017-04-08 14:31:30+00:00 949.9773 134.3180
    2017-04-08 15:31:30+00:00 999.9120 144.5973
    2017-04-08 16:31:30+00:00 1049.9512 155.2515
    2017-04-08 17:31:30+00:00 1100.0029 166.3409
    2017-04-08 18:31:30+00:00 1149.9553 178.2902
    2017-04-08 19:31:30+00:00 1198.1996 188.9271



    In the end, since I wasn't 100% sure I was sampling the correct data points, I chose to simply sample a row of (averaged) data every hour from the start of the experiment. Since every step is exactly one hour long, this makes it easy to obtain a calibration curve. It's not a "smart" method like the one I was previously using, but it worked well for this calibration run; a rough sketch of the idea follows the links below.


    In case you're curious, here is the code used to compute this:

    https://github.com/can2can/LEN…mshell_round02_postcal.py

    https://github.com/can2can/LEN…master/clamshell_utils.py (used for reading csv files and computing power data; if there is a related problem, it's probably here)
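    Independent of those scripts, the gist of the hourly sampling can be sketched as follows; the filename and the use of asfreq are assumptions for the example, not necessarily what the linked code does:

    Code
    import pandas as pd

    # Averaged data indexed by timestamp, with columns named as in the table above
    df = pd.read_csv("averaged_data.csv", index_col=0, parse_dates=True)  # hypothetical file

    # One row per hour from the first sample; each step is exactly one hour long
    hourly = df.asfreq("1h")
    print(hourly[["Tube (C-k)", "Heater Power (W)"]])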


    Below is also attached a zip file containing:

    - Raw .csv data with proper timestamps and computed columns

    - 30-second averaged data

    - Only the calibration datapoints (as pasted above)

  • can

    Thanks. I am grateful for all of your work! I am going to confirm this morning. What concerns me is the 30 s averaging: if it is done as a blanket running average of the waveform, it will average the step to the new power/temperature into the most settled points at the end of each soak. I think what needs to be done is to identify, for example, the last 2 minutes of the soak before the step in the raw data, and average only those data points together to get a settled data point. The difference is likely small.
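    For illustration, a minimal version of that "last 2 minutes of the soak" averaging might look like this, assuming the step edge indices are already known and 30 s samples (so 4 samples ≈ 2 minutes):

    Code
    import numpy as np

    def soak_averages(values, edges, n=4):
        """Average the last n samples before each step edge.

        With 30 s averaged data, n=4 covers roughly the last 2 minutes of each soak.
        values: 1-D array (e.g., temperature or power channel)
        edges:  indices of the step changes, however they were found
        """
        return [values[max(e - n, 0):e].mean() for e in edges]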

  • BobHiggins

    I understand your concern, given that I started iterating from the very first sample and assumed that each subsequent one would come exactly 60 minutes later. Here's the power data used by the calibration above, together with the same data shifted back by 5 samples (= 150 seconds), so that the average definitely does not include data from the next step. Due to this shift the first data point is not available (NaN):


    Heater Power (W) Heater Power (W)
    Standard Shifted back by 5 samples
    0.00 NaN
    7.90 7.86
    12.65 12.96
    18.24 18.26
    24.01 23.92
    30.54 30.46
    36.78 36.76
    43.52 43.42
    50.57 50.38
    57.35 57.35
    64.90 64.84
    72.82 72.82
    80.54 80.55
    88.85 88.60
    97.46 97.22
    105.92 106.27
    114.91 114.90
    124.64 124.59
    134.32 134.50
    144.60 145.04
    155.25 155.47
    166.34 166.46
    178.29 177.97
    188.93 188.91
    166.76 166.51
    144.45 144.57
    124.26 124.47



    In this case there doesn't seem to be much difference overall. In some cases input power is slightly higher, in others slightly lower, with one anomalous exception. Perhaps, for an added safety margin next time I use the same method, it would be better to include a longer period at zero input power at the beginning of the run, so that I can start iterating in 1-hour steps a bit earlier.
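    In pandas terms the comparison above boils down to something like this; the DataFrame here is a placeholder standing in for the real 30 s averaged data:

    Code
    import pandas as pd

    # Placeholder for the 30 s averaged data (24 h at 30 s per sample)
    idx = pd.date_range("2017-04-07 20:31:30", periods=2880, freq="30s", tz="UTC")
    averaged = pd.DataFrame({"Heater Power (W)": 0.0}, index=idx)

    standard = averaged["Heater Power (W)"].asfreq("1h")
    shifted = averaged["Heater Power (W)"].shift(5).asfreq("1h")  # back 5 samples = 150 s
    comparison = pd.DataFrame({"Standard": standard,
                               "Shifted back by 5 samples": shifted})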

  • BobHiggins

    I found that the rolling mean used by that resampling function is apparently right-aligned by default (an option exists to make it centered, but it doesn't work on time-series windows), so it shouldn't take the new temperature and power values into account.


    So, here's a calibration curve computed using a 60 second rolling mean of the original data, shifting all values by 1 sample, and manually setting the first data point (which otherwise becomes undefined) to zero input power and ambient temperature:
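    In pandas terms, a sketch of that procedure (the sample rate, ambient value, and column names are assumptions):

    Code
    import pandas as pd

    # Placeholder standing in for the original unaveraged data
    idx = pd.date_range("2017-04-07 20:31:30", periods=100, freq="5s", tz="UTC")
    raw = pd.DataFrame({"Tube (C-k)": 20.3, "Heater Power (W)": 0.0}, index=idx)

    # Right-aligned 60 s rolling mean, shifted by one sample so a settled value
    # never includes data from the step that follows it
    smoothed = raw.rolling("60s").mean().shift(1)

    # The first row is otherwise NaN: set it to ambient temperature and zero power
    smoothed.loc[smoothed.index[0], "Tube (C-k)"] = 20.3
    smoothed.loc[smoothed.index[0], "Heater Power (W)"] = 0.0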



    It looks essentially the same as the other calibration.

  • can,

    Ok, just as a check (not any kind of competition), I did the calibration in a similar fashion in MATLAB. I averaged 70 points taken 10 points inward from the temperature change step. In general I got the same numbers as you did; the important numbers differed by less than 0.1%. However, my 200 C number was lower by 2%, because it appears I took it closer to the edge. At the low temperatures it takes longer to settle, because the temperature difference to ambient is smaller, and the state was not settled until close to the step. Overall, our coefficients were very close:


    Regardless, your numbers are close enough, and by far close enough to show whether there would be any XH (going forward).


    Now to begin getting ready for the next experiment. Likely the next one will begin a week from Tuesday (we don't want the monitoring to run across Easter).

  • Good to know that the values I previously found were indeed more or less accurate; less good that the calibration has indeed changed by more than 10% at high temperatures.


    Ultimately, the point of this exercise was also to show that, when the data allows it, calibration values can usefully be computed in an automated manner, which is how this experiment series is supposed to be performed.

  • On an unrelated note, I have probably found, in the decompiled USX program (using Java Decompiler), where the numbered filename of the saved spectra is composed during a run with the UCS-20 spectrometer.


    It's in mca_stand_alone > MultiRunThread.class > saveRun(int number)


    The saveRun function has a line that goes like this:

    Code
    f = new File(this.directory + "//" + this.file_name + "_" + number + ".spu");


    In order to make the filename number zero-padded (e.g. 00001, 00010, 00341) this should be:

    Code
    f = new File(this.directory + "//" + this.file_name + "_" + String.format("%05d", number) + ".spu");


    However, I'm not able to edit the class's Java bytecode directly to add this, and I previously haven't had success rebuilding it from the decompiled source code either.
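    As a possible workaround that avoids touching the bytecode entirely (just an idea, untested): the already-saved files could be renamed to zero-padded form after the fact with a small script, e.g.:

    Code
    import os
    import re

    # Hypothetical: rename files like 'run_7.spu' to 'run_00007.spu' so that
    # plain filename sorting matches the run sequence
    directory = "spectra"  # assumed folder of saved .spu files
    for name in os.listdir(directory):
        m = re.fullmatch(r"(.+_)(\d+)\.spu", name)
        if m:
            padded = f"{m.group(1)}{int(m.group(2)):05d}.spu"
            os.rename(os.path.join(directory, name), os.path.join(directory, padded))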

  • I have heard back from Spectrum Techniques regarding a fix for the time stamp issue:


    We have tracked down the time stamp issue that you reported and are sending you a new executable jar file. This new jar file fixes the problem by using a 24 hour time stamp. Find this new jar file attached to this e-mail.

    To install this new file on your local computer, make sure the USX application is not currently running. Next, you will need to find the existing MCA_Stand_Alone.jar file on your computer. Rename this file to something like 'MCA_Stand_Alone_Original.jar'. This step is necessary so you may restore the old MCA_Stand_Alone.jar file in case something goes wrong and to differentiate the old file from the new one that uses a 24 hour clock format.

    To locate the existing jar file right click on the USX software icon (on the desktop), select 'Properties', and click the 'Find Target' button. Your existing MCA_Stand_Alone.jar file is located in the Windows Explorer window that appears. Rename this 'MCA_Stand_Alone.jar' file to 'MCA_Stand_Alone_Original.jar'. Paste the new jar file (which is attached to this e-mail) exactly as it is currently named into the Windows Explorer window. Re-launch the USX application. Click Spectrum > Multiple Runs as you normally do and run the experiments. When you load the resulting .spu files the start and end times they contain will be in a 24-hour clock format.

    Unfortunately, owing to the fact that you are running our software on a Windows XP computer, we are unable to offer you a solution to the file naming and sequencing issue. It will be necessary for you to upgrade your OS to Windows 7 or later for us to be able to fix this problem for you. Please advise concerning how you want to proceed with this matter.


    I am going to install their code. Then maybe I will try the hack to fix the sequencing afterward. I am putting their new MCA_Stand_Alone.jar file here:


    https://drive.google.com/file/…NNDI4RVE/view?usp=sharing

  • Interesting that they changed one instance of the format string to "HH:mm" but not the second (only two appear in this binary). The version strings are "1.2.00 USB" and "Released: June 14th, 2013".

    I haven't tried running it yet; my UCS30 is on a different computer.
