FP's experiments discussion

  • Anyway, I would like to know your opinion on the distinction in 3 XH types proposed in (1)


    Ascoli,


    I understand your attempt to divvy up the curve but I would perhaps do it based on a little different reasoning. Similar breakpoints but for different reasons. The HAD region is clear, no current forms the breakpoint. The division between LXH and HXH I would do based on whether or not any of the cathode is exposed to the gas phase. As I have indicated previously, when the metal electrode gets exposed, it starts to unload. There you have a different situation as the H2 (or D2) coming out of the electrode can now react at the electrode surface, just like it would at a recombination catalyst. So there should definitely be some consideration of recombination plus the electrochemistry going on in the other part of the electrode. It gets a bit more complicated, as has been discussed previously.


    During 'normal operations', ie with the electrodes fully covered by electrolyte, the excess heat results seem to fit the curve shape of the P/(P*-P) term. IOW, the apparent excess heat would seem to be fully explainable as an artifact of the F&P model, in particular the enthalpy term for the electrolysis gases. See the attached plot to observe this. I used the vapor pressure of water from


    and converted to atmospheres by dividing by 760 to get the solid line. The dots are the excess heat signals from the figure you pointed to (your (1)), plotted vs. the T I read off the graph. The T data are a little crude since I just eyeballed them.
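    The solid-line calculation described above can be reproduced with a short sketch. The Antoine constants below are an assumption (the post does not say which vapor-pressure source was used), and the ambient pressure P* is taken as 1 atm:

```python
# Vapor pressure of water via the Antoine equation (constants valid
# roughly 1-100 degC, P in mmHg).  These constants are an assumed
# source -- the original post does not name its vapor-pressure table.
A, B, C = 8.07131, 1730.63, 233.426

def p_vapor_atm(t_celsius):
    """Vapor pressure of water in atmospheres (mmHg divided by 760)."""
    p_mmhg = 10 ** (A - B / (C + t_celsius))
    return p_mmhg / 760.0

def fp_gas_term(t_celsius, p_star=1.0):
    """Shape of the P/(P* - P) factor in the F&P enthalpy term for the
    electrolysis gases; P* is the ambient pressure in atm."""
    p = p_vapor_atm(t_celsius)
    return p / (p_star - p)

for t in (40, 60, 80, 90, 95, 99):
    print(f"{t:3d} C  P = {p_vapor_atm(t):.4f} atm  P/(P*-P) = {fp_gas_term(t):.3f}")
```

    The term stays small and slowly varying at moderate temperatures and climbs steeply as T approaches 100 °C, which is the curve shape referred to above.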


    However, if you want to talk about LXH and HXH, I can adapt.

  • You keep bringing up "blue arrows" as if related to the paper.


    I see no blue arrows in the paper, or in the video linked in the paper.


    So let's forget about blue arrows and move on.


    You can forget the videos, if you don't like them. This doesn't change the situation. As I already explained to you (*), the blue arrows in the "Four-cell boil-off" video (1) are intimately related to the F&P paper (2). I'll show you again the evidence of this close relationship:

    Cell 1:
    21:52 – first arrow in video (1)
    22:03:58 – Figure 10(B) in paper (2)
    22:18 – last arrow in video (1)

    Cell 4:
    10:35 – first arrow in video (1)
    10:43:58 – Figure 10(C) in paper (2)
    11:10 – last arrow in video (1)


    This explanatory video was very probably shown during the Pons presentation at ICCF3 in Nagoya, on Friday 24 October 1992, as explained in a previous comment (**). As shown in the table at the end of that comment, the biggest problem in the data used by F&P in calculating the extraordinary excess heat - during the "grand finale" at the end of the boiling period - is the origin of the 10-11 minutes that they used in the calculation on page 16 of their paper (2).
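    Assuming the timestamps in the table above are read correctly, the first-to-last-arrow intervals are straightforward to compute; the `minutes_between` helper is purely illustrative:

```python
from datetime import datetime

def minutes_between(start, end):
    """Elapsed minutes between two HH:MM timestamps on the same day."""
    fmt = "%H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 60

# First/last arrow times taken from the table above
cell1 = minutes_between("21:52", "22:18")  # Cell 1
cell4 = minutes_between("10:35", "11:10")  # Cell 4
print(cell1, cell4)  # 26.0 35.0
```

    Both intervals (26 and 35 minutes) are well over the 10-11 minutes questioned above.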


    Could you please explain to me where this crucial datum comes from?


    (*) FP's experiments discussion

    (1) https://www.youtube.com/watch?v=mBAIIZU6Oj8

    (2) http://www.lenr-canr.org/acrobat/Fleischmancalorimetra.pdf

    (**) FP's experiments discussion

  • I understand your attempt to divvy up the curve but I would perhaps do it based on a little different reasoning. Similar breakpoints but for different reasons.


    OK, I like the idea of the breakpoints. This facilitates the definition of the three XH regimes (LXH, HXH and HAD). Let's first identify these breakpoints, starting from the last to occur.


    Quote

    The HAD region is clear, no current forms the breakpoint.


    Yes, the interruption of the current, and therefore of the input power, is the obvious definition of the HAD breakpoint as imagined by F&P. But in reality the current didn't stop, and the HAD claimed by F&P is an artifact derived from inadequate measurement of the electrical parameters, their logging, and the plotting of the experimental data.


    Consider that the HAD has been claimed only for Cell 2, not for the other 3 cells. Its real breakpoint occurred when, for some reason yet to be investigated, the measuring and logging systems for the electrical parameters were no longer adequate to provide the correct value of the input power.


    Quote

    The division between LXH and HXH I would do based on whether or not any of the cathode is exposed to the gas phase.


    It happened much earlier. HXH is defined on the basis of the rate of vaporization of water, so the HXH breakpoint must occur at the onset of boiling, ie several hours before the cathode starts to be exposed. In this case the artifact consists of having assumed that half of the water vaporized in the last 10 minutes, instead of the almost one hour it took in reality.
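    A back-of-the-envelope sketch of why the assumed duration matters so much. The vaporized amount (2.5 mol of D2O) and the enthalpy of vaporization (~41.5 kJ/mol) are round illustrative numbers, not figures taken from the paper; the point is only that the implied average vaporization power scales inversely with the assumed time:

```python
# Illustrative only: the molar amount and enthalpy below are assumptions,
# not values from the F&P paper.
H_VAP = 41.5e3   # J/mol, approximate enthalpy of vaporization of D2O at its boiling point
N_MOL = 2.5      # mol vaporized (hypothetical half-inventory)

def mean_power(minutes):
    """Average power (W) needed to vaporize N_MOL in the given time."""
    return N_MOL * H_VAP / (minutes * 60)

p_10min = mean_power(10)   # vaporization assumed to take 10 minutes
p_60min = mean_power(60)   # vaporization assumed to take about an hour
print(p_10min, p_60min, p_10min / p_60min)
```

    Whatever the actual inventory, compressing the same vaporization into 10 minutes instead of an hour inflates the inferred power by a factor of six.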


    Quote

    During 'normal operations', ie with the electrodes fully covered by electrolyte, the excess heat results seem to fit the curve shape of the P/(P*-P) term. IOW, the apparent excess heat would seem to be fully explainable as an artifact of the F&P model, in particular the enthalpy term for the electrolysis gases. See the attached plot to observe this. I used the vapor pressure of water from


    I haven't checked this yet, but this explanation of LXH seems reasonable to me, provided that 'normal operation' is defined by the correct breakpoint between the LXH and HXH regimes, ie at the onset of boiling. In fact, at temperatures close to the boiling point the P/(P*-P) term is no longer adequate to represent the heat lost by evaporation, because the denominator goes towards zero and the inaccuracies skyrocket. This choice would be in agreement with your curve, which only includes points very far from the boiling temperature.
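    The blow-up near the boiling point can be made concrete. For f = P/(P* - P), a relative error in the vapor pressure P propagates into f amplified by the factor P*/(P* - P); a minimal sketch, assuming P* = 1 atm:

```python
# For f = P/(P* - P), a relative error dP/P in the vapor pressure
# appears in f multiplied by the amplification factor P*/(P* - P).
P_STAR = 1.0  # ambient pressure, atm

def amplification(p):
    """Factor by which a relative error in P is amplified in P/(P*-P)."""
    return P_STAR / (P_STAR - p)

for p in (0.1, 0.5, 0.9, 0.95, 0.99):
    print(f"P = {p:.2f} atm -> amplification = {amplification(p):.1f}x")
```

    At P = 0.99 atm (just below boiling) a 1% uncertainty in P becomes roughly a 100% uncertainty in the term, which is why points near the boiling temperature cannot be fitted this way.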


    Quote

    However, if you want to talk about LXH and HXH, I can adapt.


    Thank you, I appreciate it. I would also add the HAD. But for the moment I would propose to start from the HXH, whose claim was the precise and sole scope of the ICCF3 paper, as specified in its abstract (1):

    "We present here one aspect of our recent research on the calorimetry of the Pd/D2O system which has been concerned with high rates of specific excess enthalpy generation (> 1kWcm-3) at temperatures close to (or at) the boiling point of the electrolyte solution."


    (1) http://www.lenr-canr.org/acrobat/Fleischmancalorimetra.pdf

  • Where did you get the idea that the 1992 paper is the most important paper by F&P ?


    Please ask Rothwell (1): "McKubre pointed out that Fleischmann was a master of theory and mathematics, […] The title of his major paper says it all: “From simplicity via complications back to simplicity.”"


    Quote

    Well let me inform you: it is Absolutely not!

    The most important F&P paper is their detailed 58-page seminal paper "Calorimetry of the Palladium-Deuterium-Heavy Water System," published in the Journal of Electroanalytical Chemistry in 1990.


    Who established that?


    Anyway, should I conclude that you have finally realized that the ICCF3 paper and the PLA article titled “From simplicity via complications back to simplicity” are no longer defensible?


    (1) http://lenr-canr.org/acrobat/Fleischmanlettersfroa.pdf

  • I think Kirk was pointing to his reply to your question (from September 2017). It seems like he proposed an experiment to show how much the calibration constant changes when the location of the recombination changes. The question is whether the LXH reported in the 1992 paper is less than the maximum error from uncertainty in the calibration constant.


    Alan Smith wrote:

    | Or describe a decent experiment to prove it in sufficient detail for it to be replicable.


    Kirk's reply:


    "To reiterate, replace the Pd and Pt cathode and anode in a standard F&P-type cell with a Joule heater, i.e., a resistor. Make the leads long enough so that you can bend it up out of the electrolyte and into the gas space of the cell. Assume 20W total input power derived from 2A at 10V. Use maximally 2A at 1.54 V in the gas space heater, and 2A at (10-1.54) V in the liquid heater. Calibrate by varying current.


    Now change the voltage distribution to less heat in the gas space. Be bold, take it to 0. So 2A max at 10V in the liquid. Calibrate by varying current.


    Now, wait 3 days and repeat.


    Wait 3 more days and repeat.


    Report results.


    Note that Ed Storms proved Jed wrong in his 'it don't matter where or how' comment. In the experiments that I reanalyzed he reported 3 different calibration results. (The only researcher I have seen do this BTW. Kudos to him for honesty and thoroughness.) He reported that a Joule heater gave a calibration equation of Pout = 0.072107 * DeltaT - 0.23893. Electrolytic calibration (what I describe above) however gave Pout = 0.071221 * DeltaT - 0.177146 *initially* and Pout = 0.070892 * DeltaT - 0.14405 *finally*. (So it makes a difference how, where, and when.)


    Compare to my extracted separate calibration equations for runs 3 and 6, which both displayed zero or nearly zero 'excess heat' (which means they used 'inactive' electrodes and thus are equivalent to calibration conditions). I obtained Pout = 0.070672 * DeltaT - 0.177146 for run 3 and Pout = 0.071320 * DeltaT - 0.131471 for run 6. Pretty solid evidence that my zero-excess-heat assumption gives calibrations well within the normal variation of the experimental setup, isn't it?


    Correction: The electrode used in Runs 3 and 6 was an 'active' electrode that had become inactive and was immediately revitalized by an anodic strip for Runs 4 and 7, which showed maximum excess heat signals."
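    The spread among the five calibration lines quoted above can be checked numerically. The ΔT of 10 K below is an illustrative value, not one taken from the data:

```python
# Calibration lines quoted above: Pout = slope * DeltaT + intercept (W, K).
CALIBRATIONS = {
    "Joule heater":           (0.072107, -0.23893),
    "electrolytic (initial)": (0.071221, -0.177146),
    "electrolytic (final)":   (0.070892, -0.14405),
    "run 3 (extracted)":      (0.070672, -0.177146),
    "run 6 (extracted)":      (0.071320, -0.131471),
}

def p_out(slope, intercept, delta_t):
    """Output power predicted by one calibration line."""
    return slope * delta_t + intercept

# Evaluate every line at one illustrative temperature rise (assumed value)
DELTA_T = 10.0
values = {name: p_out(s, b, DELTA_T) for name, (s, b) in CALIBRATIONS.items()}
spread_mw = (max(values.values()) - min(values.values())) * 1000
for name, v in values.items():
    print(f"{name:24s} {v:.3f} W")
print(f"spread at DeltaT = {DELTA_T} K: {spread_mw:.0f} mW")
```

    At this ΔT the five lines disagree by roughly 100 mW, which gives a feel for how much the choice of calibration method and timing can move an apparent excess-heat signal.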

  • Note that Ed Storms proved Jed wrong in his 'it don't matter where or how' comment. In the experiments that I reanalyzed he reported 3 different calibration results.

    No, he proved that the effect is many orders of magnitude too small to do what Shanahan claims. Obviously, if you measure carefully enough, with the right kind of instruments, you can detect what Shanahan describes. By the same token, the effects described by Morrison are also real. However, they are 1,700 times too small to account for the heat. That's what makes Shanahan and Morrison crackpots rather than just wrong. It is one thing to say "X might be the cause" when X might produce, say, half the effect. It is at least plausible. But when X is thousands of times too small, and when everyone knows that, to continue claiming it might be the cause is ludicrous. It is extreme innumeracy.

  • It has been pointed out to him probably about 100 times now that the paper he cites is the one that uses the fallacious strawman argument ("the random Shanahan CCSH") in an attempt to discredit the CCS/ATER theory.

    You can repeat that as many times as you like, but you are still wrong. You have not addressed the issues raised by the authors of this paper. You are still wrong by orders of magnitude. You claim to know better than experts in calorimetry and the textbooks. You claim to know things that -- if true -- would disprove calorimetry going back to the 18th century, and that would win you the Nobel Prize. This is classic crackpot delusional behavior. You and thousands of others claim to know physics better than Einstein, that the experts are wrong and you are right.

  • Ascoli,


    The paper you have analysed from 1992 does not seem to correspond to the official one released in a mainstream science journal in 1993.


    The official mainstream science paper with the same title was revised and published in March 1993 in Physics Letters A, and includes the most important discoveries of F&P: heat bursts that occur during electrolysis.


    http://newenergytimes.com/v2/l…n-Pons-PLA-Simplicity.pdf


    ref:

    "we have already drawn attention to the fact that, after prolonged polarisation, one can sometimes observe regions in which there is an increase of temperature accompanied by a decrease of cell potential with time for Pd-based cathodes such as that shown in fig. 1."

    ...

    " One can therefore pose the question: “How can it be that the temperature of the cell contents increases whereas the enthalpy input decreases with time. 9” Our answer to this dilemma naturally has been: “There is a source of enthalpy in the cells whose strength increases with time.” At a more quantitative level one sees that the magnitudes of these sources are such that explanations in terms of chemical changes must be excluded [ 7 1."


    Anyhow: I haven't found any major errors in these papers yet, so are there any specific critical points you would like to point to, ones actually inside the paper and not in some old video tapes that are not part of the paper?


    Fig 1 referred to above:

  • You can repeat that as many times as you like, but you are still wrong.


    No, I'm not. You just refuse to study what I say and realize what it means, because of your ostrich tendencies.



    You have not addressed the issues raised by the authors of this paper.


    I don't need to address off-point issues that are irrelevant to the question. (Hint: that's what a 'strawman argument' is.)



    P.S. To the rest of LF, the remainder of JR's post is another example of him hyperventilating...

  • Added note:


    By the same token, the effects described by Morrison are also real. However, they are 1,700 times too small to account for the heat. That's what makes Shanahan and Morrison crackpots rather than just wrong. It is one thing to say "X might be the cause" when X might produce, say, half the effect. It is at least plausible. But when X is thousands of times too small, and when everyone knows that, to continue claiming it might be the cause is ludicrous. It is extreme innumeracy.


    JR is pulling a fast one here. He starts to talk about my criticisms of the Storms work and then shifts to Morrison. Morrison never saw Ed's work, so Morrison was talking about something else (I suppose JR is referring to M's comments about F&P's work). Then JR quotes some numbers. I don't believe JR has that right, or he is doing the typical early-CF thing of assuming 'only electrochemical recombination can occur'. The fact is that my proposed CCS/ATER thing could fully explain Ed's 780 mW apparent excess heat signal, and has the potential for much larger signals in poorer calorimeters (which I have posted mathematical explanations for in the Two-Zone model posts). Further, as I noted yesterday, the open-cell work of F&P we've been discussing might also be attributed to the impact of the P/(P*-P) term in the F&P calorimetric equation. They admitted that at high T it didn't work, but they failed to estimate its impact at intermediate temperatures. In any case, JR tries to confound my criticisms and those of Morrison and couple them to ridiculous numbers to 'prove' his point. Intelligent people can see through that, especially when it is pointed out to them.

  • " One can therefore pose the question: “How can it be that the temperature of the cell contents increases whereas the enthalpy input decreases with time. 9” Our answer to this dilemma naturally has been: “There is a source of enthalpy in the cells whose strength increases with time.” At a more quantitative level one sees that the magnitudes of these sources are such that explanations in terms of chemical changes must be excluded [ 7 1."



    Or the answer to the dilemma is "There is an analytical error that becomes significant." In fact one has been proposed that shows the obtained signals are easily caused by chemistry.



    Anyhow: I haven't found any major errors in these papers yet, so any specific main critical points you would like to point to? which is actually inside the paper and not in some old video tapes not part of the paper?


    That's because you don't listen when some of us point them out. THUNK.

  • It [HXH] happened much earlier.


    The temperature increase may come from a change in the electrolyte concentration, as has been noted before. I'm not arguing with you here, just pointing out that the point where the electrode is exposed to the gas space is a significant breakpoint too. The onset of boiling might also change the characteristics of the gas phase. Lots of complications to go around.


    For me the bottom line is that the video technique was never used again, and in fact comments were published suggesting this was deliberate and likely due to unreported problems with it, as I indicated in my whitepaper and as you have shown in detail.


    Your general conclusion that the field has proceeded over the years on a sketchy basis is also correct IMO.