Rossi vs. Darden aftermath discussions

    • Official Post

    Well, you can't compromise on science. Given that calorimetry can have an error rate of 30% or more, most low-level CF excess energy results should be considered inconclusive or failures unless backed up by other evidence.


    You are losing credibility.


    If you were right, engineers would surely know it 8|


    It can be very precise:

    http://www.lenr-canr.org/acrobat/LonchamptGreproducti.pdf


  • My credibility is 100% intact. Many of these papers presented as "evidence" are dated 1989 and 1990. Their techniques are likely not as accurate as today's. In fact, some of the papers themselves allude to the high errors associated with calorimetry. For example, the tip of your index finger can put out as much heat in 2 or 3 hours as that one experiment claimed to generate in almost 2 months. You should try reading some of the papers critically before questioning my posts.

  • Here is a link to recent CF results posted this year and dated December 2016.

    brillouinenergy.com/wp-content/uploads/2017/01/SRI_ProgressReport.pdf


    They claim the typical few watts of output power at a COP in the typical 1.2-to-1.4 range. Note that they don't use the total energy that goes to the heating resistor as input energy; they try to determine how much of the heating element's energy actually makes it to the reaction core and count only that as input. I don't agree with this method, because in the real world there is always imperfect transfer of heat from one source to another. That lost energy still counts as input.
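
    To make the input-accounting point concrete, here is a toy sketch in Python (every number is invented for illustration; none come from the SRI report) of how counting only the "delivered" fraction of heater power as input inflates the reported COP:

    ```python
    # Toy numbers only -- not from the SRI report.
    p_electrical = 100.0  # W, total electrical power into the heating resistor
    eta_transfer = 0.80   # assumed fraction of that power reaching the core
    p_output = 104.0      # W, measured thermal output

    cop_delivered_basis = p_output / (p_electrical * eta_transfer)
    cop_total_basis = p_output / p_electrical

    print(f"COP on 'delivered' input: {cop_delivered_basis:.2f}")  # 1.30
    print(f"COP on total input:       {cop_total_basis:.2f}")      # 1.04
    ```

    On the total-input basis the same device is barely above break-even, which is why the choice of input definition matters.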


    Also, in the conclusions section, they make the following statement:


    "Better calorimetry is regularly being optimized and implemented."


    These are their words and not mine. So, even in results of less than one year ago, the authors acknowledge that they are still improving the calorimetry used to measure the CF results. I respect their honesty. I'll let people draw their own conclusions about that statement without mentioning my own.

    • Official Post

    Their techniques are likely not as accurate as today's.


    Read the article. Its purpose is exactly to oppose your argument.

    When the world's leading electrochemist designs a calorimeter, even in 1985, it is precise.


    Of course it was not done so well at MIT:

    http://lenr-canr.org/acrobat/B…Pjcondensedg.pdf#page=138 (page 138+)


    Quote

    Accurate isoperibolic calorimetry requires a well-defined heat transfer pathway from the calorimetric cell to a constant temperature water bath. The MIT isoperibolic calorimetric results published in 1990 had a major impact in convincing scientists, as well as US Patent officials, that the anomalous excess enthalpy reported in 1989 by Fleischmann and Pons in Pd/D systems was due to various calorimetric errors. Additional information about the MIT calorimetry has allowed a more detailed analysis. The major new finding is that the walls of the MIT calorimetric cell were so well insulated with glass wool (2.55 cm thickness) that the major heat transfer pathway was out of the cell top into the room air rather than from the cell into the constant temperature water bath. This helps to explain the reported sensitivity of 40 mW for the MIT calorimetry versus the sensitivity of 0.1 mW achieved for the Fleischmann–Pons Dewar calorimetry. The evaluation of calorimetric designs, accuracy of temperature measurements, electrolyte level effects, calorimetric equations, and data analysis methods leads to the clear conclusion that the Fleischmann–Pons calorimetry was far superior to that of MIT. Therefore, the results of the MIT calorimetry cannot be used as a refutation of the Fleischmann–Pons experiments.



    Maybe you could start by training with Bockris's textbook on electrochemistry:

    http://www.springer.com/us/book/9780306455544

  • Alain referenced http://www.lenr-canr.org/acrobat/LonchamptGreproducti.pdf


    Some comments regarding that:


    In Section 2, subsection 2.1 “Description of the experiment” I note:


    “It is a Pyrex Dewar with the upper part sliver [silver] coated to prevent heat radiation losses in this area, and to make the heat losses by radiation insensitive to the water level.” – radiative heat losses are prevented, then made insensitive to water level? Say what? Not very clear here. Do they know what they are doing?


    “The various parameters are as follows:”


    “the electrolyte: LiOD, 0.1 M H, “ – So, is it D or H, or both?


    “the cathode: palladium cylinder (platinum for blanks), diameter 2 mm, length 12.5 mm is spot welded on a platinum wire, “ – So the cathode has Pt as well as Pd, unless the connection point is above the electrolyte level (Fig. 1 implies it is not); but if it is above the level, there is Pt in the gas space, and Pt is a good recombination catalyst. I *assume* the Pt is immersed and possibly covered by some sort of shrink tubing or other wrap, but the Pt on the cathode may actually be exposed – Figure 1 is unclear on this.


    “the anode: platinum wire, diameter 0.2 mm,” – no comment


    “a thermistor for temperature measurement of the electrolyte, with a precision of ± 0.01°C at 20°C and of ± 0.1°C at 100°C,” – no comment


    “a resistor for heat pulses generation,” – no comment


    “a kel’f plug for electrical connections,” – Kel-F absorbs hydrogen, and eventually in a long-term experiment will become fully saturated and start to release hydrogen on the outside surface. Unsure if O2 does the same or not, probably to a lesser extent.


    “and a duct for replacing the water eliminated by electrolysis and by water vapor carried away in the electrolysis gases.” – no comment


    “Data are collected every 6 seconds, and averaged every minute.” – Hmmm…did Gene Mallove protest this averaging too? He didn’t like the MIT guys doing this…


    In Section 3, I note:


    Equation 3 has the P/(P*-P) term in it. They report in Section 2 that they initially load at 0.2 A for 1-2 weeks, then use 0.5 A “until the cell reaches boiling temperature”. As I noted in my whitepaper, this causes the P* term (as I call it) to go infinite, since at boiling P = P*, and as you approach boiling the denominator P* - P goes to zero.


    I further note that these authors agree with me. Later they state:

    “Relation (1) is valid when there are no calibration pulses, and not at boiling, where this approach becomes difficult because the denominator of (3) is close to zero as the temperature approaches boiling and the water vapor pressure is close to the atmospheric pressure.”


    “Relation (1)” is given as “Excess heat = A + B + C – D”, but the A, B, C, and D terms are not explicitly defined. They do, however, give equations or terms that a reader who knows what is going on can substitute into Relation (1). Not explicitly stating “A = …” is confusing to the new reader. That shouldn’t have gotten past the reviewers.


    A + B + C is the sum of the output power (or power loss) terms, and D is the input power, given by equation 5.


    A can be assigned to the radiative heat loss term, which is computed from the difference of the fourth powers of the temperatures. B would be the enthalpy loss due to the exiting gas stream and is given by (3I/(4F)) · (P/(P* - P)) · L. This is the term that doesn’t work near boiling.
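
    To show how quickly this term grows, here is a rough numerical sketch (my own illustration, not the authors' code) that evaluates B = (3I/(4F)) · (P/(P* - P)) · L as the cell approaches boiling. The Antoine constants and the molar heat of vaporization are standard values for ordinary water; the 0.5 A current is the paper's post-loading value.

    ```python
    # Evaporative carry-off term B = (3I/(4F)) * (P/(P* - P)) * L near boiling.
    # Illustrative only; constants are textbook values for ordinary water.
    F = 96485.0     # Faraday constant, C/mol
    I = 0.5         # cell current, A (the paper's value after loading)
    L = 40660.0     # molar enthalpy of vaporization, J/mol (approx., near 100 C)
    P_STAR = 760.0  # atmospheric pressure, mmHg

    def vapor_pressure_mmHg(t_celsius):
        """Antoine equation for water, valid roughly 1-100 C."""
        return 10 ** (8.07131 - 1730.63 / (233.426 + t_celsius))

    for t in (60.0, 80.0, 95.0, 99.0, 99.9):
        P = vapor_pressure_mmHg(t)
        ratio = P / (P_STAR - P)
        B = (3 * I / (4 * F)) * ratio * L
        print(f"T = {t:5.1f} C   P/(P*-P) = {ratio:8.2f}   B = {B:7.3f} W")
    ```

    The ratio, and hence B, diverges as P approaches P*, which is exactly the denominator problem the authors concede.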


    C is apparently given by equation (or relation) 4, which is Cp · M0 · d(theta)/dt. Note that this 'relation' uses Cp, which is known to be a function of temperature. For accuracy, the impact of the temperature dependence needs to be evaluated to assess whether it must be explicitly included. No discussion of this is given.
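
    For a sense of scale, here is a quick check using approximate handbook values for ordinary liquid water (the cell actually runs LiOD in heavy water, so treat this as illustrative only):

    ```python
    # Approximate Cp of liquid water at 1 atm, in J/(g*K) -- handbook values.
    cp = {20: 4.182, 40: 4.179, 60: 4.185, 80: 4.197, 100: 4.216}

    cp_min, cp_max = min(cp.values()), max(cp.values())
    spread_pct = (cp_max - cp_min) / cp_min * 100
    print(f"Cp spread over 20-100 C: {spread_pct:.1f}%")  # ~0.9%
    ```

    A ~1% effect is small, but not self-evidently negligible when the claimed relative excess heat is itself only a few percent.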


    D should be the standard input power, but they list relation 5, which is (E – Eth) · I. Note that the total input power is E · I. Subtracting the thermoneutral voltage times the current removes the part of the input used to do electrolysis, but they also use the P* term (as I am calling it) to account for enthalpy lost in the exiting gases, and they add that enthalpy in on the output side. That automatically bumps up the output power, and thus the excess power. I believe the correct input power is E · I, so the equations and terms listed in this paper are quite confusing to the uninformed.
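
    To illustrate the size of the subtraction, a minimal example at an assumed operating point (the 5 V cell voltage is purely my guess; 0.5 A is the paper's current; 1.54 V is the commonly cited thermoneutral voltage for D2O electrolysis):

    ```python
    # Total electrical input E*I versus the paper's D = (E - Eth)*I.
    E = 5.0     # V, cell voltage (assumed for illustration)
    Eth = 1.54  # V, thermoneutral voltage for D2O electrolysis (approx.)
    I = 0.5     # A, cell current (the paper's post-loading value)

    print(f"E*I         = {E * I:.2f} W")          # 2.50 W, total input
    print(f"(E - Eth)*I = {(E - Eth) * I:.2f} W")  # 1.73 W, after subtraction
    print(f"Eth*I       = {Eth * I:.2f} W")        # 0.77 W, assigned to electrolysis
    ```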


    Basically, the description of their method is terrible. I’m sure they didn’t really do what they write, since doing so would automatically have created an artificial excess energy signal.


    At boiling, they use a different relation, as noted above: “Excess heat = A + L – D”, where A, D, and L mean the same thing as before. But what exactly is that? We don’t know, because they are unclear. In the P* term, “L” was the enthalpy of vaporization of water, which is a ‘per mole’ quantity. In their boiling-phase equation it would have to be multiplied by the moles of water vaporized, which means their equation as written is wrong. More sloppy writing and inadequate reviewing.


    But what is amusing to me is that they end up agreeing with my whitepaper comments on the F&P claim of a HAD (heat after death) in their 1993 paper. Lonchampt, et al., say: “It is difficult to follow accurately the level of water during this period because of the formation of foam, so it is only at the end of the experiment, when the cell is dry, that the excess heat can be calculated with precision.”


    But F&P used that foaming point to define their HAD (which, as I noted, forced them into an unacknowledged disagreement). So, indirectly, Lonchampt, et al. (I will use “L” below to indicate the authors from now on) point out the same problem I did.


    Next L begins to discuss calibration experiments. Some comments…


    L says: “Figures 3a and 3b show a small apparent excess heat when temperature rises. This is most likely due to the heat losses by conduction, not taken into account in our formulas that assume all heat transfer is radiative.” and “By definition, we assume that platinum does not produce excess heat.” – But Storms found significant excess heat with Pt, so this ‘definition’ is inappropriate and actually illustrates the pre-forming of conclusions. Yes, maybe they should do better on their equations, but maybe they are seeing a Pt-based FPHE, just as Storms did.


    Then L moves to discussing Pd runs. They find excess heat signals in 5 of 18 runs, with “Mean relative excess heat %” values of (16, 3, 7, 20, 7) percent. These numbers increase when the close-to-boiling period is taken into account, and are given as “Relative Excess Heat during “grand finale” %” of (153, “IS”, 36, 97, 29). The latter set of numbers is not reliable for reasons mentioned above. The first set of numbers is well within the ‘CCS/ATER’ regime, which they certainly do not evaluate at all. Therefore, their data are not exclusively interpretable as ‘excess heat due to unknown (read CF or LENR) causes’.


    In their Discussion section L says: “One of the criticisms of the Fleischmann and Pons work has been the temperature uniformity inside the cell. If temperature varies, the radiation law is not valid, and all radiation losses calculations should be wrong. We have looked carefully at this point by raising the thermistor from its standard location in the middle of the cell all the way to the surface of the water. We have seen no significant temperature variation, indicating that mixing by the gases of the electrolysis is sufficient.”


    I note that a) they did NOT move the thermistor into the gas phase, so no CCS/ATER test, and b) they claim good mixing, which agrees with F&P’s dye drop experiment. Note that in his 2006 ‘rebuttal’ of my 2002 paper, Storms claimed O2 bubbles would not make it to the cathode because they only travelled straight up. I pointed out this implies bad mixing, which in turn means bad calorimetry. Here L disagrees with Storms, as I did in my 2006 reply.


    I think most of their Conclusions are inaccurate w.r.t. detecting excess heat. As I tried to teach Kevin, this paper was written in 1996 before my 2002 paper, so the authors did not consider a CCS-type problem. Their experimental protocol hints are probably useful (feedthroughs sealed, etc.).


    So in summary, this paper is poorly written to the extent one is not positive what was actually done. They predate the CCS/ATER problem definition, so that issue is not addressed, but the results seem to be well within the realm of that problem. Probably not a good choice to cite if one is looking to bolster belief in LENR.

  • Well, you can't compromise on science. Given that calorimetry can have an error rate of 30% or more

    No, it cannot. You made that up. There is no basis for that statement in any paper on cold fusion or any textbook on calorimetry.

    The excess energy from this experiment is equal to about 12 food calories or about 2 potato chips over a timespan of ~2 months.

    You are wrong by many orders of magnitude. The excess heat ranges from 1 MJ to 294 MJ. That is 239 to 70,268 food calories, or the equivalent of 12 kg of potato chips, coming from a device weighing ~1 g. That's roughly 12,000 times more energy than any chemical device can produce.
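
    For anyone who wants to check the arithmetic on both sides of this exchange (1 food calorie = 1 kcal = 4184 J):

    ```python
    # Unit conversions behind the figures quoted in this exchange.
    J_PER_KCAL = 4184.0

    print(f"{50e3 / J_PER_KCAL:.0f} kcal")   # 50 kJ  -> ~12 food calories
    print(f"{1e6 / J_PER_KCAL:.0f} kcal")    # 1 MJ   -> ~239 food calories
    print(f"{294e6 / J_PER_KCAL:.0f} kcal")  # 294 MJ -> ~70,268 food calories
    ```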

  • No, it cannot. You made that up. There is no basis for that statement in any paper on cold fusion or any textbook on calorimetry.

    You are wrong by many orders of magnitude. The excess heat ranges from 1 MJ to 294 MJ. That is 239 to 70,268 food calories, or the equivalent of 12 kg of potato chips, coming from a device weighing ~1 g. That's roughly 12,000 times more energy than any chemical device can produce.

    Sir, you seem to be mistaken. Here is a quote from your earlier post:



    start

    "The calorimetry conclusively shows excess energy was produced within the electrolytic cell over the period of the experiment. This amount, 50 kilojoules, is such that any chemical reaction would have had to have been in near molar amounts to have produced the energy. Chemical analysis shows clearly that no such chemical reactions occurred. The tritium results show that some form of nuclear reactions occurred during the experiment."

    http://lenr-canr.org/acrobat/Lautzenhiscoldfusion.pdf

    You may not believe it is a nuclear effect, but you should not project your belief onto the researchers. I do not think you can find a single paper by anyone who replicated who claims it is not a nuclear effect.

    end



    I checked the paper. The timeframe was roughly two months for 50 kilojoules of excess energy. I focused on this paper since you chose it as an example. I have papers that discuss the possible 30% error in calorimetry measurements, and I will post them shortly. Also, please stop applying my statements to everything with broad strokes. I DID NOT say that 100% of all calorimetry experiments have ~30% error. The errors in some may be around that level or higher. If some LENR experiments have much lower calorimetry error than this, then other error mechanisms and sources need to be investigated to determine whether they could have provided the "claimed" excess energy.

  • I checked the paper. The timeframe was roughly two months for 50 kilojoules of excess energy. I focused on this paper since you chose it as an example.

    Ah, I see your point. Yes, there wasn't much energy production in this paper, although it far exceeded the limits of chemistry, as the authors pointed out. So, let's pretend the rest of the literature does not exist! Let's pretend this was the only replication of the effect, and thousands of other tests that produced far more energy just never happened.


    If you narrow your vision down enough and refuse to look at more than one fact at a time, you can prove just about anything, or deny anything. When evaluating case A, pretend that cases B through Z do not exist.

  • From

    https://en.wikipedia.org/wiki/Cold_fusion


    Lack of expected reaction products

    Conventional deuteron fusion is a two-step process,[text 6] in which an unstable high energy intermediary is formed:

    D + D → 4He* + 24 MeV

    Experiments have observed only three decay pathways for this excited-state nucleus, with the branching ratio showing the probability that any given intermediate follows a particular pathway.[text 6] The products formed via these decay pathways are:

    4He* → n + 3He + 3.3 MeV (ratio = 50%)
    4He* → p + 3H + 4.0 MeV (ratio = 50%)
    4He* → 4He + γ + 24 MeV (ratio = 10⁻⁶)

    Only about one in one million of the intermediaries decay along the third pathway, making its products comparatively rare when compared to the other paths.[40] This result is consistent with the predictions of the Bohr model.[text 8] If one watt (1 eV = 1.602 × 10⁻¹⁹ joule) of nuclear power were produced from deuteron fusion consistent with known branching ratios, the resulting neutron and tritium (3H) production would be easily measured.[40][137] Some researchers reported detecting 4He but without the expected neutron or tritium production; such a result would require branching ratios strongly favouring the third pathway, with the actual rates of the first two pathways lower by at least five orders of magnitude than observations from other experiments, directly contradicting both theoretically predicted and observed branching probabilities.[text 6] Those reports of 4He production did not include detection of gamma rays, which would require the third pathway to have been changed somehow so that gamma rays are no longer emitted.[text 6]

    The known rate of the decay process together with the inter-atomic spacing in a metallic crystal makes heat transfer of the 24 MeV excess energy into the host metal lattice prior to the intermediary's decay inexplicable in terms of conventional understandings of momentum and energy transfer,[138] and even then there would be measurable levels of radiation.[139] Also, experiments indicate that the ratios of deuterium fusion remain constant at different energies.[140] In general, pressure and chemical environment only cause small changes to fusion ratios.[140] An early explanation invoked the Oppenheimer–Phillips process at low energies, but its magnitude was too small to explain the altered ratios.[141]
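
    As a back-of-envelope check on the excerpt's "easily measured" claim, here is the neutron rate implied if one watt came entirely from conventional D+D fusion with the 50/50 branching shown above (a rough sketch, not from the article):

    ```python
    # Neutrons per second implied by 1 W of conventional D+D fusion,
    # assuming the 50/50 split between the n + 3He and p + 3H pathways.
    EV_PER_JOULE = 1 / 1.602e-19
    MEV_PER_REACTION = 0.5 * 3.3 + 0.5 * 4.0  # mean energy release, MeV

    reactions_per_s = 1.0 * EV_PER_JOULE / (MEV_PER_REACTION * 1e6)
    neutrons_per_s = 0.5 * reactions_per_s  # half the reactions emit a neutron

    print(f"{reactions_per_s:.2e} reactions/s")  # ~1.7e12
    print(f"{neutrons_per_s:.2e} neutrons/s")    # ~8.6e11
    ```

    A flux of that order would be unmistakable with ordinary neutron detectors, which is the point of the branching-ratio argument.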

  • Ah, I see your point. Yes, there wasn't much energy production in this paper, although it far exceeded the limits of chemistry, as the authors pointed out. So, let's pretend the rest of the literature does not exist! Let's pretend this was the only replication of the effect, and thousands of other tests that produced far more energy just never happened.


    If you narrow your vision down enough and refuse to look at more than one fact at a time, you can prove just about anything, or deny anything. When evaluating case A, pretend that cases B through Z do not exist.

    Sir - I am attempting to focus on one item at a time in detail. I've pointed out the possible issues that, in my opinion, require additional review in the 1989 document. Unless these issues were addressed in a later published document, this document is inconclusive at best to me. I've also stated the two issues I have with the recent Dec. 2016 SRI document: 1) using analysis and not direct measurement to determine the input power, 2) stating in the conclusions that they are still trying to improve the calorimetry. So, the SRI document is also inconclusive at best to me. If the publishers of these two documents had waited until these issues were resolved either positively or negatively before making the decision to publish, then perhaps I could draw a conclusion from them.

  • I've also stated the two issues I have with the recent Dec. 2016 SRI document ... If the publishers of these two documents had waited until these issues were resolved either positively or negatively before making the decision to publish, then perhaps I could draw a conclusion from them.


    Can you point us to the December 2016 SRI document? Is this the one by Francis Tanzella looking at Brillouin's device? That one is billed as "an interim progress report." I recall significant questions coming up with it, but I get the sense that it doesn't really fall into the category of something that has been formally published. Perhaps you have another document in mind.

  • Sir - I am attempting to focus on one item at a time in detail.

    Exactly! A tried and true technique. You look at A and pretend B does not exist, then you look at B and pretend A does not exist. Continue to Z. In military terms this is known as destroying an army in detail.

    I've pointed out the possible issues that, in my opinion, require additional review in the 1989 document. Unless these issues were addressed in a later published document, this document is inconclusive at best to me.

    Yes, right, we get it. You will refuse to read any other documents, or look at any other experiments. Or, when you do look at them, you will refuse to discuss how this one fits in with the others. Any given experiment can be "inconclusive at best" to you because you think that by denying one part of this paper for a contrived, flimsy, half-baked reason that sorta, kinda sounds like it makes sense as long as you don't think about it (potato chips -- brilliant!), and then one part of the next paper for some equally silly reason, and the next, and the next, you can avoid seeing the forest for the trees.


    Go ahead and play your semantic games. That isn't science. It isn't rational. You are just making up stupid excuses to avoid looking at the facts.

  • Lack of expected reaction products

    Conventional deuteron fusion is a two-step process,[text 6] in which an unstable high energy intermediary is formed:

    These are theoretical objections to high-sigma replicated experimental results. If these statements are a correct expression of the theory, that proves the theory is wrong. When experiment and theory conflict, the experiments always win, theory always loses. That's the most fundamental rule of the scientific method.


    Also, these statements come from Wikipedia, which, as I said, is not a reliable source of information. It is like drinking water from a sewer.

    • Official Post

    These are theoretical objections to high-sigma replicated experimental results. If these statements are a correct expression of the theory, that proves the theory is wrong. When experiment and theory conflict, the experiments always win, theory always loses. That's the most fundamental rule of the scientific method.


    This is where your stubbornly scientific way of thinking runs up against the modernity of theory-driven reality.


    I remember an argument against a similar anomaly: "if it is true, physics falls apart"...

    That is the most unscientific argument (besides which, there are experimental criticisms that are very fair).


    I think it is called an "appeal to consequences"?


    This is how things work today. Sorry Jed, we are obsolete.

    One thing I learned in army service is that if you are alone doing something well, you are just dead, while a stupidity done as a group may work. The facts show that science without academic consensus just doesn't work... no publication in Science or Nature, no federal budget...

  • Exactly! A tried and true technique. You look at A and pretend B does not exist, then you look at B and pretend A does not exist. Continue to Z. In military terms this is known as destroying an army in detail.

    Yes, right, we get it. You will refuse to read any other documents, or look at any other experiments. Or, when you do look at them, you will refuse to discuss how this one fits in with the others. Any given experiment can be "inconclusive at best" to you because you think that by denying one part of this paper for a contrived, flimsy, half-baked reason that sorta, kinda sounds like it makes sense as long as you don't think about it (potato chips -- brilliant!), and then one part of the next paper for some equally silly reason, and the next, and the next, you can avoid seeing the forest for the trees.


    Go ahead and play your semantic games. That isn't science. It isn't rational. You are just making up stupid excuses to avoid looking at the facts.


    This, I remember, was a big disagreement between you and Kirk Shanahan, who suggested that the proper way to deal with such experiments was to look at each one specifically.


    Kirk's rationale was (I believe) that without this detailed analysis systematic error is possible, and therefore aggregating results is likely unsafe.


    I'll add to that. In the case of LENR, where the specific anomalies found are not quantitatively predicted (with Abd's He Lubbock experiments as the only counter-example I know of), we expect both experiment-level selection and result-selection mechanisms to apply to positive results, thus amplifying the incidence of undetected systematic and one-off errors.
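
    A minimal Monte Carlo sketch of that selection mechanism (all parameters arbitrary): simulate experiments with no real effect and pure measurement noise, then "report" only the runs that happen to look positive.

    ```python
    import random

    # Null experiments: zero true effect, unit measurement noise.
    # Only results above 2 sigma get "reported".
    random.seed(1)
    results = [random.gauss(0.0, 1.0) for _ in range(1000)]
    reported = [r for r in results if r > 2.0]

    print(f"reported: {len(reported)} of {len(results)} runs")
    print(f"mean reported 'excess': {sum(reported) / len(reported):.2f} sigma")
    ```

    The reported subset shows a large, consistent apparent excess even though the true effect is zero.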


    Normally science protects against this through controlled, replicable experiments: that removes one-off error, and systematic errors can be investigated and made increasingly unlikely through replication with different instrumentation etc., as naturally happens.


    In the case of LENR experiments this is not true, which is why much more caution is appropriate than might at first sight seem reasonable.
