The perpetual “is LENR even real” argument thread.

  • An interesting point. I agree, except that they may easily all make the same interpretative mistakes.

    The methods of interpretation for these experiments were perfected in 1841 by J. P. Joule for isoperibolic and adiabatic calorimeters. Joule could have detected most cold fusion excess heat with ease. The methods for flow and Seebeck calorimeters were perfected around 1900. These methods are used hundreds of thousands of times a day by HVAC engineers, people operating factory equipment, and others. They are essential to our civilization. If these methods could fail because of interpretative mistakes, factories would routinely explode. They do, in fact, explode on rare occasions when a person makes a mistake in calorimetry, or when instruments fail and a thermometer reports the wrong temperature.
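
    For readers who want to see what the flow-calorimetry bookkeeping amounts to, here is a minimal sketch of the textbook heat balance (P_out = m_dot · c_p · ΔT); the numbers are illustrative, not taken from any particular experiment:

```python
# Minimal sketch of a flow-calorimeter heat balance (illustrative numbers only).
# Output power is recovered from the coolant mass flow rate and temperature rise:
#   P_out = m_dot * c_p * (T_out - T_in)
# Excess power is then P_out minus the electrical input power.

def flow_calorimeter_excess(m_dot_kg_s, t_in_c, t_out_c, p_in_w, c_p=4186.0):
    """Return (output power, excess power) in watts for a water-cooled cell."""
    p_out = m_dot_kg_s * c_p * (t_out_c - t_in_c)
    return p_out, p_out - p_in_w

# Example: 1 g/s of water warmed by 2.5 K with 10 W of electrical input.
p_out, excess = flow_calorimeter_excess(0.001, 20.0, 22.5, 10.0)
print(f"output = {p_out:.2f} W, excess = {excess:.2f} W")
```

    The usual cross-check is a calibration run with a resistive heater of known power; the recovered output should match that power within the calorimeter's stated error.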


    There is absolutely no way 92 professors could make mistakes for 20 years using techniques that have been used worldwide for 180 years.


    It is a fact that a few researchers made interpretive mistakes in a few experiments, such as the NHE experiment I described above. I am not suggesting that no professor ever made a mistake. The NHE mistake was inept, but not blatant. The lower bound heat coefficient method they used is more difficult than the techniques used in most experiments. In contrast, the flow calorimeter technique is dead simple. There is no way you could interpret it wrong. That is why it is used in factories and HVAC everywhere. A few mechanical problems with it can occur, which is why factory boilers occasionally explode. There are 100% reliable methods of double checking to be sure the flow calorimeter equipment is working. McKubre and everyone else who used flow calorimeters employed these methods. All factory boiler operators are supposed to use them. As I said, that is why modern factory boilers seldom explode.

  • There is absolutely no way 92 professors could make mistakes for 20 years using techniques that have been used worldwide for 180 years.

    Let me add something that should be even more obvious. People have read 4.5 million copies of these papers. Many of the papers describe the calorimeters, the techniques, and the method of interpretation in detail. Sometimes in tremendous detail, with many graphs and tables of data. If the professors were making mistakes interpreting the data, the readers at LENR-CANR.org would see this. They would write papers describing these mistakes. No one has done that. There is not a single paper describing a mistake in any major study. *


    So not only can we be sure the professors made no mistakes using 180-year-old textbook methods, we can also be sure that millions of readers checked their results and found no errors.




    * I realize that you, Morrison, and a few others believe you found mistakes. However, you are wrong. You have found nothing. This is not surprising; it is extremely unlikely you will find an error that 92 professors overlooked, that millions of readers at LENR-CANR.org overlooked, or that I overlooked. I am not an expert in calorimetry, but I know enough to see that you are wrong. If you or some other reader here wish to evaluate how much I know, I suggest you read my reviews of Miles, McKubre and Fleischmann, and the first chapter of my book. If you wish to evaluate how much the professors know, read their papers. If you want to learn about calorimetry, read Hemminger and Hohne.

  • We have file drawer effect.

    No, we do not have this effect. I wish you would stop saying that. McKubre, Miles and others published lists of every experiment they did, with the results positive or negative. Many others sent me complete lists. There are not many negative results hidden in file drawers. For that matter, there are some positive results hidden away because the authors were told they would be fired if they published.


    You think there might be a file drawer effect because you have not read the literature, and you have not seen the lists of experiments by McKubre and others. Again, let me suggest that before you comment on something, you first read about it.

  • Even the paper I wrote for the last IWAHLM workshop acknowledges failed experiments:


    During these tests and others various outliers have been observed. For example, two different types of ferrocerium (mischmetal) fire-starter rod were tested, known as 'soft' and 'hard'. These are made from impure cerium metal blended with iron and other oxides, similar to the material used for Zippo flints. When abraded with a piece of hard steel they produce hot sparks sufficient to ignite dry tinder. The soft variety sparks more readily than the hard, and probably contains more lanthanide material.

    When a trial - not included here - was made using zinc chloride electrolyte, soft ferrocerium behaved more like a battery than a LEC, suggesting there was residual chemical activity in the somewhat eroded bulk. A hard ferrocerium rod behaves exactly like other LEC metals.

  • JedRothwell


    Here's another influential and poorly informed sceptic talking nonsense about LENR.


    [Embedded video: youtu.be]

  • This statement makes no sense to me. All papers report the amount of excess heat. It is always known. Of course we can cross-check replications to show quantitative results. That's what Storms' graph shows, as do his tables listing hundreds of results.


    I hesitate to ask, but what on earth did you mean by this?!?

    A theory which predicts excess heat - without predicting the quantity - will be right by chance 50% of the time. Add the file drawer effect and unexpected systematic errors.... There is no "1g of this will generate 120J of heat" equivalent here, nor do results fall on a nice line where you can detect the ones out of line as probable errors, which makes the experimental evidence weaker. In Bayesian terms you get much less filtering of priors when the posterior is 1 bit of information.
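
    A toy numerical illustration of the Bayesian point being quoted, with entirely made-up numbers: a bare "there is excess heat" prediction moves a sceptical prior far less than a quantitative prediction that the measurement then matches.

```python
# Toy Bayes-factor comparison (illustrative numbers only, not from any paper).
# Case A: the theory only predicts "there will be excess heat" (right by chance ~50%).
# Case B: the theory predicts the quantity, and the measurement lands on the predicted value.

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Simple Bayes update for a single piece of evidence."""
    num = p_evidence_given_h * prior
    return num / (num + p_evidence_given_not_h * (1.0 - prior))

prior = 0.01  # assumed sceptical prior that the effect is real

# Case A: binary prediction -> Bayes factor 2, i.e. one bit of information.
print("binary prediction:      ", posterior(prior, 1.0, 0.5))

# Case B: quantitative prediction confirmed within error bars -> much larger factor.
print("quantitative prediction:", posterior(prior, 0.9, 0.05))
```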

  • No, we do not have this effect. I wish you would stop saying that. McKubre, Miles and others published lists of every experiment they did, with the results positive or negative. Many others sent me complete lists. There are not many negative results hidden in file drawers. For that matter, there are some positive results hidden away because the authors were told they would be fired if they published.

    The file drawer effect operates at many levels. You have said that in some cases you know it is not operating at the level of "all experiments from a given researcher". What about researchers who find nothing, or find negative results, and give up? What about methodologies abandoned because they do not work? In this way a systematic error in methodology - or an unexpected systematic chemical effect - can be selected for, and in this experimental paradigm that is indistinguishable from a genuine optimisation of the anomalous effect. Are you optimising a systematic error or the effect? You cannot know.


    THH


  • That graph is very revealing.


    Suppose LENR comes from NAEs (nuclear active environments), some materials have many more than others, etc, etc. We would still expect "typical statistics", and that graph looks nothing like that.


    But of course that is because, like many LENR graphs of excess heat, it is essentially meaningless.


    What underlies it is the (known) relationship that the chance of an experiment being run at a high input power (output excess power is typically a fairly constant fraction of input) becomes smaller as the power gets higher.


    Understandable - but provides no info about LENR.


    If one single figure is taken to characterise error performance, it would be COP. Typically - over quite a range of powers - the various systematic errors nearly all scale as changes in COP, not as power/area nor absolute power. Of course you need more than one metric: very small output powers suffer a larger class of potential errors than large output powers. Then, to characterise LENR performance, you need power/surface area - if you assume it is a surface effect.
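
    A minimal sketch of the two metrics being contrasted, with hypothetical numbers (none of this is taken from a real data set):

```python
# Hypothetical experiments: (name, input power W, excess power W, active surface area cm^2).
# The point is only to show the two metrics being contrasted; the numbers are made up.
experiments = [
    ("cell A", 10.0, 0.5, 4.0),
    ("cell B", 50.0, 2.5, 4.0),
    ("cell C", 200.0, 10.0, 20.0),
]

for name, p_in, p_xs, area in experiments:
    cop = (p_in + p_xs) / p_in          # coefficient of performance
    p_per_area = p_xs / area            # excess power per active surface area
    print(f"{name}: COP = {cop:.3f}, excess/area = {p_per_area:.3f} W/cm^2")
```

    Here all three cells have the same COP, yet very different excess power per unit area, which is why comparing absolute powers across experiments says so little on its own.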


    Comparing these two graphs, and seeing which is more consistent, will shed light on whether we have errors or LENR.


    We can distinguish between LENR and error in another way - for experiments where temperature provides the claimed stimulus - simply by changing the insulation and input power of the reactor while keeping the temperature the same. If there is excess heat, the COP should change; if it is error, the COP should stay the same. Anyone want to guess which of those happens in the few cases where researchers have done this obvious thing? (I am thinking of Aherne's (?) implementation of an insulated oven.)
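
    A minimal sketch of that discrimination test, assuming two toy models (a temperature-driven real effect versus a systematic error proportional to input power); the models and numbers are illustrative only:

```python
# Toy model of the insulation test: hold reactor temperature fixed while the
# insulation (and therefore the input power needed) is changed.
# Hypothesis 1: real temperature-driven excess heat -> excess depends on T only.
# Hypothesis 2: systematic error proportional to input power -> "excess" tracks P_in.

def cop_real_effect(p_in, excess_at_this_temperature=5.0):
    return (p_in + excess_at_this_temperature) / p_in

def cop_systematic_error(p_in, error_fraction=0.05):
    return (p_in + error_fraction * p_in) / p_in

for p_in in (20.0, 50.0, 100.0):   # same temperature, different insulation
    print(f"P_in = {p_in:5.1f} W: real effect COP = {cop_real_effect(p_in):.3f}, "
          f"error COP = {cop_systematic_error(p_in):.3f}")
```

    Under these assumptions the real effect gives a COP that changes with input power, while the proportional error gives a COP that stays flat, which is exactly the discrimination described above.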


    I would love for every LENR excess heat result to have this obvious sanity check. Alas few do.


    This is complex - I have simplified it above - but interesting. It is why intelligent comparison of different results can shed light on whether we have errors or not. I've not seen that with the very large number of excess heat results - and that graph is a classic example of unintelligent (meaningless) comparison.


    If we wanted to investigate typical heat out - assuming this is a surface phenomenon, yes - we would look at excess heat / active area. So that is another relevant metric for comparing different results, but it correlates with power (higher surface area normally has higher power in), so to distinguish the correlations you need to look at both. Comparing power/area with the COP metric for all these trials would however be interesting.


    So: my impatience with the LENR field claiming it is "obvious" what putting all their experiments together amounts to is this: when they collate experimental results, as here, they do so in ways that have no meaning, except to tell you what fraction of LENR researchers can afford larger, higher-power setups!

  • My argument is that they were well-funded - took time - had a clear remit to try and replicate LENR. So they would have tried whatever were the "best bets" unless they were idiots. I don't think they were idiots.

    Let's be clear about the argument that you're actually making here, though.


    Jed posted a link to a list of 92 groups, collated by Fritz Will (himself a noted electrochemist), that had published positive findings of Cold Fusion by 1990.


    You asserted that these results were not 'certain' and you knew that this was a fact because if there was something there, then Google would have replicated it.


    Perhaps you could sustain such an argument if you could prove that A) Google was aware of the list and had reviewed it (which seems unlikely) and B) that Google had systematically attempted every novel experiment in Will's list. I think we both know that such an argument would strain credulity.


    Absent such a systematic attempt to replicate, you're essentially suggesting that Google has some sixth sense about what might work and what might not. That they winnowed out the bad stuff by intuition alone.


    'They're smart people, so they would only do experiments that they were quite sure will work. Hence we can be sure that none of those experiments are 'certain' because Google didn't do them.'


    But given that Google failed to replicate anything of substance, this is a prima facie absurd argument. They did 100s of experiments that didn't work, and so their ex ante judgement about what might plausibly work seems as questionable as anybody else's.


    So we can be sure that they weren't able to accurately judge the success of any experiment ex ante, and we can be reasonably sure they didn't do a systematic review of Will's list.


    So by what chain of logic does Google's work say anything substantive about Will's list?


    I suppose you could argue that Google surveyed the entire field, and given that Will’s list is a subset of the field, it was thus captured by that exhaustive survey. That is a reasonable enough proposition - though one that none of us can really substantiate satisfactorily. However, my point about Google’s ex ante judgement remains intact, and thus, at best, Google’s efforts can still only reflect on the experiments that they did do, and not those that they opted not to. If you accept this line of reasoning, then Google’s efforts still say nothing substantive about Will’s list (unless Will’s list only contains the experiments Google attempted).


    You haven't read the papers in question, which is fine. But it means that you can't have an opinion on them. Which is also fine. Instead though, what you've done is construct a deeply problematic piece of fallacious reasoning because you want to have an opinion on them.


    To be clear, I'm not asserting anything about the papers. I'm not asserting them as 'certain.'


    What I'm saying is that you haven't read them, and so you can't make assertions about them. Nor should you construct odd and fallacious hypothetical trapdoors to try to extract yourself from your own haphazard assertions.


    Papers and results aside, this is not a rigorous or methodical way to go about the task of thinking.


    You demand rigour of others, which is fine, but it seems only right that you demand it of yourself also.

  • A theory which predicts excess heat - without predicting the quantity - will be right by chance 50% of the time.

    That would only be the case if people reported cases with very low excess heat, within the margin of error, as positives. They do not. The reports describe significant excess heat, well above the margin of error for the calorimeter. Results with marginal heat, where it is 50% likely they were actually negative, are listed as negative.


    In point of fact, any experiment that is even marginally positive may actually be positive. No calorimeter recovers 100% of the heat. That is impossible. When the instrument is working properly, a heat balance with no excess is always slightly negative. It would only be slightly positive if there is a bias somewhere. Everyone knows this, and it would show up in a calibration, so the researcher would say something like: "the instrument had a 1% positive bias, so all results less than 3% are considered negative."
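
    A concrete, hypothetical version of that classification rule might look like this (the bias, noise, and heat balances are made-up values):

```python
# Hypothetical classification rule of the kind described: calibration shows a
# +1% bias, the calorimeter noise is ~2%, so only balances above +3% count as positive.
bias = 0.01        # positive bias found in calibration (assumed)
noise = 0.02       # margin of error of the calorimeter (assumed)
threshold = bias + noise

heat_balances = [-0.015, 0.004, 0.012, 0.045, 0.08]   # fractional excess, made-up values

for hb in heat_balances:
    verdict = "positive" if hb > threshold else "negative"
    print(f"heat balance {hb:+.3f} -> {verdict}")
```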


    Add file drawer effect and unexpected systematic errors....

    As I said, the file drawer effect does not exist for the major studies. They reported all results. There are no systematic errors. If there were, someone would have found them by now. These studies at Los Alamos, SRI and elsewhere underwent very thorough peer-review, lasting months or years in some cases. Reviewers included skeptical people who looked carefully for systematic errors. They found none. You, Morrison and the other skeptics have found no errors.


    Granted, you think you have found errors, but Storms, McKubre and the other authors disagree, as do I. They may not have read your comments, but they saw the same comments from reviewers during peer review, and they answered all of them to the satisfaction of the reviewers. Not to your satisfaction, of course.


    There is no "1g of this will generate 120J of heat"

    Yes, there is. Again, if you would read the literature you would know this.

  • That would only be the case if people reported cases with very low excess heat, within the margin of error, as positives. They do not. The reports describe significant excess heat, well above the margin of error for the calorimeter.

    The problem is that no controlled LENR results have complete analyses of error. That is because error estimates must include a term for the differences between control and active runs, which is difficult to bound. It is usually assumed negligible - but that assumption may be wrong. Checking whether it is wrong is necessary in each case, and given the surprising effects possible in metal-H or metal-D electrolysis we cannot rely on "what electrochemists all know" wisdom.
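
    One way to make that concrete is to write the error budget out with the control-vs-active difference as its own term; this is a sketch with assumed numbers, not any particular experiment's budget:

```python
# Sketch of an error budget for a controlled calorimetry result (illustrative only).
# Apparent excess = (active heat balance) - (control heat balance).
# Its uncertainty must include a term for real non-nuclear differences between the
# control and active cells, not just the per-run instrument noise.

import math

instrument_noise_w = 0.05          # per-run calorimeter noise (assumed)
control_active_difference_w = 0.2  # possible non-nuclear difference between cells (assumed)

total_error_w = math.sqrt(2 * instrument_noise_w**2 + control_active_difference_w**2)

apparent_excess_w = 0.3
print(f"apparent excess = {apparent_excess_w} W, error bound = +/- {total_error_w:.2f} W")
# With the difference term included, this result is barely above the bound;
# neglecting that term (the usual assumption) makes the same result look many sigma.
```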


    For first-principles results, where the data depend only on direct measurement of excess heat, I do not think we have so many positives? Perhaps it would be interesting to look at what fraction of excess heat measurements come from direct measurement. The most obvious such positive, historic, which I am now allowed to mention, has accompanying video evidence which appears to contradict the calculation in the paper. Perhaps we could list others?


    In point of fact, any experiment that is even marginally positive may actually be positive. No calorimeter recovers 100% of the heat. That is impossible. When the instrument is working properly, a heat balance with no excess is always slightly negative.

    This is covered by the same issue: excess heat results obtained by comparison with a control run (the usual case) do not have this built-in negative bias.

    As I said, the file drawer effect does not exist for the major studies. They reported all results. There are no systematic errors. If there were, someone would have found them by now. These studies at Los Alamos, SRI and elsewhere underwent very thorough peer-review, lasting months or years in some cases. Reviewers included skeptical people who looked carefully for systematic errors. They found none. You, Morrison and the other skeptics have found no errors.

    The level of peer review of these experiments is lamentable. I, just as you, regret the way mainstream science mostly ignores LENR papers. However, there is now perhaps a chance to remedy that, given the increasing interest. A similar experiment done now, claiming new evidence, could be peer reviewed properly, with the full process where reviewers go back and ask experimenters to perform additional checks to make the results more solid. You need to find peer reviewers from outside the LENR community: most people have now forgotten the old controversy, so most (the younger ones anyway) have no preconceptions.


    But when you say skeptics have found no systematic errors: Shanahan has found two errors which might apply. Specifically, CCS (calibration constant shift) will apply whenever error bounds are not properly calculated in a controlled calorimetry result. That can be determined yes/no for every such experiment; there is no argument. It is trivial, and you would hope it did not apply to many, but Shanahan has some examples of it applying.
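
    A made-up worked number shows why this matters: the calibration shift is multiplied by the whole input power, not just by the excess power.

```python
# Illustrative CCS (calibration constant shift) arithmetic, with made-up numbers.
# A small shift in the calibration constant between the control run and the active
# run is multiplied by the full input power, so it can mimic a sizeable "excess".

p_in_w = 20.0                 # electrical input power (assumed)
claimed_excess_w = 0.5        # claimed excess heat (assumed)
calibration_shift = 0.03      # 3% shift in the calibration constant (assumed)

apparent_excess_from_shift = calibration_shift * p_in_w
print(f"apparent excess from calibration shift alone: {apparent_excess_from_shift} W "
      f"(claimed excess: {claimed_excess_w} W)")
```

    With these assumed numbers, a 3% calibration shift alone would produce an apparent 0.6 W of "excess", more than the 0.5 W claimed.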


    ATER (at-the-electrode recombination) will apply directly to all electrolysis experiments. The arguments against it being relevant all have sound counterarguments:

    (1) It does not happen. No-one can know this given the unusual effects on metal-H electrochemistry and the fact that it can be mistaken for excess heat.

    (2) It is not relevant in experiments with a recombiner. But it is relevant, because it alters the temperature distribution, and unless the effect of such changes on results (which can, with a correct design, be small) is analytically bounded, it remains an unknown problem.


    So: what is needed is a replicable electrolysis experiment showing excess heat where the following issues:

    • Differences with control
    • ATER exaggerating differences with control
    • CCS - that is, just ensuring that the correct error bounds are calculated, taking into account the fact that an error between the control and active systems delivers an enthalpy error multiplied by the input power - often much larger than the excess heat power.


    are all handled properly, without assumptions that skeptics will not accept. Where there is such an assumption, we can have an agreed extra process, built into the replication protocol, by which it can be tested.


    You give me one experiment (McKubre ?) showing repeated excess heat above error bounds where that is all done already and I will either agree with you - and we have our "replicable evidence of LENR" experiment to give to people like Florian on the other thread who ask - or point out something you are getting wrong.


    I agree replicating these experiments properly is expensive - but not impossible and worth it if we can be sure the original results are certain as above.


    THH

  • Yes, there is. Again, if you would read the literature you would know this.

    No, Jed, I have read the literature enough to know this.


    Results cannot be quantitative because the quantity and efficiency (at getting fusion) of NAEs vary from one sample to another and cannot be predicted in advance. It is because I have read the literature that I know this.


    The only quantitative result I know of is the He / excess heat comparison. That is very difficult to do well, in a way skeptics would accept, but possible. You remember Abd wanted to do that.


    THH

  • According to Hagelstein at ICCF24, all of Google's 400 replication attempts were on the Parkhomov experiment. I think that revelation came in the Q&A of the "Panel Discussion" hosted by Matt Trevithick, which has since been edited out.

    I would dearly like to understand why they did not replicate Pd/D electrolysis. I vaguely remember that someone somewhere said they had a reason for that - I am sure they were asked?


    Did the LENR community recommend they do that? I would have, I know (in fact I remember saying so here when asked).


    THH

  • Did the LENR community recommend they do that? I would have, I know (in fact I remember saying so here when asked).

    They did work with some of the old guard before starting in the lab, but I am not sure if they solicited their input as to what replications they would recommend. As I recall, Google said they had a couple of grad students do a deep dive and draw up a list of the most promising.


    Based on that, we thought they were doing 400 experiments covering many of the field's experimental "successes". Instead, they attempted the same Parkhomov experiment 400 times, and failed 400 times.

  • They did work with some of the old guard before starting in the lab, but I am not sure if they solicited their input as to what replications they would recommend. As I recall, Google said they had a couple of grad students do a deep dive and draw up a list of the most promising.


    Based on that, we thought they were doing 400 experiments covering many of the field's experimental "successes". Instead, they attempted the same Parkhomov experiment 400 times, and failed 400 times.

    I find this very difficult to believe - I'd like to hear their side of it.
