The perpetual “is LENR even real” argument thread.

  • No, it is not possible that the replications were replicating errors. An error is always systematic: it is a problem with the instrument or the method. The groups in 1990 used many different systems and methods, such as flow calorimeters instead of isoperibolic ones, or thermistors instead of thermocouples.

    Suppose all of those researchers had used the very same kind of calorimeter, with cells of the same dimensions, and the same array of 5 thermistors, from the same maker (Thermometrics Ultrastable Thermoprobes). And the same kind of power supply, and so on. In that scenario they might have replicated the errors. It would be highly unlikely, but it is at least possible. However, professors tend to do experiments their own way. They use equipment they have on hand. Oriani used a Seebeck calorimeter designed to hold a human baby. I kid you not! There is no way he could have replicated a calorimetric error made by F&P with their half-silvered Dewar calorimeter cell. He might have made some other completely unrelated error. You can make a mistake with any kind of calorimeter, but you cannot make the same mistake with an isoperibolic Dewar that you make with a Seebeck gadget designed to measure baby metabolism.


    One of the reasons professors use different instruments and techniques is to avoid accidentally replicating an error. That is what they tell me. They are well aware of the risk of following the methods too closely, and using instruments exactly alike.


    In some cases they do lend or borrow instruments. I am not saying it never happens. What often follows is they do things differently. Wrong, in some cases. Fleischmann lent an instrument to the NHE lab. He wrote detailed instructions, which they ignored. They did the experiment wrong. Which shows that you are right -- experts do make mistakes! Fleischmann was upset by this. Miles went to the NHE lab for months, looked at the data, and determined what they had done. I described this briefly on pages 32 - 35 here:


    https://www.lenr-canr.org/acrobat/RothwellJreviewofth.pdf


    Fleischmann and Miles wrote far more technical and detailed descriptions than mine. I linked to their papers in the References. Have a look. They included long tables of the original data from the NHE, so you can run the numbers yourself.

  • I went to one of the top 6(7) universities in the world..

    there were plenty of clever idiots who went there who didn't work on LENR..

    There are clever idiots in every profession, in every walk of life. Read history and you will find countless examples. Despite that, our institutions work pretty well on average. Airplanes seldom crash. The internet seldom stops dead. But airplanes do crash, and it is usually because some clever idiot did something stupid. No matter how many precautions they take in aviation, things go wrong. But only occasionally, because on average, people are competent. Our species would have gone extinct eons ago if that were not the case.


    And THAT is why experimental science works! Because most people know how to do their jobs. That is also why replications are essential in experimental science. One professor can easily be wrong. Two or three might be wrong. But when 92 of them do an experiment again and again, over many years, using different instruments and techniques, and they all get more or less the same answer, that has to be right.


    You will find plenty of mistakes in the cold fusion literature, such as the inept NHE analysis I described above. That happens to be a false negative, but there are false positives as well. The thing is, taken as a group, on average, 92 different researchers in different labs using different instruments do not all make mistakes, every single time, for 20 years.
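    The arithmetic behind that argument can be sketched in a few lines. This is purely illustrative: the per-lab error probability is an assumed, hypothetical number, and the calculation assumes the labs' errors are independent, which is exactly what using different instruments and techniques is meant to ensure.

```python
# Illustrative only: the error rate is an assumed, hypothetical number,
# not a measured one. If each lab's positive result has an independent
# chance p of being a false positive, the chance that ALL n independent
# positives are errors is p**n.

def prob_all_wrong(p: float, n: int) -> float:
    """Probability that n independent positive results are all errors."""
    return p ** n

# Even granting a generous 50% per-lab error rate, 92 independent
# errors in a row is vanishingly unlikely:
print(prob_all_wrong(0.5, 92))   # ~2e-28
```

Of course, the whole calculation stands or falls on the independence assumption; shared assumptions across labs would break it.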

  • There are lots of replicable uncertain results like CR39 tracks.


    I know that - because had there been replicable, certain results, the Google team would have replicated them again, successfully, and been very happy.

    “We know the results are uncertain because Google didn’t replicate them” is a strange and nonsensical argument.


    Moreover, it’s inconsistent that you would accept a single replication by Google as proof positive of LENR, whilst routinely dismissing the replications of eminent scientists.


    One suspects that if Google had actually replicated something, this inconsistency would be resolved in favour of your skepticism remaining intact.

  • Forgive me, Edo. You have not actually given any evidence t

    Well... As suggested before, perhaps you could do your homework. I am under no obligation to provide you with anything. It sounds to me like you are lazy, or comfortable with your ignorance, so you can keep throwing that antiquated stuff in my face and asking for evidence. There is a boatload of it out there, but you do need to open your eyes, or rather your mind, to actually see it!

    Neutrino - Thought to trigger nuclear decay, which is an assumption. Then we use the decay of Cl36, I think it was, to tell us that the neutrino did it. There is NO evidence for the existence of neutrinos other than that assumption. Neutrinos are on my list of made-up stuff; I forgot that one before.

    Prove to me,
    - neutrino existence (invented to make up for the spin in free neutron decay)
    A deuteron in SAM is proton-electron-proton: in effect just like a single proton, because the center has one electron charge negating two half-proton charges. Still spin one-half, if you want to talk about that. So the electron-proton organization DOES work, but you still think it doesn't.

  • I guess though - for retracted LENR papers - I should reference Paneth & Peters as a venerable example?

    Paneth & Peters were forced under pressure to retract, not because they believed they had made a mistake. As for all those failed replications in '89 and '90, only one matters, and it was the basis for killing cold fusion. The MIT failed replication was fraud: a whistleblower alerted Gene Mallove to it with graphs and stats that proved a positive result. This is probably what formed your early skepticism, or your newest uncertainty. All based on lies!

  • Anyone can know the approximate density by looking at the cell. If the density is so large that the waterline is totally obscured, you can tell by looking. As I said, a 5 year old could tell. If you asked her, "how high is the water now?" she would say: "I can't tell; there are too many bubbles." F&P would see there are too many bubbles, and they would abandon that technique.

    I am not allowed to answer this!!!

  • Paneth & Peters were forced under pressure to retract, not because they believed they had made a mistake. As for all those failed replications in '89 and '90, only one matters, and it was the basis for killing cold fusion. The MIT failed replication was fraud: a whistleblower alerted Gene Mallove to it with graphs and stats that proved a positive result. This is probably what formed your early skepticism, or your newest uncertainty. All based on lies!

    So, this account is contrary to that: which bit of it is wrong?


    https://link.springer.com/content/pdf/10.1038/338692a0.pdf


    Unlike Pons and Fleischmann, Paneth and Peters did not observe the release of large amounts of heat from their apparatus. They write that they would have expected only a fraction of a calorie of heat to be produced by the creation of w-• cubic centimetres of helium. Paneth and Peters conclude that the energy must be released in the form of radiation, but add that they had not detected it.

    In April 1927 came the retraction. Paneth et al. had tested their results at Cornell University and in Berlin and drew the conclusion that they had "underestimated" two sources of error. The first clue emerged during experiments designed to check whether helium could have diffused from the atmosphere through the glass walls of the apparatus. While performing numerous control studies, Paneth et al. found that glass heated in a hydrogen atmosphere yielded up absorbed helium, in amounts of about w-· cubic centimetres, whereas glass heated in a vacuum yielded none. Helium detections at this level, they concluded, were to be discounted.

    The second blow was the realization that the palladinized asbestos catalyst that had given the best results was, like glass, a considerable source of helium, which it released readily in the presence of hydrogen, but not in that of oxygen. In an almost self-mocking tone, Paneth et al. write that they must strike from their results all the trials with a palladinized asbestos catalyst, in which helium was 'created' in amounts up to w-' cubic centimetres, and upon which they had earlier placed "particular value".

  • “We know the results are uncertain because Google didn’t replicate them” is a strange and nonsensical argument.


    Moreover, it’s inconsistent that you would accept a single replication by Google as proof positive of LENR, whilst routinely dismissing the replications of eminent scientists.

    My argument is that they were well funded, took their time, and had a clear remit to try to replicate LENR. So they would have tried whatever the "best bets" were, unless they were idiots. I don't think they were idiots.


    I don't accept a single replication by an independent group as proof positive. For example, they might be replicating the results but interpreting them (wrongly) following the original. Still, it does make the results a lot more definite, and because the Google team were well aware of the controversy, they tried for a higher standard of certainty than most would.

  • How can you make any kind of statement at all about papers you haven't read?

    Like all of us, I rely on what others have done. In this case I have said which others.


    It coincides with my experience of those experiments I have looked at.


    It is also common sense: if a replicable, certain experiment existed, it would be replicated and bring LENR out of the cold.


    It does exist - sort of - for type 2 LENR (classic fusion promoted by deuterated metal lattices). That is now being taken seriously.


    Not for type 1, as I remember has been discussed here many times.

  • The thing is, taken as a group, on average, 92 different researchers in different labs using different instruments do not all make mistakes, every single time, for 20 years.

    An interesting point. I agree, except that they may easily all make the same interpretative mistakes. All experiments interpret results based on assumptions (for example, that voltmeters work). Sometimes everyone gets an assumption wrong, because some never-before-seen condition in that field breaks it, and no one had thought of it.


    Science is full of that stuff.

  • Suppose all of those researchers had used the very same kind of calorimeter, with cells of the same dimensions, and the same array of 5 thermistors, from the same maker (Thermometrics Ultrastable Thermoprobes). And the same kind of power supply, and so on. In that scenario they might have replicated the errors. It would be highly unlikely, but it is at least possible. However, professors tend to do experiments their own way. They use equipment they have on hand. Oriani used a Seebeck calorimeter designed to hold a human baby. I kid you not! There is no way he could have replicated a calorimetric error made by F&P with their half-silvered Dewar calorimeter cell. He might have made some other completely unrelated error. You can make a mistake with any kind of calorimeter, but you cannot make the same mistake with an isoperibolic Dewar that you make with a Seebeck gadget designed to measure baby metabolism.


    One of the reasons professors use different instruments and techniques is to avoid accidentally replicating an error. That is what they tell me. They are well aware of the risk of following the methods too closely, and using instruments exactly alike.

    Jed, you make a good point there and I agree.


    The problem is that all these different experiments were finding small amounts of excess heat in these systems. We have a file-drawer effect. Suppose that in these difficult calorimetry experiments there are various errors, some not understood by many of the participants, such as ATER and CCS. Those who get negative results go away. Those who get positive results publish.


    Normally we can cross-check replications because we have a quantitative result. That is not true here, because the amount of excess heat is unknown and variable. It means that we lack the normal protection against many different unexpected issues, each affecting different experiments: those issues produce the positives, while the negatives either go away or are replaced by a positive obtained with different methodology (a determined group trying to "do it right" until they get a positive).


    It is ironic. To be sure of one type of error you need precise replication. To gain confidence against another you need imprecise replication. And you always need to watch for shared implicit assumptions.
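    The file-drawer mechanism above can be sketched with a toy simulation. The numbers are entirely hypothetical: there is no real effect in the model, only an assumed 10% rate of unrecognized systematic error per lab. It illustrates only how selective publication can fill the literature with positives.

```python
# Toy model of the file-drawer effect. Hypothetical numbers throughout:
# no real excess heat exists in this model, but each lab has an assumed
# 10% chance of an unrecognized systematic error producing a spurious
# positive. Positives publish; null results stay in the drawer.
import random

random.seed(1)  # fixed seed for a reproducible illustration

def published_positives(n_labs: int, error_rate: float) -> int:
    """Count labs whose spurious positive reaches the literature."""
    return sum(random.random() < error_rate for _ in range(n_labs))

# Out of 1000 attempts, roughly 100 spurious positives would reach the
# literature, while ~900 null results quietly go away.
print(published_positives(1000, 0.10))
```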


    THH

  • The problem is that all these different experiments were finding small amounts of excess heat in these systems.

    That is incorrect. Some of them found small amounts, but some found large amounts. From my video:


    Here are 124 tests from various laboratories, grouped from high power to low. Only a few produced high power. Most produced less than 20 watts.

    These data are courtesy of Dr. Edmund Storms, retired from Los Alamos National Laboratory. The data in the graph shown in the video are binned in groups from 0.005 to 10 W, 11 to 20 W, 21 to 30 W, and so on, so that they show up clearly on the video screen:


    StormsPeakheat124tests


    Video
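    For what it's worth, the binning described above can be sketched in a few lines. The sample wattages below are made up for illustration; Storms' actual 124 values are in the graph and video.

```python
# A minimal sketch of the binning described above, using made-up sample
# wattages (NOT Storms' actual 124 results, which are in the linked graph).
from collections import Counter

def bin_label(watts: float) -> str:
    """Group peak excess power into bins: 0.005-10 W, 11-20 W, 21-30 W, ..."""
    if watts <= 10:
        return "0.005-10 W"
    lo = int((watts - 1) // 10) * 10 + 1
    return f"{lo}-{lo + 9} W"

sample = [0.05, 2.0, 8.5, 14.0, 19.9, 25.0, 55.0]   # hypothetical values
print(Counter(bin_label(w) for w in sample))
```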


    I expect you will next say "in all these different experiments, input power was close to output power." That is also incorrect. In some cases output was 3 to 5 times input, and in some cases reported by McKubre and others, there was no input power at all, only output. Even when input is close to output, the signal-to-noise ratio can be high, because input power is not noise, and it is easily subtracted.


    More to the point, you have failed to define what you mean by "small amounts of heat." The signal-to-noise ratio is only low when absolute power is at the low end of what the calorimeter is designed to measure. With a microcalorimeter, even 1 mW of excess heat can be measured with confidence; indeed, 1 mW would be a gigantic signal, almost at the upper limit of detection. However, only a few microcalorimeters have been used in cold fusion studies. The water-based calorimeter designed by Mel Miles measured 500 mW in some cases. That instrument can detect 50 mW with confidence, so that was a large amount of heat. It would be a small amount with some other instrument.
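    The point that "small" is relative to the instrument's noise floor reduces to one line of arithmetic. The sketch below uses the Miles figures given in the text (500 mW signal, 50 mW resolution); the ~1 µW microcalorimeter noise floor is an assumed number for illustration.

```python
# "Small" is relative to the calorimeter's noise floor. The Miles figures
# (500 mW signal, 50 mW resolution) come from the discussion above; the
# 1 uW microcalorimeter noise floor is an ASSUMED value for illustration.

def snr(excess_w: float, noise_floor_w: float) -> float:
    """Signal-to-noise ratio of an excess-heat measurement.
    Steady input power is subtracted, so only calorimeter noise matters."""
    return excess_w / noise_floor_w

# 500 mW excess on an instrument that resolves 50 mW: SNR of 10
print(snr(0.500, 0.050))
# 1 mW excess on a microcalorimeter with an assumed ~1 uW noise floor:
# SNR of about 1000 -- a "gigantic" signal for that instrument
print(snr(0.001, 1e-6))
```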

  • Normally we can cross-check replications because we have a quantitative result. That is not true here because the amount of excess heat is unknown and variable.

    This statement makes no sense to me. All papers report the amount of excess heat. It is always known. Of course we can cross-check replications to show quantitative results. That is what Storms' graph shows, along with his tables of hundreds of results.


    I hesitate to ask, but what on earth did you mean by this?!?

  • I present, once again, the Super-Kamiokande:

    Super-Kamiokande - Wikipedia (en.wikipedia.org)
