The perpetual “is LENR even real” argument thread.

  • To be OT for this thread:


    I have some hope that the post-Google rash of respectable scientific interest in LENR will result in more certainty about these classic results.


    Either they are real, in which case the better-instrumented experiments will confirm this, or they are not.


    If there is no confirmation from the better experiments, I predict:

    The LENR community will ignore it and say either the experimenters were biased or they just did the wrong thing, LENR is fickle, etc.

    The mainstream science community will (gradually) lose interest.


    What is confirmation? It is this:


    1. Find a replicable experiment.
    2. Replicate it with identical methodology, equipment, etc., but add extra instrumentation.
    3. If results prove illusory, stop. If results still appear robustly interesting,
    4. repeat with a modified experiment that has the same or more instrumentation and focuses on the (now clearer) anomaly.
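
    For concreteness, steps 1-4 can be written out as a toy loop. This is only an illustrative sketch: the function names and numbers below are hypothetical placeholders, not any lab's actual protocol.

    ```python
    # Purely illustrative sketch of the confirmation loop in steps 1-4 above.
    # The "experiment" is faked with a random draw; every name here is a
    # hypothetical placeholder, not a reference to any real protocol.
    import random

    def run_replication(extra_instruments: int) -> bool:
        """Stand-in for one replication round; True means the anomaly still
        looks robust at this level of instrumentation."""
        return random.random() < 0.5  # fake outcome, for illustration only

    def confirm_anomaly(max_rounds: int = 4) -> str:
        instruments = 1
        for round_no in range(1, max_rounds + 1):
            if not run_replication(instruments):   # step 3: illusory -> stop
                return f"not confirmed in round {round_no}"
            instruments += 1                       # step 4: add instrumentation, repeat
        return f"anomaly survived {max_rounds} rounds"

    print(confirm_anomaly())
    ```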


    That is the only procedure that will make non-LENR-enthusiasts pay much attention to LENR.


    Rightly so.


    And if LENR anomalies are real, it will be relatively easy to carry out.

  • The Miles & Patterson work is much more substantive and interesting: a candidate for replication, if others believe the certainty of the SEM-EDS results and the calculations about total mass.

    Miles-Patterson used SIMS and INAA for their chemical analysis, not EDS. INAA is especially robust, as it delivers the content of the bulk of the material, not just the surface. Both techniques are robust by themselves, but together they are uncontestable.

    I certainly Hope to see LENR helping humans to blossom, and I'm here to help it happen.

  • Again, I resort to ChatGPT, knowing nothing myself:


    Yes, SIMS (Secondary Ion Mass Spectrometry) elemental analysis can be misinterpreted if not properly understood or if certain factors are overlooked. SIMS is a technique used to determine the elemental composition and distribution of a sample by bombarding it with a focused ion beam and analyzing the secondary ions emitted from the surface.

    Here are some factors that can lead to misinterpretation of SIMS elemental analysis:

    1. Surface Contamination: SIMS is a surface-sensitive technique, and any contamination on the sample surface can interfere with the accurate measurement of the elemental composition. Contaminants may produce secondary ions that can be mistakenly identified as sample elements, leading to incorrect interpretations.
    2. Matrix Effects: The matrix in which the elements of interest are embedded can affect the SIMS analysis. The ionization efficiency can vary depending on the matrix composition, which can lead to errors in quantification and interpretation if not properly accounted for.
    3. Ionization and Sputtering Yield Variations: Different elements have varying ionization and sputtering yields, meaning that the intensity of secondary ions produced by different elements may not be directly proportional to their abundance in the sample. This can lead to misinterpretation of relative elemental concentrations.
    4. Depth Profiling Artifacts: SIMS is often used for depth profiling, where the composition of a sample is analyzed layer by layer. However, factors such as differential sputtering and ion beam-induced mixing can cause changes in the sample's composition, leading to inaccurate depth profiles if not properly considered.
    5. Spectral Interference: SIMS analysis involves the measurement of mass spectra, and overlapping peaks from different elements can cause spectral interference. If not properly resolved, this interference can lead to misidentification or incorrect quantification of elements.
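
    To make points 2 and 3 concrete, here is a toy sketch of the standard relative-sensitivity-factor (RSF) step used to turn SIMS ion counts into concentrations. The count rates and RSF values are invented for illustration; they are not taken from Miles & Patterson or any real analysis.

    ```python
    # Illustrative only: SIMS point-analysis quantification via a relative
    # sensitivity factor, C = RSF * (I_impurity / I_matrix). The same raw
    # count ratio gives very different concentrations if the assumed RSF
    # (which depends on the matrix) is off; this is where matrix effects
    # and yield variations bite. All numbers below are made up.

    def sims_concentration(impurity_counts: float,
                           matrix_counts: float,
                           rsf_atoms_per_cm3: float) -> float:
        return rsf_atoms_per_cm3 * impurity_counts / matrix_counts

    impurity_counts, matrix_counts = 1.2e4, 1.0e6   # fake secondary-ion counts
    for rsf in (5e22, 5e23):                        # fake RSFs, one decade apart
        c = sims_concentration(impurity_counts, matrix_counts, rsf)
        print(f"RSF = {rsf:.0e} atoms/cm^3 -> concentration ~ {c:.1e} atoms/cm^3")
    ```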

    ------------------


    While INAA (Instrumental Neutron Activation Analysis) is a highly accurate technique for elemental analysis, there are potential factors that can lead to misinterpretation of the results if not appropriately addressed. Here are some examples:


    Interferences: INAA can suffer from interferences caused by the presence of other elements or isotopes that emit gamma radiation at energies similar to the isotopes of interest. These interferences can lead to misidentification or incorrect quantification of elements if not properly accounted for during data analysis.


    Sample Heterogeneity: If the sample being analyzed is not homogeneous, variations in elemental composition within the sample can affect the accuracy of the results. It is essential to ensure representative sampling and homogenization to minimize the risk of misinterpretation.


    Irradiation Conditions: The accuracy of INAA results relies on proper selection and control of irradiation conditions, such as neutron flux, irradiation time, and energy spectrum. Variations in these parameters can introduce errors and lead to misinterpretation of elemental concentrations if not properly accounted for during calibration and data analysis.


    Decay Corrections: INAA relies on the measurement of gamma radiation emitted by radioactive isotopes produced during irradiation. The decay of these isotopes over time needs to be accurately accounted for during data analysis. Incorrect decay corrections can lead to misinterpretation of elemental concentrations, particularly for isotopes with short half-lives.
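
    As a toy example of the decay correction just mentioned (with an invented half-life and timing, not taken from any real INAA measurement), the back-correction to the end of irradiation is a simple exponential:

    ```python
    # Illustrative decay correction: back-correct a measured count rate to
    # the end of irradiation, A0 = A_measured * exp(lambda * t_cool), with
    # lambda = ln(2) / T_half. Numbers are invented for illustration.
    import math

    def decay_corrected_activity(measured_rate: float,
                                 half_life_h: float,
                                 cooling_time_h: float) -> float:
        lam = math.log(2.0) / half_life_h
        return measured_rate * math.exp(lam * cooling_time_h)

    # A short-lived activation product (T1/2 = 2.6 h, invented) counted 12 h
    # after irradiation needs a ~25x correction, so small timing or half-life
    # errors translate into large concentration errors.
    print(decay_corrected_activity(measured_rate=100.0,
                                   half_life_h=2.6,
                                   cooling_time_h=12.0))
    ```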


    Calibration and Quality Control: To ensure accurate results, proper calibration using certified reference materials (CRMs) with known elemental compositions is crucial. Additionally, implementing quality control measures, such as analyzing quality control samples alongside unknown samples, helps monitor the accuracy and precision of the analysis. Failure to perform adequate calibration and quality control can result in misinterpretation of the elemental analysis results.


    Data Analysis Errors: Accurate data analysis is essential for interpreting the results correctly. Errors in data processing, such as incorrect background subtraction, peak integration, or statistical analysis, can lead to erroneous interpretations of elemental concentrations.


    External Factors: Environmental factors, such as radiation background or instrumental drift, can influence the accuracy of the results. Proper calibration, quality control, and instrument maintenance help mitigate such external factors.


    To minimize the potential for misinterpretation, it is crucial to follow established protocols, perform proper calibration and quality control, use appropriate standards, account for potential interferences, and carefully consider the specific limitations and sources of error associated with INAA. Collaborating with experts in the field of neutron activation analysis and employing good laboratory practices will also contribute to accurate and reliable results interpretation.



    To minimize misinterpretation, it is important to consider these factors and use appropriate calibration standards, analytical protocols, and data analysis methods. Additionally, complementary techniques and expertise in materials science or analytical chemistry can help validate and interpret the SIMS results accurately.

  • THHuxleynew, your insistence on casting egregious doubts on published results, based on your belief that it is not possible, is really annoying, especially when you imply that the people who performed the analysis did not know what they were doing. That is not only insulting but patronizing, and it belittles the great care and effort that the authors put into bounding, identifying and quantifying all potential sources of error.

    I certainly Hope to see LENR helping humans to blossom, and I'm here to help it happen.

  • I agree with Alan, having a skeptic in residence is useful. However, such a skeptic needs to play by certain rules and be learned in the subject. For example, I have written a total of 10 reviews of the experimental work as well as two books. These publications attempted to show what was real, what might be the result of error, and how the patterns of behavior might identify the mechanism. These evaluations are never cited by the skeptics. The observed patterns are never discussed. Instead, the focus is on a few details in a few old papers, with no acknowledgment of how the LENR process was studied by other people using different and better measurements while seeing exactly the same behavior.


    Everyone knows that all measurements contain errors. Everyone knows that measurements improve over time. If the same behavior is observed, confidence that the same phenomenon is being observed is increased. That process has happened. LENR has been replicated and studied in so many different ways, all showing the same basic behavior, that its reality has been established. So, why are we still discussing the reality of LENR? Why is our time wasted this way? Why is the nature of this amazing discovery not discussed instead? Civilization has been gifted the ideal energy source and a clue to a very unusual kind of nuclear process having broad implications. Why is this not discussed?

  • I agree with Alan, having a skeptic in residence is useful. However, such a skeptic needs to play by certain rules and be learned in the subject. For example, I have written a total of 10 reviews of the experimental work as well as two books. These publications attempted to show what was real, what might be the result of error, and how the patterns of behavior might identify the mechanism. These evaluations are never cited by the skeptics. The observed patterns are never discussed. Instead, the focus is on a few details in a few old papers, with no acknowledgment of how the LENR process was studied by other people using different and better measurements while seeing exactly the same behavior.


    Everyone knows that all measurements contain errors. Everyone knows that measurements improve over time. If the same behavior is observed, confidence that the same phenomenon is being observed is increased. That process has happened. LENR has been replicated and studied in so many different ways, all showing the same basic behavior, that its reality has been established. So, why are we still discussing the reality of LENR? Why is our time wasted this way? Why is the nature of this amazing discovery not discussed instead? Civilization has been gifted the ideal energy source and a clue to a very unusual kind of nuclear process having broad implications. Why is this not discussed?

    Well, what can I say, you are correct.


    On the issue of why, oh why, we are still discussing this, basically I reduce it to:


    Because many people have yet to catch up to the news (20 years later, assuming it took 14 years to be absolutely certain of it all), and we strive to reduce that gap. In doing so, we often meet a person who gets curious enough to take a look but, not having the time or skills to judge for themselves, and not being bothered to dig deeper, relies on other people's published or available opinions. While doing so, they will probably bump into one of these threads, and may be tempted to find it easier to justify their lack of interest in the comments of a well-intentioned skeptic like THH. It's a PR guerrilla war, if you boil it down to its essence. Unfortunately, part of the public that we are trying to reach includes people with decision-making power, and a good online discussion with a well-meaning skeptic like THH can be important from that point of view.

    I certainly Hope to see LENR helping humans to blossom, and I'm here to help it happen.

  • with no acknowledgment of how the LENR process was studied by other people using different and better measurements while seeing exactly the same behavior.

    And that's a plus! A feature, not a bug. It is good that people use different and better measurements. Seebeck instead of isoperibolic calorimeters.


    What is ironic is that when people use different instruments, skeptics often say: "Why didn't they do exactly the same experiment?!? Why isn't this a close replication?" Then if someone does the very same experiment, the way Lonchampt and Biberian replicated the boil off experiments, the skeptics say: "It could be a systematic error!" Researchers are damned if they do, and damned if they don't.


    Ed's point that people should read his books is right on target. Everyone should read them.

  • Got a problem with research integrity? Set up a committee of politicians to deal with it! Might as well use monkeys to regulate the banana trade. I have attended and spoken at a couple of these 'expert committees' in parliament. The level of ignorance about the topics they are pontificating about is mind-boggling. Thank goodness we have a skilled and professional civil service to sort them out. (Which they do not like).


    A report released by the science, innovation and technology committee of the UK House of Commons is recommending that a subcommittee dedicated to tackling issues related to the reproducibility of research should be created.

    In 2018, the committee, which consists of 11 MPs, previously recommended that a body be created to address research integrity issues at the country’s universities.

    In response, the Committee on Research Integrity (Cori) was launched in July 2021 as a free-standing committee for three years. In May last year, after Cori’s inaugural meeting with the full membership, the organisation released its updated remit and aims.

    That includes developing a strategy to achieve independence from the country’s umbrella funding agency UK Research and Innovation within its first year. It will also need to advise across the sector on issues related to research integrity and work with the advocacy group Universities UK to implement the Concordat to Support Research Integrity.

    But now the committee is saying they are concerned at Cori’s apparent lack of focus on irreproducibility of research – a thorny issue in academic research that has received a lot more attention in recent years.

    ‘While we welcome the establishment of the new Committee on Research Integrity and note that one of its so-called strategic pillars is to “define the evidence base”, we are concerned about the absence of reproducibility as a priority in the new organisation’s strategy,’ a new report released by the group on 10 May reads. ‘We recommend that a sub-committee focused solely on questions of reproducibility in research should be established.’

    The report adds: ‘We found that while there are many reports of problems of non-reproducibility, there has been no comprehensive and rigorous assessment of the scale of the problem in the UK, nor which disciplines are most affected and therefore the extent to which this is indeed a “crisis”.’

    Scholarly publishers will also need to play a part by ensuring timely correction of the scientific record with retractions, corrections and errata. The process, the report suggests, shouldn’t take longer than two months.

    Marcus Munafò, a biological psychologist at the University of Bristol, UK, who heads the UK Reproducibility Network, says the new report makes progressive and radical recommendations, which are achievable if there is coordination across the sector.

    ‘There are promising early signs – wider adoption of Registered Reports Funding Partnerships (RRFP) by funders and journals is one key recommendation in the report, and the pilot by Cancer Research UK and a range of journals from Springer-Nature, Wiley and PLOS demonstrates that such collaborative approaches are feasible,’ Munafò notes. Under RRFPs, research funders and journals team up to ensure that all research results are posted online regardless of their outcome.

    Training researchers on research integrity during undergraduate, postgraduate and early-career stages is also crucial to ensuring reproducible research, the report argues.

    In addition, funders should also check whether they provide the necessary resources to ensure that the papers stemming from their grants are reproducible, the report says. The report recommends that funders, including UKRI, start requiring reproducibility as a prerequisite for the grants they fund.

    While the committee welcomes the UKRI’s move to mandate all publications stemming from its funding to be open access, it says the funding body should go further to also require data and code underlying papers to also be freely available online, making it easier for outsiders to replicate and reproduce studies.


  • And some people think that these kinds of 'integrity' problems are unique to LENR, which in general is not a field where people compete for promotions, are pressured to publish huge amounts of material, or compete for public funds.


    Speaking of 'appalling ignorance' in Parliamentary committees, I was at the 'All Party Parliamentary Group' on the Hydrogen Economy last September. They had a presentation from one of the bigwigs (not a scientist) from the HyGreen Teesside project - one of the biggest green hydrogen facilities in the UK, targeting production by 2025 with an initial planned phase of 80MWe of installed hydrogen production capacity.


    Somebody asked him what the difference between green and grey hydrogen was. He didn't know.

  • And some people think that these kinds of 'integrity' problems are unique to LENR, which in general is not a field where people compete for promotions, are pressured to publish huge amounts of material, or compete for public funds.


    Speaking of 'appalling ignorance' in Parliamentary committees, I was at the 'All Party Parliamentary Group' on the Hydrogen Economy last September. They had a presentation from one of the bigwigs (not a scientist) from the HyGreen Teesside project - one of the biggest green hydrogen facilities in the UK, targeting production by 2025 with an initial planned phase of 80MWe of installed hydrogen production capacity.


    Somebody asked him what the difference between green and grey hydrogen was. He didn't know.

    My 14 years of experience (1998-2013) as a "research grantsphere" dweller (as grant writer, project engineer, researcher and R&D-results marketing executive) left me never wanting to go back to that kind of work environment. Research was the fun part, but after a certain point it stopped being rewarding enough to make it worth dealing with all the rest (the multiple levels of absurdity of the system and the pettiness of the oversight authorities).

    I certainly Hope to see LENR helping humans to blossom, and I'm here to help it happen.

  • I agree with Alan, having a skeptic in residence is useful. However, such a skeptic needs to play by certain rules and be learned in the subject. For example, I have written a total of 10 reviews of the experimental work as well as two books. These publications attempted to show what was real, what might be the result of error, and how the patterns of behavior might identify the mechanism. These evaluations are never cited by the skeptics. The observed patterns are never discussed. Instead, the focus is on a few details in a few old papers, with no acknowledgment of how the LENR process was studied by other people using different and better measurements while seeing exactly the same behavior.


    Everyone knows that all measurements contain errors. Everyone knows that measurements improve over time. If the same behavior is observed, confidence that the same phenomenon is being observed is increased. That process has happened. LENR has been replicated and studied in so many different ways, all showing the same basic behavior, that its reality has been established. So, why are we still discussing the reality of LENR? Why is our time wasted this way? Why is the nature of this amazing discovery not discussed instead? Civilization has been gifted the ideal energy source and a clue to a very unusual kind of nuclear process having broad implications. Why is this not discussed?

    Ed, I can understand your view. It is natural.


    I can also answer your question. The (only) reason you need to do this is if you want to convince more mainstream scientists that the LENR phenomena are worth more investigative effort (I think anyone who thought they were nuclear would probably also think they were worth more effort).


    Maybe, post Team Google, that is no longer needed. There are quite a few mainstream proposals to investigate further, which all have the same form: take a type of experiment where the anomaly seems well documented and replicable, replicate it with better instrumentation, and understand it better.


    Like the one I posted here from April 2023.


    In fact, I am going to change my mind. Better instrumentation and more definite parametrisation will all help to find a mechanism, even if they are not needed to better validate LENR. For example, for transmutation there are a few testable ideas, and with new experiments and more instrumentation they can be tested and either become more likely or be knocked down. That is surely valuable in finding a theory. There are a few examples of what the additional data needed might be for the transmutation experiments in the paper I linked.


    THH

  • My 14 years of experience (1998-2013) as a "research grantsphere" dweller (as grant writer, project engineer, researcher and R&D-results marketing executive) left me never wanting to go back to that kind of work environment. Research was the fun part, but after a certain point it stopped being rewarding enough to make it worth dealing with all the rest (the multiple levels of absurdity of the system and the pettiness of the oversight authorities).

    It is hated by everyone. But if you (or your collaborators) play that game a bit, and you are at a good enough institution to have some slack from the need always to find money, you can go for, and get, blue-sky grants. But I understand anyone not wanting life as a university scientist; it is tough.

  • But now the committee is saying they are concerned at Cori’s apparent lack of focus on irreproducibility of research – a thorny issue in academic research that has received a lot more attention in recent years.

    ‘While we welcome the establishment of the new Committee on Research Integrity and note that one of its so-called strategic pillars is to “define the evidence base”, we are concerned about the absence of reproducibility as a priority in the new organisation’s strategy,’ a new report released by the group on 10 May reads. ‘We recommend that a sub-committee focused solely on questions of reproducibility in research should be established.’

    The report adds: ‘We found that while there are many reports of problems of non-reproducibility, there has been no comprehensive and rigorous assessment of the scale of the problem in the UK, nor which disciplines are most affected and therefore the extent to which this is indeed a “crisis”.’


    Alan Smith, I read that piece about research integrity and it gave me a migraine. As if publicly funded research did not have enough bureaucracy already.


    I actually disagree.


    On this site, and elsewhere, people are concerned about the more sophisticated variants of the p-value problem. The quality of peer review is variable, and it has been proven that you can get stuff published which can then be quoted as positive when it is not (and should never have been published).


    This is always going to be a problem when you have paid research, and it is right to have it generally understood and to determine where it happens most.

  • And some people think that these kinds of 'integrity' problems are unique to LENR, which in general is not a field where people compete for promotions, are pressured to publish huge amounts of material, or compete for public funds.


    Speaking of 'appalling ignorance' in Parliamentary committees, I was at the 'All Party Parliamentary Group' on the Hydrogen Economy last September. They had a presentation from one of the bigwigs (not a scientist) from the HyGreen Teesside project - one of the biggest green hydrogen facilities in the UK, targeting production by 2025 with an initial planned phase of 80MWe of installed hydrogen production capacity.


    Somebody asked him what the difference between green and grey hydrogen was. He didn't know.

    Yes, strange that someone would not know that. I mean, I've never had anything directly to do with hydrogen, and am not highly interested in it, but I remember programmes about grey (or, if you believe CCS will work, blue) hydrogen. I thought this stuff was general knowledge for anyone with a vague interest in modern technologies.


    It shows, appallingly, the way that in the UK in particular those with management skills can pride themselves on being scientifically illiterate. Unlike in Germany.

  • And that's a plus! A feature, not a bug. It is good that people use different and better measurements. Seebeck instead of isoperibolic calorimeters.

    Yes and no.


    A different calorimeter will have different characteristics, so what you get is not a replication that makes any one result more certain. I am always in two minds about whether this is helpful or unhelpful.


    If you distrust all the results, the only real way to get to the bottom of it is to take a claimed definitely positive result and go on replicating it with identical methodology but more instrumentation until the truth outs. I think this is what many mainstream scientists would want to do. Frankly, it may annoy people here, but if it leads to results everyone accepts, that is surely good? It is exactly what I want to do when I have doubts.


    Also, the better instrumentation and parametrisation you get from this process can help even if you feel LENR is proven and that aspect is irrelevant.

  • The accepted values for the enthalpy of formation of most chemical reactions in the field of chemistry are based on the use of a calorimeter. This technique is well understood and obviously very accurate. No one challenges these measurements. They are accepted with an error band that is assigned using well-accepted methods. But when the same method is used to study LENR, suddenly, all kinds of hidden errors are proposed to make the measurements invalid.
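
    To be concrete about what such a calorimetric determination involves, here is a minimal worked sketch of the usual bookkeeping: heat from the calibrated calorimeter constant and temperature rise, divided by moles reacted. The numbers are invented for illustration and refer to no particular experiment.

    ```python
    # Minimal sketch of a calorimetric enthalpy determination.
    # q = C_cal * dT  (heat absorbed by the calorimeter);
    # dH = -q / n     (molar enthalpy; negative = exothermic).
    # All numbers are invented for illustration.

    def molar_enthalpy_kj_per_mol(cal_constant_j_per_k: float,
                                  delta_t_k: float,
                                  moles_reacted: float) -> float:
        q_joules = cal_constant_j_per_k * delta_t_k
        return -q_joules / moles_reacted / 1000.0

    # e.g. a 4.50 kJ/K calorimeter warming by 2.00 K for 0.0300 mol reacted:
    print(molar_enthalpy_kj_per_mol(4500.0, 2.00, 0.0300))  # about -300 kJ/mol
    ```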


    People have been measuring tritium at very low levels for many decades. These measurements are accepted as accurate and reliable. But when the same methods are used to measure tritium produced by LENR, suddenly all kinds of "hidden" errors are suggested to make the measurement invalid.


    People who are considered competent to make measurements involving chemistry, public safety, and national security are deemed incompetent when they make measurements involving LENR. The double standard borders on how the insane behave.


    Yet, this double standard is allowed to flourish. Why?

  • I gave the answer earlier in this thread, saying that I realised no one in the LENR community would agree.


    LENR experimental evidence has a higher bar to meet than evidence supporting most other theories, because LENR, as a theory, is not highly predictive as to exactly what the results of experiments should be. Indeed, there is no possible experiment that would disprove LENR, whatever the results. That makes for a higher required standard of evidence.


    And all of these uncertainties about experiments could be settled to everyone's satisfaction by repeated exact replication (of a good experiment), adding instrumentation and doing parametrization: in one lab until all uncertainty is removed, then in another, independent lab. Many now think they have decent replicability, say 1 in 5. You would need more repetition than you think necessary, and more instrumentation than you think necessary. But it can be done. So why not?
