THHuxleynew Verified User
  • Member since Jan 18th 2017

Posts by THHuxleynew

    They could not all be missing the same thing because they use completely different instruments and methods.

    That is not technically true. There could be some effect related to the electrochemistry - not in the canon - that would be common to all of them. Since results between different equipment and methods are very inconsistent - with differing amounts of apparent excess heat and no "expected value of excess heat" - they could all be missing the same thing.

    I mention this because you claimed you know more about electrochemistry and calorimetry than world-class experts such as Fleischmann and McKubre, and now you claim you know more about spectroscopy than the people who manufacture the instruments, who you say are inexperienced amateurs. This is not credible. If you want readers here to believe you, you should make your case. Tell us why McKubre is wrong.

    That is not true Jed.


    I always claim that no person is infallible, and that experts in a specific field may have group think that means they all miss the same thing. I don't go far down that rabbit hole - but we need to keep it in mind as a possibility.


    You mis-state those things (which I guess you would agree) in your summary above.


    Jed,


    It is true that I do not wish to be a bore here. You remember the MAJOR PAPER that we discussed for a long time, which had major errors in it? That was the paper associated with the F&P video (foamgate) showing heat after death.


    The blatant discrepancies there were so unanswerable (literally - no-one answered the statements of errors) that the topic was banned here. I refer anyone wanting to continue that conversation (I will not) to that thread.


    And quite right too! This forum is for what people here want to discuss. You would like me to propose major errors in papers only if you can dismiss them. And, I suspect you usually can, and that is because you have different standards from me about what set of results is definitive. Look at all the flaky modern stuff that people here view as likely LENR when it is just pseudo-science or (more charitably) interesting, difficult-to-explain phenomena. The fact that LENR can be used to explain such things means that it is too imprecisely defined to be science - and therefore pseudo-science. The bits of LENR that are science (and they exist) can be better confirmed, or refuted, by current experiments. While there is no negative experiment that would, for you, refute LENR, it is not, for you, science.


    I look at the progression of evidence. I personally think that the early papers on excess heat from D2O-Pd electrolysis are more convincing than any other part of the corpus of LENR evidence. And that for me is a big negative. I would agree that the NAE hypothesis is plausible and might account for such excess heat. But also that the same work - where active environments in Pd can perform unexpected catalytic reactions - offers possible non-nuclear explanations for enough of the evidence that the rest loses coherence.


    And that is where we part company. For you, once that stuff is proved, it is proved. There is no need for coherence.


    I very much welcome the convergence of Material Science, theory (a whole load of screening + resonance + coherent behaviour ideas) and experiment - measuring reaction rates from lowish-energy collisions etc. That is real science which might explain some of the results as unexpected nuclear reactions. It might also explain relative lack of reaction products from certain specific reactions: though it is a bit of a coincidence that those are the only ones that happen to be allowed by screening/resonances/etc.


    Why am I pessimistic? The arguments for the absence of high energy reaction products remain very speculative. I know there is a putative branching ratio change idea, together with lowish-energy alphas being blocked, that might help. We will see. It looks contrived to me - but I will like it a lot more if it leads to doable experiments which can confirm or refute it.


    The post-google (actually - though I hate to say it - post-Rossi - to give credit where it is due) influx of interest and money should make things less speculative. If those old experiments were real, we now know so much more that we should have much clearer results soon (maybe we should already have had them - it has been some time). We do not yet. It is perfectly fair to live in hope.


    Give me an experiment that confirms or refutes LENR?


    Or, more narrowly, give me an experiment that confirms or refutes those old D2O/Pd excess heat experiments?


    The modern ones are characterised by results that get smaller as the experiments get higher accuracy and more certainty, or by experiments with large uncertainties or a lack of replication (Mizuno's untestable-by-anyone-else super-reactors). I bet before Ed did his "relatively cheap" accurate calorimetry, together with careful cathode selection, he expected he would get results as good as other less accurate and careful experiments. That is what I would expect were the effect real. In which case that would have been lab rat proof of LENR and, even without disprovability, the results would be so interesting to non-believers that effort would go into the field. But the nature of LENR is that no experiment can disprove it. That is what makes it non-science. Specific hypotheses within LENR can be disproved, or proved. They are science. And post-google much effort is going into some of those hypotheses.


    I live in (some) hope. Mainly because I am an eternal optimist. I will start being more interested when the commentary here and elsewhere centres on real science.


    Oh - and to keep Alan happy - yes LEC is real interesting science. I've yet to see anything that makes LENR a likely explanation for it - unless you already think LENR is a common effect. LENR is so un-predictive that it can be used to explain almost any weird results...


    THH

    This is a great example of what I DO NOT LIKE about the field of LENR.

    I agree that this would be a great thing to master, but the problem arises with the fact that even though this is a proper reaction (real and factual) we simply generally do not even believe it is possible.


    The experimental results are trace quantities of 17O and 22Ne. We'd need a person very experienced in mass spectrometry (not sure if any are here) to determine what the possible false positives are when reading spectra, and therefore how reliable these results are. There is no serious exploration of this possibility in the paper. Nor of reaction-induced outgassing or ingress of material that could lead to these results (22Ne is ~10% of naturally occurring Ne). It could be a combination of these two potential mechanisms, which opens up a lot of things to consider and rule out.


    Many people here present the straw man that such reactions are not believed because of the Coulomb barrier and the perceived difficulty of making nuclear reactions happen.


    I disagree. Personally, I have no problem envisaging weird QM processes that allow normally forbidden nuclear transitions. Many such processes have been suggested here.


    The problem with the "low-level nuclear reactions of many different sorts happen quite easily" view is what happens to the excess energy. It goes like this:


    • Nuclear energy scales are much higher than chemical ones
    • The chances of nuclear reactions exactly balancing (energetically) are low - and indeed the reaction proposed here is +3 MeV or so
    • The expected high energy particles are never observed
    • Coupling MeV energy scales 100% (or even 50%) to eV energy scales - allowing the excess energy to turn into heat - seems pretty well impossible (a rough scale comparison follows this list)
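
    As a very rough sketch of the scale gap behind those last two points - using the ~3 MeV quoted above, and assuming a typical chemical bond energy of a few eV (my illustrative number, not taken from any paper discussed here):

    # Rough scale comparison: nuclear Q-value (~3 MeV, as quoted for the
    # proposed reaction) versus a typical chemical bond energy (~3 eV, an
    # illustrative assumption).
    nuclear_q_ev = 3e6   # ~3 MeV expressed in eV
    chemical_ev = 3.0    # typical chemical bond energy in eV (assumed)

    print(f"nuclear/chemical energy ratio ~ {nuclear_q_ev / chemical_ev:.0e}")
    # ~1e+06: this is the gap any MeV-to-heat coupling mechanism must bridge
    # without producing detectable high energy particles.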


    Hagelstein noted this a long time ago and, I know, tried for quite a while to find solutions. That work or its equivalent, if it had experimental evidence and the theory panned out, is what this "lots of nuclear reactions happen" view needs for people to start entertaining it as a sane hypothesis.



    And remember - we need not just a "could possibly happen" coupling method. We need a reason why ONLY those nuclear reactions that couple near 100% in this way are allowed: otherwise we would be getting clear, unambiguous high energy product signatures.


    The disconnect for me here is that when you look holistically at the whole problem, people do not join these dots and instead suspend disbelief in this area (where are the high energy results?). Because if you had to characterise what is special about LENR you would say:

    LENR reactions do not produce high energy result particles, nor unstable reaction products.


    And the skeptics like me would note that this needs an explanation, and there is one obvious candidate:

    "The apparent LENR reactions are in fact not nuclear reactions."

    which ticks all the boxes in explaining this characteristic.


    So: to make this type of "everywhere in many ways" LENR believable I need a better answer to the question: "where are the high energy products / unstable products?". And I think most physicists who look seriously at the LENR collection of evidence would have the same question.

    Hora had some nice papers a while ago about how non-thermal laser-driven H-B fusion could maybe work. It is a neat idea, which looks possible but is very unproven. As always with these things, you don't know it will be feasible till you have a detailed proof of concept. The work so far is not enough to convince me of that.


    It’s been an exciting last 12 months of research, experiments and collaborations for HB11 Energy – a year of shared goals, mutual support and achieving great things together. May 2024 be our year for major breakthroughs!


    From their (new) website. I certainly hope they have a major breakthrough but it is a long shot. In any case they are doing experiments now, so we will have more data showing whether this idea can work or not.


    They got 22 million to try things https://www.businessnewsaustra…lear-fusion-industry.html


    Recent editorial, which is Hora making the case he has been making since 2017:

    Editorial: Non-Local Thermodynamic Equilibrium (NLTE) Hydrogen–Boron Fusion
    Recently, in a series of publications it has been suggested that for the HB 11 case we do not need to reach a local thermal equilibrium (LTE) at such a high…
    www.frontiersin.org

    Belief in the Big Bang is a scientific psychosis. The hypothesis of the Big Bang and the origin of our material World from one point in space-time and physical vacuum lulls the mind and does not help solve any of the pressing problems of modern physics, namely:

    1) the problem of the fundamental incompatibility of quantum mechanics and general relativity;


    There is a lot of evidence out there which cannot easily be explained without a big bang. You might almost say by now that disbelief in big bang theory is a psychosis. Not saying you can't construct alternative theories - but they are pretty contrived.


    The compatibility of QM and GR is very strongly hinted at by holographic principles shown in string theory. It does not yet work properly - and in fact one reason for hoping for some non-big-bang early universe theory is that it would remove the current theoretical obstacle to using string theory.


    But blaming experimental evidence because it is not convenient for theory is a mug's game. Why not instead work on a better theory?

    In quantum field theory, false vacuum decay is now an experimentally verified event where a vacuum that is relatively stable decays into a alternative state.

    Reference for this experimentally verified false vacuum decay? The one you gave before verified that an expected phenomenon, analogous to false vacuum decay, can be experimentally observed.


    That is not contentious - everyone expects such phenomena to exist - they come from the maths when you have a metastable quantum state.


    Then there is false vacuum decay of the universe itself:


    https://arxiv.org/pdf/2109.04496.pdf

    False vacuum decay in quantum mechanical first order phase transitions is a phenomenon with
    wide implications in cosmology, and presents interesting theoretical challenges. In the standard
    approach, it is assumed that false vacuum decay proceeds through the formation of bubbles that
    nucleate at random positions in spacetime and subsequently expand. In this paper we investigate
    the presence of correlations between bubble nucleation sites using a recently proposed semi-classical
    stochastic description of vacuum decay. This procedure samples vacuum fluctuations, which are
    then evolved using classical lattice simulations. We compute the two-point function for bubble
    nucleation sites from an ensemble of simulations, demonstrating that nucleation sites cluster in
    a way that is qualitatively similar to peaks in random Gaussian fields. We qualitatively assess
    the phenomenological implications of bubble clustering in early Universe phase transitions, which
    include features in the power spectrum of stochastic gravitational waves and an enhancement or
    suppression of the probability of observing bubble collisions in the eternal inflation scenario.


    I think you are confusing false vacuum decay as a name for a mathematical phenomenon that can occur in many QM systems with metastable states (experimentally confirmed by your reference) with an observation of false vacuum decay of the universe itself (for real). The latter does not yet exist. It is entirely possible that it did happen - many models predict it - but we know that, if it happens, the energy scales required are extraordinary in the current universe. We will get more info about the very early universe from gravitational wave observations, which are getting better and better. Even so, observations of false vacuum decay would be indirect: and based on a whole load of other hypotheses about the evolution of the early universe at very high energy levels.

    It might be an advance on the state of the art. It says "strong AHEs have been measured." How strong? How reliably? How steady is the heat? If they can produce steady heat on demand that would advance the state of the art.



    (Note. You are quoting from: https://cordis.europa.eu/project/id/951974/reporting)

    It might be.


    My comment: the text was too ambiguous to know - and, more tellingly, they list it in the results-so-far summary, not the advances-on-state-of-the-art summary. So presumably they do not view it as that.

    3rd periodic report from CleanHME:


    https://cordis.europa.eu/project/id/951974/reporting

    What they say they have done - but not an advance on the state of the art, since such indications have been observed for very many years:


    Indication of nuclear events, typically weak neutron emissions and strong anomalous exothermic reactions have been detected during experiments based on Ni/C, Ni/Cu, Ni/Al, and other catalyzing elements both under hydrogen or deuterium atmosphere. Several potentially active materials have been designed and are being tested in different laboratories of our consortium. During a significant number of successful experiments, strong AHEs have been measured, thus demonstrating the effectiveness of the applied reaction activating procedures. Detected AHEs produced commercially promising COPs even if the most powerful exothermic reactions still last for relatively short periods. The power density achieved, however, is extremely promising.


    They hope - no new results reported:


    It is very likely that at the end of the project we will be able to have a demonstration unit capable of producing large amounts of energy with a high Coefficient Of Performance.

    Such a working demonstration unit would open new perspectives for energy production in Europe and worldwide. A new source of green energy at low cost, easily available everywhere would allow for a new industrial applications and the actual development of extremely efficient smart grids. Additionally, the total absence of climate affecting emissions from the HME generators could give a real, effective contribution to the containment of ongoing climate changes.

    The fallback (what we all know they will have data on):
    We could also demonstrate very large screening energies determined in proton and deuteron induced nuclear reactions observed in the accelerator experiments on some metallic alloys. The results will allow to understand the enhancement mechanisms of nuclear reactions at extremely low energies and propose special materials for gas-loading experiments. The corresponding theory of the deuteron-deuteron nuclear reactions predicting the nuclear reaction rates at thermal energies has been developed.



    So now Rossi has just delayed everything YET ANOTHER FULL YEAR.

    He now says to expect an EV demo and a Solar demo by the end of the year.

    The thing that fascinates me about Rossi is the way that he wants his fakes to work. I mean - even when they seem surprising they are still, when working, fakes. But Rossi will delay as needed.


    Perhaps I am being too generous in this case - the delay could be because he needs sign-off from 3rd parties who provide EVs, or might invest, etc and they are being a little bit difficult forcing him to tune his fakery?


    Anyway - technically - he has stopped doing fakes that are interesting and creative. So I am not that engaged any more. But there is still some interest in what exact benchmark he needs to release the next fake.

    Agree with RB


    The actual article is about NUCLEATION.. not nuclear transmutation

    Axil salad is confused and confusing.


    The relevant quote from Axil's article - which he seems not to have read:


    Ian Moss, Professor of Theoretical Cosmology at Newcastle University's School of Mathematics, Statistics and Physics, said, "Vacuum decay is thought to play a central role in the creation of space, time and matter in the Big Bang, but until now there has been no experimental test. In particle physics, vacuum decay of the Higgs boson would alter the laws of physics, producing what has been described as the 'ultimate ecological catastrophe.'"

    Dr. Tom Billam, Senior Lecturer in Applied Math/Quantum, added, "Using the power of ultracold atom experiments to simulate analogs of quantum physics in other systems—in this case the early universe itself—is a very exciting area of research at the moment."


    Vacuum decay as hypothesised in the early universe happens at energy scales which are quite extraordinary:

    https://arxiv.org/pdf/2301.03620.pdf

    10^10 GeV should be compared with the LHC's 13600 GeV collision energy. It is roughly a million times higher (10^10 / 1.36x10^4 ≈ 7x10^5) than the LHC gives us - and that is the highest energy collider on earth.


    Freak cosmic rays do have this energy - they are very rare. But to obtain false vacuum decay you need somehow to turn such a cosmic ray into 1000 superimposed Higgs particles. Good luck with that.


    Anyway - a bit off topic for LENR?

    :)

    Dr Campbell concerned about a new Brain Virus.


    (Embedded video: youtu.be)

    As he should be - I remember he got infected early in the COVID epidemic and has been struggling since then with Long Brain Virusitis

    Not behind schedule if you move the schedule…

    Since they have no research that indicates such a prototype is possible - they have nothing until the completion of an initial working prototype.

    They (correctly - if optimistically) talk about the research results showing the potential for high energy density. Great word that.

    No-one has answered yet? I am a bit surprised. It is A-level (maybe even GCSE) Chemistry.


    The vapor pressure change depends on the reaction. Thus we are told


    H2 + (solids) -> n(H2O) + (solids)


    Assuming the solid part does not lose or gain H, we must have n = 1 =>

    we have swapped 1 mol H2 for 1 mol H2O =>

    (from Avogadro's law) no change in pressure if temperature stays the same.


    If n > 1, pressure increases. If n < 1, pressure decreases.


    It is however quite possible that the solid part does gain or lose H (=> n is respectively less or more than 1) - even with overall reduction - so things can be more complex than the simple answer.
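
    Here is a minimal sketch of that bookkeeping, assuming ideal-gas behaviour at constant temperature and volume (so pressure is proportional to moles of gas):

    # Swapping 1 mol H2 for n mol H2O(g) at constant T and V scales the
    # pressure by n (Avogadro / ideal gas law). Values are illustrative.
    def pressure_ratio(n: float) -> float:
        """P_after / P_before when 1 mol H2 becomes n mol H2O vapour."""
        return n / 1.0

    for n in (0.5, 1.0, 1.5):
        print(f"n = {n}: P_after/P_before = {pressure_ratio(n):.2f}")
    # n = 1 -> no change; n < 1 -> pressure falls; n > 1 -> pressure rises.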


    PS - Chemistry was my least preferred A Level out of Maths, Physics and Chemistry, so I apologise if I've made a mistake. Someone else might know what the most likely reduction products are.

    Very nice paper from a time when printers didn't exist yet :)

    In the same way, I like re-reading some early investigations that may not necessarily be irrelevant, such as these thoughts of Vigier's.

    The history of cold fusion is a remarkable story at the confluence of science and sociology. This paper - written from an arts perspective - shows how tricky it is to reach scientific conclusions from contested data.


    Yet - all major science advance comes eventually from a better theoretical understanding that turns messy anomalous contested data into stuff that is understood.


    BTW - just to burnish my skeptical credentials - note that the converse is not true. Messy anomalous contested data may be just that, a sociological artifact or even just a sign of looking for things that capture the imagination but cannot be disproved or proved. Many examples of that - e.g. evidence for bigfoot.


    I am highlighting the paragraph below in the conclusions. I agree that the early CF debate was framed as physicists vs chemists. I agree also that initially physicists were predisposed against seeing CF as nuclear, whereas chemists were more open. I disagree that the continued negativity of most physicists was unreasonable.


    Initially - F&P claimed conventional nuclear fusion (with expected energy and particle byproducts) from an electrochemical reaction. Physicists reacted with extreme skepticism because that seemed unlikely - the Coulomb barrier (CB) argument.


    That morphed over time as expected particle (and reactant) product measurements proved elusive - and belief in any such measurements was not helped by initial false positive errors in measuring particles!


    Fusion without the expected products from known reactions seemed to physicists even less likely. The no-expected-product argument (NEP).


    Now, both the CB and NEP arguments can be got round. Branching ratios for nuclear reactions could be affected greatly by novel reaction mechanisms, so the fact that we have unexpected results in both areas is not as surprising as it might otherwise seem. But both arguments require something new and surprising to overcome the skepticism. Of the two, the CB argument is the easier to get round.


    For me, that remains the case now. I see this playing out in two ways:


    (1) LENR positives stay elusive. The sign of that would be that decades-old data remains the "best quality evidence". Funding now, post google-guys-effort (for which incidentally we should be thanking Rossi), is large enough to generate new, better results and advance the field.


    (2) The hypotheses needed for arguments that counter CB and NEP get filled in. They have been formulated and new, more informative experiments are being done. The sign of these experiments is that what you get out is more self-validating than "heat/no heat" or "neutrons/no neutrons". It could be LENR-adjacent work: for example looking at how reaction rates, as evidenced by products, vary with input energy or something else. Or looking theoretically or via simulation at how reaction branching ratios change. It could be other stuff where new physics unrelated to LENR leads to new nuclear reaction possibilities. Or even something LENR-contradictory where new physics leads to non-nuclear-reaction high energy and power density power production (e.g. hydrinos, weird Rydberg states, weird alternatives to QM). I have never seen any LENR-contradictory stuff yet that looks remotely close to explaining the corpus of LENR evidence - and in fact it all looks like woo-woo at the moment.


    So - we get papers like #245, #246, #247 above (Ahlfors) which do not mention LENR but are LENR-adjacent and could lead to undeniable LENR evidence.


    The conclusion of the arts degree dissertation below misses an important scientific point and oversimplifies what scientific progress is. It also misses the point about LENR.


    Both quantum physics and relativity emerged from a maelstrom of previous novel theories [1,2] trying to explain anomalous data. The anomalies were undisputed, out there all over the place, and clear. How to explain them required radical new ideas and was not clear. But both special relativity and quantum mechanics were theories that had many less successful, building-block theoretical precursors. They (SR and QM) were accepted in the end because they explained so many disparate observations with better economy than alternatives and made new predictions that turned out right.


    [1] https://en.wikipedia.org/wiki/History_of_quantum_mechanics

    [2] https://en.wikipedia.org/wiki/Relativity_priority_dispute


    In the case of LENR things are a bit different. We do not have anomalies cropping up all over physics from assorted non-LENR research. Nor do we have significant LENR quantitative predictions validated by LENR experiments (the few cases here are contentious). Nor do we have (yet) a comprehensive theory that emerges from prior, less successful theories addressing the anomalies.


    The point is that the groundbreaking "major" physics discoveries have built on a lot of previous work, both experimental (clear anomalies found by assorted people without any theoretical axe to grind) and theoretical (those anomalies attract interest and theoreticians try to find solutions - even though before the final synthesis these are partial).


    If that analogy is to apply to LENR, then it is work like the papers linked by Ahlfors that is needed first - before any successful "LENR discovery". And the discovery would be some major shift in our understanding of (most probably) nuclear reaction rates or (less probably, given the lack of coherent anomalies supporting it) something woo-woo.


    At the moment the LENR anomalies are comparable to the situation before QM or relativity - when things were not well understood. But then the anomalies stood as real without the need for a hypothesised explanatory theory. They were things that needed explaining. Too much of the LENR corpus is backwards-looking: LENR is hypothesised, and indirect evidence that seems to support it is highlighted.


    It is - as a matter of science - not surprising that this type of evidence is less convincing than the anomalies that led, after many years of incremental theory generation, to the success of relativity and quantum mechanics.


    Happy Christmas & New Year everyone.


    THH



    Did you consider in your argumentation, that obviously an R type TC is used to control heater temperature?

    The characterisation given is based on constant Pin, not constant temperature. I expect they did have a thermocouple. But they have not used it to characterise the system, so we can gain no additional information from it.

    https://arxiv.org/ftp/arxiv/papers/2311/2311.18347.pdf


    I have previously noted (and no-one here has disagreed AFAIK) that the scientific papers from Iwamura shown as supporting Clean Planet's claims did not merit the excitement (from some) here, nor in any way support their commercial claims to be working on an excess-heat-generating reactor. Of course, they can be working on it - without it working. But the press releases made it seem that a working reactor possibly existed.


    This new paper is alas more of the same: it makes claims that are distinctly underwhelming in the context of CP's commercial ambitions (a calculated excess heat of 5W, roughly 15% of the 30W input power) from a system where that calculation is very indirect and uncertain. It interested me because of the difficulty I had initially in understanding it. So here is my attempt at doing that.

    The calorimetric system used in this paper consists of an evacuated chamber containing a sample (with an electric heater embedded) which is affixed via a holder (which would contain the wires to power the heater). The sample is heated to around 900K. It is very clear that the whole system is designed to make radiant power transfer predominant. That is absolutely fine, but it requires some care with the calorimetry.


    1. There are various unknowns which must be inferred - the emissivity and temperature of the sample and the holder.

    2. In this system the temperature of the sample and holder is not well controlled, nor even directly measured. Instead the input power is controlled - this determines (via various unknown parameters) the temperature - from which the expected radiant power out can be compared with the measured radiant power out.

    3. From the numbers here, the sample power emission is roughly 10X smaller than the holder power emission. Thus differences in holder emission between control and active runs are 10X more significant than equivalent errors in sample emission.

    4. You might think that the fact that eqns (1) neglect the power transfer from the environment back to the holder and sample is problematic. It is lax that it is not considered in the equations, but to 1st order it is OK. You would expect the reverse power to be scaled by emissivity in the same way as the forward power, and to be roughly (270/900)^4 ≈ 0.8% of the power transfer in the other direction. Therefore this does not seem to be a problem, and indeed I don't think it is: it will in any case be absorbed into the inferred conductive power constant C.

    5. You might wonder how the linearisation works, given the T^4 term in radiant power! It works because the nonlinearity cancels: the sample and holder temperature varies nonlinearly with Pin, and the radiant power out varies nonlinearly with temperature in the opposite way (a toy illustration follows this list). The paper claims that, for the given conditions, the linearisation is good. I can accept that - although it would need checking, especially because of 3., which means that the holder, with temperature varying in a complex way over its surface, contributes a term 10X the sample power out, so small errors in this get magnified.
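
    Here is the toy illustration promised in point 5 - a minimal sketch, with all parameter values illustrative assumptions rather than numbers from the paper, showing that the radiated power tracks Pin nearly linearly even though it goes as T^4:

    # Toy steady-state model: Pin = EPS_A*SIGMA*T^4 + C*(T - T_ENV).
    # Solve for T at each Pin, then check how the radiated term varies.
    SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W/(m^2 K^4)
    EPS_A = 1e-3       # emissivity * area, m^2 (illustrative)
    C = 0.005          # conductive loss coefficient, W/K (illustrative)
    T_ENV = 270.0      # environment temperature, K

    def steady_temperature(p_in: float) -> float:
        """Find T such that total power out equals p_in, by bisection."""
        lo, hi = T_ENV, 3000.0
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            p_out = EPS_A * SIGMA * mid**4 + C * (mid - T_ENV)
            if p_out < p_in:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    for p_in in (10.0, 20.0, 30.0):
        t = steady_temperature(p_in)
        q_rad = EPS_A * SIGMA * t**4
        print(f"Pin = {p_in:4.1f} W -> T = {t:6.1f} K, Qrad = {q_rad:5.2f} W")
    # Qrad rises almost in direct proportion to Pin: the T^4 nonlinearity
    # cancels against T's nonlinear dependence on Pin.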


    1. - 5. make this system quite difficult to analyse, and the very short description in the paper puts a high burden on the reader. However 1. - 5. do not themselves reveal any obvious error, although they do cast some uncertainty on any conclusions, which further, more careful analysis would need to check.


    The main take home from them is to be careful with errors in QH (the holder emitted power). This is 10X the sample emitted power.


    Stated in the paper (top of the 2nd screenshot) is that excess heat generation during desorption of H2 generally changes emissivity by 10% (a factor of 0.9) and that this is incorporated into equation (2). Obviously, from the linearised (2), this difference in emissivity between control and active experiments is important. We can quantify it. The 10% variation in sample radiant power (Qs ~ 3W, Fig 3) due to emissivity (the beta factor) is ~300mW - insignificant compared with the apparent excess heat of 5W (Fig 5) from the Ni(5)Cu(1) sample.


    One thing missing here is a control for the holder characteristics perhaps being different in the H2 and no-H2 systems. After all, holder radiant output is 10X sample radiant output, so any change here would have 10X the effect on the numbers.


    For these results a 15% change in overall holder emissivity could generate the data.
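
    To put rough numbers on that, a minimal sketch using the approximate magnitudes quoted above (Qs ~ 3W from Fig 3, holder emission ~10X the sample's, apparent excess ~5W) - order-of-magnitude only:

    # Sensitivity of the apparent excess to emissivity shifts in the
    # linearised power balance, using rough magnitudes from this post.
    Qs = 3.0        # W, approximate sample radiant power (Fig 3)
    QH = 10 * Qs    # W, holder radiant power, ~10X the sample's

    dQ_sample = 0.10 * Qs   # 10% sample emissivity shift -> ~0.3 W
    dQ_holder = 0.15 * QH   # 15% holder emissivity shift -> ~4.5 W

    print(f"sample shift ~ {dQ_sample:.1f} W, holder shift ~ {dQ_holder:.1f} W")
    # ~0.3 W is insignificant against the ~5 W apparent excess, but a 15%
    # holder emissivity change is comparable to it - hence the concern.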


    However - I'm not sure that is the right solution. Such an error would likely be proportional to QH (and therefore to Pin, since QH ~ Pin). The shown error is roughly constant with Pin, and therefore we should look for some error related to the sample temperature, not the radiated power. Which leaves a complex, poorly characterised system delivering results which are not immediately obvious.


    One interesting fact - not understood by me, not explained by Kasagi et al. The differences between the Ni and NiCu control samples show a fixed QS change of approx 300mW - not a change varying with QS. I'd like to understand what that is. Different emissivity should make a difference proportional to QS. Perhaps if this question were resolved, we would be closer to understanding the apparent excess in the active system.


    One variable too complex for me to analyse without simulations and more data than is in this paper is the possibility of thermal resistance inside the sample/heater leading to non-uniform temperatures across the sample surface. Higher thermal resistance might make the outer areas of the surface lower in temperature and therefore reduce the emitted power. Could that effect account for these constant errors? I've no idea.


    The paper claims that radiant power calorimetry has some advantages over calorimetry based on temperature measurements. That is true - if you look at Iwamura's previous experiments - but you can see from the uncertainties and assumptions here why it is so seldom used! The data here, given the uncertainties, would be more useful if we had temperature measurements as well, plus more information about the variables - e.g. the exact pressure in the various control and active systems, thermography across the sample, or different embedded thermocouples to determine temperature change over the surface, etc.


    THH

    Okay, Ed Storms translated the y-axis label from Martinese to English as follows:


    log 100*(excess power/applied power)


    In other words, output was 3 times input soon after the first burst began. If this is the event shown in the last row of Table 1, input was 7 W and output around 21 W. That sounds right.

    Saw this after I replied.


    The only problem is that it would go to -infinity for excess power <= 0, which you might well expect at some times - e.g. the start - and nothing like that is shown.

    The y-axis is the log of 100 times "enthalpy generation / joule enthalpy input." That does not seem to correlate with anything in Table 1. And why is it "Joule" enthalpy input and not "Watt"? At peak, it is 100 * 3 = 300 times . . . what? And what does "enthalpy generation" mean, anyway? Is that excess heat? (~10 W just after 1.6 million seconds.) Or is it input power plus excess heat? (~17 W just after 1.6 million seconds, I think.) Why is it multiplied by 100?? I am confused!

    It is pretty confusing.


    (1) It must be the ratio of excess heat / heat in (or just possibly total heat out / heat in) averaged over some small period - because if it were cumulative heat out / heat in you could not get a graph of that shape, with sharp changes. So basically it is excess power out / power in, or possibly total power out / power in.


    (2) The 100X is weird but, as above, makes sense if it is a percentage.


    (3) The log is very weird - but the values in the graph do make sense of this.


    Thus log10 (%age excess pout / pin) would be:


    pout = pin => 0% excess => -infinity (we do not see this)


    log10 (%age total power out / power in):


    pout = pin => 100% => 2 (after the log). Most of the graph is a bit below that.

    pout = 10*pin => 1000% => 3 (after the log). Seems reasonable for a large heat burst as is claimed.


    Using this interpretation, the system is mostly endothermic, absorbing heat, especially at the start (which makes sense). However there is a large power generation burst of up to (instantaneously) about 10X the input.


    I realise a few of the things about this interpretation are guessed. But the alternatives are, I think, all eliminated as not being compatible with the given graph.


    So: y = log10(100*Pout/Pin)

    Pout - instantaneous power out (total = Pin + Pexcess)

    Pin - instantaneous power in (from the heater, hence "joule enthalpy").
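
    A minimal sketch of this decoding (my reading of the axis, not anything stated in the paper):

    # Decode y = log10(100 * Pout / Pin) back to a power ratio.
    def power_ratio(y: float) -> float:
        """Recover Pout/Pin from a y-axis reading."""
        return 10 ** y / 100

    for y in (1.5, 2.0, 3.0):
        print(f"y = {y}: Pout/Pin = {power_ratio(y):.2f}")
    # y = 2 -> Pout/Pin = 1 (break-even); most of the graph sits a bit below
    # that (endothermic); y = 3 -> 10X input, the claimed heat burst.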


    The other possible fit would be to suppose that both the 100X and the log annotation on the Y axis are mistakes. In which case this could reasonably just be Pout/Pin - with the reaction clearly exothermic over some time, but the heat burst much less significant.