Kirk Shanahan's critique of LENR experiments

  • So, the argument here is that because CCS/ATER explains only some of the LENR corpus of results, it is obviously wrong.

    Right. That plus the fact that his theory violates many fundamental laws, and nothing remotely like what he describes has been observed in the last 180 years of electrochemistry.


    Or, as the saying goes, when you hear hoofs, think horses, not unicorns.

    But that corpus is problematic: were it indisputable and replicable, mainstream science would have a very different view of LENR.

    The excess heat results, the tritium and helium are indisputable. That is to say, no one has ever disputed them. There are no published papers showing errors in the measurements of any mainstream paper, except Shanahan and Morrison, who are both crackpots in my opinion. You can read their papers and judge that for yourself. Morrison is here:


    http://lenr-canr.org/acrobat/Fleischmanreplytothe.pdf


    The mainstream science rejection of cold fusion is because of academic politics and funding disputes. It is led by plasma fusion researchers who will lose their livelihood if cold fusion research is funded. It has nothing to do with scientific issues. If results similar to cold fusion were published in any other field, no mainstream scientist would dispute them, any more than they dispute thermodynamics or Faraday's laws, which together are proof that cold fusion is real.

    A more accurate approach would be to see which results CCS/ATER could apply to, look at the rest, and see whether they remain compellingly anomalous.

    They do. But, more to the point, I do not think we would be justified in throwing away the entire corpus of electrochemical heat balances and calorimetry going back to Faraday and Joule because of this one theory. I think it is extremely unlikely that Shanahan has discovered that generations of electrochemists were wrong. And if they were not wrong, how could the present generation be wrong, since, as Fleischmann pointed out, he is using the techniques developed by Faraday and Joule?


    Shanahan also fails to explain why this calorimetric error occurs only with Pd-D and not Pt-D or Pd-H; and why it only happens at high loading; how it correlates with helium, tritium and x-rays, and much else. In other words, he only "explains" one of the anomalies, ignoring the others, and his explanation violates textbook physics.


    But you don't need me to explain all of this. Marwan et al. blew Shanahan out of the water years ago:


    http://lenr-canr.org/acrobat/MarwanJanewlookat.pdf


    Shanahan continues in the Monty Python "Black Knight" mode, not realizing that his arms and legs are amputated. It is pathetic. As I said, this is classic crackpot science.

  • My my, quite the 'debate'. If only good debating tactics were being used by the LENR crowd. I note that THH has grasped the essentials of my many arguments; thanks to him for defending against the pathological believers.


    So, the argument here is that because CCS/ATER explains only some of the LENR corpus of results, it is obviously wrong.


    That is how Jed thinks. See his subsequent reply. He is no scientist. He doesn't understand confounded variables.



    In answer to a few points:


    So, I suggest that he builds a no-excess-heat electrolysis/calorimetric system and reports the results.


    As THH noted, no-excess-heat experiments prove nothing. The easiest way to see this is to realize that the LENR supporter just has to say "They did something wrong" to invalidate any attempt to draw conclusions from it. But why should *I* be the one to build this? This is a strawman argument with no value. My purpose has been explained, and it is met by what I have done. I need do no more. On the other hand, if LENR researchers want to be taken seriously, they need to participate in the give-and-take of the cyclic scientific process of reporting results, getting critiqued, responding to the critiques (a type of report), and so forth.


    More to the point, LENR researchers have built many calorimeters which cannot have the problem described by Shanahan, yet they do show excess heat.


    I just posted in this very thread an examination of a Seebeck calorimeter report issued by Dash et al. He actually did give his calibration equation, and thus I could mathematically show a 1% CCS covered his reported excess heat results. Jed, THAT MEANS IT CAN HAPPEN THERE TOO! Of course, a scientist recognizes this, at least a non-pathological-believer scientist.


    Which proves that Shanahan is wrong. He sometimes admits this and sometimes claims that Seebeck, ice calorimeters and other types can have the problem after all.


    You want to run that by me again?? Your statement makes no sense.


    (What I claim is that ANY calibrated analytical device or method that undergoes a steady state shift will require a new calibration. I note that the same Dash et al paper illustrates this perfectly when they discuss the calibration problems of the commercial instrument they first tried using.)


    The Shanahan theory has to make them indistinguishable, since it is targeted to explain exactly the same data sets. If it actually applied to all data sets, that would make it impossible to falsify the theory.


    I have posted several ways to test my theory. The prime way would be for a CFer to show a run that cannot be explained by CCS/ATER. Since CCS/ATER is founded in real chemistry, it will have limitations. The magic pixie dust called "LENR" has no such limitation. Your statement is incorrect.


    Fortunately, as I said, it does not apply to many calorimeters that showed excess heat, so obviously it is wrong.


    Wrong.


    Also, it cannot explain why Pd-D works and Pd-H or Pt-D does not work;


    THH already answered this point. Make sure you understand his reply.


    it cannot explain the tritium;


    As THH said, CCS/ATER is for excess heat, and it has a couple of ramifications towards other things as well. Tritium, however, is generally not covered by it. There is minimal tritium data out there, as noted by Storms in his book, so there is less to work with in crafting a mundane explanation. The simplest mundane explanation comes from Fritz Will's second paper on tritium analyses, namely, contaminants. Since no one even bothers to specify the analytical protocol used (saying "LSC" is not a complete description), it is almost impossible for a critic to do more than point this out. I could formulate the CCS/ATER 'theory' because I had Storms' data, Szpak's IR video, and all the rest that I folded in.


    and it cannot explain that the helium is commensurate with the heat.


    There is no 'He commensurate with heat'. I replied to Abd to show that the data he cites to 'prove' this is actually too noisy to draw any conclusions from. Other He data from, say, McKubre, that shows He increasing while 'LENR' is supposedly active is not valid because we can't be sure it isn't just a leak. Prove it isn't (not you specifically, Jed, but one or more of your 'heroes') and maybe we can talk more.


    All in all, it cannot explain anything. It is a classic crackpot theory, as I said. With a crackpot theory, the fact that you cannot falsify it or even test it is considered a feature, not a bug.


    It actually explains a lot, but you simply refuse to hear what it says. That's your problem, not the theory's. Non-falsifiable crackpot theory == LENR.


    when you hear hoofs, think horses, not unicorns


    Unicorns - no proof of existence. LENR - no proof of existence.


    The excess heat results, the tritium and helium are indisputable


    Nope. Very disputable, very.


    There are no published papers showing errors in the measurements of any mainstream paper, except Shanahan and Morrison


    Doesn't the fact that you name two published critics invalidate your initial statement? Yes, I believe so. Therefore you should be saying: "There are published papers showing errors in the measurements." And you can follow up with: "While Morrison was reasonably well responded to, the Shanahan criticisms have been met with obfuscation and misdirection, and remain relevant today."



    I think it is extremely unlikely that Shanahan has discovered that generations of electrochemists were wrong.


    So do I. Did they fail to anticipate a CCS/ATER in the highly specialized case of water electrolysis, where the electrolysis gases are not kept separated, in cells with thermal loss pathways all concentrated in one spatial region? Yes. But why would they have? That is only of relevance to those who run such specialized cells.


    A repeat of some points...

    Shanahan also fails to explain why this calorimetric error occurs only with Pd-D and not Pt-D or Pd-H; and why it only happens at high loading; how it correlates with helium, tritium and x-rays, and much else. In other words, he only "explains" one of the anomalies, ignoring the others, and his explanation violates textbook physics.


    Doesn't correlate with He. Not enough tritium or x-ray data to know if any correlation really exists. Also, doubtful if tritium and x-ray signals are even real.


    As I have said many times, and as THH has grasped, CCS/ATER is primarily for excess heat. It has implications in the CR-39 and He results, possibly in tritium results too. However, the other big go-to explanation is contamination and subsequent concentration of those contaminants. With those two general proposals I believe a massively large portion of CF data can be explained as mundane chemistry. Very little is left to tweak the imagination.


    Kirk has written very extensively about his detailed hypothesis in many places without, AFAIK, doing any experimental work to demonstrate that it is more than a theory. I find it of diminishing interest; despite the fact that he is polite and well-argued (for certain values of 'well argued'), it remains just a hypothesis. His central theme is that there is no LENR, but only Kirk-energy. Well, perhaps he could devise an experiment to prove it.


    You started out reasonable but then went off the deep end. What is 'Kirk-energy'? To be clear, I postulate that there is no energy present except that put in by the experimenters. And I have devised experiments to at least test the CCS thing. They were even posted in this forum. What is your complaint here?

  • Kirk has written very extensively about his detailed hypothesis in many places without, AFAIK, doing any experimental work to demonstrate that it is more than a theory. I find it of diminishing interest; despite the fact that he is polite and well-argued (for certain values of 'well argued'), it remains just a hypothesis. His central theme is that there is no LENR, but only Kirk-energy. Well, perhaps he could devise an experiment to prove it.


    Kirk's hypothesis applies to a significant class of old and (probably - the Austin Lubbock work) ongoing LENR experiments. It is thus of continuing interest, because its explanatory power can be tested against new as well as old results, and, if there were the will to do this, it could be checked in new experiments, leading either to a clearer negative (=> LENR more likely) or a clearer positive (=> LENR less likely) conclusion when arguing from excess heat.


    It becomes irrelevant if no one claims excess-heat electrolysis evidence as support for anything (like LENR) that they care about now. As long as this old data is considered relevant, CCS/ATER is also relevant. In fact it remains relevant even if there is other undeniable evidence for LENR, because LENR is still unclear, and therefore whether specific experiments give results due to LENR or some other phenomenon alters the total knowledge about how LENR works.


    That is why I find a dismissal of Kirk's ideas strange. Marwan et al. propose various scenarios in which CCS/ATER would not be relevant, but they don't show that these apply to much of the old or even current experimental data. We could go over their points in detail, see how they are answered by Kirk, and (always important) look at the things that the two sides of the argument do not consider, to work out what is correct. I remember doing a bit of that last time, but not reaching a clear conclusion as to which experiments CCS/ATER would likely apply to. That uncertainty makes it a prime candidate as a mundane explanation for those results, given that the only other explanation proposed is LENR, which is both extraordinary and not detailed.

  • You started out reasonable but then went off the deep end. What is 'Kirk-energy'? To be clear, I postulate that there is no energy present except that put in by the experimenters. And I have devised experiments to at least test the CCS thing. They were even posted in this forum. What is your complaint here?


    It's not a complaint, it's an observation. You can devise all the experiments you like but if you don't actually ever perform them they are merely virtualities, and not realities. Since (I repeat) you don't ever seem to have actually done the experiments you devised they - for me at least - lack the substance you seem to think they have. There goes our Kirk, the only one marching in step, etc etc.


    Apologies if 'Kirk-energy' hit a raw patch btw, it was mere laziness because I couldn't for a moment remember your own carefully devised acronym for it. ;)

  • It's not a complaint, it's an observation. You can devise all the experiments you like but if you don't actually ever perform them they are merely virtualities, and not realities. Since (I repeat) you don't ever seem to have actually done the experiments you devised they - for me at least - lack the substance you seem to think they have. There goes our Kirk, the only one marching in step, etc etc.


    Apologies if 'Kirk-energy' hit a raw patch btw, it was mere laziness because I couldn't for a moment remember your own carefully devised acronym for it.


    OK. I thought you were implying that I was suggesting some other kind of energy was present, giving the calorimetric signals. To be clear again, there isn't; it is all a problem of noise.


    The issue is that the CF community refuses to compute their noise properly. They look at baseline fluctuation and don't do Propagation of Error calcs. If they did, they would understand that small variations in calibration constants give *apparently* large excess heat signals, many times the baseline fluctuation. On top of that, there is clear evidence of a systematic nature to it. This is even after these facts have been pointed out many times. Their refusal to deal with those issues is a clear sign of a pathological fixation on finding LENRs.
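    To make the systematic part concrete, here is a toy sketch of the calibration-constant-shift idea. The numbers are made up purely for illustration and are not taken from any particular experiment:

    ```python
    # Toy illustration of a calibration constant shift (CCS), with made-up numbers.
    # The calorimeter is calibrated in one steady state; if the heat distribution in
    # the cell later shifts (e.g. recombination at the electrode), the effective
    # constant changes, and applying the old constant produces spurious "excess" power.

    P_in = 20.0        # W, electrical input power
    k_cal = 1.000      # calibration constant determined before the active run
    k_true = 0.990     # effective constant after an assumed 1% steady-state shift

    raw_signal = P_in / k_true            # what the sensors actually report
    P_out_apparent = k_cal * raw_signal   # what the old calibration turns it into

    print(P_out_apparent - P_in)          # ~0.2 W of apparent "excess heat"
    ```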


    And again, I fail to see why you all keep thinking I should be the one to do the experiments that prove/disprove this. *They* are the ones with the already existing data to do so. *They* are the ones with the equipment that could be modified in simple ways to test out the ideas. It would take me years at this point to reach that level. They could do it in days or hours if they had wanted to. And that was true in 2000. Upshot, there is unlikely to be a resolution to this near-term, if ever. And as I have said, I have met my needs.


    There's really not much left for me to say on this topic. From now on, I will probably withdraw from these discussions, since I think I have covered everything that can be squeezed out of the CF field. Lots of interesting chemistry, no nuclear stuff that I can see.

  • In prior messages I recall Eric commented on my ‘novel’ way of analyzing error. I wanted to comment on that because it’s a very important point, and it’s also not novel. I was taught this in junior-level PChem Lab, but didn’t really start using it heavily until I got out of school and into quality control support work as a PhD. If you haven’t had this before, I’m not surprised, because I work with some PhDs who were never taught this, and some who didn’t get it until grad school, so teaching of this is spotty at best.


    The big difference between modern science and the ancient Greek way of doing science is that today we test out theories. ‘Back in the day’ the idea was truth=beauty and beauty=truth, so if your theory was ‘beautiful’ it must be true. But as soon as we started testing this against reality with some degree of precision, we found out that idea didn’t work all that well, although there are some today who still insist the ‘real’ laws of nature will turn out to be beautiful once we figure them out fully.


    In any case, once we started testing things, we quickly discovered a very important fact, that you didn’t always get the same answer when you supposedly did the same test or experiment. And from that we recognized that natural variation or fluctuation exists. It is somewhat unfortunate that this became known as ‘error’, because that term implies someone messed up. But that’s not true, the correct use of the term when referring to natural variation carries no negative connotations with it. It is just a fact.


    So, when you have a fuzzy measure of a quantity, how do you tell if it has changed when you vary a control parameter in an experiment? Usually via statistics. Means (averages) comparison, standard deviation comparisons, regression analyses with correlation coefficients, etc. One part of that is figuring out how the error in measured quantities translates through a computation to the final computed ‘answer’. The method for this is known as ‘Propagation of Error’ or ‘Propagation of Uncertainty’ and it is a standard approach. See for example https://en.wikipedia.org/wiki/Propagation_of_uncertainty. (I also have an old book with it in: “Statistical Treatment of Experimental Data”, Hugh D. Young, McGraw-Hill, 1962, pp. 96-101. There should be many other books with it in as well.)


    Given an equation to compute an answer (say, output power) from measured variables, one takes the partial derivative of the equation for each measured variable, squares it, multiplies it by the variance for that variable (square of the standard deviation), and then adds up all the terms for all the variables. The square root of that is the standard deviation of the computed answer. Of course, when you calculate the partials, the other variables are treated as constants in the process, which means they end up in the final expression. Now comes the tricky part. You have to determine which set of measured values to plug into this expression. Do you want the error (uncertainty) near the center of the data region, or at the extremes, or somewhere else? I typically like to evaluate it at the point of maximum uncertainty, but other choices are equally valid.
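    For what it’s worth, that recipe can be written down in a few lines of code. The sketch below is my own illustration of the standard first-order propagation formula, not code from any of the papers under discussion; sympy is just a convenient way to take the partial derivatives:

    ```python
    # Sketch of the first-order propagation-of-error recipe described above.
    import sympy as sp

    def propagated_sigma(expr, sigmas, point):
        """Standard deviation of a computed quantity `expr`.

        expr   : sympy expression for the computed answer
        sigmas : {symbol: standard deviation of that measured variable}
        point  : {symbol: value at which to evaluate the partial derivatives}
        """
        # Sum of (partial derivative * standard deviation)^2 over all measured variables.
        variance = sum((sp.diff(expr, v) * s) ** 2 for v, s in sigmas.items())
        return sp.sqrt(variance).subs(point).evalf()
    ```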


    In relation to the CCS thing, the power-out equation for a mass flow calorimeter is helpful to look at. It is: Pout = k * Cp * flow * deltaT, where Cp is the heat capacity of the calorimeter fluid at constant pressure, flow is the fluid’s flowrate, k is the calibration constant, and deltaT is the temperature difference between the entry and exit points of the calorimeter fluid. All of the terms on the right-hand side of the equation are experimentally determined and must be included in the propagation of error (POE) calculation to compute our best estimate of the error in the output power. If you work it out, it turns out the cal constant term is quite significant.
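    Continuing the sketch above, here is what that looks like for the mass flow equation. The operating point and the one-sigma values are placeholders I picked to give roughly a 20 W output; they are not taken from any real calorimeter:

    ```python
    # Uses sympy (sp) and propagated_sigma() from the sketch above.
    # Placeholder operating point and one-sigma uncertainties, for illustration only.
    k, Cp, flow, dT = sp.symbols("k Cp flow deltaT", positive=True)
    Pout = k * Cp * flow * dT                           # mass flow calorimeter equation

    point  = {k: 1.00, Cp: 4.18, flow: 1.2, dT: 4.0}    # gives Pout of roughly 20 W
    sigmas = {k: 0.01, Cp: 0.01, flow: 0.005, dT: 0.02} # assumed standard deviations

    print(propagated_sigma(Pout, sigmas, point))  # ~0.24 W; the k term is ~2/3 of the variance
    ```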


    The problem with all the CF papers I’ve seen so far is that they all neglect this term. Typically they don’t do the POE calc; they just look at the baseline fluctuation of the calorimeter (usually ~50-80 mW) and call that the error. Realizing that the mass flow equation above is translatable to Pout = k*Pin via the calibration process, you can easily see that a 1% error in k (what I relate to the ‘CCS’) gives a 1% error in Pout. If the input power is 20 W or so, as it was in Ed Storms’ data, that’s an uncertainty of 200 mW. And since this is a standard deviation derived via random statistics assumptions, we usually multiply that by either 2 or 3 to get the ‘spread’, meaning we are talking about 400-600 mW just due to natural variation in the determination of the calibration constant. And a 1% uncertainty is extremely good for these kinds of measurements. If we talk about just ‘good’, and not ‘extremely good’, somewhere in the 2-5% range is typical, meaning in our example we are up to 1.2-3 W of uncertainty (vs. the usual claim of 0.05-0.08 W). That by itself covers the large majority of excess heat claims. If we have bigger claims, we have to examine those a little more to see if this CCS problem still applies.
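    The arithmetic in that paragraph is easy to check; the only inputs are the 20 W figure and the 1-5% relative uncertainty in k quoted above:

    ```python
    # Back-of-the-envelope check of the numbers above, for Pout ≈ k * Pin with Pin = 20 W.
    P_in = 20.0                              # W, input power as in the example above

    for rel_err_k in (0.01, 0.02, 0.05):     # 1%, 2%, 5% relative uncertainty in k
        s = rel_err_k * P_in                 # one standard deviation in Pout
        print(f"{rel_err_k:.0%}: 1-sigma = {s:.2f} W, "
              f"2-sigma = {2 * s:.2f} W, 3-sigma = {3 * s:.2f} W")

    # 1% -> 0.20 / 0.40 / 0.60 W  (the 400-600 mW spread)
    # 2% -> 0.40 / 0.80 / 1.20 W
    # 5% -> 1.00 / 2.00 / 3.00 W  (the 1.2-3 W range corresponds to 3 sigma at 2-5%)
    ```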


    Note that this is a calculation of a supposed random error. When I actually examined the impact of calibration constant shifts on Storms’ data I also detected a systematic effect.


    In any case, this is what should be done by all competent scientists. Unfortunately, as I noted above, it is not taught very uniformly, and many times you see ‘other’ methods being used (which are usually less reliable and accurate). So again, what I have done is not ‘novel’, it’s just more correct than what the CFers do.