Press Release: SRI Successfully Replicated Brillouin’s LENR Technology

  • @JedRothwell


    Re "thousand of tests prove LENR". There is some fascinating meta-argument about this, which Kirk's slant is one specific variant on. But I'll keep clear of that for now because it would be repetitive and I see no point!


    Re BE. For 5 years, since reading their write-ups, I have thought that RFI issues with those Q-Pulses would produce results identical to what they claim. Of course, there are subtle differences - for example, the differing thermal and electrical time constants - but I've not seen anyone at BE or SRI investigate the matter fully and show how they control for this or how the time constants resolve the matter. Have you? For me, it would be an obligatory section of any report claiming that these results are positive for something new.


    This is work that SRI should now be doing. I just hope they are.

  • SRI does not seem to be getting its due respect as the fine research institution it is. Yes, even the LENR branch of SRI, formerly headed up by Dr. McKubre.


    One of the 'issues' in the CF field today is the continual 'calls to authority' used to try to gain credibility for a given CF report. The institution involved matters NOT. What matters is the people doing the work and their choices as to how to do (and report) said work.


    Case in point, Dr. M. McKubre. In 1998, he put out a monster 400+ page EPRI report that covered CF research he and his supporting cast did in the time period of 1993-1994. Half the report was on 'Degree-of-Loading' experiments, which I believe is where he decided that Pd had to be loaded to >= 0.9 D/Pd to get CF (not required actually). The other half was on the results from a series of calorimeters he built and tested in that time frame. One of those calorimeters was the "M" one, which is where the "M4" run comes from. I believe Krivit reported on that run (I haven't read his report). I obtained that report and was amazed to find a CD in it with 200+ Mbytes of data files from the various runs. I was in heaven for a while. Then I found that he had only included experimental runs, no calibration data.


    The M4 run contained the strongest CF signal detected in the report (peak of 360 mW). The figure presenting it had a perfectly flat baseline with the usual 20-50 mW noise. The report mentioned he had used 'transfer functions' to create calibration curves, but there were only generalities, no specifics, so I couldn't evaluate the results any better than any other reader.


    So, I wrote McKubre twice, separated by a few months, to ask if he could tell me the exact equations used. This was in 1999, since the report came out late in '98 and I had spent a little time familiarizing myself with the report. Both times he replied that he didn't have the time to look it up for me. As an industrial chemist myself, I understood well the possible issues, but I was of course disappointed, and I decided to try a 'Hail Mary' move. There was an email discussion going on at the time between a large group of people, and I wrote an email describing the situation and asking if anyone knew what equations McKubre had used. I had very little hope of success, and I was right: I never got an answer from that direction.


    However, that email list included Dr. McKubre, and as it happened, within an hour of my sending it out, I received an unrelated email from Dr. McKubre discussing some issues (but NOT containing the information I sought). Subsequently, Dr. McKubre deemed the 'Hail Mary' email to be some sort of terrible offense. He responded to it, to all addressees, named me a 'grandstander', and cut off all communication with me.


    Shortly thereafter, Ed Storms posted his first set of data to the Internet, and "the rest is history".


    At some point that I don't recall right now, I 'cheated' on the M4 analysis and used the two runs of the M series of results that did not see a CF signal to construct a simple linear y=mx+b calibration equation and applied it to the M4 run. It showed a significantly interrupted baseline (i.e. baseline shifts). The amount of shift and the CF peak intensity were very dependent on the calibration constants employed. IOW, a 'typical' CF result. So it is very important to know how McKubre eliminated these issues with his transfer function approach. (In fact it's funny to note that when the MIT group did something similar by clipping the data display to cut out baseline shifts, Gene Mallove took massive offense, quit the MIT Press Office, and became the legendary CF promoter he was.)
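    The two-calibration-run 'cheat' described above can be sketched in a few lines. All numbers here are invented for illustration (they are not McKubre's data); the point is only how sensitive the inferred excess power is to the fitted constants.

```python
# Fit y = m*x + b through two calibration points, then apply the fit
# to an experimental run. Everything here is hypothetical.

def fit_two_point(x1, y1, x2, y2):
    """Exact line through two calibration points."""
    m = (y2 - y1) / (x2 - x1)
    return m, y1 - m * x1

def excess_power(p_in, p_out, m, b):
    """Apparent excess = measured output minus calibrated expectation."""
    return p_out - (m * p_in + b)

# Pretend calibration runs: (input W, output W) pairs
m, b = fit_two_point(5.0, 5.02, 10.0, 10.05)

print(round(excess_power(10.0, 10.30, m, b), 4))          # 0.25
print(round(excess_power(10.0, 10.30, m * 1.005, b), 4))  # 0.1997
```

    A 0.5% tweak to the fitted slope moves the apparent 'excess' by ~50 mW in this toy case; with signals of a few hundred milliwatts at stake, the choice of calibration constants clearly matters.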


    Again, the point is that it is the person, and what he or she writes, that is important, not where they happen to work.

  • Think of Garwin's report what you will, but I read some positive things in there. At the least, he could find nothing wrong with the set-up.


    What Garwin did was assume, along with everyone else, that a lumped parameter approach to the calorimetry was adequate. As I have shown in publications, it isn't.


    I will end with a quote from McKubre when he was questioned about his calorimetry: "We have been doing this many years, and we know what we are doing".


    Which translates to: "Trust me! I did it right!" - Nope, no thanks.


    {added note} - McK is one of the 10 authors who think what I wrote describes a 'random CCSH', so he obviously never bothered to even carefully read what I wrote...

  • Everybody should read Beaudette...


    I first communicated with him while he was proofing the second edition of his book. I had just finished reading the first edition and had some issues with it. He challenged me with three papers, suggesting that they would prove my whole "CCS thing" wrong. I subsequently sent him analyses of the papers showing their consistency with my proposals. I never received a reply to that.


    His book has all the 'standard' mistakes in it.

  • Re "thousand of tests prove LENR". There is some fascinating meta-argument about this, which Kirk's slant is one specific variant on. But I'll keep clear of that for now because it would be repetitive and I see no point!



    @THH: Your observation is correct. It could easily be the case that some 10,000 experiments prove LENR, if we include everything below 1 keV impact, including the Russian experiments since 1960 (the first was done by Sakharow!)


    As a man capable of clear thinking, you should be able to distinguish between a physical phenomenon (LENR, CMNS, etc.) and the successful implementation of an effect to produce a useful amount of energy.
    The word 'useful' says it all: we don't have that so far. LENR experiments are 100% reproducible (sonofusion, proton-beam fusion below 100 eV, etc.), but they produce only low-level energy, with a process that degrades over time.
    BR uses an old-fashioned approach, with severe drawbacks (self-destruction of the NAE).
    NANOR is the far better approach, which is re-breeding LENR-active material at much higher energies. Mills is the best solution so far, as the NAE works in a self-contained E-field, which is dynamically reproduced.


    Thus it's just a matter of time until the table turns: It's your choice on which side you place your seat...

  • On the other hand, when you say "I have an effect that cannot be explained" that is science.


    But that of course is not what they say at all. They all say "I have observed an effect that can only be explained by a LENR of the (fill in the blank) type." Now, that actually is 'science', at least the start of it. Any scientific publication is a call to the interested parties to examine the claim, both experimentally and theoretically. What is unscientific is when they use false logic to dismiss criticisms and trumpet the 'fact' that they have rebutted 'all' criticisms. To actually continue being 'scientific' they would need to deal honestly with criticisms.


    Furthermore, we know that cold fusion is a nuclear effect because it consumes no chemical fuel[1], and it produces tritium[2], and helium in the correct ratio[3] for D+D fusion.


    [1] a claim based on the flawed calorimetry they use
    [2] a claim based on analytical methods (that are susceptible to interferences) that they do not describe
    [3] a claim based on data that does not actually prove anything (see previous discussions in this forum)



    You need to find errors in the instruments or methods in every experiment that has produced significant excess heat.[4] You have to show that thousands of experimental runs in 200 laboratories were all wrong, for one reason or another.[5]


    [4] lumped parameter approach resulting in a CCS-induced artificial excess heat signal
    [5] all use lumped parameter approach to data analysis. Why is it that you fail to understand that the way you work up your data is as much a part of the method as the type of equipment you use?



    There is not the slightest chance you will find mistakes in all positive experiments.


    ummm...


    Every major type of calorimeter has been used including isoperibolic ones with sensors inside, others with sensors outside; flow calorimeters with water, oil and gas; Seebeck calorimeters of many different types and configuration, bomb calorimeters and ice calorimeters.


    All analyzed via a lumped parameter approach... (P.S. It impacts _all_ the methods...)


    I think that we should not focus on improving the scientific evidence that LENR is real, as Jed has shown that denial is not rational.


    Jed has shown nothing at all, except his inability to be objective and analytical.

    • Official Post

    Kirk,


    I don't see any problem with appealing to authority (ATA). It used to drive my old friend/nemesis JC :) crazy when I would do that, but hey, it works. Actually, it and "judging by the actions of others" are about the only tools available for lay people to weigh in on complicated issues. You have to admit this is some high-level stuff here. Even those in the science-based fields admit some difficulty following it, so to one degree or another most here are guilty of doing the same.


    In fact, being an avid reader of scientific biographies...presently reading "The Age of Brilliance"...I find it apparent that even the highest-level scientists of our times sometimes/often got stumped, and needed to go up one level of the food chain for help, or ATA. If that did not work, they appealed to the next level up, and so on. Or they debated the hot topic of the day by resorting to AT (a higher) A. So actually everyone does it. I bet even you, or the ivory tower types like JC/Garwin, do too.


    So here we have Dr. McKubre, with his excellent credentials as an electrochemist, a long career with absolutely no blemishes, heading up a department in one of the world's leading industrial R&D companies. While from your standpoint he was inaccessible, and even angry at you, from mine...after watching many of his videos and reading his many presentations and reports...he comes across as very open, amiable and accessible. I have never seen him hesitate to answer the most basic, or probing, of questions, including your paper, which he and the others took some of their valuable time to critique.


    He is, or was, a leader in the LENR field, a great speaker (how can you not like that New Zealand accent ;) ), and a public figure, having kept the light on during LENR's darkest days. So there is a sense of loyalty too for his effort.


    That said, for all I know you/THH/Norris/JC may be right, and LENR should be renamed "The study of odd lab effects". I do not rule that out, as it is strange how so much is correlated one day, and anti-correlated the next. In a nutshell, the results seem all over the place to me. I sometimes wonder: if you took 100 scientists, gave them 100 standard heater/calorimetry set-ups, equipped them with detectors for neutrons, He, 3He, then put them in separate labs...would they report later about the same distribution of AHE, He, tritium, gammas, the occasional melt down, radiation poisoning, maybe even have a "dead lab assistant" or two :), etc.?

    Nonetheless, for the time being, I am sticking with McKubre/Tanzella/Mills/Godes. :)

  • Nonetheless, for the time being, I am sticking with McKubre/Tanzella/Mills/Godes.


    What you are discussing w.r.t. McKubre is what are called 'human factors'. Yes, he is very smooth and polished on camera. Does that automatically make him right? No. It does make him a person to be careful with because he has strong interpersonal skills and thus could be a person who could manipulate another's feelings easily. If that person bases much of his/her life choices on 'feelings', that can be a problem because one's feelings are a poor indicator of how things really are.


    Science essentially requires the suppression of emotions and feelings when it comes to drawing conclusions. Of course, people have a very hard time doing that, and often fail at the task, and thus we get into the knock-down, drag-out fights that so entertain the lay community.


    So, when I tell you that a simple shift in calibration constants, well within normal error limits, can 'zero-out' a 780mW 'CF' signal, you should say "How so?" not "That doesn't feel right!". Of course, that's what Storms, Jed, McKubre, Hagelstein, et al, do (choose the latter response). But that is the 'human factors' approach, and is an antithesis to good science. Very human though...
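    As a concrete illustration of the "How so?" question (with invented numbers, not any particular experiment's data): at roughly 50 W of output, a calibration-constant shift of about 1.5% is enough to absorb a 780 mW apparent excess.

```python
# Hypothetical calibration-constant-shift (CCS) arithmetic.
# All numbers invented for illustration.

p_in = 50.0      # W, input power
k = 1.000        # nominal calibration constant (sensor reading -> power)
sensor = 50.78   # W-equivalent raw sensor reading

excess_before = k * sensor - p_in          # 0.78 W apparent 'CF' signal
k_shifted = p_in / sensor                  # the k that zeroes the signal
shift_pct = 100 * (1 - k_shifted)          # ~1.5% shift in the constant
excess_after = k_shifted * sensor - p_in   # essentially zero
```

    Whether a shift of that size is "well within normal error limits" is exactly the question the calibration data would have to answer.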

  • Thanks to David for linking to the SRI report. I have liked a couple of posts related to this because they caution us not to get too excited over this. I’ve now spent a little time looking the report over and I have a few pointed comments to reinforce that.


    (1) There is *NO* error discussion. Therefore we must default to the lowest level of this, the use of ‘significant figures’. The ‘b’ coefficients presented in Tables 2, 3, and 4 are only listed to 1 significant figure. Using standard sig fig thinking, this means that all (but 1) COPs listed in the tables are equivalent and have the value of 1. There is a 1.58 listed in Table 6 that would properly round to ‘2’, but I place no special significance on that number, it’s probably just ‘luck of the draw’ that it rounded up.


    Kirk,


    I have to disagree with your standard sig fig thinking.


    1) First intuitively: when you look at the b values and the experimental values in table 4, it turns out that the part of the equation that uses the b value only has a minimal effect on the calculated COP value. I'm talking less than 0.01. So who cares if b=0.03 or 0.035? It doesn't have any impact on the calculated COP. And the other values have 2 or 3 significant digits, so we're good.
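    That intuition is easy to check numerically. The sketch below assumes the COP has the form (DeltaQ_heater + Q_k) / Q_pulse with Q_k = m*dT + b, which is my reading of the tables; every number is invented for illustration.

```python
def cop(q_heater, m, dT, b, q_pulse):
    """COP with an assumed conductive-loss term Q_k = m*dT + b."""
    return (q_heater + (m * dT + b)) / q_pulse

# Changing b from 0.03 to 0.035 moves the COP by well under 0.01:
delta = cop(2.0, 0.10, 15.0, 0.035, 3.0) - cop(2.0, 0.10, 15.0, 0.03, 3.0)
print(round(delta, 4))  # 0.0017
```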



    2) Secondly, I brushed up a bit on math with significant figures:


    a) When you multiply/divide, "the calculated result should have as many significant figures as the measured number with the least number of significant figures"


    OK so far so good.


    b) When adding:


    "For quantities created from measured quantities by addition and subtraction, the last significant decimal place (hundreds, tens, ones, tenths, and so forth) in the calculated result should be the same as the leftmost or largest decimal place of the last significant figure out of all the measured quantities in the terms of the sum."


    So if I understand correctly, for example 1.49+0.04= 1.5 => we have two significant digits and not 1 as you predicted


    If you look at examples in table 4, it looks to me that we will end up with 2 significant digits for the COP. So COP=1.2 or 1.3.


    Happy to be corrected if I'm wrong!
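    For anyone who wants to experiment with the two rules, here is a small sketch (the helper names are mine): the addition rule keeps the fewest decimal places of the operands, while the multiplication rule keeps the fewest significant figures.

```python
from math import floor, log10

def decimal_places(s):
    """Decimal places in a numeric string, e.g. '0.532' -> 3."""
    return len(s.split('.')[1]) if '.' in s else 0

def add_round(a, b):
    """Addition rule: round to the fewest decimal places of the operands."""
    return round(float(a) + float(b), min(decimal_places(a), decimal_places(b)))

def round_sig(x, n):
    """Round x to n significant figures (multiplication/division rule)."""
    return round(x, -int(floor(log10(abs(x)))) + (n - 1))

print(add_round('12.1', '0.532'))   # 12.6  (one decimal place survives)
print(round_sig(4.56 * 1.4, 2))     # 6.4   (two significant figures)
```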

  • But that of course is not what they say at all. They all say "I have observed an effect that can only be explained by a LENR of the (fill in the blank) type."


    No, they do not say that. If they did, you could ignore that part and look at the experimental result only. A scientist (or anyone else) can be right about some things, and wrong about others.

  • Rothwell:

    Quote

    Furthermore, we know that cold fusion is a nuclear effect because it consumes no chemical fuel,


    This is not a logical argument until you establish that the heat exceeds the input. That is, until you exclude the possibility that the apparent excess heat is the result of an artifact or mismeasurement, like the possibility you yourself suggested in the SRI measurement of the QPulse power.


    Moreover, the formation of metal hydrides is exothermic, and so you have to establish that the observed heat is in excess of that chemical heat.


    Quote

    and it produces tritium, and helium in the correct ratio for D+D fusion. Those are not theoretical claims, they are experimental observations.


    If that claim were widely accepted, there would be no arguments like this in obscure web forums.


    But it's not accepted, and probably not true. Maybe you're not familiar with the literature. There are no claims of commensurate tritium in the refereed literature, and the only claims of commensurate helium are even less persuasive than the claims of excess heat. The only claims of commensurate helium in the refereed literature are from Miles in the early 90s, and they are not credible and were challenged in the literature.


    On the other hand, efforts to observe commensurate helium after Miles that *were* published in refereed journals are all negative. These include papers from Aoki in 1998, Gozzi in 1998, and Clarke in 2002 and 2003. All looked for helium in cells that had allegedly produced excess heat, but the He-4 was either absent or not definitive, much less commensurate. Clarke suspected that Case and others who claimed helium in non-refereed reports were the victims of systematic error.


    One could add Arata's papers to this list; his papers were in Japanese journals, and claimed helium, but the levels (which are not easy to extract from his papers) appear to be orders of magnitude below commensurate levels.


    Even McKubre said in 1998 in his EPRI report that "it has not been possible to address directly the issue of heat-commensurable nuclear product generation". He later changed his tune about the same data, after "reconsideration".


    Finally, the best evidence for the claim of commensurate helium was presented to a panel of 18 experts enlisted by the DOE in 2004, and they were unconvinced. Nothing has been published in the past 12 years to change this assessment.

  • This is not a logical argument until you establish that the heat exceeds the input. That is, until you exclude the possibility that the apparent excess heat is the result of an artifact or mismeasurement . . .


    This has been ruled out. The effect has been replicated at high signal to noise ratios in hundreds of major laboratories, in thousands of tests. No skeptic in the history of cold fusion has ever found a meaningful error in any major experiment. (Claims by Shanahan and others are tin-foil-hat class delusions.)


    Such widespread replication means the effect has to be real. There is no other standard of being real in experimental science. There is no other way to rule out artifact or mismeasurement.


    I am sure you have not found any errors in any major experiment, so you are not in a position to dispute what I say here.


    , like the possibility you yourself suggested in the SRI measurement of the QPulse power.


    That was not my suggestion. I do not know about the present SRI results. I have not looked at them closely, and they have not been replicated yet, so no one can judge.

  • Rothwell:

    Quote

    This has been ruled out.


    Well, that's where the disagreement lies. We know you think it has been ruled out, but if the skeptics you're arguing with agreed, there wouldn't be an argument.


    The argument that the observations can be plausibly attributed to artifacts is not contradicted by the fact that chemical fuel is not consumed. None of the artifacts put forward to explain Rossi's or Mizuno's observations resulted in the consuming of chemical fuel. That's why I said it was not a logical argument against the possibility of artifacts.


    On the other hand skeptics' arguments that there is an absence of good evidence for commensurate nuclear products *does* support the claim of artifacts.


    Quote

    The effect has been replicated at high signal to noise ratios in hundreds of major laboratories, in thousands of tests.


    This at least qualifies as an attempt to argue against artifacts, but I'm not convinced. It's the quality of the evidence, not the quantity that is important. Otherwise, I'd have to accept alien visitations as real.


    Counting papers to establish legitimacy is characteristic of fringe science, but to my thinking, it supports the skeptical view. If there are so many alleged replications, but the results are erratic without any definitive scaling observations, and none of them are unequivocal, then that fits artifacts much more plausibly than it fits a claim of cold fusion.


    And it's not just the skeptics who claim the results are erratic and equivocal. You yourself wrote in 2001 "Why haven’t researchers learned to make the results stand out? After twelve years of painstaking replication attempts, most experiments produce a fraction of a watt of heat, when they work at all. Such low heat is difficult to measure. It leaves room for honest skeptical doubt that the effect is real." That's 15 years ago, but you still cite work from before then as the best in the field.


    The executive director at the Office of Naval Research, who had funded experiments by Miles and others, said (from a New Scientist article in 2003): "For close to two years, we tried to create one definitive experiment that produced a result in one lab that you could reproduce in another," Saalfeld says. "We never could. What China Lake did, NRL couldn't reproduce. What NRL did, San Diego couldn't reproduce. We took very great care to do everything right. We tried and tried, but it never worked."


    McKubre wrote in 2008 "… we do not yet have quantitative reproducibility in any case of which I am aware.", and " in essentially every instance, written instructions alone have been insufficient to allow us to reproduce the experiments of others." To most scientists, this means there is no reproducibility in the field. And that represents low quality evidence. And he emphasized it in 2016 when he said "there exists no consensus around an agreed set of facts."


    A few years ago Hagelstein wrote "aside from the existence of an excess heat effect, there is very little that our community agrees on."


    The very existence of the MFMP (which you said was exactly what was needed) is an admission that no unequivocal experiment has been established in LENR, because their first goal is to identify one.


    Quote

    No skeptic in the history of cold fusion has ever found a meaningful error in any major experiment. (Claims by Shanahan and others are tin-foil-hat class delusions.)


    And now we're back to identifying artifacts. First, dismissing Shanahan as delusional may play well to your choir, but your certainty won't convince skeptics. After all, you were certain about Rossi too.


    Secondly, notwithstanding Shanahan, Jones, Cerron-Zeballos, Dmitriyeva, Faccini et al., and others who have published indications of errors or artifacts in cold fusion experiments, and the many more who have published negative results, it's not the responsibility of skeptics to find errors or artifacts. It's the responsibility of the claimants to exclude them, or at least make them far less likely than what they are claiming as an explanation.


    And here's the parallel again: No advocate in the history of cold fusion has found a meaningful explanation for how a nuclear reaction can explain the results. And to be clear, I'm not dismissing a nuclear explanation for lack of a theory. I'm just saying you also can't dismiss artifacts for lack of a specific explanation.


    And it wouldn't take a full blown nuclear theory to exclude artifacts. Just some kind of consistent explanation: evidence for commensurate reaction products, quantitative scaling with amount of fuel, reproducibly self-sustaining operation, would all make artifacts far less likely. In short, the researchers should be able to make the results stand out.


    In the current situation, well represented by the above quotations, the observations are far more plausibly attributable to artifacts, errors, and confirmation bias than to unprecedented, largely radiation-free, nuclear reactions that are inconsistent with generalizations of a century of *experimental* results, and that somehow contrive to prevent discovery of their nature.

  • Rothwell:


    Quote

    The effect has been replicated at high signal to noise ratios in hundreds of major laboratories, in thousands of tests.


    By the way, I'm puzzled by your numbers for a few reasons.


    1) Why you think hundreds is a large number, when the far less significant phenomenon of high temperature superconductivity, discovered at about the same time, has been reproduced in many thousands of labs, and published in refereed literature more than a hundred thousand times. Cold fusion would surely dwarf these numbers if the evidence were solid.


    2) If hundreds of major labs produced high sigma replications in such an important field, then they would not abandon it. And any major lab investigating a subject would be expected to publish several refereed papers on it per year. Doing the math, that should have resulted in tens of thousands of refereed papers. And yet, I'm not aware of any new experimental claim of excess heat in an electrolysis experiment in the last decade, and not more than a few claims of excess heat in other types of experiments. What are those hundreds of major labs doing now?


    3) In 2009, you made a tally of replications in the literature, and came up with 153 papers, and you cast a pretty wide net to get to that number. And of course some groups are responsible for multiple papers in that list. That would say that fewer than 100 labs published replications that made your list. Now, whether it's a hundred or hundreds is not that important, but I wonder why you would inflate the figure like that if you were confident that the raw truth was sufficiently convincing.

    • Official Post

    Using the number of papers is the answer to people who claim there is no evidence in the face of evidence, who claim there is an artifact without describing one, and who claim a consensus based on non-experts...


    This is a heuristic, but what can you do when the critics have nothing else to say other than to propose refuted claims challenging the known science of calorimetry, or to claim an undefined artifact?
    Charles Beaudette has covered this well.


    Maybe we should not use the number of papers, but simply recall the fact that the only four papers challenging Fleischmann & Pons's calorimetry have been refuted in one way or another.
    No need to go further.
    Negative results are not evidence.
    Theory is not evidence.
    Only a confirmed artifact can refute a chain of such well-defined results, done by the highest experts and replicated massively by competent groups (chemists).


    Choose the paper you want to challenge (Texas A&M/Bockris, BARC, F&P, Lonchampt, McKubre) and propose something you can publish in a peer-reviewed journal.


    I would simply cite the key page of Beaudette's book, which could be the only page one needs to read.


    http://www.amazon.com/Excess-H…h-Prevailed/dp/0967854830

    Quote

    Unfortunately, physicists did not generally claim expertise in calorimetry, the measurement of calories of heat energy. Nor did they countenance clever chemists declaring hypotheses about nuclear physics. Their outspoken commentary largely ignored the heat measurements along with the offer of an hypothesis about unknown nuclear processes. They did not acquaint themselves with the laboratory procedures that produced anomalous heat data. These attitudes held firm throughout the first decade, causing a sustained controversy.


    The upshot of this conflict was that the scientific community failed to give anomalous heat the evaluation that was its due. Scientists of orthodox views, in the first six years of this episode, produced only four critical reviews of the two chemists’ calorimetry work. The first report came in 1989 (N. S. Lewis). It dismissed the Utah claim for anomalous power on grounds of faulty laboratory technique. A second review was produced in 1991 (W. N. Hansen) that strongly supported the claim. It was based on an independent analysis of cell data that was provided by the two chemists. An extensive review completed in 1992 (R. H. Wilson) was highly critical though not conclusive. But it did recognize the existence of anomalous power, which carried the implication that the Lewis dismissal was mistaken. A fourth review was produced in 1994 (D. R. O. Morrison) which was itself unsatisfactory. It was rebutted strongly to the point of dismissal and correctly in my view. No defense was offered against the rebuttal. During those first six years, the community of orthodox scientists produced no report of a flaw in the heat measurements that was subsequently sustained by other reports.


    The community of scientists at large never saw or knew about this minimalist critique of the claim. It was buried in the avalanche of skepticism that issued forth in the first three months. This skepticism was buttressed by the failure of the two chemists’ nuclear measurements, the lack of a theoretical understanding of how their claim could work, a mistaken concern with the number of failed experiments, a wholly unrealistic expectation of the time and resource the evaluation would need, and the substantial ad hominem attacks on them. However, their original claim of measurement of the anomalous power remained unscathed during all of this furor. A decade later, it was not generally realized that this claim remained essentially unevaluated by the scientific community. Confusion necessarily arose when the skeptics refused without argument to recognize the heat measurement and its corresponding hypothesis of a nuclear source. As a consequence, the story of the excess heat phenomenon has never been told.


    Publish something refuting all the F&P papers in the Journal of Electroanalytical Chemistry, and you will have a chance to be scientific.

  • "For quantities created from measured quantities by addition and subtraction, […]
    So if I understand correctly, for example 1.49+0.04= 1.5 => we have two significant digits and not 1 as you predicted


    I see that you are correct and I have made a ‘common mistake’ as per http://tournas.rice.edu/websit…gnificantFigureRules1.pdf.
    I was applying the multiplication/division rule to addition.


    It says:


    “For addition and subtraction, look at the decimal portion (i.e., to the right of the decimal point) of the numbers ONLY. Here is what to do: 1) Count the number of significant figures in the decimal portion of each number in the problem. (The digits to the left of the decimal place are not used to determine the number of decimal places in the final answer.) 2) Add or subtract in the normal fashion. 3) Round the answer to the LEAST number of places in the decimal portion of any number in the problem.


    WARNING: the rules for add/subtract are different from multiply/divide. A very common student error is to swap the two sets of rules. Another common error is to use just one rule for both types of operations. [<--my mistake...KLS]


    So for multi/div:


    The following rule applies for multiplication and division: The LEAST number of significant figures in any number of the problem determines the number of significant figures in the answer. This means you MUST know how to recognize significant figures in order to use this rule.”


I rarely use ‘sig figs’ to consider errors, as the fractional error or relative standard deviation can vary quite substantially in that context. For example, if we use two sig figs in a scientific-notation mantissa, that means +/- 0.1 units of possible ‘error’. If the number is 1.0, that is 100 * 0.1/1.0 = 10% error. But if the number is 9.9, that is now 100 * 0.1/9.9 = 1.01% error, and that spans the range of typical useful analytical techniques. I routinely consider a 10% error to indicate marginal usefulness, while a 1% error represents one of the best techniques available. So ‘sig figs’ is a poor way to deal with accuracy and precision, and the other part of my mistake was to even bring it up!
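The arithmetic in that paragraph can be checked directly (a trivial sketch using only the numbers given above):

```python
def rel_error_percent(mantissa, half_interval=0.1):
    """Relative error implied by +/- half_interval units on a
    two-sig-fig mantissa, as in the example above."""
    return 100 * half_interval / mantissa

print(rel_error_percent(1.0))   # 10.0  -> marginal technique
print(rel_error_percent(9.9))   # ~1.01 -> one of the best techniques
```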


What I should have expounded on is the observed variation, coupled with the temperature dependence in the fit coefficients (noted by others as well), and the comparative sizes of the terms used to compute COP in Table 4. One can see that the COP has three parts: DeltaQ_heater, Q_k, and Q_pulse, all of approximately the same size. Thus the calculation of each is relevant, but the data supplied for the m’s and b’s in the Q_k calculation suggest a very wide spread, which means a large percentage error, which in turn means it is unlikely the listed COPs are statistically different. But as I noted, with no information on the standard error of the slope and intercept, or statements on the errors of the computed points, I can’t go further than simple cautionary speculation at this point. BTW, these numbers are standard output for most linear regression routines. Microsoft Excel's chart 'trend analysis', however, doesn't show them; you have to use a canned worksheet function (e.g. LINEST) to produce tabular output that includes them.
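Those slope and intercept standard errors come straight out of the textbook least-squares formulas; here is a minimal pure-Python version (my own sketch with made-up data, nothing to do with the report's numbers):

```python
import math

def linfit_with_errors(x, y):
    """Ordinary least-squares fit y = m*x + b, returning also the
    standard errors of the slope and intercept -- the quantities a
    chart trendline hides but LINEST-style output reports."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    m = sxy / sxx
    b = ybar - m * xbar
    ss_res = sum((yi - (m * xi + b)) ** 2 for xi, yi in zip(x, y))
    s2 = ss_res / (n - 2)                             # residual variance
    se_m = math.sqrt(s2 / sxx)                        # std. error of slope
    se_b = math.sqrt(s2 * (1.0 / n + xbar ** 2 / sxx))  # std. error of intercept
    return m, b, se_m, se_b

# A perfect line gives zero standard errors; noisy data would not
m, b, se_m, se_b = linfit_with_errors([0, 1, 2, 3], [1, 3, 5, 7])
print(m, b, se_m, se_b)   # 2.0 1.0 0.0 0.0
```

With the standard errors in hand, one can test whether two fitted calibration constants actually differ significantly, which is the point being made above.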


I should also note that this type of thinking is associated with routine use of Propagation of Error theory, which assumes randomness in the data and no systematic effects, so one has to go even further and check into those issues as well, as I did when I discovered the systematic effect in the calibration constants in my reanalysis of Storms’ work. Back when I was trying to edit the Wikipedia CF page, I had a personal page (now archived, I believe) where I detailed the POE approach to assessing error in mass flow calorimetry. From that study it was clear the noise on the calorimetric numbers was hundreds of milliwatts, even up to several watts, not the tens of milliwatts normally claimed. The CFers miss this because they don’t consider the calibration constants an ‘experimental variable’, but they obviously are. (This is another point Jed refuses to consider, as illustrated recently in a post in this thread; he constantly repeats that studies have been done with very high S/N ratios. But that is only true if you limit your error (noise) considerations to baseline fluctuation, which is a minor component of the full error and not really worth discussing given the size of the ‘CCS’ problem.)
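As a hedged illustration of the POE approach described here: for simple mass-flow calorimetry, output power is P = mdot * Cp * dT, and first-order error propagation gives the sketch below (all numbers invented for illustration, not taken from any published run):

```python
import math

def flow_calorimetry_error(mdot, cp, dT, s_mdot, s_cp, s_dT):
    """First-order Propagation of Error for P = mdot * cp * dT.
    Treats every factor -- including any calibration constant folded
    into cp -- as an experimental variable with its own uncertainty.
    Assumes random, uncorrelated errors (no systematic effects)."""
    P = mdot * cp * dT
    var = ((cp * dT * s_mdot) ** 2 +   # (dP/d mdot) * s_mdot
           (mdot * dT * s_cp) ** 2 +   # (dP/d cp)   * s_cp
           (mdot * cp * s_dT) ** 2)    # (dP/d dT)   * s_dT
    return P, math.sqrt(var)

# Illustrative numbers: 1 g/s of water, 2 K temperature rise,
# 1% relative uncertainty on each measured factor
P, sigma = flow_calorimetry_error(1.0, 4.18, 2.0, 0.01, 0.0418, 0.02)
print(P, sigma)   # ~8.36 W, ~0.14 W
```

Even with a modest 1% uncertainty on each factor, an 8.36 W signal already carries roughly 0.14 W of propagated noise, i.e. the hundreds-of-milliwatts scale mentioned above, before any systematic calibration-constant effect is considered.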


In any case, this report is a preliminary report, not a submitted paper. If it were a submitted paper, it would (should) be rejected for inadequate error discussion, since even a superficial examination suggests errors large enough to wipe out the ability to distinguish between results (meaning that, in reality, the true number of sig figs is probably 1).

• Publish something to refute all the F&P papers in the Journal of Electroanalytical Chemistry, and you will have a chance to be scientific.


    See: "A Realistic Examination of Cold Fusion Claims 24 Years Later. A whitepaper on conventional explanations for ‘cold fusion’", Kirk L. Shanahan, SRNL-STI-2012-00678, Oct. 22, 2012


    Might still be obtainable via:


    http://www.e-catworld.com/2012…s-article-of-cold-fusion/



    Confusion necessarily arose when the skeptics refused without argument to recognize the heat measurement and its corresponding hypothesis of a nuclear source. As a consequence, the story of the excess heat phenomenon has never been told.


    The Beaudette quote could (should) also be written as:


    Confusion necessarily arose when the pundits refused without logical argument to recognize the heat measurement error and its corresponding hypothesis of a non-nuclear source. As a consequence, the story of the excess heat phenomenon has never been told.


The executive director at the Office of Naval Research, who had funded experiments by Miles and others, said (in a New Scientist article from 2003): “For close to two years, we tried to create one definitive experiment that produced a result in one lab that you could reproduce in another,” Saalfeld says. “We never could. What China Lake did, NRL couldn't reproduce. What NRL did, San Diego couldn't reproduce. We took very great care to do everything right. We tried and tried, but it never worked.”



    Norris,


That is a double-edged sword: yes, it shows problems with transportability, in support of your anti-LENR belief, but it also shows that high-level military research facilities were successful in replicating LENR, which supports our pro-LENR beliefs. Nonetheless, debate over :) , as BE seems to have licked the problem, as this excerpt from the SRI progress report shows:


-That the repeatability and the consistency of the system output are similar, regardless of which reactor the core is being operated in and which core components of a given design are being used, interchangeably.


    -To our knowledge, this is the first time in the LENR field that an independent examination of an entity’s reactor, i.e. Brillouin’s IPB HHT, is clearly demonstrating the production of a verifiable and repeatable LENR heat output with positive COPs, which are consistently initiated and uninitiated on command using system design control mechanisms.


    -In addition, Brillouin has invented and built a LENR reactor system that has been shown to be transportable from its own laboratory while showing the same positive results in its new laboratory. The unit was transported from the Brillouin laboratory to SRI, for purposes of independent operation, verification, and validation and produced similar excess power in both locations.
