Uploaded Beiting report from The Aerospace Corporation

  • Ya know what Jed, you're right---for the claimant. The critic's job is to point out the error.

    You have not pointed out any error yet. You have only said you detected one with mathematics, but you have not told us what this error might be in terms of the physical arrangement of the experiment, or how it might be detected. Until you do this, your statement is not science. It cannot be tested.


    Most of your previous assertions can easily be tested. So I suggest you translate the numbers into a statement about the physical equipment and instruments. Something along the lines of, "the numbers suggest an instability in the temperature measurement" (or whatever it is you have in mind).

  • What extra information would the t-values give you, if both the data set and measured variable are the same?


    Back in the Age of Dinosaurs, I used a software package called RS/Series extensively for data analysis. When I did MLR, it had canned routines to step you through the process I am describing. It would literally look at the t (or p) values for each term and tell you whether or not to drop it from the model. When I figured out how to get some of this information out of Excel for the post above, I fit all three models (quadratic, cubic, quartic) and looked at the standard errors of the coefficients (which are used to calculate the t statistic, and from that the p value). It turns out that none of them stands head and shoulders above the others, which means all three models I used to calculate power from T based on Beiting's data were relevant, and the spread in P found that way is a possible estimate of the error in the computed P. Definitely need more data... (which is always the answer when questions remain).
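    For concreteness, here is a minimal sketch of that kind of check in Python (using statsmodels rather than RS/Series or Excel, and with made-up calibration numbers standing in for Beiting's data): it fits all three polynomial orders and prints the standard error, t value, and p value for each coefficient.

```python
# Minimal sketch: compare quadratic/cubic/quartic fits by looking at the
# standard errors, t values, and p values of the coefficients.
# The (T, P) points below are invented for illustration only.
import numpy as np
import statsmodels.api as sm

T = np.array([25.0, 75.0, 125.0, 175.0, 225.0, 275.0, 325.0])
P = np.array([0.0, 4.8, 11.2, 19.5, 29.4, 41.0, 54.6])

for degree in (2, 3, 4):
    X = np.vander(T, degree + 1, increasing=True)  # columns 1, T, T^2, ...
    fit = sm.OLS(P, X).fit()
    print(f"degree {degree}: R^2 = {fit.rsquared:.5f}")
    for i in range(degree + 1):
        print(f"  b{i} = {fit.params[i]: .4g}, se = {fit.bse[i]:.3g}, "
              f"t = {fit.tvalues[i]: .3g}, p = {fit.pvalues[i]:.3g}")
```

    If no order stands out (similar R^2, no coefficient with a clearly significant t), that is the 'need more data' situation described above.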


    You didn't answer the question... The non-waffling answer is that it wouldn't tell you anything. Although I guess when someone mentions a reliance on 'canned routines', maybe it's unfair to expect a deeper understanding of the issues at hand.

  • I find that the fastest way to judge the quality of a report is to see how much it angers and/or threatens the ego of those odd pathologically-skeptical types. Using the Beiting report as an example:


    1) Do skeptics feel the need to besmirch the author's integrity? For example, by heavily insinuating that they believe the author intentionally fiddled their calibration curve:


    I had to ask why he used a cubic equation....

    In other words, simple piddling around with the calibration equation covered the signal detected. That means to believe the calibrations, we need a lot more info on how he chose his equations


    2) Does the skeptic need to invent alternate laws of thermodynamics, statistical techniques, or invisible fume hoods, in order to help their argument? (Extra credit if they keep using guilty inverted commas to misdescribe it).


    'sensitivity analysis'

    'sensitivity analysis'


    3) Does the skeptic feel it necessary to reinforce their arguments by authoritatively spouting total nonsense, implying a deeper knowledge of a topic than they truly possess, in the hope that no-one will call them out on it?


    The R^2 values are not adequate to distinguish which model is best. The t values of the coefficients are needed...


    4) Does the skeptic resort to ANGRY CAPS ranting, in the manner of fellow mouth-foamer, Mary Yugo (RIP)?


    BUT NONE OF THAT CHANGES BLAH BLAH BLAH



    ...So 4 out of 4 for Beiting’s work: Must be pretty good then... :/

  • I find that the fastest way to judge the quality of a report is to see how much it angers and/or threatens the ego of those odd pathologically-skeptical types. Using the Beiting report as an example:


    So, no technical wherewithal used at all. Seems about right based on your contributions to this forum...



    1) Do skeptics feel the need to besmirch the author's integrity? For example, by heavily insinuating that they believe the author intentionally fiddled their calibration curve:


    You quoted 3 points I made with no technical explanation at all (which you apparently are incapable of doing), so let me add a couple of technical comments instead:


    1)" B also tries to do the energy per unit mass trick..."


    This has been an issue since day 1 of the CF saga. Does one use bulk or surface measures? Is the effect a surface effect or a bulk effect? One significant point from Storms' work on Pt that I have previously noted is that Pt does not hydride. Therefore its CF signal must be surface-derived. That suggests that increasing the surface area will increase the CF. So what do we see the field doing? First, they went to the codep process, which produces a high-surface-area, dendritic Pd. Next they went 'nano'. So B using energy per unit mass values is misleading. Further, there are a lot of issues about choosing the mass to use (which I mentioned before, but that was a technical comment that Z probably didn't understand).


    BTW this doesn't 'besmirch the author's integrity' unless you assume the author is incapable of making a mistake. This is called 'peer review'.


    2) "I had to ask why he used a cubic equation...."


    B doesn't say technically why he chose a cubic. My quick look suggested there isn't much difference in 2nd, 3rd, and 4th order fits. So what's the problem in asking why?


    3) "In other words, simple piddling around with the calibration equation covered the signal detected. That means to believe the calibrations, we need a lot more info on how he chose his equations"


    See above comment. It is a technical issue so Z probably just doesn't understand...



    2) Does the skeptic need to invent alternate laws of thermodynamics, statistical techniques, or invisible fume hoods, in order to help their argument? (Extra credit if they keep using guilty inverted commas to misdescribe it).


    Your quote does not support your contention.


    Sensitivity analysis is a standard technique, for ex., see https://www.edupristine.com/bl…bout-sensitivity-analysis, but it is technical.


    3) Does the skeptic feel it necessary to reinforce their arguments by authoritatively spouting total nonsense, implying a deeper knowledge of a topic than they truly possess, in the hope that no-one will call them out on it?


    You need to study up on chemometrics. But that is a technical area...



    4) Does the skeptic resort to ANGRY CAPS ranting, in the manner of fellow mouth-foamer, Mary Yugo (RIP)?


    I only do that after repeated failures to understand indicate that the failure is a deliberate choice. If you don't like it, tough. Or stop deliberately not understanding...


    BTW, your quote:


    Although I guess when someone mentions a reliance on 'canned routines', maybe it's unfair to expect a deeper understanding of the issues at hand.


    is hilarious. Do you think I would work better and faster if I just used my fingers and toes? ROFL.

  • I find that the fastest way to judge the quality of a report is to see how much it angers and/or threatens the ego of those odd pathologically-skeptical types.

    Do you apply the same criterion to work outside of LENR, for example faster-than-light neutrinos? Or free energy motors based on magnets? Or cars that get more than 100 mpg with conventional internal combustion engines supplemented with engine-powered generators of "Brown's Gas?" Or for that matter, to Andrea Rossi's various incarnations of ecats? So you're saying those folks put out quality reports because those reports anger skeptics? Interesting argument.

  • So, no technical wherewithal used at all. Seems about right based on your contributions to this forum...


    Ha, this coming from the person who thinks thermodynamics is an art that involves words and hand-waving? (And see link here for some technical wherewithal, where I explain the implications of the first law of thermodynamics to some joker).


    Sensitivity analysis is a standard technique


    Yes it is, so why do you claim to be performing a 'sensitivity analysis', when all you are doing is multiplying each polynomial constant (all at the same time) by a number you plucked from thin air?

    (See page linked above for several more examples of this plucking-a-number-from-thin-air behaviour).
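    For what it's worth, a minimal sketch of the difference in Python, with invented numbers (the coefficients, uncertainties, and temperature below are hypothetical, not taken from the report): a sensitivity analysis propagates each coefficient's own fitted uncertainty to the output, one at a time, whereas scaling every coefficient by one arbitrary common factor just rescales the whole curve.

```python
# Minimal sketch of a one-at-a-time sensitivity analysis on a cubic
# calibration polynomial. All numbers are invented for illustration.
import numpy as np

coeffs = np.array([0.12, 0.085, 2.1e-4, -1.5e-7])  # hypothetical cubic fit
sigmas = np.array([0.05, 0.004, 0.4e-4, 0.8e-7])   # hypothetical std. errors
T = 300.0                                          # example temperature

def power(c, t):
    """Evaluate the calibration polynomial c0 + c1*t + c2*t^2 + c3*t^3."""
    return c[0] + c[1]*t + c[2]*t**2 + c[3]*t**3

base = power(coeffs, T)

# Sensitivity analysis: shift each coefficient by +1 standard error in turn
for i, s in enumerate(sigmas):
    perturbed = coeffs.copy()
    perturbed[i] += s
    print(f"c{i} + 1 sigma shifts P by {power(perturbed, T) - base:+.4f} W")

# Contrast: multiplying every coefficient by the same arbitrary factor
# (e.g. 1.01) rescales the curve and says nothing about fit uncertainty.
print(f"all coeffs x 1.01 shifts P by {power(coeffs*1.01, T) - base:+.4f} W")
```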


    Zeus46:

    3) Does the skeptic feel it necessary to reinforce their arguments by authoritatively spouting total nonsense, implying a deeper knowledge of a topic than they truly possess, in the hope that no-one will call them out on it?


    You need to study up on chemometrics. But that is a technical area..


    What does chemometrics have to do with the price of fish? You wrongly claimed that "The R^2 values are not adequate to distinguish which model is best - The t-values of the coefficients are needed" [ie. nonsense spoken authoritatively]... Then when I asked you "What extra information would the t-values give you, if both the data set and measured variable are the same?" You gave a long answer that frankly didn't make much sense, but can apparently be summarised as 'the computer gives me numbers'.


    is hilarious. Do you think I would work better and faster if I just used my fingers and toes? ROFL.


    No, but learning a bit of theory rather than glibly pressing buttons might help you to answer the question about R^2 and t-values posed above.

  • It's a simple question Kirk. There's no trick.


    Of course, the answer proves the nonsense you spouted earlier about R^2 to be incorrect. But at least by answering it you'll get the chance to redeem yourself somewhat.

    And researching a bit of basic stats theory would be a much better way of spending your time, compared to the hour+ you've just spent skulking around 'The Playground', no doubt trying to find an error in something I wrote. ;) 
    (Unless you're teaching yourself about the 1st LoT of course... an overdue move that I whole-heartedly encourage).


    Do you apply the same criterion to work outside of LENR, for example faster-than-light neutrinos? Or free energy motors based on magnets? Or cars that get more than 100 mpg with conventional internal combustion engines supplemented with engine-powered generators of "Brown's Gas?" Or for that matter, to Andrea Rossi's various incarnations of ecats? So you're saying those folks put out quality reports because those reports anger skeptics?


    Yes, I do, and I would... Assuming (of course) these "skeptics" are driven so crazy by whatever report, that they resort to inventing new laws of thermodynamics, unique statistical methods, and imaginary fume hoods in order to keep their knee-jerk counter-arguments alive for any length of time.

  • While generally I am not a fan of Shanahan, in this case I must admit I am. This argument by those who want to be seen as having the prominence to 'anoint' their chosen recipes, versus the proper discussion of how the signal must be very dramatically above the noise to have any credence, is at the heart of this writhing field. That's the reason for the now decades-old sage advice/demand that the cold fusion/LENR nuclear fire be shown accompanied by nuclear smoke/ash. That quintessential smoke is abundant gamma rays orders of magnitude above the noise, and the ash, 4He, similarly orders of magnitude above the noise, in repeatable experiments that produce palpable heat. A measly ~1 watt in any experiment, let alone a 350 C device, is NOT palpable in anyone's book. The discussion of whether this minuscule heat signal is buried in the noise is apropos, and the anointed ones and armchair anointers must be questioned with a proper WTF.


    Here's a helpful reference.

    obfuscation

    /ɒbfʌsˈkeɪʃ(ə)n/
    noun

    the action of making something obscure, unclear, or unintelligible: "when confronted with sharp questions they resort to obfuscation."

  • Russ, I imagine you’d have more respect for Beiting’s work if he had never published any papers and relied instead on name-dropping and loose blog talk about supposed gamma rays?


    ETA: Come to think of it, it's no wonder you're such a fan of Rossi. Perhaps even some kind of kindred spirit...

  • Russ - what are you doing? Beiting is a high-integrity researcher who is seeking truth and facts around anomalies that have been revealed in his lab results. The technical challenge that has recently surfaced regarding his work will only lead to better and more determined research (and results, in my opinion) on his part. This all started with "measly" milliwatts.


    Those on the wrong side of history will soon clearly understand where they stand - into perpetuity.

  • I imagine you’d have more respect for Beiting's work if he had never published any papers, relying instead on name-dropping and loose blog talk about supposed gamma rays?


    Zeus46: Maybe you remember the glowstick/mfp experiment. There were abundant gammas, but nobody knew what they were. The main problem is that people measure with low resolution in the wrong range and use standard methods to interpret the lines. Today we know that the lines are modulated by magnetic moments.

  • You have not pointed out any error yet. You have only said you detected one with mathematics, but you have not told us what this error might be in terms of the physical arrangement of the experiment, or how it might be detected. Until you do this, your statement is not science. It cannot be tested.


    Most of your previous assertions can easily be tested. So I suggest you translate the numbers into a statement about the physical equipment and instruments. Something along the lines of, "the numbers suggest an instability in the temperature measurement" (or whatever it is you have in mind).


    Jed, your argument with Kirk here is incorrect.


    Kirk is claiming (correctly, AFAIK) that the reported results are 10X more sensitive to calibration error than you might think - pretty obvious given the experimental conditions where the signal is much smaller than the input power.


    That means that a 1% error in the calibration, or between calibration and active runs (due to different conditions) could give the result. That is the range where no-one can be sure there is not some error without additional careful checks. Given the small number of cal points it is a severe issue, which could be understood more by comparing multiple calibrations under different conditions etc.
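    To make the amplification explicit (a sketch using a simplified one-constant calibration $P_{\text{out}} = k\,f(T)$; the real fit is a polynomial, but the scaling argument is the same):

    $$\Delta P_{\text{excess}} = \frac{\Delta k}{k}\,P_{\text{out}}, \qquad \frac{\Delta P_{\text{excess}}}{P_{\text{excess}}} = \frac{\Delta k}{k}\cdot\frac{P_{\text{out}}}{P_{\text{excess}}}.$$

    With an excess of order 1 W riding on tens of watts of output (round illustrative figures, not Beiting's), the ratio $P_{\text{out}}/P_{\text{excess}}$ is of order ten or more, so a 1% shift in $k$ between calibration and active runs produces an apparent excess comparable to the reported signal.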


    You are asking Kirk for possible causes for such a 1% error - this is silly. You can always speculate but the nature of such errors is that you don't know till you look for and close them. CF work is littered with such (good) practice where initial encouraging results are correctly clarified as one or other of a wide variety of errors. It also contains claims made where this careful checking has not been done, as here. They should be treated with extreme skepticism.


    The constructive reply to Kirk's (helpful) critique is to put more effort into showing that the calibration constants remain identical to say 0.1% over a range of experimental conditions that encompass the calibration and the active runs. Also, do enough calibration runs so that errors in calibration are experimentally understood and minimised. Those are actually two distinct issues, both of which need to be nailed before these results can be viewed as anything other than likely, but not proven, experimental error.


    You invoke expertise. But experts do not necessarily critique such results from the standpoint of someone trying to find possible experimental errors, especially when they depend on positive results to get funding for future tightening up of the experiment. That is just human nature.


    You have a binary distinction of wrong or right on these things. In reality there are gradations: checked, carefully checked, very carefully checked. And there is analysis, such as Kirk has done, to indicate (no more than that) possible issues in the level of checking. Kirk's analysis, by the way, cannot prove experimental errors; it can merely point out that they might exist.

  • Jed, your argument with Kirk here is incorrect.

    Kirk is claiming (correctly, AFAIK) that the reported results are 10X more sensitive to calibration error than you might think - pretty obvious given the experimental conditions where the signal is much smaller than the input power.

    The signal is far greater than the noise from the input power. Input power is not noise. It can be measured with extremely high precision. The only noise from it is in the microwatt level. People often mistakenly claim that input power is noise. I am a little surprised to see you make this mistake.


    You are asking Kirk for possible causes for such a 1% error - this is silly. You can always speculate but the nature of such errors is that you don't know till you look for and close them.

    Until you find an error, you cannot claim there is one. That is not falsifiable. Any experiment in history, including Newton's prism experiment, might have an undiscovered error. A claim that there is an error must be held to the same rigorous standard as any other claim about an experiment.


    The constructive reply to Kirk's (helpful) critique is to put more effort into showing that the calibration constants remain identical to say 0.1% over a range of experimental conditions that encompass the calibration and the active runs.

    Beiting has shown that the calibration constants can be measured to 0.1% over all of the temperatures measured in this experiment. The calibration curve is not perfectly linear. You might say the "constant" varies (to put it in a humorous way), but it always returns to the same value at the same temperature, to within 0.1%. In other words, a given temperature always indicates the same level of heat, and the same level of heat always produces that same temperature. He calibrated with hundreds of points. One of the curves he showed had so many points it looked like a solid line in the graph -- as he pointed out.


    Shanahan's critique is not helpful because he has not reduced it to an assertion about the physical experiment. He has found what he thinks is a problem by manipulating numbers. Fair enough; so far so good. However, all of the numbers are from instruments. If there is a problem, the instruments are not working, or the arrangement of the calorimeter is wrong. In that case, the calibration cannot work. Whatever problem there is in the active run, there has to be a way to demonstrate it in a calibration. In other words, Shanahan is saying there is a way to input exactly the same amount of heat into Cell A as into Cell B (or Cell A with different powder), at the same temperature, and yet the temperature rises higher in Cell A, even though there is no extra source of heat in it. The conventional explanation, going back to the 1840s, is that a higher temperature is caused by an additional source of heat. Shanahan says there is a new, undiscovered way this can happen. He has to specify what that is, or we cannot distinguish his explanation from the conventional explanation.


    A different temperature can only result from some physical mechanism. If it is not excess heat, there has to be more of the cell wall exposed (leaking more heat), or a larger, more conductive wire going into the cell, or a malfunctioning temperature sensor. If Shanahan is correct and the numbers show an error, the numbers must point to some specific physical error in the configuration or instrument. Otherwise they are meaningless.


    If the problem was only in the computation (the equations) it would have to show up in the calibration -- and it does not.

  • Dewey Weaver:

    Those on the wrong side of history will soon clearly understand where they stand - into perpetuity.

    Sure, Dewey. But the history isn't written yet. You and IH may be the ones on the wrong side. Your game score so far is Rossi one, IH zero. Maybe it's a bit early to make public grandiose projections for practical LENR.

  • THHuxleynew:

    The constructive reply to Kirk's (helpful) critique is to put more effort into showing that the calibration constants remain identical

    Of course. But another solution would be to design an experiment which has such a large absolute power level and signal to noise ratio (Pout/Pin) that Shanahan-type errors can't significantly affect it. Another obvious way to refute Shanahan would be to provide a test with a high power level and long duration with no power input. I know Jed claims that this exists, but when one looks closely, it isn't quite the robust and reproducible test one would want. Certainly Mizuno's claims are close. But it seems Mizuno's reactor won't work anywhere other than Mizuno's lab. Yeah, I know it would be better if enough money were available to replicate his work. I wish people would spend on that instead of very low power Pd-D work with lots of accuracy issues in the minds of many other than Jed Rothwell.


    So how about it, Dewey Weaver ? Why doesn't IH work hard and spend enough to replicate Mizuno's kilowatt experiment?

  • But another solution would be to design an experiment which has such a large absolute power level and signal to noise ratio (Pout/Pin) that Shanahan-type errors can't significantly affect it.

    The signal to noise ratio has nothing to do with Pout/Pin. They are totally separate. The power in is direct current, which can be measured in parts per million in both accuracy and precision, so the noise in it is vanishingly small. In other words, the portion of input power which is noise is a few parts per million. The rest can be subtracted.
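    To illustrate with round numbers of my own (not from the report): for DC input power $P = VI$, independent voltage and current uncertainties combine as

    $$\frac{\delta P}{P} = \sqrt{\left(\frac{\delta V}{V}\right)^{2} + \left(\frac{\delta I}{I}\right)^{2}},$$

    so if a good bench meter reads V and I to a few parts per million each, δP/P is still only a few parts per million. On, say, 10 W of input, that is on the order of tens of microwatts of noise, negligible next to a signal of order 1 W.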


    (You made the same mistake THHuxley did. I do not understand why both of you fail to see that input power is not noise.)


    The output power noise is larger. It would be larger even if there were no power in.


    The Shanahan-type errors you refer to are imaginary. They do not exist in the real world, which is why he cannot relate them to an actual experiment or tell us how they would happen.


    Another obvious way to refute Shanahan would be to provide a test with a high power level and long duration with no power input.

    High power would reduce the signal to noise ratio. The duration of this experiment was 42 days. If that is not long enough to convince him (or you), nothing will be long enough. It far exceeds the limits of chemistry.


    No test will refute Shanahan and other extremists because their objections are irrational nonsense. Shanahan says that the sense of touch cannot distinguish between an object at 100 deg C and one at room temperature. He says that a 1-liter hot object will remain hot for 3 days, and that a bucket of water will evaporate overnight when left in an ordinary room. People who believe such things have no common sense and no knowledge of science. No demonstration, no matter how convincing, will change their minds. (It is possible Shanahan does not actually believe these things and is trolling us, but in that case we can say he will never admit he is wrong or engage in a scientific discussion.)

  • Yes, that graph is a good visual explanation. To quibble just a little, in real life, the noise in input power is far smaller than what is shown here. It would be too small to show up in this graph. Using ordinary instruments, direct current electric power can be measured with greater precision and accuracy than any other physical quantity. *


    To summarize, "noise" is what you cannot measure with confidence. It is in the margin of error. Nearly all input power is outside the margin of error.



    * Using advanced instruments at NIST, time is the most precise quantity, and other SI units such as length are based on it these days. The new definition of mass, derived with a Kibble balance, is determined by measuring the electricity supplied to an electromagnet. This is used to derive Planck's constant with high precision. In other words, the new definition of mass depends on our ability to measure electricity, which -- as I said -- has high precision. Enough to put aside the physical weights and perfect spheres proposed to define a kilogram.


    See:


    https://www.nist.gov/news-even…e-international-unit-mass

    Shanahan's critique is not helpful because he has not reduced it to an assertion about the physical experiment. He has found what he thinks is a problem by manipulating numbers. Fair enough; so far so good. However, all of the numbers are from instruments. If there is a problem, the instruments are not working, or the arrangement of the calorimeter is wrong. In that case, the calibration cannot work.


    I agree with this. To take Shanahan's argument to the extreme, imagine you only had three data points in your calibration: you could fit an infinite number of cubic equations to those three points, so the predictive power of any single one isn't good. When you have seven points to work with, it's true, you can click a few buttons on your computer and it'll tell you that essentially you need more data for a cubic regression (partly because Shanahan's method of calculating this has to account for a potentially infinite amount of randomness without going haywire).


    But, when heating up a lump of metal, we know roughly what will happen... i.e. there's a curve, and the data's gonna fit it... It's not some esoteric stochastic process that we need to tease the most useful moving average out of... So plotting your seven points, and fitting whatever low-order polynomial has the best R^2, becomes more than reasonable (a quick sketch of this is below)...


    ...And that allows you to then get on with doing something productive. (e.g. a normal error analysis, as opposed to 'piddling' around... cf. contrasting publication rates, perhaps).
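    A minimal sketch of the seven-point argument above, in Python with invented calibration data (only the qualitative behaviour matters): smooth data are fit almost equally well by any low-order polynomial, while a cubic through only three points is underdetermined.

```python
# Minimal sketch: smooth 7-point calibration data are fit nearly equally
# well by quadratic, cubic, and quartic polynomials (R^2 ~ 1 for all).
# The data are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
T = np.linspace(50, 350, 7)
P = 0.1 + 0.08*T + 2e-4*T**2 + rng.normal(0, 0.05, T.size)

for degree in (2, 3, 4):
    coeffs = np.polyfit(T, P, degree)
    resid = P - np.polyval(coeffs, T)
    r2 = 1 - (resid @ resid) / ((P - P.mean()) @ (P - P.mean()))
    print(f"degree {degree}: R^2 = {r2:.6f}")

# With only three points, a cubic (four coefficients) is underdetermined:
# np.polyfit(T[:3], P[:3], 3) warns that the fit is poorly conditioned,
# because infinitely many cubics pass exactly through three points.
```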