Uploaded Beiting report from The Aerospace Corporation

  • A suggestion: just do 2 independent runs for each point in the calibration curve, or double the number of points while keeping the same degree of polynomial as in the current fit. The statistics go haywire because of the low degrees of freedom.

    You see, each parameter in the model has an estimated variation, but because there are so many of them compared to the number of points, that variation is quite high. So double the number of points and the statistics get back on track.

  • So double the number of points and the statistics get back on track.


    Well, sort of. Yes, increasing the number of points is always helpful and I definitely recommend that too (it's called 'replication'). It likely would assist in deciding which model to use (quadratic, cubic, etc.). It also might help in tightening the standard deviations of the coefficients; here, 7 points fit with 4 coefficients leaves only 3 degrees of freedom. But you also have to allow for other things to show up by repeating the calibration run at later points in time. However, it won't tell you about any systematic errors.

  • Some food for thought for those who refuse to understand the concept of defining the error in computed values and then studying its impact...


    We all know that a primary sign of pathological science is 'working in the noise'. If we think about it for a couple of seconds, we'd all agree (I'd guess) that the scientists who end up labeled 'pathological' didn't deliberately set out to earn that moniker. So why do they end up with it? I'd suggest the primary reason is that they have, or develop, inaccurate ideas about the error levels in their work. Now, if they mistakenly overestimate the error level, they will abandon work too soon, thinking they are 'working in the noise'. That's not so much a problem, except that possible advances might be missed by quitting too early.


    The problem really comes when the researchers assume too small an error level. Then they fool themselves into thinking that results which are actually in the noise are in fact very significant. This is what I observe in most CF claims. Therefore I recommend, and apply, a more objective assessment of error levels using standard statistical methods, primarily propagation of error (or uncertainty) calculations and some version of response surface modeling (the quick-and-dirty version being 'sensitivity analysis') to map out the impact of these errors. (Of course the use of statistics requires replication for reliable results, but some insight can be gleaned from one-time events in some cases.) That is my standard approach, which I used in my 2002 paper that suggests a systematic error in CF calorimetry, and in the analysis here of Beiting's report (and in the Mizuno bucket anecdote, the Mizuno air-flow calorimeter data JR uploaded, the McKubre M4 run, and so on). Accurately determining error levels is the only way to avoid working in the noise. (A minimal sketch of a propagation-of-error calculation follows below.)
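    To make the propagation-of-error idea concrete, here is a minimal sketch in Python. The numbers are my own illustrative values, not anyone's data: for an input power P = V*I, the first-order POE formula combines the variances of V and I weighted by the squared partial derivatives.

    import math

    # Hypothetical 1-sigma uncertainties; illustrative values only.
    V, sigma_V = 10.0, 0.05   # cell voltage (V)
    I, sigma_I = 0.50, 0.005  # cell current (A)

    P = V * I                 # computed input power (W)
    # First-order POE: sigma_P^2 = (dP/dV)^2 * sigma_V^2 + (dP/dI)^2 * sigma_I^2
    sigma_P = math.sqrt((I * sigma_V) ** 2 + (V * sigma_I) ** 2)
    print(f"P = {P:.2f} +/- {sigma_P:.3f} W")  # ~5.00 +/- 0.056 W

    Any 'signal' in the computed power smaller than a few sigma_P is, by this measure, in the noise.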

  • For those who don’t know how to extract the standard errors of the regression coefficients…


    Using MS Excel's LINEST function…

    (Per the LINEST function help, there is supposed to be a way to give the function a set of x data and have it internally compute the quadratic and cubic terms of a cubic fit, but on short notice it wasn't working for me, so I did it manually.)


    The hand-digitized data for the Beiting curve I've been discussing (y = Power, x = Temp), plus the T^2 and T^3 values, are below (recall I said this data is not exactly correct, since my cubic-fit coefficients came out different than Beiting's, but it serves to make the point):


    P (W)        T (°C)      T^2         T^3
    0            20.625      425.3906    8773.682
    1.357576     82.5        6806.25     561515.6
    3.258182     139.375     19425.39    2707414
    5.735758     198.125     39253.52    7777103
    8.824242     253.75      64389.06    16338725
    12.69333     307.8125    94748.54    29164783
    13.49091     317.8125    101004.8    32100583

    There are 3 columns of x values. Select an empty region that is 4 cols wide and 5 rows deep. Click in the formula bar text entry field and type "=LINEST(" (no quotes). Then select the Y values with the mouse, type a comma, select all the x values with the mouse, type a comma, and type "TRUE,TRUE)" (no quotes). Then press <Control><Shift><Enter> simultaneously. The formerly empty region will fill with the following info, EXCEPT FOR the first row ('Row 0') below, which I added afterwards:


    Row 0:   0.210044449    0.265615    0.105368   -0.171160638
    Row 1:   1.15621E-07    4.71E-05    0.016991   -0.38431339
    Row 2:   2.42855E-08    1.25E-05    0.00179     0.065779325
    Row 3:   0.999969958    0.041552    #N/A        #N/A
    Row 4:   33285.48887    3           #N/A        #N/A
    Row 5:   172.4122894    0.00518     #N/A        #N/A



    'Row 0', the one I added, is just Row 2 divided by Row 1, i.e., it is an error fraction (multiply by 100 to get %).

    Row 1 is the fit's coefficients in decreasing order (i.e., the coeff for T^3 first, the additive constant last).

    Row 2 is the standard error of each coefficient.

    Row 3, col 1 is the R^2; col 2 is the standard error of Y.

    Row 4, col 1 is the F statistic; col 2 is the degrees of freedom.

    Row 5, col 1 is the Regression Sum of Squares; col 2 is the Residual Sum of Squares.
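    For anyone who wants to check the LINEST numbers outside Excel, here is a minimal sketch in Python/numpy of the same cubic fit and coefficient standard errors. It uses the digitized data from the table above and should reproduce Rows 1 and 2 to rounding:

    import numpy as np

    P = np.array([0, 1.357576, 3.258182, 5.735758, 8.824242, 12.69333, 13.49091])
    T = np.array([20.625, 82.5, 139.375, 198.125, 253.75, 307.8125, 317.8125])

    X = np.column_stack([T**3, T**2, T, np.ones_like(T)])  # cubic design matrix
    coef, res, rank, sv = np.linalg.lstsq(X, P, rcond=None)

    dof = len(P) - X.shape[1]                 # 7 points - 4 coefficients = 3
    rss = float(res[0])                       # residual sum of squares (Row 5, col 2)
    cov = rss / dof * np.linalg.inv(X.T @ X)  # covariance of the coefficients
    stderr = np.sqrt(np.diag(cov))            # standard errors (Row 2)

    for c, s in zip(coef, stderr):
        print(f"{c: .6e} +/- {s:.2e} ({100 * abs(s / c):.0f}%)")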



    What I did in examining the potential impact of experimental error on the determination of the power calibration equation was to multiply each term by '1.0x', where the digit 'x' ran from 1 to 5, which is just increasing the coefficient by x%. (Later I also subtracted, i.e., 0.96 instead of 1.04.)


    The 1-sigma values on the coefficients are almost 10X that, so my 'piddling' was numerically trivial, but the result of that study was that the calculated powers varied enough to 'cover' the reported global excess heat rate of 0.944 W. IOW, the ~1 W excess reported is well within the noise band of the calibration. (A sketch of the exercise follows below.)
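    Here is a sketch of that 'piddling' in Python, using my fit coefficients from the LINEST post above. Remember these come from hand-digitized data, so they illustrate the method, not Beiting's exact numbers:

    import numpy as np

    coef = np.array([1.15621e-07, 4.71e-05, 0.016991, -0.38431339])  # T^3 ... const

    def power(c, T):
        return c[0] * T**3 + c[1] * T**2 + c[2] * T + c[3]

    T = 317.8125            # highest temperature in the digitized data
    base = power(coef, T)

    for pct in (1, 2, 3, 4, 5):
        # Tweak one coefficient at a time by +/- pct%, then all four together.
        singles = []
        for i in range(4):
            for s in (+1, -1):
                c = coef.copy()
                c[i] *= 1 + s * pct / 100
                singles.append(abs(power(c, T) - base))
        combined = abs(power(coef * (1 + pct / 100), T) - base)
        print(f"{pct}%: max single-term shift {max(singles):.3f} W, "
              f"all terms together {combined:.3f} W")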


    Now, running multiple calibration runs might help, because it might show that the coefficient error bands are actually smaller. Or it might just confirm the current error levels (or even worsen them, of course). The point is that you have to get that data to know.

  • I wasn't aware that SRI couldn't calculate the error levels in a calorimeter. Amazing, I could do it before I left school.


    I'm sure they could if they were aware of the need. That is usually the problem. 'Old school' guys just wing it and talk about 5 or 10% errors as if that's all you need to do. The 2004 Szpak, Mosier-Boss, Miles, and Fleischmann paper I commented on in my 2005 publication had a discrepancy in the collected volume of water that was just a few percent, but it was positive. I commented that it was likely entrained water droplets. In the peer review, the reviewers claimed it was 'just noise'. 'Old school' vs. 'new school'.


    I'm glad you got this in school. I've asked a lot of people about this, and my observation is that the coverage is spotty. I got it in my undergrad junior-level p-chem lab course. I asked Steve Jones when he got it, and he replied in grad school. I worked with a PhD chemist who told me he'd never seen it. That's why I keep trying to explain it here; I assume many have never heard of it. Maybe I'm wrong, but the evidence in the CF literature says I'm not. What I have seen consistently is the use of the baseline noise of the calorimeter as the 'error' of the technique. However, that's not what my reanalysis of the Storms data suggests is the full error, nor the study here on Beiting's data (and I note that Beiting actually did the POE for one variable (T) in his power equation), nor many other places I've discussed in the forum before.


    P.S. To JR and Z: if you paid attention, the prior post explains why an 8% shift would still be less than 1 sigma (the 'Row 0' error fractions run from roughly 10% to 27%).

  • Yet you claim they made a mistake that you discovered in an hour. Are you quite certain of that?


    Absolutely. They used an equation that requires the examination of all experimental variables to estimate error, and they only looked at 1 of 5 experimentally determined numbers. That's an 80% miss. It doesn't take a rocket scientist to see that!


    Has it crossed your mind that you might be wrong, especially since you cannot propose a test that would reveal this error of yours?


    Of course it has. I make mistakes all the time. But what I did is pretty idiot-proof. You just back out the heat signal from the heat-per-unit-mass signal and then check whether little tweaks to the equation constants can cover it. Simple and easy, and I've now gone through how I did it in excruciating detail (even though I said I wouldn't) just so you can understand, Jed. Try to keep up...



    And all of the calibration tests show nothing.


    All one of them, you mean?


    Cold fusion was confirmed by the creme de la creme of scientists


    Well, that's exaggerating a bit, but it really isn't that important. It doesn't matter who you are; you can still make mistakes. You need to read up on how 'creme de la creme' scientists make mistakes just like 'the rest of us'.

  • What extra information would the t-values give you, if both the data set and measured variable are the same?

    Back in the Age of Dinosaurs, I used a software package called RS/Series extensively for data analysis. When I did MLR, it had canned routines to step you through the process I am describing. It would literally look at the t (or p) value for each term and tell you whether or not you should drop it from the model. When I figured out how to get some of this info from Excel for the post above, I did all 3 models (quadratic, cubic, quartic) and looked at the standard errors of the coefficients (which are used in the t statistic calc, and from that the p). It turns out none of them stands out as head and shoulders better than the others, which means all three models I used to calculate power from T based on Beiting's data were equally defensible, and the spread in P found that way is a possible estimate of the error in the computed P. Definitely need more data... (which is always the answer when questions remain). A sketch of the t/p screening follows below.
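    A minimal sketch in Python/scipy of that term-screening step, using the coefficients and standard errors from the LINEST post above: t is just the coefficient over its standard error, and the two-sided p-value uses the fit's 3 residual degrees of freedom.

    from scipy import stats

    terms  = ["T^3", "T^2", "T", "const"]
    coef   = [1.15621e-07, 4.71e-05, 0.016991, -0.38431339]
    stderr = [2.42855e-08, 1.25e-05, 0.00179, 0.065779325]
    dof = 3  # 7 points minus 4 fitted coefficients

    for name, c, s in zip(terms, coef, stderr):
        t = c / s
        p = 2 * stats.t.sf(abs(t), dof)               # two-sided p-value
        print(f"{name:5s} t = {t:7.2f}  p = {p:.4f}")  # large p -> drop candidate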


    And I recollect someone claiming that industrial scientists tend to have rubbish publication records.


    Who said that?

  • Of course I can. So did Stefan. It's called 'replication'. (followed by doing the math right)

    This experiment has been replicated several times at Aerospace, and thousands of times elsewhere. The same technique has been used millions of times over the last 150 years. When the temperature rises above the calibration curve, the conventional explanation is that there is an additional source of heat. You are saying there is another explanation. If so, there has to be some physical mechanism, and you have to be able to tell us what test would reveal this mechanism. If you cannot do this, your theory predicts the same result as conventional theory does, and there is no way to confirm or falsify your claim, so people will say the conventional explanation is correct.


    Note that your previous claims were easy to test, as follows:

    1. Heat up a frying pan and see if you can tell it is hot by holding it with a potholder, or by holding your hand over it.
    2. Remove it from the stove, let it sit for 3 days, and see if it is still hot.
    3. Put a bucket of water in a room and see if it evaporates overnight.

    You have not suggested any similar experiment for your present theory. A theory that cannot be tested by experiment is not science. A theory that predicts exactly the same outcome as conventional theory cannot be falsified and serves no purpose.

  • This experiment has been replicated several times at Aerospace, and thousands of times elsewhere. The same technique has been used millions of times over the last 150 years.


    This is your standard misdirection tactic again, Jed. Please stop trying to confuse the issues.


    In the Beiting report, there was 1 (count them, 1) reported cal curve for each cell/thermocouple pair, not 'several'. You have claimed that they did more. Claims are vaporware. Cite the paper (NOT an abstract) or shut up.


    Your 'thousands of times elsewhere' is your continual fanatical chant. It isn't true, but you won't recognize that. The rest of us do.


    Similar techniques may have been used for the past 150 million years for all I know. All I am commenting on in this thread is the impact of possible experimental variation on Beiting's conclusions in the report you uploaded. Your tactic of trying to drag in every use of calorimetry in history is irrelevant.


    When the temperature rises above the calibration curve, the conventional explanation is that there is an additional source of heat. You are saying there is another explanation. If so, there has to be some physical mechanism, and you have to be able to tell us what test would reveal this mechanism.


    This is funny. I clearly recall you ranting and raving over the years about how not having a theory to explain CF didn't mean jack. Now you sit here and claim I have to supply a 'mechanism', which is nothing but a theory.


    By the way, what you started off with is incorrect too. What you should have written is:


    "When the temperature rises above the calibration curve, one explanation is that there an additional source of heat. There are other explanations. For any deviation, there has to be some physical mechanism, but finding this is often quite difficult. Without reproducibility it becomes impossible to proceed further."




    If you cannot do this, your theory predicts the same result as conventional theory does, and there is no way to confirm or falsify your claim, so people will say the conventional explanation is correct.


    A.) I have not proposed any theory. You attempt to confuse the reader by postulating a variety of issues and attributing them to me, a tactic you learned from your CF heroes and their strawman publication in the 2010 J. Env. Mon. paper.


    B.) What I have proposed, which is standard science, is that the 1 calibration curve presented by Beiting for the Cell #2, TC#1 combo is susceptible to variation, and that its potential impact on the conclusions needs to be considered. That almost qualifies as a "Law", Jed, because after we figured out that we needed to experimentally test 'scientific' theories such as an earth-centric solar system, we figured out that reproducing measurements didn't guarantee getting the same number, i.e., experiments have variation. We recognized, as perhaps the second most important concept of modern science, that we need to quantify that variation. We do that via replication. (P.S. 'We' is 'mainline science'.)


    Note that your previous claims were easy to test, as follows:
    Heat up a frying pan and see if you can tell it is hot by holding it with a potholder, or by holding your hand over it.
    Remove it from the stove, let it sit for 3 days, and see if it is still hot.
    Put a bucket of water in a room and see if it evaporates overnight.

    You have not suggested any similar experiment for your present theory. A theory that cannot be tested by experiment is not science. A theory that predicts exactly the same outcome as conventional theory cannot be falsified and serves no purpose.


    Another attempt to resurrect your false Mizuno bucket anecdote conclusions. This has nothing to do with this thread and needn't have been brought up. It just indicates your lack of cogent comments.


    But the techniques I have applied here to Beiting's report are the same ones I applied to the Mizuno bucket anecdote. You have implied there is more info out there on Beiting's work. Great! Let's see it. If it actually doesn't exist, then this Beiting report is also an anecdote and means next to nothing. Let's hope there is more, right?

  • I asked twice, and no clear protocol seemed to be forthcoming.


    Have Jed and cohorts confused you, Alan? What protocol are you asking for? If you are buying JR's hot air about my so-called 'theory', the above post should help clarify that there is no 'theory', just a call for replication, because the data seems inconclusive when the error is examined carefully.

  • I asked you for a description of an experiment that would test your theories,


    I will assume you mean the CCS/ATER thing; if I'm wrong, let me know. I've proposed two in this forum. First, replace the electrodes with a Joule heater that has long leads so it can be placed in the electrolysis cell's gas space. There should already be a heater in the electrolyte in most cells; it is needed. You calibrate with a fixed heat in the gas space (representing the recombination heat, i.e., the thermoneutral voltage times the current) and a varied heat in the electrolyte. Then you run with a lower heat in the gas phase, adding the difference to the electrolyte on top of the 'routine' electrolysis heat. That should simulate the change of heat distribution that I claim causes the CCS. (A back-of-envelope sketch of the heat bookkeeping follows below.)
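    To make the numbers concrete, here is a minimal sketch in Python of that heat bookkeeping. All the values are hypothetical: the ~1.54 V thermoneutral voltage for heavy-water electrolysis is a standard figure, while the current and the fraction of heat moved are made up for illustration.

    E_TN = 1.54  # V, approx. thermoneutral voltage for D2O electrolysis
    I = 0.5      # A, hypothetical cell current

    P_recomb = E_TN * I  # recombination heat normally released in the gas space
    frac_moved = 0.4     # hypothetical fraction of that heat moved to the liquid

    # Calibration condition: all recombination heat delivered in the gas space.
    gas_cal, lyte_cal = P_recomb, 0.0

    # Test condition: same total input power, but part of it delivered to the
    # electrolyte instead; the redistribution that, per the CCS argument, a
    # single calibration constant cannot tell apart from excess heat.
    gas_run = P_recomb * (1 - frac_moved)
    lyte_run = P_recomb * frac_moved

    print(f"recombination heat: {P_recomb:.2f} W")
    print(f"calibration: gas {gas_cal:.2f} W / electrolyte {lyte_cal:.2f} W")
    print(f"test run:    gas {gas_run:.2f} W / electrolyte {lyte_run:.2f} W")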


    Second, redesign the cells so that less heat is lost out the top. All F&P cells have all their penetrations in the top of the cell. Turn the cell upside down. You'll have to move the recombiner or vent line to do that, but now your power leads and TC connections all enter from the bottom. Likewise, do not mount the recombiner holder to the new top of the cell; extend the rods up from the new bottom. That might show the effect, but I'm less sure of that. It might not need the taps through the top; it might just need the top.


    I have also noted, as has THH, that there are limits to what the CCS can do, since one can't move 110% of the recombiner heat. If cases like that could be found in real data, that might disprove the CCS/ATER thing.

  • [Suggests I say:] "When the temperature rises above the calibration curve, one explanation is that there is an additional source of heat. There are other explanations. For any deviation, there has to be some physical mechanism, but finding it is often quite difficult. Without reproducibility it becomes impossible to proceed further."

    I would not say that because:


    I do not know of any other explanations. You are the one saying there are other explanations.


    I do not know what physical mechanism you have in mind, so I cannot say whether it would be difficult or easy to find. However, if you do not find it, you have nothing. You cannot make a scientific assertion without at least specifying how it can be physically tested, and you cannot prove it without actually testing and finding it. (Most of your previous assertions were easy to test, as I said.)


    There is reproducibility. This experiment was reproduced in this paper, and in many other labs. It is a close replication of Takahashi et al. The calibration curves were also reproduced several times in this paper. You have to show why your mechanism does not work with these reproducible calibrations, so you have several data sets to work with already, albeit null ones that do not apply (according to you).

  • I would not say that because:


    I do not know of any other explanations. You are the one saying there are other explanations.



    I know you would not say that. I wrote:

    What you should have written is


    The other explanations involve whatever measurement errors could have been present. There is always a long list of possibilities, but I know you refuse to see that, which is why I said 'should have written'.



    I do not know what physical mechanism you have in mind, so I cannot say whether it would be difficult or easy to find. However, if you do not find it, you have nothing. You cannot make a scientific assertion without at least specifying how it can be physically tested, and you cannot prove it without actually testing and finding it. (Most of your previous assertions were easy to test, as I said.)


    Neither do I; it doesn't matter. No one automatically assumes an anomalous signal is 'true' as soon as they lay eyes on it for the first time (except pathological scientists). But I don't have 'nothing'; I have an anomaly. Again, you don't assume an anomaly is real until you can reproduce it at will, preferably in varying degrees. Your last sentence is silly and, once again, leaves out replication. Try applying that to CF.



    There is reproducibility. This experiment was reproduced in this paper, and in many other labs. It is a close replication of Takahashi et al. The calibration curves were also reproduced several times in this paper. You have to show why your mechanism does not work with these reproducible calibrations, so you have several data sets to work with already, albeit null ones that do not apply (according to you).


    The paper you linked to at the start of this thread has one calibration curve each for (cell 1, TC1, vacuum), (cell 2, TC1, vacuum), (cell 1, TC1, 1 bar N2), and (cell 2, TC1, 1 bar N2). Figure 4.4 shows the data, about 7 points per curve as I recall. Cubic equations are given for each of the variable sets above. It is noted that the TC#2 curves were supposedly not different enough to warrant looking at them here. That is not reproducibility, and that is all I've talked about so far. (In fact, the data shows the cells are slightly different: PwrC1T1Vac at 350 C = 19.654 W and at 300 C = 14.273 W, while PwrC2T1Vac at 350 C = 17.340 W and at 300 C = 12.936 W.)


    The equipment used in these experiments was custom-made, thus no one else has the same potential mix of errors as Beiting. Others may have done similar things. Fine. That is not exact reproduction, but partial reproduction. Their work needs to be examined in the same fashion, and if it doesn't pass muster, it will not be considered even partial replication.


    BUT NONE OF THAT CHANGES THE FACT THAT IT LOOKS LIKE THE REPORTED EXCESS HEAT CAN BE COVERED BY A TRIVIAL EXPERIMENTAL ERROR.


    I'm done arguing with you on this point, as I don't expect you to get it, not because you can't, but because you won't. Ditto for Z.

  • BUT NONE OF THAT CHANGES THE FACT THAT IT LOOKS LIKE THE REPORTED EXCESS HEAT CAN BE COVERED BY A TRIVIAL EXPERIMENTAL ERROR.

    You have not told us what that error might be. I mean what the physical cause of it might be. You have not told us how Beiting et al. can test for the error you have in mind, to confirm they are making it. Until you specify that, your assertion cannot be confirmed or falsified.


    This is a carefully done experiment with numerous calibrations and controls. It is not careless, or unreviewed. So, if this can be explained as a "trivial experimental error" so can millions of other previous experiments using this technique, over the last 150 years. I think it is unlikely that you have discovered a trivial experimental error in such a widely used and reliable experiment.


    The other explanations involve whatever measurement errors could have been present. There is always a long list of possibilities,

    Perhaps there are many possibilities, but you have not listed a single one of them yet. You have to tell us what the error might be, and how Beiting et al. can look for it. An assertion that "there might be many errors" applies equally well to every experiment in history, going back to Newton. No one can look for unspecified errors.


    Neither do I; it doesn't matter.

    If you are saying "neither do I have a physical mechanism in mind" then you could not be more wrong. Not only does it matter -- it is the only thing that matters. This is physics. An assertion that cannot be reduced to a statement about the physical conditions of the experiment, and an assertion that cannot be tested by an experiment, is not physics. By definition. If you cannot say "do this, this and this, and you will see the temperature rise above the calibration curve even though there is no excess heat" then you are not making a scientific statement. That which cannot be tested with objects in the real world, and thereby confirmed or falsified is not science. It is empty sophistry, or playing with numbers that have no connection to reality.

  • it is the only thing that matters. This is physics. An assertion that cannot be reduced to a statement about the physical conditions of the experiment, and an assertion that cannot be tested by an experiment, is not physics. By definition. If you cannot say "do this, this and this, and you will see the temperature rise above the calibration curve even though there is no excess heat" then you are not making a scientific statement. That which cannot be tested with objects in the real world, and thereby confirmed or falsified is not science. It is empty sophistry, or playing with numbers that have no connection to reality.


    Ya know what, Jed? You're right, for the claimant. The critic's job is to point out the error. The claimant has to fix it. There is no further obligation, implied or required, for the critic.
