Uploaded Beiting report from The Aerospace Corporation

  • What you get wrong is the whole of my argument. You focus on out-of-context minutiae. To summarize for you one last time:


    I. A single, non-replicated experiment proves nothing scientifically. IOW, the MBA (the Mizuno bucket anecdote) is interesting, but just that, nothing else.


    II. The normal way one would evaluate a claim about water loss from an open bucket is to examine the evaporation potential as a possible primary cause.

    a.) The easily found equation to do this is the 'swimming pool equation' (SPE); a minimal sketch of one common form follows this summary.

    b.) There are several important variables in the SPE.

    i. Some are known at least to a good approximation, such as bucket size, cell size, cell contents

    ii. Some are not, such as the air flow over the bucket, the actual water temperature, and its time profile

    iii. Some are inconsistent with other information (>100C, but no boiling)

    c.) A parametric study to evaluate the impact of the variables and to compare to available information is useful


    III. The parametric study confirmed the need for certain missing information


    IV. The report of the incident is anomalous and unresolvable without further replication and better data
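
    For concreteness, here is a minimal sketch of one commonly quoted form of the SPE, E = (25 + 19·v)·A·(x_s − x_a), where v is the air speed in m/s, A the water surface area, and x_s, x_a the humidity ratios of saturated air at the water temperature and of the ambient air. All of the numbers below (bucket area, temperatures, humidity, air speeds) are illustrative assumptions, not measured MBA values:

```python
import math

def sat_vapor_pressure_pa(t_c):
    """Saturation vapor pressure of water (Pa), Magnus approximation."""
    return 610.94 * math.exp(17.625 * t_c / (t_c + 243.04))

def humidity_ratio(p_vapor_pa, p_total_pa=101325.0):
    """Humidity ratio (kg water per kg dry air) from the vapor partial pressure."""
    return 0.622 * p_vapor_pa / (p_total_pa - p_vapor_pa)

def spe_evaporation_kg_per_h(area_m2, t_water_c, t_air_c, rh, v_air_ms):
    """One common 'swimming pool equation': E = (25 + 19*v) * A * (x_s - x_a)."""
    x_s = humidity_ratio(sat_vapor_pressure_pa(t_water_c))     # saturated air at the surface
    x_a = humidity_ratio(rh * sat_vapor_pressure_pa(t_air_c))  # ambient air
    return (25.0 + 19.0 * v_air_ms) * area_m2 * max(x_s - x_a, 0.0)

# Parametric sweep over the unknown air speed, from near-still air up to
# 7.6 m/s (roughly the 17 mph extremum discussed later in this thread):
for v in (0.1, 1.0, 3.0, 7.6):
    e = spe_evaporation_kg_per_h(area_m2=0.07, t_water_c=60.0,
                                 t_air_c=25.0, rh=0.5, v_air_ms=v)
    print(f"v = {v:4.1f} m/s -> {e:5.2f} kg/h")
```

    Even this crude sweep shows why the air speed matters: the computed rate varies by roughly a factor of six over plausible ventilation conditions.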

  • I did no such thing. I investigated the system parametrically...


    And in doing so you ignored at least one law of thermodynamics, and conjured up an invisible turbo fume hood.


    These were then held up as examples of the "tactics I have applied to [the Beiting report]"... Which amused me, and possibly Jed* too, and ultimately created the response you desired: discussion of your "fascinating" Mizuno bucket saga.



    But the techniques I have applied here to Beiting's report are the same tactics I applied to the Mizuno bucket anecdote.


    *or whatever other verb applies

  • You focus on out-of-context minutiae.


    Which includes almost all of your writings, so I don't see a problem.

    And you constantly moan about me not understanding your arguments, even claiming they are 'MY CONSTRUCT', before dismissing the embarrassing response as out-of-context minutiae? :/ More ego defence.


    Trying to figure out what that 'error' might be is sometimes amusing, for a while. The MBA is way past that point.


    I agree - It's become a totally pathological behaviour: The supposed errors keep getting more and more ridiculous each time.

  • Zeus46 wrote: "Which includes almost all of your writings, so I don't see a problem."


    So you've read all of Shanahan's works? Then you must have access to classified in-house documents, and after all that reading your knowledge of how to make atomic bombs must be so advanced that the government will have to kill you.

  • Nah, just the unhinged stuff.


    And doesn't everyone know how to make (both) nuclear bombs these days, give or take some minor details that make a bigger bang? ...It's more a case of just not having the time, money, or balls to copy them.

  • People working to test the STM (the Standard Model) use the same methods I use in my comments, except they usually work at 5 sigma levels instead of 3 sigma like most average scientists.


    kirkshanahan: This is the illusion of the STM. They are talking of 5 sigma relative to background, never of 5 sigma relative to theory. I can calculate the pion mass from the proton mass to 5 sigma; what about the Higgs and the STM?

  • Yesterday I had an emergency to deal with, which forced me to truncate my last post. I wanted to add the following:



    The full context quote is:


    I wrote:


    "[Addition]


    [quoting Z] Zeus46 wrote: You are wrong to suggest the observed evaporation could be due to known natural causes.


    Missed this earlier. Classic strawman, a la the group of 10 authors. I claim it could be due to UNknown natural causes. My whole point in this discussion is that we don't have enough info to assign causes."


    In the context of that argument, my use of 'UNknown' was to emphasize that Z and JR (who support one another through both posts and 'likes') were forcefully ignoring the significance of the ventilation rate in the issue at hand, as well as refusing to understand the technical process of parametric studies, which is used to develop a crude idea of what the response surface (the rate of evaporation as a function of temperature, air flow, etc.) looks like. They still do this to this day, and they expect me to accept hearsay 'evidence' from JR on what some of these unknown parameters were. That's not how it works. I went through the process in the first part of this post, put up yesterday.


    So, recognizing that I was trying to bring their omissions to light (and specifically Z's statement quoted above), what I have clearly said multiple times, with no confusion whatsoever, is that the air flow rate over the bucket is critical information, and it is unknown. I have also separately shown how JR routinely confuses things, but always in favor of his views, in typical fanatic fashion, so I refuse to take what he says at face value. All of it must be checked independently, which apparently can't be done in this case. But I also recognize that it is unlikely Mizuno had a hood equivalent to the one I worked in for 8 years; that was the extremum of the response surface. Z claims I am mentally incompetent (paraphrasing) for doing that, but in fact I am following the training I received in this area many years ago: to be 'bold', as it's called, and push the limits a bit. The fact that I chose 17 mph as the maximum flow rate in my calculations does not mean that is what I think was going on in the lab. JR and Z don't get that either.
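
    As an illustration of what such a crude response-surface evaluation looks like in practice, here is a sketch that tabulates a rate function over a (temperature, air speed) grid. The rate function, grid values, and area are stand-ins chosen for illustration, not the values from the original parametric study:

```python
import numpy as np

# Stand-in evaporation-rate function of water temperature (deg C) and air
# speed (m/s); any physically plausible monotone function illustrates the idea.
def evap_rate_kg_per_h(t_water_c, v_air_ms):
    return (25.0 + 19.0 * v_air_ms) * 0.07 * 0.01 * np.exp(0.05 * (t_water_c - 25.0))

temps = np.array([40.0, 60.0, 80.0, 100.0])  # assumed water temperatures, deg C
speeds = np.array([0.1, 1.0, 3.0, 7.6])      # 7.6 m/s is roughly 17 mph

T, V = np.meshgrid(temps, speeds, indexing="ij")
surface = evap_rate_kg_per_h(T, V)           # rate at every (T, v) grid point
print(np.round(surface, 2))                  # rows: temperature, columns: air speed
```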




    Z is a troll and JR is a fanatic. They both seek to confuse what I say for their own personal reasons. In the process they resort to illegitimate argumentation tactics and finally to insults. I will seek from now on to avoid answering them. If they try to make some point that I feel misleads unduly I may comment, but I will try to minimize that.

  • And K is a loon who changes his arguments with his socks - which is confusing to others - and apparently so confusing to himself that one month later he is accusing others of 'inventing' his former statements. How ridiculous.

  • Previously there was a question about the statistical methodology (sometimes known as Multivariate Regression, MR) used by Beiting for calibration. I had asserted that the R^2 values were not enough to definitively choose the correct model, and I was challenged on that. I have previously recommended a good source of information in this thread: http://blog.minitab.com/blog/a…the-best-regression-model


    Subsequently I have found a couple more you might like---


    The discussion here (https://statistics.laerd.com/s…using-spss-statistics.php) gives some interesting background on the assumptions underlying MR.


    The page ( https://stats.idre.ucla.edu/sa…iate-regression-analysis/ ) gives directions for doing multiple regression in SAS (PC version known as PC JMP). The key thing here is the table located right above this statement (use <CTRL>F to find it): “The table above gives the parameter estimates, their standard errors, t-value, and associated p-value.” That table shows the t and p values I was mentioning for the sample data set they are using.


    Note that these days I use MATLAB for these kinds of analyses. Here’s a link to MR from them: https://www.mathworks.com/help…variate-regression-2.html


    Also note there is a freeware analog of MATLAB called SCILAB. Here is a paper describing how to do MR with SCILAB:

    http://www.tf.uns.ac.rs/~omorr…lab/Gilberto/scilab17.pdf

    Especially important is the polynomial case explanation starting on page 42.


    For even more info on checking the quality of fit, see:

    http://dept.stat.lsa.umich.edu…401/Notes/401-multreg.pdf
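
    For anyone who would rather reproduce that kind of coefficient table without Minitab, SAS, or MATLAB, here is a small Python sketch using statsmodels on synthetic data (the calibration data and cubic model are invented for illustration; they are not Beiting's numbers):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic calibration data: the "true" response is quadratic plus noise.
p_in = np.linspace(1.0, 20.0, 25)
t_rise = 0.8 * p_in + 0.01 * p_in**2 + rng.normal(0.0, 0.3, p_in.size)

# Fit a cubic model; the design matrix columns are [1, x, x^2, x^3].
X = sm.add_constant(np.column_stack([p_in, p_in**2, p_in**3]))
fit = sm.OLS(t_rise, X).fit()

print("R^2          :", round(fit.rsquared, 4))
print("adjusted R^2 :", round(fit.rsquared_adj, 4))
for name, b, se, t, p in zip(["const", "x", "x^2", "x^3"],
                             fit.params, fit.bse, fit.tvalues, fit.pvalues):
    print(f"{name:5s}  coeff = {b: .5f}  std.err = {se:.5f}  t = {t: .2f}  p = {p:.3f}")
```

    The per-coefficient standard errors, t-values, and p-values printed here are the same quantities the SAS table described above reports.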

  • Seems some need more spoon-feeding...OK...last shot...if this isn't good enough then you need to take a statistics course or two (or three...).


    From the Minitab reference I pointed to previously:


    “Adjusted R-squared and Predicted R-squared: Generally, you choose the models that have higher adjusted and predicted R-squared values. These statistics are designed to avoid a key problem with regular R-squared—it increases every time you add a predictor and can trick you into specifying an overly complex model.” [emphasis added]


    “low p-values indicate terms that are statistically significant.”


    “The p-value for each term tests the null hypothesis that the coefficient is equal to zero (no effect). A low p-value (< 0.05) indicates that you can reject the null hypothesis.”


    From https://dss.princeton.edu/onli…terpreting_regression.htm


    “The t statistic is the coefficient divided by its standard error. The standard error is an estimate of the standard deviation of the coefficient, the amount it varies across cases. It can be thought of as a measure of the precision with which the regression coefficient is measured. If a coefficient is large compared to its standard error, then it is probably different from 0.”


    “Your regression software compares the t statistic on your variable with values in the Student's t distribution to determine the P value, which is the number that you really need to be looking at.”


    Use P to assess the quality of fit of the model and t to assess the numerical error of a coefficient – both are important.



    {addition}


    From the Minitab site:


    "T and P are inextricably linked. They go arm in arm, like Tweedledee and Tweedledum. Here's why.


    When you perform a t-test, you're usually trying to find evidence of a significant difference between population means (2-sample t) or between the population mean and a hypothesized value (1-sample t). The t-value measures the size of the difference relative to the variation in your sample data. Put another way, T is simply the calculated difference represented in units of standard error. The greater the magnitude of T (it can be either positive or negative), the greater the evidence against the null hypothesis that there is no significant difference. The closer T is to 0, the more likely there isn't a significant difference."
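
    A quick way to see the quoted point about plain R-squared in action is to fit polynomials of increasing order to the same data and compare R-squared with adjusted R-squared. The data below are synthetic, purely for illustration:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic data whose "true" model is quadratic; degrees above 2 add
# useless predictors.
x = np.linspace(1.0, 20.0, 25)
y = 0.8 * x + 0.01 * x**2 + rng.normal(0.0, 0.3, x.size)

# Plain R^2 never decreases as terms are added; adjusted R^2 penalizes
# the extra predictors, as the Minitab quote above explains.
for degree in (1, 2, 3, 4):
    X = sm.add_constant(np.column_stack([x**k for k in range(1, degree + 1)]))
    fit = sm.OLS(y, X).fit()
    print(f"degree {degree}: R^2 = {fit.rsquared:.5f}, adj. R^2 = {fit.rsquared_adj:.5f}")
```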

  • kirkshanahan I think that you are describing the stepwise regression method, which is something I recommend people use. But it was indicated that Beiting used many more points than are in the table in the paper, and that the precision is very high, as can be seen by the curves following each other as expected from a linearization of the multivariable target function. I agree with you in principle that, essentially, mean-square estimation with polynomials of the same order as the number of points is a big no-no, but the question is whether this really is the case. Also, in case you start to reach measurements at the noise level, I recommend people to contact their university statistics or mathematical-statistics department, just to get the methods right, because it can be tricky.


    Noise can be white or colored, but generally, when you see that the measured value is above the noise level and you integrate for a long time, you are safe from integrated noise. If the measured value is in the noise, then things can be tricky. The reason for this 'safeness' is basically that if the mean tends to a constant, then the sigma of the mean is of order sigma / sqrt(n).
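
    A short sketch of that sqrt(n) behaviour, with an assumed small constant signal buried in white noise of unit sigma (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

sigma, signal = 1.0, 0.05   # unit white noise hiding a small constant signal
for n in (100, 10_000, 1_000_000):
    samples = signal + rng.normal(0.0, sigma, n)
    # The sample mean converges to the signal while its scatter shrinks
    # as sigma / sqrt(n), so the signal emerges with enough integration.
    print(f"n = {n:>9,d}  sample mean = {samples.mean(): .4f}  "
          f"sigma/sqrt(n) = {sigma / np.sqrt(n):.4f}")
```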


    P.S. I use the statistical software R and find it very powerful when I help out with statistics in research. D.S.

  • stefan


    Yes, I am definitely describing stepwise regression, primarily because some posters here didn't seem to understand how to do it and kept asking for help. Presumably their questions are now answered. If not, oh well, I tried...


    I am aware of 'R' (the language) but don't know it. I wondered if I should mention that it shouldn't be confused with the R of the R^2 values...


    Getting back to the main points I was making (and out of the statistical minutiae discussions): the first and most important point is that Beiting did not compute his error in output power appropriately. He started out well in that he used the POE (propagation of error) equation, but then he only computed it for the T variable, which is often the least significant contributor to the error. The calibration constants he computed from his calibration experiments are experimentally determined numbers that come with their own error, which must then be propagated through the computations to the output power. So he needs to fix that omission.


    Next, the error he lists on his calibration curves seems to be the 'standard error of y', and not the errors of the coefficients, which are the ones used in the POE. While Beiting says he used more points, my hand-digitized version gives roughly the same values as he reports, which leads me to believe he is not being clear about what he means. He needs to clarify this.


    Finally, he does not report on why he chose the cubic equation. I found the quartic and quadratic to be almost as good (or maybe better), and I showed that this (the model choice) induced a difference in computed output power that encompassed the reported excess heats, i.e. his 'signal' could be due to a math problem, not a real heat source. More info is needed from him to validate the choice of calibration equation form.
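
    To make that model-choice sensitivity concrete, here is a sketch that fits quadratic, cubic, and quartic calibrations to the same sparse synthetic data and prints the predicted value at one operating point; the spread across model choices is the kind of ambiguity described above (all numbers are invented, not Beiting's):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Sparse synthetic calibration data (invented; not Beiting's measurements).
x = np.linspace(1.0, 20.0, 8)
y = 0.8 * x + 0.01 * x**2 + rng.normal(0.0, 0.3, x.size)

x0 = 18.5  # hypothetical operating point near the edge of the calibration range
for degree in (2, 3, 4):
    X = sm.add_constant(np.column_stack([x**k for k in range(1, degree + 1)]))
    fit = sm.OLS(y, X).fit()
    basis = np.array([x0**k for k in range(degree + 1)])
    print(f"degree {degree}: predicted calibration value at x0 = {basis @ fit.params:.4f}")
```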


    All that was the first part of a critical review of his report, which was announced here as 'one of the best ever'. While Beiting does at least use the POE process, he didn't do it right.


    There is a Part II and a Part III also, but I'm tired of trolls. The Part I analysis says the report is typical of the field, i.e. inconclusive.

  • @k, I think there's no point in discussing the possible model errors when the actual measured points in the validation data are unknown. If you use the data in the table, the model error can be quite large and you get the phenomenon that you described here. But as the number of points grows larger, the statistical model error diminishes and can be irrelevant for the conclusion, unless there are extra variations in the actual experiment not caught by the validation effort.

    Also, something not discussed here is that a low number of statistical points is bad for the math, because much of the applicability of the formulas depends on asymptotic theoretical results of the central-limit-theorem kind.


    If the number of calibration points is moderate in size, then you need to take the model errors into account. I would, though, like to ask Beiting et al. to improve the paper by including an estimate of the model error even if it is small. In medical papers one usually never produces an estimate without indicating its variation. If the errors are significant, I or you can suggest methods for the researchers to make a good enough estimate of the error in the final estimate.


    Finally, the errors in the estimates are not independent; to do a good analysis of error propagation, the covariance matrix is needed. Having this, it is an easy matter to randomize the values of the parameters and estimate their influence on the final estimate.
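
    A minimal sketch of that randomization approach, assuming a statsmodels-style fit that exposes the joint coefficient covariance matrix (the calibration data and operating point below are invented for illustration):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Illustrative calibration fit (same spirit as the earlier sketches).
x = np.linspace(1.0, 20.0, 25)
y = 0.8 * x + 0.01 * x**2 + rng.normal(0.0, 0.3, x.size)
X = sm.add_constant(np.column_stack([x, x**2, x**3]))
fit = sm.OLS(y, X).fit()

# Monte Carlo propagation: draw coefficient vectors from their joint
# (correlated) normal distribution and look at the spread of the prediction.
draws = rng.multivariate_normal(fit.params, fit.cov_params(), size=10_000)
x0 = 12.0                                  # hypothetical operating point
basis = np.array([1.0, x0, x0**2, x0**3])
preds = draws @ basis
print(f"prediction at x0 = {x0}: {basis @ fit.params:.4f} "
      f"+/- {preds.std(ddof=1):.4f} (1 sigma, correlated coefficients)")
```

    Because the draws come from the full covariance matrix rather than from the coefficient standard errors independently, the correlations between coefficients are respected, which is exactly the point made above.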
