@Z
Ah yes, more hot air, your standard contribution.
I find that the fastest way to judge the quality of a report is to see how much it angers and/or threatens the ego of those odd pathologically-skeptical types. Using the Beiting report as an example:
So, no technical wherewithal used at all. Seems about right based on your contributions to this forum...
1) Do skeptics feel the need to besmirch the author's integrity? For example, by heavily insinuating that they believe the author intentionally fiddled their calibration curve:
You quoted 3 points I made with no technical explanation at all (which you apparently are incapable of doing), so let me add a couple of technical comments instead:
1)" B also tries to do the energy per unit mass trick..."
This has been an issue since day 1 of the CF saga. Does one use bulk or surface measures? Is the effect a surface effect or a bulk effect? One significant point from Storms' work on Pt that I have previously noted is that Pt does not hydride. Therefore its CF signal must be surface derived. That suggests that increasing the surface area will increase the CF. So what do we see the field doing? First, they went to the co-dep process, which produces a high-surface-area, dendritic Pd. Next they went 'nano'. So B using energy-per-unit-mass values is misleading. Further, there are a lot of issues about choosing the mass to use (which I mentioned before, but that was a technical comment that Z probably didn't understand).
BTW this doesn't 'besmirch the author's integrity' unless you assume the author is incapable of making a mistake. This is called 'peer review'.
2) "I had to ask why he used a cubic equation...."
B doesn't say technically why he chose a cubic. My quick look suggested there isn't much difference in 2nd, 3rd, and 4th order fits. So what's the problem in asking why?
3) "In other words, simple piddling around with the calibration equation covered the signal detected. That means to believe the calibrations, we need a lot more info on how he chose his equations"
See above comment. It is a technical issue so Z probably just doesn't understand...
2) Does the skeptic need to invent alternate laws of thermodynamics, statistical techniques, or invisible fume hoods, in order to help their argument? (Extra credit if they keep using guilty inverted commas to describe it).
Your quote does not support your contention.
Sensitivity analysis is a standard technique (for example, see https://www.edupristine.com/bl…boutsensitivityanalysis), but it is technical.
3) Does the skeptic feel it necessary to reinforce their arguments by authoritatively spouting total nonsense, implying a deeper knowledge of a topic than they truly possess, in the hope that no one will call them out on it?
You need to study up on chemometrics. But that is a technical area...
4) Does the skeptic resort to ANGRY CAPS ranting, in the manner of fellow mouth-foamer, Mary Yugo (RIP)?
I only do that after repeated failures to understand indicate that the failure is a deliberate choice. If you don't like it, tough. Or stop deliberately not understanding...
BTW, your quote:
Although I guess when someone mentions a reliance on 'canned routines', maybe it's unfair to expect a deeper understanding of the issues at hand.
is hilarious. Do you think I would work better and faster if I just used my fingers and toes? ROFL.
So zeus, you know so much, why don't you help out and tell us how exactly to judge the quality of Beiting's report. Or is that beyond your skill set?
An industrial scientist. Presumably one with a limited publication history.
Well that guy has a low opinion of himself. Just because you work in industry doesn't mean your pubs are 'rubbish'...
it is the only thing that matters. This is physics. An assertion that cannot be reduced to a statement about the physical conditions of the experiment, and an assertion that cannot be tested by an experiment, is not physics. By definition. If you cannot say "do this, this and this, and you will see the temperature rise above the calibration curve even though there is no excess heat" then you are not making a scientific statement. That which cannot be tested with objects in the real world, and thereby confirmed or falsified is not science. It is empty sophistry, or playing with numbers that have no connection to reality.
Ya know what Jed, you're right, for the claimant. The critic's job is to point out the error. The claimant has to fix it. There is no further obligation implied or required for the critic.
I would not say that because:
I do not know of any other explanations. You are the one saying there are other explanations.
I know you would not say that. I wrote:
What you should have written is
The other explanations involve whatever measurement errors could have been present. There is always a long list of possibilities, but I know you refuse to see that, which is why I said 'should have written'.
I do not know what physical mechanism you have in mind, so I cannot say whether it would be difficult or easy to find. However, if you do not find it, you have nothing. You cannot make a scientific assertion without at least specifying how it can be physically tested, and you cannot prove it without actually testing and finding it. (Most of your previous assertions were easy to test, as I said.)
Neither do I, it doesn't matter. No one automatically assumes an anomalous signal is 'true' as soon as they lay eyes on it for the first time (except pathological scientists). But I don't have 'nothing', I have an anomaly. Again, you don't assume an anomaly is real until you can reproduce it at will, preferably in varying degrees. Your last sentence is silly and once again, leaves out replication. Try applying that to CF.
There is reproducibility. This experiment was reproduced in this paper, and in many other labs. It is a close replication of Takahashi et al. The calibration curves were also reproduced several times in this paper. You have to show why your mechanism does not work with these reproducible calibrations, so you have several data sets to work with already, albeit null ones that do not apply (according to you).
The paper you linked to at the start of this thread has one calibration curve each for (cell 1, TC1, vacuum), (cell 2, TC1, vacuum), (cell 1, TC1, 1 bar N2), and (cell 2, TC1, 1 bar N2). Figure 4.4 shows the data, about 7 points per curve as I recall. Cubic equations are given for each of the variable sets above. It is noted that the TC#2 curves were supposedly not different enough to warrant looking at them here. That is not reproducibility, and that is all I've talked about so far. (In fact, the data shows the cells are slightly different: PwrC1T1Vac at 350C = 19.654 W and at 300C = 14.273 W, while PwrC2T1Vac at 350C = 17.340 W and at 300C = 12.936 W.)
The equipment used in these experiments was custom made, thus no one else has the same potential mix of errors as Beiting. Others may have done similar things. Fine. That is not exact reproduction, but partial reproduction. Their work needs to be examined in the same fashion, and if it doesn't pass muster, it will not be considered even partial replication.
BUT NONE OF THAT CHANGES THE FACT THAT IT LOOKS LIKE THE REPORTED EXCESS HEAT CAN BE COVERED BY A TRIVIAL EXPERIMENTAL ERROR.
I'm done arguing with you on this point, as I don't expect you to get it, not because you can't, but because you won't. Ditto for Z.
I asked you for a description of an experiment that would test your theories,
I will assume you mean the CCS/ATER thing. If I'm wrong, let me know. I've proposed two in this forum. First, replace the electrodes with a Joule heater that has long leads so it can be placed in the electrolysis cell's gas space. There should already be a heater in the electrolyte in most cells; it is needed. You calibrate with a fixed heat in the gas space (representing the recombination heat, i.e., the thermoneutral voltage times the current) and varied heat in the electrolyte. Then you run with a lower heat in the gas phase but add that heat to the electrolyte, on top of the 'routine' electrolysis heat. That should simulate the change in heat distribution that I claim causes the CCS.
Second, redesign the cells so that less heat is lost out the top of the cell. All F&P cells have all their penetrations in the top of the cell. Turn the cell upside down. You'll have to move the recombiner or vent line to do that, but now your power leads and TC connections all enter from the bottom. Likewise, do not mount the recombiner holder on the new top of the cell; extend the rods up from the new bottom. That might show the effect, but I'm less sure of that. Might not need the taps through the top, might just need the top.
I have also noted, as has THH, that there are limits to what the CCS can do, since one can't move 110% of the recombiner heat. If cases like that could be found in real data, that might disprove the CCS/ATER thing.
I asked twice, and no clear protocol seemed to be forthcoming.
Have Jed and cohorts confused you, Alan? What protocol are you asking for? If you are buying JR's hot air about my so-called 'theory', the above post should help clarify that there is no 'theory', just a call for replication, because the data seems inconclusive when the error is examined carefully.
This experiment has been replicated several times at Aerospace, and thousands of times elsewhere. The same technique has been used millions of times over the last 150 years.
This is your standard misdirection tactics again Jed. Please stop trying to confuse the issues.
In the Beiting report, there was 1 (count them, 1) reported cal curve for each cell/thermocouple pair, not 'several'. You have claimed that they did more. Claims are vaporware. Cite the paper (NOT an abstract) or shut up.
Your 'thousands of times elsewhere' is your continuous fanatic chant. It isn't true, but you won't recognize that. The rest of us do.
Similar techniques may have been used in the past 150 million years for all I know. All I am commenting on in this thread is the impact of possible experimental variation on Beiting's conclusions in the report you uploaded. Your tactic of trying to drag in every use of calorimetry in history is irrelevant.
When the temperature rises above the calibration curve, the conventional explanation is that there is an additional source of heat. You are saying there is another explanation. If so, there has to be some physical mechanism, and you have to be able to tell us what test would reveal this mechanism.
This is funny. I clearly recall you ranting and raving over the years about how not having a theory to explain CF didn't mean jack. Now you sit here and claim I have to supply a 'mechanism', which is nothing but a theory.
By the way, what you started off with is incorrect too. What you should have written is:
"When the temperature rises above the calibration curve, one explanation is that there is an additional source of heat. There are other explanations. For any deviation, there has to be some physical mechanism, but finding this is often quite difficult. Without reproducibility it becomes impossible to proceed further."
If you cannot do this, your theory predicts the same result as conventional theory does, and there is no way to confirm or falsify your claim, so people will say the conventional explanation is correct.
A.) I have not proposed any theory. You attempt to confuse the reader by postulating a variety of issues and attributing them to me, a tactic you learned from your CF heroes and their strawman publication in the 2010 J. Env. Mon. paper.
B.) What I have proposed, which is standard science, is that the 1 calibration curve presented by Beiting for the Cell #2, TC#1 combo is susceptible to variation, and that needs to be considered as to its potential impact on conclusions. That almost qualifies as a 'Law', Jed, because after we figured out we needed to experimentally test 'scientific' theories such as an earth-centric solar system, we figured out that reproducing measurements didn't guarantee getting the same number, i.e., experiments have variation. We recognized, as perhaps the second most important concept of modern science, that we need to quantify that variation. We do that via replication. (P.S. 'We' is 'mainline science'.)
Note that your previous claims were easy to test, as follows:
Heat up a frying pan and see if you can tell it is hot by holding it with a potholder, or holding your hand over it.
Remove it from the stove, let it sit for 3 days, and see if it is still hot.
Put a bucket of water in a room and see if it evaporates overnight. You have not suggested any similar experiment for your present theory. A theory that cannot be tested by experiment is not science. A theory that predicts exactly the same outcome as conventional theory cannot be falsified and serves no purpose.
Another attempt to resurrect your false Mizuno bucket anecdote conclusions. This has nothing to do with this thread explicitly and needn't have been brought up. It just indicates your lack of cogent comments.
But the techniques I have applied here to Beiting's report are the same ones I applied to the Mizuno bucket anecdote. You have implied there is more info out there on Beiting's work. Great! Let's see it. If it actually doesn't exist, then this Beiting report is also an anecdote and means next-to-nothing. Let's hope there is more, right?
What extra information would the tvalues give you, if both the data set and measured variable are the same?
Back in the Age of Dinosaurs, I used a software package called RS/Series extensively for data analysis. When I did MLR it had canned routines to step you through the process I am describing. It would literally look at the t (or p) values for each term and tell you whether you should drop it from the model or not. When I figured out how to get some of this info from Excel for the post above, I did all 3 models (quadratic, cubic, quartic) and looked at the standard errors of the coefficients (which are used in the t statistic calculation, and from that the p). It turns out none of them stands out as head and shoulders better than the others, which means my use of the three models to calculate power from T based on Beiting's data was relevant, and the spread in P found that way is a possible estimate of the error in the computed P. Definitely need more data... (which is always the answer when questions remain)
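For the curious, the term-significance check described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the RS/Series routine: the coefficient and standard-error values are taken from the LINEST output posted in this thread, the minus sign on the constant is inferred from the fit going negative near 0 C (it does not affect |t| anyway), and 3.182 is the standard two-sided 95% t critical value for 3 degrees of freedom.

```python
# Sketch: t-statistics for the cubic-fit terms, using the coefficient and
# standard-error values from the LINEST post in this thread (dof = 7 - 4 = 3).
coef = [1.15621e-7, 4.71e-5, 0.016991, -0.38431339]   # T^3, T^2, T, constant
se   = [2.42855e-8, 1.25e-5, 0.00179, 0.065779325]    # standard errors
t_crit = 3.182   # two-sided 95% critical t for 3 degrees of freedom

for name, c, s in zip(["T^3", "T^2", "T", "const"], coef, se):
    t = c / s
    verdict = "keep" if abs(t) > t_crit else "candidate to drop"
    print(f"{name:5s} t = {t:7.2f} -> {verdict}")
```

With these numbers every term clears the 95% bar for this particular cubic; a full model-selection exercise would repeat the same check for the quadratic and quartic fits.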
And I recollect someone claiming that industrial scientists tend to have rubbish publication records.
Who said that?
Oops, forget this one...
you cannot propose a test that would reveal this error of yours?
Of course I can. So did Stefan. It's called 'replication'. (followed by doing the math right)
Yet you claim they made a mistake that you discovered in an hour. Are you quite certain of that?
Absolutely. They used an equation that requires the examination of all experimental variables to estimate error, and they only looked at 1 of 5 experimentally determined numbers. That's an 80% miss. Doesn't take a rocket scientist to see that!
Has it crossed your mind that you might be wrong, especially since you cannot propose a test that would reveal this error of yours?
Of course it has. I make mistakes all the time. But what I did is pretty idiot-proof. You just back out the heat signal from the heat-per-unit-mass signal and then check and see if little tweaks to the equation constants can cover it. Simple and easy, and I've now gone through how I did it in excruciating detail, even when I said I wouldn't, just so you can understand, Jed. Try to keep up....
And all of the calibration tests show nothing.
All one of them you mean?
Cold fusion was confirmed by the creme de la creme of scientists
Well, that's exaggerating a bit but it really isn't that important. Doesn't matter who you are, you can still make mistakes. You need to read up on how 'creme de la creme' scientists make mistakes just like 'the rest of us'.
I wasn't aware that SRI couldn't calculate the error levels in a calorimeter. Amazing, I could do it before I left school.
I'm sure they could if they were aware of the need. That is usually the problem. 'Old school' guys just wing it and talk about 5 or 10% errors as if that's all you need to do. The 2004 Szpak, Mosier-Boss, Miles, and Fleischmann paper I commented on in my 2005 publication had a discrepancy in the collected volume of water that was just a few %, but it was positive. I commented that it was likely entrained water droplets. In the peer review, the reviewers claimed it was 'just noise'. 'Old school' vs. 'new school'.
I'm glad you got this in school. I've asked a lot of people about this, and my observation is that the coverage is spotty. I got it in my undergrad junior-level p-chem lab course. I asked Steve Jones when he got it and he replied 'in grad school'. I worked with a PhD chemist who told me he'd never seen it. That's why I keep trying to explain it here; I assume many have never heard of this. Maybe I'm wrong, but the evidence in the CF literature says I'm not. What I have seen consistently is the use of the baseline noise of the calorimeter as the 'error' of the technique. However, that's not the full error suggested by my reanalysis of the Storms data, nor by the study here on Beiting's data (and I note that Beiting actually did the POE for one variable (T) in his power equation), nor in many other places I've discussed in the forum before.
P.S. To JR and Z: If you paid attention, the prior post explains why an 8% shift would still be less than 1 sigma.
For those who don’t know how to extract the standard errors of the regression coefficients…
Using MS Excel LINEST function…
(Per the LINEST function help there is supposed to be a way to give the function a set of x data and have it internally compute the quadratic and cubic terms of a cubic fit, but on short notice it wasn’t working for me, so I did it manually).
The hand-digitized data for the Beiting curve I've been discussing (y = Power, x = Temp), plus the T^2 and T^3 values, are below (recall I said this data is not exactly correct, since my cubic fit coefficients came out different than Beiting's, but it serves to make the point):
P          T          T^2        T^3
0          20.625     425.3906   8773.682
1.357576   82.5       6806.25    561515.6
3.258182   139.375    19425.39   2707414
5.735758   198.125    39253.52   7777103
8.824242   253.75     64389.06   16338725
12.69333   307.8125   94748.54   29164783
13.49091   317.8125   101004.8   32100583
There are 3 columns of x values. Select an empty region that is 4 cols wide and 5 rows deep. Click on the formula bar text entry field and type “=linest(“ (no quotes). Then select the Y values with the mouse, type a comma, select all the x values with the mouse, type a comma, type “TRUE,TRUE)” (no quotes). And then press <Control><Shift><Enter> simultaneously. The formerly empty region will fill with the following info, EXCEPT FOR the first ROW (Row ‘0’) below, which I added afterwards:
Row 0:  0.210044449   0.265615   0.105368   0.171160638
Row 1:  1.15621E-07   4.71E-05   0.016991   -0.38431339
Row 2:  2.42855E-08   1.25E-05   0.00179    0.065779325
Row 3:  0.999969958   0.041552   #N/A       #N/A
Row 4:  33285.48887   3          #N/A       #N/A
Row 5:  172.4122894   0.00518    #N/A       #N/A
'Row 0', the one I added, is just Row 2 divided by Row 1, i.e., it is an error fraction (multiply by 100 to get %).
Row 1 is the fit's coefficients in decreasing order (i.e., the coeff for T^3 first, additive constant last).
Row 2 is the standard errors of the coefficients.
Row 3, col 1 is the R^2; col 2 is the standard error of Y.
Row 4, col 1 is the F statistic; col 2 is the degrees of freedom.
Row 5, col 1 is the Regression Sum of Squares; col 2 is the Residual Sum of Squares.
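For anyone without Excel, here is a rough Python equivalent of the LINEST calculation (a sketch assuming numpy; the data is the hand-digitized set posted above, so the outputs should land close to the LINEST array, not exactly on Beiting's own coefficients):

```python
import numpy as np

# Hand-digitized (P, T) data from the table above.
T = np.array([20.625, 82.5, 139.375, 198.125, 253.75, 307.8125, 317.8125])
P = np.array([0, 1.357576, 3.258182, 5.735758, 8.824242, 12.69333, 13.49091])

X = np.vander(T, 4)                  # columns: T^3, T^2, T, 1 (LINEST order)
beta, *_ = np.linalg.lstsq(X, P, rcond=None)

resid = P - X @ beta
dof = len(P) - X.shape[1]            # 7 points - 4 coefficients = 3
s2 = resid @ resid / dof             # variance of Y about the fit
se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))   # coefficient std. errors

rss = resid @ resid
tss = ((P - P.mean()) ** 2).sum()
r2 = 1 - rss / tss

print("coefficients:", beta)
print("std errors:  ", se)
print("R^2 = %.6f, SE(Y) = %.6f, RSS = %.5f" % (r2, np.sqrt(s2), rss))
```

The coefficient row, standard-error row, R^2, SE(Y), and residual sum of squares should all line up with the corresponding LINEST rows above.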
What I did in examining the potential impact of experimental error in the determination of the power calibration equation is multiply each term by '1.0x', where the digit 'x' was 1 to 5, which is just increasing the coefficient by x%. (Later I also subtracted, i.e., 0.96 instead of 1.04.)
The 1-sigma value on the coefficients is almost 10X that, so my 'piddling' was on the trivial level numerically, but the results of that study were that the calculated powers varied enough to 'cover' the globally reported excess heat rate of 0.944 W. IOW, the ~1 W excess reported is well within the noise band of the calibration.
Now, running multiple calibration runs might help because it might show that the coefficient error bands are actually smaller. Or it might just confirm the current error levels (or even worsen them of course). The point is that you have to get that data to know.
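A minimal sketch of that coefficient-perturbation ('piddling') exercise, using the LINEST cubic coefficients from this thread (the constant's sign inferred from the fit going negative near 0 C):

```python
# Sketch of the '1.0x' coefficient-perturbation exercise described above.
coef = [1.15621e-7, 4.71e-5, 0.016991, -0.38431339]   # T^3 .. constant

def power(T, c):
    """Calibration power (W) computed from temperature T (C)."""
    return c[0]*T**3 + c[1]*T**2 + c[2]*T + c[3]

T = 350.0
base = power(T, coef)
for pct in range(1, 6):                     # 1% .. 5%, as in the text
    f = 1 + pct / 100.0
    shifted = power(T, [f * c for c in coef])
    print(f"+{pct}% on all coefficients -> shift of {shifted - base:+.3f} W")
```

Since scaling every coefficient by a common factor f scales the whole polynomial by f, the shift is just (f - 1) times the computed power; with these coefficients the 5% case at 350C comes out to roughly 0.8 W, the order of the ~820 mW figure quoted elsewhere in this thread.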
Some food for thought for those who refuse to understand the concept of defining the error in computed values and then studying its impact...
We all know that a primary sign of pathological science is 'working in the noise'. If we think about it for a couple of seconds, we'd all agree (I'd guess) that those scientists who end up labeled with the 'pathological' label didn't deliberately set out to earn that moniker. So why do they end up with it? I'd suggest the primary reason is that they have or develop inaccurate ideas about the error levels in their work. Now, if they mistakenly assume the error level is too large, they would abandon work too soon, thinking they were 'working in the noise'. That's not so much a problem, except that possible advances might be missed by quitting too early.
The problem really comes when the researchers assume too good an error level. Then they fool themselves into thinking that results that are actually in the noise are not, but in fact are very significant. This is what I observe occurring with most CF claims. Therefore I recommend and apply a more objective assessment of error levels using standard statistical methods, primarily propagation of error (or uncertainty) calcs and some version of response surface modeling (the quick-and-dirty version being 'sensitivity analysis') to map out the impact of these errors. (Of course the use of statistics requires replication for reliable results, but some insight can be gleaned from one-time events in some cases.) That is my standard approach, which I used in my 2002 paper that suggests a systematic error in CF calorimetry, and in the analysis here of Beiting's report (and in the Mizuno bucket anecdote, the Mizuno air flow calorimeter data JR uploaded, the McKubre M4 run, and so on). Accurately determining error levels is the only way to avoid working in the noise.
So double the number of points and the statistics get back on track.
Well, sort of. Yes, increasing the number of points is always helpful and I definitely recommend that too (it's called 'replication'). It likely would assist in deciding which model to use (quadratic, cubic, etc.). It also might help in defining the standard deviations of the coefficients, but you also have to allow for other things to show up by repeating the calibration run at later points in time. However, it won't tell you about any systematic errors.
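As a toy numerical illustration of stefan's point (assuming numpy; this uses exact duplicates of the hand-digitized points, which of course is weaker than genuine replication spread out in time):

```python
import numpy as np

# Toy illustration: refit the cubic after doubling every data point.
# Exact duplicates are NOT real replication, but they show how the
# coefficient standard errors tighten as N grows.
T = np.array([20.625, 82.5, 139.375, 198.125, 253.75, 307.8125, 317.8125])
P = np.array([0, 1.357576, 3.258182, 5.735758, 8.824242, 12.69333, 13.49091])

def fit_se(T, P):
    """Cubic least-squares fit; return the coefficient standard errors."""
    X = np.vander(T, 4)
    beta, *_ = np.linalg.lstsq(X, P, rcond=None)
    resid = P - X @ beta
    s2 = resid @ resid / (len(P) - 4)
    return np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))

se7 = fit_se(T, P)                              # original 7 points
se14 = fit_se(np.tile(T, 2), np.tile(P, 2))     # each point duplicated
print("SE ratio (14 pts / 7 pts):", se14 / se7)  # ~0.55 across the board
```

The ratio is sqrt(3/10) here (the dof goes from 3 to 10 and the design matrix doubles), so even fake duplication tightens the coefficient error bands, while telling you nothing about systematic errors or drift between runs, which is the caveat above.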
Well really, he failed to compute your whimsical ideas about the possible error inherent in his calibration curve.
https://en.wikipedia.org/wiki/Propagation_of_uncertainty
Not 'my' whimsical idea, mainline science's.
Either that or you are trolling us.
Or you are incapable of understanding the issues, an interpretation your comments seem to favor.
It seems unlikely that such people overlooked a problem that Shanahan found in an hour or so.
That is the nature of systematic errors. Or lack of training.
Kirk: I suggest you explain why this 8% shift only occurs in cells with the material produced at Ames in specified conditions, and not during calibrations or with control cells.
Why would I try to explain your proposition?
It is an observation, not a measurement. People familiar with the Corporation say this, and I think the web site backs it up. If you do not think they are world-class experts, perhaps you should tell us why.
Well, if it is 'just' an observation with no numerical quality to it at all, then you can't claim it establishes any type of priority to the organization vs. others. I believe that in fact you are asserting that the work coming from the Aerospace Corporation is better than that from many other organizations. And my intent is to point out that that kind of 'observation' is pointless. If that method was valid, I could claim that MIT was a fly-by-night university since Hagelstein doesn't understand the difference between random and systematic. That of course would be completely incorrect about MIT. What Hagelstein does reflects primarily on Hagelstein, and only marginally on MIT. Likewise, what any Aerospace Corp. employee does reflects only marginally in a technical sense on Aerospace. Of course political and social impacts can be more prominent. So in particular, I have no idea whether the employees of Aerospace in aggregate are good, bad, or ugly, nor do I really care, because all I am interested in at the moment is Beiting. Where Beiting works is of little importance. What Beiting does is much more relevant.
Furthermore, anyone can make mistakes, so the rep of the employer has no relevance at all to the question of the quality of the specific work.
Have they made serious errors? Have their other studies been discredited?
Beiting, yes, he has. He failed to compute the error of his calibration curve properly, and he failed to take into account the proper chemistry in his sample prep and subsequent experimentation. There might be more if I study the paper further, but what I've seen so far is enough to class his efforts as 'typical so-so CF community work'. And that isn't 'world-class'. With regard to other Aerospace people: no idea, don't care.
and the people at The Aerospace Corp. are world class. (See: http://www.aerospace.org/)
Out of curiosity, how do you measure that?
McKubre described his calorimeter as easy to understand but "stupid" (simple-minded, rather than ultra-precise).
Which one? The one he used for his 1992-1993 studies? Or one of the ones from his 1993-1994 studies (the L, M, HH, T, D, F, G, OHF types) reported on in the 1998 EPRI report? Or some other one?
The error in it was larger than several other calorimeters, such as Fleischmann's and Miles'.
Given that no one in the CF community sees fit to define the error in their calibration curves properly and carry that through to the final results, this doesn't mean much.
It does look like the calorimetry is very exact, as taken for granted by those who work with it, and the method used, e.g., advanced interpolation using a higher-order polynomial, is appropriate for deterministic systems. This means that statistics goes out the window and is not a good method to apply here, because the estimated variance is way too high compared to the real one due to the low degrees of freedom. In any case, a figure of, e.g., 'the relative error is xxx%' would be helpful just to check the box that none are fooled in any way. There are methods that do statistics based on known variances, and they could be used here in case one wants to dwell on it; I think that would be the appropriate way to do it in this case. We don't have a figure yet, but good researchers know the variance of their systems, and my bet is that Beiting knows quite well that all is OK. But no proof on the table for the dining party.
Stefan, several comments...
Very exact calorimetry means very small changes can have very big impacts.
I checked the problem with low degrees of freedom. When the number of parameters (p) used in the fit approaches the number of data points (N), the use of R^2 is inappropriate and one must use what is known as the 'adjusted' R^2 = Ra^2.
Ra^2 = 1 - ( (1 - R^2) * (N - 1) / (N - p - 1) )
Didn't really change the values enough to matter in the cases I looked at, but one always needs to check of course.
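In code, the adjustment is one line (a sketch: N = 7 digitized points, p = number of predictors excluding the intercept, and the R^2 values are the ones from the quadratic/cubic/quartic fits reported in this thread):

```python
# Adjusted R^2 for the three fits discussed here (N = 7 digitized points;
# p = number of predictors, excluding the intercept).
def adj_r2(r2, n, p):
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

for name, r2, p in [("quadratic", 0.999743, 2),
                    ("cubic",     0.999970, 3),
                    ("quartic",   0.999990, 4)]:
    print(f"{name:9s} R^2 = {r2:.6f}  adjusted R^2 = {adj_r2(r2, 7, p):.6f}")
```

As noted above, the adjustment penalizes the higher-order fits for eating degrees of freedom, but with these R^2 values the shift is small.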
It is a well-known problem that linear regression can 'overfit' data sets, especially smaller ones. The chemometric procedure that is preferred is to use the PRESS value (Predictive Residual Error Sum of Squares) instead of R^2, which is determined through a process that uses jackknifing or bootstrapping the data set to check the predictive accuracy of the statistical model. (That's where the 'P' in PRESS comes from.)
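For a data set this small, the jackknife (leave-one-out) version of PRESS can be sketched directly (assuming numpy; the data is the hand-digitized set posted earlier in this thread):

```python
import numpy as np

# Leave-one-out PRESS for the cubic fit to the hand-digitized data.
# (Sketch only: with 7 points and 4 parameters the held-out fits are shaky,
# which is rather the point about small calibration data sets.)
T = np.array([20.625, 82.5, 139.375, 198.125, 253.75, 307.8125, 317.8125])
P = np.array([0, 1.357576, 3.258182, 5.735758, 8.824242, 12.69333, 13.49091])
X = np.vander(T, 4)

press = 0.0
for i in range(len(P)):
    keep = np.arange(len(P)) != i
    beta, *_ = np.linalg.lstsq(X[keep], P[keep], rcond=None)
    press += (P[i] - X[i] @ beta) ** 2   # squared error on the held-out point

beta_all, *_ = np.linalg.lstsq(X, P, rcond=None)
rss = ((P - X @ beta_all) ** 2).sum()
print(f"RSS = {rss:.5f}, PRESS = {press:.5f}")
```

PRESS always comes out larger than the ordinary residual sum of squares; how much larger is a measure of how badly the model is leaning on individual points.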
However, the requirement for determining the error in the computed output power is still the same and requires that all experimentally determined variables be included in a propagation of error calculation, which, in the end, only gives an estimate of the error. My calculations in the prior posts simply show that the 50 mW claimed error seems overly optimistic (which is typical in CF calorimetry), based on the fact that even in the best calorimeters the cal constants seem to have 1-5% relative standard deviations. It is always possible that Beiting has done better, but he needs to show this, not just assert it, as you indicate as well with your 'dining party' comment.
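To make the propagation-of-error point concrete, here is one way to carry all four cubic coefficients through, using the full covariance matrix of the fit rather than treating the coefficients as independent. This is my sketch on the hand-digitized data posted earlier, not Beiting's own analysis:

```python
import numpy as np

# Propagation of error for the cubic calibration P(T), carrying all four
# fitted coefficients through via the full covariance matrix:
#   var(P_hat(T)) = x^T Cov x,  with x = (T^3, T^2, T, 1).
T = np.array([20.625, 82.5, 139.375, 198.125, 253.75, 307.8125, 317.8125])
P = np.array([0, 1.357576, 3.258182, 5.735758, 8.824242, 12.69333, 13.49091])
X = np.vander(T, 4)

beta, *_ = np.linalg.lstsq(X, P, rcond=None)
resid = P - X @ beta
s2 = resid @ resid / (len(P) - 4)        # residual variance, dof = 3
cov = s2 * np.linalg.inv(X.T @ X)        # coefficient covariance matrix

sigmas = []
for t in (300.0, 325.0, 350.0):
    x = np.array([t**3, t**2, t, 1.0])
    sigmas.append(float(np.sqrt(x @ cov @ x)))   # 1-sigma of computed power
    print(f"T = {t:.0f} C: P = {x @ beta:.3f} W, "
          f"1-sigma = {sigmas[-1]*1000:.0f} mW")
```

Two things to note: the covariances between polynomial coefficients are strongly negative, so this comes out much smaller than naively summing the individual coefficient errors; and the last digitized calibration point sits near 318 C, so 325 and 350 C are extrapolations where the error band widens.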
An abstract is simply a claim until substantiated with the standard types of information that would come out in the supposed subsequent paper. Thus, at this point, your claim is probably just wishful thinking.
Beiting has made a second, flow calorimeter that confirms the first one.
Link? I can't be responsible for what I can't get.
Actually IO, I see it all the time.
As expected...
If there were an 8% shift, as Shanahan claims, much of the test shown on p. 20, the calibration runs, and the control runs would be endothermic. They would be swallowing up megajoules of heat. It would be a fantastic coincidence that the calibrations fell exactly on the zero line. This is impossible. Shanahan has a rare talent for inventing impossible physics.
Readers may first notice that p.20 of the report deals with the first experiment, which is denoted a learning experience and is not used to report apparent excess heats in the report's main body. Thus it is not relevant to my discussion to date. There may well be other things going on with it. I didn't look at it since it was not used to make excess heat claims.
Further, I clearly showed that slight changes in the calibration constants have significant impacts on the power computed from the measured temperature. At 350C, the model changes I looked at can cause an ~200 mW error, much larger than the claimed 50 mW. Likewise at 350C, a 5% (not 8%) change induces an 820 mW error (and as noted, the global numbers imply something on the order of 944 mW). Table E1 of the report shows that between the vacuum and N2 cal curves there is a 574 mW difference at 300C, so the small changes in cal constants I am discussing give changes of the same order of magnitude as changing the gas space contents. This implies knowing the variation in cal constants is as important as knowing what is in the gas phase of the cell, which Beiting spends considerable time discussing and compensating for.
Why Jed thinks such changes would drive the curves endothermic I don't follow. Unless he is talking about much lower temps. As one can see, at 0C the cal equations give negative powers, but that is just an indication of the fact that the real cell behavior doesn't fit the polynomial form well outside the data span, which is typical of statistical fits. And in any case, the cell is going to do what the cell does. The question is how well the equations used to interpret the data correspond to the real behavior.
Also note that a 50 mW error at 12 W input is a 0.4% error. That's a better calorimeter than McKubre ever made. Do you all really believe Beiting made that good a calorimeter on his first shot? Better error analysis is required.
I will also say at this point that I have looked a little at the chemistry used by Beiting, and I find it inadequate. It is incorrect to assume the observed heat (if assumed to be real) is not at least partially explained by chemistry.
The R^2 of a quadratic fit to a known cubic’s data is not particularly useful. Instead I hand digitized the Cell 2 Vacuum (red) data from Figure 4.4 as an approximation to the real data. I then fitted it with a quartic, cubic, and a quadratic. Results are:
Quartic eqn coeff.: 5.36754e10, 2.46591e7, 1.26090e4, 1.09850e2, .280572; R^2 = .999990
Cubic eqn coeff.: 1.15621e7, 4.741444e5, 1.69908e2, .384213; R^2 = .999970
Quadratic eqn coeff.: 1.06138e4, 9.05552e3, .171434; R^2 = .999743
The R^2 values are not adequate to distinguish which model is best. The t-values of the coefficients (or their significance levels) are needed, so the full analytical results are required.
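A minimal sketch of that coefficient-significance check, using synthetic stand-ins for the hand-digitized Figure 4.4 points (the underlying cubic coefficients and the 20 mW noise level are my assumptions, not values from the report):

```python
# Fit increasing polynomial orders and compute t = coefficient / std. error.
# The data below are synthetic stand-ins for the digitized Figure 4.4
# points; the generating cubic and 20 mW noise level are assumptions.
import numpy as np

rng = np.random.default_rng(0)
T = np.linspace(50.0, 350.0, 15)
x = T / 100.0                                  # rescale T to keep the fits well-conditioned
P = np.polyval([0.12, 0.5, 1.7, 0.38], x)      # assumed cubic calibration curve (W)
P = P + rng.normal(0.0, 0.02, T.size)          # add 20 mW measurement noise

for order in (2, 3, 4):
    c, cov = np.polyfit(x, P, order, cov=True)
    t = c / np.sqrt(np.diag(cov))              # t-value of each coefficient
    print(f"order {order}: t = {np.round(t, 1)}")
```

A highest-order coefficient with |t| below ~2 is not supported by the data even when R^2 barely moves, which is exactly why R^2 alone can't pick between these fits.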
The interesting thing, however, is the difference in the predicted power for a given temperature in the upper operating range. Three sets are shown below, for 300, 325, and 350C.
Model   T (C)   Power (W)   |model - cubic| (W)
quart   350     16.49228    0.197372
cubic   350     16.29491    --
quad    350     15.9999     0.295
quart   325     14.13117    0.044798
cubic   325     14.08638    --
quad    325     13.98244    0.10394
quart   300     12.05278    0.02491
cubic   300     12.07769    --
quad    300     12.09764    0.019952
Notice that a) the difference (model error) increases as T increases, and b) the magnitude is significantly larger than the 0.050 W claimed by Beiting in some cases. Again, Beiting needs to show more data and analysis results. Kudos to him for giving the cal equations, because this is extremely rare in the CF field; that is what allows the above analysis.
As well, the 'sensitivity analysis' I did with the 16% variation in the cal constants of the given cubic cal equation still stands. Proper error propagation is required...
My general conclusion still stands: the observed excess could well be noise (or a combination of noise and modeling error).
Some comments on the Beiting / Aerospace Corp. report…
Executive summary (for those who want to quit now): Looks like a good chance it’s noise.
Comments:
Beiting (‘B’ hereafter) does the same thing the rest of the CF community does and fails to treat the calibration constants in his cal equations as experimental variables. He talks about doing a propagation-of-error calc, but the only variable he explicitly looks at is T. So I did my usual and plugged in a 16% variation in cal constants. I did it simply by increasing and decreasing the constants in his cal equations by a fixed percentage, using the same number for all the constants. That doesn’t mean they couldn’t vary independently; I just didn’t want to bother with all that.
But to start, we need to know what the supposed error is that we are trying to explain. B also tries to do the energy-per-unit-mass trick, which is bogus since he doesn’t know what mass is the right mass to use. So backing it out, his 173 MJ/g number works out to an ~0.94 W excess power signal, which he also refers to as ‘~1 W’ in his exec summary.
So, using the PwrC2T1Vac equation, I calculated the power for a span of T’s, but I will only discuss the one for 300C. His numbers give 12.94 W at 300C; increase all the constants by 4% and you get 13.45 W, decrease by 4% and you get 12.42 W. That’s a span of 1.03 W. B only reports one calibration, so we have no idea what the reproducibility of his calibrations is, but these results are in the same range as the span in the Storms data and in the various reports by Miles.
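The perturbation is trivial to reproduce. A minimal sketch, with illustrative placeholder coefficients rather than the actual PwrC2T1Vac values:

```python
# Sketch of the cal-constant perturbation described above. The cubic
# coefficients are illustrative placeholders, NOT the actual PwrC2T1Vac
# values from the report.
import numpy as np

coeffs = np.array([1.2e-7, 5.0e-5, 1.7e-2, 0.38])  # a3, a2, a1, a0 (assumed)

def power(T, c):
    """Calibration polynomial P(T) in watts, T in deg C."""
    return np.polyval(c, T)

T = 300.0
print(f"base: {power(T, coeffs):.3f} W")
for pct in (+0.04, -0.04):                         # same +/-4% on every constant
    print(f"{pct:+.0%}: {power(T, coeffs * (1 + pct)):.3f} W")
```

Because every constant is scaled by the same factor, the induced error is exactly pct × P(T); at B’s 12.94 W that is ±0.52 W, reproducing the 12.42–13.45 W span above. Letting the constants vary independently would need the fit’s full covariance matrix instead.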
But further, I had to ask why he used a cubic equation. To get a rough idea of what might happen with a different equation, I used his equation to make a small set of data similar in size to what he used, and then fit it with a quadratic. At 350C the cubic gave 17.34 W and the quadratic gave 16.41 W, a difference of 0.93 W. At 300C the difference was 0.88 W. (Note that using the cubic gave a positive excess heat.)
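A sketch of that model-form check, assuming a stand-in cubic (illustrative coefficients, not Beiting’s equation):

```python
# Make a small data set from an assumed cubic calibration equation,
# refit it with a quadratic, and compare predictions at the operating
# temperatures. Coefficients are illustrative, not Beiting's.
import numpy as np

cubic = [1.2e-7, 5.0e-5, 1.7e-2, 0.38]   # assumed a3, a2, a1, a0
T_cal = np.linspace(50.0, 350.0, 12)     # calibration-sized point set
P_cal = np.polyval(cubic, T_cal)         # 'data' generated from the cubic

quad = np.polyfit(T_cal, P_cal, 2)       # refit the same points quadratically

for T in (300.0, 350.0):
    dP = np.polyval(cubic, T) - np.polyval(quad, T)
    print(f"T = {T:.0f} C: cubic - quadratic = {dP:+.3f} W")
```

Even though the quadratic fit of these noiseless points has a near-perfect R^2, the two equations disagree by a temperature-dependent amount that is largest at the ends of the fitted span; that disagreement is the model error at issue.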
In other words, simple piddling around with the calibration equation covered the signal detected. That means that to believe the calibrations, we need a lot more info on how he chose his equations, and we need some replication to assess the reproducibility of those equations.
After that we can consider the rest of the waffling he does.
I am not going to defend this. You do the calcs if you don’t like what I say.
@SOT
Don't forget:
"Two other CMNS researchers, John Fisher and Marissa Little, also observed clusters of tracks in CR39 chips, using “seeded” orings received from Oriani. "
http://pages.csam.montclair.edu/~kowalski/cf/358summary.html
and
"In a private message Scott Little wrote: “In my search for the sensitivity of CR39 to radon, I saw several mentions of the problem of radon progeny (decay products) sticking to the CR39 surface and influencing the track count."
http://pages.csam.montclair.ed…lski/cf/329mylogbook.html
(This page gives the plan to test for contamination from Orings causing tracks as well.)
There are other pages where Kowalski talks about these experiments for those interested.
Since someone is sure to challenge me...
http://citeseerx.ist.psu.edu/v…86.9118&rep=rep1&type=pdf
"Other Damage to CR39
As part of our search for possible artifacts, we attempted to make CR39 tracks
using various mechanical means. We quickly discovered that mechanical damage
often leads to round, tracklike marks after etching. Any scratch on the surface
would resolve itself into a chain of circular pits after etching. The following
photographs in Figure 3 show examples of pits created by various mechanical
means, including nothing more than the casual handling of the chips."
Scott Little first posted this to the Internet in prepublished form. The section above was longer there:
"As part of our search for possible artifacts, we attempted to make CR39 tracks using various mechanical means. We quickly discovered that mechanical damage often leads to round, tracklike marks after etching. Any scratch on the surface would resolve itself into a chain of circular pits after etching. We were able to create various marks with sandpaper, needle points and simply by carrying around a chip in a pocket for a day."
which is where the 'day' timeframe came from.