kirkshanahan Member
  • Member since Oct 8th 2014

Posts by kirkshanahan

    Beiting knows that, and took it into account.


    Good. I'd like to see how he does it.


    More to the point, Garwin knows that, but he did not try to use that as an excuse to dismiss the results, so even he realizes it is a losing argument.


    Why would I care what Garwin knows/thinks/says?


    You, on the other hand, will still be saying this years from now, unless cold fusion becomes generally recognized.


    Well, it's true, now and in the future (unless the universe changes...) And I'll still say the thermal conductivity of hydrogen is significantly higher than nitrogen, even if someone finally proves LENRs exist.


    Of course he did.


    Of course. I only ask because the CF community as a whole has a very bad habit of not accurately estimating error bars. Take McKubre's post in the Mizuno bucket thread, where he claims '90 sigma'. That is true if and only if the only important noise factor in these experiments is instrument baseline noise. Unfortunately, that's rarely true. In the case of Ed Storms' data that I reanalyzed for my 2002 paper, he claimed an ~80 mW baseline noise and thus claimed an ~10x signal-to-noise (780 mW peak, i.e. ~10 sigma), whereas I showed that a 1-3% change in calibration constants flatlined his signal, which means he was at 3 sigma, at best. McKubre's 90 sigma is another example of this. So of course I wondered how Beiting did it, as there was no info in the abstract, and of course you failed to even mention what he claimed the error bar was, let alone explain how he got it.
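To make the error-bar point concrete, here is a minimal numeric sketch. The 80 mW noise claim and 780 mW peak come from the discussion above; the ~40 W total output power is my assumed placeholder, not a figure from Storms' actual data:

```python
# Illustrative sketch only: the 80 mW noise and 780 mW peak come from the
# discussion above; the ~40 W total output power is an assumed placeholder.
P_total = 40.0          # W, assumed total steady-state output power
baseline_sigma = 0.080  # W, claimed instrument baseline noise
signal = 0.780          # W, apparent excess-heat peak

# "Sigma" computed from baseline noise alone looks impressive:
naive_sigma = signal / baseline_sigma          # roughly 10 "sigma"

# But a fractional shift dk in the calibration constant rescales the whole
# measured power, producing a spurious signal of about dk * P_total:
dk_to_mimic_peak = signal / P_total            # about 2%

# If ~2-3% shifts can zero the peak, the peak is really ~3 sigma at best, so
# the effective 1-sigma is roughly signal/3, not the 80 mW baseline figure:
effective_1sigma = signal / 3                  # about 0.26 W

print(round(naive_sigma, 2), round(dk_to_mimic_peak, 4), round(effective_1sigma, 3))
```

The point of the sketch: the larger the total power relative to the signal, the smaller the calibration shift needed to fake the signal.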

    After the talk, he was asked this question (about nitrogen calibration). He quickly described how he made a correction to account for that, but I did not completely follow it. Once someone gets a copy of his report, they could let us know more.


    Didn't the speaker give the error bars on his measured numbers? You said they measure 1W excess on 12 W in, that suggests to me their error must be <1W, maybe <0.5W, to consider 1W significant. Did they explain how they determined their 1W signal wasn't just noise?

    Some corrections to Ahlfors' list of my publications:


    Ahlfors lists a pub as: Storms, E., Shanahan, K. L. in Thermo. Acta. 441 (2006) 207-209 - that is not my paper and I should not be listed as author. I published the following paper which started on page 210.


    Not included in the list were:


    Discrete event simulation of an analytical laboratory

    K. L. Shanahan, R. E. Beck, C. E. Taylor, and R. B. Spencer

    Analytica Chimica Acta, 282(1993), 679


    and Presentations/Proceedings papers:


    A Dynamic Simulation Model of the Savannah River Site High Level Waste Complex

    M. V. Gregory, J. E. Aull, R. A. Dimenna, G. K. Georgeton, T. Hang, T. E. Pate, P. K. Paul, K. L. Shanahan, F. G. Smith, G. A. Taylor, and S. T. Wach

    Presented at WM '95, Tucson, AZ by co-author, Feb. 1995 (Proceedings published)

    http://www.iaea.org/inis/colle…ublic/26/045/26045355.pdf


    Dynamic Simulation of the In-Tank Precipitation Process

    T. Hang, M. V. Gregory, K. L. Shanahan, and D. D. Walker

    presentation at 1994 Simulation Multiconference by coauthor, April 1994 (Proceedings published)

    https://www.osti.gov/scitech/servlets/purl/10121261


    A Dynamic Simulation Model of the Savannah River High Level Waste Tank Farm

    M. V. Gregory, K. L. Shanahan, and T. Hang

    presentation at 1994 Simulation Multiconference by coauthor, April 1994 (Proceedings published)

    https://www.osti.gov/scitech/servlets/purl/10107914

    SPEEDUP Simulation of Liquid Waste Batch Processing

    K. L. Shanahan, J. E. Aull, T. Hang, T. E. Pate, and P. K. Paul, Proceedings of Aspen World (iaea.org)

    http://www.iaea.org/inis/colle…ublic/26/034/26034211.pdf


    SPEEDUP simulation of liquid waste batch processing. Revision 1

    K. L. Shanahan, J. E. Aull, and R. A. Dimenna, 1994 (osti.gov)

    https://www.osti.gov/scitech/servlets/purl/10188180



    A recent presentation by a co-author which may not have been published:


    Electron Microscopy of Helium Bubbles in a Palladium Alloy

    Proceedings of the Tritium2016 Conference, April 17-22, 2016, Charleston, SC

    David B. Robinson, Mark R. Homer, Joshua D. Sugar, E. Lynn Bouknight, Kirk L. Shanahan

    Fusion Science and Technology, 71, (2017), not published (?)



    From my 'pre-doc':


    J. Q. Searcy and K. L. Shanahan, Thermal Decomposition of the New

    Explosive 2-(5-Cyanotetrazolato)pentaamminecobalt(III) Perchlorate,

    SAND78-0466, Sandia National Laboratories, August 1978.

    https://ntrl.ntis.gov/NTRL/das…eDetail/SAND780466.xhtml#

    What is not correct is:

    a. It was unanticipated by early researchers until Kirk “discovered” it

    b. It applies to all forms of calorimetry

    c. It can explain all excess heat results (as has been claimed – in person if not in writing).


    What I have actually said:


    a. There is no evidence anyone considered a CCS error in any study to date. (Please cite contradicting examples.)

    b. In theory it applies to all calorimetric methods, as long as they use a calibration equation, either explicitly or implicitly.

    c. It might explain all excess heat results, but that needs to be quantitatively checked. There are limits to what might be explained and exceeding those would possibly indicate actual excess heat.

    For such 99%+ efficiency calorimetry I don't see much scope for heat source position errors.


    Well, Ed's calorimeter was 98.4% efficient and showed a 780 mW excess heat signal. If Mike's 99.3% calorimeter was the M series calorimeter, he only saw a 360 mW signal. Cut the lost heat in half, cut the excess heat signal in half; seems straightforward to me... :) I know. The point is that without calculating the actual errors from calibration constant variation, you can't conclude there is no room for the detection of apparent excess heat due to a CCS.
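The "cut the lost heat in half" point can be sketched numerically. The 50 W input power below is an assumed illustration, not a figure from either report:

```python
# If heat redistribution can at most recapture the calorimeter's lost-heat
# fraction, the maximum spurious "excess" scales with (1 - efficiency).
def max_spurious_excess(p_in_watts, efficiency):
    return (1.0 - efficiency) * p_in_watts

p_in = 50.0  # W, assumed input power for illustration
headroom_storms = max_spurious_excess(p_in, 0.984)    # 0.80 W at 98.4% efficiency
headroom_mckubre = max_spurious_excess(p_in, 0.993)   # 0.35 W at 99.3% efficiency
print(f"{headroom_storms:.2f} W vs {headroom_mckubre:.2f} W")
```

Halving the lost-heat fraction roughly halves the headroom for a CCS-induced signal, consistent with the 780 mW vs 360 mW comparison.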

    Putting it simply, I don't see this as a physically feasible shift in mass flow calorimeters. Specifically, an efficiency of more than 100% is not possible, and temperature change in the measured cell cannot increase efficiency above 100%. Finally, the detected heat must be proportional to cell efficiency. There may be second order effects but this would seem to make positive calibration constant shift bounded +1.6%, and realistically quite a bit lower than this.


    So, where is the error in my math then? I calculated the local calibration constants that would zero out excess heat, examined the results, and found that the required changes were trivial. I offered explanations for that that were, to me at least, very physically feasible. Please elaborate on where you see an error in my analysis.


    In fact it is not possible to have a 100% efficient real calorimeter, because a real calorimeter needs sensing elements to detect changes, and those elements provide heat loss pathways. One can get very good though, up to 98-99%.


    I assume you mean systematic errors by "second order effects".



    The same argument does not apply to calorimeters where close-to-cell temperatures are used as proxies for heat flow since these can have changes unrelated to calorimeter efficiency due to lack of isothermality.


    I believe I disagree with this statement if I understand it correctly. If I place a temperature sensing device 'close to the cell' why would changes in cell temperature arising from a more efficient detection of internal heat not register in the device? I think it would. You just add another layer of complexity to the thermal transfer conditions without changing the base problem. However, that added layer can have additional problems above and beyond the CCS thing. Is that what you mean?

    Returning after a nice holiday break, with a response to Mizuno's bucket of water


    Quotes of Dr. McKubre’s comments are enclosed in “” below.


    “I am surprised that there is so much angst and uncertainty about this issue.”


    That’s probably due to the fact that you don’t seem to understand the issues. This is most clearly illustrated by your co-authorship of the 2010 J. Envir. Mon. article that replied to my comment on the prior Marwan and Krivit paper. Trying to a) pass off the CCS as a ‘hypothesis’ is incorrect, and likewise b) trying to assign my description of an issue as ‘random’ when I clearly and multiple times have called it ‘systematic’ is also incorrect. Now, the way I heard the paper was constructed was that the various authors contributed parts to Marwan and he combined them, so perhaps you missed the fact that my whole thesis and results were incorrectly presented. Do you stand by your supposed use of the term “random Shanahan CCSH” from that paper?


    “This was highly discussed and heavily worked out in “the early days”. I can't speak for anyone else but I expect it is true for others as well.”


    Really? So why did you apply an incorrect approach to your data analysis then?


    “The phenomenon that Kirk proposes was well anticipated (and better understood) by the design team for our first mass flow calorimeter (up to 1992) and improved in both design and understanding afterwards. Our calorimeters were designed to operate on first principles (first law).”


    Well technically I haven’t been able to check your work because you wouldn’t supply me with calibration equations from your massive 1998 EPRI report, and the prior one only presented Figures, not ‘raw’ data as you did in the attached CD on the 1998 report. However, in your M series runs, you observed two runs without any apparent excess heat and two with. I took the two without and used them as ‘calibration runs’ for a more standard type of calibration equation approach (using y=mx+b) than your transfer function approach, and I found a) the excess heat peak height was predictably variable (i.e. consistent with my claims and concerns) when the calibration constants were varied by a few percent, and b) that there were significant baseline shifts present that somehow disappeared when you used your transfer function calibration method. I would still love to see how you do that. I might find it useful myself some day. Care to share at this point?


    But I’ve seen no evidence you ever considered the ‘lumped parameter’ approach problem that allows the CCS problem to appear when the heat distribution in the cell changes from the calibration state. I suppose it’s possible you handled this, but you’ve never explained how that I can find. Please give me a reference to where I can obtain this information for study. Thanks.


    “Where systematic errors could conceivably occur the calorimeter was designed to be conservative - anticipated errors leading to under-measurement of heat.”

    I see no evidence your design fixes the lumped parameter approach. You should note that this problem is not a calorimeter design problem, it is a data analysis method problem. However, an altered calorimeter/cell design might potentially minimize the issue.


    “Some of you will remember me discussing this seemingly endlessly in 1989-1992."


    I didn’t get involved until 1995, so no, I don’t ‘remember’. This is why all of what you are talking about should be written down somewhere. Reference?


    “We obviated the precise issue that Kirk speaks about as follows:

    1. The electrochemical cell was enclosed (at pressure) in a metal heat integrator (“isothermal wrap” in THH's words).”


    And Ed Storms' calorimeter did the same thing with the heat-collecting fluid, but that still left the problem. Likewise, your wrap does not prevent the problem.



    “2. Nothing left the cell except wires and a gas pipe for initial H2or D2gas charging.”


    Yes, yes, closed cell. So was Ed’s calorimeter.


    “3. A complimentary Joule heater was intimately wound into the metal heat integrator axially symmetric to the electrochemical cell.”


    But was it used to probe changes in the heat distribution in relation to proposed high and less-high heat capture efficiency zones? No. Didn't think so…


    “4. The calorimetry fluid submerged and completely enveloped the integrator bathing externally all surfaces and picking up heat from wherever sourced (BTW there are 7 conspicuous heat sources in FPHE calorimeters, not just 2):”


    Just like Ed’s (effectively, Ed used a different design of course, but tried to do the same thing in his design. He achieved ~98.4% (as I recall) total efficiency, yet saw a fictitious 780 mW excess heat signal).


    Recall my little box diagram in the prior post. Call the high efficiency zone 'Zone 1', and presume it was where the electrolyte was. The gas space is 'Zone 2' and normally contains all penetrations through the cell wall, which remain together when exiting the calorimeter (i.e. these are the primary unaccounted-for heat loss pathways).


    “a. The anode (I * V anode)

    b. The electrolyte (I2 * R electrolyte)

    c. The cathode (I * V cathode)

    d. Any excess power”


    All Zone 1.


    “e. The recombiner (I * [V cell-V thermoneutral])”


    Zone 2.


    “f. The complimentary Joule heater that kept the sum of input power constant (I2 * R heater)”


    Power compensation calorimetry, fine. Henry Randolf (sp?) of SRNL used the same thing for his study as presented at ICCF1.


    This is technically a new wrinkle for me as I haven’t explicitly discussed power comp calorimetry before, but it’s not a significant one. The heat flowing out of the cell plus the heater power is held constant. When ‘excess heat’ appears, to keep the temperature the same, the heater power is decreased and the drop measured and reported as positive excess heat.


    But, heat lost up the tubes and wires never figures into this balance except via the correction that calibration gives, so if the heat loss changes, specifically by dropping when heat moves from the recombiner to the electrode for example, you get your CCS.
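As a hedged sketch of that bookkeeping (all wattages below are my assumptions, not values from McKubre's runs): the heater is servoed so detected heat stays constant, and if the unaccounted loss up the wires and tube drops when heat moves toward the electrode, the heater backs off and the drop is reported as excess heat.

```python
# Power-compensation calorimetry sketch; all numbers are assumed.
P_setpoint = 20.0   # W, constant detected-heat setpoint
P_cell = 12.0       # W, electrochemical input power
loss_cal = 0.30     # W lost up wires/tube during calibration
loss_run = 0.05     # W lost after heat redistributes toward the electrode

heater_cal = P_setpoint - (P_cell - loss_cal)   # heater power during calibration
heater_run = P_setpoint - (P_cell - loss_run)   # heater power after redistribution
apparent_excess = heater_cal - heater_run       # heater drop, reported as "excess heat"
print(f"{apparent_excess:.2f} W")
```

Here 0.25 W of apparent excess appears purely from the change in the loss pathway, with no change in total input power.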


    “g. The wires (I2 * R wire). Note that since V was measured at the calorimeter boundary only the wires inside the calorimeter contribute to this term, and it is fully measured”


    I’m not concerned with power losses in wires and leads. I know some have claimed that as a problem in some cases, but I’m talking specifically about a CCS. If you lose power in the leads and don’t correct for that, shame on you, but I’d guess you did. What I am concerned with is how much heat from wherever is lost and not accounted for up the wires, and if that changes during an experiment. Apples and oranges here.


    “5. The thermal efficiency of our early design was ~98%, later improved to 99.3%.”


    A) Ed's was also 98% or so. B) I'd like to look over your calcs. Reference?


    “6. Only the missing 0.7 to 2% (that is lost primarily by thermal conduction to the ambient down wires and the pipe) needs to be “calibrated”.”


    Correct. And changes in that during an experiment are one way a CCS could be induced.


    “7. Calibration of the first law parameters (I, V, ∂m, ∂t) were performed independently of the calorimeter.”


    Fine. You still calibrated. That means you are dependent on a maintained steady state condition to maintain calibration equation veracity. I propose you did not maintain a constant steady state due to some interesting physics and chemistry.


    “8. At constant input power the presence of excess heat can be inferred qualitatively by a rise in temperature of the outgoing fluid (normally water). “


    Not if there is a change in the steady state heat distribution as I postulate.


    “Our largest excess power levels were ~300% in input power. Our largest statistical significance (Excess power / measurement uncertainty) is 90 sigma.”


    Your 90 sigma is a bogus number. Your 1 sigma value is only one component of the total variation, and a minor one at that. Looking at the baseline noise alone is inadequate. Ed's experiments that I reanalyzed had a claimed 1 sigma of ~80 mW and a peak signal of ~780 mW, for an ~10 sigma signal. But in fact a 2-3% change in the calibration constant wiped out that 780 mW signal, showing that 1 sigma was at least 780/3 = 260 mW, not 80. You aren't calculating the error in your results properly.


    “9. We tested our assertion that heat was measured equally independent of its source position two ways:

    a. Finite element calculation (this is a complex matter not handled by two term algebra) which modeled the entire calorimeter up to its isothermal boundary: submerged in a water bath held at constant temperature ±0.003°C; in a room held constant to ±1°C”


    As a chemical process modeling expert, I know the ‘Golden Rule’ of modeling: A model is only as good as the assumptions (equations and parametric ranges and values) you put into it. Did you try to simulate the effect of a heat distribution change such as I propose?


    “b. Experimentally testing the influence of current to the cell and the complimentary Joule heater over a wide range in blank cells (H2O, Pt or poorly loaded Pd cathodes, early before initiation of the FPHE)”


    Again, you need to try to account for my scenario. Did you do so? Also, the numerical results from this are of interest. What averages and standard deviations did you obtain from the different calibrations you did on a particular configuration?


    “10. The calorimeters were proven to be heat-source position-independent already by 1991 when I stopped worrying about this effect for our calorimeters. “


    Where can I examine this data? (recall that unpublished data/results doesn’t count)



    “The fact that long long long hours of calorimetry were performed (>100,000), covering wide variations of cell and heater power, with calorimetric registration of zero excess heat sadly but conveniently reinforces our conviction that the Shanahan hypothesis that heat excess can be incorrectly measured (always positively?) by the displacement of heat sources – plays no significant role in our calorimeters.”


    Really? I thought we all understood that ‘excess heat’ was a rare event. That’s all you established with the above studies.


    Also, regarding “(always positively?)”: This is just another example that proves you have not even considered my explanations. Your comment indicates you are still stuck on ‘random’. But your calibration methods as described above are clearly not random, and thus the change that we know as the FPHE is thus not random either. (The reason the excess is always positive is that you always calibrate with an ‘inactive’ electrode (or heater).)


    “11. This last conclusion, equally rigorously supported by their designers and authors, applies to the two other modes of calorimetry with which I am closely familiar: F&P’s partially mirrored dewar design; the heat flow calorimetry of Violante and Energetics (using heat integrating plates).”


    It is really immaterial to my theses what type of calorimeter is used. All of them have heat losses. All of them are calibrated (or assumed to be perfect, which is just assuming a particular set of calibration constants). All of them are studying the same system (I only refer to electrolysis cells). Thus all of them are susceptible.



    “There are more insidious potential error sources possible particularly in electrochemical calorimetry.”


    I never said there weren’t. My CCS thing is just one potential error. It does not address others. But it seems to be quite large in relation to reported signals.


    “Ed discovered one in simple isoperobolic calorimetry for which the thermal barrier was the (pyrex) cell wall (changing wall hydraulics). Others exist and we should always be alert and open to suggestion.”


    Exactly. Like the whole CCS/ATER thing…


    “On the other side I suggest that the suggestors pay close attention to the literature, make quantitative calculation modeling the physical processes that drive the putative mechanism, and do not make global claims of “it is all wrong because…”."


    ROFL. A) I've read 'all' the literature (an assertion; maybe I only hit 94%, but the point is I've read enough). B) My whole CCS thing derives from quantitative re-calculation based on real data. C) You cut off the important part with your ellipsis. It should have read: it is all wrong because a common mistake is being made in the data analysis. In other words, there is a systematic error in the calorimetric data analysis of F&P-type experiments that produces spurious excess heat signals.


    “It is not that I claim that Kirk’s suggested semi-mechanism has never applied to LENR calorimetry. The effect he describes did play a role in the NRL / Coolescence Seebeck calorimeters when the recombiner is more or less well coupled to the predominant heat-flow path. But this was recognized by them.”


    Thanks for writing that. I have pointed out many times before that Seebeck calorimeters can show the problem, but every time I do JR screams at me that I am wrong. Perhaps now he will learn something. However, what you write about the mechanism isn’t quite what I say.


    “It is not that his “discovery” is never significant, or never could be. It is that the mechanism is well known, was historically anticipated, and is irrelevant to most of the calorimeters with which I am familiar. “


    No. The problem is quantitatively documented in one highly efficient mass-flow calorimeter, and easily extended to all other calibrated methods, and is never tested for in any CF excess heat reports. So it appears to be unanticipated and highly relevant.


    “Even if he could show one case quantitatively, it would not affect the whole of our understanding.”


    Really? If I show a systematic error in your methods it is of no value to your whole understanding? Really?


    “Here endeth the lesson.”


    Hardly-eth. It seemeth to barelyeth have beguneth…


    “I will answer only relevant technical questions for clarification (and then probably slowly).”


    Ditto. One can check many of my posts on this forum however for more details.

    Continuing with simple explanations of the CCS...


    Imagine a box with two point sources (*) of heat in it; call them P1 and P2.



    |--------------------------------|

    |.......................................|

    |...* P1................P2 * ....|

    |.......................................|

    |--------------------------------|


    Now assume we want to measure the power input to this box. If it makes no difference where we put the power (the lumped parameter assumption), then we can do whatever we want and come up with our Pout number. (But Pout is a little less than Pin.) We then 'calibrate' by requiring Pout to equal Pin, i.e. we apply an 'adjusting factor': Pin = Pout,cal = k * Pout,meas (+ b in some cases).


    But now, let's assume that when we put X watts into P1 we detect 99.99% of that, i.e. with no P2, k_o = k1 = 1/.9999. For P2, however, we find that only 75% of the actual P2,in is detected. So what we get is Pout,meas = .9999*(P1,in) + .75*(P2,in). Clearly then, if we fix Pin,tot to some number, how we divvy up the power between the two points will affect what Pout,meas we seem to detect. Now, as long as your calibration method accounts for that, everything would be fine. But CF calorimetrists don't do that. Instead they always apply some variant of the 'simple' calibration technique derived from the lumped parameter assumption.


    So what they actually do is compute Pout, cal via an equation that actually looks like this


    Pout,cal = k_o * (.9999*(P1,in) + .75*(P2,in)) + b   (k_o is the 'overall constant'; recall .75*(P2,in) = P2,out,meas)


    And what is crucial to understanding the problem, all these calibrations are done in 1 of 3 ways: 1) with electrolysis using a non-active electrode (this fixes P2 to one value), 2) with a Joule heater in the high efficiency region, or 3) with a combination of 1) and 2). The key point being that P2 is either 0 in open cells, or fixed at 100% recombination in closed cells.


    But now consider the case where 1 W of recombination power moves from P2 to P1 (closed cell config.). Prior to the move, only 75% of that watt was detected, so the 1 W would be multiplied by k_o times .75. After the move it would be multiplied by k_o times .9999, which is larger than what was assumed to be the case via the prior calibration. So the new Pout,cal will show an excess heat signal even though Pin did not change, just its split between P1 and P2.


    What this means is that because some of the power moved from P2 to P1, the calibration previously determined is no longer valid.
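The two-zone arithmetic above can be run end to end; the 10 W / 2 W calibration split and the zone efficiencies are assumed for illustration:

```python
# Minimal numeric sketch of the two-zone box model (assumed numbers).
def p_out_meas(p1_in, p2_in, eff1=0.9999, eff2=0.75):
    # Each zone's heat is captured with its own efficiency.
    return eff1 * p1_in + eff2 * p2_in

# Calibrate with recombination heat fixed at P2 (closed cell, inactive electrode):
p1_cal, p2_cal = 10.0, 2.0                            # W, assumed power split
k_o = (p1_cal + p2_cal) / p_out_meas(p1_cal, p2_cal)  # lumped 'overall constant'

# Now move 1 W of recombination power from P2 to P1; total input unchanged:
p_out_cal = k_o * p_out_meas(p1_cal + 1.0, p2_cal - 1.0)
apparent_excess = p_out_cal - (p1_cal + p2_cal)
print(f"apparent excess: {apparent_excess:.2f} W")  # about 0.26 W, with zero real excess
```

The spurious signal is just the moved watt times the efficiency gap (0.9999 − 0.75), scaled by k_o, so the calibration determined beforehand is no longer valid.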


    Moving to the 'real world' now. I believe this two-zone model approximates the real situation in part because the design of every CF cell I have ever seen has all the cell wall penetrations exiting through one limited area (usually the 'top' of the cell). This places the primary heat loss pathways (those that cause less than 100% efficiency) all in one place, and thus I believe that heat produced there is less efficiently captured by the calorimeter. However, when heat production moves to the electrode, it is captured more completely, because it is spatially removed from the major heat loss pathways and it is now produced in a region where liquid heat transfer dominates over gas heat transfer.


    Open cells would have this same problem, but it would be masked even more by the fact that P2 is assumed to be 0 in all cases.


    The inherent limitations to this analysis are obvious, meaning that if you get an excess heat signal that can't reasonably fit the above model, it is unlikely your apparent excess heat signal arises from this CCS/heat redistribution mechanism, as THH also has noted.


    It is also obvious how one might test this. A.) Replace the electrodes with a heater that is placed in the gas space. Then simulate the above problem by calibrating with fixed but positive P2 and then lower P2 and increase P1 the same amount and see what your calibrated Pout does. Or B) redesign the cells so that not all penetrations are in the same place. (The ultimate of this would be to turn your cells upside down, which will require some modification to relocate the recombiner or vent line.)


    I am done for today, see you all next week. I reserve the right to correct errors in the above posts, this was all done relatively quickly.


    P.S. Jed, you still don't get it. How sad.


    EDIT - L-F won't let me use spaces or tabs to get the far wall of the box to line up. Please take that into account.

    2nd edit - modified the box drawing by using periods for spaces, This puts the far wall in proper alignment.

    As I understand your claim, your hypothesis "explains" all FPHE results - irrespective of calorimeter specifics or method. This extravagance is the reason I largely ignore it. But OK, after you have sent and we have examined your citation (above), please explain in the simplest possible way, quantitatively, how your "unrecognized systematic error" produces error in closed cell, >99% thermal efficient, mass flow calorimetry of the sort my group performed.


    THHuxley (THH) has pointed to several of the relevant publications. The original manuscript of my first paper in this field (sole authorship, since you seem to think 'lone-wolfery' is good) can be found on Jed Rothwell's site here: http://lenr-canr.org/acrobat/ShanahanKapossiblec.pdf The actual final version is slightly altered and is listed in Ahlfors' list, but for ease:

    "A Systematic Error in Mass Flow Calorimetry Demonstrated", Kirk L. Shanahan, Thermochimica Acta, 387(2) (2002) 95-110. (Please note the word 'systematic' in the title, which differs from the manuscript's title.)


    Also missing is my 2005 reply to the 2004 Szpak, Mosier-Boss, Miles, and Fleischmann publication (S. Szpak, P. A. Mosier-Boss, M. H. Miles, M. Fleischmann, Thermochimica Acta 410 (2004) 101): "Comments on 'Thermal behavior of polarized Pd/D electrodes prepared by co-deposition'" : Kirk L. Shanahan, Thermochimica Acta, 428(1-2), (2005), 207



    In the simplest possible way: All CF calorimetric methods assume the temperature distribution inside the cell is unimportant. (In dynamic chemical process modeling, a subregime of chemical engineering, this is known as the 'lumped parameter' assumption.) This is only correct if it actually holds, and that must be tested. In my first CF-related publication I test this idea. I assume (actually derive) different calibration constants for each voltage excursion used by Storms in the data from his ICCF8 presentation, by assuming there was in fact no excess heat. Then I examine the results of that process for rationality.


    I find the changes made to the calibration constants lie well within the reported experimental variation of calibration constants. Ergo, I propose that a calibration constant shift (CCS) nullifies the exclusive claim that excess heat is present. Further, I note that as this is just math, the potential problem could be present in any calibrated calorimetric experiment. (By the way, assuming one's particular calorimeter is so good it "doesn't need to be calibrated" is just assuming specific values for the calibration constants with no justification.) This leaves us collectively with an indeterminate situation. Does a CCS explain other apparent excess heat claims?
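A sketch of that derivation procedure, with invented (P_in, P_out) pairs standing in for Storms' actual excursion data and an assumed nominal constant from a blank run:

```python
# For each excursion, derive the local calibration constant k that makes the
# apparent excess heat exactly zero, then compare against calibration scatter.
# All numbers are invented for illustration.
runs = [(40.0, 39.60), (40.0, 39.35), (40.0, 39.10)]  # (P_in, P_out_meas) in W
k_nominal = 1.0160  # assumed constant from a blank calibration run

for p_in, p_out in runs:
    k_zero = p_in / p_out                       # k that zeroes "excess heat"
    shift = (k_zero - k_nominal) / k_nominal    # fractional change required
    print(f"k = {k_zero:.4f}, shift = {shift:+.2%}")
```

If the required shifts all land within roughly ±1%, well inside a ±3% span of normal calibration scatter, the excess-heat claim cannot be distinguished from a CCS.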


    Upon examination of the extant data, I conclude it can (NOT absolutely does without a doubt). I have acknowledged many times what THH repeated, namely that there are limitations to this problem that might be exceeded, invalidating the idea that a CCS caused the observations. But no one has attempted to show such a case. Further, as THH has noted, that would also not necessarily disprove the existence of a CCS problem in other cases. Therefore, each claim of excess heat must be checked for this, but none have to date (aside from the one case where I did this for Ed Storms, much to his chagrin).


    Now, to keep it simple I could stop right there. The variation in calibration constants of +/- ~3% max flat-lines a 780 mW 'excess heat' signal in a 98.4% efficient calorimeter. Roughly speaking a span of 3% means a relative standard deviation of ~1%, which based on my years of SQC experience is a top-notch technique. It would be exceptionally hard to get better than this. Yet, 780mW goes to 0 by not assuming that you can 'lump the parameters'. This is not a 'hypothesis' as you claim in your 2010 JEM publication. It is simple math.


    Of course, you will ask "How can that happen?". That's natural. But it is really irrelevant. I have shown a math trick can nullify a 'big' excess heat signal ('big' because it is in a highly efficient calorimeter). The question you have to answer is "Can that happen in my work?". There is only one way to answer that question. We have to look at your math. That's why I asked twice for your calibration data, and then tried to see if anyone else knew it when you declined to answer me. Your refusal to test your data to see if a CCS might explain your results leaves us in an undetermined state: CCS or LENR?


    As a conservative-minded scientist I lean towards CCS. You might be more liberal than me and lean towards LENR. But neither of us can reject the other's potential explanation without more data and analysis.


    (I will address follow-on issues in a separate post to keep it simple.)


    If error is present and unrecognized I would like to know


    Doubtful. If you did you would have supplied me with your calibration equations when I twice asked for them in 1999.


    but I have seen nothing at all in your writings that would account for such error.


    That would be because of


    I largely ignore it.

    [Please be advised I am preparing additional responses to other points from Dr. McKubre's post.]

    Perhaps you might also explain your "equivalent training".


    Before beginning that, I need to respond to the clear tone of your post, which somehow implies I am trying to besmirch the reputation of Dr. Fleischmann. I assure you I am not. However, he was a human, and thus he was capable of mistakes (as was evident from his initially inaccurate explanation of what became known as Surface-Enhanced Raman Scattering, SERS), but that is not a derogatory remark since all of us make mistakes. It simply points out a fact. Systematic errors are some of the most pernicious problems a scientist can face. What I was explicitly doing in the post that you quote was responding to Jed Rothwell's challenge to my credibility, 'my' credibility, not Fleischmann's. Since you have jumped on that bandwagon with Jed and his acolytes here, I will reply to you.


    I have a Ph.D. in Physical Chemistry, thesis topic of Surface Chemistry, from the U. of California at Berkeley, granted in 1984. My research advisor was Prof. Earl Muetterties, who was an organometallic chemist of some repute, having been the Vice President or Director (I can't recall his title) of Research at E. I. DuPont de Nemours (DuPont's Central R&D organization in Wilmington, Delaware, at least back in those days) until 1973, when he entered academia. In 1977, when he moved to Berkeley from Cornell, he established a small surface science sub-group. I joined that group in 1979. Prof. Muetterties passed away while at Berkeley, and I am likely the first of his students to graduate without his signature. My thesis was actually signed by Prof. Angelica Stacy, who still teaches there. The other members of my committee were Prof. Gabor Somorjai, a world renowned surface chemist, and Prof. Alexis Bell, a world renowned chemical engineer focused on catalysis (both still teaching at Berkeley).


    While I can't substantiate my next claim for all the years concerned, I believe the following to be true, and I will gladly modify or retract the claim if I can be shown to be incorrect. The Department of Chemistry Graduate School has been among the top 10 in the world for the last 70 years or so, i.e. since roughly WWII. Many Berkeley profs participated in the Manhattan Project, and of course Glenn Seaborg worked at Berkeley for many years, along with other Nobel Prize and Priestley Medal winners. When I entered in 1979, a survey had placed Berkeley at #3 in the US. This year, the US News & World Report rankings place Berkeley in a 4-way tie for #2. In 2016, Berkeley was in a 2-way tie for #1. In other words, as I stated, my pedigree to the Doctoral degree level is equivalent to Dr. Fleischmann's.


    Thanks to Ahlfors for posting my publications list. It is slightly incomplete as it doesn't list technical reports, which as an industrial chemist I have many. I will not cite them all, but I will describe certain ones as they bear on my accumulated expertise in relation to the cold fusion arena.


    I graduated from high school in 1973. I graduated with a B.S. honors degree from U. Nebraska-Lincoln in 1976. I had completed 1-1/2 years of undergrad research under Dr. Charles Kingsbury, having worked on two different projects. One was simply to get a FORTRAN program for 1H-NMR data from lanthanide shift reagent studies operational, which I did (after learning how to program in FORTRAN). The other was my thesis topic and involved studying the mechanism and kinetics of ketoester cyclization reactions with hydrazines via 1H-NMR, which included low temperature work.


    After graduation, I worked at Sandia National Laboratory in Albuquerque, New Mexico for 3 years in two groups, the Explosive Components and Explosive Materials groups. I was initially hired as a technician, meaning I worked with a PhD staff scientist (Dr. J. Q. Searcy), however, Sandia encourages their people to perform to the limit of their abilities, and I was nearly independent after about a year. I also enrolled in the graduate chemistry department of the U. of New Mexico in Albuquerque in the Masters degree program under a Sandia continuing education program. I worked with Prof. William F. Coleman there. Initially I was going to study gas-phase luminescence of europium compounds on campus and had begun work in that arena, but after a few months my Sandia management insisted I do my thesis work at Sandia, so I had to change my topic. I chose to attempt to study a corrosion problem we had experienced via the technique of Inelastic Electron Tunneling Spectroscopy. I was to study nano-sized Ni particles deposited on alumina and treated with various chemicals of interest. Unfortunately, the lost time upset my schedule, and to complete the Masters degree I would have had to delay my entry to the Berkeley PhD program by a year, which was unacceptable. So I withdrew from UNM after 3 semesters with no degree. But this period of time familiarized me with explosives and explosives technology, thin film deposition, and liquid helium handling, and many ramifications of them that have been useful in understanding CF claims.


    While at Berkeley, I procured and assembled my experimental apparatus (UHV chamber with LEED, Auger, and TDS, i.e. mass spectrometer), which initially led to some down time while waiting for parts to arrive. In that time period, I took up photography as a hobby, loading rolls of film, shooting photos, developing the film, and printing them. This gave me a good knowledge of film technology, which allows me to understand the issues of using dental films in supposed x-ray detection.


    After graduation in 1984, looking for a change of pace, I joined the DuPont Dacron R&D organization with the intent of constructing a computer model of the polymerization process of high enough quality to use for direct process control (model-based process control). I spent 18 months there learning how to do what is now called 'big data' by assembling process and product historical data, and in economically justifying my project, which I found to be an issue given the poor market for Dacron at the time. So in 1986 I transferred to the TiO2 R&D group, where I spent the next two years in research and quality control work. The QC work is highly relevant to the CF arena as I advanced my statistical skills to a high level and applied them to fixing broken analytical methods, which I continue to do to this day as the need arises. I also did analytical method development as part of my research, and tinkered at making nanoparticulate TiO2 (before the 'nano' prefix became a buzzword).


    For personal reasons, I transferred to the Savannah River Laboratory in late 1987. DuPont left in 1989 and we have been run by various 'teams' over the years since then. We are now Savannah River National Laboratory and part of the Savannah River Site (SRS). SRS is a DOE-owned, contractor-operated facility that is part of the nuclear weapons production complex. I have had multiple assignments here, as you might expect. They have included: more dynamic chemical process modeling of varying degrees of sophistication, some discrete event simulation, a touch of steady-state chemical process modeling, more SQC work, and more sensor development work. Most importantly, since 1995 I have worked with metal hydrides and all isotopes of hydrogen, including almost all of the materials claimed to show LENR, as you can see from my publications that Ahlfors has pointed out.


    So, how's that for 'grandstanding'? Your turn now. Aside from your acknowledged experience in calorimetry, what do you bring to the 'LENR' table?

    There was plenty of cold air coming through the windows


    I might also note that as the cold air coming in through the cracked windows warmed up, it would be able to hold more water vapor. Relative humidity is temperature dependent and another important variable in calculating evaporation rates.


    BTW, the heaters you claim were present would also cause air flow as they added heat to the room. Hot air rises, cold air falls, nice little cycle going there.

    Don't be ridiculous. You know damn well what I mean. I mean there were no fans or ventilation. There was plenty of cold air coming through the windows, which were single panel glass that did not shut well. As I recall, one of them was cracked. Post-war Japanese concrete buildings by that time were warped and falling to pieces. The windows would not shut.


    If there was no ventilation (natural or man-made), there would have been no air flow into or out of the room, i.e. it was sealed. In a sealed room, human beings consume the O2 and exhale CO2. Eventually, CO2 levels become toxic and/or O2 levels get too low and the humans asphyxiate. That obviously didn't happen, therefore the room was not sealed, therefore there was SOME ventilation. Your 'no ventilation' is obviously wrong. Another phrase for 'ventilation' is 'air flow'. Air flow is a critical variable in calculating evaporation rates. Without air flow information, evaporation rates cannot be calculated. Which doesn't matter anyway, since the experiment was never replicated.
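    The sealed-room argument can be checked with a back-of-envelope calculation. Every number below is an assumption for illustration (a modest room, two occupants, a commonly cited resting CO2 generation rate), not data about the actual room.

```python
# Back-of-envelope: how long a truly sealed, occupied room takes to reach
# a dangerous CO2 level. All inputs are illustrative assumptions.
room_volume_m3 = 50.0       # assumed room volume
occupants = 2
co2_rate_m3_per_h = 0.02    # ~20 L/h CO2 exhaled per resting person (typical estimate)
ambient_ppm = 400.0
limit_ppm = 40000.0         # ~4% CO2, roughly where acute toxicity sets in

# With zero ventilation, CO2 concentration rises roughly linearly
# (ignoring the small displaced volume and O2 depletion):
rise_ppm_per_h = occupants * co2_rate_m3_per_h / room_volume_m3 * 1e6
hours_to_limit = (limit_ppm - ambient_ppm) / rise_ppm_per_h
print(f"CO2 rise: {rise_ppm_per_h:.0f} ppm/h; ~{hours_to_limit:.0f} h to ~4% CO2")
```

    Under these assumptions the room becomes dangerous within a couple of days of continuous occupancy, so a lab used day after day necessarily has some air exchange, which is the point about 'no ventilation' above.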


    P. S. Your 'plenty of cold air coming through the (cracked) windows' is ventilation.


    Also, you don't understand Pd unloading and D2+O2 oxidation on catalysts very well either.

    You are saying that peer-reviewed journal papers by Fleischmann are not credible.


    Yes. But I also give the reasons. A.) Inaccurate calorimetry in most cases. B.) In some cases, failure to prove the accuracy/precision of a presumed measurement method (specifically the 'video stills of foaming' method for detecting supposed 'heat-after-death' events). And I put the objections in writing for others to examine and critique. Unlike Fleischmann, who just gets irritated at his critics and throws their papers out. Hint: Fleischmann is not a god. His SERS experience should have told you that.


    You are telling us that a Fellow of the Royal Society writing in a peer reviewed journal is not credible.


    Yes. Appeals to authority prove nothing, JR. Garbage gets published in peer-reviewed journals all the time. Peer review just minimizes the quantity of it. Further, the CF field is well known for publishing its own stuff after 'within the group' peer review, i.e. after highly pro-biased reviewing.


    Who do you think you are?


    I am an equivalently trained chemist with research experience in several areas relevant to the 'cold fusion' arena. Fleischmann's pedigree is not substantially different from or better than mine. His publication record is better because he went the academic route, while I went the industrial one. That usually means he would get at least 10X the publications that I do. However, that does not guarantee the correctness of said publications. Furthermore, if you'd ever read what I write, you'd understand that my view is that the whole field of CF pre-2002 was caught by an unrecognized systematic error. The failure came when they ignored my discovery and proceeded as if I was wrong.