Research Team in Japan Reports Excess Heat - (Nissan Motors among others)

  • LINR,


    17 authors from 4 universities and 2 companies is in itself some form of independent verification. In addition, this is from their report:


    "Reproducibility at different laboratories: Providing two divided sample powders of PNZ-type from

    same-batch fabricated powder, independent parallel test runs were carried out at Kobe University and Tohoku University. Results of excess heat generation data from both laboratories were very reproducible for room-temperature and elevated-temperature conditions. Thus, the existence and reproducibility of new exothermic phenomenon by interaction of nano-metal composite samples and H(D)-gas have been confirmed."


    P.S. This is my response to LINR's post moved to Clearance.

  • The CF/LENR field has had the same results for the past 29 years: A few carefully selected scientists at a few carefully selected labs were able to duplicate the results. Then, it stops at that point without any more progress. My theory for why this happens is that as individuals outside of LENR (with expensive, accurate, and contamination-free equipment) try to reproduce the results, they fail.

  • To show confidence in their results, I ask that they send their device to at least two Ivy League, Caltech, or MIT laboratories for pass/fail tests with full disclosure of the results but not of any IP. If it passes all of those tests without any involvement or intervention by LENR proponents, then I will believe that particular device has a real effect which requires further study.

  • Then, it stops at that point without any more progress. My theory for why this happens is that as individuals outside of LENR (with expensive, accurate, and contamination-free equipment) try to reproduce the results, they fail.

    Your theory explains something that did not happen. Cold fusion was widely replicated, and these replications were published in mainstream journals. The methods were explained carefully, in detail, and they could be used by experts again. The reason there are no present replications is not because replications failed, but because of academic politics. Any professor or national lab researcher who proposes a cold fusion experiment will be fired. His or her reputation will be destroyed in the mainstream media. That happened to dozens of researchers. That is why no one will do the experiment today. It has nothing to do with technical problems or with your imaginary explanation.


    New ideas and discoveries in science, such as the laser and the MRI, have often been suppressed. Researchers have often been fired for promoting them. No one should be surprised this happened with cold fusion. Fleischmann predicted that it would. It is unusual that the opposition has gone on so long, and succeeded so well.


    It is also no surprise that people like you, who read nothing and know nothing, imagine that there is some other cause. Read about the laser and other suppressed discoveries and you will find many people said the same thing you are now saying.

  • Jed Rothwell

    "New ideas and discoveries in science, such as the laser and the MRI, have often been suppressed."


    The recently deceased Calestous Juma cites 600 years of opposition to technological innovation,

    everything from tea harvesting to solar power


    https://theconversation.com/wh…ns-greatest-threats-62502


    Opposition is part of human nature, based not on facts but on a perception of loss

    ..loss of income, identity, worldview or power.


    The president of Tokyo University, Arima, a nuclear physicist, is reputed to have said in 1989 that

    if cold fusion turned out to be real he would quit his job, shave his head and become a Buddhist monk.


    Arima has not lost all his hair as far as I know... perhaps his worldview remains the same at the age of 87.

    The recently deceased Calestous Juma cites 600 years of opposition to technological innovation,

    everything from tea harvesting to solar power


    https://theconversation.com/wh…ns-greatest-threats-62502


    Opposition is part of human nature, based not on facts but on a perception of loss

    ..loss of income, identity, worldview or power.

    I agree. That's a good essay. I would add two factors not described in the essay (which may be in the book by the same author):


    1. Machiavelli's principle:


    "It must be considered that there is nothing more difficult to carry out nor more doubtful of success nor more dangerous to handle than to initiate a new order of things; for the reformer has enemies in all those who profit by the old order, and only lukewarm defenders in all those who would profit by the new order; this lukewarmness arising partly from the incredulity of mankind who does not truly believe in anything new until they actually have experience of it."


    2. People have an instinctive fear of novelty. They are afraid of the unknown. This instinct is at war with curiosity, which is another instinct. (It often happens that instincts conflict, as in "fight or flight" behavior.) This fear of novelty is described in this collection of quotes:


    http://amasci.com/weird/skepquot.html


    Especially:


    "If we watch ourselves honestly we shall often find that we have begun to argue against a new idea even before it has been completely stated."

    - Wilfred Trotter

  • To show confidence in their results, I ask that they send their device to at least two Ivy League, Caltech, or MIT laboratories for pass/fail tests with full disclosure of the results but not of any IP.

    That is ridiculous. No one at Caltech or MIT is capable of doing a cold fusion experiment, or has the slightest interest in doing one. To do an experiment takes months or years of effort, and PhD-level skills. What on earth makes "lenrisnotreal" think there would be professors at either of these institutions willing to drop everything and spend the next few years learning how to do these experiments? This is like asking them to take over the controls of a tokamak plasma reactor.


    It is not as if you plug the machine in and it gives you an answer. This comment reminds me of someone years ago who suggested that a skeptic should visit a cold fusion experiment with a helium detector and, when no one is looking, slip the detector out of his pocket, take a sample of gas, and see how much helium is in it. That's not how it works. The photo below shows the kind of detector you need. Please note:

    • It does not fit in your pocket.
    • It does not produce an instant answer.
    • You have to learn a lot about how to use it before you get a meaningful answer. An ordinary prof. at Caltech or MIT would not know which end is up.
    • The experiment has to be designed around it.


    EneaMassSpec1.jpg


    Another problem is that if anyone at Caltech or MIT did express interest in doing a cold fusion experiment, they would be fired on some trumped-up charge, tenure be damned.

  • LINR " ask that they send their device to at least two Ivy League, Caltech"


    Jed Rothwell "To do an experiment takes months or years of effort, and PhD level skills."


    Kobe/Tohoku Summary " In the first year (2015-2016) program, a new highly accurate oil mass-flow calorimetry system was installed at Tohoku University. The system was designed by improving performances of the already existing MHE calorimetry system (500 cc reaction chamber and many operation components) at Kobe University. We fabricated components and assembled at ELPH (Electron Photon Science Research Center) of Tohoku University. In July 2016, main body of system was constructed and started to make performance test in open room, and after two months for primary tests, the system settlement was finished in a new temperature-controlled (within ±0.1℃) room. Evaluated accuracy of the calorimetry system is satisfactory, namely less than ±1.5%error in thermal-power measurement, less than ±0.1℃ error in temperature detection by thermos-couples and RTDs, and less than ±2% error in thermal flux measurement. The new system has started to be used for the collaboration experiments of 6-parties-joint team since August 2016

    Sending the 'device' to Ivy League nirvana, as LINR advocates, is... NR, not real. The whole measurement/control system must be sent, plus instructions and instructors.

    Perhaps after 2 weeks reading LINR will be less NR.

    Speaking of calorimetry: at minute 4:50 in the CFN McKubre podcast (Cold Fusion Now: New podcast with Michael McKubre) he speaks of it. Interesting. He starts off by saying "we brought calorimetry kicking and screaming into the 21st century. It was a medieval discipline rarely used the way FPs needed to use it". Then he goes on to describe how it was improved and tailored to meet the needs of LENR research.


    Speaking of which, one thing I have noticed is how so many in the field seem to reinvent that wheel, as they come up with their own version of an accurate calorimeter. Even in this Japanese report, they talk of painstakingly developing their own. Miles had his own way, even wrote a how-to guide on it. SRI theirs. MFMP most of all, publicly went through their own trial and error period, before settling on a system that is both portable and accurate. LFH sells a kit that is probably a little different from everyone else's also.


    Is it good to have so many variations? Seems to me, that if one model was agreed upon by all to be the best, then all used it, everyone would at least be starting on the same page. But that is just my layman's perspective.

  • Is it good to have so many variations? Seems to me, that if one model was agreed upon by all to be the best, then all used it, everyone would at least be starting on the same page. But that is just my layman's perspective.

    I think it is better to have many variations, for the following reasons:


    If they all used the same type of calorimeter, people might suspect they are all making the same systematic error. Since the systems are different there cannot be one systematic error.


    Different types of calorimeters have different strengths and weaknesses.


    Some are well-suited to one kind of experiment. For example, Melvin Miles' equipment worked well with the method of helium collection that he decided to use.


    Some calorimeters are very expensive, such as the one at SRI. Most researchers could not afford them, but it is a good thing that a few researchers could afford them.

    Speaking of calorimetry: at minute 4:50 in the CFN McKubre podcast (Cold Fusion Now: New podcast with Michael McKubre) he speaks of it. Interesting. He starts off by saying "we brought calorimetry kicking and screaming into the 21st century. It was a medieval discipline rarely used the way FPs needed to use it". Then he goes on to describe how it was improved and tailored to meet the needs of LENR research.


    Speaking of which, one thing I have noticed is how so many in the field seem to reinvent that wheel, as they come up with their own version of an accurate calorimeter. ..

    Even if, as Jed explains, variations in instruments are a good thing for cross-checking, reinventing the wheel in each lab, especially the tricks, is for me the key to today's problems.

    Not enough capitalization of know-how...

    As if an electrician had to rebuild his galvanometer each time, and also discover all the artifacts .

  • Even if, as Jed explains, variations in instruments are a good thing for cross-checking, reinventing the wheel in each lab, especially the tricks, is for me the key to today's problems.

    Not enough capitalization of know-how...

    As if an electrician had to rebuild his galvanometer each time, and also discover all the artifacts .


    One thing for sure is that reinventing the calorimeter burns up a lot of extra time, and money. It may be for the better as Jed says, but the Japanese team took one year to get their system up and running:


    "In the first year (2015-2016) program, a new highly accurate oil mass-flow calorimetry system was
    installed at Tohoku University. The system was designed by improving performances of the already
    existing MHE calorimetry system (500 cc reaction chamber and many operation components) at Kobe
    University. We fabricated components and assembled at ELPH (Electron Photon Science Research
    Center) of Tohoku University. In July 2016, main body of system was constructed and started to make
    performance test in open room, and after two months for primary tests, the system settlement was
    finished in a new temperature-controlled (within ±0.1℃) room. Evaluated accuracy of the calorimetry
    system is satisfactory, namely less than ±1.5%error in thermal-power measurement, less than ±0.1℃
    error in temperature detection by thermos-couples and RTDs, and less than ±2% error in thermal
    flux measurement."


    That is one less year for the research, in addition to the added cost of manpower and materials. I think MFMP wasted 2-3 years as they perfected their set-up.

  • "±1.5% error" is probably necessary when you are looking for excess heat of 5-10% of input

    as the Kobe group reported earlier on


    Once you have excess heat running at the 50 to 100% level, less accuracy is needed

    such as with air calorimetry, as Mizuno used, which is cheaper and more flexible

    to accommodate different reactor configurations.


    If and when economic levels of excess heat of 200% are reached,

    where durability, controllability and nano-preparation need to be optimised,

    other assays of LENR activity, e.g. helium/xenon/He-3/nitrogen levels, may be more informative

    rather than just the temperature sensors.
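
The trade-off sketched above, between the accuracy required and the size of the claimed excess, can be put in rough numbers. A minimal sketch (the fractions are illustrative figures from this thread, not from any report):

```python
# Rough signal-to-noise for the cases described above. All fractions are
# illustrative figures taken from this thread, not from any report.

def signal_to_noise(excess_fraction, accuracy_fraction):
    """Claimed excess heat divided by calorimeter uncertainty,
    both expressed as fractions of input power."""
    return excess_fraction / accuracy_fraction

# 5% excess measured with a +/-1.5% calorimeter
print(round(signal_to_noise(0.05, 0.015), 1))  # 3.3

# 50% excess measured with cruder +/-10% air calorimetry
print(round(signal_to_noise(0.50, 0.10), 1))   # 5.0
```

On these made-up numbers, a 5% excess seen with a ±1.5% calorimeter and a 50% excess seen with ±10% air calorimetry carry comparable statistical weight, which is the point being made.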

  • bocjin,


    I read this last week on 22passi, and thought of you instantly. Thought of copying it here, but forgot about it until your post. Getting old I guess. :) . It was a response from the GSVIT guy (Mario Massa) to this Japanese report. I found it interesting in that he has always criticized LENR (an avowed skeptic) for lack of signal-to-noise ratio, and here he complains of the opposite. Or at least that is how I interpret it:


    "I limit myself to highlighting some points of that publication.

    1 - The project aims to verify the existence of exothermic reactions.

    This means that they do not believe they have had sufficient experimental results to declare that they really exist.

    2 - In the first year of collaboration they developed the calorimeter (oil flow). For further months, until August 2016, after having placed it in a special room thermostated to a tenth of a degree, they tested and verified it.

    One has to think they are looking for a very small excess, otherwise it is hard to understand what need there would be for such an effort to build so precise a calorimeter (except that, perhaps by mistake, it is said that temperatures are read with an error of less than 0.1 °C; in the calorimeter built for Focardi 20 years ago, I used PT100 thermometers with 0.01 °C precision). The question is: if you are still looking for a few W in excess, how can you declare that in 5 years you will produce boilers for our houses? Did they meet Andrea Rossi?

    3 - Throughout the document excess heat is spoken of. There is talk of an excess of 26,000 MJ/mole of hydrogen (or deuterium). This number is impressive, being 100,000 times higher than the calorific value of hydrogen. Even if it is a printing error (i.e. it was 26) and the correct value is that indicated shortly after (85 MJ), equal to 400 times the calorific value of hydrogen (the meaning of the two different measures is not clear to me, one referred to a volume "transferred" and the other adsorbed, since, if I interpret the text well, I would expect the second to be larger than the first), it is clear that we would not be in the field of chemical reactions.

    I would like to point out, however, that an excess of heat has no meaning except to estimate how much fuel is needed to obtain it. A number, albeit huge, gives no indication of the reliability of the data. That is, you should not think: "ok, they will be wrong, but if they say such a large number, there will be something for sure".

    In fact, an excess of heat is measured as the product of an excess power and a time. If you use a long enough time you can declare huge excess heat, even though the measured power was actually small and therefore carries all the uncertainty that we know.

    For example, already 25 years ago Piantelli declared energy excesses of 100 MJ, leaving his cell in operation for many months. The power measured in excess was actually a few W. And after 25 years, despite the support of companies like Fiat, we know what results it has arrived at.

    4 - At one point (at the end of page 3) excess power is discussed: 10-24 W is mentioned, but it is not said what input power was necessary to maintain it. If, as was the case with Piantelli and Focardi, it was necessary to supply well over 100 W, it falls among those measures (COP = 1.1) that are unconvincing unless all the data are available, including the construction drawings of the calorimeter. If instead it was obtained with a small input power, it is not clear why this was not indicated, as it would make all the difference.

    5 - The document reports on the analysis of the components before and after working. These are data that in my opinion do not confirm anything, as we have already seen with Rossi's copper."

  • Massa Mario "This means that they do not believe they have had sufficient experimental results to declare that they really exist. "

    The researchers call their results modest in the ICCF20 report.


    Massa Mario tends to exaggerate for rhetorical effect, as with asserting that Mizuno does not know how to calculate circumference.

    I fear that his website is not well visited, so perhaps he feels the need to shout out some sensation to get attention.


    Science is all about verification and measurement. Also the Kobe researchers Kitamura et al are trying to find alloys which give a better effect.

    Apparently a mixture of palladium and nickel in a 1:10 ratio has been found to give the best excess heat.


    "The maximum excess power is about 10 W in the PNZ3#3-3 and #3-5 phases with the input power of (W1, W2) = (94 W, 40 W)"


    The researchers know as well as you and I and Mario that a ~10% excess-heat-to-input ratio is practically useless, which is why they are proposing 5 years to develop some kind of industrial application, which might give them some funding leeway to get up to the 200% excess-heat level somehow.

    It's no use saying three years and getting only two years' funding... I'd go for more.

    Nissan and Toyota probably have plenty of competing uses for the small amount of yen they are donating.

  • If they all used the same type of calorimeter, people might suspect they are all making the same systematic error.


    Exactly. And the data analysis methodology is an integral part of the calorimetric method, just as the construction of the equipment is. All cold fusion calorimetrists use a lumped-parameter approach in their data analysis, regardless of their design (some designs, such as the single-point measurements of isoperibolic calorimetry, can't avoid this; there is no other option), and this includes Takahashi et al. Thus they are all doing what Jed clearly identifies as a poor method. The papers put out by Takahashi covering the work in the recent report are another example of this, and what's worse is they show that their system is highly heterogeneous. Yet they still use the lumped-parameter approach (i.e. one equation to describe the calorimeter), which implicitly assumes that there are no significant temperature variations. Thus they are set up for a CCS problem. Their calorimeter captures about 75% or so of the input heat (as opposed to the 98-99% levels reached by Storms and McKubre), which makes the potential impact of a CCS larger.
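
The CCS scenario described here can be sketched with a toy two-zone calorimeter: calibrate with a single lumped constant, then shift where the heat is released. Everything below is invented for illustration; it is not a model of the actual Kobe/Tohoku system.

```python
# Toy two-zone calorimeter illustrating the CCS failure mode: a single
# lumped calibration constant misreads output power when the internal
# heat distribution shifts. All numbers are invented for illustration.

def measured_output(p_zone1, p_zone2, eff1=0.99, eff2=0.75):
    """Heat reaching the sensor; each zone has its own capture efficiency."""
    return eff1 * p_zone1 + eff2 * p_zone2

# Calibration run: a joule heater puts all 100 W into zone 2
p_cal = 100.0
k = measured_output(0.0, p_cal) / p_cal    # lumped constant = 0.75

# "Active" run: same 100 W total, but 30 W now released in zone 1
raw = measured_output(30.0, 70.0)          # ~82.2 W at the sensor
inferred = raw / k                         # ~109.6 W apparent output
print(round(inferred - p_cal, 1))          # ~9.6 W of spurious "excess heat"
```

With zero real excess, the shifted heat distribution alone produces an apparent gain; a position-resolved calibration, or the many-thermocouple cross-check, is the kind of thing that would catch it.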


    Further, they don't do standard error analysis (propagation of error) to establish their accuracy and precision, and they fail to treat the calibration constants as experimentally determined numbers (which they are), so their error-band calculations are incorrect by normal standards.


    All in all, not a very good situation to claim extraordinary results from.


    I will also note that Takahashi et al. claim they were trying to duplicate the Kitamura et al. Physics Letters A paper that I tried to publish a Comment on. The manuscript for that Comment is included in my previously mentioned whitepaper. In it I show that those results were consistent with prior research on those materials, and thus point out a non-extraordinary explanation for the observations. If Takahashi is replicating Kitamura, my explanation applies to their work also.

  • Kirkshanahan


    "they don't do standard error analysis (Propagation of Error) to establish their accuracy and precision"


    This appears to be 100% assertion on Kirkshanahan's part.


    I did propagation of error in high school physics experiments when the Vietnam war was young.

    Taking account of the overall system errors by considering component contributions is standard


    Where does it state that Takahashi et al have specifically ignored propagation of error in the 2016 report?


    Here: http://vixra.org/abs/1612.0250


    or here

    " Evaluated accuracy of the calorimetry system is satisfactory, namely less than ±1.5%error in thermal-power measurement, less than ±0.1℃ error in temperature detection by thermos-couples and RTDs, and less than ±2% error in thermal flux measurement. "

    Anomalous Heat Effects by Interaction of Nano-Metals and H(D)-Gas

    Authors: A. Takahashi, A. Kitamura, K. Takahashi, R. Seto, T. Yokose, A. Taniike, Y. Furuyama

  • “This appears to be 100% assertion on Kirkshanahan's part.”


    You’re not very good at discerning things, are you? You should actually read the relevant papers and such.


    “I did propagation of error in high school physics experiments when the Vietnam war was young. Taking account of the overall system errors by considering component contributions is standard”


    Good for you. Yes, POE is a standard technique as I said. You can check here for a description (https://en.wikipedia.org/wiki/Propagation_of_uncertainty), but it has been around for a long time. I have a book by Hugh D. Young (“Statistical Treatment of Experimental Data”) written in 1962 that discusses it on pages 3-9 and 96-101. Any good text on the subject will cover it. Which is why it is so sad that these CF researchers don’t do it right.


    “Where does it state that Takahashi et al have specifically ignored propagation of error in the 2016 report?”


    Y. Iwamura et al. / Journal of Condensed Matter Nuclear Science 24 (2017) 191–201


    Page 195 has the relevant info. They use the approximate equation


    δ(HEX) ≈ |δ(FR)| ρ C ΔT / ɳ + |δ(ΔT)| FR ρ C / ɳ + |δ(W)|    (their eqn. (3))


    (Hopefully the Greek letters and symbols will show up properly in the forum post.)


    to compute the error instead of the POE equation. Note that their ‘δ’ is some sort of approximation to the normal standard deviation, σ. It is mathematically incorrect to add sigmas, because it is variance, σ², that is additive.
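
The difference between adding sigmas linearly, as an eqn (3)-style sum does, and combining them in quadrature can be seen with a few hypothetical component errors (the fractions below are invented for illustration; they are not the paper's values):

```python
import math

# Hypothetical fractional errors for three component terms (say flow rate,
# temperature difference, input power). Invented values, not the paper's.
sigmas = [0.015, 0.001, 0.02]

linear_sum = sum(sigmas)                           # adding sigmas, as eqn (3) does
quadrature = math.sqrt(sum(s**2 for s in sigmas))  # standard POE combination

print(round(linear_sum, 3))   # 0.036
print(round(quadrature, 4))   # 0.025
```

The linear sum always exceeds the quadrature combination, so adding sigmas overstates the error band; conservative, but still not the standard formula.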


    They also state:


    “Considering that experimental variables are FR, ΔT and W, we can assume that error range of the calculated excess heat is the sum of fluctuations of oil flow rate, temperature difference and input electrical.”


    Note the word “assume” and the lack of ɳ, the ‘calibration constant’, being listed as a variable (which it is as it is experimentally determined).


    I did some back-of-the-envelope calcs and the ɳ term can be significant. I also checked up on their use of density and heat capacity and found that they may be oversimplifying as well. See:


    “A Mass-Flow-Calorimetry System for Scaled-up Experiments on Anomalous Heat Evolution at Elevated Temperatures” A. Kitamura, A. Takahashi, R. Seto, Y. Fujita, A. Taniike and Y. Furuyama, J. Condensed Matter Nucl. Sci. 15 (2014) 1–9

    where you will find Figure 2 showing those quantities as functions of temperature, which convinced me that using an average temperature in their calibration equation could also add 10-20% errors in.


    All in all, very sloppy error analysis. My B-O-E calcs led me to believe 3-sigma levels of 20-30% of input power are easily obtained, and since we are talking about radical new physics here if LENR is real, we should probably use the 5-sigma level, just as was used in searching for the Higgs boson, which gives roughly 30-75% of input power for the error bars. None of the presented results are out of that range, I believe.


    And don't miss that _here_ I _am_ talking about random error. The heterogeneous temperature distributions however can potentially allow for a systematic error as well (the 'CCS'), just like in the case I examined in my 2002 publication on the reanalyzed Storms data, on top of the random error.

     

  • Kirkshanahan "they don't do standard error analysis"

    You suggest that the errors in ɳ and the averaging of ΔT, C and ρ contribute large errors that Iwamura et al. have ignored.

    You write such things " could also add 10-20% errors in."

    Does that mean 10-20% of their calculated error figure or 10-20% of the actual calculated quantity?

    For example Iwamura et al get 0.26W error for 80W input (with an excess heat rate of 5W)

    Are you suggesting that the HEX error is up to 0.26 W + 20% = 0.31 W?

    or +20% of 5 W = +1 W?


    In the latter case this would mean your error calc is 5 times bigger than that of Iwamura et al.

    This is an extraordinary difference.

    You wrote B-O-E. What values of ɳ, ΔT, C and ρ are you plugging into your B-O-E?

    For the 80 W input case, Iwamura et al. have written these error estimates:

    oil flow-rate error = 0.012 ml/min

    oil temperature-change error = 0.261 K

    input energy rate error = 0.031 W

    excess heat energy rate error (HEX) = 0.260 W


    What are your estimates for these?
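
For what it is worth, the possible readings of "adds 10-20% errors in" give very different numbers. A quick sketch using the figures quoted in this post (80 W input, 5 W excess, 0.26 W quoted error); the percent-of-input reading is included because the B-O-E sigma was later quoted as a fraction of input power:

```python
# Three possible readings of "adds 10-20% error", using the figures quoted
# in this post: 80 W input, 5 W excess, 0.26 W quoted HEX error.
input_w, excess_w, quoted_err_w = 80.0, 5.0, 0.26

inflate_quoted = quoted_err_w * 1.20  # 20% on top of the quoted error
of_excess = 0.20 * excess_w           # 20% of the claimed excess heat
of_input = 0.20 * input_w             # 20% of the input power

print(round(inflate_quoted, 3), of_excess, of_input)  # 0.312 1.0 16.0
```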

     

  • That's better.


    This would be a useful example (of no binding significance, but still helpful) to investigate the way that systematic errors might, or might not, be significant in such CF experiments.


    Many temperature measurement points enable us to judge whether observed excess anomalous heat is real or not.


    From p. 192. We'd therefore hope that errors due to a different temperature distribution inside the reactor could be detected or eliminated. But I won't assume that is the case!


    Also, there may be any number of approximations in the calculations given there. But if such approximations are truly conservative I see little harm. I'd still rather have more precision, because a precisely measured quantity can sometimes indicate an anomaly that otherwise would be overlooked, and might reveal a methodological issue brushed under the carpet.


    Error bound calculation. In this case summing standard deviations instead of combining them in quadrature is conservative, and unproblematic, when establishing error bounds. Kirk: as you know, I often find your posts here invaluable: you bring facts and analysis to the table that others overlook. You might properly point out that such over-approximation of error bounds (especially without note) indicates a possibly cavalier approach to statistical analysis. But I can't see any way that it harms the subsequent analysis, except to weaken slightly the experimental results. The more serious issue in the given calculation is that it makes no allowance for a change in efficiency due to differences in reactor conditions between calibration runs and real ones. A better error calculation would note that sensitivity and therefore allow CCS to be eliminated or detected.


    Other errors. Kirk implies there are other errors in the Iwamura analysis. I can't see that a presupposition that there are (Kirk, perhaps) or are not (bocijn, perhaps), made without careful examination, does anyone any good. As with the FTL neutrino measurements, when results are anomalous everyone properly expects that some subtle or obvious error likely underlies the analysis, and they are usually right.


    I'm a bit pressed for time now, but I find all these Japanese experiments interesting in that they have multi-lab replication (though that, from the write-ups so far, does not rule out flawed and replicated methodology) and claim results that are strong given their expensive calorimetry. However, purely from the outside, I note the high temperature and therefore higher heat losses (Kirk mentions 75%-ish efficiency). At lower efficiencies anything that changes the efficiency, and therefore the calibration factor, is more significant. This is therefore a prime candidate for CCS-style effects.


    I personally like things to be understood. These Japanese results are well enough documented now that an external observer could look at them and note flaws to be addressed, or not. It looks like they have enough support to address any flaws.


    In the process of doing this a better understanding will out. Otherwise, either real flaws remain unaddressed, or real anomalous behaviour remains unremarked by the outside world.


  • To calculate the error (a value-laden term, ‘standard deviation’ is more correct and usual, but the term ‘error’ is ubiquitous) in a quantity computed from experimental variables, the Propagation of Error (POE) or Propagation of Uncertainty formula is used. In the following I will not be using Greek letters, I will use the standard English ones instead. I here define:


    in general, x_i is used to indicate ‘x subscript i’

    s = standard deviation, as per the normal (or random) distribution equation

    p(x) = partial of x, or with respect to x, i.e. p(R)/p(x) is the partial derivative of the function R with respect to x

    n = 1/eta = the reciprocal of the heat-capture efficiency as defined in the referenced paper (note that in other papers they sometimes call eta Rh; below I use the reciprocal for simplicity, to make the math easier to follow); it is also the ‘calibration constant’ and is determined by ‘calibrating’

    SUM [ ] – indicates a summation of the terms inside the brackets

    r = rho – here meaning the density

    C_p = C sub p – the heat capacity or specific heat of the calorimeter fluid

    dT = delta T = a temperature difference


    The POE formula: given a function R of several experimental variables x_i, the variance in R is computed as the sum of terms, each composed of the partial derivative of R with respect to x_i, squared, times the standard deviation of x_i, squared, i.e.

    (s_R)^2 = SUM[ (p(R)/p(x_i))^2 * s_x_i^2 ]


    In our case R = P_out = n * F * r * C_p * dT, so the partials are: p(R)/p(n) = F * r * C_p * dT, p(R)/p(F) = n * r * C_p * dT, p(R)/p(r) = n * F * C_p * dT, p(R)/p(C_p) = n * F * r * dT, p(R)/p(dT) = n * F * r * C_p


    So the variance of P_out is the sum of the squares of those partial terms, each multiplied by the corresponding variable's observed variance (variance being the standard deviation squared).


    Now a little simplifying trick. In simple equations such as this, one can divide the summation by the square of the function, and each term then converts to the variance of variable x_i divided by x_i squared, which equals the standard deviation divided by the variable, all squared. This then allows one to talk about uncertainty using fractions of the variables (or percentages if desired).


    So, we get

    s_R^2 / Pout^2 = (s_R / R)^2 = SUM[ s_x_i^2 / x_i^2 ] = SUM[ (s_x_i / x_i)^2 ]    (‘R’ = Pout)


    It is now easy to see that variables with small fractional variation will contribute little to the overall sum.
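
That relative-variance shortcut is easy to verify numerically for a pure product like P_out. A sketch with invented values and uncertainties (none of these are the paper's numbers):

```python
import math

# Numerical check of the relative-form identity for the pure product
# R = n * F * r * C_p * dT. All values and uncertainties here are invented.
n, F, r, Cp, dT = 1.3, 10.0, 900.0, 2.3, 5.0
x = {"n": n, "F": F, "r": r, "Cp": Cp, "dT": dT}
s = {"n": 0.05, "F": 0.1, "r": 70.0, "Cp": 0.3, "dT": 0.26}

R = n * F * r * Cp * dT

# Full POE: for a product, the partial with respect to x_i is just R / x_i
var_R = sum((R / x[k] * s[k]) ** 2 for k in x)

# Relative form: (s_R / R)^2 = SUM[(s_x_i / x_i)^2]
rel = math.sqrt(sum((s[k] / x[k]) ** 2 for k in x))

print(round(math.sqrt(var_R) / R, 6))  # ~0.1654
print(round(rel, 6))                   # same value
```

The two routes agree exactly, which is why the fractional form is a safe shortcut for product-type calibration equations.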


    Takahashi et al. focus only on F and dT, as noted by bocijn. F is not specified in the paper, so we have to make assumptions; for consistency with the authors’ assertions, let’s assume the fraction s_F/F is about 0.01 (or 1%). Squared, that is 0.0001. It may be less than that, but we don’t know for sure. dT is given as 0.26 degrees. Shall we use 300 C = 573 K? Then the ratio is 0.26/573 = 0.00045, squared 2.06e-7, so if we consider only these two variables, as the authors do, the fractional error in Pout is very small (~1%).
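    A quick check of that two-variable estimate (the 1% flow fraction is the assumption stated above, not a value from the paper):

```python
# Two-variable fractional error: assumed s_F/F = 1%, plus the text's ratio
# of dT = 0.26 K over an assumed 573 K absolute temperature.
f_F  = 0.01
f_dT = 0.26 / 573.0                   # ~ 4.5e-4, squared ~ 2.1e-7
frac = (f_F ** 2 + f_dT ** 2) ** 0.5  # dominated entirely by the 1% flow term
```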


    How about C_p and r? The authors ignore them. We can compute their values from the equations in Figure 2 of the paper J. Condensed Matter Nucl. Sci. 15 (2014) 1–9 for the temperature range the authors cite (200–300 C), and we get 2.18 <= C_p <= 2.52 J/g-K and 852 <= r <= 921 kg/m^3, i.e. spans of 0.34 and 69, or fractions of roughly 0.13 and 0.081, or as percentages 13% and 8%. That is actually a significant span. In their equations the authors use the average temperature, but they don’t define what that is based on: the 200–300 C range, or a 25–300 C range (which would be much worse than the numbers given here). So, depending on the temperature variation of the experiment, one could get standard deviations in the excess power on the order of 10–15%.
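    The span fractions quoted above can be reproduced directly from the Figure 2 ranges (note that the quoted 0.13 and 0.081 correspond to the divisor choices used below):

```python
# Cp and density ranges over 200-300 C, from the Figure 2 fits cited above.
cp_lo, cp_hi   = 2.18, 2.52    # J/(g*K)
rho_lo, rho_hi = 852.0, 921.0  # kg/m^3

f_cp  = (cp_hi - cp_lo) / cp_hi     # ~ 0.13 (13%)
f_rho = (rho_hi - rho_lo) / rho_lo  # ~ 0.081 (8%)
```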


    Now we have to decide how to judge whether we are ‘out of the noise’ or not. Do we use the standard 3 sigma, or, given the import and unlikelihood of the results, get tougher and use 5 sigma like the Higgs boson hunters? 3 sigma is 30–45%, 5 sigma is 50–75%. So on an input power of 134 W, that gives a noise band of +/- 40–60 W or 67–100 W. The authors report: “The peak ratios of excess heat to input power were about 4% and 5% for 80 and 134 W input, respectively.”
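    The noise-band arithmetic above, spelled out using the 10–15% fractional sigma estimated earlier:

```python
# Noise bands implied by a 10-15% fractional sigma on a 134 W input.
P_in = 134.0  # W
bands = {}
for frac in (0.10, 0.15):
    bands[frac] = (3 * frac * P_in, 5 * frac * P_in)  # (3-sigma, 5-sigma) in W
# 10% sigma -> +/- 40.2 W (3 sigma) or  67.0 W (5 sigma)
# 15% sigma -> +/- 60.3 W (3 sigma) or 100.5 W (5 sigma)
```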


    To summarize: using an average T in the excess heat calculation greatly increases the error, and the assumption that density and specific heat are of no import becomes wrong. The reported results are well within a reasonable noise band. This could be fixed by not using an average T, of course, but that is not what the authors say they did.


    Note that I haven’t talked about n yet. So what about it? It is certainly an experimental variable, determined via an experimental process, so it should be evaluated for its contribution to the noise band on excess heat. But, once again, we have no information on its precision, so we have to resort to ‘sensitivity analysis’ to decide whether possible errors in it are important. Having converted to the fractional approach for computing Pout’s error, we can simply state that n’s fractional error has a directly proportionate effect. If n’s standard deviation is 1% of its value, that alone will cover the 4–5% values reported in the quote above. If it is bigger, it will allow for larger errors, and those potential results would be deemed ‘in the noise’ as well.
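    A minimal sensitivity check on n (all other values are hypothetical placeholders):

```python
# Because Pout = n * F * r * Cp * dT is linear in n, a fractional error in n
# maps one-to-one into a fractional error in Pout.
def pout(n, F=20.0, r=880.0, Cp=2.3, dT=0.26):  # illustrative defaults
    return n * F * r * Cp * dT

rel_change = pout(1.01) / pout(1.00) - 1.0  # a 1% bump in n -> 1% in Pout
```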


    All of the above is standard error computation based on random statistics. Because the temperature variations inside the calorimeter are quite large, as shown in the Figures, the possibility of a CCS (calibration constant shift) exists. One would need to replicate the experiment many times, specifically including replication of the calorimeter/cell assembly, which is likely where the nonuniformity is first introduced. Running multiple runs on one batch of powder would not even touch this issue. The other way to address it is the same as I suggested for the F&P-type cells: put at least two resistive heaters in the powder and run them at different power levels to see whether the calibration holds when the temperature distribution is changed.

  • Kirkshanahan

    Are you suggesting that the HEX error is up to 0.26 + 20% = 0.31 W?

    Or is it +20% of 5 W = +1 W?


    In the latter case, your error calculation would be five times bigger than that of Iwamura et al.

    This is an extraordinary difference.


    You wrote B-O-E (back-of-the-envelope). What values of η, ΔT, C_p and ρ are you plugging into your B-O-E?


    For the 80 W input energy values, Iwamura et al. have written error estimates:


    oil flowrate error = 0.012 (ml/min);

    oil temperature change error = 0.261 (K);

    input energy rate error = 0.031 (W);

    excess heat energy rate error (HEX) = 0.260 (W).

    What are your estimates?

    oil flowrate error = ? (ml/min);

    oil temperature change error = ? (K);

    input energy rate error = ? (W);

    excess heat energy rate error (HEX) = ? (W)