Mizuno Airflow Calorimetry

  • I was awaiting 1 more :thumbup: from the other Mods to wrap this one up, but it's late in Europe, so I will use Ascoli's post as an opportunity to "unofficially" close this thread. This topic has been beaten to death, going around in circles, and, other than creating disharmony, accomplishing nothing. Jed not only deserves a break, but kudos for being so open and honest. This peer review stuff is tough...especially so when one of the reviewers sees shades of conspiracy in almost everything, so my hat is off to him for being so tolerant.

    Until further developments warrant looking into this in more detail, we shall now take a wait-and-see attitude. Jed can have the last word if he likes; then I ask everyone to voluntarily refrain from comment.

  • Redux (revision 3)

    The "different methodology for origin of Power column figures" issue is now satisfactorily resolved - an example of how transparency and probing can lead to better understanding. The solution is the one I suggested: both P = V'*I' and the tabulated V and I come from a higher-resolution set of figures V', I'. It should also be noted that this was an informal publication issue, not a methodological problem. It is a shame there is little appetite to resolve the other (resoluble) outstanding issues, specifically the difference between active and control dTair/dt, which indicates very different effective specific heat capacities. My original suggestion for how this could be resolved (a heater heating the output air stream directly, rather than via the reactor case) does not work.
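    A toy illustration of the resolution (my own made-up numbers, not figures from the spreadsheets): if the published Power column is computed as P = V'*I' from the high-resolution readings, while the published V and I columns are rounded versions of V' and I', then multiplying the published V and I does not quite reproduce the published P - which can look like a "different methodology" when it is only a rounding artifact.

```python
# Toy illustration (hypothetical numbers): the published V and I are rounded
# versions of higher-resolution measurements V', I', while the Power column
# is computed from the unrounded values. The two products then disagree
# slightly, which can look like a methodological difference.
v_hi, i_hi = 102.347, 1.2368          # hypothetical high-resolution readings

p_published = round(v_hi * i_hi, 1)   # P = V' * I', rounded only at the end
v_pub = round(v_hi, 1)                # V, I as they appear on the spreadsheet
i_pub = round(i_hi, 2)

p_from_table = round(v_pub * i_pub, 1)
print(p_published, p_from_table)      # 126.6 vs 126.9 - they need not match
```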

    The power issues

    Ascoli's useful observations on the 2016 data (that is, the 100%-excess-power results published in 2017 that preceded the R19 and R20 results):

    • The input power is measured differently between the control and active run spreadsheets: in the control case with a mains power analyser*, in the active case with figures computed as V*I. The spreadsheets do not make this explicit - the power column is simply filled with either V*I values or measurements which, by inspection, would come from a power analyser.
    • The resistance of the heating element for control and active runs differs by a factor of 2.
    • The dynamics of the control data are 10X faster than those of the active data.
    • The mass of the reactor used for these results is inconsistently stated as 50kg old-style (from the 2019 paper) and 20kg - seemingly new-style, though not called that (from the 2017 paper!). It would be good to have confirmation of which reactor was used, and also which methodology: the old-style reactors were measured individually by the calorimeter; the new-style setup ran two reactors, control and active, together at the same time.

    On investigating the dynamics issue we find that the active data has an internal heater, while the control data has an external heater. This explains the different resistance. It does not explain the very different time constants between control and active data: active runs have lower dTair/dt than control runs, which makes no sense given that both are supposed to have the same heater power. There also remains no explanation for the different input power measurement between active and control runs, which is unfortunate, although not in itself a problem. The exact methodology of these results (old-style or new-style) has still not been clarified; the two papers contradict each other.
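    To see why the different dynamics matter, here is a sketch (with illustrative numbers, not Mizuno's data) using a simple lumped thermal model, C * d(dT)/dt = P - k*dT: the initial slope of the temperature rise is P/C, so with equal input power a tenfold lower dTair/dt implies a tenfold larger effective heat capacity C - which is the discrepancy noted above.

```python
# Sketch (made-up numbers): for a lumped thermal model C * d(dT)/dt = P - k*dT,
# the initial slope of the air temperature rise is P / C. With equal input
# power, a lower dTair/dt therefore implies a larger effective heat capacity.
P = 100.0                  # input heater power, W (assumed equal for both runs)

C_control = 2000.0         # effective heat capacity, J/C (illustrative)
C_active = 20000.0         # ten times larger (illustrative)

slope_control = P / C_control   # initial dTair/dt, C/s
slope_active = P / C_active

print(slope_control, slope_active)   # 0.05 0.005 -> active rises 10x slower
```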

    There is nothing here that necessarily invalidates these results. However, the poor methodology (very different systems used for active and control runs) and poor documentation - e.g. the difference in power measurement is not made explicit on the spreadsheets - are unfortunate and make it more difficult to accept extraordinary results as real rather than as some mistake caused by poor methodology and record-keeping. (Another example of poor methodology: the calculated air speed shown as measured.)

    As one example: it would normally be obvious from reactor temperature whether control power out was more or less than active power out (at least at the 2x level reported). In this case that cannot help, because the active reactor, with external heater, would get much hotter than the control even at the same power and with no mesh inside.

    It should also be possible to do something similar (compare dynamics) with the R19 results given detailed spreadsheet data.

    On the positive side: looking at the dynamics of these runs, together with the reactor case temperature, allows an independent measurement of the total (output) power - an excellent cross-check on the output calorimetry that should be accurate to +/- 20%.
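    A minimal sketch of how such a cross-check could work, assuming the same lumped thermal model and illustrative calibration constants (C and k are assumptions here, not values from the papers): once C and k are known, the output power can be recovered from the case temperature trace alone.

```python
# Sketch (assumed lumped model, illustrative constants): given an effective
# heat capacity C and loss coefficient k from calibration, total output power
# can be recovered from the temperature rise dT and its slope as
# P(t) ~= C * d(dT)/dt + k * dT, giving an independent cross-check.
C = 20000.0     # J/C, effective heat capacity (assumed from calibration)
k = 2.0         # W/C, loss coefficient (assumed)

def power_estimate(dT, slope):
    """Recover power from temperature rise dT (C) and its slope (C/s)."""
    return C * slope + k * dT

# Steady state (slope = 0) with a 50 C rise:
print(power_estimate(50.0, 0.0))     # 100.0 W
# Early transient: small rise but non-zero slope gives the same power:
print(power_estimate(25.0, 0.0025))  # ~100.0 W
```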

    * There are other options, but Yoko power analyser remains the most likely by far. See here for details.

    The calorimetry issues

    Enormous amounts have been said about the air-flow calorimetry. None of this (IMHO) invalidates the overall findings in this area, although there are many cases of laxness in reporting (e.g. substituting calculated air velocity for measured) that lead observers to distrust the data. Airflow calorimetry has known artifacts due to turbulence; these are evidenced by noisy power-out results, with noise proportional to deltaTair. However, on careful inspection, averaging samples taken at 20,000 per second over 25-second intervals, as stated in the paper, would make this randomness go away. It seems pretty likely that for the 2016 active run no such averaging was done, because the noise frequency statistics don't fit any plausible noise source after averaging, but do exactly fit turbulence before averaging.
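    The arithmetic behind that claim: averaging N independent noise samples shrinks the noise standard deviation by a factor of sqrt(N). At the stated rates that factor is around 700, so turbulence noise surviving at full amplitude is hard to square with the averaging having been done.

```python
# Averaging N independent noise samples reduces the standard deviation by
# sqrt(N). At 20,000 samples/s averaged over 25 s intervals, as stated in
# the paper, turbulence noise should shrink roughly 700-fold; noise at
# full amplitude suggests no such averaging was applied.
import math

samples_per_second = 20_000
interval_s = 25

n = samples_per_second * interval_s   # 500,000 samples per averaged point
reduction = math.sqrt(n)

print(n, round(reduction))            # 500000 707
```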

    • It is lax that the exact conditions of each run and each control (averaging / no averaging etc., measurement instrument, calculation method) are not precisely described in the paper, but this does not (for the matters investigated) affect the integrity of the results.
    • It is very bad practice that conditions are clearly different between control and active runs in many ways as above. Some of these ways (averaging) do not affect integrity. Some (power measurement) might affect integrity. Some of them (different rise times) remain unexplained and so might also affect integrity when understood.

    The blower airflow, independently measured, seems non-uniform and not easy to measure precisely. That calls into question the measurements of it in the paper, which are very uniform. Possibly Mizuno is just much better at doing this measurement, but it remains a question mark, because the blower specified necessarily delivers a non-uniform airflow, yet the measurements in the tube, made as specified by the paper, come out uniform. It seems possible, given other errors, that the conditions of this measurement differ from those in the paper. It is also possible that the uniformity is a measurement artifact (given that it seems easy to get a range of speed values from the specified anemometer).

    Does this ambiguity affect the integrity of the absolute measurements of power? Probably not, or at least not by more than +/- 20%. The problem is that the lack of uniform conditions between control and active runs makes it impossible to trust the control data, and then questions about the absolute measurements (inevitably there are many that can be made; absolute measurement is more difficult than control vs active comparison) remain.

    Another annoying ambiguity is the difference between results calibrated for calorimeter heat loss and true, absolute, results. These differ by approx 25% - the measured heat loss of the calorimeter. So a 50% claimed excess turns into a measured 25% excess in absolute terms. The compensation for calorimeter losses is reasonable, but to benefit from it we need clear certainty that calibration is all done under the same conditions as the active runs. And we need to know precisely when results are absolute, and when they are adjusted for heat losses. This is not made clear in the papers. The results rest on a lot of pre-calibration, and changes in setup will invalidate this:

    • Pre-calibrate for air speed in terms of fan power (all results with air speed not directly measured)
    • Pre-calibrate for calorimeter losses (adjusted results)
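    The loss-adjustment arithmetic above can be sketched as follows (my reading of it, with illustrative numbers): if the calorimeter loses ~25% of the input power, the adjusted output adds that loss back onto the directly measured output, so an adjusted 50% excess corresponds to only a 25% excess in absolute terms.

```python
# Sketch of the arithmetic (my reading, illustrative numbers): the adjusted
# output adds the measured calorimeter heat loss (~25% of input) back onto
# the directly measured output, so a 50% adjusted excess corresponds to a
# 25% excess in absolute terms.
P_in = 100.0                    # input power, W (illustrative)
loss_fraction = 0.25            # measured calorimeter heat loss

P_out_absolute = 125.0          # directly measured output: 25% absolute excess
P_out_adjusted = P_out_absolute + loss_fraction * P_in

excess_absolute = (P_out_absolute - P_in) / P_in
excess_adjusted = (P_out_adjusted - P_in) / P_in
print(excess_absolute, excess_adjusted)   # 0.25 0.5
```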

    Finally, it is regrettable that these mouth-wateringly good results (like the 2016 ones) cannot be tested independently - or at least, the one time this was done (by IH) the good results went away. Excuses can be made that IH did the wrong thing, but this is weird: both Mizuno and IH would presumably be motivated to get proper results by doing the right thing, so any issues of that sort could be sorted out at the time, or after, assuming good will. If there was no good will, the question is why, given such a large common interest.

    1. Ascoli's view that the presentation of these results shows a deliberate attempt to deceive I do not share; it has now been otherwise explained, and even Ascoli agrees. I agree with him that the methodological issues are bad, and could allow false positives. I disagree that this would be understood by the experimenters. Real work is chaotic unless conducted with discipline throughout, and that was not the case here. Getting some positive results (as false positives) from a large amount of poorly recorded data is quite possible as a mistake. No-one can disprove allegations of an "on-purpose" mistake. But it is wiser to reckon this is a mistake, and frankly not helpful to accuse scientists of bad intent whenever their work has mistakes. If you did this generally you'd end up with few scientists. So: be clear about issues, but don't add speculation about character.
    2. Jed's view that questioning these results shows pseudo-skepticism or deceit I also disagree with. I can understand how frustrating it is to have every little detail questioned - but that is what should be done when extraordinary results that are not immediately replicable are presented. In fact, experimenters hoping to make a credible case would welcome it. None of this care is needed if results are replicable - you just point to (as many as needed) replications. Any issues can be investigated and corrected ab initio in the replications, which is so very much easier than trying to infer the past. For results of this type to stand up, the methodology needs to be very well controlled - and that is just not done here, in very many ways.

    Given the above, you don't have to be a dyed-in-the-wool skeptic to have reservations about whether Mizuno's collection of positive results actually represents working LENR. It is frustrating, because definite measurements of this magnitude would appear pretty easy to make. You'd think it would be worth it for Mizuno and some independent party to go through the methodology, work out a water-tight set of checks based on what has been done, and make sure that the necessary tests are all done and recorded together in an experimental run. You feel it is all there, just not reliably all put together.


    PS - no doubt I've made mistakes above, happy to be corrected.

    PPS - this thread has suffered drift from Jack Cole's original intent. Apologies, I don't quite know how it happened. If anyone cares a lot, posts could be split off onto another thread. The original topic was the airflow stuff; this has morphed into two things: power stuff, specifically looking at the 2016 results, and airflow stuff. They do, however, fit together.