Error bounds for Mizuno R19 results

  • OK - well, I agree that is more difficult to explain, but I still hold for mistake rather than deliberate malfeasance. Perhaps Jed can work it out.


    Yes, I fully agree on this. IMO, the more likely cause of the error that led to the dramatic values of alleged excess heat was an inadvertent entry into the data system of a wrong value of shunt resistance. But let's allow JedRothwell to provide his explanation. He knows a lot about Mizuno's sensational results, which he presented at ICCF21 in June 2018. I'd just ask you to help me explain to him the absolute necessity of having a plausible answer on this crucial aspect, putting aside for a while the less urgent issues.
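
    A minimal sketch of the hypothesized mechanism, with all values invented for illustration: input power inferred from a shunt scales inversely with the shunt resistance entered into the data system, so a single mis-entered value understates input power, and hence manufactures apparent excess heat, by a fixed factor.

    ```python
    # Sketch: how a mis-entered shunt resistance distorts apparent input power.
    # All values are hypothetical, for illustration only.

    def input_power(v_supply, v_shunt, r_shunt):
        """P = V * I, with I inferred from the voltage drop across the shunt."""
        return v_supply * (v_shunt / r_shunt)

    v_supply = 100.0   # V across the heater (hypothetical)
    v_shunt = 0.12     # V measured across the shunt (hypothetical)

    r_true = 0.01      # ohm, actual shunt value (hypothetical)
    r_entered = 0.02   # ohm, value mistakenly entered into the data system

    p_true = input_power(v_supply, v_shunt, r_true)         # 1200 W
    p_recorded = input_power(v_supply, v_shunt, r_entered)  # 600 W

    # The calorimeter sees the true dissipation, so understating input power
    # by the factor r_entered / r_true creates 600 W of apparent "excess heat".
    print(p_true, p_recorded, p_true - p_recorded)
    ```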


    As we know from the spreadsheets, the two 120 W tests were held on two consecutive days in May 2016, and it is very likely that the other four tests (at 80 and 248 W input) were carried out in the same week, by alternating the active and control tests at the three selected power levels. So there is no apparent reason why the two published spreadsheets do not show the same type of data.

  • Yes, I fully agree on this. IMO, the more likely cause of the error that led to the dramatic values of alleged excess heat was an inadvertent entry into the data system of a wrong value of shunt resistance.

    That cannot be the case. It is ruled out for three reasons:


    1. Input power measurements were confirmed with other instruments, as I said. One mistake with one shunt would be instantly obvious, by looking at the other meters, including the plug-in AC meter. That's why I put it there.


    2. The calibrations would all show the wrong numbers. The heat would not balance. There would be spurious massive excess heat with a resistance heater alone. It would be at exactly the same level as the apparent heat during cold fusion experiments.


    3. As I described above, output heat fluctuations in a recent test are not correlated with input power. If the apparent excess heat was caused by an input power misreading, this could not happen. Along the same lines, the same input power with different meshes causes drastically different levels of output, and often no output at all. It changes over time, responding to loading and deloading.


    Your assertion is wrong for those reasons. It is absolutely, positively, ruled out. There is not the slightest chance you are right. However, I am sure that you and THH will continue to make this assertion. Carry on!


  • I'm trying to understand the logical points here. And, you are correct, I do not have the absolute, positive certainty that you do.


    I think there is a key difference between you and me. You assume that if there is some problem in these measurements it will apply uniformly, and therefore be detected. I don't. For example, in the 2017 spreadsheet the calibration and active runs used a different input power measurement procedure. Obviously a problem with one of those would not be reflected in the other. In general it is difficult, looking at the results, to know which runs used which setup.


    1. I absolutely understand that for the runs on which this checking was done, with independent, non-shunt-resistor-based measurement of input power after the PSU, the measurements must be correct. I don't think you are quite right about the plug-in meter. The relationship between output power and input power for a PSU depends on the type of PSU and the PSU output voltage and current, and is complex. If you have worked it out in full then it could be used to check, otherwise that might lead to error. I don't think you can tell anything from the input power measurement except that the output power will be less than the input power. So, given this, my question is: which of the R19 and R20 measurements have had this welcome extra checking? Your comments are thus far too unspecific to help.


    2. You raise the good point that independently of absolute (first principles) data, the calibration data can prove excess heat. I agree, but then there is a different set of checks: that the calibration and active reactor measurements are sufficiently similar - e.g. changes caused by airflow might alter recovery by 30%. Also, do we know that the calibration and active measurements use the same equipment, and the same processing (e.g. the additional calibration for blower velocity etc)? As always, I like to deal with matters one by one. Let us consider what is needed to have confidence in the absolute results. If that can't be found, we can move on to what is needed for confidence in the calibration process. I would also point out that the impressive absolute data might well, considered properly, be enough. So it is worth looking carefully at the evidence and methodology, especially because that can help to make new data collection more robust.


    3. The old data did not appear to have these fluctuations, so this is new evidence. I disagree that fluctuations => excess heat. We would need to look at them carefully and consider all of the possible mundane reasons for power fluctuations, or changes in test equipment that seem like power fluctuations. However any time dependent changes are interesting and deserve careful analysis.

  • Just the principle here. If you say "proposition A is proved by two independent proofs, X and Y, it must therefore be true!"


    Then, I ask to check X. If it checks, A is proven. If not, I ask to check Y. If it checks, A is proven. Otherwise A is unproven.


    However, what does not work is to say: "X is 90% proved, I'm pretty sure it is OK. Y is 90% proved, I'm pretty sure it is OK, and because each is nearly proved, I don't need to check either fully."
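
    A toy numerical version of this point, with invented confidence figures: two 90%-checked proofs bound the chance of a false positive only if their failure modes are independent, and in no case do partial checks amount to proof.

    ```python
    # Toy illustration of the "90% proved" fallacy; all numbers invented.
    p_x_flawed = 0.10  # chance proof X hides an undetected flaw
    p_y_flawed = 0.10  # chance proof Y hides an undetected flaw

    # If the flaws are independent, the chance that both fail is small:
    both_fail_independent = p_x_flawed * p_y_flawed     # 0.01

    # If both rest on one unchecked shared assumption (fully correlated),
    # the combination is no stronger than either proof alone:
    both_fail_correlated = min(p_x_flawed, p_y_flawed)  # 0.10

    print(both_fail_independent, both_fail_correlated)
    # Neither case is proof: "A is proven" needs at least one of X or Y
    # checked completely, not both checked nearly.
    ```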


  • All 3 reasons are easily refutable, but doing that now would divert the debate away from the most urgent question: why do the two spreadsheets of the 120 W active and control runs performed in May 2016 show different types of information in the "Input power" column?


    You are continuing to avoid explaining this manifest and serious discrepancy. You have also ignored the polite request addressed to you by THH (1). I hope he will continue to support the need to first clarify this point. I also hope that other L-F members - who count on the reliability of the information contained in your posts to form their opinion on the reality of LENR - will join us in urging you to give a full clarification of this critical point.


    For the moment, the only plausible explanation is that the true values contained in the "Input power" column of the 120 W active test spreadsheet, measured directly by the Yokogawa power input analyzer, have been deleted and replaced by the I/DC*V/DC products. And, as everyone understands, this substitution can't be done inadvertently.
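
    This claim can be tested mechanically. A minimal sketch, assuming a CSV export of the sheet with hypothetical column names: if every entry in the "Input power" column reproduces the V*I product to the last digit, the column was computed, not independently measured by the analyzer.

    ```python
    # Sketch: detect whether an "Input power" column is just V*I recomputed.
    # The file name and column names are hypothetical; adapt to the real sheet.
    import pandas as pd

    df = pd.read_csv("active_120W_run.csv")

    recomputed = df["V/DC"] * df["I/DC"]
    # An independent wattmeter reading differs from V*I by noise and by any
    # AC component; exact agreement on every row means the column was filled
    # in as a product, not measured.
    is_product = (df["Input power"] - recomputed).abs().max() < 1e-9
    print("Column is a V*I product:", is_product)
    ```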


    So, we are eager to know your alternative explanation.


    (1) Error bounds for Mizuno R19 results


  • My reading cuts to the heart of why I consider the calibration vs active data here unsafe without a clear and complete description of methodology - as well as carrying possible errors due to the different shape and placement of the reactors.


    There are two reactors - calibration and active - each of which can be run independently. Maybe they have separate supplies and separate power input measurement. Either that, or a common supply must be switched. If separate, one could be from a power analyser, the other from V*I measured directly on one of the two reactor heaters.


    The problem here is that with so much of the setup different, it is possible for a mistake to lead to a serious error that would not be detected, but would instead be seen as LENR.

  • My reading cuts to the heart of why I consider the calibration vs active data here unsafe without a clear and complete description of methodology - as well as carrying possible errors due to the different shape and placement of the reactors.


    What you say is correct, but what I'd like to stress is that confidence in the reliability of the information sources is a precondition for any subsequent technical and numerical evaluation based on that information.


    Quote

    There are two reactors - calibration and active - each of which can be run independently. Maybe they have separate supplies and separate power input measurement. Either that, or a common supply must be switched. If separate, one could be from a power analyser, the other from V*I measured directly on one of the two reactor heaters.


    Well, we wouldn't need to speculate about the experimental set-up. In this case, we have the rare opportunity to ask one of the authors of the presentation of these sensational results at ICCF21 (1), so he could tell us everything we need.


    Anyway, from what I found in the literature (2), the lab is equipped with a single power supply unit (Takasago EH1500H) and a single power analyzer (Yokogawa Co. model PZ4000). They are both very expensive devices, so it makes no sense to duplicate them, considering that the active and control reactors are run separately. Therefore the recorded quantities for the two tests should have been exactly the same.


    Quote

    The problem here is that with so much of the setup different, it is possible for a mistake to lead to a serious error that would not be detected, but would instead be seen as LENR.


    In reality, the only difference between the two runs should have been the connections that go from the two reactors to the single power supply and to the single power input analyzer. The two runs at 120 W were carried out on two consecutive days, so we can suppose that all six runs at 80, 120, and 248 W were carried out on six (almost) consecutive days, by alternating the active and control tests at progressively increasing (nominal) input power. The most practical way to do this would have been to set up a multi-wire selector to be switched between the active and control setups. In connecting the reactor wires to this hypothetical selector (or something equivalent), it could have happened that the internal heat resistor of the active reactor was connected instead of the external heat resistor, as clearly revealed by the trends of the curves in Figures 27 and 28 of the JCMNS article (3). This inadvertent error led to a sensational but only apparent production of excess heat.


    What can't be equally inadvertent is the possible substitution of data in the spreadsheet of the active run.


    (1) https://www.lenr-canr.org/acrobat/MizunoTexcessheat.pdf

    (2) https://www.researchgate.net/p…Conventional_Electrolysis

    (3) https://www.lenr-canr.org/acrobat/MizunoTpreprintob.pdf

  • Excuse me for butting into a discussion where I don't really belong (I'm not an experimental physicist), but maybe some others are wrinkling their foreheads in similar ways. As I remember from my college courses in physics and chemistry, you can't really do any kind of error-bounds characterization until you know what kind of variances you have in the raw data; only then can you define 1-sigma or 2-sigma bounds on the measured quantities. Normal (Gaussian) distributions in measured data are simply an ad hoc assumption until you actually identify them in the data. To characterize a distribution meaningfully, you need lots of independent data points of the measured parameter while all others (mains voltage, input power, environment temperature, gas pressure, loading history) are kept constant, to eliminate causal dependencies. I.e., you have to isolate the genuinely independent Gaussian (or whatever) noise sources. In such a complex experiment, this would be a lot of data, but any serious characterization of the error bounds would require this, which would be a big study in itself; anything else is just an algebraic exercise chock full of simplifying assumptions -- all of which could be wrong -- that doesn't really reveal much more than Jed's eyeballing. Or am I missing something?

    Anyway, who cares? If the COP is reproducibly 1.5 +/- 0.25, roughly, then there's something unexpected going on that needs serious, well-funded investigation -- let alone COP = 10 +/- 5. There are much more important matters than the error bounds -- like the plating of the nickel mesh.
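
    For concreteness, the first step described above is cheap to run on any published spreadsheet. A minimal sketch, assuming hypothetical column names and a steady-state window: estimate the empirical spread, and test the Gaussian assumption before quoting sigma bounds.

    ```python
    # Sketch: characterize noise in a steady calibration segment before
    # quoting 1-sigma / 2-sigma bounds. File and column names hypothetical.
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("calibration_run.csv")
    steady = df["Output power (W)"].iloc[1000:2000]  # steady-state window

    mu = steady.mean()
    sigma = steady.std(ddof=1)
    # D'Agostino-Pearson test: a small p-value means the Gaussian assumption
    # is not supported by the data, and "2-sigma" bounds are just algebra.
    _, p_value = stats.normaltest(steady)
    print(f"mean={mu:.2f} W, sigma={sigma:.2f} W, normality p={p_value:.3f}")
    ```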

  • Excuse me for butting into a discussion where I don't really belong (I'm not an experimental physicist), but maybe some others are wrinkling their foreheads in similar ways. As I remember from my college courses in physics and chemistry, you can't really do any kind of error-bounds characterization until you know what kind of variances you have in the raw data; only then can you define 1-sigma or 2-sigma bounds on the measured quantities. Normal (Gaussian) distributions in measured data are simply an ad hoc assumption until you actually identify them in the data. To characterize a distribution meaningfully, you need lots of independent data points of the measured parameter while all others (mains voltage, input power, environment temperature, gas pressure, loading history) are kept constant, to eliminate causal dependencies. I.e., you have to isolate the genuinely independent Gaussian (or whatever) noise sources. In such a complex experiment, this would be a lot of data, but any serious characterization of the error bounds would require this, which would be a big study in itself; anything else is just an algebraic exercise chock full of simplifying assumptions -- all of which could be wrong -- that doesn't really reveal much more than Jed's eyeballing. Or am I missing something?

    Anyway, who cares? If the COP is reproducibly 1.5 +/- 0.25, roughly, then there's something unexpected going on that needs serious, well-funded investigation -- let alone COP = 10 +/- 5. There are much more important matters than the error bounds -- like the plating of the nickel mesh.


    Hi Bruce,


    First, I agree. If it is reproduced, who cares!


    But then, if it is not reproduced, or if positive but marginal (R19 not R20) type results are claimed...


    The issue here is not error as in noise, but error as in accuracy. Even so, it is, I agree, challenging to do this, but necessary, because otherwise a given result could simply be invalid.


    The point is that estimating errors - asking the question "how do we know that this assumption is correct?" - is helpful. For example, if an active test shows 20% more output than a control test, that seems pretty definitely excess power - unless the calorimeter heat loss varies by that amount due to different airflow across the two differently placed reactors. If an absolute measurement is done using a shunt resistor with a 5% rating, otherwise untested, we cannot hold the result to be more accurate than 5%, etc. And yes, I think a systematic analysis can be much more powerful than Jed's non-quantitative eyeballing, whether or not it delivers the same conclusion. Jed, in any case, has said above that he is considering noise, rather than possible error. The two are different.
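
    To make the noise vs accuracy distinction concrete, here is a minimal worst-case error budget using the figures mentioned above (plus one invented RTD term); systematic terms like these do not average away with more samples:

    ```python
    # Sketch: worst-case systematic error budget. The shunt and airflow
    # figures echo the discussion above; the RTD term is invented.
    error_terms = {
        "shunt resistor tolerance": 0.05,        # 5% rating, untested
        "calorimeter recovery (airflow)": 0.30,  # placement-dependent losses
        "RTD / temperature offsets": 0.02,       # invented for illustration
    }

    worst_case = sum(error_terms.values())  # 0.37
    apparent_excess = 0.20  # active run reads 20% above control

    print(f"worst-case systematic error: {worst_case:.0%}")
    print("excess exceeds error budget:", apparent_excess > worst_case)
    # A 20% apparent excess sits inside a 37% worst case, so it cannot be
    # called extraordinary until the large terms are bounded more tightly.
    ```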


    Whatever the challenges, the process of trying to do this is necessary, because without such an estimation we cannot tell whether the R19 results are in fact extraordinary.


    In addition the scrutiny needed to estimate errors is helpful because it tests methodology and helps to determine what mistakes could lead to these results - in all unreplicated results we must be aware of the possibility of mistake.


    Were this anything other than LENR it would not be needed. Replication would be attempted: and would settle the matter.


    Here, no negative replication results can remove the possibility that, through some combination of not-understood parameters, R19 or R20 worked when nothing else did. That makes analysis of the positive results we do have, however unsatisfactory, important, until such time as credible positive replication exists.


    That includes ascoli's correct but ignored questions about inconsistency of result presentation, which is difficult to explain and, if repeated, could produce consistent false positives.

  • @ THHn: Of course, it's important that the spreadsheets are understandable and that the logic of the experiment is clear. It's great that you and others are now asking questions that nasty reviewers may ask when the paper is finally submitted, but why not just ask specific questions the way reviewers will ask: What is the measured vs. assumed resistance of the shunt? (Personally, I'd be embarrassed to ask; Mizuno is an experienced electro-chemist). Has Mizuno tried putting vanes in the calorimeter to reduce turbulence? Has he tried exchanging the positions of the reactors in the calorimeter? Has he looked at the input to the heaters with a wide-band spectrum analyzer to see if AC energy is sneaking past the power meters? The list of possible error mechanisms is nearly infinite, however, and Mizuno's time is limited. Remember, too, that journals have length limits. They're not going to publish discussions on the elimination of conceivable but silly or improbable error mechanisms. No single experiment proves anything conclusively anyway, and it's foolish to think that's what it must do. To me, as a complete outsider, the experiment looks quite plausible, even if there may be slip-ups and peculiarities in the data here and there. The way you sometimes argue, you would have quashed the early development of transistors, as Jed has pointed out.

  • @ THHn: Of course, it's important that the spreadsheets are understandable and that the logic of the experiment is clear. It's great that you and others are now asking questions that nasty reviewers may ask when the paper is finally submitted, but why not just ask specific questions the way reviewers will ask: What is the measured vs. assumed resistance of the shunt? (Personally, I'd be embarrassed to ask; Mizuno is an experienced electro-chemist). Has Mizuno tried putting vanes in the calorimeter to reduce turbulence? Has he tried exchanging the positions of the reactors in the calorimeter? Has he looked at the input to the heaters with a wide-band spectrum analyzer to see if AC energy is sneaking past the power meters? The list of possible error mechanisms is nearly infinite, however, and Mizuno's time is limited. Remember, too, that journals have length limits. They're not going to publish discussions on the elimination of conceivable but silly or improbable error mechanisms. No single experiment proves anything conclusively anyway, and it's foolish to think that's what it must do. To me, as a complete outsider, the experiment looks quite plausible, even if there may be slip-ups and peculiarities in the data here and there. The way you sometimes argue, you would have quashed the early development of transistors, as Jed has pointed out.


    That is maybe a good idea. However there is a language issue and I was sort of expecting that such questions would most easily come from Jed, who can also ask them most tactfully, and is most likely to get answers. Perhaps we could have an "ask Mizuno" thread. I see the probing here as more about working out what are the unconsidered gaps worth asking about: since you don't work these things out when first looking at results.


    I think the issue re journals is interesting. I think the paper as it stands would be difficult to publish because it has the wrong tone. There is a specific style that reassures the reader. It also has the wrong content: most of what is in there should be referenced as lab notes or whatever. In addition:

    • In a "tight" paper presenting these results it would be important to state specifically what is the setup and precise apparatus used to obtain the presented results, and the corresponding control data, rather than as is done to describe apparatus used and tested over several years, and then present results, with the possibility that some things have changed. This is specially important because elements of the calibration (the blower power vs velocity curve) are blower specific, and other elements (the traversal data) depend on many things. You can reference things discovered and documented in previous papers, but when presenting extraordinary results should be as clear as possible about all the current details.
    • In a "tight" paper an informed discussion of the various error mechanisms (speed measurement, airflow change between calibration and active) which does not dismiss them would be needed, with experimental data showing the validity of the control by using a new-style reactor as control in the same position as the active reactor normally sits.
    • In a "tight" paper the various minor inconsistencies we have noted (e.g. what does the 0.35C blower heat come from - is it via bracket conduction and if so how does that scale), if mentioned, would need to be tied up.


    In reality if these results, even R19, are correct, academic papers are not needed for this. Good independent testing by respected institutional 3rd parties would count for more and be easier, because such a result from a single author would not be very strong without replication.

  • For example, in the 2017 spreadsheet the calibration and active runs used a different input power measurement procedure.


    No, they did not.


    I absolutely understand that for the runs on which this checking was done, with independent, non-shunt-resistor-based measurement of input power after the PSU, the measurements must be correct.


    That was done, with clip on meters.


    I don't think you are quite right about the plug-in meter. The relationship between output power and input power for a PSU depends on the type of PSU and the PSU output voltage and current, and is complex.


    No it is not. Not with a resistance heater. See the work of J. P. Joule for details. If you think it is, I suggest you try this. Put a $50 plug in meter between the wall and a power supply, and another meter after the power supply, and test a resistance heater at various power levels. You will see the relationship is not complex.


    If you have worked it out in full then it could be used to check, otherwise that might lead to error.


    Of course I did! I don't need to work it out, I just need to measure the power over a full range of calibrations with a resistance heater. Contrary to your claims, the relationship is not "complex" at all. There is a little overhead from the power supply. It increases a little as power increases. It is always the same overhead at the same power level.



    2. You raise the good point that independently of absolute (first principles) data, the calibration data can prove excess heat. I agree, but then there is a different set of checks: that the calibration and active reactor measurements are sufficiently similar - e.g. changes caused by airflow might alter recovery by 30%


    That is wrong. The calibration and active reactor measurements are exactly the same in all cases, using the same instruments. The same wires, physically unplugged from one and plugged into the other. I saw Mizuno do this.


    The physical calibration and active reactors themselves have been the same in most cases, but in other cases they are different. However, it is impossible to tell the difference from the data. When the two are swapped, and powered at the same power level, they produce exactly the same calorimeter output -- within the errors of the calorimeter. You cannot tell which point came from which heater. The variations from hour to hour within a calibration (caused by ambient fluctuations) are larger than the difference between a bare resistance heater and heater inside a 20 kg reactor, once you reach terminal temperature.


    You are also confused about which first principles I mean. The first principle method derives the air flow rate. It does not depend on it. It is based on the input power, the inlet and outlet temperature. These have been measured with multiple independent instruments brought by other people, including me. They are right. There is no chance they are wrong by even 1%. The average air flow at a given power level to the fan can be derived from these numbers. It does not vary 30%. Or even 2%. At low power calibrations where heat losses from the box are negligible, it agrees with the instrument readings.
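
    The first-principles check described here is a simple energy balance. A minimal sketch with illustrative numbers, not actual run data:

    ```python
    # Sketch: derive air mass flow from an energy balance, as described.
    # Input numbers are illustrative only.
    CP_AIR = 1005.0  # J/(kg*K), specific heat of air near room temperature

    def mass_flow(p_in_w, t_out_c, t_in_c):
        """m_dot = P / (cp * dT), assuming all input heat leaves in the air."""
        return p_in_w / (CP_AIR * (t_out_c - t_in_c))

    m_dot = mass_flow(p_in_w=50.0, t_out_c=25.0, t_in_c=21.5)
    print(f"implied mass flow: {m_dot:.4f} kg/s")  # ~0.0142 kg/s
    # Comparing this figure with the blower/anemometer calibration at the
    # same fan power is the cross-check described above; if box losses are
    # not negligible, this estimate overstates the true flow.
    ```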



    Of course you will ignore all of this, and you will continue to claim the calibration and active cells were measured using different instruments and techniques, and you will say that measuring power into a resistance heater is "complex" even though it is not. You will repeat this bullshit again, and again, and again, to give the readers here the impression that you found a real error, when in fact you have only made up a bunch of nonsense. I do not have time to respond to you, because I have to prepare for the conference. So I will let you have the last word. Carry on!

  • In such a complex experiment, this would be a lot of data, but any serious characterization of the error bounds would require this, which would be a big study in itself; anything else is just an algebraic exercise chock full of simplifying assumptions -- all of which could be wrong -- that doesn't really reveal much more than Jed's eyeballing.


    I did not actually eyeball that. I used the spreadsheet functions, typically for 24 hour segments of calibrations at various power levels. So did Mizuno, and he is better at this than I am. I put the 2-hour graph in the paper to show that most of the errors come from ambient temperature fluctuations, and to illustrate that the typical magnitude of the noise is ~2 W.



    It's great that you and others are now asking questions that nasty reviewers may ask when the paper is finally submitted, but why not just ask specific questions the way reviewers will ask: What is the measured vs. assumed resistance of the shunt?


    The ICCF21 paper went through months of review, with much tougher questions than THH has come up with. Tougher, because they were real considerations, not imaginary nonsense. The shunt was not an issue because input power was confirmed with other methods and instruments.



    Has Mizuno tried putting vanes in the calorimeter to reduce turbulence?


    No. He wants to increase turbulence, not reduce it!



    Has he tried exchanging the positions of the reactors in the calorimeter?


    Yes, as noted in the paper.

  • Hi Bruce,

    let me add my POV to the wise answers you already got from THH.


    @ THHn: Of course, it's important that the spreadsheets are understandable


    Yes, but it's much more important that the spreadsheets are trustworthy, i.e. that you have confidence that they report all of, and only, the values actually measured during the tests. If there is any suspicion that the values in the spreadsheet of a specific test have been altered, it makes no sense to ask any other question about any other test, because you will doubt any answer you get.


    Presently, we know with certainty that the two spreadsheets of the 120 W tests performed in May 2016 contain different information (*). Until now, the only plausible explanation is that one of them - namely the spreadsheet of the active test, which shows a sensational excess heat, well beyond any experimental inaccuracy - was modified with respect to the original data. We are still awaiting a plausible alternative justification for this incongruity.


    (*) Mizuno reports increased excess heat

  • THH: I don't think you are quite right about the plug-in meter. The relationship between output power and input power for a PSU depends on the type of PSU and the PSU output voltage and current, and is complex.


    Jed: No it is not. Not with a resistance heater. See the work of J. P. Joule for details. If you think it is, I suggest you try this. Put a $50 plug in meter between the wall and a power supply, and another meter after the power supply, and test a resistance heater at various power levels. You will see the relationship is not complex.


    Jed, it does not help your case when you make such strong assertions about electronic devices, and they are wrong. You have fewer qualifications in this area than many here (including me). More to the point, you tend to overlook things: one reason why, for this type of experimental write-up, I think you are wise to ask, as you have done, for community critique of experiments.


    For a linear PSU with output Ix, Vx, for example, the input power is typically of the approximate form:


    P0 + V0*Ix + I0*Vx


    where P0 and V0 depend on the voltage range, and necessarily V0 > Vx. I0 is often small and can be ignored.


    For a switching PSU the input power is different: efficiency varies from typically <10% to > 90% according to Vx and Ix, although for most values it is in the range 50% - 80%. Efficiency will also vary, for given Vx and Ix, with the PSU voltage range.


    For a lab PSU it can be either form, or a bit of both.
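
    A numerical sketch of the two forms above, with all coefficients invented for illustration:

    ```python
    # Sketch of the two PSU input-power forms described above.
    # All coefficients are invented, for illustration only.

    def linear_psu_input(vx, ix, p0=5.0, v0=40.0, i0=0.01):
        """Linear PSU: P_in ~ P0 + V0*Ix + I0*Vx, with V0 > Vx."""
        return p0 + v0 * ix + i0 * vx

    def switching_psu_input(vx, ix, efficiency=0.75):
        """Switching PSU: P_in = P_out / efficiency, where the efficiency
        itself varies with Vx, Ix and the voltage range (held fixed here)."""
        return (vx * ix) / efficiency

    vx, ix = 24.0, 5.0  # 120 W delivered to the heater
    print(linear_psu_input(vx, ix))     # ~205 W drawn: lossy at low Vx
    print(switching_psu_input(vx, ix))  # 160 W drawn: ~75% efficient
    # The point: input-side metering constrains output power only loosely
    # unless the specific PSU's efficiency curve has been characterized.
    ```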


    J.P. Joule had AFAIK nothing to say about the efficiency of modern lab PSUs?


    THH

  • That is wrong. The calibration and active reactor measurements are exactly the same in all cases, using the same instruments. The same wires, physically unplugged from one and plugged into the other. I saw Mizuno do this.


    I'm glad to hear this. In that case you are saying that the comments ascoli made about the 2017 results are wrong. But I remember being very convinced by his detailed work. Shall we revisit that? There appears to be a 100% disagreement between you and him. I'd just like to remind you that the last time there was such a difference, after about 5 pages of posts, you ended up agreeing with him (that the velocity data was calculated from the blower power calibration, rather than directly measured).


    The physical calibration and active reactors themselves have been the same in most cases, but in other cases they are different. However, it is impossible to tell the difference from the data. When the two are swapped, and powered at the same power level, they produce exactly the same calorimeter output -- within the errors of the calorimeter. You cannot tell which point came from which heater. The variations from hour to hour within a calibration (caused by ambient fluctuations) are larger than the difference between a bare resistance heater and heater inside a 20 kg reactor, once you reach terminal temperature.


    Again, I'm glad to hear that, but I would like more data on how it was checked, for example at what reactor temperatures. It is not documented in the paper, so adding this from appropriate data would strengthen it.


    You are also confused about which first principles I mean. The first principle method derives the air flow rate. It does not depend on it. It is based on the input power, the inlet and outlet temperature.

    OK, that is fine; it will overestimate the flow rate. I was, however, using the phrase in a similar way, to indicate output power estimation, again assuming 100% calorimeter efficiency.


    These have been measured with multiple independent instruments brought by other people, including me. They are right. There is no chance they are wrong by even 1%. The average air flow at a given power level to the fan can be derived from these numbers. It does not vary 30%. Or even 2%.

    I would not expect such airflow variations for a given blower. The issue about varying calorimeter efficiency would come from heat losses or RTD errors (as documented by you in the paper) varying with reactor position. And there is the question of which blower is used for which data: but I'm hoping it is all the same.


    At low power calibrations where heat losses from the box are negligible, it agrees with the instrument readings.

    Your data in the paper perhaps needs revising, since it shows a 5% loss at ambient temperature, though this is what I'd expect? More generally, there are different loss mechanisms: some lose a power (roughly) proportional to input power, and thus correspond to a fixed percentage loss; others (anything relating to natural convection) lose a power proportional to the temperature, and hence roughly also proportional to input power.


    It would also be helpful to understand, in the calorimeter heat loss data, whether you compensate for the blower contribution to the calculated output power and if so how?


    Regards, THH

  • I'm glad to hear this. In that case you are saying that the comments ascoli made about the 2017 results are wrong. But I remember being very convinced by his detailed work. Shall we revisit that? There appears to be a 100% disagreement between you and him.


    Actually, in this case there is no disagreement. JedRothwell confirmed that "The calibration and active reactor measurements are exactly the same in all cases, using the same instruments." This is exactly what I supposed for the experimental setup of the May 2016 tests. Therefore, the spreadsheets generated by the data system did originally contain the same types of information. However, we can see that this is not the case with the two spreadsheets of the 120 W active and control tests which were uploaded to the internet in September 2017 (*).


    It necessarily follows that the spreadsheet of the active test was modified in order to remove the "Input power" values measured by the Yokogawa power analyzer and replace them with the V*I products. JR has not provided an alternative explanation, so we don't disagree on this point either, since we can consider his silence an implicit confirmation.


    Now, we have to understand when and why the spreadsheet of the active test has been modified.


    The main clue could come from the text in the "Input power" column at row 8, where we can read: "V/DC*I/DC but probably measured directly with a wattmeter".


    This is a very strange statement. How is it possible that a skilled experimenter, who has used the same experimental setup with the same instruments for many years, doesn't know exactly which quantity is reported in one of the most important columns of his spreadsheet, a column which contains one of the terms used to calculate, by difference, the presumed excess heat?


    So, the suspicion is that this strange and ambiguous statement was added after the modification - in one of the two spreadsheets - of the content of the "Input power" column, because at that point the contents of the two columns were no longer homogeneous.


    As for the reason, well, unless JR lets us know his own, there is only one imaginable.


    (*) Mizuno reports increased excess heat

  • A calibration with no input power to the reactors shows that when the blower power is stepped from 1.5 to 5 W, the outlet RTDs are ∼0.35 °C warmer than the inlet (Fig. 8). This is a much larger temperature difference than the moving air in the box alone could produce. Blowers are inefficient, so most of the input power to the blower converts to waste heat in the motor.


    This explanation of motor heat appears to be excessive.

    Electric motors in axial fans are referenced to be between 87% and 93% efficient.

    This means that the waste heat from the motor itself is 7%/13% of 5 W, or 0.35/0.65 W.

    Even assuming that 2/3 of this motor heat goes into the airstream, it doesn't account for much of the 0.35 °C rise.


    The 0.35 °C temperature rise may largely be due to the kinetic energy imparted by the axial fan blades being converted to thermal energy in the exit airstream.
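
    The arithmetic can be checked directly. A minimal sketch, assuming a plausible air mass flow (the flow value is invented; the 5 W and 0.35 °C figures come from the discussion above):

    ```python
    # Sketch: what temperature rise can the blower power account for?
    # The mass flow is assumed; 5 W and 0.35 C come from the discussion.
    CP_AIR = 1005.0  # J/(kg*K)
    M_DOT = 0.0142   # kg/s, assumed air mass flow

    def delta_t(power_w):
        """Temperature rise if power_w ends up as heat in the airstream."""
        return power_w / (CP_AIR * M_DOT)

    print(f"motor waste heat only (0.65 W): {delta_t(0.65):.3f} C")
    print(f"all blower power as heat (5 W): {delta_t(5.0):.3f} C")
    # ~0.046 C vs ~0.350 C: motor inefficiency alone falls far short, but
    # the full blower input (blade work included), dissipated in the air,
    # is of the right magnitude to explain the observed rise.
    ```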


  • This is a mystery. The paper says - and some of the data indicate - that this 0.35 °C excess is measured both upstream and downstream of the fan. Heat added to the air - which I agree is plausible - would not affect the upstream reading.


    That is why this aspect of the paper bothers me: in the grand scheme of things it is not itself significant, but it is also not understood.


    THH
