Mizuno Airflow Calorimetry

  • The key thing is that the mesh has very little effect on the casing temperature, because its mass is much lower, so its time constant does not matter.

    "so its time constant does not matter" This is pure assumption.

    The long time constant appears only when the mesh is active ....not when it is inactive,

    Same tiny 23g mass in both cases

    The time constant does matter... but it is not related to the mass of the mesh.

    There is something in the mesh other than 23g of mass causing the long time constant.

    The amount of energy stored in the mesh should cause it to vaporise... but it doesn't




  • That is completely wrong. Where do you and Ascoli come up with this stuff?


    As I already told him (1), THH and I don't agree on this point. THH insists that "input power is measured differently between the control and active run spreadsheets", but IMO they have been measured exactly in the same way, and the difference between the spreadsheets can only be explained by a subsequent removal from the active run spreadsheet of the "Input power" values measured by the Yokogawa analyzer and their replacement with the V*I products.


    Quote

    The spreadsheet is generated by the multichannel HP gadget, and it only records V*I on fixed channels. The analyzer has its own memory and it can be dumped into a computer, but it is not part of the automatic data collection system. Perhaps it could be interfaced, but it is not in these studies.


    Not true. As shown in a previous jpeg (2), the two spreadsheets of the active and control runs both report the value of V/DC and I/DC. However, they differ in the quantity reported in the "Input power" column: the active run spreadsheet contains the results of the V/DC*I/DC products, while the control run spreadsheet contains another quantity, clearly different from the V*I product. This quantity can only come from the Yokogawa power input analyzer.


    In fact, figure 13 in the JCMNS article (3) shows that the "Power analyzer" is connected to the "Data logger" and this latter to the "PC". The same is reported in the text: "The rectangles in the lower left of the figure represent the input power supply, the power input analyzer (Yokogawa, PZ 4000), the data logger (Agilent, 34970A), and the PC for data acquisition. […] Data from six reactor temperatures, electric power to the test reactor that is processed by the power-meter, electric currents and voltages for the power supply of the blower, and the temperatures of the inlet and the outlet air flows were collected by a data logger and recorded to a PC every 5 s."


    Quote

    The analyzer was purchased for the plasma discharge experiments, which have rapidly changing input power.


    So, this analyzer was suitable for capturing all the power absorbed to heat the reactor internally - "mainly with glow discharge", as you have recently specified (4) - during the active runs. Is this the reason why the data measured by this instrument have been removed from the 120W active run spreadsheet and replaced by the V*I product?


    Quote

    The HP gadget measures electric power on the same channels (corresponding to spreadsheet columns) with the same wires during calibrations and active runs. The wires are unplugged from one machine and plugged into the other.


    I believe you on this point, it's reasonable. But you still have to explain why the quantities reported in the "Input power" column of the two 120 W test spreadsheets differ from each other.


    (1) Mizuno Airflow Calorimetry

    (2) Mizuno reports increased excess heat

    (3) https://www.lenr-canr.org/acrobat/MizunoTpreprintob.pdf

    (4) Mizuno Airflow Calorimetry

  • THH: "so its time constant does not matter" RB: This is pure assumption.


    It is a calculation based on normal behaviour of materials and heat.

    ....


    "The long time constant appears only when the mesh is active ....not when it is inactive,

    Same tiny 23g mass in both cases

    The time constant does matter... but it is not related to the mass of the mesh.

    There is something in the mesh other than 23g of mass causing the long time constant.

    The amount of energy stored in the mesh should cause it to vaporise... but it doesn't"


    This is pure non-physical assumption contrary to what is understood about physics, also contrary to what is understood about LENR. You can invent LENR unicorns to make any data fit your assumptions - but that is not science.


    THH

  • As I already told him (1), THH and me don't agree on this point. THH insists that "input power is measured differently between the control and active run spreadsheets", but IMO they have been measured exactly in the same way and the difference between the spreadsheets can only be explained by a subsequent removal from the active run spreadsheet of the "Input power" values measured by the Yokogawa analyzer and its replacement with the V*I products.


    OK, just for definiteness, I don't think ascoli and I disagree. I cannot know what was measured, and therefore accept what Jed has said. I do know FROM THE DATA that one run sheet has V*I in the power column, and the other has something NOT THE SAME, which looks like what would be expected from a power analyser on the input side of the PSU, scaled to give output power under steady-state conditions.


    So with more precision I'd say the switch is in what is presented on the spreadsheet in the power column. Frankly, I thought that was obviously what I meant; I apologise for any misunderstanding.


    THH

  • As I already told him (1), THH and me don't agree on this point. THH insists that "input power is measured differently between the control and active run spreadsheets",

    I guess one measures "watts" and the other measures "whats" :)

    TIME (seconds)   ACTIVE (WATTS)   CONTROL (WHATS)
       0.0                0.00             0.00
      24.4              121.14            46.02
      48.9              121.00           121.61
      73.4              121.06           121.34
      97.8              121.03           121.13
  • Ascoli.. I am sure that Jed is busy with La Dolce Vita in Assisi... or maybe he is getting a 4D headache.


    Here is how each spreadsheet arises... I am not sure what Mizuno is using for his data format

    But there is conversion/Jap-Eng translation involved, and each spreadsheet requires a lot of work

    from 2017...

    Mizuno : Publication of kW/COP2 excess heat results


    Okay, here is a spreadsheet converted to Google format:

    https://drive.google.com/open?…r0UahtG1ZPAJmAJmejB6EMipY

    Let me know if you can read it. And copy, work with or download it.

    This still needs work but I have added some notes, converted the numeric formats to something more reasonable,

    and made some other changes. This is a conversion from the Excel format.


    This is a little repetitive and odd-looking because columns A-R are basically the raw output from the data logger.

    I think some channels are NIU (not in use). I marked them.

    The graphs did not convert well. I left one graph. This is part of Fig. 28, p. 19.

    I added notes in Row 8 describing what I think the columns represent. I left some question marks in there for things

    I do not understand, such as some of the constants.

    Anyway, you can see the actual arithmetic used to generate the graphs.

  • This is pure non-physical assumption contrary to what is understood about physics, also contrary to what is understood about LENR.

    LENR unicorns

    This is just blah... unicorn blah

    Just as when THHNew asserted with his eyeball that the heating/cooling profiles

    were explained by a single time constant of 7000s.

    Thanks to Robert Horst for the biexponential fit of a short plus a long time constant.


    However, investigations with Graphpad will be fruitful, I suspect


    Comparing the biexponential fit for


    120W inactive mesh

    120W active mesh


    I suspect the fruit will look like this.>>>>>>>>>>>


    Perhaps THHnew can engage his scientific curiosity

    in this rather than in unicorn blah?

  • Words are cheap, and eyeballs are erroneous.

    There is no single 7000 s time constant.

    Try a bi-exponential fit or other model

    Here is the INACTIVE mesh spreadsheet. (Pls open in Chrome, not IE)

    Probably one doesn't need GRAPHPAD

    to work out the rising and falling

    time constant

    Excel will do... better than

    my eyeball, which gives tc = 970 ± 150 seconds

    Please compare with Robert Horst's

    biexponential ACTIVE mesh model, which splits out

    a fast tc = 1000 s (again... my eyeball)

    and a slow tc = 8656 s
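
    For anyone who wants to repeat this without Graphpad or Excel, a biexponential fit is easy to sketch in Python with scipy. The data below is synthetic (time constants seeded near the eyeballed ~1000 s and fitted 8656 s values), standing in for the spreadsheet's delta-T column:

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    # Two-exponential relaxation toward zero: a1*exp(-t/tau1) + a2*exp(-t/tau2)
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic cooling-tail data standing in for the spreadsheet's delta-T column
t = np.arange(0, 30000, 25.0)
y = biexp(t, 8.0, 1000.0, 1.0, 8656.0) \
    + np.random.default_rng(0).normal(0, 0.02, t.size)

# Initial guesses matter: seed the fast and slow constants well apart,
# or the fit may collapse onto a single exponential
p0 = [5.0, 500.0, 0.5, 5000.0]
popt, _ = curve_fit(biexp, t, y, p0=p0, maxfev=10000)
a1, tau1, a2, tau2 = popt
print(f"fast tc ~ {min(tau1, tau2):.0f} s, slow tc ~ {max(tau1, tau2):.0f} s")
```

    With real data, replace `t` and `y` with the spreadsheet's time and delta-T columns (names here are placeholders).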

  • It is hard to draw many conclusions based on the long time constant at the tail of the experiment. The amplitude of the tail is less than 1 degree, which is much less than the errors in the rising edge measurements. That means that there might not even be any tail at all. You can't conclude anything from measurements smaller than the noise.


    If your calculation of the heat capacity and thermal resistance of the mesh is correct, you are right that it will not have much effect on the tail. But I am not sure if we can get accurate values for either the heat capacity or the thermal resistance.


    I used "mesh" to mean everything inside the vacuum chamber. There may be other thermal masses, like an internal heater used for later experiments. That would have a big effect on the heat capacity.


    The thermal resistance to the mesh (plus whatever) could be extremely high. Part is in contact with the reactor wall, but part is not. Try heating one end of a screen and see how much heat you detect a few inches away. The thin Ni wire in the screen is a lousy thermal conductor. Also, if there was something else like an unused heater inside, that could be very well insulated by the vacuum.


    My simulation was not intended to try to arrive at any specific numerical results. I posted it more as a technique that could be used by those doing the experiments. They could measure the values of the six parameters fairly easily and come up with a simulation to show what should happen without excess heat, then show what actually happens and analyze the differences.
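
    As a sketch of the kind of simulation described here, a minimal two-node lumped model (mesh coupled to the reactor casing, casing cooled by the airflow) takes a few lines. All parameter values below are illustrative assumptions, not measured values:

```python
# Minimal two-node lumped thermal model (forward Euler). Parameter values
# are assumptions for illustration only.
C_mesh = 0.3 * 450.0       # mesh: ~300 g Ni x 450 J/kgC -> J/C
C_reactor = 20.0 * 500.0   # reactor: ~20 kg steel x 500 J/kgC -> J/C
R_mesh = 2.0               # mesh-to-reactor thermal resistance, C/W (assumed)
R_cool = 1.0               # reactor-to-air thermal resistance, C/W (assumed)
P_heater = 120.0           # W, applied to the reactor casing

dt = 1.0                   # s per step
T_mesh = T_reactor = 0.0   # temperature rise above ambient, C
history = []
for step in range(40000):
    q_mr = (T_mesh - T_reactor) / R_mesh   # mesh -> reactor heat flow, W
    q_out = T_reactor / R_cool             # reactor -> air heat flow, W
    T_mesh += dt * (-q_mr) / C_mesh
    T_reactor += dt * (P_heater + q_mr - q_out) / C_reactor
    history.append(T_reactor)

# Dominant time constant ~ time to reach 63.2% of the final value
T_final = history[-1]
tau = next(i for i, T in enumerate(history) if T >= 0.632 * T_final)
print(f"steady-state rise ~ {T_final:.1f} C, dominant tc ~ {tau} s")
```

    In this model the dominant time constant is set almost entirely by the casing (R_cool * C_reactor ~ 10,000 s); the small mesh node barely shifts it, which is the point at issue in the thread.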

  • We will try to supply more info for this simulation. Contact me after the conference.


    I think you can use any mesh from any source to study the thermal characteristics. The thermal mass of every object in the reactor does not amount to much.

  • Thanks for the reply, Robert. We may, in part, be misunderstanding each other, but I'll reply in detail to your comments in the hope of reaching more clarity.


    It is hard to draw many conclusions based on the long time constant at the tail of the experiment. The amplitude of the tail is less than 1 degree, which is much less than the errors in the rising edge measurements. That means that there might not even be any tail at all. You can't conclude anything from measurements smaller than the noise.


    OK - so this is perhaps my misunderstanding. If by long tail you mean the anomalous (low amplitude) non-exponential tail, there are so many artifacts relating to room temperature and other things that I agree. I was not trying to explain this, or in fact thinking that it needed explanation. The key question for me is why the active test rise and fall times are much (10X) slower than the control rise and fall times. Related to this, we do know the active test used an external heater, the control an internal one.


    If your calculation of the heat capacity and thermal resistance of the mesh is correct, you are right that it will not have much effect on the tail. But I am not sure if we can get accurate values for either the heat capacity or the thermal resistance.


    I took 300g total mesh mass, and 450 J/kgC heat capacity as for Nickel. This may not be entirely accurate (especially the mesh mass) but is ballpark correct, and easily verified. For the reactor as a whole I took 20kg and 500J/kgC for stainless steel. I'm pretty confident those are accurate. We have no information on mesh/reactor thermal resistance except that it determines the mesh heating time constant. My point is that no value of mesh/reactor thermal resistance works to explain the data.


    I used "mesh" to mean everything inside the vacuum chamber. There may be other thermal masses, like an internal heater used for later experiments. That would have a big effect on the heat capacity.


    Agreed, but we know the structure of that heater, and it is insignificant in total mass compared with the 20kg stainless steel reactor. The heat capacity cannot be that much different.


    The thermal resistance to the mesh (plus whatever) could be extremely high. Part is in contact with the reactor wall, but part is not. Try heating one end of a screen and see how much heat you detect a few inches away. The thin Ni wire in the screen is a lousy thermal conductor. Also, if there was something else like an unused heater inside, that could be very well insulated by the vacuum.


    I agree that the thermal resistance could be very high - and high enough for any time constant. It could, perhaps, explain that low amplitude tail, though I have my doubts. My point is that it cannot explain the main rise and fall times because the total heat going into these internal elements is very small compared with that needed to heat up the reactor body.


    In addition, if this device generates power from the mesh, there is a simple argument to show that the thermal resistance to the mesh itself must be low. That is because, with a high thermal resistance, and the high power supposed to be generated (100W) the mesh would become much too hot and melt.
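
    The arithmetic behind that argument is simple: in steady state, a mesh generating P watts sits P * R_th above the reactor wall. The resistance values below are illustrative only:

```python
# Worked version of the melting argument: steady-state mesh temperature rise
# above the reactor wall is P * R_th. Resistance values are illustrative.
P_mesh = 100.0  # W, the supposed excess power generated in the mesh
for R_th in (0.5, 2.0, 10.0, 50.0):  # mesh-to-reactor thermal resistance, C/W
    delta_T = P_mesh * R_th
    print(f"R_th = {R_th:5.1f} C/W -> mesh runs {delta_T:7.1f} C above the wall")
# Nickel melts at ~1455 C, so a very high R_th is ruled out
# if the mesh really dissipates ~100 W.
```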


    My simulation was not intended to try to arrive at any specific numerical results. I posted it more as a technique that could be used by those doing the experiments. They could measure the values of the six parameters fairly easily and come up with a simulation to show what should happen without excess heat, then show what actually happens and analyze the differences.


    I agree, and it is very helpful. I myself don't see any of the second-order dynamic effects as important when compared with the as-yet-unexplained and dominant reactor casing heating time. The merit of simulation is that it can explore more complex effects, but only once the dominant effect is properly understood and modelled. For that you don't need a simulation, though it does no harm. Algebraic analysis is more helpful when things are simple, as for the reactor casing heat-up, because it shows you the dependency of the results on parameters.


    So, my question to anyone interested in those 2016 (published 2017) results is: how can the fast rise time of the control trace be reconciled with the slow rise time of the active trace?


    The obvious answer would be that the active reactor heats up more than the control reactor. It would have to heat up to an approx 10X higher temperature delta from ambient (if the reactors are the same mass). For example:

    control 25C -> 50C

    active 25C -> 275C


    That seems to indicate a very significant difference in setup. Maybe, for example, the heater round the reactor has much poorer thermal conductivity to forced air cooling than the bare reactor, so allowing a higher temperature to be reached. I can imagine a factor of 2 in such a way, perhaps, but not 10.


    My big idea here is that if we have data similar to that in the 2017 paper for other tests R19, R20, or R21, we can cross-check the power calculations against the speed at which the reactor heats and cools, given that the reactor temperature should be the dominant factor in determining the output power. This can make the supplied data more robust, or expose mistakes. Both are to be welcomed as representing more information. As always, if this behaviour can be replicated, or even reproduced for independent testing, there is no need for this: the replications can be instrumented in different ways and cast-iron new results obtained. Otherwise, relying on this data, we can get more out of it by explaining the dynamics - and that means explaining the heating/cooling of the dominant pole (in transfer function terms) represented by the reactor body.


    One assumption I'm making is that at these timescales the reactor body can be modelled as isothermal - that would not be true for very fast changes in temperature.


    THH


    Maths needed:


    Rhc = reactor casing total heat capacity (J/C) = mass * SHC of stainless steel

    P(t) = Pheater(t) + Pmesh(t) - Pcooling(t) is the total thermal power heating or cooling the casing. Pmesh is the "LENR power" and also the difference between steady-state measured Pout and Pin if the calorimetry is accurate.

    DeltaTout ~ DeltaTReactor: the assumption is that the reactor body temperature is the main driver of the output air temperature rise. There will be errors, but this is a good approximation +/- 20%. A more accurate model could be obtained from the known calorimeter heat loss curve vs reactor temperature if wanted.

    We have graphs of DeltaTOut vs time from which we can determine the rate of change of DeltaTReactor:


    d/dt(DeltaTReactor) = P(t)/Rhc


    The simplest place to do this is on the rising edge of the output temperature. For this, the mesh and reactor are still cold, so Pmesh = PCooling ~ 0 and we should have the reactor temperature change determined precisely by the heater power.


    That allows us to put a reactor temperature scale onto the output power (or DeltaTout) graph and determine reactor temperature.
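
    As a worked example of this estimate, using the reactor mass and specific heat figures quoted above (both approximate):

```python
# Rising-edge estimate: initial slope of reactor temperature = P / Rhc,
# using the approximate mass and SHC figures from the thread.
mass_kg = 20.0           # stainless steel reactor body
shc = 500.0              # J/(kg C) for stainless steel
Rhc = mass_kg * shc      # total heat capacity, J/C

P_heater = 120.0         # W, all of it heating the cold reactor initially
dT_dt = P_heater / Rhc   # initial slope of reactor temperature, C/s
print(f"initial reactor warm-up rate ~ {dT_dt * 3600:.1f} C/hour")
# -> prints ~43.2 C/hour; a ~10x slower observed rise implies ~10x the
#    effective heat capacity, or ~1/10 the net power reaching the casing
```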


    We can then check that against the measured reactor temperature. That might be inaccurate due to local cooling effects etc, but we can at least compare active and control data. And understanding inaccuracy in reactor case measurements is helpful, if that is the cause of a discrepancy in temperature rise times.


    We can also potentially (this may not be worth it, because it is more difficult to analyse) compare the rise rate of change at start with the fall rate of change when power is off. If the mesh stops generating power when the heater cuts off, we expect the fall dT/dt to be double the rise dT/dt (for a COP 2 steady state). If the mesh delivers power out as Mizuno believes, dependent on temperature (see graph in paper), then we can model this and see whether it matches the overall shape of the curve (that is more difficult because it can be conflated with other non-modelled nonlinearities). This is probably not a very helpful investigation, because the bit that can easily be done delivers the same results for no excess heat and for any amount of excess heat that is dependent only on temperature. It would help with R20, where we know that the excess heat must fall when the heater power stops.


    If there are big discrepancies, as here, we know that active and control data come from different setups and therefore comparison of the two should be taken with caution.

  • I think you can use any mesh from any source to study the thermal characteristics.


    The problem with the thermal characteristics is the complicated kinetics,


    The two main thermal masses are

    - the acrylic calorimeter box: 11,000 g x 1.5 J/gC

    - the steel reactor vessel: 20,300 g x 0.5 J/gC

    each with their own kinetics


    Perhaps we would expect to see

    a bi-exponential characteristic for the inactive mesh case

    one fast for the steel, one slow for the acrylic? Or vice versa?

    Graphpad analysis shows weak support for this

    Perhaps fast 540 s and slow 2004 s time constants???

    Perhaps..

    BUT...

    The heat loss/gain is not very first-order with respect to delta T,

    because, as we know, both radiation and turbulence become more

    predominant as the temperature rises,

    so the kinetic constants change as the temperature rises,

    especially for the reactor.

    Perhaps there are better calorimetric setups

    to examine the thermal kinetics of

    the LENR mesh processes.

    These probably have their own slow/fast kinetics.

    The calorimeter was not designed to do this.

  • The two main thermal masses are

    - the acrylic calorimeter box: 11,000 g x 1.5 J/gC

    - the steel reactor vessel: 20,300 g x 0.5 J/gC


    We know that the box temperature must be much less than the reactor temperature. Indeed, if radiation is not significant, the inside of the insulation (which itself is inside the box) would be no higher than the outlet air temperature ~ 10C delta, whereas the reactor is at ~ 100C delta or more. Of course, we know that radiation is significant. Still, the temperature swing in the acrylic must be << the temperature swing in the reactor.


    Therefore in terms of effect on dynamics even though the energy stored for a given temperature change (mass * SHC) is comparable - slightly higher for the box - the overall effect on the system of the box will be << that of the reactor.


    In fact, the main source of nonlinearity in this system (the only one I can see) is the natural convective cooling of the box, which will be proportional to the square of the box temperature rise above ambient.


    The airflow remains (mixed) not very hot, and so the forced air convection would be expected to be reasonably linear in its behaviour (nonlinear cooling corresponds to reactor temperature rise no longer proportional to output temperature delta).

    The radiation is an issue if the box gets very hot.


    If the box becomes very hot we do however need to consider whether that might cause changes in the output RTD reading through convection or radiation.


    This would be a lot easier to analyse if we had better information about how much the box heats up. For example, uneven airflow that allowed part of the internal air to be stagnant and heat up to the reactor temperature could have a significant effect on this, altering the calorimeter characteristics.


    Still, that factor of 10 difference in initial gradient of the rising edge is pretty difficult to explain based on what we currently know. It does not comfortably lend itself to explanation from LENR power, nor from any of these nonlinearities, which only come into play at higher reactor temperatures, nor from the case, which (isolated by the insulation) would not have much effect initially until the reactor had warmed up.


    So that gradient is still safe to analyse quantitatively.


    THH


  • I guess "Input power" is expressed in "true-watts" in the calibration spreadsheet and in "fake-watts" in the excess heat spreadsheet. :)

    120 W input excess heat result shown in Fig. 28

    Time/s    V/DC      I/DC      Input power   V/DC*I/DC   Diff.
    811.08      0.05     0.00        0.00          0.00      0.00
    835.41    -49.74    -2.44      121.14        121.15      0.01
    859.91    -49.76    -2.43      121.00        121.00      0.00
    884.52    -49.79    -2.43      121.06        121.06      0.00
    908.85    -49.80    -2.43      121.03        121.03      0.00
    ...
    Guess unit (added):
              V (true)  I (??)    W (fake)         W          W

    120 W input calibration result shown in Fig. 27

    Time/s    V/DC      I/DC      Input power   V/DC*I/DC   Diff.
      29.6      0        0           0             0.00      0.00
      54.0    -41.33    -1.11       46.02         45.88     -0.14
      78.5    -67.26    -1.81      121.61        121.74      0.13
     103.0    -67.26    -1.80      121.34        121.07     -0.27
     127.4    -67.26    -1.80      121.13        121.07     -0.06
    ...
    Guess unit (added):
       -      V (true)  A (true)  W (true)         W          W

    (The "Time/s", "V/DC", "I/DC" and "Input power" columns are spreadsheet columns; "V/DC*I/DC" and "Diff." are added columns.)
    Notes:

    - Excess heat spreadsheet (Fig. 28): the "Input power" values coincide with the V/DC*I/DC products. This means that the original "Input power" values directly measured with the Yokogawa power analyzer have been substituted with the V*I products.

    - Calibration spreadsheet (Fig. 27): the "Input power" values differ from the V/DC*I/DC products. This means that the calibration spreadsheet still reports the original "Input power" values measured by the Yokogawa power analyzer.


    Ascoli.. I am sure that Jed is busy with La Dolce Vita in Assisi ... or maybe he is getting a 4D headache,


    He was asked for an answer more than one month ago (1). It's evident that he has no intention of answering this question.


    Quote

    Here is how each spreadsheet arises... I am not sure what Mizuno is using for his data format

    But there is conversion/ Jap-Eng translation involved and each spreadsheet requires a lot of work

    from 2017...


    Whatever the data format used by Mizuno, no conversion or Jap/Eng translation tool is able to substitute the original values in one column with the products of two other columns. This substitution can only be explained by a XXXXXXXXX of the original experimental data.


    Edited for accusatory language: Shane


    (1) Mizuno reports increased excess heat

  • I think XXX is the wrong word; however, the active data is undoubtedly different, using V*I to calculate power rather than (presumably) the Yokogawa analyser.


    That difference should have been noted explicitly, and should be explained, just as all instrument differences between control and active runs should be explained. On the face of it, it is strange.


    I doubt Jed is in any position to explain this himself: but he could note the issue (which he has been reluctant to do) and either get an explanation from Mizuno or note it here as an unresolved issue that he cannot resolve.


    Why do these methodological issues matter? Because if unresolved they show bad practice that could easily result in significant mistakes leading to false positives. We cannot know, from these issues, that there will be any mistakes. But, we cannot know that there will not. It is a red flag, along with a number of other issues (for example the very different and unexplained initial rising edge gradient for output temperature in the control and active tests).


    THH

  • I think suppression is the wrong word; however, the active data is undoubtedly different, using V*I to calculate power rather than (presumably) the Yokogawa analyser.


    I repeat: the Yokogawa analyzer is not connected to the A/D HP gadget. Data from it is not collected into the spreadsheets. When you turn on the power and adjust the Variac, you look at the numbers on the analyzer because they are large and right there. You then check the computer display from time to time to be sure they agree with the analyzer, and you check the AC meter. They always agree. The only numbers shown in any report are from the V*I collected by the HP gadget.


    That's all there is to it. If you don't believe me, that's fine. Go right ahead and continue making these ridiculous claims. But please do not say I never told you this. You should just say "Jed is lying."


    I doubt Jed is in any position to explain this himself:


    I explained it, again and again. I told you that only the V*I data is in the report. I told you that the same HP channels are used to collect the data from the control and then the active reactor. I am sure of this BECAUSE I WAS THERE. I have a photo of me operating the Variac, watching the analyzer. Mizuno showed me how he moves the leads from the control to the active. I can see where the wires go into the A/D gadget. The channels are in the same order as the spreadsheet columns.



    Why do these methodological issues matter? Because if unresolved they show bad practice that could easily result in significant mistakes leading to false positives.


    These methodological issues do not exist. They were invented by you and by Ascoli. They are bullshit, lies and trolling, intended to confuse the issue and raise questions where no questions exist. Data from the analyzer has never been tied into the A/D converter at any time in the history of these experiments. I never said it was tied in. Mizuno never said that. Many people have visited him, and none of them have ever said that. Yet you and Ascoli insist that is the configuration!

  • Oh Dear. Oh Dear, Oh Dear! So you are claiming another Rossiesque fiddle is behind the high excess heat results - that the input power in the excess heat spreadsheet was in fact much more than the data displayed? Is that the motivation behind this endless calorimetric analysis - you've found the deliberate mistake? I think this is extremely unlikely - probably just an error in translation from the Japanese versions of the spreadsheets. Furthermore, if someone was intending to fool everyone (eg like Rossi), surely they would not leave evidence of data having been modified, as you are suggesting; they wouldn't be that stupid, would they?

  • One reason for the difference in speed between the active and control runs would be if the sample rate was different - however, I believe that the spreadsheets capture the real time of each sample, so a different sample rate could not produce an incorrect time axis.
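
    This is easy to verify from the Time/s columns of the tables posted earlier in the thread; for the few rows shown, both spreadsheets log at roughly the same ~24.5 s cadence:

```python
# Sample-interval check using the Time/s values from the spreadsheet
# excerpts quoted earlier in the thread (active vs control runs).
t_active = [811.08, 835.41, 859.91, 884.52, 908.85]
t_control = [29.6, 54.0, 78.5, 103.0, 127.4]

def intervals(ts):
    # Successive differences between timestamps, in seconds
    return [round(b - a, 2) for a, b in zip(ts, ts[1:])]

print("active sample intervals: ", intervals(t_active))
print("control sample intervals:", intervals(t_control))
# Both sides show the same nominal cadence, so sample rate does not
# explain the different rise speeds.
```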