MIZUNO REPLICATION AND MATERIALS ONLY

  • Paradigmnoia and I have already agreed that nothing he found in his studies of Mizuno's air-flow calorimeter would account for the differences Mizuno sees between control and active mesh behaviours.

    I am talking about the way the ongoing tests are being done. From the prior tests, the lack of third-party replicability is my only concern; the calorimetry was good enough.

    I certainly hope to see LENR helping humans to blossom, and I'm here to help it happen.

  • After all, if you know through calibration that the component you are measuring captures X% of the heat, you can estimate what 100% is. Of course, that is a risky process: if you don't have the right value for X, or if conditions in one of the other components change, you might not be aware of it and your estimate could go badly awry.
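The scaling argument above can be sketched numerically. The wattages and capture fractions below are hypothetical, purely for illustration:

```python
def estimate_total_power(measured_w, capture_fraction):
    """Scale the power measured in one heat path up to an estimate of 100%."""
    return measured_w / capture_fraction

# If calibration says the air flow captures 80% of the heat, a 40 W
# measurement implies 50 W total:
print(estimate_total_power(40.0, 0.80))  # -> 50.0

# The risk: if the true capture fraction has drifted to 70%, the same
# 40 W measurement really corresponds to about 57 W, and nothing in the
# air-flow data itself reveals the error.
print(round(estimate_total_power(40.0, 0.70), 1))  # -> 57.1
```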

    It is these risks that you have been assessing for Mizuno's air-flow calorimeter.

    Only when the heat captured by the air flow is less than input energy. When there is a lot of excess heat, the heat captured in the flow exceeds input energy, and you can ignore errors in other heat flow paths.

  • When there is a lot of excess heat, the heat captured in the flow exceeds input energy, and you can ignore errors in other heat flow paths.

    I don't mean actually "ignore" it. I mean you can discount it. You don't need to worry about it as much as you have to worry when heat recovered from the flow is less than input power. Of course you should calibrate and measure it as much as possible. It is a fudge factor.


    One problem with a fudge factor is that it may have several different components glommed together. The heat loss you do not measure directly may be going out by several different paths, so it changes under different conditions, shifting from one path to another. It may not be linear. That is what happens with an isoperibolic calorimeter at low power levels, where heat losses from the lid and the metal connectors that stick out of the constant-temperature bath begin to dominate. That is what the graph from Miles, "Calorimetric Principles" Fig. 4 (cell constant), shows: below 600 mW things go haywire.



  • Mizuno's 2017 value of 0.165 eV/atom seems consistent with Storms's latest published values.

    Storms makes a good case for the fusion process and subsequent heat release being rate-limited by the diffusion of deuterium to the active sites.

    He also states that the major difference in overall reaction rates is due to whether there are more or fewer active sites.


    This is commonsense chemistry, and is mathematically expressed in the pre-exponential factor A in the Arrhenius function:


    rate = A·e^(−Ea/RT)


    Although overall rate increases can be achieved by raising the temperature up to or near the melting point of stainless steel (about 1400 °C), and so increasing diffusion, major work is still needed to understand what the "active site" is and how to increase the number of active sites, because presently that number is very low.
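The point above can be sketched numerically with the Arrhenius law, written here in the per-atom form with Boltzmann's constant since the activation energy is quoted in eV/atom. The pre-exponential factor A and the temperatures are hypothetical, chosen only to show how site count and temperature each affect the rate:

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_rate(a, ea_ev, temp_k):
    """rate = A * exp(-Ea / (kB * T)), per-atom form of the Arrhenius law."""
    return a * math.exp(-ea_ev / (K_B * temp_k))

EA = 0.165  # eV/atom, the activation energy quoted above

r_300c = arrhenius_rate(a=1.0, ea_ev=EA, temp_k=573.0)   # 300 C
r_500c = arrhenius_rate(a=1.0, ea_ev=EA, temp_k=773.0)   # 500 C
r_more_sites = arrhenius_rate(a=10.0, ea_ev=EA, temp_k=573.0)

print(r_500c / r_300c)        # modest gain from a 200 C temperature rise at this Ea
print(r_more_sites / r_300c)  # exactly 10x from 10x the active sites
```

At this low activation energy, the exponential term gains only a factor of a few over a 200 °C rise, while the rate scales linearly with A, which is why the number of active sites dominates.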

  • We tried PWM control to maintain a fixed temperature, but the large thermal mass, the different rates of outward heat flow above and below the set point, and possibly the varying power output of the reactor made it very noisy and difficult to detect any differences. When we fixed the input power and let temperature vary, the data became very clean and stable.

  • We tried PWM control to maintain a fixed temperature, but the large thermal mass, the different rates of outward heat flow above and below the set point, and possibly the varying power output of the reactor made it very noisy and difficult to detect any differences. When we fixed the input power and let temperature vary, the data became very clean and stable.

    Well, that’s a pity to some extent.


    I mean, as long as the calibration is done with an exact dummy reactor, a higher temperature achieved with the same energy input can surely be interpreted as excess heat, and, in all good faith, it should be interpreted as such.


    But measuring temperature as an indication of energy brings out a lot of mistrust issues (just the type and placement of sensors can be a huge can of worms, especially if you intend to convince skeptics who will always suspect a trick before conceding the excess heat) and is not as straightforward. This brings all the scrutiny onto the calibration and the validity of the dummy vs. active comparison.


    Don’t get me wrong, I think it is perfectly possible to prove excess energy this way, but skeptics, especially staunch or cynical ones, will never concede that excess heat has been measured properly in such circumstances.


  • 1) Do a multi-step calibration.


    2) The reactor version should start at the same input for each step; then input should be trimmed back until the same steady-state temperature is achieved for each step. The difference in input (calibration input minus reactor-run input) needed to reach the same steady-state temperature is acceptable as the reaction's contribution at the final temperature. Reaching a higher temperature at the same input is similar, but not the best comparison unless it coincides with a step already done.
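A minimal sketch of the power-compensation bookkeeping described in step 2, with hypothetical wattages:

```python
def reaction_power(calibration_input_w, reactor_input_w):
    """Excess power attributed to the reaction: input the dummy needed minus
    input the active reactor needed to hold the same steady-state temperature."""
    return calibration_input_w - reactor_input_w

# If a calibration step needed 100 W to hold a given temperature, and the
# active run held the same temperature on only 83 W:
print(reaction_power(100.0, 83.0))  # -> 17.0 W supplied by the reaction
```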

  • I beg to differ and welcome anyone to a debate on this issue. Any serious scientist would prefer steady-state end points without electronic intervention, integral calculations, or PID controls. The physics and the reasons for this couldn’t be clearer.


    Measuring temperature to resolve energy is what all calorimeters do. Our calorimeter doesn’t rely on mass flow and delta-T, where ultra-high accuracy is needed. If you want to measure a delta-T of 4 °C using typical PT100 probes, each with a total uncertainty of 1.5 °C, then your data will be meaningless, as you would have a possible error of 75%.


    In the oven calorimeter a fan mixes the air, and six measurement points are all within 1.5 °C. But we are measuring temperatures of hundreds of degrees.


    There is no a priori reason to do it the way paradigmnoia proposes but there are a slew of reasons not to do it that way. Go ahead and try. Let me know how it goes.
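For what it's worth, the 75% figure above works out as a worst-case estimate in which the two probes err in opposite directions:

```python
def worst_case_dt_error(delta_t_c, probe_uncertainty_c, n_probes=2):
    """Worst-case relative error in a delta-T measurement when each of the
    probes can be off by probe_uncertainty_c in opposite directions."""
    return (n_probes * probe_uncertainty_c) / delta_t_c

# Two PT100 probes, each with 1.5 C total uncertainty, measuring a 4 C delta-T:
err = worst_case_dt_error(delta_t_c=4.0, probe_uncertainty_c=1.5)
print(f"{err:.0%}")  # -> 75%
```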

  • I have been thinking about what happens when you harvest heat from a system equipped with a temperature-sensitive source of internal heating (which is how many people think LENR acts). The simplistic model I have mentioned before helps to think about these issues. To remind ... the model consists of an idealized thermal mass with Newtonian cooling, a source of controllable extrinsic heating, and internal (LENR-like) heating that is activated by an increase in temperature (and that activates quickly relative to the heating time constant of the mass). Recently, I have been using an Arrhenius function for the temperature dependence of the internal heating. This allows me to use some of Mizuno's own data in the model.


    I now add to the model a term which corresponds to a harvest of heat from the thermal mass. For simplicity, I model this as withdrawing heat at a set rate (power) such that the rate doesn't depend on temperature. This is doable physically although it may not be the physically simplest way to withdraw heat. The reason I make this choice is that it is conceptually simple -- heat harvesting and heat input both then become temperature independent and, in fact, heat harvesting is really just the same as just turning down the power of the input.


    Here is how the model behaves under particular circumstances. This is only part I. There will be more in subsequent posts.


    Result 1 (below). Temperature (degrees C) of the system after the input is turned on. Red is control and blue is with a temperature-sensitive source of internal heating. The parameters of the model have been chosen to replicate the results described by Daniel_G earlier on this thread. Time is in dimensionless units that can be thought of as heating time-constants for the thermal mass (i.e., after 1 time constant the red trace has closed all but 1/e of the gap to its settling temperature).




    Result 2 (below). Begin to harvest heat. Heat is withdrawn from the thermal mass at a rate that is 25% of the input power. The temperature declines accordingly and the internal heating becomes slightly less active.



    Result 3 (below). Harvest heat at the same rate as it is input. The temperature declines right back to baseline. Excess heat is basically turned off.



    Remarks: Although the temperature-sensitive internal heating is activated by the inputs shown here, with these inputs you can't get out of the system any more power than you put in. Thus, in the second panel above one might think 'Hey great, here I am constantly harvesting some of the excess heat being generated internally!' ... but the rate of harvest is less than the power being fed into the system to support the working temperature. Overall, no net power is being created. If you attempt to get around this by increasing the harvested power, as shown in the bottom panel, eventually the temperature decreases so much that the excess heat turns off.


    So ... no free lunch so far. To move beyond this you need to raise the system temperature enough that it becomes self-heating. Then, theoretically, you can turn off the input altogether and harvest some of the heat that is still being released. It turns out that this is, indeed, a capability of the model. That is next (maybe later today, maybe tomorrow).
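For readers who want to experiment, here is a minimal sketch of a model of this kind: Newtonian cooling, a constant extrinsic input, Arrhenius-type internal heating, and a constant-rate harvest term. The parameters (time constant, input power, internal-heating amplitude) are my own illustrative choices, not the poster's actual code; only the 0.165 eV activation energy comes from the thread:

```python
import math

K_B = 8.617e-5     # Boltzmann constant, eV/K
TAU = 1.0          # heating time constant of the thermal mass (dimensionless time)
T_AMBIENT = 293.0  # ambient temperature, K
EA = 0.165         # activation energy of the internal heating, eV

def simulate(p_in, p_harvest, a_internal, t_end=20.0, dt=0.01):
    """Euler-integrate the thermal mass. Powers are expressed as the
    steady-state temperature rise (K) they would produce on their own,
    so p_in=100 with no internal heating settles 100 K above ambient."""
    temp = T_AMBIENT
    for _ in range(int(t_end / dt)):
        # Internal (LENR-like) heating with Arrhenius temperature dependence:
        p_internal = a_internal * math.exp(-EA / (K_B * temp))
        # Newtonian cooling toward ambient, plus net heating minus harvest:
        temp += (-(temp - T_AMBIENT) / TAU + p_in + p_internal - p_harvest) * dt
    return temp

control   = simulate(p_in=100.0, p_harvest=0.0,   a_internal=0.0)    # dummy run
active    = simulate(p_in=100.0, p_harvest=0.0,   a_internal=50.0)   # heating on
harvested = simulate(p_in=100.0, p_harvest=100.0, a_internal=50.0)   # harvest = input

print(control, active, harvested)
# The active run settles slightly above control; harvesting as much power
# as is input drives the temperature back near ambient, switching the
# internal heating effectively off, as in Result 3.
```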

      

  • Remarks: Although the temperature-sensitive internal heating is activated by the inputs shown here, with these inputs you can't get out of the system any more power than you put in. Thus, in the second panel above one might think 'Hey great, here I am constantly harvesting some of the excess heat being generated internally!' ... but the rate of harvest is less than the power being fed into the system to support the working temperature. Overall, no net power is being created. If you attempt to get around this by increasing the harvested power, as shown in the bottom panel, eventually the temperature decreases so much that the excess heat turns off.


    Again your model does not follow the simple physics I explained. By changing your model’s assumptions (some quite ludicrous) you can get any result you want. You don’t follow the known physics of the calorimeter we use.


    In the real world I only control temperature and everything else is up to nature. I believe a COP of infinity is possible. Time and data will tell whether we are correct or not.

  • Simply, what I was suggesting was to trim down the input and let steady state occur, trimming a bit more if necessary, to aim for steady state at certain known calibration temperatures. Obviously, messing with it as little as possible is best. For example, if COPs are typically 1.2, then with an active reaction aim for about 80-85% of the input that the calibration required for a temperature step, and let the reaction fill in the rest.
    This makes the oven a power-compensation calorimeter.

  • Simply, what I was suggesting was to trim down the input and let steady state occur, trimming a bit more if necessary, to aim for steady state at certain known calibration temperatures. Obviously, messing with it as little as possible is best. For example, if COPs are typically 1.2, then with an active reaction aim for about 80-85% of the input that the calibration required for a temperature step, and let the reaction fill in the rest.
    This makes the oven a power-compensation calorimeter.

    Yes, I get that, and that is what I tried to do originally, but as I wrote already, there are too many variables that change in non-linear ways to be able to do this properly. Perhaps with enough patience and perfect tuning of the control systems this could be achieved. But my question is: why is this any more valid than simply measuring the steady-state temperature with and without an active reactor? The latter is 1000x easier, so why unnecessarily complicate the system?


    To be fair, we had the same line of thought prior to running this experiment. I have decades of experience in metrology and control systems engineering, and despite this toolbox, I could not make it work reasonably. If one really wanted to achieve it, it could possibly be done, but it would involve extensive modeling of the 3 main modes of heat flow, the thermal masses and resistances of the system, and an extensive fuzzy rule set. Given unlimited time and budget I am sure I could do it. But time and budget are not unlimited, which begs the question: why fix what ain't broken? If the only way to increase the temperature of the oven is to add more power, then why would reducing power to maintain a given temperature be any more or less valid than recording the required increase in power to maintain the higher temperature seen with the reactor inside?


    Is there some specific systematic error, based on actual physics and thermodynamics, that you want to address? If so, I would like to hear about it. Otherwise, your proposed approach unnecessarily complicates the system and hence makes the results less valid to the objective observer, in my mind.

  • Simply, what I was suggesting was to trim down the input and let steady state occur, trimming a bit more if necessary, to aim for steady state at certain known calibration temperatures. Obviously, messing with it as little as possible is best. For example, if COPs are typically 1.2, then with an active reaction aim for about 80-85% of the input that the calibration required for a temperature step, and let the reaction fill in the rest.
    This makes the oven a power-compensation calorimeter.

    Steady state of the current oven literally takes days. Can you imagine how long it would take to get everything adjusted properly without over- or undershooting? It might take you a year to get everything right with this strategy. Been there. Done that.

  • Ovens can be operated in two equally valid modes: fixed power as the independent variable with temperature as the dependent variable, or fixed temperature with power varied. The problem with a large-thermal-mass muffle furnace is that the response time is so slow it will take you forever to search out the equilibrium point. It would be even harder than trying to steer a large ship with a compass that has a time constant on the order of 6 or 12 hours. At least for this ship the response would be linear. Now try to do it with a non-linear rudder. Good luck!
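The settling-time problem can be quantified: a first-order system with time constant tau takes roughly tau·ln(1/tolerance) to come within a given tolerance of its final value, so every trial adjustment of the set point costs a day or more before the result can even be read. The time constants below are the 6 and 12 hour figures from the analogy above; the 1% tolerance is an assumption:

```python
import math

def settle_time(tau_hours, tolerance=0.01):
    """Time for a first-order response to come within `tolerance` (as a
    fraction of the remaining step) of its final value: t = tau * ln(1/tol)."""
    return tau_hours * math.log(1.0 / tolerance)

# Settling to within 1% of the final value after each trial adjustment:
for tau in (6.0, 12.0):
    print(f"tau = {tau:4.1f} h -> ~{settle_time(tau):.0f} h to settle")
```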

  • It would be even harder than trying to steer a large ship with a compass that has a time constant on the order of 6 or 12 hours. At least for this ship the response would be linear. Now try to do it with a non-linear rudder. Good luck!

    Are you saying that it is harder to temperature-control your oven when it has an active reactor inside than when it has a nonactive reactor?
