MIZUNO REPLICATION AND MATERIALS ONLY

  • If you know your temperature range and power levels well, you can build your calorimeter to make the delta T range that is most favourable for measurements, to any delta T per x watts recovered, by adjusting and optimizing the volume of “coolant” and the flow rate.

    That’s true but remember if we remove too much heat from the system in order to measure the heat flow then we affect the reactor power. So there is an optimal point we have to find between these competing issues.
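    As a minimal sketch of the sizing point quoted above: in a flow calorimeter the steady-state temperature rise is delta T = P / (m_dot * c_p), so the coolant flow rate sets how many degrees you see per watt recovered. The power levels and target delta T below are illustrative assumptions only:

    # Flow-calorimeter sizing: delta_T = P / (m_dot * c_p); illustrative numbers only.
    C_P_WATER = 4186.0   # J/(kg*K), specific heat of water

    def delta_t_kelvin(power_w, flow_kg_per_s):
        """Steady-state coolant temperature rise for a given recovered power."""
        return power_w / (flow_kg_per_s * C_P_WATER)

    def flow_for_delta_t(power_w, target_dt_k):
        """Coolant flow rate needed to hold a chosen delta T at a given power."""
        return power_w / (target_dt_k * C_P_WATER)

    for p in (50, 100, 300):    # hypothetical recovered powers in watts
        g_per_s = flow_for_delta_t(p, 5.0) * 1000.0
        print(f"{p} W -> {g_per_s:.1f} g/s of water for a 5 K rise")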

  • That’s true but remember if we remove too much heat from the system in order to measure the heat flow then we affect the reactor power. So there is an optimal point we have to find between these competing issues.

    You have to remove everything it makes, as fast as it makes it, or the temperature will rise until something fails. (Removing includes losses to the environment.)

  • You have to remove everything it makes, as fast as it makes it, or the temperature will rise until something fails. (Removing includes losses to the environment.)

    Not completely true, but true enough!


    For a heat source that increases exponentially with temperature (e.g., the advertised LENR heat) there should be a range of input energies that result in steady-state temperatures. But put in a bit too much energy and the system, just as you say, will tip over into a regime of temperature runaway until the mechanism destroys itself. I am puzzled as to why we have heard nothing about such a regime. It should be a big source of practical trouble when operating the reactor.


    Even the stable states are trouble because if the LENR mechanism activates exponentially with increasing temperature it also deactivates exponentially with decreasing temperature. This means that COP should be rather modest unless you push the system close to the point at which thermal runaway occurs. So the whole thing is a bit of a difficult balancing act.

  • Not completely true, but true enough!


    For a heat source that increases exponentially with temperature (e.g., the advertised LENR heat) there should be a range of input energies that result in steady-state temperatures. But put in a bit too much energy and the system, just as you say, will tip over into a regime of temperature runaway until the mechanism destroys itself. I am puzzled as to why we have heard nothing about such a regime. It should be a big source of practical trouble when operating the reactor.


    Even the stable states are trouble because if the LENR mechanism activates exponentially with increasing temperature it also deactivates exponentially with decreasing temperature. This means that COP should be rather modest unless you push the system close to the point at which thermal runaway occurs. So the whole thing is a bit of a difficult balancing act.

    I would caution you against the idea that this is in any way a thermally symmetrical reaction. From my observation it is more complex than that, with a tendency for LENR to just 'switch off' suddenly below certain critical temperatures, but with more 'leeway' above that point, probably because Stefan Boltzmann ensures (personally) that radiated heat is proportional to the fourth power of the absolute temperature. So while you might get a sudden shutdown at 300 °C, you might under some circumstances see the system temperature increase to over 1000 °C before energy output is affected. But technically speaking, proportionate cooling is not difficult to construct or control.
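    A quick back-of-envelope check of that fourth-power point (a minimal sketch; the emissivity, radiating area and ambient temperature are assumed for illustration, not values from this thread):

    # Net radiated power, P = eps * sigma * A * (T^4 - T_amb^4); assumed parameters.
    SIGMA = 5.670e-8     # W/(m^2*K^4), Stefan-Boltzmann constant
    EMISSIVITY = 0.8     # assumed
    AREA_M2 = 0.05       # assumed radiating surface area
    T_AMBIENT_K = 300.0  # assumed surroundings

    def radiated_w(t_kelvin):
        """Net power radiated to the surroundings at surface temperature t_kelvin."""
        return EMISSIVITY * SIGMA * AREA_M2 * (t_kelvin**4 - T_AMBIENT_K**4)

    for t_c in (300, 600, 1000):    # surface temperatures in deg C
        print(f"{t_c} C -> about {radiated_w(t_c + 273.15):.0f} W radiated")

    With these assumed numbers the radiated loss grows from roughly 200 W at 300 °C to several kilowatts at 1000 °C, which is the extra 'leeway' above the switch-off temperature being described.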

  • I would caution you against the idea that this is in any way a thermally symmetrical reaction. From my observation it is more complex than that, with a tendency for LENR to just 'switch off' suddenly below certain critical temperatures, but with more 'leeway' above that point, probably because Stefan Boltzmann ensures (personally) that radiated heat is proportional to the fourth power of the absolute temperature. So while you might get a sudden shutdown at 300 °C, you might under some circumstances see the system temperature increase to over 1000 °C before energy output is affected. But technically speaking, proportionate cooling is not difficult to construct or control.

    I have asked Daniel_G several times about this and he is absolutely definite that steady state activation in his system has an exponential dependence on temperature with no apparent hint of a separate deactivation process. No word of complications. So my observations (and my puzzlements) stand.


    I do recall that you described a more complicated range of behaviours for the Russ George fuel. There was word of several different ranges of temperature activation, and a bursting behaviour that I interpreted at the time as evidence of a slow inactivation. But it is all so dubious! All original posts on the matter were taken down from LENR Forum and, 5 years later, there have been no subsequent publications and no release of information that would enable any sort of replication.

  • I have asked Daniel_G several times about this and he is absolutely definite that steady state activation in his system has an exponential dependence on temperature with no apparent hint of a separate deactivation process. No word of complications. So my observations (and my puzzlements) stand.


    I do recall that you described a more complicated range of behaviours for the Russ George fuel. There was word of several different ranges of temperature activation, and a bursting behaviour that I interpreted at the time as evidence of a slow inactivation. But it is all so dubious! All original posts on the matter were taken down from LENR Forum and, 5 years later, there have been no subsequent publications and no release of information that would enable any sort of replication.

    Bruce, you misquoted me and also mischaracterized the entire context of my claim. It really makes me wonder if you are serious. When we make measurements it’s a prerequisite that we do so in the steady state. As Alan says, you continue to chase straw men about your modeled dynamics, while serious science requires us to measure in the equilibrium state.


    I don’t really understand what drives your desire to hyperfocus on the dynamics. Again, Alan pointed to the proper physics (Stefan-Boltzmann). Rather than being a “problem” in the physics experiments, it’s a wonderful volume knob that can be used to tune output to load in a practical device.


    I never ever said the dynamic response was perfectly symmetrical. In fact I don’t much care about it. The older calorimeters and our reactors have too much thermal mass to resolve dynamics so we take them completely out of the equation. Our data shows an output exponentially proportional to temperature AT STEADY STATE.

  • You have to remove everything it makes, as fast as it makes it, or the temperature will rise until something fails. (Removing includes losses to the environment.)

    Not true. The heat builds up increasing the temperature of the reactor and calorimeter and the radiative heat transfer increases with the fourth power of absolute temperature. It’s entirely possible to make a device that will need active cooling to prevent runaway but the balance between heat output and heat transfer has not reached that point yet.


    There is no reason that all excess heat needs to be removed immediately, as you mentioned.

  • Not true. The heat builds up increasing the temperature of the reactor and calorimeter and the radiative heat transfer increases with the fourth power of absolute temperature. It’s entirely possible to make a device that will need active cooling to prevent runaway but the balance between heat output and heat transfer has not reached that point yet.


    There is no reason that all excess heat needs to be removed immediately, as you mentioned.

    It doesn’t need to be excess heat, or logarithmic, or anything special. Just plain old lightbulb heat is fine.


    Set heat to constant input level. Then:

    10 Constantly remove 99% heat

    20 Continue heating

    30 Goto 10


    *tidied that up a bit for clarity

  • Lightbulb heat output is constant so it’s a poor model for the LENR system. Doesn’t even begin to approach the behavior of a LENR device. Anyway for now we just want to produce quality data from credible labs at small scale. One solid step at a time.

  • Our data shows an output exponentially proportional to temperature AT STEADY STATE.

    Exactly. That is what I told Alan you were saying.


    I told him you are "... absolutely definite that steady state activation in his system has an exponential dependence on temperature...". So I don't see how I am misquoting you or mischaracterizing the context.


    As for the rest, I have used a simple lumped model of a thermal mass with Newtonian cooling and a temperature-dependent internal heating to model your reactor/incubator system. I think that this model is adequate to explore, first of all, what sort of steady states to expect. I have used the techniques of a branch of mathematics called "dynamical systems theory" to do this. The analysis shows that at low input power there will be 2 coexisting steady states ... one of which is stable and the other unstable. Operationally you will only be able to easily measure the stable steady state. That is what you are seeing in your experiments. As you increase the input power, however, the stable and unstable steady states will approach each other and then mutually annihilate. This is called a bifurcation point. Beyond it, at higher input powers, there is no steady state possible and you get thermal runaway.


    These are all just simple, straightforward predictions of how an incubator system like yours should act when it finds a temperature-dependent heat source inside it. They are robust, qualitative results in the sense that the precise form of the temperature dependence and of the cooling relation don't matter too much. That is why I don't think that adding radiative cooling to the model will change much. And it is also why it puzzles me that you don't see thermal runaway in your system.
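    For what it's worth, a minimal sketch of the kind of lumped model described above: Newtonian cooling plus a heat source that rises exponentially with temperature. Every parameter value is an assumption chosen only to make the behaviour visible, not a fit to anyone's data. Steady states are found by scanning for sign changes of the net power; the input power at which the stable and unstable branches merge and disappear marks the runaway (saddle-node bifurcation) point.

    # Lumped thermal model sketch: C*dT/dt = P_in + P_lenr(T) - k*(T - T_amb)
    # All parameter values below are illustrative assumptions, not measured data.
    import math

    T_AMB = 293.0    # K, ambient temperature
    K_LOSS = 0.5     # W/K, Newtonian (convective) loss coefficient, assumed
    A_LENR = 0.5     # W, prefactor of the temperature-activated source, assumed
    T_ACT = 100.0    # K, activation scale of the exponential, assumed

    def p_lenr(t):
        """Assumed exponential temperature dependence of the excess-heat source."""
        return A_LENR * math.exp((t - T_AMB) / T_ACT)

    def net_power(t, p_in):
        """Net heating rate (W) at temperature t for a given electrical input."""
        return p_in + p_lenr(t) - K_LOSS * (t - T_AMB)

    def steady_states(p_in, t_max=1500.0, step=0.1):
        """Temperatures where the net power changes sign.
        The lower crossing (+ to -) is stable, the upper one (- to +) unstable."""
        roots, t = [], T_AMB
        while t < t_max:
            if net_power(t, p_in) * net_power(t + step, p_in) < 0:
                roots.append(round(t, 1))
            t += step
        return roots

    for p_in in (50, 100, 150, 200):    # W, scanned input powers
        ss = steady_states(p_in)
        label = ", ".join(f"{t:.0f} K" for t in ss) if ss else "none -> thermal runaway"
        print(f"P_in = {p_in:3d} W: steady states at {label}")

    With these made-up numbers the stable and unstable branches merge somewhere between 150 W and 200 W of input; beyond that no steady state exists and the temperature climbs until something intervenes, which is the bifurcation behaviour described above.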

  • Great question Daniel.


    I think the issue for validation (if we ignore human factors) is all of the assumptions involved in the calorimetry and the experiment itself.


    A finding of COP = 1.1 implies accurate measurement of input and output powers. Actually, it is probably better to work with bounds: a definite upper bound on the sum of input powers and a lower bound on (perhaps part of) the output power are enough to establish a lower bound on COP, which, if it is e.g. 1.1 and sustained in such a way as to make chemical mechanisms impossible, is extraordinary. (A small numerical sketch of this appears below, after this post.)


    Note that a good bound of 1.1 probably means a typical COP of at least 1.5 because of all of the assumptions that lead to (boundable) uncertainty.


    The part of this which is best understood is the calorimetry. Not that this is straightforward, but Jed will point out that high accuracy calorimeters exist, and that the various sources of error - at least for all of the standard types - are well understood.


    However, if the calorimeter used was unusual, or (even) a well understood calorimeter was used in a way that was unusual, it brings back the possibility of some not-understood systematic error. Technically, there could be a not-understood systematic error in any calorimeter no matter how well studied - some assumption that is always made and that breaks under possible but never-yet-observed conditions. But that is very unlikely when the calorimeter is well understood and it is being used in a manner that is also well understood, with very many previous similar experiments done by different people, all of whom would have had some chance of catching errors.


    Given this - if either the calorimeter is non-standard, or the way it is being used is non-standard, different methods provide more robust results than a single method.


    There is another factor (more significant, I think), which is human error. If a single lab is doing the experiments, or even multiple labs where the methodology for the experiment and calorimetry is worked out in one and followed by others, there is the possibility of errors. A subtle error will often not be caught by the followers, since checking someone else's work is not usually as rigorous a process as working it out blind. Which is why the "hard" perturbation theory calculations are done by multiple groups using blind-modified data - the people doing the work do not know what the correct answer should be because it can only be discovered when the blinding is removed and the corresponding change made on the outputs. That also means multiple groups doing the work cannot check against each other - they all have different inputs. Keeping a small random data change blind can be very robust, so this is really good at eliminating human factors.


    Anyway - that means that the more variety in methodology possible, the more convincing the results.


    One caveat. The variation must be determined (and published) before the experimental results are obtained, not after. Otherwise the results can be challenged on the grounds (as happens in the famous misuse of p-values in medical experiments) that many different measures were tried and those that produced the desired results were cherry-picked. That actually makes the overall result less convincing.


    So:

    different methodology if decided and registered beforehand so that negative as well as positive results are captured, => better integrity

    same methodology has the advantage that non-systematic errors can be more easily identified.


    And while this in principle applies to calorimetry as well as how calorimeters are used, how inputs are measured, etc., integrity is probably less of an issue from the calorimetry if the types of calorimeters that are used, and the ways in which they are connected to the experiments, are both well studied.
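    A small numerical sketch of the bounds argument above (all figures invented for illustration): take the worst case against the claim, i.e. the largest plausible input and the smallest plausible output, and the ratio is a defensible lower bound on COP.

    # COP lower bound from worst-case measurement bounds; illustrative numbers only.
    def cop_lower_bound(p_out_w, p_out_err_w, p_in_w, p_in_err_w):
        """Smallest plausible output divided by largest plausible input."""
        return (p_out_w - p_out_err_w) / (p_in_w + p_in_err_w)

    # Hypothetical example: 120 W measured out (+/- 5 W), 100 W measured in (+/- 1 W).
    print(round(cop_lower_bound(120.0, 5.0, 100.0, 1.0), 3))   # -> 1.139

    So a nominal COP of 1.2 only supports a claimed bound of about 1.14 here, which is the sense in which a good bound of 1.1 implies a rather larger typical COP.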

  • Bruce, it seems you are overcomplicating a simple issue. You choose to ignore the most important component of heat flow. So your model is wrong. Stefan Boltzmann radiation is increasingly the major form of heat transfer as the system warms.


    Why don’t we get runaway? The simple answer is that heat flow out of the system is more than the system is producing. Not rocket science.


    It should not be puzzling at all. The system consists of input, thermal mass, convective output, radiative output. Your model seems to be overly simple so it doesn’t fit reality, hence any hypothetical dynamic behavior doesn’t mean anything. After all these conversations I really don’t understand your purpose.

  • Hello THH


    What you are describing is simply correct statistical experimental design although it is stated in an unusual or unorthodox way.


    Yes we do uncertainty budgets in every reading and we calculate the statistical power required to falsify the null hypothesis at a given confidence level and a hypothetical effect size.


    That’s exactly what Google proposed in their Nature Perspectives paper and exactly what we intend to do. That’s just normal science, the way everyone is trained to do it.


    Blinding can have its utility in biological experiments where placebo effect is something real that has to be considered. If I’m dealing with measurements logged to a real time data logger, it’s rather meaningless in my opinion.


    As for input measurement error, it’s typically less than 1 W. Output is calculated based on a delta T measurement (errors add) and a mass flow reading in flow calorimeters, which can in turn be based on velocity profile, diameter and density measurements. The two labs we intend to cooperate with use either airflow or water flow calorimeters. Uncertainty in any of these specific measurements or assumptions adds up, and the more measurements you have the higher the uncertainty.


    Each lab is highly experienced and credible in calorimetry, using different methods, instrumentation and personnel. If our total uncertainty is plus or minus 5 W at 3 sigma and we detect a 100 W effect, I don’t think any peer reviewer is going to be able to claim systematic error, especially with multiple calibration runs and multiple active runs chosen randomly and repeated 4-6 times.


    The same randomization method (choosing calibration or active reactor) over 4-6 runs on at least two different calorimeters will likely bring us over 5 sigmas of statistical significance.


    The reason I also chose the incubator method is simply its simplicity: a single measurement reduces the chances for error. The only error could be from RTD resistance or thermocouple voltage or from insufficient mixing. Since we are using class A thermocouples the maximum error is 5 K at 1000 K, or 0.5%. Mixing can be confirmed by multiple probes at different physical locations in the incubator.


    If a 200 W calibration input gives us 1000 K and we can achieve 1000 K with only 100 W input, then the maximum true input can be 101 W and, at maximum error, the true temperature can be 995 K; would you still claim systematic error? (A worked version of this arithmetic appears below, after this post.) Alternatively one could fix the input power and measure the equilibrium temperature with and without an active reactor.


    After all the above is said and done, I’m not sure how any serious reviewer could claim systematic error. The incubator calorimeter was specifically designed to eliminate all types of errors to the extent possible.


    If I programmed the system to choose a random input, equilibrate, measure, and then randomly choose a new input, and we could detect each such random input to within a few watts, would you still feel that we could not properly detect additional heat when the active reactor is placed in the system?


    Would not the sudden doubling of input power be enough to convince you in such a system?
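    A worked version of the error-budget arithmetic above (the figures echo the illustrative numbers in this post; nothing here is measured data):

    # Incubator comparison: worst case against the claim of excess heat.
    CAL_INPUT_W = 200.0        # calibration input that reaches the target temperature
    ACTIVE_INPUT_W = 100.0     # input with the active reactor at the same reading
    INPUT_ERR_W = 1.0          # stated worst-case input measurement error
    TEMP_ERR_FRACTION = 0.005  # ~0.5 % sensor tolerance at 1000 K

    worst_active_input = ACTIVE_INPUT_W + INPUT_ERR_W       # 101 W
    worst_temp_shortfall_k = 1000.0 * TEMP_ERR_FRACTION     # 5 K out of 1000 K
    unexplained_w = CAL_INPUT_W - worst_active_input        # ~99 W
    print(f"worst case: {unexplained_w:.0f} W unexplained, "
          f"temperature reading at most {worst_temp_shortfall_k:.0f} K high")

    # Flow-calorimeter significance: +/- 5 W quoted at 3 sigma, 100 W expected effect.
    sigma_w = 5.0 / 3.0
    effect_w = 100.0
    print(f"a {effect_w:.0f} W effect is about {effect_w / sigma_w:.0f} sigma per run")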

  • You choose to ignore the most important component of heat flow. So your model is wrong. Stefan Boltzmann radiation is increasingly the major form of heat transfer as the system warms.

    It is legitimate to say that the model I am using is inadequate. The problem is that this is always true, to some extent, for any model. We always neglect some things when we create models. In some ways that is what models are for ... to leave out what is negligible so as to leave the simplest picture of what is important. The trick in model making is to start from the simplest picture and then add more elements only when it is established that they are needed.


    The real question here, then, is whether radiative transfer is important in relation to convective cooling over the range of temperatures your system works at. I have been considering this since last Spring. It is dead easy to add a radiative component to the model I am using and that is what I did at that time. Adding this additional cooling mechanism did not change the qualitative phenomena I have mentioned -- i.e., the presence of thermal runaway, hysteresis, inflection points in the heating timecourse -- unless it is much stronger than it appears from the few steady-state temperature vs input power plots you have posted.


    But this is not very satisfactory. I am just eyeballing your plots and making guesstimates. This is why, beginning last Spring, I began asking if you were willing to provide more data. It was exactly so that I could add radiative transfer to the mix in a reasonable way.
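    As a footnote, the radiative component mentioned above amounts to a one-line change to the loss term of the earlier lumped-model sketch (emissivity and area are again assumed numbers, not taken from any posted plot):

    # Extension of the earlier sketch: same illustrative parameters, new loss law.
    import math

    T_AMB, K_LOSS, A_LENR, T_ACT = 293.0, 0.5, 0.5, 100.0   # assumed, as before
    SIGMA = 5.670e-8          # W/(m^2*K^4), Stefan-Boltzmann constant
    EPS_AREA = 0.8 * 0.05     # assumed emissivity times radiating area (m^2)

    def net_power_radiative(t, p_in):
        """Net heating rate with both Newtonian and radiative losses."""
        source = A_LENR * math.exp((t - T_AMB) / T_ACT)
        losses = K_LOSS * (t - T_AMB) + EPS_AREA * SIGMA * (t**4 - T_AMB**4)
        return p_in + source - losses

    Because the assumed source still grows faster than the fourth power of temperature, the stable/unstable pair of steady states and the eventual runaway point survive; the radiative term just pushes the runaway threshold to a higher input power, which matches the qualitative statement above.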
