MIZUNO REPLICATION AND MATERIALS ONLY

  • So I would prefer the graph with the T^4 radiative cooling to have reactor temperature on the vertical axis, giving the thermal Q of the system as the slope of the curve, dT/dP. If I understand Bruce's examples correctly, the red trace curving upward does show decreasing dT/dP. The vertical axis is labeled dQ/dt in the first two graphs, which may be the equivalent of input power P.

    I am using "Q" to denote heat energy (in joules, say). dQ/dt is, therefore, power (in joules per second, i.e. watts) and the plots I made show the power of heat dissipation or generation in the reactor as a function of temperature. Since dQ = mc*dT, I really could have just labelled the vertical axis on the power plots "dT/dt" and it wouldn't have made any difference from the standpoint of the qualitative analysis I have undertaken.


    I treat internally generated (LENR) heat as determined by reactor temperature, T. That is, for any value of T, I assume that there is a unique rate of internal heat generation produced by the LENR mechanism -- just like a reaction rate. That is why I put rate on the vertical axis and T on the horizontal axis. It is Daniel_G's claim that the rate of LENR heat generation increases exponentially with T. The red lines in my plot show this. To fit cooling into the same plots, the blue lines show the rate of heat dispersal as likewise a function of temperature (*but see footnote).


    Overall then, I am assuming that the temperature dynamics of the reactor are described by an autonomous nonlinear 1-dimensional ODE with state variable T. So we have dT/dt = f(T, parameters) where f() is a nonlinear function of T given by heating rate minus cooling rate (the red line minus the blue line).
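    For concreteness, here is a minimal numerical sketch of that 1-D ODE. All parameter values (A, B, K_COOL, C_HEAT, etc.) are made up purely to illustrate the model shape, not fitted to any real reactor:

```python
import math

# Hypothetical parameters, chosen only to illustrate the model shape.
P_EXT = 50.0          # externally controlled heating, W (the control parameter)
A, B = 1e-3, 0.05     # amplitude and steepness of the assumed exponential LENR term
K_COOL = 0.5          # linear cooling coefficient, W/K
T_AMB = 20.0          # ambient temperature, deg C
C_HEAT = 100.0        # heat capacity of the reactor, J/K

def f(T, p_ext=P_EXT):
    """dT/dt: (external + temperature-activated heating - cooling) / heat capacity."""
    heating = p_ext + A * math.exp(B * T)   # "red line": rises exponentially with T
    cooling = K_COOL * (T - T_AMB)          # "blue line": simple Newtonian cooling here
    return (heating - cooling) / C_HEAT

def integrate(T0, t_end, dt=0.1, p_ext=P_EXT):
    """Forward-Euler integration of the autonomous ODE dT/dt = f(T)."""
    T = T0
    for _ in range(int(t_end / dt)):
        T += dt * f(T, p_ext)
    return T
```

    With these particular numbers the trajectory settles onto a stable equilibrium where the red and blue lines cross; pushing P_EXT high enough removes that crossing and the temperature runs away.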


    One of the parameters of f() is the amplitude of externally controlled heating (which is independent of temperature). I use this as the control parameter in the bifurcation analysis of the system. The bottom panel in my figures shows a so-called bifurcation diagram with the control parameter (external heating) along the x-axis and the dynamic variable (T) on the vertical axis. The general set of analysis ideas and techniques here is taken from dynamical systems theory which is often used to analyze excitable systems -- which is what a reactor supplied with a temperature-activated source of heating is.
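    A crude numerical way to trace such a bifurcation diagram: at each value of the external-heating parameter, scan T for sign changes of the net power (heating minus cooling). The exponential-heating and linear-cooling terms and all constants below are illustrative assumptions only:

```python
import math

# Assumed terms, for illustration: exponential activation vs. linear cooling.
A, B = 1e-3, 0.05        # LENR-like activation: A*exp(B*T) watts
K_COOL, T_AMB = 0.5, 20.0

def net_power(T, p_ext):
    return p_ext + A * math.exp(B * T) - K_COOL * (T - T_AMB)

def equilibria(p_ext, t_lo=0.0, t_hi=400.0, step=0.01):
    """Scan T for sign changes of net_power: each one is an equilibrium point.
    Stable where net_power crosses from + to - (cooling wins above the point)."""
    points = []
    T = t_lo
    prev = net_power(T, p_ext)
    while T < t_hi:
        T += step
        cur = net_power(T, p_ext)
        if prev * cur < 0:                 # sign change bracketed
            points.append((T, prev > 0))   # (temperature, is_stable)
        prev = cur
    return points

# Sweeping the control parameter: for these constants there are two equilibria
# (one stable, one unstable) at low external heating, and none above a threshold,
# beyond which lies thermal escape.
for p in (10.0, 50.0, 150.0):
    print(p, equilibria(p))
```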


    I hope this is clear. I am not sure how it tracks with the methods you use for analyzing calorimetric data.


    * [I should make clear that, purely to aid insight, I have portrayed the rate of cooling in my plots with the opposite sign that it actually has. This is so that equilibrium points, where heating and cooling rates just balance, can be spotted as crossings of the red and blue lines. In reality, the blue lines in the plots should be downwardly sloping and lie mostly below the x axis. I perhaps didn't explain this well before].

  • Mizuno's forced-air calorimeter is so different from the oven-based system that Daniel_G has been using that I haven't yet tried to model it. On the other hand, something that I pointed out to Daniel_G is relevant here. I pointed out that the ultimate model of his oven-based system is the system itself. One could replace the reactor mesh with a computer-controlled heat source and make sure that the rate of heat injected into the reactor via this internal heater activates exponentially with reactor temperature (so ... one would need a thermocouple in the reactor and to control injected heat on the basis of the sensed temperature). One would then not need to make assumptions about different types of cooling etc. because they are real and playing out against internal heating with LENR-like properties as determined by the operator. I would then expect all properties I have been enumerating to pop up and Daniel_G could, for example, find out the location of the threshold beyond which lies thermal escape to meltdown in his system. This is something I would have thought one would want to know in a system that is advertised as being prepared for commercial operation.
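    A sketch of what that control loop might look like. The activation law, the safety clamp, and all names and values are hypothetical:

```python
import math

# Replace the mesh with a controlled heater slaved to the sensed reactor
# temperature. Constants are illustrative assumptions, not measured values.
A, B = 0.01, 0.05    # assumed activation law: P(T) = A*exp(B*T) watts
P_MAX = 500.0        # safety clamp on injected power, W

def lenr_like_power(t_sensed):
    """Power command for the internal heater, exponential in sensed temperature."""
    return min(A * math.exp(B * t_sensed), P_MAX)

def control_step(read_thermocouple, set_heater_power):
    """One pass of the loop: sense T, command the heater accordingly."""
    T = read_thermocouple()
    p = lenr_like_power(T)
    set_heater_power(p)
    return T, p
```

    Run at whatever cadence the expected temperature excursions demand; the clamp marks the point at which the operator would rather not find out where meltdown lies.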


    In your forced-air calorimeter I would think that you could play the same game. Tuck in a source of internal heat generation that is temperature sensitive and see if you get what Mizuno and Rothwell claim. This could be roughly approximated by your lights if you switch them on on a temperature-sensitive basis. You would have to have some sort of geometric progression of lights ... i.e., 1 light at first, then double to 2 lights, then 4, and so on, as temperature rises.
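    The doubling-lights idea amounts to a staircase approximation of exponential heating. A sketch, where the lamp wattage and switching thresholds are assumptions for illustration:

```python
# Discrete approximation with switched lights: double the number of lit lamps at
# fixed temperature steps, giving a staircase version of exponential heating.
LAMP_W = 40.0   # assumed power of one lamp, W
T0 = 100.0      # temperature at which the first lamp switches on, deg C
STEP = 25.0     # every further STEP degrees doubles the lamp count

def lamps_on(T):
    """Number of lamps lit at temperature T: 0 below T0, then 1, 2, 4, 8, ..."""
    if T < T0:
        return 0
    return 2 ** int((T - T0) // STEP)

def injected_power(T):
    return LAMP_W * lamps_on(T)
```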

  • Radiative cooling becomes the dominant mechanism above 700 °C. So his expectations are closely tied to the kind of reactor we are talking about.

    Yes, in the R20 case conduction now plays a role too; in the case of the Glowstick experiments only radiative cooling was involved, I think.

    Yes. From looking at some early results Daniel_G released, the cooling of his oven/reactor system is almost linear over 20-200 degrees C. Probably emissivity (of the outside of his oven) is low and so radiative cooling doesn't play much of a role there.
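    To see roughly where that crossover sits, one can compare a Stefan-Boltzmann term against a linear convective term. The area, emissivity, and convection coefficient below are illustrative assumptions, not measurements of any actual reactor:

```python
# Radiative (~T^4) vs. convective (~linear) cooling for an illustrative surface.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)
AREA = 0.05        # m^2, hypothetical reactor surface area
EPS = 0.3          # assumed low emissivity, as suggested for the oven exterior
H_CONV = 10.0      # W / (m^2 K), typical natural-convection coefficient
T_AMB = 293.15     # K (20 deg C)

def radiative_w(t_c):
    t = t_c + 273.15
    return EPS * SIGMA * AREA * (t**4 - T_AMB**4)

def convective_w(t_c):
    return H_CONV * AREA * (t_c - 20.0)
```

    With these numbers convection still dominates at 200 °C while radiation dominates well before 700 °C, consistent with radiative cooling mattering little over the 20-200 °C range.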

  • Mizuno's forced-air calorimeter is so different from the oven-based system that Daniel_G has been using that I haven't yet tried to model it. [...] You would have to have some sort of geometric progression of lights ... i.e., 1 light at first, then double to 2 lights, then 4, and so on, as temperature rises.

    Perhaps a MoSiC heater element is a closer analogue.


    Some neat stuff in here: Kanthal silicon carbide catalog for oven design.

  • Perhaps a MoSiC heater element is a closer analogue.


    Some neat stuff in here: Kanthal silicon carbide catalog for oven design.

    You bet!


    Any heating element that is good up to highish temperatures and capable of a sufficient rate of heat generation would do. Then hook it up to a controllable current source and you have a potential LENR analogue. You don't even need an elaborate control system if the expected speed of temperature excursions is not too fast. A human with a lookup table would do.
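    The lookup-table approach could be as simple as the following. The table entries are hypothetical, shaped only to mimic exponential activation:

```python
import bisect

# A human with a lookup table: command the current for the highest tabulated
# temperature not above the sensed one. Table values are hypothetical.
TABLE_T = [100, 150, 200, 250, 300]   # deg C
TABLE_I = [0.5, 1.0, 2.0, 4.0, 8.0]   # amps to command at each temperature

def current_for(t_c):
    """Step-wise lookup: heater off below the first entry."""
    i = bisect.bisect_right(TABLE_T, t_c) - 1
    if i < 0:
        return 0.0
    return TABLE_I[i]
```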

  • I find your analysis here somewhat over-complicated and unnecessary. Surely you can accept that LENRs are essentially all-or-nothing, essentially digital, signals? All the calorimetry does is convert such digital signals to analogue signals at equilibrium. So, scientifically, all we have to do is compare analogue heat signals from an active reactor (containing all the ingredients: Ni mesh, D or H2O) against those from an exact replica (no Ni mesh, no D or H2O) as control. Then repeat the same measurements (the 'boring' part) until we have statistically significant results!

    My analysis do you mean? Or Paradigmnoia's?

  • Overall then, I am assuming that the temperature dynamics of the reactor are described by an autonomous nonlinear 1-dimensional ODE with state variable T. So we have dT/dt = f(T, parameters) where f() is a nonlinear function of T given by heating rate minus cooling rate (the red line minus the blue line).

    Thanks for your clear explanation. As an experimentalist, I want to test a model against data with known accuracy, for example a set of calibrations. This eliminates the unknown LENR part from f(). By starting with such a test, the usefulness (or not) of a model is shown for a specific experiment. Once that is done, the model can be used to predict the effect of a temperature-dependent LENR reaction.


    A model lacking such calibration can still be used to explore the parameter space, but it will yield hypotheses, not predictions. It's still a useful tool, but the context should be made clear to avoid unwarranted criticism or conclusions.

  • A model lacking such calibration can still be used to explore the parameter space, but it will yield hypotheses, not predictions. It's still a useful tool, but the context should be made clear to avoid unwarranted criticism or conclusions.

    I partly agree. I would only strike out the word "still" in the second sentence.


    I understand your approach. You would follow a subtractive paradigm where the characteristics of an LENR-absent system are sufficiently closely characterized that when the claimed LENR mechanism is added its properties can be deduced as observed minus baseline. That is excellent for proving that sufficiently large LENR phenomena either exist or don't exist in the system being examined, but it doesn't address other issues of research and engineering design. Suppose you detect an LENR signal -- then what? Well, I suppose one would start to wonder about exactly the issues I have been exploring ... operating regimes, thresholds, meltdown and so on. Conversely, suppose you don't find an LENR signal that rises above uncertainty. Then one would start to wonder whether there are other parts of parameter space that one would rather be in so as to create bigger, more unmistakable, easier-to-research signals (hence my fascination with thermal escape or meltdown).


    I contend that there is value to making models right now. One can explore the existence of the diverse behaviour regimes that crop up in a thermal mass equipped with temperature-activated internal heating without too closely specifying exact parameter values. The behavioural regimes fit together in characteristic ways and this insight suggests particular experiments or engineering modifications. On the other hand, if there is no temperature-activated heating after all, some of the behaviours won't exist and this lack could be diagnostic too. This is a strong way of moving forward and is the way a lot of science is done anyway.

  • I meant the reverse resistance regime of the heater element itself.


    A nice plot of the curve here:

    https://www.practicalcontrol.c…icon_carbide_control.html

    How would you use it? I would have thought that you would want something with increasing resistance over the temperature interval of interest. So, for example, as temperature rises, I^2 R dissipation would increase (if a constant current source is used).
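    The point about constant-current drive can be sketched in a couple of lines: dissipation is I^2 * R(T), so heat output rises with temperature whenever resistance does, giving the positive-feedback character wanted here. The linear R(T) and its constants below are made up for illustration:

```python
# Constant current source driving an element whose resistance rises with
# temperature: dissipation I^2 * R(T) then also rises with T.
I_SRC = 5.0             # amps, constant current source (assumed)
R0, ALPHA = 2.0, 0.004  # ohms at 0 deg C and per-degree coefficient (hypothetical)

def resistance(t_c):
    return R0 * (1.0 + ALPHA * t_c)

def dissipation(t_c):
    return I_SRC**2 * resistance(t_c)
```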

  • I understand your approach. You would follow a subtractive paradigm where the characteristics of an LENR-absent system are sufficiently closely characterized that when the claimed LENR mechanism is added its properties can be deduced as observed minus baseline.

    Not quite, but close. What I suggest is piecewise building of a model schema. Make sure the basic structure is solid by testing it against known parameters. In particular, there are some places in the thermodynamics where values of constants measured empirically can be inserted, to see if the predicted behavior matches the system under study. For example, emissivity can be set to any arbitrary value and the resulting thermal behavior calculated, but the result may not be realistic. The Lugano study of Rossi's device is a good example of this kind of modelling flaw.
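    A quick sketch of why an arbitrary emissivity gives unrealistic results: when temperature is inferred from measured radiance via S = eps * sigma * T^4, the answer depends strongly on the assumed eps. The radiance value below is hypothetical:

```python
# Inverting Stefan-Boltzmann: the inferred surface temperature scales as
# eps**(-1/4), so assuming too low an emissivity inflates the temperature.
SIGMA = 5.670e-8   # W / (m^2 K^4)

def inferred_temp_k(radiance_w_m2, eps):
    """Temperature implied by a measured radiance, given an assumed emissivity."""
    return (radiance_w_m2 / (eps * SIGMA)) ** 0.25

S = 2.0e4   # W/m^2, a hypothetical measured radiance
```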


    Anyone familiar with the concept of positive feedback can understand the possibility of a LENR reactor meltdown. To model such a system behavior it's useful to explore the parameter sensitivities, and I think that is what you are working on. In electronic systems, such investigation is sometimes done in order to specify component tolerances, especially where positive feedback or metastable behavior is involved.

  • This is a strong way of moving forward and is the way a lot of science is done anyway.

    Perhaps such modelling can move forward on a "strong", BruceH-initiated "modelling" thread


    rather than piggybacking on the Mizuno replication thread.


    I look forward to Bruce's strong sigmoid presentation in California.

    Maybe he has a few sponsors...?