
Posts by Birger

    I find the above a bit ... strange. What would be in a 1:1 ratio is the electron : proton ratio. Today it is thought that the proton : neutron ratio is about 7:1. That really has nothing to do with hydrinos as dark matter, or with confirming the Standard Model.

    Today, yes. At the beginning of nucleosynthesis, no.

    From https://brilliantlightpower.com/ciht-cell/: “Our new results add to the long-standing discredit of cold fusion, this mechanism is disproved by the lack of any evidence of a nuclear reaction.” This is an interesting statement, indicating that Mills has abandoned his earlier theories. It is startling, for two reasons:


    a) as most of us here know, there are a number of published scientific reports showing both 'cold' transmutations of elements and energetic photons (even though their energy is usually much lower than in high-energy transmutations). Are they all wrong? Dismissing all that work so easily might be understandable from a business perspective, but perhaps not so from the perspective of the hard-working LENR scientists.


    b) ultra-dense hydrogen (UDH), or hydrinos, as Mills calls them, has very short proton-proton distances, and the strong electrostatic shielding by the low-orbital electrons greatly increases the likelihood of nuclear reactions. So why dismiss that option, even though Mills does not seem to have succeeded in showing it in his own work?


    The BLP business presentation, https://brilliantlightpower.co…Overview_Presentation.pdf, is not easily understandable for an investor. It does not have the sharp, professional, and pedagogical appearance that might be expected of something supposed to help raise millions of dollars (I assume it is). My impression is that BLP throws a lot of seed around in the hope that some of it will start to grow (or raise money). It is an indication that they do NOT have a killer application in the pipeline. The big problem, as I see it, is that at this stage this is technology suited for research labs, not industry. It is premature to commercialize it, and that is a problem for BLP, but it might explain their approach.


    I believe I am not the only one whose first impression was that the nanofibers shown in the image were actually a UDH-composed material (yes, I know, GaO is written in the image, but not in the text, where the material is called a 'hydrino compound'). UDH might be regarded as a trace substance. I did not find any data about it.


    This document, https://brilliantlightpower.co…alytical_Presentation.pdf, is more interesting since it contains more information. Some of the results indicating (not proving) the existence of UDH are quite interesting, not least the negative peaks in GC (TCD), which are both extraordinary and easily interpreted. Gas chromatography is relatively inexpensive. If the results can be repeated by critical independent labs, they could pave the way for a broader acceptance of the existence of UDH.


    The idea that dark matter actually is UDH, proposed by both Holmlid and Mills, is contradicted by the neutron-proton ratio of about 1:1 at the very beginning of the creation of the Universe (according to the Standard Model), and by the measured and calculated abundances of H and He in space. These abundances confirm the Standard Model, leaving no room for about 7 times more hydrogen-based dark matter than ordinary hydrogen. Where would all those extra protons for UDH come from? The initial neutron-proton ratio would then have to be 1:7 instead of 1:1. Quite a big difference. I don't know of any astronomers proposing UDH as a dark matter candidate.
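    A rough back-of-envelope check of this argument (my own arithmetic, not from Holmlid or Mills): with the standard ~1:7 neutron:proton ratio at the onset of nucleosynthesis (it starts near 1:1 and falls as the Universe cools), and assuming essentially all neutrons end up in helium-4, the predicted helium mass fraction matches the observed ~25%. Multiplying the proton count by 7 to supply UDH dark matter would push that fraction far below observation:

```python
# Back-of-envelope BBN check (my own arithmetic, not from the cited papers).
# Assumes essentially all neutrons are bound into He-4.

def helium_mass_fraction(n_to_p):
    """Primordial He-4 mass fraction: Y = 2(n/p) / (1 + n/p)."""
    return 2 * n_to_p / (1 + n_to_p)

print(helium_mass_fraction(1 / 7))   # ~0.25, close to the observed value
print(helium_mass_fraction(1 / 49))  # ~0.04, if there were 7x more protons
```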


    The validation reports have found excess heat in the range 2 – 4, measured in one-shot, water-bath-cooled experiments. We know how difficult it can be to measure energy from dynamic currents. Most of the validations have been made in BLP's own facilities. From the 100-hour runs, which should be the most interesting material from an energy investor's perspective, no data are published in the reports, as far as I have seen. Omitting high excess heat results is unlikely, since such data are what speaks to the heart (or brain) of an investor. Therefore, my conclusion is that the longer (continuous) runs have not produced any impressive excess heat, if any.

    After the gaps form, the material needs to be stored in D2 gas to prevent complete loss of d. Complete loss would happen rapidly, thereby making the material inactive unless it were again subjected to gas discharge.


    When heated, the overpressure of D2 needs to be as high as practical. Impurities in the gas are not important, except that the H content of the gas needs to be low. Any H will dilute the D and thereby reduce the amount of power resulting from each fusion process. The D2 gas is necessary to keep some d in the structure. The heating should be done slowly, in stages, while excess power is measured. The excess power, if real, will be found to follow a straight line when log power is plotted against 1/T. Failure to see this temperature effect is evidence that LENR did not occur.
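    A minimal sketch of that linearity test, with invented placeholder numbers (the temperatures and powers below are not measured values): fit ln(P) against 1/T and check how straight the result is.

```python
# Minimal sketch of the Arrhenius check described above: excess power
# should fall on a straight line when ln(P) is plotted against 1/T.
# All data points here are invented placeholders, not measurements.
import numpy as np

T = np.array([500.0, 550.0, 600.0, 650.0])  # wall temperature, K (hypothetical)
P = np.array([2.0, 5.1, 11.8, 25.0])        # excess power, W (hypothetical)

slope, intercept = np.polyfit(1.0 / T, np.log(P), 1)
residuals = np.log(P) - (slope / T + intercept)
print(f"slope = {slope:.0f} K  (slope ~ -Ea/k_B in an Arrhenius model)")
print(f"max residual = {abs(residuals).max():.3f} (small => Arrhenius-like)")
```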

    If we consider the nano-gap hypothesis correct, we will have two primary cases, as I see it:


    a) the 'elastic' case, where low-pressure hydrogen enters existing cracks and reacts without significantly propagating them. When the reaction products have diffused out of the solid, the cracks are available for new reactions. Perhaps 'elastic' is not the right term, but it is the one that comes to mind, based on elastic (non-destructive) stresses in the solid.


    b) the destructive case, where high-pressure hydrogen makes the cracks propagate through the fuel material, eventually rendering the solid passive (or exhausted).

    From the R19 paper: “As the reactant temperature rises, the amount of deuterium in the reactant decreases. This causes the exothermic reaction to decrease and the reactant temperature to fall. When the temperature decreases, more deuterium enters the reactant again, and excess heat increases.”


    a) If this is correct and the reaction is driven by temperature, the reaction should stabilize at some optimal temperature. Heating (slowly) above that temperature will cause the COP to decrease. We will have a Peak COP Temperature (PCOPT).


    b) Reducing the heat loss enough will, for any input power within a broad range, make the reaction temperature exceed this PCOPT. If the power is then switched off and the COP is high enough, the reaction should become SELF-SUSTAINING and stabilize near the PCOPT (see the toy model sketched below).
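    The toy model below illustrates the idea, nothing more. It assumes (my assumption, not anything from the papers) a reaction power that peaks at some temperature T_peak and a Newtonian heat loss; with the input power off, a reactor started above the peak settles at the temperature where the reaction power balances the losses.

```python
# Toy model of the PCOPT idea: reaction power peaks at T_peak (assumed
# bell shape); heat loss is Newtonian. All numbers are invented.
# Euler integration of C * dT/dt = P_rx(T) - k * (T - T_amb), input off.
import math

T_amb, T_peak = 300.0, 650.0  # K, hypothetical
C, k = 2000.0, 0.5            # heat capacity J/K, loss coefficient W/K (guessed)

def p_reaction(T):
    """Assumed bell-shaped reaction power with its maximum at T_peak."""
    return 400.0 * math.exp(-((T - T_peak) / 120.0) ** 2)

T, dt = 700.0, 1.0            # start above T_peak, heater switched off
for _ in range(200_000):
    T += dt * (p_reaction(T) - k * (T - T_amb)) / C
print(f"settles near T = {T:.0f} K")  # where P_rx(T) = k * (T - T_amb)
```

    In this toy, the stable point sits on the falling side of the reaction power curve, slightly above the peak, which is consistent with the self-stabilization described in b).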



    Excess energy vs. input power (current) seems to be a close-to-linear relation (table 1 and fig. 4 of the R19 paper, when plotting peak power out against power in). Additionally, 300 W in with up to 3 kW out does not give a particularly higher COP than 50 W in with 300 W out (R20). The ordinary exponential temperature dependence of reaction rates (the Arrhenius equation) does not seem to apply in this case. This indicates that temperature is NOT the main driving force of the reaction. But if magnetic stimulation is, that might explain some other anomalies the presentations expose:

    1) When the heating coil is moved from the outside to the inside, the COP rises 10-fold. This shift in position means that the electromagnetic shielding of the tube wall is bypassed and that the nickel mesh is subject to a substantially stronger (even though still relatively weak) magnetic influence.

    2) The cooling coils' killer effect on the reaction might possibly be explained by the coils interacting with, and weakening, the magnetic field created by the heater coil (positioned inside the cooling coils) at the inside of the reactor wall.


    3) The secrecy about the design of the heater in the reaction chamber. If the coil geometry, and possibly other factors such as current pulses, is essential to reaching COP > 5, the secrecy is more understandable, but it does not benefit highly successful replications.


    4) A self-sustaining reaction has not been demonstrated. Switching off the power seems to stop the reaction.

    I don't like to pick on details, but it is part of the scientific process, as this discussion is, or should be. I hope my main objections will prove unfounded and that Mizuno's efforts will be rewarded, but the more I have looked into the data, the more questions have been raised. The results seemed robust after my first reading.


    The exponential excess heat/temperature curve in fig. 8 turns into a straight line if the two (and only two) data points at zero power input are omitted. And they should be. With 0 W input and a reactor wall temperature of 23 C, we should not expect any measurable output. The 2 W presented should be considered within the error limits.
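    A sketch of that re-analysis, with placeholder numbers (I have not digitized the actual figure): drop the two zero-power points and fit the remainder with a straight line.

```python
# Sketch of the fig. 8 re-analysis suggested above. The values are
# placeholders, not data digitized from the actual figure.
import numpy as np

temp_C = np.array([23.0, 23.0, 120.0, 200.0, 280.0, 360.0])  # hypothetical
excess_W = np.array([2.0, 2.0, 40.0, 85.0, 130.0, 172.0])    # hypothetical

mask = temp_C > 30.0                   # drop the two zero-power points at 23 C
slope, intercept = np.polyfit(temp_C[mask], excess_W[mask], 1)
pred = slope * temp_C[mask] + intercept
print(f"linear fit: {slope:.2f} W/C, max deviation "
      f"{abs(excess_W[mask] - pred).max():.1f} W")
```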


    In my experience (and according to the common energy dependencies), a straight line is not the expected behaviour of a temperature-dependent reaction. This does not mean that the presented data must be erroneous, but it is a warning sign.

    It would be good to have details of the in situ heater geometry. It is a key component in any replication experiment. The distance between the windings (or whatever it is) and the mesh, and the length of the folded heater, are of particular interest. I am surprised it is omitted from the paper.

    I think the way to go without air calorimetry is to switch off the power in a hot reactor with a dummy load and take cooling curves, many, many cooling curves from all kinds of temperatures. They should, of course, be classic Newtonian curves. Then do it in a properly fuelled reactor and look for a difference.
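    A minimal sketch of such a comparison, with synthetic data (none of the numbers below come from the paper): fit the Newtonian law to a dummy run, then look for systematic late-time residuals in the fuelled run.

```python
# Sketch of the cooling-curve comparison suggested above: fit the
# Newtonian law T(t) = T_amb + dT0 * exp(-t / tau) to a dummy run.
# The data below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def newtonian(t, T_amb, dT0, tau):
    return T_amb + dT0 * np.exp(-t / tau)

t = np.linspace(0.0, 3600.0, 50)            # seconds after power-off
T_dummy = newtonian(t, 23.0, 300.0, 900.0)  # synthetic dummy-run curve
(T_amb, dT0, tau), _ = curve_fit(newtonian, t, T_dummy, p0=[20.0, 250.0, 1000.0])
print(f"fit: T_amb = {T_amb:.1f} C, dT0 = {dT0:.0f} C, tau = {tau:.0f} s")
# In a fuelled run, a systematically slower decay than the fitted curve
# would hint at heat still being generated after power-off.
```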


    And if the power is switched off, that eliminates many of the arguments about 'artifacts'.

    Good idea, but it requires that the reaction continue long enough after the heater is turned off. The heat capacity of the steel tube could also make the results less convincing if we do not have a self-sustaining reaction, as in this case. A more direct measurement of the mesh temperature with an IR camera, through a quartz window, is a further option.
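    To put a rough number on that thermal inertia (the tube mass and loss coefficient below are my own guesses, not values from the paper):

```python
# Back-of-envelope estimate of the steel tube's thermal time constant.
# Mass and loss coefficient are guesses, not values from the paper.
mass_kg = 20.0   # assumed reactor tube mass
c_steel = 500.0  # J/(kg K), typical specific heat of steel
k_loss = 1.0     # W/K, the heat loss level estimated elsewhere in this thread
tau_h = mass_kg * c_steel / k_loss / 3600.0
print(f"thermal time constant ~ {tau_h:.1f} h")  # ~2.8 h: a slow response
```

    A time constant of hours would smear any post-shutdown excess heat into the ordinary cooling curve, which is the concern raised above.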

    Not as far as I know.

    If so, I can't see any logical explanation for why a thin layer of insulation around the reactor would not provide a reactor wall temperature as high as, or higher than, that of a free-standing unit. If you have a layer with high thermal resistance, the method used to cool its outer surface is of minor importance, whether it is solid copper, air convection, or water-cooled tubes.


    If not, if water-cooled coils really do suppress the reaction even when the reactor wall temperature is the same as in the presented experiments, that mechanism should be understood. It would say something important about the reaction, or about the measurements.


    Would it be just as bad using insulation but no coils?


    Would the measured COP be affected if the air temperature inside the enclosure were raised (by lowering the fan speed)?


    I agree with you that in many cases the calorimetric method chosen can have an effect on the results, increasing or suppressing any measured excess heat. However, in Mizuno's experiment, the power levels and temperatures are perfect for water calorimetry. If using it kills the reaction, that is a problem. In my first post I suggested using both the present air cooling system and a second water calorimetry circuit. Now it seems even more justified, to exclude the existence of a systematic gross error in Mizuno's air calorimetry.



    Not as far as I know.

    That is wrong. The calorimeter is an integral part of the experiment. It always affects the experiment, as described in the paper. This has been seen in many other cold fusion experiments, and conventional chemistry experiments.


    The effect of heat removal from water-filled tubes is much greater than this difference.


    If he had other reasons, we would have disclosed them.

    Is it correct, according to you, that there is another parameter besides the reactor wall temperature that matters when water cooling hampers the reaction?

    No doubt a number of replication attempts will be made, but there are many ways to mess up the experiments, even for a skilled experimenter. Making the reactor work is just part of it. The calorimetry is essential. The calorimetric data from a good replication should be easy reading even for sceptical engineers and physicists without calorimetric experience.


    Just noted in Mizuno and Rothwell's paper: “In previous experiments we used water-flow calorimeters with cooling coils up against the reactor walls, or cooling coils WITH INSULATION between the coil and the wall. Both types removed heat too quickly, reducing or eliminating the reaction.”


    This raises questions.


    If genuine (we don't know for sure yet), the high measured COP should have nothing to do with which calorimetric method is used. The triggering energy comes from the central heater, so what happens on the outside of the reactor should not have much effect on the output (if any), as long as the reactor wall temperature is kept within the operational limits.


    The COP at 230 C is about the same as at 380 C (1.4 – 1.5), according to table 1. Thus, the reaction does NOT seem to be very sensitive to the reactor wall temperature, which makes me wonder about the quote.


    Based on the data, we can estimate the heat loss from the reactor at about 1 W/K.


    From heat loss coefficient tables: the total heat loss coefficient (radiation + convection) at 300 C should be about 7 W/(K·m²). Thus, 300 W from a reactor wall at uniform temperature would require a corresponding effective reactor length of 0.4 m (3.14 × 0.114 × 0.4 × 7 ≈ 1 W/K). This sounds reasonable, as the reactor wall temperature is not uniform.
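    The arithmetic checks out:

```python
# Checking the estimate above: loss coefficient of a 0.114 m diameter
# cylinder with 0.4 m effective length at ~7 W/(K·m²) combined
# radiation + convection.
import math

d, L, h = 0.114, 0.4, 7.0     # diameter m, length m, W/(K·m²), from the post
area = math.pi * d * L        # lateral surface area, m²
print(f"{area * h:.2f} W/K")  # ~1.0 W/K, matching the 1 W/K estimate
```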


    An insulation example: a 10 mm thick insulation layer eliminating convection losses should have a heat loss coefficient of about 5 W/(K·m²) when the cooling coils are in contact with its surface (and lower if they are not). This is smaller than for the free-standing reactor. The wall temperature should be higher even with such thin insulation plus cooling coils!


    Using only 5 mm of insulation plus a 5 mm air gap to the cooling coil should give a heat loss level similar to that of the free-standing unit.
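    These insulation figures are consistent with ordinary conduction through a mineral-wool-like material; the conductivity below is my assumption, as the post does not name a material:

```python
# Rough conduction estimate behind the insulation figures above. The
# conductivity is an assumption (mineral-wool-like), not from the post.
k_ins = 0.05            # W/(m·K), assumed
U_10mm = k_ins / 0.010  # planar approximation, W/(K·m²)
print(f"10 mm insulation -> U ~ {U_10mm:.0f} W/(K·m²)")  # ~5, below the ~7 of the bare wall
# For the 5 mm insulation + 5 mm air gap case, the gap's thermal
# resistance adds in series, pulling the total back toward ~7 W/(K·m²).
```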


    Using a slightly wider low-cost coil and centimetre-thick insulation seems much simpler than abandoning the water calorimetry Mizuno had been using and building a completely new air flow system.


    Using insulation would also increase the reactor wall temperature, which should be beneficial to the reaction, or would it not?


    Perhaps Mizuno had other reasons for abandoning water calorimetry and NOT using insulation than those disclosed in the paper? It would be interesting to know.

    You assume that what I suggest is meant to convince myself. That is a mistake. It is all about convincing others. Those 180 replications did not pave the way for LENR, did they?


    Now, what we have is a very (in comparison) simple experiment that might be replicated, or might not. If it is replicated by the same 'hundreds of labs' as before, do you expect a different outcome? If so, why? It is extraordinary in most physicists' eyes. That is a fact. You have to make clever people shift their core beliefs. Try to look at it from their point of view. Adding an extra water circuit is not the major cost in the experiment, but it will add credibility to the results. I don't expect Mizuno to do this. Why should he (and you, if 'us' means you and him)? Please don't be so hostile towards suggestions deviating from your ideas. Being creative and curious is the basis of scientific development. Cooperation, not division, is the way.

    Extraordinary claims require extraordinary proof. In this case, I would use what would otherwise be overkill: two independent cooling and measurement systems, the first cooling the reactor with air, the second cooling the air with water. Keeping sufficient pressure to prevent cavitation and a maximum water temperature below 60 C guarantees we are dealing with liquid water, not steam. If both systems show similar levels of excess heat, the proof is good. The air temperatures would be higher than in Mizuno's experiments, but that should not be a problem.
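    For the water loop, the power bookkeeping is straightforward; the flow rate and temperatures below are illustrative, not a proposed specification:

```python
# Sketch of the power measurement in the proposed water loop:
# P = m_dot * c_p * dT. Flow and temperatures are illustrative only.
m_dot = 0.020             # kg/s, assumed water flow
c_p = 4186.0              # J/(kg·K), liquid water
T_in, T_out = 20.0, 55.0  # C, keeping the outlet below the 60 C limit above
print(f"P = {m_dot * c_p * (T_out - T_in):.0f} W")  # ~2930 W at this flow
```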


    The air system would have a single reactor inside a cylindrical insulation shell, with the separation between reactor and insulation optimized to keep recommended heat transfer rates at the chosen air flow rates. In addition, I would use a gold-surfaced IR reflector on the inside of the insulation.


    The experimental protocol would cycle between dummy and active mode, perhaps 3 days each (once a good active mesh has been made, giving at least 100% excess heat, based on Mizuno's results and the preliminary tests that need to be made). The only difference would be that in dummy mode, the mesh is removed. This means there will be a number of mesh handling, degassing, loading, and unloading procedures between dummy and active modes.


    The choice of heater might be critical. I don't understand this one yet. If IR is responsible for the triggering, a high thermal emissivity would be beneficial (a highly oxidized casing).

    The results are indeed impressive. The relative simplicity of the seemingly robust experimental procedure is encouraging. It is difficult to find any flaws that would explain the high COP. Mizuno has been in the game for a long time; there are many hours and good thoughts behind this. However, there is no time to celebrate until credible replications are reported.

    This isotopic data could derive from anything, but since Alan has given it some credibility, I made a few transmutation simulations with the same input data as I used for the Lugano samples. It fit one particular model surprisingly well. If the data is a fraud, it is an elaborate fraud. It is a pity we probably won't get the data validated.