Brillouin Energy Corporation (BEC) updates.

  • Quote

    June 2018 report: ~10 Watts excess power

    December 2018 report: ~50 Watts excess power

    March 2019 report: ~80 Watts excess power

    Excess power is meaningless unless you know what the output/input ratio is. And that is even before we get to replication/verification.

    • Official Post

    Excess power is meaningless unless you know what the output/input ratio is. And that is even before we get to replication/verification.


    That is, I'm afraid to say, nonsense. Do you know what the initial power output of Fermi's 500 tons of graphite and uranium (the Chicago Pile) was? About half a watt. Then after a while they cranked it up to 200 watts. The excess power only makes sense when you compare a test with a control. If you have to account for everything else, where do you stop? The PSU losses, the data-logging computers, the lights, the fans, the air-con, the grid losses, the car you drove to the lab in, and what you ate for breakfast? Where do you stop?

  • The excess power only makes sense when you compare a test with a control. If you have to account for everything else, where do you stop? The PSU losses, the data-logging computers . . .


    Yes. The only reasonable metric is the signal to noise ratio. You have noise even when there is no input power. Input power is not necessarily a major source of noise. Electrolysis input is usually quiet and it can be measured with high precision, so it does not add much noise. Therefore, the input power level does not matter much from an experimental point of view. It would matter with technology.
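    To put rough numbers on that point, here is a minimal sketch (all values are made up for illustration; none come from the Brillouin reports) of how excess power, COP, and signal-to-noise ratio can tell very different stories:

```python
# Illustrative only: made-up numbers, not data from any report in this thread.
# The point: experimentally, what matters is the excess power relative to the
# noise/uncertainty of the calorimetry, not the size of the input power.

def assess(p_in_w, p_out_w, noise_w):
    """Return (excess power, COP, signal-to-noise ratio) for a measurement."""
    excess = p_out_w - p_in_w
    cop = p_out_w / p_in_w if p_in_w > 0 else float("inf")
    snr = excess / noise_w
    return excess, cop, snr

# Case 1: large input, modest COP, but the excess is far above the noise floor.
print(assess(p_in_w=100.0, p_out_w=110.0, noise_w=0.5))  # (10.0, 1.1, 20.0)

# Case 2: tiny input, large COP, but the excess is buried in the noise.
print(assess(p_in_w=0.1, p_out_w=1.0, noise_w=2.0))      # (0.9, 10.0, 0.45)
```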

  • Fermi's 500 tons of graphite and uranium

    U238 generates 0.1 W/tonne through natural decay according to


    http://www.world-nuclear.org/i…ium-how-does-it-work.aspx


    If there were 45 tonnes of U238 in Chicago Pile-1,


    this 'geothermal' heat would amount to 4.5 watts, which is more than the calculated 0.5 W early-December output.


    Maybe Enrico factored that into his excess-heat calculation somewhere so he could celebrate with Chianti.


    The less cautious 200W output 10 days later was much in excess of the geothermal heat.


    https://en.wikipedia.org/wiki/Chicago_Pile-1
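    The arithmetic behind that estimate is easy to check. A minimal sketch, assuming the ~0.1 W/tonne decay-heat figure from the page linked above and an assumed inventory of 45 tonnes of uranium in CP-1:

```python
# Back-of-envelope check of the decay-heat estimate above.
# Assumed inputs: ~0.1 W/tonne natural-decay heat for uranium (the world-nuclear.org
# figure quoted above) and ~45 tonnes of uranium in Chicago Pile-1.

decay_heat_per_tonne_w = 0.1     # W per tonne
uranium_mass_tonnes = 45.0       # assumed inventory

decay_heat_w = decay_heat_per_tonne_w * uranium_mass_tonnes
initial_fission_power_w = 0.5    # reported early-December pile output

print(f"Natural decay heat: {decay_heat_w:.1f} W")                                      # ~4.5 W
print(f"Ratio to initial 0.5 W output: {decay_heat_w / initial_fission_power_w:.0f}x")  # ~9x
```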

  • Yes. The only reasonable metric is the signal to noise ratio. You have noise even when there is no input power. Input power is not necessarily a major source of noise. Electrolysis input is usually quiet and it can be measured with high precision, so it does not add much noise. Therefore, the input power level does not matter much from an experimental point of view. It would matter with technology.


    While that is true, SOT's comment was not ridiculous.


    COP is relevant in LENR experiments where (as is often the case) there is fairly standard calorimetry, with errors not properly quantified but supposed no more than "typical 10% or so". In that case 100% excess looks interesting, whereas 10% excess looks much less interesting.


    Quantifying errors properly below 10% is difficult because the assumptions on which results rest (e.g. calibration or control runs are similar to real runs in all calorimetric variables) can be challenged. And, in addition, low powers present additional problems independent of COP close to 1.
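    A minimal numerical illustration of that point (the 10% error bound and the power levels below are assumptions chosen for the example, not values from any particular paper):

```python
# Why 100% excess looks interesting while 10% excess does not,
# if calorimetry errors are only loosely bounded at ~10%.
# All numbers here are illustrative assumptions.

calorimetry_error_fraction = 0.10   # assumed loose bound on the calorimetry
p_in_w = 10.0                        # assumed input power

for excess_fraction in (1.00, 0.10):         # 100% excess vs 10% excess
    excess_w = excess_fraction * p_in_w
    uncertainty_w = calorimetry_error_fraction * (p_in_w + excess_w)
    print(f"{excess_fraction:.0%} excess: {excess_w:.1f} W "
          f"vs ~{uncertainty_w:.1f} W possible calorimetry error "
          f"-> ratio {excess_w / uncertainty_w:.1f}")

# 100% excess: 10 W vs ~2.0 W possible error -> ratio 5.0 (looks interesting)
# 10% excess:   1 W vs ~1.1 W possible error -> ratio 0.9 (not distinguishable)
```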


    So while Jed is right, some setups make it quite difficult to determine what noise is.


    In this case the Q pulses add calorimetric complexity because you have to quantify:


    (1) EMC error change, control vs. real

    (2) Q-pulse power estimation error, control vs. real

    (3) Control box efficiency change, control vs. real


    If "no Q pulse" output is considered control than (2) reduces to estimating Q pulses. (3) would not be necessary in a better experimental system where control box dissipation could be separated from output (though that might make for additional EMC problems).


    Given all this unquantified "noise", neither the absolute power excess nor the COP alone helps us.


    Change in power excess might represent progress, but only if other parameters remain the same between these different reports.


    Robert: your headline summary - without the details - does not tell us much.


    The details from Brillouin do not help much because they specify "peak" values in the table. But peak over what time period? If it is short, this means very little. Also, they don't specify under what assumptions this is made (e.g. is control box dissipation included, and if not, how is it controlled for?).

  • COP is relevant in LENR experiments where (as is often the case) there is fairly standard calorimetry, with errors not properly quantified but supposed no more than "typical 10% or so".


    I do not think this is "often the case." As far as I know, this is never the case. Which experiments do you have in mind? In what paper did you read "typical 10% or so"?


    If errors are not properly quantified, then the COP would not make any difference. Whether it was 110% or infinity (meaning no input power), the results would be questionable, or meaningless.

  • I do not think this is "often the case." As far as I know, this is never the case. Which experiments do you have in mind? In what paper did you read "typical 10% or so"?


    If errors are not properly quantified, then the COP would not make any difference. Whether it was 110% or infinity (meaning no input power), the results would be questionable, or meaningless.



    I thought that might be what you'd say. This is a pragmatic question. Since complete error analysis for control- or calibration-based results is never possible without assumptions (give me a counterexample and I'll tell you the assumptions), there is always a badly defined issue of what quantitative bound the total assumptive errors have. By definition. Choose a value different from 10% if you like - obviously it is very variable.

  • there is always a badly defined issue of what quantitative bound the total assumptive errors have.

    Always? In every paper? Are you saying that Fleischmann, Miles, McKubre, Storms or Mizuno have not dealt with these issues, or they do not understand them?


    I suggest you look at 3 or 4 of the major papers and show where they have made incorrect assumptions or left badly defined issues relating to signal to noise. You often claim you have found mistakes, but when I ask you for specific instances in specific papers, you do not answer. I do not think you have actually found any errors.


    Along the same lines, you claim you have discovered reasons why the boil-off experiments might be wrong, but when I asked you what specific reasons you have in mind, you pointed to a nonsensical list compiled by someone else, and then you claimed that the pressure from steam is pushing macroscopic drops of pure water out of the cell, which is preposterous. You do not get points for claiming you have discovered an error and then -- when asked what that error is -- pointing to impossible events that would be readily observable if real, and which are never seen. That's not science.


    If you are saying there are some obscure, poorly written papers in cold fusion with mistakes in them, you are right. However, in the mainstream, peer-reviewed papers there are no significant errors. Except in Morrison's paper, which has mistakes in the other direction. It passed peer-review easily even though it is full of errors, because the editors and reviewers shared Morrison's bias against cold fusion.


    https://www.lenr-canr.org/acrobat/Fleischmanreplytothe.pdf

  • Always? In every paper? Are you saying that Fleischmann, Miles, McKubre, Storms or Mizuno have not dealt with these issues, or they do not understand them?


    https://www.lenr-canr.org/acrobat/Fleischmanreplytothe.pdf


    Not at all. McKubre and many others do understand these issues, yet do not provide precise quantitative error bounds for all errors. An error bound can be correct (if conservative) without being precise. But most LENR claims are much less carefully instrumented and checked than McKubre's.


    Mizuno has a record of entirely missing significant errors.

    • Official Post

    Quantifying errors properly below 10% is difficult because the assumptions on which results rest (e.g. calibration or control runs are similar to real runs in all calorimetric variables) can be challenged. And, in addition, low powers present additional problems independent of COP close to 1.


    Are transmutations and radiation as prone to interpretation error as XH (excess heat)? In Ruby's new interview with a prominent Russian scientist (Irina Savvatimova), she had this to say:


    "However, all the effects of transmutation with an increase in the content of individual elements up to 100 times or more, with a change in the isotopic composition, could not convince critics that such changes were a reality.

    Only an experiment with radioactive material could convince these people, so it was another happy occasion when John Dash invited me to Portland State University to conduct research with uranium.

    As a result of this work, we were able to show the presence of alpha, beta and gammas. The alpha activity of Uranium increased after irradiation with hydrogen and deuterium ions about 2-4 times, and beta and gamma emission increased from 10 to 60%."


    And if you think she was seeing contamination instead, or is an amateur:

    "I had experience with a glow discharge for more than 10 years before the CF, work has already been done on studying changes in structure and properties, so for me the study of transmutation was just a more in-depth comprehensive study of the process. The study of the elemental and isotopic composition showed the appearance of elements – that were absent before the experiments – in the sample material and the structural parts of the discharge chamber."


    This lady scientist knows what she is doing, and what she is seeing. She is not alone either, as there have been many reports like this from around the world. Maybe I am wrong, but it seems much harder to outright dismiss nuclear signatures as anomalies, especially when they come from scientists qualified and experienced in the field of study and the equipment.


  • Shane - yes, these changes in radioactivity and in isotopic composition are very hard to evaluate. That is because the errors are not straightforward:


    Radiation:

    contamination (must be considered on a detailed case-by-case basis); not straightforward because unexpected causes are always possible

    GCR (galactic cosmic rays - OK, easily dealt with, but not always considered)

    detector temperature sensitivity (not always considered, though any experimental scientist familiar with the detectors would do this)

    atmospheric contamination


    Isotopic composition:

    contamination

    misinterpretation of lines


    BTW I've probably missed quite a few issues.


    You'd need a good write-up considering all such issues before you took such claims seriously, hence anecdotal comments don't get us very far.


    However, in this case (activity of uranium changing) I'd suggest another possibility. Natural uranium has a complex set of decay products, including radon. These can be disturbed (physically), causing, for example, changes due to radon egress and radon progeny. Radon is tricky stuff because it creates airborne progeny with varying half-lives.


    Re trusting the scientists. The issue is how many scientists have such anomalous reports out of what overall sample size? If only 1% of scientists make such reports then it is plausibly just because they are less experienced and/or mistaken. And, yes, scientists like people everywhere can end up repeating mistakes.

  • But most LENR claims are much less carefully instrumented and checked than McKubre's.

    Most? Which ones? Be specific. Which of the mainstream papers are much less careful, to the point where the results are doubtful? I will grant that few people match McKubre's level of care. But which ones are so bad you don't believe them?


    Mizuno has a record of entirely missing significant errors

    The problem there was that I have a record of mistranslating early drafts of papers, and mixing up equations. I put that out so that other people would catch the errors. As Robert Bryant pointed out, Mizuno, I and others corrected these errors, but that author never acknowledged the corrections. (Robert had more contact than I did, and he can give you the details.)


    What I did was a crowd-sourced version of peer-review. It was effective. I have no regrets. Unlike you, when I make a mistake, I admit it frankly and I correct it. I do not go on claiming that steam from 100 W boiling can push up macroscopic drops of water, or that I have discovered errors in papers when I have not.

  • Unlike you, when I make a mistake, I admit it frankly and I correct it. I do not go on claiming that steam from 100 W boiling can push up macroscopic drops of water, or that I have discovered errors in papers when I have not.


    The trouble with me, from your POV, is that I admit to much more uncertainty about the world (including other people's claims) than you do.


    errors in other papers. Be precise: when have I made this claim? (Hint - if you look carefully, never, except for the Mizuno issue.)


    One difference between us is that your assumption is: no discovered error => almost certainly true. I do not make that assumption, and am skeptical about many things where I've not discovered errors.


    macroscopic drops. You are claiming that nothing in those systems could push recondensed liquid-phase water out of the calorimetric boundary. I am just saying that it is a possibility that is not excluded in the F&P paper, because the measurements of salt level do not preclude it. However, F&P presumably considered this possible, because they felt it worth making the salinity measurements and saying that they showed it was not happening.

  • macroscopic drops. You are claiming that nothing in those systems could push recondensed liquid-phase water out of the calorimetric boundary. I am just saying that it is a possibility that is not excluded in the F&P paper, because the measurements of salt level do not preclude it.

    Okay, there are at least four problems with your hypothesis:

    1. Calibrations show there is no apparent excess heat except when the palladium is highly loaded and when it produces heat before and after the boil-off. Your recondensed water hypothesis cannot explain that. Why would the heat turn off just before boiling, and then turn on again after boiling? The methods of calorimetry before and after do not depend on lost water.
    2. The effect would have to be large enough that the moving droplets would be visible. They are not. No one sees droplets move up.
    3. This would happen as often with ordinary boiling or electrolysis as it did in this experiment. All test tubes of this shape would be subject to this error. Such test tubes are common. They are not subject to this error. If they were, people would have seen this long ago, and it would be common knowledge.
    4. What would be the mechanism? What pushes the water up? You said it would be steam. I suggest you calculate what the steam pressure would be, given ~100 W of boiling and the dimensions of the top of the test tube (a rough version of that calculation is sketched below). You will find the pressure is far too low to cause a measurable effect.

    These problems do preclude the mechanism you propose. You have to come up with some other mechanism that is physically possible. Otherwise you might as well say that invisible unicorn farts cause water to leave the cell unboiled. Waving your hands and making impossible claims about events that no one ever observes -- and that would be readily observable, if they happened -- is not science.
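    For what it is worth, here is a minimal back-of-envelope sketch of the steam-pressure estimate suggested in point 4. The cell diameter and the use of dynamic pressure as the relevant scale are assumptions made for illustration, not values taken from the F&P paper:

```python
# Rough estimate of the pressure available from steam generated by ~100 W of boiling,
# escaping through the open top of a narrow cell. Illustrative assumptions:
#   - latent heat of vaporization of water at 100 C: ~2.26e6 J/kg
#   - density of steam at 100 C, 1 atm: ~0.6 kg/m^3
#   - inner diameter at the top of the cell: ~2.5 cm (assumed, not from the paper)
import math

power_w = 100.0
latent_heat_j_per_kg = 2.26e6
steam_density_kg_m3 = 0.6
tube_diameter_m = 0.025

mass_flow_kg_s = power_w / latent_heat_j_per_kg                  # ~4.4e-5 kg/s of steam
area_m2 = math.pi * (tube_diameter_m / 2) ** 2                    # ~4.9e-4 m^2
velocity_m_s = mass_flow_kg_s / (steam_density_kg_m3 * area_m2)   # ~0.15 m/s
dynamic_pressure_pa = 0.5 * steam_density_kg_m3 * velocity_m_s ** 2

print(f"Steam velocity: {velocity_m_s:.2f} m/s")
print(f"Dynamic pressure: {dynamic_pressure_pa:.4f} Pa")          # ~0.007 Pa, i.e. negligible
```

    On those assumptions the available pressure is a tiny fraction of a pascal, roughly seven orders of magnitude below atmospheric pressure, which is the sense in which the proposed mechanism looks far too weak to move macroscopic drops.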

    • Official Post

    Re trusting the scientists. The issue is how many scientists have such anomalous reports out of what overall sample size? If only 1% of scientists make such reports then it is plausibly just because they are less experienced and/or mistaken. And, yes, scientists like people everywhere can end up repeating mistakes.

    Unlike you, when I make a mistake, I admit it frankly and I correct it. I do not go on claiming that steam from 100 W boiling can push up macroscopic drops of water, or that I have discovered errors in papers when I have not.


    The trouble with me, from your POV, is that I admit to much more uncertainty about the world (including other people's claims) than you do.

    One difference between us is that your assumption is: no discovered error => almost certainly true. I do not make that assumption and am skeptical about many things where I've not discovered errors.



    TH, this argument about error has been re-packaged so often over three decades that there is some sensitivity to it.


    Overall, statistically, LENR effects are without doubt observed over and over again. Not all measurements have the same precision; experiments have been "boutique", and not mass-labbed as a real program would do.


    Nevertheless, the Error Torch has been carried by many - for instance by David Kidwell of the NRL, who would not yield and accept the reality of experiments twice confirmed elsewhere, because of his claims of error. You can see him as the keynote speaker at ICCF-18 on YouTube.


    The goal is to find a solution to this intractable scientific question and develop a much-needed technology. You must be specific if you have claims of error, because most of these OG scientists have spent their careers answering every possible critique already, and specifics are out there.

  • They are between 2 and 3 in output/input ratio, increasing incrementally, but not able to crack 3 yet.

    It doesn't matter what the numbers are. They will never be high enough for Seven_of_twenty. If it cracks 3, he will demand 4. When it reaches 4 he will say nothing less than 10 will do. Many cells have produced heat with no input, so the "COP" is infinite. That's not good enough for Seven_of_twenty; he demands infinity times three.


    There is no COP and no power level that will satisfy Seven_of_twenty. 1 W, 10 W, 100 W . . . whatever is reported will not be enough. Whatever the signal to noise ratio is, it is too low. He and the other skeptics invoke the AGPM mechanism: Automatic Goal Post Moving, also known as finding the end of the rainbow. Whatever is achieved is automatically too low. Along the same lines, it does not matter who replicates, or how many labs replicate, because any lab that replicates is automatically declared ineligible. The researchers may have world-class reputations. They might be the people who built the national tritium lab at Los Alamos, or the people who run the largest reactor and the national nuclear research lab in India. It makes no difference. When they report a cold fusion effect, that proves they are incompetent, so we can dismiss them. You can have 180 labs replicate. That only proves 180 labs are wrong.


    Also, by the way, there is never any need for Seven_of_twenty to look at the evidence. It has to be wrong, or he would look at it, and he doesn't look, so it must be wrong.


    Also, any handwaving hypothesis that THHuxley comes up with is automatically right, even if it overrules all of physics, chemistry, and common sense going back to the Middle Ages. No matter how impossible his hypothesis may be, and even if there is not a single observation of the effect -- when it should be clearly seen by any observer, with any test tube -- it is still right.


    This is new age science. New rules. Get with the program.
