Rossi vs. Darden aftermath discussions

  • Kirk's rationale was (I believe) that without this detailed analysis systematic error is possible, and therefore aggregating results is likely unsafe.


    It's not that specific. As a general rule applied to _all_ scientific research reports, the report must stand on its own first and foremost. This means not just that a systematic error might be possible, but that one looks at the whole paper or report for minimum acceptable quality. That's why the ref Alain gave the other day fails. The authors might have done some good work, but you couldn't tell from the report, so it gets rejected until it gets cleaned up. If it doesn't pass muster, it should simply be rejected, perhaps with the option to reconsider if improvements in the experiment and/or theory are added to it.


    This is one of the fallacies the CFers use. They do some sloppy work that sort of looks like what someone else got and claim that, since they get the 'same' thing, their work should be accepted as is and folded into the consensus thinking on the subject. That's how we get 'thousands' of 'replications' and chickens doing cold fusion. Not good science.


    As an aside, detecting systematic error can be really hard, because the error is often in the way the experiment is conducted, but in many cases that methodology is developed from the current best knowledge of how to do things. I once saw a history of the accepted value for the speed of light that showed this pretty well. The value was determined in one fashion, accepted as well done, but then later, improvements showed that that method was off significantly. Unfortunately, I don't recall where I saw this so I can't give a ref.

  • Lack of expected reaction products

    Conventional deuteron fusion is a two-step process,[text 6] in which an unstable high energy intermediary is formed:

    D + D → 4He* + 24 MeV

    Experiments have observed only three decay pathways for this excited-state nucleus, with the branching ratio showing the probability that any given intermediate follows a particular pathway.[text 6] The products formed via these decay pathways are:

    4He* → n + 3He + 3.3 MeV (ratio = 50%)
    4He* → p + 3H + 4.0 MeV (ratio = 50%)
    4He* → 4He + γ + 24 MeV (ratio = 10⁻⁶)
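    The branching ratios above lead to the standard objection to excess-heat claims: if conventional D+D branching applied, watt-level heat would imply an enormous neutron flux. A minimal sketch of that arithmetic, using only the figures quoted above:

    ```python
    # Sketch: if the conventional D+D branching ratios above applied,
    # how many neutrons would 1 W of fusion heat imply?
    MEV_TO_J = 1.602e-13  # joules per MeV

    branches = [
        ("n + 3He", 3.3, 0.5),     # neutron branch
        ("p + 3H",  4.0, 0.5),     # proton/tritium branch
        ("4He + g", 24.0, 1e-6),   # gamma branch (negligible weight)
    ]

    # Energy released per D+D reaction, averaged over the branching ratios
    avg_energy_J = sum(q * MEV_TO_J * ratio for _, q, ratio in branches)

    power_W = 1.0
    reactions_per_s = power_W / avg_energy_J
    neutrons_per_s = reactions_per_s * branches[0][2]

    print(f"{reactions_per_s:.2e} reactions/s, {neutrons_per_s:.2e} neutrons/s")
    ```

    Roughly 10¹² neutrons per second per watt, which is why the absence of commensurate neutrons is cited as a missing expected reaction product.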


    @LINR: This is the result of a century-long misconception in experiments. In CMS LENR there is almost no collision momentum, unlike in all classical experiments, where there is always collision momentum.

    Takahashi used a classical code to simulate the momentum-free (symmetric) collision of DD, and at the end there is a long-lasting oscillation! No emission of particles...


    There are other issues, that could be discussed in a technical thread...

  • Quote

    Exactly! A tried and true technique. You look at A and pretend B does not exist, then you look at B and pretend A does not exist. Continue to Z. In military terms this is known as destroying an army in detail.

    No, nobody does that. You tend to confuse things by shotgunning too many things at once and trying to combine results from different experiments into one extravagant set of claims. Looking at and confirming if possible, ONE claim at a time is the right way to go about it.


    I am struck by the use of potato chips as a novel energy unit. I tend to favor cow-weeks (the amount of methane a cow farts in one week) but I am happy to consider chips. Potato chips, that is.

  • This I remember was a big disagreement between you and Kirk Shanahan, who suggested that the proper way to deal with such experiments was to look at each one individually.

    I never disagreed with that. You are mistaken if you think I did. As I have said many times, when dealing with experimental science you have to look at specifics. You have to make specific assertions about actual results; assertions that can be tested and falsified.


    The thing is, you cannot look at one experiment and say "the heat from this is only as much as burning a few potato chips" and dismiss the whole field on that basis because --


    First, that is a lot of heat by the standards of laboratory science. As the authors pointed out, it would be enough to produce molar level chemicals. Such as potato chip ash. So the objection makes no sense.


    Second, you have to look at other experiments too, and some of these produced as much heat as you get from burning ~20 kg of potato chips.


    Regarding Shanahan, you can look at all the experiments you want. You will not find any experimental evidence for any of his claims. On the contrary, you will find lots of evidence that he is flat out wrong and his claims are impossible, such as in the schematics and calibration data published by Miles. So, I am strongly in favor of looking at specifics in his case.

  • @LINR: This is the result of a century-long misconception in experiments. In CMS LENR there is almost no collision momentum, unlike in all classical experiments, where there is always collision momentum.

    Takahashi used a classical code to simulate the momentum-free (symmetric) collision of DD, and at the end there is a long-lasting oscillation! No emission of particles...


    There are other issues, that could be discussed in a technical thread...

    Harnessing that collision momentum might actually mean this is a superchemical event rather than a nuclear one.

    (Embedded video from www.youtube.com)


    Basically there is no evidence that the branching ratios are the same between gaseous collisions and collisions taking place within condensed matter, and there is evidence piling up that the results are completely different. Do you have that Takahashi paper? It sounds a lot like my V1DLLBEC theory.

  • On the contrary, you will find lots of evidence that he is flat out wrong and his claims are impossible, such as in the schematics and calibration data published by Miles.


    @forum


    Given that in 2017 Mel Miles admitted that he had never read any of my papers, how could he have possibly done anything deliberate to rebut what I claim? Answer. He couldn't.


    That leaves an 'accidental' result that rebuts what I have said. But Miles didn't do this rebuttal. With respect to my work, the only connection is the 2010 paper where he is one of the 10 authors, but he never read my work, so that doesn't work. Therefore it must be someone interpreting Miles' work as a rebuttal to me. Who is that? Where was that done? Refs. please...


    (And before Jed links to the Marwan reply to my 2010 comment, recall that they don't address the issues *I* raised at all.)

  • Second, you have to look at other experiments too, and some of these produced as much heat as you get from burning ~20 kg of potato chips.

    Jed, are the potato chips fried in oil or not? That would certainly change the final energy output.

    What should we name this new unit?

    BTW, a serious reference for a potato-chip burning experiment: http://www.chemistryhow.com/?q=book/export/html/50

    According to that page, potato chips have 4000 calories per gram!
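    Taking the quoted 4000 cal/g figure at face value (small calories, i.e. ~4 kcal/g, a plausible food-energy density for chips), the "~20 kg of potato chips" mentioned earlier works out as follows:

    ```python
    # Back-of-envelope: energy in ~20 kg of potato chips, using the
    # 4000 cal/g figure quoted above (small calories, ~4 kcal/g).
    CAL_TO_J = 4.184          # joules per calorie
    cal_per_gram = 4000.0     # figure from the cited page
    mass_g = 20_000.0         # ~20 kg of chips

    energy_J = mass_g * cal_per_gram * CAL_TO_J
    print(f"{energy_J:.2e} J  (~{energy_J / 3.6e6:.0f} kWh)")
    ```

    So the 20 kg figure corresponds to a few hundred megajoules, on the order of 90 kWh.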

  • Kev, energy and momentum are two different things, both conserved. Please do not confuse them!

    Kinetic Energy is 0.5mv^2 and momentum is mv. That ball stacking display showed how 800% more KE was harnessed from essentially a linear system in contrast to a gaseous system. On an atomic level I would expect 4 orders of magnitude difference.
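    The ball-stacking display being referred to sounds like the classic "Galilean cannon" demo (a light ball resting on a heavy ball, dropped together). A minimal 1-D elastic-collision sketch shows where a figure like "800% more KE" for the light ball could come from; the masses and speeds below are illustrative, not from the post:

    ```python
    # Sketch of the stacked-ball ("Galilean cannon") demo: after the heavy
    # ball rebounds from the floor, a 1-D elastic collision can launch the
    # light ball at up to 3x its impact speed, i.e. up to 9x (+800%) its KE.
    def elastic_1d(m1, v1, m2, v2):
        """Post-collision velocities for a 1-D elastic collision."""
        v1p = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
        v2p = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
        return v1p, v2p

    v = 1.0                          # both balls hit the floor at speed v
    m_heavy, m_light = 100.0, 1.0    # illustrative mass ratio
    # heavy ball has already rebounded (+v); light ball is still falling (-v)
    _, v_light = elastic_1d(m_heavy, +v, m_light, -v)

    ke_gain = (v_light / v) ** 2     # light ball's KE relative to impact KE
    print(f"light ball leaves at {v_light:.2f}v, KE x{ke_gain:.2f}")
    ```

    Momentum and total KE are both conserved across the pair; the light ball alone can still end up with nearly 9x its impact KE, consistent with the "800% more" figure.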

  • Quote

    maryyugo wrote: I would be pretty sure there is an accounted phenomenon present, some sort of anomaly.

    kevmolen: By the very definition of anomaly, it cannot be an accounted phenomenon.

    Yes, that was a typo. The intended word was UNaccounted. The portion of text would then read:


    "If someone designs an experiment for which a clear cut positive result is defined, the probability of error in measurement is extremely low (including good calibration methods, best measurement methods and devices, reliable labs doing the measurement, accounting for or ruling out Shanahan's calibration constant drift, and so on)... if that can be accomplished even once out of many tries, I would be pretty sure there is an UNaccounted phenomenon present, some sort of anomaly. ... etc."


    Thanks for catching that.

  • The Wendelstein hot fusion device is scheduled to start running in early September.



    https://translate.google.com/t…e-3811367.html&edit-text=


    Between ITER, this thing, and many other designs, hot fusion is entering a very active phase. Either hot fusion or CF/LENR (if it works as people continue to claim) will be the tech for the first working fusion power plant. My money, if there were bets, is of course on hot fusion. One of the limiting factors for hot fusion was computing power. Since you can now buy computing devices that can process over 100 terabits of data per second for $500 or less, this is no longer an obstacle.


    The link to the SRI paper someone asked about is


    http://brillouinenergy.com/wp-…01/SRI_ProgressReport.pdf

  • not real bets?

    My bet is still on IH. I think that they still have a chance with some of their other supported research.

    Not sure what the odds are, but I will not rule them out just yet. They may come back with a vengeance next year.

  • How has hot fusion been held back by computing power?


    My bet is that the hot-fusion boys will continue to defraud the public for the 1000X more money they piss away on something that will always be 50 years away. If those frauds had been honest about cold fusion, we would have cold fusion cars by now.

  • Given that in 2017 Mel Miles admitted that he had never read any of my papers, how could he have possibly done anything deliberate to rebut what I claim? Answer. He couldn't.

    He did not need to read your paper. His data proved you were wrong even before you wrote the paper, in two ways:


    1. His calibrations show that when you move the source of the heat within the cell, it does not change the cell calibration constant.


    2. The schematic of the cell clearly shows that moving the heat source within it cannot affect the calibration constant.


    See:


    http://lenr-canr.org/acrobat/MilesManomalousea.pdf


    Many other papers also prove you are wrong. Most were written before you came up with your theories.
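    The claim in point 1, that moving the heat source does not change the cell calibration constant, is a check one can state concretely. A minimal sketch with made-up numbers (the positions, temperatures, and powers below are illustrative, not Miles's data):

    ```python
    # Minimal sketch (synthetic data) of the calibration check described:
    # fit the cell constant K in P = K * dT for heater runs at two
    # different positions, then compare the fitted constants.
    def fit_K(dT, P):
        """Least-squares slope through the origin for P = K * dT."""
        return sum(t * p for t, p in zip(dT, P)) / sum(t * t for t in dT)

    # synthetic calibration runs: (delta-T in deg C, input power in W)
    run_pos_A = ([1.0, 2.0, 3.0, 4.0], [0.138, 0.276, 0.414, 0.552])
    run_pos_B = ([1.0, 2.0, 3.0, 4.0], [0.137, 0.277, 0.413, 0.553])

    K_A = fit_K(*run_pos_A)
    K_B = fit_K(*run_pos_B)
    print(f"K_A = {K_A:.4f} W/C, K_B = {K_B:.4f} W/C, "
          f"diff = {abs(K_A - K_B) / K_A:.2%}")
    ```

    If the fitted constants agree to well under a percent between heater positions, the heat source's location inside the cell is not affecting the calibration, which is the point being made about Miles's data.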

  • ------------------------------------------------------------------------------------------------------------------------------------------

    A major criticism presented by Jones and Hansen (Reference 11) of our calorimetry is the
    variation of the calorimetric cell constants over various experiments. For example, K1 ranges from
    0.135 to 0.141 W/ºC over four separate experiments that yield a mean of 0.138 ±0.003 W/ºC
    (Reference 4). Roger Hart pointed out that this criticism by Jones and Hansen is not valid since all
    cell components are repositioned in each experiment. The relative positions of the anode and
    cathode electrodes and of the two thermistors vary somewhat with each new cell assembly, thus
    the slight variation in the calorimetric cell constants in different experiments is expected.


    Based on our previous experience with integrating open, isoperibolic calorimeters,
    improvements were recently made to eliminate most of the error sources. This new calorimetry
    and improvements are illustrated in Figure 4.

    --------------------------------------------------------------------------------------------------------------------------------------------

    page 13
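    The quoted passage gives only the range (0.135 to 0.141 W/ºC) and the summary (0.138 ±0.003 W/ºC) for the four runs; the individual values below are hypothetical numbers consistent with that range, just to show the arithmetic behind the summary and the size of the relative spread:

    ```python
    # Hypothetical four cell constants spanning the quoted 0.135-0.141 W/C
    # range, to reproduce the "0.138 +/- 0.003 W/C" style of summary.
    import statistics

    K = [0.135, 0.137, 0.139, 0.141]   # hypothetical values, W/C

    mean_K = statistics.mean(K)
    std_K = statistics.stdev(K)         # sample standard deviation
    rel_spread = std_K / mean_K

    print(f"mean = {mean_K:.3f} W/C, std = {std_K:.3f} W/C (~{rel_spread:.1%})")
    ```

    A ±0.003 W/ºC spread on a 0.138 W/ºC mean is only about 2%, which is the scale of variation the passage attributes to repositioning the cell components between experiments.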

  • Oookay . . . for 1. Why are the calibration constants almost exactly the same?


    For 2, Fig. 4 on p. 55, is the schematic with a "a copper outer jacket contacts the bath and minimizes bath level effects by virtue of its high-thermal conductivity." As explained in this paper and others, the temperature is measured at this jacket. ". . . a copper (Cu) inner jacket that acts as the integrator" You are saying it does not integrate. Why not?


    Copper conducts heat quite well. Yet in your version of events:


    A heat source in one place within the cell magically heats only part of the water, with no mixing from electrolysis. THEN, the heat from the water magically crosses directly over to the cell wall, heating only part of the wall. From there, the cell wall magically heats only part of the copper jacket; AND for some inexplicable reason, defying physics and common sense, heat does not conduct to the rest of the jacket but goes straight to the thermocouple. The amount of heat that reaches the thermocouple depends on where, in the electrolyte, it originates.


    Yet a calibration works remarkably well, despite the magic effect you postulate:


    "A power output of approximately 6.5 mW is observed for 2 hours.
    This yields 47 J that compares very favorably to the expected 44 J based on the cathode size (1 mm × 4.3 cm), a loading level of PdD0.6, and using the reported value of - 35,100 joules/mole (J/mol) D2 (Reference 14)."
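    The arithmetic in the quoted calibration can be checked directly: 6.5 mW sustained for 2 hours, compared against the expected 44 J from deuterium deloading at the reported -35,100 J/mol D2:

    ```python
    # Quick check of the quoted numbers: 6.5 mW for 2 hours vs the
    # expected 44 J from D2 deloading at -35,100 J/mol.
    power_W = 6.5e-3
    duration_s = 2 * 3600

    measured_J = power_W * duration_s      # ~47 J, as the paper states
    mol_D2_for_44J = 44.0 / 35_100         # D2 inventory implied by 44 J

    print(f"measured: {measured_J:.1f} J; 44 J corresponds to "
          f"{mol_D2_for_44J * 1e3:.2f} mmol D2")
    ```

    The 6.5 mW × 2 h figure gives 46.8 J, matching the "47 J" in the quote to rounding, and 44 J corresponds to roughly 1.25 mmol of D2, the sort of quantity set by the cathode size and loading level cited.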


    Here is a copy of Fig. 4:


