Validity of LENR Science...[split]

  • Quote

    Not much. Most researchers are dead. There is no funding and there is tremendous opposition. It is remarkable that the research survived at all.

    Well, the IH money will help with that, I'd hope, and Abd, if I'm not mistaken, seems to think that at least new He/XS-heat experiments are in process.



    THH

  • (3) measuring lab He, and taking this as a bound, is problematic because in lab conditions He levels can vary greatly with time and space. There are ways around this, e.g. a nearly airtight jacket with stirring and He levels measured inside. But they require a lot of care and extra work; as far as I know, few of these experiments do that.


    The other thing to do is keep track of lab He concentrations, which is also time-consuming and somewhat costly, as the extra analyses add up. In the 'thousands' of papers on LENR out there you can find a few that mention lab He levels. Most are in the 10s and low 100s of ppm 4He. The Brian and Brian (Clarke and Oliver) paper on the Case cell replications done at SRI is interesting reading too...

    Production of 4He in D2-Loaded Palladium-Carbon Catalyst II

    W. Brian Clarke, Stanley J. Bos, Brian M. Oliver

    Fusion Science and Technology / Volume 43 / Number 2 / March 2003 / Pages 250-255


    Measurements of He, 3He/4He, Ne and 13 other components (H2, HD, D2, CH4, H2O, HDO, D2O, N2, CO, C2H6, O2, Ar, and CO2) in four samples of gas from SRI International (SRI) are reported. Three samples were collected from SRI Case-type stainless steel cells containing ~10 g of Pd/C catalyst initially loaded with ~3 atm D2 at ~200°C, and the fourth sample (not identified) was stated to be a control. Case and the SRI researchers have claimed to observe 4He in concentrations of ~100 parts per million (ppm) and up to 11 ppm, respectively, produced in these cells via the fusion reaction D + D = 4He + 23.8 MeV. Others found no evidence for 4He addition that cannot be readily explained by leaks from the atmosphere into the SRI cells. One sample appears to be identical in composition to air, and the other three have been seriously affected by leak(s) into and from the SRI cells. The rare gas "forensic" evidence includes 3He/4He ratios and He and Ne concentrations that are almost identical to air values. The samples also show high N2 (a primary indicator of air), low O2, and high CO and CO2 due to reaction of incoming atmospheric O2 with C in the catalyst. In two samples, the original D2 (or H2) has almost completely disappeared by outflow through the leak(s). These results have obvious implications concerning the validity of the excess 4He concentrations claimed by Case and the SRI researchers.
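    A rough scale check helps explain why atmospheric leaks matter so much here. The sketch below uses invented but plausible numbers (50 mW of excess heat for a day, a 50 mL headspace); none of them come from the Clarke paper, and the point is only the order of magnitude:

```python
# Back-of-envelope check (illustrative numbers only): if a cell really
# produced 50 mW of excess heat for a day entirely via
# D + D -> 4He + 23.8 MeV, how many ppm of 4He would that leave in a
# 50 mL headspace, compared with ordinary air (~5.24 ppm 4He)?

MEV_TO_J = 1.602e-13          # joules per MeV
Q_DD = 23.8 * MEV_TO_J        # energy released per 4He atom, in joules
AIR_HE_PPM = 5.24             # 4He concentration in ordinary air
AVOGADRO = 6.022e23

def he4_ppm(power_w, hours, headspace_ml):
    """ppm of 4He in the headspace implied by the stated excess energy."""
    atoms_he = power_w * hours * 3600.0 / Q_DD
    atoms_gas = headspace_ml / 22414.0 * AVOGADRO   # ideal gas at STP
    return atoms_he / atoms_gas * 1e6

signal = he4_ppm(0.050, 24, 50.0)   # comes out under 1 ppm
```

    At that scale the fusion-implied 4He is smaller than the ~5.24 ppm already present in air, so even a small leak can swamp or mimic the signal, which is exactly the forensic point the abstract makes.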


    These two selection mechanisms can in principle make random error correlations obey the expected ratio quite well. To get around this, you need to have a clear non-selective protocol and not amalgamate results with others from less clear protocols. I'd expect the new experiments to deal with this problem.


    Expectations are not always met...


    P.S. Email me if you'd like some of the data files from the CD attached to this report in the real world. As I said, I analyzed the M series runs extensively as they had the only strong excess heat signal. You can get my email from the manuscript of my first publication stored in Jed's database.

  • Oh come now. That's easy to test. Researchers usually end up testing that whether they plan to or not. It is inherent in the equipment. Many cells have three sources of heat:


    1. The anode-cathode combination.

    2. A joule heater for calibration.


    3. The recombiner in the head space.


    As I noted in my first publication and in many subsequent discussions, Dr. Storms reported in his ICCF8 talk on the data he collected that calibration by electrolysis (#1 above) gave a slightly different calibration constant from the Joule-heater calibration (#2 above). (Notice! This is not me, this is him saying they are different!) The difference was of the same order of magnitude as the spread of the run-specific calibration constants I derived from Ed's data, and as the variation in calibration constants (heat transfer coefficients) that Dr. Mel Miles reports in several of his papers; all are about 1-2% RSD.
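    For concreteness, here is what a 1-2% RSD in calibration constants means numerically. The coefficient values below are invented for illustration; they are not taken from Storms's or Miles's data:

```python
import statistics

def rsd_percent(values):
    """Relative standard deviation: stdev as a percentage of the mean."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

# Hypothetical run-specific heat-transfer coefficients (W/K):
cal_constants = [0.790, 0.802, 0.795, 0.810, 0.788]
spread = rsd_percent(cal_constants)   # roughly 1%, i.e. in the quoted range

# At 10 W of input power, a 1% calibration error shows up as ~0.1 W of
# apparent excess heat -- the same order as many reported signals.
```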


    Normal cell operation is 100% recombination at the recombiner. When the FPHE kicks in, recombination decreases, and the heat moves to ATE (at-the-electrode). This is the steady-state shift that induces the CCS.
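    A minimal numerical sketch of that steady-state shift, with assumed coefficients (the 1.5% difference below is illustrative, not a measured value):

```python
# Calibrate with all heat released at the recombiner; then release the
# same total power at the electrode, where the effective heat-transfer
# coefficient differs slightly. The unchanged calibration constant then
# reports excess heat that is not really there.

def measured_power(true_power_w, k_actual, k_calibration):
    # The sensor signal scales with k_actual; the analysis divides by
    # the constant obtained during calibration.
    return true_power_w * k_actual / k_calibration

K_RECOMB = 1.000      # effective coefficient during calibration (assumed)
K_ELECTRODE = 1.015   # 1.5% different once heat moves ATE (assumed)

p_true = 10.0                                        # watts actually in
p_reported = measured_power(p_true, K_ELECTRODE, K_RECOMB)
excess = p_reported - p_true                         # ~0.15 W, spurious
```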


    This never causes spurious heat or cold, for many reasons.


    No Jed, that caused the apparent excess heat signals.

  • LENR claims hundreds of replications like the Catholic Church claims thousands of miracles. Unfortunately, they never happen when an outsider is watching.


    Hmm, Sherlock. Your deduction is not sufficient. Every outsider present would become an insider when witnessing a LENR reaction, by your and other sceptics' definition. So your statement is just expressing your personal bias and nothing more. But you believe that there is dark energy and dark matter, and that our reality is nonlocal, right? ;)

  • The best blank test of F&P calorimetry is that sometimes it did not work, or stopped working, proving a systematic error was not responsible for the phenomenon.


    Don't understand systematic errors well, do you, Alain? There's no requirement that they behave as you would like them to.


    Of course, the He4/heat correlation is the best evidence.


    Of course, if there really is no real excess heat, there is no 'heat/He' correlation.


    Note that Gary Taubes' argument is debunked, both because it was shown to rest on cherry-picked statistics, and because the observed tritium result is impossible to explain by the alleged fraud.


    Storms' work showed that the tritium signal obtained likely did not come from a single spike event. That is all. Storms incorrectly concludes the tampering issue is resolved. In fact, there are any number of other ways to add tritium to a cell which he does not deal with.


    His study has some value in that he did 'eliminate' one of many possible hypotheses. But the normal way to investigate this problem is to back-calculate the way the tritium would have had to be introduced if it came from an external source, and then very carefully eliminate any such source. This is a lot harder to do than what he did. Usually, the data needed to do that is nonexistent.


    the critics were stinky


    I shower every day, thank you.


    a clear groupthink


    Cold fusioneers definitely groupthink...10 of them all signed off on the gross error of associating my name with someone else's proposal (random CCSH).

  • "the fraud that made the editor of MIT papers furious"


    I looked into this. There was a preliminary data plot released and then a final one. (See page 12 of the referenced IE paper.) The final was different from the original in two ways. First, the MIT people averaged a few points of their data to produce each plotted point. I've seen CFers do the same thing, so what's the problem? I see none.


    Second, their original data showed a baseline shift before and after the supposed CF excess heat peak. In the final version, their plot started at the first shift and ended at the second. Again, what is the problem? Do you think the baseline shift is CF? You need to revisit the Storms data sets. The first, obtained in January 2000, had baseline shifts negatively correlated to the input power. Upon being informed of this, Ed redid his grounding scheme and produced the second set in Feb. 2000, which is the one I used in my first publication. It had largely eliminated the baseline shifts (I recall a trace still being present). Also, recall that the baseline zero is defined by a cal constant in most cases, so a shift in baseline might indicate a CCS. The point is that baseline shifts can come from many sources and are not good evidence of CF. The MIT guys certainly knew this, so they clipped their data to the interesting region. So, what's the problem?
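    For reference, "averaging a few points to produce each plotted point" is just block averaging. A minimal sketch with invented readings (not the MIT data):

```python
def block_average(samples, block):
    """Average consecutive blocks of `block` samples into one plotted point."""
    return [sum(samples[i:i + block]) / block
            for i in range(0, len(samples) - block + 1, block)]

raw = [0.9, 1.1, 1.0, 1.2, 0.8, 1.0]   # six raw readings
points = block_average(raw, 3)         # two plotted points, both near 1.0
```

    Averaging like this smooths noise but preserves any real sustained signal, which is why it is routine practice on both sides of this argument.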

  • You are not very observant.


    Apparently more observant than you.


    They lowered data points, erased others, and added a bunch of new ones.


    "lowered" - I assume you refer to the fact that they moved the zero to go through what is likely the arithmetic average of the region they were plotting. So what? I just stated examples that prove baseline shifts are non-diagnostic. Why foster wild speculation by laymen that a non-zero average is important?


    "erased others" - Yes, they clipped out the baseline-shifted parts of no interest.


    "added" - Where? I don't see any such.



    The problem you fail to account for is that the plotting software used to make the Figure overwrites points. The first region apparently has more points only because it had that noise spike which spread out some of the points vertically, allowing more than usual to be seen. You can even note if you blow the figure up that the purple lines are unevenly spaced, a sure indication of this problem. The only way to support your statement is to get the original data and check. But my impression is that it is just a software 'feature'.

  • The problem you fail to account for is that the plotting software used to make the Figure overwrites points.

    No, it does not. Look at the rest of the figure, and the figure for the hydrogen null run (in the original paper). The points are distributed evenly, 1 per hour. Only one part of one graph has uneven points, and extra points added in, and points moved down. That is the deuterium test. That has to be a manual change. No program would do that. The effect of it is to hide the excess heat.

  • This argument between Kirk and Jed looks like it should be resolvable. Could anyone post the relevant pics?

    It is right where I just told you! I put it there myself, lo these many years ago.


    As I said, see p. 22 and 23 here:


    http://www.lenr-canr.org/acrobat/MilesMisoperibol.pdf


    As you see, starting around hour 26, the points are evenly spaced from there to the end. They are only irregular from 0 to 26. That cannot be caused by a data plotting program circa 1989. Those programs were perverse, but not that perverse!


    Here is H2O data with evenly spaced points, one per hour, for all 60 hours.

  • You need to select them to see the legends. My understanding, Jed, is that you think the representation on the top right is an unfair manipulation of the raw data on the top left? On cursory inspection it seems pretty similar, and similar also to your plot. Obviously I need to work out exactly what the two sides are claiming about this. I have no more time now; I will return to this later.

  • Professor Huxley:


    Thanks for copying that figure from the Miles paper, but I wish people would read the paper, p. 22 and 23, explaining what the figure means. On the pages leading up to this, Miles gives several reasons for thinking the data is wrong. He describes that figure in this paragraph, which I guess I might as well copy:


    "A simple method to ascertain that the H2O and D2O data sets were not treated equally by M.I.T. has been pointed out by Jed Rothwell [35]. It was reported that both the H2O and D2O raw data was averaged over 1-hour blocks to produce the two figures [30]. For the H2O data (Fig. 4 of Ref. 30), it is readily verified that the averaged data points are evenly spaced with one point per hour as stated. Even a cursory visual inspection shows that this is not true for the D2O data (Fig. 5 of Ref. 30). The unpublished M.I.T. raw experimental data as recorded for the D2O cell is presented in Figure 11. Even if one ignores the raw data showing 235 mW of average excess power over the first 13.6 hours that was adjusted to zero in Figure 11, there remains later peaks that yield about 70 mW of excess power. The published data for this D2O cell is shown in Figure 12 where the averaged data points for the 1-hour blocks were apparently moved horizontally as well as vertically to give the unequal spacing and near zero excess power [35]."


    http://www.lenr-canr.org/acrobat/MilesMisoperibol.pdf

  • No, it does not. Look at the rest of the figure, and the figure for the hydrogen null run (in the original paper). The points are distributed evenly, 1 per hour. Only one part of one graph has uneven points, and extra points added in, and points moved down. That is the deuterium test. That has to be a manual change. No program would do that. The effect of it is to hide the excess heat.


    To begin with, there is no excess heat indicated in these plots. The supposed signal you refer to is not greater than 3 times baseline noise, so it is considered noise. I'm not even sure it exceeds 2 times baseline noise.
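    The "3 times baseline noise" criterion is easy to state in code. The readings below are invented for illustration, not from any of these data sets:

```python
import statistics

def is_significant(peak, baseline, n_sigma=3.0):
    """True if the peak rises more than n_sigma baseline standard
    deviations above the baseline mean."""
    threshold = n_sigma * statistics.stdev(baseline)
    return (peak - statistics.mean(baseline)) > threshold

baseline_mw = [0.0, 2.0, -1.0, 1.0, -2.0, 0.0]   # quiet-region power, mW

big_peak = is_significant(20.0, baseline_mw)      # clearly above 3 sigma
small_peak = is_significant(3.0, baseline_mw)     # within the noise band
```

    By this standard test, a bump comparable to the baseline scatter is noise, whatever its shape.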


    I also note the X scales are different and that alters the appearance somewhat (but doesn't solve the whole problem).


    Then, I agree with you that there is a data density difference between the pre-noise-spike region of the D2O plot and the rest. I'd guess that after suffering a big baseline shift and the noise spike, the MIT guys tweaked something. But even so, the H2O raw plots look different noise-wise from the D2O, probably because they used slightly different equipment powered through different circuits, leading to differing noise characteristics.


    So, I find it reasonable that the averaging they did would produce differing numbers of points per inch in the Figure. So what? The question is whether there are any peaks that indicate CF is ongoing, and the answer is no, there aren't. All we see is noise, of several types.

  • So, I find it reasonable that the averaging they did would produce differing numbers of points per inch in the Figure. So what?

    So, in that case you have no idea how plotters work, or how software worked in 1989, and you are totally unqualified to discuss this. That's so what.


    It is incredible that you seriously believe a computerized plotter would splash points around, move them down and add new ones. You don't recognize blatantly fraudulent data when it is staring you in the face!