Validity of LENR Science...[split]

  • "the fraud that made the editor of MIT papers furious"


    I looked into this. There was a preliminary data plot released and then a final one. (See page 12 of the referenced IE paper.) The final plot differed from the original in two ways. First, the MIT people averaged a few points of their data to produce each point plotted. I've seen CFers do the same thing, so what's the problem? I see none.


    Second, their original data showed a baseline shift before and after the supposed CF excess heat peak. In the final version, their plot started at the first shift and ended at the second. Again, what is the problem? Do you think the baseline shift is CF? You need to revisit the Storms data sets. The first, obtained in January 2000, had baseline shifts negatively correlated with the input power. Upon being informed of this, Ed redid his grounding scheme and produced the second set in February 2000, which is the one I used in my first publication. It had largely eliminated the baseline shifts (I recall a trace still being present). Also, recall that the baseline zero is defined by a calibration constant in most cases, so a shift in baseline might indicate a CCS (calibration constant shift). The point is that baseline shifts can come from many sources and are not good evidence of CF. The MIT guys certainly knew this, so they clipped their data to the interesting region. So, what's the problem?
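
    As a toy illustration of why this matters (a minimal sketch with made-up numbers, not anyone's actual data; a simple isoperibolic relation P_out = k * dT is assumed), a small drift in the calibration constant alone produces an apparent baseline offset in the computed excess power:

    ```python
    # Toy illustration (made-up numbers): how a calibration constant shift (CCS)
    # appears as a baseline offset in computed excess power.
    # Assumes the simple relation P_out = k * dT; this is not any real data set.

    k_true = 0.50          # W/K, cell constant determined during calibration
    dT = 10.0              # K, steady temperature rise above the bath
    P_in = k_true * dT     # W, electrical input exactly balancing the output

    for k_used in (0.50, 0.51):            # a 2% drift in the constant
        P_excess = k_used * dT - P_in      # apparent excess power
        print(f"k = {k_used:.2f} W/K -> apparent excess = {P_excess * 1000:+.0f} mW")

    # Prints 0 mW with the original constant and +100 mW with the drifted one:
    # a baseline shift that has nothing to do with anomalous heat.
    ```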

    No, it does not. Look at the rest of the figure, and the figure for the hydrogen null run (in the original paper). The points are distributed evenly, one per hour. Only one part of one graph has uneven points, with extra points added in and points moved down. That is the deuterium test. That has to be a manual change. No program would do that. The effect of this is to hide the excess heat.

    @Jed I take as my alias an eminent Prof, but no such accolade should be given to me. I am here as an amateur with postgrad science, maths, and engineering training, and others here no doubt have equivalent backgrounds.


    Firstly: I think it is most unhelpful for either side of this debate to call each other names. I can understand the frustration that leads to this, but no-one thinks scientists are guilty of deliberate deceit. It is very, very unlikely, and that assumption is one reason Rossi (to take a topical point) was able to bamboozle so many European academics. Academics will just not look for deceit - there are so many innocent ways you can get different opinions! Unless you have cast-iron proof (as with Rossi), assuming deliberate deceit distorts your analysis of other parties' work.


    Secondly: my summary of the debate here is that:

    (1) the published graphs use variable point density to illustrate the shape of a graph, which you find unusual, and think hides important information

    (2) you contend that different processing (to generate the variable density graph) has been applied to the control and active data, which are then compared.


    I agree, if the processing in the two cases is different, that is improper, and it means we would have to look much more closely at any arguments based on comparison of the processed results. It does not necessarily make those arguments false but it does mean that we don't understand how these guys are working and therefore must check this much more carefully than normal.


    I don't yet think you have shown that the processing in the two cases is different. My immediate first guess is that they used a processing method that varies point density in order to smooth noise. Thus where the high-frequency noise (which we see too much of in the raw data) is very high, they would average more. Normally that would be done without altering point density, but there are neat algorithms that do it by varying the point density, essentially choosing the number of samples per point in a way that ensures the high-frequency noise is smoothed out. Higher noise then means more raw samples per plotted point, and so fewer plotted points. Such algorithms have quite a few free parameters, as you can imagine. They give nice smooth graphs, and in principle I can see why using such an algorithm is motivated in this case, because the noise is both high and variable. They would use it exactly because they want to avoid reading artifacts from noise, but also want to get maximum precision from un-noisy data. We'd need to look at the H2O data as well as the D2O data and make a comparison to go further...
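
    To make that guess concrete, here is a minimal sketch of the kind of noise-adaptive decimation I have in mind (my own guess at a method, not anything documented in the MIT work; the function name and the target_sigma parameter are invented for illustration). More raw samples are averaged into each plotted point where the local noise is higher, so noisy regions end up with fewer plotted points:

    ```python
    import numpy as np

    def adaptive_decimate(t, y, base_window=4, target_sigma=0.05):
        """Average raw samples into plotted points, widening the averaging
        window where the local noise is higher, so noisy stretches produce
        fewer (but smoother) plotted points."""
        t = np.asarray(t, dtype=float)
        y = np.asarray(y, dtype=float)
        t_out, y_out = [], []
        i = 0
        while i < len(y):
            # crude local noise estimate from the next handful of samples
            chunk = y[i:i + base_window]
            sigma = chunk.std() if len(chunk) > 1 else 0.0
            # widen the window until the expected noise of the mean
            # (roughly sigma / sqrt(n)) drops below the target
            n = base_window
            if sigma > 0:
                n = max(base_window, int(np.ceil((sigma / target_sigma) ** 2)))
            n = min(n, len(y) - i)
            t_out.append(t[i:i + n].mean())
            y_out.append(y[i:i + n].mean())
            i += n
        return np.array(t_out), np.array(y_out)
    ```

    Run the H2O and D2O series through the same routine with the same parameters and the plotted point density will differ wherever the noise differs - which is the innocent explanation I am suggesting here.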


    They may have used some ad hoc method to do the same - arbitrarily processing high-noise sections in a way that reduces noise. That is bad practice, but quite a common procedure. I can think of many LENR papers that arbitrarily process data in order to make signals clearer. It would be reason to ask for a paper to be rewritten - though this would not always be done.


    I think it would be more fruitful in this case to ask what the evidence here is for the claimed signal (given the noise), rather than to worry about whether a specific post-processing algorithm is more or less proper.

  • As a layman bystander, I have never in several years found the case for academic malfeasance against the MIT team even remotely persuasive, even if they did shift the baseline and do other processing of the data. This situation calls to mind the various re-interpretations of the SRI M4-series data, which Steven Krivit made such a fuss about, but which upon closer inspection seemed fine, if a little opaque in how and why they were done. A team such as the one from MIT is allowed broad latitude in the interpretation of their own data, and if they are clearly wrong about something, another group can call them out. Their arranging of the data in the manner that seemed most sensible to them would not have been obvious malfeasance.

  • Quote

    @Zeph. What makes you think the white area is sparks?


    The white area is composed of many blinking hot flecks. These aren't sparks in the usual sense; they just look that way under the thermal camera.

    Now, which chemical reaction could explain these sparks?

  • Those "hot spots" are transitions from red to white, which is a discontinuous color change for a smooth gradation of temperatures. At X °C you have red, and at X + 0.1 °C you have white. It could also be that there were transients, as Pamela Mosier-Boss has assured me the group had reason to believe existed. But the SPAWAR video itself seems consistent with there being no transients, just small shifts at the threshold temperature. From the video alone it is hard to draw a conclusion of hot spots.

  • The white area is composed of many blinking hot flecks. These aren't sparks in the usual sense; they just look that way under the thermal camera.

    Now, which chemical reaction could explain these sparks?


    You are making a lot of assumptions here. Many things could cause transient IR radiance changes (which is what we are talking about): anything that altered the physical surface, for example; anything that altered the effective band emissivity; anything that altered transient thermal conductivity near the surface; also anything that caused transient local exothermic reactions and therefore temperature variation. As Eric points out, the quantisation in this picture means that if the overall temperature is exactly on a boundary, the variation here will all be caused by camera noise.
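
    A minimal numerical sketch of that last point (made-up numbers, nothing to do with the actual SPAWAR camera): if a uniform surface sits right on a palette boundary, per-pixel camera noise alone makes spots blink between the two colours from frame to frame.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy model: a uniform surface exactly at the red/white palette boundary.
    true_temp = 60.0        # deg C, uniform surface temperature (illustrative)
    threshold = 60.0        # deg C, palette boundary between "red" and "white"
    camera_noise = 0.1      # deg C, per-pixel read noise

    for frame in range(5):
        readings = true_temp + rng.normal(0.0, camera_noise, size=(8, 8))
        white_pixels = int((readings >= threshold).sum())
        print(f"frame {frame}: {white_pixels}/64 pixels render as 'white'")

    # Roughly half the pixels flip colour every frame, purely from camera noise:
    # "blinking hot spots" without any real temperature transients at all.
    ```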


    No serious researcher, I'm sure, would take that picture alone as evidence of anything much. So the question is: what other evidence is there?

    • Official Post

    The opinion of the Editor of the paper was that it was not far from misconduct, but the MIT administration disagreed with that interpretation, which drove him to fury and to resign.

    http://www.infinite-energy.com/images/pdfs/mitcfreport.pdf

    He reports a very negative ambiance

    http://www.larouchepub.com/eiw…he_air_about_the_cold.pdf

    not far from the second wave of reaction at LANL

    http://www.lenr-forum.com/foru…t-and-LENR-Edmund-Storms/


    About the MIT data, the consensus of the competent electrochemists who have analysed it is that it was not worth defrauding.

    The calorimetry was so poor it could not have detected anything, and in any case the loading was insufficient to produce any excess heat - which the calorimetry would not have measured anyway.

    http://lenr-canr.org/acrobat/B…Pjcondensedg.pdf#page=138


    this presentation

    http://coldfusionnow.org/wp-co…gelstein-Talk-09-2015.pdf

    starts at page 75 with a study of early results.

    on page 80 there is an analysis of negative results, MIT among them

  • @Jed I take as my alias an eminent Prof, but no such accolade should be given to me.

    I was kidding. I know who T. H. Huxley was. I am aware that he has been dead for a considerable long time, and I don't take no stock in dead people.

    but no-one thinks scientists are guilty of deliberate deceit.

    On the contrary, I know many scientists who were guilty of deliberate deceit! As I said before, compared to other professions the ethics of academic scientists are in the gutter. I have seldom encountered people so inclined to steal ideas, lie, and trash other people's reputations.


    Woodrow Wilson, who was president of Princeton U. before being elected president of the U.S., said that academic politics are particularly vicious because the stakes are so low. I think that is true, and it explains a lot. These people have nothing better to do than cause trouble.

    It is very, very unlikely and one reason Rossi (to take a topical point) was able to bamboozle so many European academics.

    As my late mother said, no one is easier to con than a con man. (My mother was a social science researcher.)

    Secondly: my summary of the debate here is that:

    (1) the published graphs use variable point density to illustrate the shape of a graph, which you find unusual, and think hides important information

    I find it fraudulent. It was obviously done by hand. When Mallove questioned the people who did it, they lied through their teeth and claimed the software did it. Software capable of that did not exist in 1989. As you see from the rest of the graph, and the H2O graph, the software produced one point per hour. It did not stuff additional points in, or move points down.

    (2) you contend that different processing (to generate the variable density graph) has been applied to the control and active data, which are then compared.

    There was no "different processing" involved! The profs at MIT looked at the graph, saw that it showed excess heat, and they deliberately moved the data points around to hide that fact. They did such an inept job that anyone can see a person did this, not a computer program. Then they accidentally leaked the original data, inadvertently giving Mallove a copy. But even if they had not done that, anyone glancing at the dot-dot-dot version can see those dots were added and moved around by a person. No computer program would do that. They came up with a cock-and-bull story blaming the computer software, which they later backed off from and refused to comment on.


    It was out-and-out fraud, which is common in academic science, as I said.


    See:


    http://www.lenr-canr.org/acrobat/MalloveEmitspecial.pdf

  • So, I find it reasonable that the averaging they did would produce differing numbers of points per inch in the Figure. So what?

    So, in that case you have no idea how plotters work, or how software worked in 1989, and you are totally unqualified to discuss this. That's so what.


    Jed has taken my quote out of context and tried to turn it into something else here. I was speaking about the mathematical averaging applied by the MIT folks, not the plotter in this specific case. The process of averaging will take spikes and decrease their height. But likewise it will reduce the baseline noise level too, so you still end up with the 3-sigma criterion for significance. The MIT data does not pass this test. There is no significant peak in the data, unless you insist the noise spike is one. And it actually could be a burst due to a CCS from ATER starting up and then immediately ceasing. The real bottom line is that without replication, all we can do is speculate about what caused the spike.
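
    A quick numerical sketch of that point (simulated data, not the MIT trace; the spike size and window lengths are arbitrary): block averaging shrinks a short spike and the baseline noise together, and significance is still judged by comparing the smoothed peak against roughly 3 sigma of the smoothed baseline.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated trace: flat baseline noise plus one short spike (arbitrary numbers).
    sigma = 1.0
    y = rng.normal(0.0, sigma, 1000)
    y[500:505] += 6.0 * sigma              # 5-sample spike

    def block_average(x, n):
        """Average n consecutive samples into each output point."""
        m = len(x) // n
        return x[:m * n].reshape(m, n).mean(axis=1)

    for n in (1, 5, 25):
        s = block_average(y, n)
        third = len(s) // 3
        baseline = np.concatenate([s[:third], s[-third:]])   # spike region excluded
        print(f"n={n:>2}: peak={s.max():5.2f}, baseline sigma={baseline.std():.2f}, "
              f"3*sigma threshold={3 * baseline.std():.2f}")

    # Both the peak height and the baseline sigma shrink as n grows; the question
    # is always whether the smoothed peak clears the 3-sigma threshold.
    ```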


    I see no point in arguing this further, as Jed is unteachable when it comes to things like this. He has made his decision and it will stand forever (in his mind).

    It is incredible that you seriously believe a computerized plotter would splash points around, move them down, and add new ones. You don't recognize blatantly fraudulent data when it is staring you in the face!


    I have personally observed points in the 'baseline' get overwritten in figures and graphs when the data density is too high, so Jed's comment on that is not correct. Note that that is NOT what I am saying has happened in the MIT data. I see now that the data density changed; that should have been explained in the MIT writeup but probably wasn't. And as noted in the posting Jed is quoting, I don't agree that anything out of the ordinary has occurred. As I explained there, baseline shifts are meaningless (which applies to Miles' comments about the '235 mW' excess heat signal...), and the averaging will reduce the overall peak heights of spikes as well. The D2O data seems to have changed data density after the spike (probably noise), so a different number of points per inch is expected if the averaging used a fixed number of points per average.


    As I said, we'd need the original digital data (if it was digital) to proceed further. But concluding fraud is way out of bounds and is typical of Gene Mallove's rabid belief in CF. In fact, as we all 'know' today (given the limited reproducibility in the field), the MIT experiment wasn't run for long enough to expect to have seen any FPHE.

  • The opinion of the Editor


    ...doesn't matter. We are discussing the possibility that the MIT guys 'fraudulently altered' data to 'suppress' CF. The conclusion among unbiased observers is that there may have been some fancy data reduction techniques employed that really required more explanation than was given, but the net conclusion of no observed CF signals is correct (unless you consider the noise spike, as I am calling it, a true signal). But that conclusion was the expected one given the short run time combined with our current knowledge of how to produce the FPHE. The references you give add nothing to this specific debate.


    You do quote several sources that report the negative attitudes towards CF prevalent at the time, which we all know about already. In fact, that there was some bias at the time was why I decided to 'get involved', so to speak, in 1995, after the issue was supposedly already decided.


    But the bottom line is that I found a possible non-nuclear cause for apparent excess heat peaks, published it, and have been erroneously dismissed by the CF community ever since. Once you realize that all excess heat reports arising from F&P-type work could be non-nuclear effects, a very large portion of the CF house of cards comes tumbling down. Think of what would have happened if my objections had been raised in 1990 instead of 2002. I suggest it would have had the effect of silencing F&P, just like the discovery that the nuclear spectrum they were using as evidence was flawed.

  • Quote

    My view is that if it works, then there can be no stopping it


    And it actually can not be stopped. But progress may still take a very long time, because of ignorance, lack of funding, and attempts to boycott research like these (1, 2, 3).


    Quote

    People seem to think that the validity of LENR can be decided by popular vote


    Precisely because I don't think so, I don't understand the point of your posts here. How do you propose to overturn the results of thousands of scientific publications on the subject (and I am not even mentioning the classified reports not listed there)? That is virtually impossible.
