Team Google wants your opinion: "What is the highest priority experiment the LENR community wants to see conducted?"

  • This thread is veering all over the place again.


    Whereas if Google can replicate it, especially after the Nature article, I think people everywhere will take notice.


    I think this is a really important point. Google's imprimatur is very important.


    To circle back, the original question was "What is the highest priority experiment the LENR community wants to see conducted?"


    I would submit that to answer this question, another question has to be asked:


    "What is the highest priority for the LENR community?"


    I would have thought that getting the field taken seriously by other scientists, and attracting new talent into the field, would be the top priority; escaping the so-called 'reputation trap', and unlocking the interest and attention of others.


    Google can do this. A rock solid replication in a top tier journal can do this. It will attract all kinds of attention.


    I think some people are reading the question as "What is the hole in the current field of research and replications that Google can plug?" or "What new exploratory research can Google do?" or "What is the most exciting or dramatic experiment?"


    I think this is the wrong way to look at it.


    I submit that the correct way to rephrase Google's question, assuming that you parse it the same way I do, is "Which experiment has the best chance of being successfully replicated and published in a convincing manner that can be digested by the scientific community at large?"


    Google can do things for this field that nobody else can do. This is a really important opportunity. Consequently, I think that the Google research project should be thought of as entirely separate from everything else happening in the field. The decision should not be made in reference to other things happening in the field; rather, only in reference to the question as I have reformulated it.


    A digression to make a point: Google is a very closely followed company on Wall Street. It's a fast growing, highly profitable monopoly*. It may or may not surprise you to learn that the average Wall Street money manager is fond of fast growing, highly profitable monopolies. Google is followed extremely closely and its stock is owned extremely widely; its quarterly financials are parsed, its news flow is followed, its 'moon shots' are tracked. The portfolio managers who own the stock are as au fait with the company as anybody outside it can be, generally speaking. It's not at all unusual to hear a portfolio manager who manages billions of dollars hold forth on Waymo, or something else going on inside the company.


    Any positive result from Google will be written up in places like Quanta, if not New Scientist et al, and it will quickly make its way back to these people. The job of a research analyst on Wall Street is to know everything that's going on inside the companies that they are assigned to follow. "Google replicates experiment that breaks the laws of physics" will set off a lot of confused Googling. And excitement.


    Four or five small venture capital firms, deciding to put 10% of an investment fund into the space, would solve all of the funding problems that are present, I would have thought. And that would be an eyedropper's worth compared to what would flow in if real progress were made toward viable industrial products. One would assume that, given success, Google would continue to fund research too.


    My point is that Google may be capable of not only opening the door to greater scientific respectability, but also the door to proper venture capital funding of the field. They would lift the pall of opprobrium that Woodford and IH have encountered. Maybe not overnight, maybe not with a single replication, but they are committed to the field and their success is beyond important.


    So. Now is the time to be disciplined, pragmatic and cooperative. Now is the time to stay on task. This thread has continuously collapsed into paroxysms of disquisition, self-indulgence and tangent.


    Again: "Which experiment has the best chance of being successfully replicated in a convincing manner that can be digested by the scientific community at large?"


    That may not be the question Google intended to ask, but I really think it's the question that the field needs to answer.


    Here's another question: "Is it dangerous to try to hit a home run with a dramatic but difficult or unproven experiment? To what degree is a chain of base hits preferable to swinging for the fences?"


    I get the sense that a lot of people would be disappointed if Google chose, say, SPAWAR's STEM kit** as their next experiment. I'm not saying that that's the correct experiment to choose, only trying to illustrate a dynamic that I see in this thread.


    One potential path forward:


    • Admins, at a time of their choosing, close the thread to new submissions and prepare a list of everything submitted.


    • Proposals that can be quickly dismissed because they are not clearly defined LENR experiments are removed.


    • A discussion is opened about preliminary criteria that can be used to quickly winnow the list of serious candidates.

    •• Are proprietary materials ok?

    •• How much documentation is required?

    •• Has the experiment been successfully replicated before?

    •• Are there commercial constraints which rule out the experiment?

    •• Are there concerns about the quality of the work?

    •• etc.


    • A discussion about the pros and cons of each surviving experiment is opened.

    •• Does the experiment only offer excess heat, or also other phenomena that may make for a compelling publication?

    •• How challenging is the experiment and in what particular ways?

    •• Are the original researchers available to answer questions and provide feedback?

    •• What kind of calorimetry is appropriate for the experiment?

    •• etc.


    • Admins close the discussion of pros and cons and prepare a list of the experiments plus the +/- factors that have been identified.


    • A discussion is had weighing the various experiments against each other using the +/- characteristics identified.

    •• Is SPAWAR's work preferable to bulk Pd-D experiments because it obviates the problem of loading?

    •• How much simpler is R20 than other experiments? How is this simplicity weighed against other factors?

    •• How is R20's lack of replication weighed against experiments that have been better replicated?

    •• etc.


    • A final shortlist is prepared.
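
    For what it's worth, the winnowing-and-weighing steps above could be made mechanical with a simple weighted score. A minimal sketch; every experiment name, criterion, score and weight below is a placeholder I invented, not a recommendation:

```python
# Hypothetical scoring of shortlist candidates. All names, criteria,
# weights and scores are invented placeholders for illustration only.

# relative importance of each criterion (to be chosen by the committee)
weights = {"replicated_before": 3.0, "simplicity": 2.0,
           "documentation": 2.0, "originators_available": 1.0}

# 0-10 scores per criterion for each (placeholder) candidate experiment
candidates = {
    "Experiment A": {"replicated_before": 8, "simplicity": 4,
                     "documentation": 7, "originators_available": 9},
    "Experiment B": {"replicated_before": 2, "simplicity": 9,
                     "documentation": 5, "originators_available": 8},
}

def total(scores):
    # weighted sum across all criteria
    return sum(weights[c] * scores[c] for c in weights)

# highest-scoring candidates first
shortlist = sorted(candidates, key=lambda n: total(candidates[n]), reverse=True)
for name in shortlist:
    print(name, total(candidates[name]))
```

    A real shortlist would of course come out of discussion, not arithmetic; the point is only that agreed criteria and weights make the trade-offs explicit.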


    I think Shane's idea of passing the decision to a committee of experts is a really good one.


    It's not my intention to be rude, or pretend to a level of knowledge I don't have. I've followed LENR for a while, but I'm not a scientist. I'm pretty sure my example questions are rudimentary and perhaps not even helpful, but I'm just trying to illustrate the points. Not my intention to hijack the thread. I'm also aware of Shane's wish that this be a thread of informed discussion from people with experimental experience, so I apologise for weighing in at length. Rightly or wrongly, these are just my thoughts.


    * Not a recommendation to buy or sell the stock.


    ** Apologies, I'm not sure if that's the correct title for their program.

  • orsova


    Very well said. Your post should help bring a little more discipline to the thread going forward. Since we have 2-3 months to finalize the list, we have been letting it wander around a bit, mainly because sometimes drifting off topic can spur new ideas related to the topic. Also, just killing time until ready to push it to a conclusion. Alan has been busy saving the planet, I have been busy with family issues, etc.


    Until we can get more engaged, please feel free to fill in and keep the thread moving in the right direction. Same goes for anyone.

  • I think there are at least two promising lines of research.


    1) As many have noted, replicating the most recent work of Ed Storms.

    2) The work of George Miley also seems to have used good methodology.


    Lower down the list would be (I find these to be unlikely to work):

    Mizuno's recent work.

    Brillouin Energy's work.


    Were I serious about finding something that works, I would talk to Industrial Heat to get a notion of what they have already tried and what they find promising. They've probably attempted to replicate more broadly than anyone else. They could also save time by reviewing the failed replication efforts of SKINR at Mizzou.

  • I guess you are still beating the "data quantisation" drum THH?


    Even SOT agrees that could not at all explain the excess heat.


    It is worth answering on this thread because this is about how I (and likely others) judge results.


    My relative uncertainty about Mizuno's results is affected by the fact that they are deconstructed from a spreadsheet with column titles that are not 100% accurate. Thus "anemometer" in fact means "blower power converted to equivalent anemometer speed based on blower calibration data".
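
    To illustrate the kind of reconstruction involved, here is a sketch of converting a logged blower power to an equivalent air speed via calibration data. The calibration pairs below are invented, not Mizuno's actual figures:

```python
# Sketch of converting a spreadsheet's blower-power column into an
# equivalent air speed using calibration pairs. All numbers invented.
import bisect

# (blower power in W, measured air speed in m/s) calibration pairs,
# roughly following the fan law (power ~ speed^3); values hypothetical
calibration = [(1.0, 1.0), (8.0, 2.0), (27.0, 3.0), (64.0, 4.0)]

def power_to_speed(p_watts):
    """Linearly interpolate air speed from blower power."""
    powers = [p for p, _ in calibration]
    speeds = [s for _, s in calibration]
    if p_watts <= powers[0]:
        return speeds[0]
    if p_watts >= powers[-1]:
        return speeds[-1]
    i = bisect.bisect_left(powers, p_watts)
    p0, p1 = powers[i - 1], powers[i]
    s0, s1 = speeds[i - 1], speeds[i]
    return s0 + (s1 - s0) * (p_watts - p0) / (p1 - p0)

print(power_to_speed(27.0))  # exactly on a calibration point -> 3.0
```

    The point is that every such reconstruction step adds a place where the deconstructed data can quietly diverge from what was actually measured.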


    It is also affected by many other aspects of the papers and described methodology which are less professional than my comparator: McKubre's write-up of his work.


    McKubre's results (with the exception of M4) are much smaller, so could more easily be the result of some artifact, even though none such has been positively identified. But I have better confidence that the way they have been collected is exactly known from the writeup. And they are claiming something that LENR guys think has been multiply replicated, and is well understood.


    Mizuno's R20 results are much larger - very unlikely to be some obscure artifact. However, they could be mistaken; I've suggested some mechanisms.


    All scientists can make mistakes. That is why methodical working, careful and complete data collection, repetition, and cross-checking, are all normal practice. Particularly when very surprising and unexpected results are discovered the experimenter goes back and repeats everything, with cross-checks, looking for mistakes. Some LENR experimental write-ups show that this has been done more than others. Mizuno's very little.


    The tragedy of LENR is that experimenters working in the LENR field do not do this in the way most people would, for LENR positive results. Why? Because, for them, such results are expected.


    We can see this, magnified large, in the number of Rossi-style NiH positive results. Google, repeating the same experiments (trying hard), discovered:

    (1) Initially, they had a lot of false positives.

    (2) The inherent inaccuracy of that style of calorimetry caused them to design a much better calorimeter.

    (3) Using this, results were negative.


    Why does Google discover this when others have not? I'd suggest a number of reasons, any one of which might help:

    • Those doing it are well-trained experimentalists
    • They have resources to do things properly
    • They treat positive LENR results as very surprising and unexpected (as above) and cross-check carefully


    I'm not saying that all LENR positive results come from those who are not well-trained: but many do. There appears to be an idea that someone who has worked as a scientist in, say, nuclear science will therefore, because they are a scientist, necessarily be a competent calorimetrist, etc. That is not true. Scientists can have done outstanding work and yet do bad work in a field different from that in which they are expert.

    McKubre's results (with the exception of M4) are much smaller, so could more easily be the result of some artifact, even though none such has been positively identified


    Let me fix that for you:


    McKubre's results, including M4, have a very large s/n ratio, they were repeated many times, they were independently replicated in 180 other labs, and this is one of the best conventional calorimeters ever made, so there is not the slightest chance they are the result of an artifact. No skeptic has even tried to identify an artifact in this work, or any other leading experiment, and the chances of finding one are roughly as good as finding an artifact in Newton's prism experiment.



    The distortions, ignorance and bias in your version show that with regard to cold fusion you are hopelessly unscientific. Nothing you say can be taken seriously. Everything else in this message is wrong.

  • In that context I would consider your father to be an expert.


    And in some other important context, I would expect the hospital janitor to be an expert. For example, one of the biggest problems in modern U.S. hospital care is in-hospital infections, spread by unclean practices. This kills 20,000 people per year from staph infections alone. (https://www.cdc.gov/publicheal…didyouknow/topic/hai.html) Probably, the janitorial staff knows more about how to reduce this problem than the doctors do.

  • McKubre's results, including M4, have a very large s/n ratio, they were repeated many times, they were independently replicated in 180 other labs, and this is one of the best conventional calorimeters ever made, so there is not the slightest chance they are the result of an artifact. No skeptic has even tried to identify an artifact in this work, or any other leading experiment, and the chances of finding one are roughly as good as finding an artifact in Newton's prism experiment.


    The distortions, ignorance and bias in your version show that with regard to cold fusion you are hopelessly unscientific. Nothing you say can be taken seriously. Everything else in this message is wrong.


    Jed: let us stick on topic here. If, as you say, this is true, then a highly respected, well funded, independent group, skeptically inclined and therefore able to write stuff up in a way that will have impact, and also to detect otherwise undetected artifacts, is exactly what is needed to turn these 100% guaranteed positive results into something the world now will take seriously.


    Which is why I am suggesting they do this: or something like it.


    I believe they have already (sort of) done this. I'd think therefore the priority should be to work out what is different between what they did, and what they need to do to get these guaranteed results, and suggest they do it?


    For example, if high D loading is the issue, check with them whether they are using the correct material and the correct methodology. Also check which historic measurements of loading they agree are correct, and which suffer from the errors they have themselves identified. There has been so much high-quality experimentation in this area, and a discrepancy between your view and that of the scientific community. Surely drilling down into that is an obviously fruitful line of research that will end up with an answer?


    THH

  • Jed: let us stick on topic here. If, as you say, this is true, then a highly respected, well funded, independent group, skeptically inclined and therefore able to write stuff up in a way that will have impact, and also to detect otherwise undetected artifacts, is exactly what is needed to turn these 100% guaranteed positive results into something the world now will take seriously.


    Which is why I am suggesting they do this: or something like it.


    That has no bearing on the fact that everything you said in that message is factually wrong, biased and unscientific. Okay, you pay lip service to the idea that they should try to replicate. But if they were to listen to you, or take your critique seriously, they wouldn't bother. You said the results "could more easily be the result of some artifact." Why would anyone bother to replicate a result that could "easily" or "more easily" be an artifact? They would think: "Surely, after 30 years, it must have been replicated? It can't be worth much if it still might easily be an artifact." Ah, but you didn't mention the thousands of replications, leaving the impression there are none. You said that "even though none such [artifact] has been positively identified" -- neglecting to mention that no artifacts of any kind have been identified in any major experiments, despite decades of looking for them by pathological skeptics such as you.


    What you write is not a critique. It is not rational. It is anti-cold fusion propaganda, intended to confuse and mislead the audience.

  • I do not disagree with all those who are suggesting that a Mizuno replication would be a high priority. However, my take is somewhat different. I would like Google to contract with Mizuno to have him produce his own replication of the R20, which he would make available to Google researchers for their own continued examination and use, in their own labs. Ideally, Google's representatives would be present during all phases of the replication's construction and preparation, perhaps assisting, perhaps just observing.


    Assuming Mizuno's results are accurate, he himself should be in the best position to replicate his work. If his work can be replicated and confirmed by Google, that will be world-changing.


    If others fail to replicate, it can always be attributed to things outside Mizuno's control, and what is needed is undeniable proof of Mizuno's claim. He is in the best position to provide such undeniable proof.


    I'm not an LENR researcher.

    Okay, you pay lip service to the idea that they should try to replicate. But if they were to listen to you, or take your critique seriously, they wouldn't bother. You said the results "could more easily be the result of some artifact." Why would anyone bother to replicate a result that could "easily" or "more easily" be an artifact? They would think: "Surely, after 30 years, it must have been replicated? It can't be worth much if it still might easily be an artifact."


    You are both personalising and subjectivising the issue here.


    Let us avoid personalisation, and view your comment as directed to anyone who thinks given all the context that these results are likely not to be replicable.


    Why would that deter the Google guys? Their view, as I understand it, is similar to mine. This is an old controversy that has not gone away, therefore let us try to gain traction on it with new work, unbiased by past events. That means taking the most likely experiments and replicating them, using better instrumentation.


    They have done this. And, undeterred by failure thus far to find LENR, they'd like to do more of it. They will have their view of how likely it is to find LENR, and I can't see that being changed by anything you or I say. They make the valid point that even if the chances of LENR existing are very small, checking this is worthwhile on an overall cost/benefit basis, because the upside if it is found is so large. That applies to all LENR research, and I don't understand why believing LENR is most likely real should be a qualification for doing it.
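
    The cost/benefit point can be made concrete as a back-of-envelope expected-value calculation. Every number below is a made-up placeholder, not an estimate of anything:

```python
# Back-of-envelope version of the cost/benefit argument: even a tiny
# probability of a huge payoff can justify a modest research spend.
# Every number here is an invented placeholder, not an estimate.
p_real = 0.001   # hypothetical probability that LENR is real
upside = 1e12    # hypothetical value (USD) if real and confirmed
cost = 1e7       # hypothetical cost of a serious replication program

expected_gain = p_real * upside - cost
print(expected_gain)  # positive despite the very small probability
```

    The real numbers are of course unknowable; the structure of the argument is what matters.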


    Were I them, I'd enjoy doing this work because I like mysteries that can be resolved - this looks like one of those.


    Maybe you think scientists go into experiments and only find what they expect. In that case you have immediately invalidated most LENR work. My point is more subtle. In order to get results that the world generally will take seriously they need to conduct the experiment as though positive results are very surprising and unexpected. They need to think of every possible artifactual reason for those results. They need to recheck everything. In that process they will either find artifacts and issues (as they did with the Ni-H experiments) or they will find anomalous results indicating LENR, or, possibly, they will find non-artifact anomalous results with some non-LENR unexpected explanation.


    So: you should hope that their work is informed by a similar mindset to mine. That way, if the results are real, they will be checked and presented to the world in the most impactful way.


    You should also consider that all scientists have subjective views, and that doing science is all about acknowledging that and doing your best to counteract it. Good scientists try very hard not to be influenced by prior views. You won't get "less good" surprising excess heat results from a Google team 99% convinced an experiment will not generate excess heat. In fact, exactly because they will be surprised, you will get stronger, more carefully checked results, delivered with more enthusiasm.


    So my suggestion:


    Redo D/Pd electrolysis experiments, checking all differences in methodology and materials with LENR experts who know what has been properly replicated and will work, trying to get closer to what LENR experts say will work. Publish this, commenting on the changes in methodology and materials, and the results. If they are positive, examine the changes in methodology carefully for possible introduction of artifacts, and also, as always, add instrumentation and checks to get best-integrity positives. If they are negative, publish what was negative and why to the LENR experts, so that a better understanding of these experiments can be obtained.


    THH

  • Dr Michael Staker has already replicated F&P work thoroughly..strongly.. 2018


    No, he didn't replicate F&P's boil-off experiment. As far as I know, this experiment was properly and successfully replicated in the '90s only by Lonchampt in France and by the NHE people in Japan.


    Quote

    perhaps Google X could replicate Dr Staker... after doing R20


    Team Google will decide on the experiment(s) they want to replicate. Here, we were only invited to present our suggestions and provide the relevant reasons.


    Quote

    However R20 is much simpler to replicate.


    I don't know how simple it is to replicate R20. Certainly, it is not simpler than the F&P boil-off experiment and, most importantly, it doesn't offer the same possibility of indisputably ascertaining that the identical behavior of the original test has been achieved, a unique feature of the 1992 boil-off experiment, thanks to the complete transparency of its configuration and the availability of the original videos.


    In fact, the main problem for a replicator is not to replicate a CF experiment, but to demonstrate that it was indeed able to replicate it. This is necessary to refute all the objections that will inevitably be raised after the publication of results, as happened to Team Google after publishing in Nature last May.


    We should consider that a CF/LENR experiment can be subdivided into 3 main steps:


    1st – the setting-up of the experimental apparatus, including the specimen and the instrumentation;


    2nd – the running of the experiment, including the acquisition of all the experimental information: measuring data, videos, etc.;


    3rd – the interpretation of the experimental evidence, including the energy balance and the conclusive claims.


    Any attempt to replicate a CF experiment is subject to criticism on any one of the above steps. The criticisms could come from the CF believers, in the case of attempts which failed to demonstrate the production of excess heat, or from the opposite camp, in the case where the production of excess heat is claimed. For example, the lack of excess heat reported in the recent Google articles in Nature was attributed by many CF supporters to an insufficient level of H loading, a feature concerning the 2nd step. (BTW, it's strange to see that many of these people suggest replicating R20, despite the paper reporting its results stating (1): "The results in Table 1 suggest that high permeability is necessary for excess heat, but high loading is not. On the contrary, high loading apparently reduces excess heat.")


    Therefore, any experiment that prevents Team Google from demonstrating without any doubt that they have succeeded in reproducing the exact behavior of the replicated test will leave the CF issue unresolved. It will be a waste of time and resources. The only experiment that will allow them to demonstrate - at least for the first 2 steps - the success of their attempt is the "1992 boil-off experiment". If they decide to replicate this experiment, only the 3rd and final step (the interpretation of the experimental data) will be subject to possible diatribes, since any other objection concerning the first 2 steps can easily be rejected by comparing Google's experimental evidence with that originally documented by F&P, in particular by comparing the respective video recordings.


    So, in conclusion, the F&P boil-off experiment is the only one which allows us to wipe out two thirds of the possible sources of conflict between supporters and critics of LENR and to greatly simplify the debate between the CF researchers and the mainstream scientists.


    (1) http://lenr-canr.org/acrobat/MizunoTincreasede.pdf
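
    For what it's worth, the energy balance at issue in the 3rd step of a boil-off run can be sketched very simply: compare the electrical energy supplied with the heat needed to vaporize the measured mass of water. This is a deliberately simplified illustration that ignores sensible heating; all the numbers are invented:

```python
# Simplified energy balance for a boil-off experiment: heat accounted
# for by vaporizing water vs electrical energy supplied. Sensible
# heating of the water is ignored here; all numbers are illustrative.
L_VAP = 2260e3  # latent heat of vaporization of water, J/kg (approximate)

def excess_energy(mass_vaporized_kg, input_energy_j, loss_j=0.0):
    """Energy out (vaporization plus measured losses) minus electrical energy in."""
    return mass_vaporized_kg * L_VAP + loss_j - input_energy_j

# e.g. 0.1 kg boiled off on 200 kJ of electrical input
print(excess_energy(0.1, 200e3))  # 226 kJ out vs 200 kJ in -> 26 kJ apparent excess
```

    The disputes in the 3rd step are precisely over terms like the loss estimate and how much of the expelled mass was actually vapor rather than entrained liquid.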

  • I have a lapsed background in EE and am not a scientist.

    However, to be prudent, I would not choose Mizuno's R20 experiment as the first candidate for the Google team, for at least 3 reasons:


    1) R20 needs to be successfully replicated to be sure it is a real candidate for replication!

    Even if I believe it works, this has no bearing on the choice to advise.

    The situation would be totally different if, after, say, a few successful replications, the LENR community was still desperate to publicize these results in the mainstream media and/or academia.


    2) As some have already said in this thread, it is better to start with basics.

    R20 is definitely not one (see #1).

    Ascoli65 proposes the 1992 F&P experiment and says it has already been replicated: maybe it is a good candidate?

    Whatever the experiment, if Google helps publicize a replication, the impact will be absolutely huge.


    3) Everybody on this forum seems to believe that Google will succeed in replicating the best choice made by this forum's members.

    However, the Google team's track record is not really good in this matter, as they failed to replicate some known-to-be-replicable experiments.

    Do you really want a second article in Nature describing another failed CF experiment?

    Can you imagine the impact?

    Would that mean the end of all further attempts in the LENR field, particularly in terms of investment?

    If Google fails again to replicate a known-to-be-replicable experiment, that would probably mean they are not good at that...

  • same BS again.


    Anyone trying the F&P Pd/D system MUST set up 10 or 20 parallel cells and hope one or more of them show signs of heat bursts, according to the F&P seminal paper of 1990.


    The cells that show signs of active LENR, i.e. heat bursts, may then be used to test their 1992 hypothesis of larger excess heat at higher temperatures.


    But hey, they do not need to, because their first test according to 1990 proved the LENR phenomenon.

  • Quote

    Well, stranger things have happened. My father was a very skilled cabinetmaker and was once asked to advise an orthopaedic surgeon on improving the standard ways of fixing metal plates to bone.

    That makes perfect sense. Cabinetmakers are experts on how to best use screws and glues, which is how plates are affixed to bone. It's important to point out that you still would not want the cabinetmaker to perform surgery, even if he or she were of value at the surgeon's side as a consultant.

    1) R20 needs to be successfully replicated to be sure it is a real candidate for replication!

    Even if I believe it works, this has no bearing on the choice to advise.

    The situation would be totally different if, after say a few successful replications, the LENR community was still desperate to publicize these results in the mainstream media and/or academia.


    It has to be successfully replicated to be sure it is a candidate for replication? That makes no sense. If it is successfully replicated, it won't need to be replicated any more. It should be improved after that.


    The point of doing research and doing replications is to find out what is true. It is to explore the unknown. If you only replicate that which has already been replicated, you take no risks, you contribute little or nothing. If the people at Google only plan to replicate experiments we already know work, such as the original F&P approach, they will not contribute much of anything. That would take years and cost a lot of money. Why bother?


    This reminds me of something Ikegami told me years ago. When cold fusion was discovered, some younger scientists came to him and said, "That's neat! When you figure out how it works, let us know, and we will do the experiment." As I see it, they thought themselves artists when all they did was coloring books.


    People who want to do safe research which will likely pay back, and people who want to be told what experiment will work, should not do cold fusion. Anyone who imagines there is some expert, or a group of experts who can advise what is the best cold fusion experiment is not paying attention and does not understand groundbreaking research. If we knew what experiment works best, we would do it. We wouldn't need Google. If they want to play a role at this stage, they must decide for themselves what is best, and they must be prepared to fail. Them's the breaks.