Posts by orsova

    You are correct to note that at the time, Shultz's grandson was not a scientist.

    I mentioned scientists.

    Ergo, I was not talking about Shultz's grandson.

    I was mostly thinking of Professor Phyllis Gardner.


    I recall that Theranos had Kissinger, Shultz and Mattis on their board. They had a who's who of prominent investors.

    It was the scientists who understood that it couldn't work and that Holmes was lying.

    Because they understood the science.

    I think it's worth highlighting the work of Imam, Miles & Nagel on Palladium-Boron cathodes, from the just-published proceedings of ICCF-21.

    I don't recall this work being discussed in this thread. Reading it, it looks like a high-quality candidate for TG's attention. There are two papers: one on the fabrication of the cathodes, and another on experimental results. It's interesting to note that Imam, Miles & Nagel characterise experiments with their Pd-B cathodes as one of two reliably reproducible approaches, the other being co-deposition. They observe that both approaches remove oxygen as a variable in the experiments.

    Certainly, if TG intends to continue their work exploring bulk Pd/D, this work suggests avenues for exploring not just loading, but also materials science challenges. I recall that Dr Storms (I hope I'm not putting words in his mouth) suggested that exploring these problems is one of the keys to moving forward.

    I thought it also of note that, according to Imam and Miles, Pd-B alloys also offer the possibility of better hydrogen purification membranes.
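    The membrane point can be illustrated concretely: hydrogen flux through a dense palladium membrane follows Sieverts'-law permeation and scales inversely with thickness, so a harder alloy that tolerates thinner foils passes more gas per gram of Pd. A minimal sketch; the permeability figure and pressures below are assumed, illustrative values, not numbers from the paper:

```python
# Sieverts'-law permeation through a dense Pd membrane:
#   J = (Phi / t) * (sqrt(p_feed) - sqrt(p_permeate))
# Flux scales as 1/t, so a harder alloy that tolerates half the
# thickness passes twice the hydrogen per unit area of membrane.
from math import sqrt

PHI = 1.6e-8  # mol m^-1 s^-1 Pa^-0.5 -- assumed, illustrative permeability

def flux(thickness_m, p_feed_pa=4e5, p_perm_pa=1e5):
    """Hydrogen flux (mol m^-2 s^-1) through a membrane of given thickness."""
    return PHI / thickness_m * (sqrt(p_feed_pa) - sqrt(p_perm_pa))

# A Pd-B foil half as thick as a pure-Pd one passes twice the flux:
print(flux(25e-6) / flux(50e-6))  # -> 2.0
```

    Under this scaling, the harder Pd-B alloy buys membrane capacity directly: the same separation duty needs half the palladium if the foil can be made half as thick without failing mechanically.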


    Based on almost 30 years of research, two sources of palladium materials yielding good reproducibility for generation of excess enthalpy effects have been identified: (1) palladium materials prepared by co-deposition method and (2) Pd–B alloys. A common feature for both these methods is that they yield palladium that is relatively free of oxygen as an impurity. A beneficial effect of the added boron is that it minimizes the activity of dissolved oxygen in the palladium by converting it to B2O3 during processing. The low density B2O3 floats to the surface and is removed during the molten phase of the Pd–B alloy preparation. Further, the creation of two FCC phases makes the material harder and less susceptible to cracking. That is attractive for some applications. In particular, it is the likely explanation for reproducible LENR energy generation.


    Appendix: Other Applications of Pd–B Alloys

    The alloys produced in this work show the same or better strength than pure palladium with much less thickness. This is advantageous for the creation of hydrogen purification membranes because less palladium would be needed to create a membrane and achieve the same results. That is, sturdy membranes of much less thickness are enabled compared to using palladium alone. Put another way, the increased hardness means that a much smaller amount of expensive palladium may be used to provide a membrane of the same capacity. This would allow much greater membrane capacity through reduced material costs. How much the thickness of the membrane would be able to be decreased with the present composition would depend upon such factors as the geometrical design, gases to be purified, and the extent of purification desired.

    The hardened Pd material would also be advantageous for use as electrodes in etching, polishing, electrochemical machining, semiconductor wafer manufacture and other electrochemical processes. Palladium cathodes hardened by alloying, as described in this paper, retain their superior electrical characteristics and resist erosion better than pure palladium.


    6. Discussion

    The major question is why do these NRL Pd–B cathodes produce the F–P excess heat effect while most other palladium materials do not? One possible answer is that the added boron removes oxygen from the palladium by forming B2O3 during the melting process. The less dense boron oxide then separates from the molten metal. Other clues for oxygen effects are the successful Johnson–Matthey materials specially produced under a blanket of cracked ammonia (N2+H2). The hydrogen removes oxygen from the metal during the melting process in the form of H2O vapor. These Johnson–Matthey cathodes also generally produced excess energy in F–P related electrochemical experiments [2,3,11]. A possible third clue is the electrochemical deposition of palladium and deuterium (co-deposition) from D2O + PdCl2 solutions which provides oxygen-free palladium and reproducible excess power effects (if done correctly) [12].

    Another possibly important factor for Pd–B cathodes is that the added boron produces a material of much greater mechanical strength than pure palladium [1,6]. There is very little volumetric expansion when Pd–B cathodes are loaded with deuterium. This suggests that Pd–B materials are less likely to crack during the loading process. Another feature is that these Pd–B materials load similarly to palladium cathodes, but the escape of deuterium (de-loading) when the current is removed is at least ten times slower than for pure palladium cathodes based on gravimetric studies. A possible explanation for such large differences in the rate of deuterium loading and de-loading for these Pd–B materials is that Pd–B may load electrochemically across the grains, but when the electrochemical current is removed, most of the deuterium escapes along grain boundaries which may be clogged with the boron atoms. With no applied current, there is no electrochemical potential to drive deuterium into other grains. When the cell current is first turned off for pure palladium cathodes, the escape of deuterium gas is much too rapid to be explained by the simple diffusion of deuterium from palladium grains at the electrode surface. It seems likely that the Pd–B materials are somehow much more restrictive than pure palladium cathodes in allowing deuterium to escape via the grain boundaries.
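    A rough order-of-magnitude sketch of the de-loading point above: the characteristic time for deuterium to leave a grain by bulk diffusion scales as L²/D. The diffusivity below is an assumed, near-room-temperature order-of-magnitude value for hydrogen isotopes in palladium, used for illustration only:

```python
# Characteristic bulk-diffusion time tau ~ L^2 / D for deuterium
# escaping a palladium grain or cathode of size L.
D_PD = 1e-11  # m^2/s -- assumed order-of-magnitude D diffusivity in Pd near room temp

def diffusion_time_s(length_m, diffusivity=D_PD):
    """Rough timescale for deuterium to diffuse a distance length_m."""
    return length_m ** 2 / diffusivity

print(diffusion_time_s(10e-6))  # ~10 s for a 10-micron grain
print(diffusion_time_s(1e-3))   # ~1e5 s (over a day) across a 1 mm cathode
```

    On these numbers, rapid de-loading from a pure Pd cathode when the current stops cannot be bulk diffusion from deep within the material; fast grain-boundary paths are needed, which is consistent with the paper's picture of boron clogging those paths in Pd–B.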

    I think it's also notable that this Pd-B work forms the basis of LEAP (see ICCF-22 outline below). Given that their approach seems to be quite similar to TG's, perhaps an alliance between TG and LEAP could be beneficial to both parties. This doesn't seem like such a stretch, given that Carl Page is involved with LEAP.

    The SPAWAR stuff is strong when read alone, but it is also the type of evidence that I am suspicious of. It relies on experts claiming that the only explanation for something is one particular thing, when, even if that is all a given expert can conceive of, other possibilities remain. The cross-checks never panned out, or were never done: if alphas really were emitted from an experiment, they would be detectable in multiple other ways, and this was not found. Comparing the SPAWAR evidence with the Earthtech investigation, I did not find alphas a convincing explanation for those pits.


    But it wasn't just pits on CR-39. It was also heat, x-rays, tritium and transmutation.

    If well-motivated, well-documented, effortful but negative attempted replications are dismissed, then we have a collection of beliefs that can never be changed, and a true cult.

    It's precisely because I don't think it should be dismissed that I posted it. Rather, I think it should be examined closely. As I said, I have reservations about their work. Was the Coolescence work done at a high standard and documented to your satisfaction? Are you satisfied that the people who did the work were qualified to do so? I don't have solid answers to these questions, but I was hoping to start a conversation.

    To me, the SPAWAR work is the most compelling body of work that suggests LENR is real. Consequently, imo, any failure to replicate the work should be looked at closely. As I said, this talk was just published, so I wanted to bring it to the attention of the forum for the above reasons.

    This was my first post on the thread, which I started. Read Matt's question after the point where I say "it would be helpful to know:".

    For what it's worth, I would want a demonstrated fluency with the relevant literature, consultation/critique with/from the appropriate people within the field, and a demonstration that they had worked with the experiment in question for long enough to, for the lack of a better turn of phrase, really get a feel for its ins and outs. My biggest worry with anybody trying to replicate is that they do an experiment(s), get no result, and then briskly move on to something else.

    Whatever experiment TG chooses, I think a demonstrated history of replicability and access to those who have done the work previously is probably vital to success; and I would want to see those two things reflected in the choice of experiment and in the published results.

    For what it's worth, here is how I see the result:

    1) There was a thread of consensus around bulk Pd/D being worth further investigation and a productive conversation about the challenges of this approach. Materials science and loading issues were reflected upon.

    2) There was a thread of consensus around the SPAWAR work being worth further investigation.

    3) There was a strong belief that Mizuno's results should be considered a priority, though they were ruled out for the purposes of the discussion.

    4) There was a consensus that collaboration and interaction with the scientists working in the field would strengthen TG's effort. There was a suggestion of something akin to a private get together being organised.

    5) There was reflection on TG's approach thus far.

    6) There were a variety of other experiments mentioned, and articles of note that were raised as being worth attention, though no consensus was reached on these.

    Though we didn't get a supermajority behind any single experiment, I do think we basically got our three.

    I give him a lot of credit for what he did. Hopefully everyone does.

    Amen. This format has much to commend it. It gives everybody time to collect their thoughts and build structured contributions.

    I concur that this has been a valuable and constructive exercise. It has certainly not been a failure.

    Hats off to the moderation team for their organising of this dialogue.

    Perhaps it would be useful to circle back to the beginning and re-examine the original question. I can't help but feel that too little effort was given to thinking about the question: the way it was worded, how to define priorities, how to agree on criteria, and how to answer it succinctly and satisfactorily. I do not mean to criticise the moderation team; they gave us the task. To my reading, the question did not ask for a 100% probability of success, nor did it ask us to handicap probabilities of success. It asked for the highest priority. That is a different question, and one the community should be able to answer, but has not. It is a question of judgement, rather than of the tractability of experiments.

    I was thinking about this last night, and decided to carefully re-read TG's Nature article. My recollection of how LF reacted to the article is that it was regarded (in some quarters) with a mixture of cynicism, doubt and exasperation. I found the article to be intelligent, well structured, well written, subtle and written with both eyes on a larger strategy and a longer game. To me, it is deeply reassuring and speaks to TG's professionalism and commitment.

    Earlier in the thread, I said the following:


    To circle back, the original question was "What is the highest priority experiment the LENR community wants to see conducted?"

    I would submit that to answer this question, another question has to be asked:

    "What is the highest priority for the LENR community?"

    In TG's Nature article, they write:

    So, what is the highest priority for the LENR community? Perhaps it is finding the best candidate for development into this reference experiment?

    Here is the definition of a reference experiment that I will have in mind from here on out:


    In one version of the concept, a reference experiment was described as being an experiment of high data density that does more than address a single narrow hypothesis and is proposed by an interactive and extensive team of scientists to collect data that would be suitable for later analysis by multiple groups of investigators. In such an experiment, the resulting data would have the benefit of multiple PI inputs into design and implementation, with the intent of creating large data sets with wider-ranging applicability and extensive post-flight usage than a single hypothesis experiment.

    Source: National Academies of Sciences, Engineering and Medicine 2018. A Midterm Assessment of Implementation of the Decadal Survey on Life and Physical Sciences Research at NASA. The National Academies Press.

    Perhaps the answer to TG's original question “What is the highest priority experiment the LENR community wants to see conducted?” is “the experiment that offers the best chance of being developed into TG's reference experiment.” That is, after all, their ultimate goal and the endgame that lifts the dead hand of skepticism from the field.

    This is a slightly different way to answer the question, and shifts the focus away from the weighing of and worrying about probabilities of success and towards a longer view of TG's goals. Answering the question this way cuts the Gordian knot that is the problem that nothing can be recommended with complete confidence.

    Which experiment is the best candidate to be developed by TG into a reference experiment?

    I think we can disqualify bulk Pd/D. The materials science difficulties and the high loading required mean that, whilst TG might succeed in replication, it is unlikely they will master the experiment to the degree needed to develop it into a reliable reference experiment. There have been 30 years of effort which, in my opinion, prove the phenomenon conclusively, yet the experiment has not been mastered.

    I do not think Takahashi's work is a good candidate either. The researchers are, as of now, secretive, and their work rests on proprietary materials. Unless they open it up completely, it cannot be a widely shared reference experiment. If it requires convoluted manufacturing processes, that is a further black mark against its capacity to qualify as a candidate for our purposes. I would think, though I do not know, that you would want an experiment that can be done 'off the shelf', so to speak.

    I submit that, for the moment, it should be put into the same limbo as Mizuno's work. For TG to follow up on, as and when they deem it appropriate, but understood as not a candidate for answering this specific question.

    Of the remaining experiments, SPAWAR seems head and shoulders above the rest.

    It requires no proprietary materials and has no significant materials science issues. It neatly sidesteps the issues of loading bulk palladium. It is a fast, table-top experiment. Because it is fast, it can be iterated quickly, increasing the chances of success and allowing the parameter space to be explored efficiently. Shane stressed that the speed of the experiment should not be a concern, but if it is to be developed into a reference experiment, the flexibility that speed provides is a positive.

    It has been replicated in a good number of places, including at NASA and SRI. I think it's of note that the co-deposition protocol has been developed to the point that it can be done by students (who have successfully found energetic particles).

    Dr Storms' counsel that the experiment is difficult, and his failure to replicate should be taken seriously, but must be weighed against the history of successful replications. It is also notable that Miles had difficulties early on. However, Szpak said that if you do the experiment correctly, it is “100%” replicable. Perhaps that should be taken with a grain of salt, but it speaks to their confidence in their work.

    In the interest of completeness, I wanted to 'read into the record' the following, posted earlier:

    P.A. Mosier-Boss, L.P. Forsley. 2019. Nuclear Reactions in Condensed Matter: Synopsis of Refereed Publications on Condensed Matter Nuclear Reactions. V2.

    Regarding Coolescence, I think that their work archive has to be scrutinised closely and carefully before any judgement is made about the quality of their efforts. Was their entire team the one listed on their website? How much electrochemistry experience did they have? Is it notable that David Knies & Richard Hamm did not join until 2014, after the co-dep work was done? Who did the co-dep work? Was it Cantwell? I think there are reasonable questions here.

    Consequently, I do not think it is fair to characterise the co-deposition protocol as:


    heat anomaly is said weak and previous efforts were not successful.

    because Coolescence and Dr Storms failed to replicate.

    Issues of replication aside, SPAWAR did a large number of experiments over a long period. They published extensively. They claim that their protocol is the best documented in the field.

    It's worth noting that SPAWAR say, explicitly, that they moved away from studying excess heat because people would dismiss their calorimetry. Instead, they used CR-39 detectors, among other things, to gather evidence of nuclear products being thrown off by the experiments.

    I think this was a smart approach. They seemingly had a good deal of success using this evidence to convince others.

    They got two positive write-ups in New Scientist and were written up in The Economist. There was apparently also an NPR feature, but I cannot find it. Their work was covered by FOX. Dwight Williams, then senior science advisor at the US Department of Energy, was publicly enthusiastic about their work when interviewed.

    I would think that the fact that the experiment generates a number of phenomena (neutrons, protons, other charged particles, x-rays, transmutations, tritium, excess heat) is a plus in a reference experiment. Many avenues of approach. Many paths to success.

    It is worth noting, as an aside, that a focus on excess heat may be a little myopic. Whilst we can all agree that excess heat is likely the most useful product of these experiments for society, the path to scientific respectability and the path to useful products for humanity are not necessarily the same.

    The volume of material published by SPAWAR on their experiments is a major advantage. TG can pick up the work already done, and assuming they successfully replicate it, they can then, perhaps relatively quickly, begin exploring the parameter space and optimising it into their reference experiment.

    Mosier-Boss and her colleagues are likely available to advise and assist TG with their work. This cannot be said confidently of Takahashi.

    It is also perhaps worth suggesting that not all difficulties are equal. Some are more tractable, given time, patience, resources and skill, than others. Whilst no LENR experiment is easy, I would submit that the difficulties of the co-deposition protocol give every indication of being tractable in ways that bulk Pd/D simply is not. Said differently, are the difficulties more matters of experience and skill than insurmountable problems, like the unknown variability of palladium cathodes?

    Another - long term - benefit of introducing the SPAWAR co-deposition protocol to a wider audience is that it introduces scientists to the fact that Pd/D co-deposition can be used to fission uranium and thorium. This opens up new possibilities in reactor design, and the remediation of nuclear waste. Whilst beyond the remit of TG's stated aims, the potential development of these possibilities is a long term bonus, and would be a side benefit of introducing the SPAWAR work to a broader scientific community. I note that TG have gone to great pains to justify their LENR work by stressing the way it can benefit other areas of science.

    So, to conclude, how would I answer TG's question?

    “What is the highest priority experiment the LENR community wants to see conducted?”

    “The highest priority of the LENR community is to assist Team Google with their task of finding a reference experiment that can serve as a tool to assert the underlying reality of LENR. It is our belief that SPAWAR's co-deposition protocol offers the highest chance of being successfully developed into a reference experiment that can be introduced to a larger scientific community. It is a protocol that has a credible history of replication, is exhaustively documented, provides a number of phenomena for study and suffers from no serious materials science challenges. Whilst it is a difficult experiment, with some failures to replicate on record, the difficulty inherent in the experiment is likely to yield to patience and expertise in ways that some other experiments may not. In addition, the SPAWAR scientists are likely able to provide their extensive experience and advice as the work progresses. There are other high quality, well documented experiments, but none have the unique set of characteristics that qualify them as a high quality candidate for development into a reference experiment. Each seemingly suffers from some disqualifying characteristic.”

    Earlier in the thread I argued that Google's imprimatur matters tremendously, and that this is a unique opportunity. I have not changed my opinion. Rereading the Nature article, TG's strategy becomes clearer to me. Perhaps this was already obvious to others. They do not wish to simply replicate and publish, as I, and perhaps others, had assumed. They wish to find an experiment that can be developed into a reference experiment that yields a great deal of data that can be studied by a large group of scientists from multiple institutions. Something that they can study inside out and then present to the world.

    This kind of effort is only possible because of Google's funding and reputational top cover. This kind of effort is perhaps the only kind that would have the weight to properly shift the scientific consensus reasonably quickly. I say again, this is a unique opportunity. Now is the time to be rigorous, disciplined and pragmatic. Now is not the time or place to explore quixotic or eccentric suggestions that do not fit the bill, or that propose to prosecute a narrow hypothesis.

    Now is the time to put our heads together to help Google parse the literature for the best candidate for development into their reference experiment. TG have stated that this is their longer-term goal, and it deserves careful thought from all who are here, reading and participating.

    Time is almost gone and no real consensus has been reached.


    Do you agree with the way I have framed my discussion and the way I have refocused TG's question on the search for a reference experiment? If not, why? How do you understand their original question? How do you think about 'highest priority'?

    If you agree with my reframing of the question, do you agree with my disqualifying of bulk Pd/D and Takahashi? If not, why? Why is your preferred experiment a credible candidate for development into a reference experiment? How do you propose to obviate any stubbornly difficult elements of the experiment? How does it perform on the characteristics identified of the co-deposition protocol above? Are there other characteristics, not identified above, that are pertinent and that recommend another experiment?

    Do you agree with my disqualifying the other experiments (Kirkinskii, Celani etc) suggested in this thread? If not, why? Why would your preferred experiment make a good candidate for a reference experiment? How does it perform on the characteristics identified above in the discussion of the co-deposition protocol? Are there other characteristics, not identified above, that are pertinent and that recommend another experiment?

    Do you agree with my general characterisation of the experiments, their features, documentation, histories of replication etc? Have I made mistakes? Have I misunderstood the science or technical features of an experiment? Have I gotten facts wrong or made other mistakes? Have I cast aspersions where they are not warranted?

    Have I misunderstood what TG mean by 'reference experiment'?

    Do you disagree with the characteristics I have identified as being attractive in a candidate for development into a reference experiment? Why? What would you propose in their place?

    Please do not take issue with my opinion of TG's Nature article here. If you would like to do so, please revive the original Nature article thread and I will consider engaging there.


    As I have said before, I am not a scientist. I do not know what I do not know. What I do know is a drop in the ocean. Though I am being assertive, and perhaps strident, please do not misunderstand this as supreme confidence in my position. Instead, I am trying to offer ideas and provoke debate, and do not think that I can do that as effectively if I am constantly back filling, equivocating and hedging as I go. I offer you an imperfect argument, to be modified, built on or demolished. I welcome and hope for rigorous criticism of my thoughts, suggestions and level of understanding. If it helps us over the line, it is all to the good.


    P.A. Mosier-Boss, L.P. Forsley. 2019. Synopsis of Refereed Publications on Condensed Matter Nuclear Reactions (v2.0). (…tions_in_Condensed_Matter)


    The NASA Glenn Research Center replicated the co-deposition protocol. The Naval Surface Warfare Center, Dahlgren Division with JWK under NCRADA and with NASA and other agency funding, replicated the protocol, analyzed materials, and observed magnetic field effects and thermal responses.

    P.A Mosier-Boss, L.P Forsley. 2015. Synopsis of Refereed Publications on Condensed Matter Nuclear Reactions. (…efereed_LENR_Publications)


    Most important, the co-deposition protocol discussed in many of these papers shows independent reproducibility and replication across multiple laboratories in four countries negating two primary criticisms of Condensed Matter Nuclear Science (CMNS): irreproducibility and lack of independent replication.


    We have sought to identify, characterize and elucidate the underlying mechanisms. Ours has been a collaborative effort with colleagues around the globe. To date, the SSC-Pacific/JWK team and colleagues have published 49 refereed papers in 14 journals and book chapters, spanning 25 years. Our colleagues include 46 authors and co-authors from ten countries representing 34 institutions. We have given more than three times as many conference talks and briefings. This is a well-represented, international effort.

    Several researchers have independently replicated our Pd/D co-deposition protocol, like Dr. Fran Tanzella et al, Dr. Kew-Ho Lee, et al and Pierre Carbonnelle; or modified it, including Dennis Letts and Dr. Mel Miles or, like Dr. Mitchell Swartz, independently developed their own. Drs. Peter Hagelstein and Dennis Cravens with Dennis Letts used co-deposition to create the gold-coated palladium structures they successfully laser irradiated. Drs. K. Sinha and A. Meulenberg dissected the mechanisms. Twelve of the papers are co-deposition replications, including researchers in the US, Belgium, Japan and South Korea.


    32. D. Letts, “Codeposition Methods: A Search for Enabling Factors”, J. Condensed Matter Nucl. Sci. 4(2011) 81-92.

    This paper is a preliminary report on results obtained from a series of experiments conducted April–September 2009. The experiments were designed to test for excess power using the basic methods disclosed in 1991 by Szpak, Mossier-Boss and Smith. A large and repeatable excess power signal was observed and the efforts to test mundane explanations for the signal are described. The design, fabrication and calibration methods of a new type of Seebeck calorimeter used for these experiments are also disclosed.


    37. Letts, D. and Hagelstein, P., “Modified Szpak Protocol for Excess Heat”, J. Condensed Matter Nucl. Sci. 6 (2012) 44-54.

    In recent theoretical work, vacancies in PdD have been shown to be able to host molecular D2, which is conjectured to be necessary for excess heat in Fleischmann–Pons experiments. Vacancies in the original Fleischmann–Pons experiment are proposed to be created through inadvertent codeposition at high loading. This suggests that a better approach should be to focus on experiments in which Pd codeposition is controlled, such as in the Szpak experiment. Unfortunately, the Szpak experiment has proven difficult to replicate, and we conjecture that this is due to low D/Pd loading. A modified protocol has been tested in which codeposition is carried out at higher current density with a lower PdCl2 concentration. Positive results have been obtained in all of the tests done with this protocol so far.


    39. M. H. Miles, “Investigations of Possible Shuttle Reactions in Co-deposition Systems”, J. Condensed Matter Nucl. Sci. 8 (2012) 12–22

    Experiments in the 0.025 M PdCl2 + 0.15 M ND4Cl + 0.15 M ND2OD/D2O co-deposition system produced anomalous excess power in three out of three prior experiments in Japan. Completely new experiments have produced even larger excess power effects for this deuterated co-deposition system. The largest excess power effect in D2O produced 1.7 W or about 13 W/g of palladium (160 W/cm3). These large excess power effects were absent in extensive studies of H2O controls. Excess power was also absent in various experiments involving the co-deposition of ruthenium (Ru), rhenium (Re), and nickel (Ni) in both H2O and D2O ammonia solutions. The statistical analysis of all 18 co-deposition experiments yields a probability of greater than 99.9989% that the co-deposition excess power effect requires both palladium metal and D2O. Shuttle reactions have been proposed to explain the reproducible excess power effect in this ammonia co-deposition system. However, various electrochemical studies show no evidence for any shuttle reactions in this ammonia system. Nevertheless, the initial chemistry for the Pd system is complex leading to large pH changes, chlorine (Cl2) evolution, and the formation of nitrogen trichloride (NCl3) during the first few days. However, the large excess power effects are observed later in the experiments after this chemistry is completed. A better understanding of the chemistry should be helpful in the reproduction of anomalous excess power in co-deposition systems.
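    As a quick sanity check, the headline figures in Miles's abstract hang together arithmetically, assuming the standard handbook density of bulk palladium for the deposit (the deposit mass here is inferred from the quoted figures, not stated in the abstract):

```python
# Cross-check of the quoted figures: 1.7 W, ~13 W/g, ~160 W/cm^3.
PD_DENSITY = 12.02  # g/cm^3 -- handbook density of bulk Pd, assumed for the deposit

excess_power_w = 1.7
specific_power_w_per_g = 13.0

deposit_mass_g = excess_power_w / specific_power_w_per_g  # implied Pd mass
power_density_w_per_cm3 = specific_power_w_per_g * PD_DENSITY

print(round(deposit_mass_g, 2), round(power_density_w_per_cm3))  # -> 0.13 156
```

    156 W/cm³ is consistent with the abstract's rounded 160 W/cm³; the implied ~0.13 g of deposited palladium is an inference from the quoted numbers, not a figure from the paper.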


    42. K. Lee, H. Jang and S. Kim, “A Change of Tritium Content in D2O Solutions during Pd/D Co- deposition”, J. Condensed Matter Nucl. Sci. 13 (2014) 294–298.

    In this study electrochemical co-deposition of Pd/D on nickel electrodes was performed to determine whether a nuclear fusion reaction occurs in the palladium deposit. Co-deposition was performed with a palladium salt/D2O solution. The content of tritium in D2O solution was varied depending on the electrolysis procedure during co-deposition. A comparison between the co-deposition of Pd/D and the simple electrolysis of D2O was performed to investigate the change of tritium concentration in the D2O solution.


    43. M. H. Miles, “Co-deposition of Palladium and other Transition Metals in H2O and D2O Solutions”,J. Condensed Matter Nucl. Sci. 13 (2014) 401-410.

    The co-deposition of palladium, ruthenium, rhenium, nickel, and iridium were investigated in H2O and D2O ammonia systems (NH4Cl/NH3). Significant amounts of excess power were observed only in the deuterated Pd/D2O system. There was no anomalous excess power observed for the co-deposition of ruthenium, rhenium or nickel in any H2O or D2O experiment.



    The Pd/D co-deposition technique, pioneered by SSC-Pacific, is a robust, reliable and reproducible means of generating LENR in the Pd lattice. Heat effects using Pd/D co-deposition have been reproduced by Miles10 as well as Cravens and Letts.10,56 Bockris et al. reproduced the tritium results.69 Besides SRI, the CR-39 results have been replicated by Dr. Winthrop Williams of the University of California, Berkeley; Dr. Ludwik Kowalski of Montclair State University; Mr. Pierre Carbonnelle of the Université catholique de Louvain; and three groups of undergraduates from UCSD as part of their senior projects.

    So - are you saying the D/Pd evidence that many here think is compelling is not replicable? For me, it looks a better bet than anything else...

    Respectfully, your argument is frustrating and incoherent.

    Jed and others have written about how hard the experiment is: the materials science concerns are real, and even if you solve them satisfactorily by finding a source of type A palladium, you still have the problem of searching through a number of samples for those that work. If you don't solve the materials science problem, you are doing no better than spinning the wheel. Then you have the difficulty of loading the palladium.

    It's, by all accounts, a fiendishly hard experiment. If it weren't, the world would have changed. It's not a matter of 'well, you say it's replicable, so why don't you want to put it forward? Either it's replicable or it's not.' That sort of two-dimensional logic is not helpful.

    For the LENR community NOT to insist on this best attested positive set of experiments being redone would mean they felt these D/Pd electrolysis experiments were in fact not real. I guess I don't mind that, but I'd like it to be transparently stated.

    This makes no sense. It can be the best documented experiment, and still be so difficult as to cause people to think seriously about putting forward other experiments.

    When somebody comes to you and says 'we would like the recipe for a cake, and if we fail to make the cake, the consequences will be terrible', you don't give them the recipe for a croquembouche. Instead, you go and find the family recipe for a flourless orange cake.

    The robustness of the experiment is key.

    Why is bulk Pd/D a better bet than co-deposition, for example? The SPAWAR work is ostensibly straightforward and replicable. To my understanding, there are no significant materials science problems. There is no difficult loading of bulk palladium. The experiment throws off a range of nuclear products. There are many avenues to success. It is not entirely dependent on calorimetry.

    Granted, no experiment is 'gliding across the freshly waxed floor in your socks'. Some are more robust than others though, right?

    Do you think TG is less likely to succeed with co-deposition than bulk Pd/D? Why?

    You need to talk about the experiments that have been submitted to this thread in relation to each other. Weigh them along all the possible axes you can think of. You need to handicap them the way horse handicappers handicap horses. Don't spin up an argument that is an expanding tree of propositions focused on a single experiment. Weigh the experiments against each other.

    What are their strengths and weaknesses?

    Again, with respect, I just don't understand how you can arrive, confidently, at the conclusion that bulk Pd/D is the best option without going through this process.

    Other questions: Why is bulk Pd/D preferable to the Fralick experiment? Or Celani's wires? or Kirkinskii?

    If bulk Pd/D is the best option, and it may well be, then a compelling argument in its favour would weigh it against the other experiments and explain why it is preferable. You haven't done that.

    I don't know the answer to TG's question. I'm not fluent with the experiments that have been suggested. I'm not a scientist. All I want is a rigorous discussion.

    Just to be upfront with you, this is now on a two-track path to getting us to our final 3 list, which we will recommend to TG. Alan and I started a team 8 days ago, under "Conversations", to manage this process a little better. Originally I thought it better to keep the effort secret, and asked those selected to keep it quiet. The reason for that is I wanted to keep the panel small (more nimble), and with so many members qualified... I did not want to hurt the feelings of those left out. Since this thread is becoming more productive, I reconsidered and decided it best you know.

    The selection committee is whittling down the list to the 3 best, using a matrix magicsound put together, which ranks the experiments by weighting several criteria on a scale of 1–5. While they are sorting that out, this thread can serve the dual purpose of providing them fresh ideas and new experiments to consider, and laying a foundational background for TG that may be of some use.

    It would be most helpful for any of the old guard here to pass along their thoughts, recommendations, how to increase the chance of success...anything really.

    Good stuff. Will the assembled team be preparing a formal document of some kind that summarises their thinking about the experiments? If so, I, and I'm sure others, would be interested to read it, or any other deliberative proceedings that can be shared.

    Well said. Epistemic humility is vital given how novel the experiment is.

    1. Takahashi's group

    2. Forsley/Mosier-Boss

    3. Staker

    4. Storms

    5. Celani

    6. Letts / Cravens

    7. Vitaly A. Kirkinskiy

    8. Fralick

    9. Lipinski

    Is this a fair list so far? Are there any reasonably simple criteria that can be used to disqualify experiments? The list needs to be winnowed and the experiments weighed against each other.

    Some thoughts:

    • What are the potential materials science difficulties with these experiments? Do they disqualify the experiment or can they be obviated?

    • Are the original researchers available and willing to advise on the experiment? If not, does this disqualify the experiment?

    • How well documented is the experiment?

    • What is the history of prior replications of the experiment?

    • How quickly can the experiment be done? Are faster experiments, which allow for more iteration, adjustment and variation, preferable?

    • Does the experiment just show excess heat, or other products too? How much excess heat is reasonably expected? Does an experiment that throws off other products, in addition to heat, offer more avenues to success for Google?