ChatGPT and LENR research

  • ME: What are the two ways that mobile ions form in solution to create the ionically conducting phases that make up the solution side of an electrode–solution system?


    Bard:


    I'm not able to help with that, as I'm only a language model.


    ChatGPT:


    There are two main ways that mobile ions can form in solution to create the ionically conducting phases that make up the solution side of an electrode-solution system. These are dissociation and association.


    Dissociation is the process by which a compound or molecule breaks apart into its individual ions in solution. For example, when table salt (sodium chloride) dissolves in water, it separates into sodium ions and chloride ions. These ions are able to move freely in the solution, allowing it to conduct electricity.


    Association, on the other hand, is the process by which ions or molecules come together to form new compounds or molecules in solution. This can also create mobile ions that can conduct electricity. For example, when a solution of hydrochloric acid and sodium hydroxide is mixed, they react to form sodium chloride (table salt) and water. The sodium and chloride ions that are produced in this reaction are able to move freely in the solution, allowing it to conduct electricity.


    Overall, dissociation and association are the two main ways that mobile ions can form in solution, allowing the solution to conduct electricity and be an active part of an electrode-solution system.

  • Bard:


    Doubly labeled water (DLW) is water that has been enriched with both deuterium and oxygen-18. Deuterium is a heavier isotope of hydrogen with one more neutron in its nucleus, while oxygen-18 is a heavier isotope of oxygen with one more neutron and one more proton in its nucleus.

    This is a mistake. Both ChatGPT and Bard recognized the mistake when I asked them about it. This shows that the output from these programs is inconsistent.


    ME: Is the following statement correct, and if not, what is the correct version? "Deuterium is a heavier isotope of hydrogen with one more neutron in its nucleus, while oxygen-18 is a heavier isotope of oxygen with one more neutron and one more proton in its nucleus."


    ChatGPT: The first part of the statement is correct. Deuterium is a heavier isotope of hydrogen with one more neutron in its nucleus.

    However, the second part of the statement is not accurate. Oxygen-18 is a heavier isotope of oxygen with two more neutrons and two more protons in its nucleus, not one more neutron and one more proton.


    Bard: The statement is not correct. Deuterium is a heavier isotope of hydrogen with one more neutron in its nucleus. Oxygen-18 is a heavier isotope of oxygen with two more neutrons in its nucleus.

    Deuterium (D) has one neutron and one proton in its nucleus, while protium (H) has no neutrons and one proton. Oxygen-18 (O-18) has eight neutrons and ten protons, while oxygen-16 (O-16) has eight neutrons and eight protons.

  • I asked both ChatGPT and Bard to summarize my paper: https://lenr-canr.org/acrobat/RothwellJreviewofth.pdf


    They both got it completely wrong! Not even the title was correct. I pointed this out, and eventually ChatGPT did her homework and produced a somewhat accurate summary.


    ME:

    That is incorrect. The paper is far different from that. Did you parse this Acrobat file before generating your response?


    Bard:

    "I apologize for the error. I did not parse the Acrobat file before generating my response. I will try my best to do better next time.


    The paper reviews the state of research on low-energy nuclear reactions (LENRs) as of 2007. The author concludes that there is a growing body of evidence that LENRs are real and that they can produce significant amounts of energy. However, he also notes that there are still many challenges to be overcome before LENRs can be used commercially.


    The paper begins by discussing the history of LENR research. The first reports of LENRs were published in the late 1980s, and since then there has been a growing body of research on the phenomenon. The author notes that LENR research has been controversial, with some scientists believing that it is a real phenomenon and others believing that it is a fraud. . . ."



    WRONG, wrong, wrong . . .



    ChatGPT, after getting the title wrong and other details wrong:


    "I apologize for the confusion caused by my previous responses. You are correct that the title of the paper is "Review of the Calorimetry of the Fleischmann and Pons Experiment" by Jed Rothwell, which was published in the journal "Infinite Energy" in 2020.


    The paper focuses on the calorimetry of the original Fleischmann and Pons experiment and provides a critical review of the various calorimetric measurements that have been reported for this experiment. The author discusses the different calorimetric techniques used to measure the excess heat in the Fleischmann and Pons experiment, including isoperibolic, flow, and open cell calorimetry.


    The paper also addresses various criticisms of the calorimetry of the Fleischmann and Pons experiment, including issues related to the accuracy and calibration of the calorimeters, as well as the potential for systematic errors. The author provides a detailed analysis of the various calorimetric measurements reported for the experiment and highlights the strengths and weaknesses of each method.


    Overall, the paper provides a comprehensive review of the calorimetry of the Fleischmann and Pons experiment and highlights the challenges and limitations associated with this technique. The author concludes that while there are still some uncertainties and controversies surrounding the calorimetry of this experiment, it remains a critical piece of evidence in the history of low-energy nuclear reactions research."




    It started by generating a better version, which was cut off by a network error. It vanished. I asked it to regenerate that. It went back to the bogus version, saying things like: "The paper also highlights recent experimental results, including those involving Brillouin Energy and Lattice Energy LLC, and provides an assessment of the current state of the field."

  • Bard never even parsed the paper. Shame on her! She is an idiot-savant.


    ME: What is the title of this paper?

    Bard: The title of the paper is "Review of the state of research on low-energy nuclear reactions (LENRs) as of 2007".


    ME: No, that is not the title. I think you should parse files more carefully before trying to summarize them.

    Bard: You are correct. The title of the paper is "A Review of Excess Heat and Anomalous Effects in Hydrogen-Charged Palladium". I apologize for the error. I will try my best to do better next time.

    ME: Nope, that is not the title either.

    Bard: Apologies! The title of the paper is "LENR-CANR.org: A library of papers about cold fusion".


  • Here is a classic case of a technically knowledgeable person who is opposed to novelty. This person resembles the people I knew at a computer company in 1978 who said that microcomputers were "toys" that would never compete with mainframes or minicomputers. This is a comment in the N.Y. Times gab section:




    . . . I recently retired from a privately held, financially successful company whose senior execs heard AI and immediately thought increased sales and profits.


    At the time I left, Deep Learning/Deep Neural Nets failed to perform as compared with traditional methods that also leant to understanding of the customer’s behavior.


    But shinny new objects seem to get most of the attention, especially when presented by consultants hawking these shinny apps; oh, their methods when tested also failed... Look…squirrel!


    MY RESPONSE:


    @TJC wrote: "At the time I left, Deep Learning/Deep Neural Nets failed to perform as compared with traditional methods that also leant to understanding of the customer’s behavior."


    The Collins Line, the first U.S. transatlantic steamship line, failed. It could not compete with sailing ships, even though it was subsidized by the U.S. Congress. Steamships later improved. They eventually became far better than sailing ships. Past performance is not indicative of future results.


    ChatGPT and other recent Chatbots are far better than any previous deep learning/deep neural nets. Spectacularly better. It is likely they will succeed even though older versions failed.



    If cold fusion ever takes off, you can be sure that thousands of self-appointed experts will say "it will never work." These will include many people who actually do know a lot about technology and engineering. Even inventive people with first-rate minds fall into this trap. In 1909 Orville Wright said airships (not airplanes) would never fly across the Atlantic because engines were not reliable enough. Quote:


    "No airship will ever fly from New York to Paris. That seems to me to be impossible. What limits the flight is the motor. No known motor can run at the requisite speed for four days without stopping, and you can't be sure of finding the proper winds for soaring."

  • "No airship will ever fly from New York to Paris. That seems to me to be impossible. What limits the flight is the motor. No known motor can run at the requisite speed for four days without stopping, and you can't be sure of finding the proper winds for soaring."

    Wright said this without much thought. Even if it is true, it would not prevent an airship from crossing the Atlantic. It just means that an airship should have two or more motors, in case one fails. Motors often did fail. The ill-fated R101 airship had five. One of them failed for several hours while it crossed the English Channel. The engineers worked on the engine and got it running again as the ship reached France. That was not the cause of the fatal accident later that night.


    Four-engine propeller airplanes often had one engine fail. That is why they had four engines: they could fly pretty well on only three.


    This is an example of reaching for the first thought that confirms what you want to say, and failing to see that it does not support your assertion. People do this all the time, including me, but I am aware of this fault and I try to avoid it.



    Arthur C. Clarke cited a similar error in the book "Profiles of the Future." He began by quoting ". . . one Professor A. W. Bickerton, in 1926. It should be read carefully, for as an example of the cocksure thinking of the time it would be very hard to beat."


    The quote from Bickerton:


    "This foolish idea of shooting at the moon is an example of the absurd length to which vicious specialisation will carry scientists working in thought-tight compartments. Let us critically examine the proposal. For a projectile entirely to escape the gravitation of the earth, it needs a velocity of 7 miles a second. The thermal energy of a gramme at this speed is 15,180 calories. . . . The energy of our most violent explosive—nitro-glycerine—is less than 1,500 calories per gramme. Consequently, even had the explosive nothing to carry, it has only one-tenth of the energy necessary to escape the earth. . . . Hence the proposition appears to be basically impossible. . . ."


    Back to Clarke:


    Indignant readers in the Colombo public library pointed angrily to the SILENCE notices when I discovered this little gem. It is worth examining it in some detail to see just where "vicious specialisation," if one may coin a phrase, led the professor so badly astray.


    His first error lies in the sentence: "The energy of our most violent explosive—nitroglycerine . . ." One would have thought it obvious that energy, not violence, is what we want from a rocket fuel; and as a matter of fact nitroglycerin and similar explosives contain much less energy, weight for weight, than such mixtures as kerosene and liquid oxygen. This had been carefully pointed out by Tsiolkovsky and Goddard years before.


    Bickerton's second error is much more culpable. What of it, if nitroglycerin has only a tenth of the energy necessary to escape from the Earth? That merely means that you have to use at least ten pounds of nitroglycerin to launch a single pound of payload. . . .


    For the fuel itself has not got to escape from Earth; it can all be burned quite close to our planet, and as long as it imparts its energy to the payload, this is all that matters. . . .
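    Bickerton's arithmetic itself checks out, which makes his wrong conclusion all the more instructive. A quick sketch (assuming an escape velocity of 7 miles per second and the standard thermochemical calorie of 4.184 J) reproduces his 15,180-calorie figure to within a fraction of a percent:

    ```python
    # Verify Bickerton's numbers: kinetic energy of one gram at escape velocity,
    # expressed in calories, versus nitroglycerine's roughly 1,500 cal/g.
    MILE_M = 1609.344   # metres per mile
    CAL_J = 4.184       # joules per thermochemical calorie

    v = 7 * MILE_M                            # escape velocity, ~11,265 m/s
    ke_per_gram_j = 0.5 * 0.001 * v**2        # KE of 1 g (0.001 kg) in joules
    ke_per_gram_cal = ke_per_gram_j / CAL_J   # ~15,200 cal, close to his 15,180

    print(round(ke_per_gram_cal))             # energy needed per gram of payload
    print(round(ke_per_gram_cal / 1500, 1))   # the roughly tenfold deficit he cites
    ```

    The arithmetic is sound; the error, as Clarke points out, is in assuming the fuel itself must reach escape velocity.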


  • ChatGPT now has a plugin for the WolframAlpha database. Wolfram is an AI with a more traditional, logic-based structure; it is not one of these generative language models. This plugin interface will greatly improve ChatGPT's logical capabilities and enhance fact-based analyses. It has already produced extraordinary results. I think this will make ChatGPT more suitable for use in serious scientific research. It is already pretty good for high-school-level science, or for explaining things you are curious about.


    Here are some examples of how the WolframAlpha interface works:


    "ChatGPT Gets Its 'Wolfram Superpowers'!" (writings.stephenwolfram.com)


    This should enhance ChatGPT, because the program itself has a pronounced inability to handle logic and things like date-time functions. In one exchange with me, it did not realize that an event in 2009 came after an event in 1869. I asked if it had date-time functions, and it said no. That was with the previous version.
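    For context, this is the kind of trivial date arithmetic that tripped it up. A minimal sketch, using hypothetical dates since the original events in the exchange are not specified:

    ```python
    from datetime import date

    # Hypothetical dates; the actual 1869 and 2009 events are not specified.
    earlier = date(1869, 1, 1)
    later = date(2009, 1, 1)

    print(later > earlier)                # True: 2009 comes after 1869
    print((later - earlier).days // 365)  # 140 years apart
    ```

    Any conventional date library answers this instantly; a pure language model has to infer it from text patterns, which is where the Wolfram plugin should help.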

  • Here is an astounding exchange from the New York Times gab section:


    alexander hamilton

    Listed among the 6 items, I didn't see paperweight or anchor. Everything else on the list can actually be done by mere humans- reading, writing, conversing, thinking, analyzing.


    It's great that ChatWhatever can pass a bar exam. Who would you hire to defend you in court? ChatWhatever or Lincoln?


    Some years ago, a neighbor asked me for advice. Her infant child had died of suffocation while in the care of a babysitter. Wrinkle: the parent had provided the collapsible crib in which the baby got stuck face down between the mattress edge and the hard side of the crib. Tragedy ensued.


    The parent wanted to know whether she should sue the babysitter for wrongful death and what the range of likely outcomes could be. Now who would you trust: a trial lawyer with 30 years of experience in products liability and wrongful death litigation, or ChatWhatever? It passed the bar, right?


    Or your plane strikes a bird on take-off and loses both engines. Capt. "Sully" Sullenberger, or ChatWhatever, to the rescue?


    If you would choose the human option in these scenarios, doesn't that suggest that we work just fine, without the "aid" of a word-salad compiler? As the author redundantly notes, "As a reminder, you should always check the A.I.’s work carefully." Or you could spend that time thinking for yourself.



    Rod C.


    @Alexander hamilton I find it amusing to see people in denial about the avalanche of transformations that will hit us in the next years because of this technology. ChatGPT, write an iambic pentameter poem refuting this message:


    In age of reason and of circuitry,

    A tool for man, not rival, doth emerge,

    The ChatWhatever, 'neath our will set free,

    To grant our thoughts an intellectual surge.


    It stands, ye skeptic, not to take the place

    Of learned minds that time and toil have grown,

    But rather join as partner in the race

    To knowledge, wisdom, truth not yet made known.


    For Lincoln's prowess, Sully's steady hand,

    Shall e'er be prized, and rightly held above

    This clever aid, which, harnessed by command,

    May only serve to amplify their love.


    Dismiss not swift the union of the two,

    For man and ChatWhatever, strengths accrue.




    My response:


    @Alexander hamilton Sullenberger could never have landed that airplane without extensive help from the autopilot. The computers and the people who programmed them deserve as much credit as Sullenberger.


    (That is what Sullenberger said.)

  • I have nothing against AI as a tool. I am more concerned about how quickly people develop bad habits when they have something else to blame. I suddenly recall a few lines from the "I, Robot" movie adaptation:


    “Then I’m going to miss the old times.


    Which old times?


    When people were killed by people..."

    I certainly hope to see LENR helping humans to blossom, and I'm here to help it happen.

  • In the deserts of Utah, where Brigham Young once roamed,

    The quest for Cold Fusion once was a bright gleam that shone.

    Fleischmann and Pons claimed they had found a new way,

    To harness the power of fusion without the need for decay.


    But controversy soon arose, and doubts began to creep,

    As skeptics such as Close to Park sought to keep

    The scientific method pure, and demand proof beyond belief,

    Before embracing this new discovery, with any sort of relief.


    Edwards Storms, Mizuno, and Schwarz all joined in the fray,

    As the scientific community debated night and day,

    The merits of Cold Fusion, and whether it was real,

    Or just a mirage, an illusion, or an impossible ideal.


    Fusion Diodes and Frank Gordon were mentioned too,

    As the debate raged on, with no end in view.

    But in the end, it was clear that much more research was needed,

    Before the power of Cold Fusion could truly be heeded.


    So let us ponder the mystery of this elusive dream,

    And continue the quest for clean energy, no matter how extreme.

    For the future of our planet depends on the choices we make,

    And the discoveries we uncover, with every step we take.



    FD and who you know
