The LENR-CANR ChatGPT is ON LINE!

  • Well, I did everything the vendor and ChatGPT herself recommended, but it still cannot find items in the structured text files. What a shame. I was hoping the ChatBot would be a good index system. It is a terrible index system, but still a good way for people to find information and learn about the subject. The problem is, you learn something but when you ask for the source, it can't find the document.

  • Well, I did everything the vendor and ChatGPT herself recommended, but it still cannot find items in the structured text files. What a shame. I was hoping the ChatBot would be a good index system. It is a terrible index system, but still a good way for people to find information and learn about the subject. The problem is, you learn something but when you ask for the source, it can't find the document.

    It's not really meant for indexing, but rather for generative functions and for transforming data into small stores. You're not going to beat a good old-fashioned database for actual factual lookups. However, if the model is trained well enough, it can fetch from databases and other tools and use them for tasks like visual representation of systems.
    Usually for LLMs, the text is summarized and tokenized into some form of branching vector memory, so it doesn't always come back out the same way it went in when queried. I think it is better to use the model as a tool that has agency over a separate static database and makes calls for information when needed (see the rough sketch below).

    Does this make sense?
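    Here is a minimal sketch of that pattern in Python, assuming a hypothetical SQLite table papers(record_id, title, url, keywords); the summarize_with_llm() call is only a stand-in for whatever model you use:

    # Keep the facts in an ordinary database; let the language model only
    # phrase the answer from whatever the exact-match query returns.
    import sqlite3

    def search_bibliography(db_path: str, keyword: str):
        """Exact keyword lookup in a static table; no tokenization involved."""
        con = sqlite3.connect(db_path)
        rows = con.execute(
            "SELECT record_id, title, url FROM papers WHERE keywords LIKE ?",
            (f"%{keyword}%",),
        ).fetchall()
        con.close()
        return rows

    def summarize_with_llm(question: str, rows) -> str:
        # Placeholder: hand the retrieved rows to the LLM of your choice and
        # ask it to answer the question using only those rows.
        raise NotImplementedError

    # hits = search_bibliography("lenr_canr.db", "Gamow")
    # print(summarize_with_llm("Which papers discuss the Gamow factor?", hits))

    That way the database stays authoritative, and the model never has to remember the records at all.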

    Hopefully this is what your ChatGPT vendor is trying to do for you, with the goal of having a good prompt conversation. I am attempting to build an open-source research assistant using LangChain and an AutonomousGPT model. I would love to work openly with you and anybody else who wants to apply machine learning to the difficult challenge of fusion.
    LENR-CANR.org has already been invaluable in too many ways to count.
    For that I thank you sincerely, Jed. <3

  • It's not really meant for indexing, but rather for generative functions and for transforming data into small stores. You're not going to beat a good old-fashioned database for actual factual lookups.

    Yes. On the ChatGPT page I recommended readers go to the Library SQL files. I wrote:


    CHATBOT 1 INDEX


    This contains the ASCII Bibliography. Use this bot to do a quick search for papers. Unfortunately, ChatGPT is still unreliable and it often fails to find papers, so you may wish to search in the Library Search All field instead.

  • This example shows how weak ChatGPT is when dealing with structured data, and how easily it hallucinates about minor details. This is probably caused by tokenization, as Diadon says. Here are two records from the structured text file:



    ++++++++++++++++++++

    BEGIN

    Record 2079

    [Title]: Kim, Y.E. and A.L. Zubarev, <i>Gamow factor cancellation and nuclear physics mechanisms for anomalous low-energy nuclear reactions.</i> J. New Energy, 1996. <b>1</b>(3): p. 145.

    [URL]: None

    [First Author]: Kim, Y. E.

    [All Authors]: Kim, Y. E., Zubarev, A. L.

    [Keywords]: theory, Gamow

    [Abstract]: None

    END


    ++++++++++++++++++++

    BEGIN

    Record 2080

    [Title]: Kim, Y.E. and A. Zubarev. <i>Role of Continuum Electrons and Condensed Matter Mechanisms in Ultra Low Energy Nuclear Reactions</i>. in <i>The Seventh International Conference on Cold Fusion</i>. 1998. Vancouver, Canada: ENECO, Inc., Salt Lake City, UT.

    [URL]: http://lenr-canr.org/acrobat/ENECOtheseventh.pdf#page=186

    [First Author]: Kim, Y. E.

    [All Authors]: Kim, Y. E., Zubarev, A.

    [Keywords]: theory, ICCF-7

    [Abstract]: We examine open problems and uncertainties in the conventional theory of nuclear fusion at low energies, and discuss possible improvements of the conventional theory and alternative theoretical formulations, and condensed matter mechanisms including the Bose-Einstein condensation.

    END



    Here is an exchange with the ChatBot set for GPT 4:


    ME: Can you find records with the keyword "Gamow"?

    BOT: Yes, I found some records with the keyword "Gamow" in my dataset. Here are a few examples:

    1. Record 2079
    - Title: Kim, Y.E. and A.L. Zubarev, <i>Gamow factor cancellation and nuclear physics mechanisms for anomalous low-energy nuclear reactions.</i> J. New Energy, 1996. <b>1</b>(3): p. 145
    - Authors: Kim, Y. E., Zubarev, A. L
    - Keywords: theory, Gamow

    2. Record 2078
    - Title: Kim, Y.E. and A.L. Zubarev. <i>Optical Theorem Formulation and Nuclear Physics Mechanisms for Gamow Factor Cancellation in Low-Energy Nuclear Reactions</i>. in <i>Sixth International Conference on Cold Fusion, Progress in New Hydrogen Energy</i>. 1996. Lake Toya, Hokkaido, Japan: New Energy and Industrial Technology Development Organization, Tokyo Institute of Technology, Tokyo, Japan
    - Authors: Kim, Y. E., Zubarev, A. L
    - Keywords: theory, gamow, ICCF-6 . . .


    These are just a few examples. If you need more information or have any other questions, feel free to ask.


    ME: Does Record 2080 reference "Gamow"?

    BOT: Yes, Record 2080 does reference "Gamow." Here are the details:

    Record 2080
    - Title: Kim, Y.E. and A. Zubarev. <i>Role of Continuum Electrons and Condensed Matter Mechanisms in Ultra Low Energy Nuclear Reactions</i>. in <i>The Seventh International Conference on Cold Fusion</i>. 1998. Vancouver, Canada: ENECO, Inc., Salt Lake City, UT
    - Authors: Kim, Y. E., Zubarev, A.
    - Keywords: theory, Gamow

    Please let me know if you need more information or have any other questions.


    ME: Where the delimiter is "++++++++++++++++++++ BEGIN," does Record 2080 reference "Gamow"?

    BOT: I apologize for the confusion earlier. Here is the record for Record 2080 with the delimiter "++++++++++++++++++++ BEGIN":

    ++++++++++++++++++++
    BEGIN
    Record 2080
    [Title]: Kim, Y.E. and A. Zubarev. <i>Role of Continuum Electrons and Condensed Matter Mechanisms in Ultra Low Energy Nuclear Reactions</i>. in <i>The Seventh International Conference on Cold Fusion</i>. 1998. Vancouver, Canada: ENECO, Inc., Salt Lake City, UT
    [Authors]: Kim, Y. E., Zubarev, A.
    [Keywords]: theory, Gamow

    This record does reference "Gamow" in the keywords section. If you have any other questions or need more information, please feel free to ask.


    ME: Those are the keywords from the previous record, Record 2079, above.

    BOT: You are correct, and I apologize for the confusion. It seems I made an error in my response. Unfortunately, I don't have any information on Record 2080 mentioning "Gamow" in my dataset. I apologize for the confusion and any inconvenience caused. If you have any other questions or need more information, please feel free to ask.
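
    By contrast, an ordinary exact-match search over the same structured file never confuses one record with another. Here is a minimal parsing-and-search sketch in Python, assuming the delimiter format shown above (the file name is only an example):

    # Parse the "++++++++++++++++++++ BEGIN ... END" records and search the
    # [Keywords] field exactly. Record 2079 matches "Gamow"; Record 2080 does not.
    import re

    def parse_records(text: str):
        records = []
        for block in text.split("++++++++++++++++++++"):
            block = block.strip()
            if not block.startswith("BEGIN"):
                continue
            rec = {}
            number = re.search(r"Record\s+(\d+)", block)
            if number:
                rec["record"] = number.group(1)
            for field, value in re.findall(r"\[([\w ]+)\]:\s*(.*)", block):
                rec[field] = value.strip()
            records.append(rec)
        return records

    def find_by_keyword(records, keyword: str):
        return [r for r in records if keyword.lower() in r.get("Keywords", "").lower()]

    # with open("chatbot1_index.txt", encoding="utf-8") as f:
    #     recs = parse_records(f.read())
    # for r in find_by_keyword(recs, "Gamow"):
    #     print(r["record"], r.get("Title", "")[:80])

    A chatbot sitting in front of a lookup like this could quote the keywords without mixing up adjacent records.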

  • If you want to try a chatbot that does not cost "a lot of money" monthly (i.e., it is free), you could consider an open-source one:

    GPT4All is the Local ChatGPT for your documents… and it is free!
    How to install GPT4All on your Laptop and ask AI about your own domain knowledge (your documents)… and it runs on CPU only!
    artificialcorner.com


    I guess it needs more work to tokenize documents and gigabytes of storage for the pretrained models.


    If you install GPT4All on a server, you can give the public access to it by using

    GitHub - YannickRobin/gpt4all-nodejs: Simple NodeJS server to provide a chatbot web interface to interact with GPT4All
    github.com
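
    If you would rather script it than use the GUI, the gpt4all Python bindings can be called directly. A rough sketch (the model file name is only an example; the exact API may differ by version, and the first run downloads a multi-gigabyte model that runs on the CPU):

    # Minimal local GPT4All call, assuming "pip install gpt4all".
    from gpt4all import GPT4All

    model = GPT4All("ggml-gpt4all-j-v1.3-groovy.bin")  # example model name; CPU only
    reply = model.generate(
        "Summarize the main experimental claims of cold fusion in three sentences.",
        max_tokens=200,
    )
    print(reply)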


  • The ChatBot is a lousy index system, but its ability to answer questions is sometimes astounding. The following answer is mainly based on papers by Ed Storms, I think, but whatever the source is, it is amazing that the Bot can summarize the data so well.


    ME: With a Seebeck calorimeter, when the heat source is moved within the calorimeter chamber, does that change the apparent heat output?

    BOT: In a Seebeck calorimeter, the voltage generated is proportional to the heat flux through the wall, and it should respond equally to heat flow no matter where the heat passes through the wall or where it originates within the enclosure. However, in practice, Seebeck calorimeters can be slightly position sensitive. To ensure accurate measurements, it is recommended to place the heat source in the same spot in every test, preferably close to the center of the cube.

  • I have made an LENR_AutoGPT (Autonomous Generative Pre-Trained Transformer).
    I think I will call her Athena_ADA (Autonomous Research Assistant) once the bugs have all been worked out and people can use it for other research and development purposes beyond just LENR.
    At the moment I am having trouble getting the LLM chain to stop restarting in a loop and wait for a user prompt.

    So it becomes very expensive if you don't engineer good prompts at the beginning and babysit it to make sure it doesn't fall into a repetitive search loop.
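
    The guard I have in mind is roughly this, sketched in plain Python; run_agent_step() is only a placeholder for the LangChain/AutoGPT call:

    # Cap the number of agent steps and pause for the user instead of
    # letting the chain restart itself and burn tokens in a search loop.
    MAX_STEPS = 10

    def run_agent_step(goal: str, history: list[str]) -> str:
        # Placeholder: one planning / tool-use step of the agent.
        raise NotImplementedError

    def supervised_loop(goal: str) -> list[str]:
        history: list[str] = []
        for step in range(MAX_STEPS):
            result = run_agent_step(goal, history)
            history.append(result)
            print(f"[step {step + 1}] {result}")
            answer = input("Continue? (y = next step, anything else = stop) ")
            if answer.strip().lower() != "y":
                break
        return history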


    The Python code is similar to https://godmode.space/, if you have experimented with that.

    Only it runs locally, and you can allow it to read local text and .pdf files.

    It is basically a kind of chained-thought agentic loop; here is a summary of the idea.


    As promised, I have published the 2nd iteration as open-source code located here: LENR_AGPT
    Please keep in mind this is a very early prototype, and if anyone has commits for improvement I would be eternally grateful. I already have a 3rd iteration with added features like a nice UI for websites, and I am experimenting with Tesseract OCR (Optical Character Recognition) to read .pdf images and text. I am also working on her being able to read an SQL database, JedRothwell 🤞
    It will likely need a Python SQL server connection of some sort.


    I have also removed the Pinecone vectorstore memory for ease of use. Using a vector DB greatly increases the AI's memory, but it adds another cost. In the next iteration I will include it but leave it commented out.

    It's just too expensive in OpenAI Tokens to let anyone play with it on a website app at the moment.

    It can read and store unprotected PDF files as text summaries, and it can complete some complex mathematics using Wolfram Alpha.
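
    The PDF step looks roughly like this, assuming the pypdf package; summarize() is just a stand-in for the LLM call that writes the stored summary:

    # Pull the text out of an unprotected PDF and summarize it in chunks.
    from pypdf import PdfReader

    def pdf_to_text(path: str) -> str:
        reader = PdfReader(path)
        return "\n".join(page.extract_text() or "" for page in reader.pages)

    def summarize(chunk: str) -> str:
        # Placeholder for the LLM call that produces the stored summary.
        raise NotImplementedError

    def summarize_pdf(path: str, chunk_chars: int = 8000) -> list[str]:
        text = pdf_to_text(path)
        chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
        return [summarize(c) for c in chunks]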


    A major downside, as stated above, is that once she starts going you have to kill the terminal to get her to stop, and she will eat up a lot of tokens from whatever LLM (Large Language Model) you are using.
    The only LLM I have used so far is OpenAI's, but it hooks up to Llama and other LLMs with a little tweaking of the code and the .env file configuration.


    Let me know if anybody needs help using it. I recommend VS Code as the source code editor, but any IDE that handles Python will work, and you can also run it in the cloud.

    I have more to come soon. I have collected and learned a lot about the potential of applying machine learning to LENR to help us develop a mathematically sound theoretical model.


    P.S. I left a few easter eggs in the LENR_AGPT_workspace folder.

  • You may also like this one:
    https://github.com/imartinez/privateGPT

  • The guy who quit Google's AI project.



    Geoffrey Hinton - Two Paths to Intelligence (25 May 2023, Public Lecture, University of Cambridge)

    Digital computers were designed to allow a person to tell them exactly what to do. They require high energy and precise fabrication, but they allow exactly the same computation to be run on physically different pieces of hardware. For computers that learn what to do, we could abandon the fundamental principle that the software should be separable from the hardware and use very low power analog computation that makes use of the idiosyncratic properties of a particular piece of hardware. This requires a learning algorithm that can make use of the analog properties without having a good model of those properties. I will briefly describe one such algorithm. Using the idiosyncratic analog properties of the hardware makes the computation mortal. When the hardware dies, so does the learned knowledge. The knowledge can be transferred to a younger analog computer by getting the younger computer to mimic the outputs of the older one, but education is a slow and painful process. By contrast, digital computation allows us to run many copies of exactly the same model on different pieces of hardware. All of these digital agents can look at different data and share what they have learned very efficiently by averaging their weight changes. Also, digital computation can use the backpropagation learning procedure which scales much better than any procedure yet found for analog hardware. This leads me to believe that large scale digital computation is probably far better at acquiring knowledge than biological computation and may soon be much more intelligent than us.

    The public lecture was organised by The Centre for the Study of Existential Risk, The Leverhulme Centre for the Future of Intelligence and The Department of Engineering.

  • The guy who Google 'let go'.

    This guy, Hinton, quit Google on his own initiative. Google "let go" Blake Lemoine, an engineer who thinks that an LLM AI ChatBot is sentient. That is to say, they fired him. I would fire him. I have been struggling to make an LLM AI ChatBot work for several weeks now, and in my opinion it is about as intelligent as a spoon. Certainly no more intelligent than a 1981 IBM PC.


    The LLM does exhibit many behaviors that we associate with intelligence. But it exhibits these behaviors in the same sense that a bird brain resembles an airplane autopilot computer. Birds know nothing about aviation science. Their ability to fly is not based on an intelligent or sentient analysis of flight, whereas every aspect of human aviation is the product of sentient intelligence. Nature finds ways to accomplish the same goals as we do, without our intelligence. Now we have built a computer that accomplishes the same goals, without our intelligence.


    That is not to say real, sentient intelligence will never emerge from computers. I think it will. I think that is inevitable.

  • The vendor made several improvements to the ChatGPT ChatBot. It now parses text better, and it recognizes structured text better. This means the LENR-CANR.org Chatbot 1 INDEX works better. It finds the URL for papers more reliably. It still has some difficulty with a request such as "find all papers that mention 'activation energy.'" It finds only two; there are many others.


    Overall, it is working better. The vendor added some options such as "AI temperature." A temperature of 0.25 works better for the INDEX.


    I will update the LIBRARY bot. That takes a few hours to prepare.

  • The latest ChatGPT software and files are now installed in the LIBRARY bot (the lower box). It works much better. But it still has some quirks. Examples:



    When I uploaded only a few papers, it was able to answer these two questions correctly, in detail:


    1. Describe the gas chromatographic method in the paper Aoki, T., et al., "Search for nuclear products of the D + D nuclear fusion."

    2. What is the URL of this paper?


    After I uploaded many papers it could no longer find this information. Now it tells me there is nothing about the "chromatographic method" in the paper, and those two words do not appear in the paper. It cannot find the URL, either.

    It cannot find any other URLs in the LIBRARY bot. It always finds them in the INDEX bot.


    EXAMPLES AFTER LOADING FULL DATASET:


    Describe the gas chromatographic method in the paper Aoki, T., et al., "Search for nuclear products of the D + D nuclear fusion."

    I'm sorry, but the gas chromatographic method is not mentioned in the paper "Search for nuclear products of the D + D nuclear fusion" by Aoki, T., et al. The paper focuses on measurements of nuclear products such as gamma-ray, neutron, tritium, and helium using various detectors and comparing the energy spectra with background spectra.


    [INCORRECT. This was found earlier when only papers starting with "A" were loaded.]


    In the paper Aoki, T., et al., "Search for nuclear products of the D + D nuclear fusion" do you find the words "molecular sieve"?

    I'm sorry, but the dataset provided does not mention the words "molecular sieve" in the context of the paper by Aoki, T., et al., "Search for nuclear products of the D + D nuclear fusion."


    [INCORRECT]


    What is the abstract for this paper?

    The abstract for the paper "Search for nuclear products of the D + D nuclear fusion" by Aoki, T., et al. is as follows: Nuclear products which were caused by the D + D nuclear fusion reaction were searched in electrolytic cells and in gas phase of Pd + D systems. Measurements of nuclear products were made for gamma-ray, neutron, tritium and helium. To detect neutron, liquid scintillation and 3He counters were used. For gamma-ray measurement, a NaI detector was used. For tritium concentration measurement in gas phase, a gas proportional chamber was fabricated and operated in low background level. The signals of those detectors were fed to Pulse Height Analyzer and recorded as energy spectra which were carefully compared with background spectra. A different type of neutron hunting was also tried in the instants of pressurizing and depressurizing the deuterium gas in crystal. A large size crystal of tungsten bronze was prepared for the experiment.


    [EXACTLY RIGHT. Verbatim quotes now working much better.]


    What does your dataset say about neutron detection by liquid scintillation?

    The dataset describes a neutron detection system based on liquid scintillation and pulse shape analysis. The detector consists of a liquid organic scintillator with embedded 6Li-doped glass scintillator plates. Neutrons deposit their energy in the liquid scintillator through multiple collisions, producing a light output that can be measured by photomultipliers. The low-energy neutrons are then scavenged by the 6Li in the glass, emitting a glass scintillation pulse. The liquid and glass scintillation pulse shapes differ and can be distinguished in the photomultiplier signals. A liquid pulse followed by a glass pulse within 20 microseconds identifies a neutron, and by pulse height analysis of the liquid pulse, the neutron counts can be distributed according to energy.

    Some key features of the liquid scintillation neutron detection system include:

    - High efficiency for fast neutrons

    - Good pulse-shape discrimination properties

    - Capability to monitor and store relevant parameters event-by-event

    - Consideration of cosmic radiation interference

    - Measurement of neutron energy spectrum, with sensitivity ranging from 1 to 50 MeV


    [GOOD ANSWER! Excellent summary.]


    List some of the papers that describe neutron detection by liquid scintillation.

    I'm sorry, but the dataset provided does not include specific paper titles or references related to neutron detection by liquid scintillation.


    [CAN'T FIND SOURCES. TOKENIZED data is not associated with the data source. I asked the bot, and it confirmed that. The INDEX bot lists a few papers about scintillation.]

  • ChatGPT has many limitations and quirks. This is typical of first-generation software. I think many of them will be fixed in the coming months. With the latest improvements from the vendor and OpenAI, I would say:

    1. The first bot INDEX is now doing a much better job looking up papers. But, as I said in the instructions, you may wish to use the Library search screens instead. (I put a link to the library.)
    2. The second bot LIBRARY does an excellent job summarizing papers and looking up information. It is uncanny how good it is.

    The only problem is that ChatGPT software is not intended for the exact application I have in mind. It does not work well as a library reference catalog. It has difficulty finding information and then telling the reader where the information came from. I asked the Bot itself about this:


    ME: Can you locate the paper from Tsinghua University describing the liquid scintillation detector?

    Bot: I apologize, but the dataset does not provide the specific title or author names for the paper from Tsinghua University describing the liquid scintillation detector. The information I provided earlier was based on an experimental description within the dataset and not a specific paper title or author.

    ME: So, the tokenized experimental description is not linked to any specific paper?

    Bot: That's correct. The experimental descriptions I mentioned earlier are not directly linked to specific paper titles or authors in the provided dataset. I apologize for any confusion caused. . . .


    I have installed a work-around that sometimes allows you to locate files in the LIBRARY bot. It is not very reliable. You have to ask for search parameters with square brackets around them, and a colon. For example, after you find a paper, you ask:


    "What are the [URL:] and [keywords:] for this paper?"


    I updated the instructions above the second LIBRARY bot to say this:


    This contains all of the Acrobat papers in the LENR-CANR.org library. Use this to do a detailed search for papers, or to ask detailed questions about cold fusion. NOTE: to find the URL for a paper, ask for [URL:] (with the square brackets and colon).



    Here is an example of an uncanny response. This is a better summary than I might write myself:

    ME: What does Rothwell say about the Wright brothers?
    Bot: In the dataset, Jed Rothwell discusses the Wright brothers in relation to the history of aviation and the treatment they received from Scientific American. He compares the skepticism and criticism the Wright brothers faced to the treatment of cold fusion research. Rothwell cites sources like Fred Kelly's "The Wright Brothers: A Biography" and T. Crouch's "The Bishop's Boys" to support his argument that the Wright brothers were not secretive and that Scientific American's skepticism was unwarranted.

  • I think the ChatBot is now working well. If you tried using it before and the answer was not satisfactory, please try again.

    https://lenr-canr.org/wordpress/?page_id=2988

    There are two bots. The top one is good for searching, and the bottom one is good for detailed questions.

    Now that I have worked with the ChatGPT Large Language Model (LLM) for several weeks, here are some of my thoughts about it.

    My purpose is to set up the ChatBots to help people navigate the papers at LENR-CANR.org. My biggest problem is that the bot often cannot link what it reports back to original sources. I confirmed this in an exchange with the bot, which is quoted above. Let me repeat it:

    ME: [After the bot cited a study from Tsinghua U.] . . . Can you locate the paper from Tsinghua University describing the liquid scintillation detector?
    Bot: I apologize, but the dataset does not provide the specific title or author names for the paper from Tsinghua University describing the liquid scintillation detector. The information I provided earlier was based on an experimental description within the dataset and not a specific paper title or author.
    ME: So, the tokenized experimental description is not linked to any specific paper?
    Bot: That's correct. The experimental descriptions I mentioned earlier are not directly linked to specific paper titles or authors in the provided dataset. I apologize for any confusion caused. . . .

    Sometimes it can link back. If you want to do this, use the top INDEX bot.

    These LLM bots have little logic. An LLM cannot even count to 10, and it does not realize that events in 1860 came before 2019. It made that error in some of my enquiries. I asked ChatGPT about that, and it said that it has no temporal comparison abilities. LLMs have no creativity; they cannot synthesize new knowledge. I expect these limitations will soon be fixed. This has already begun with the Wolfram plugin for ChatGPT. Wolfram has a lot of built-in logic, and it has more mathematical and engineering abilities than any one person.

    Other AI models can synthesize knowledge. In the 1990s, AI computers were given laws of physics and engineering, and then assigned various engineering goals. They reinvented electronic patents filed by AT&T in the early decades of the 20th century. These were difficult and creative patents. Sooner or later, creative models will be integrated into LLM.

    Here is the big question: Is this program intelligent? Here is my opinion. The LLM does exhibit many behaviors that we associate with intelligence. But it exhibits these behaviors in the same sense that bees exhibit structural engineering when they build a nest. Their ability is in their brains, so this is real intelligence. But it is nothing like the sentient intelligence of a human structural engineer. Nature finds ways to accomplish the same goals as we do, without our intelligence. Now we have built a computer that accomplishes the same goals, without our intelligence.

    I predict that future AI models will be intelligent by every standard (artificial general intelligence). I predict they will be sentient. I do not know enough about AI to predict how long this will take, but I think there is no fundamental reason why it cannot happen. I am sure that sentient thinking machines exist because, as Arthur C. Clarke used to say, I carry one on my shoulders. Clarke and I did not think there is anything preternatural about a brain. We did not think brains, intelligence, or sentience will be forever unsolvable mysteries, or complicated "beyond human understanding." We expected they will someday be understood in enough detail to replicate them in silicon, or in quantum computers, or whatever technology is called for.

  • Here we see the Bot's inner nature.


    AI Temperature set to 0.25:


    User: Describe activation energy.

    Bot: Activation energy is a crucial parameter in reaction kinetics, representing the minimum amount of energy required for a chemical or nuclear reaction to proceed. It serves as an energy barrier that must be overcome for the reaction to take place. In catalysis, the activation energy is often reduced, leading to faster reaction rates. The lower the activation energy, the faster the reaction rate. Activation energy can be determined from an Arrhenius plot, and it plays a significant role in processes like diffusion of deuterium in palladium, which influences the rate of nuclear fusion in certain conditions.


    AI Temperature raised to 2.0 (maximum creativity):


    User: Describe activation energy.

    Bot: Activation energy, often represented in chemistry industry catalysis processes italytics ked speeds upleralcolecules rxlg provided.& creating d reaction fr low_pockets P he calculation.vergemost.o_o barrier locreative sometimesions.dd.nc.orporralcularvillochemiltratorskttractive devolutionpeed.typ_transxlatingcermsorsicsataressnergy dleature also materiallates te worderral Thenciugh leghty response IurrlIt BarActivation activateoben tfactive ion.& Diffustwo fasterquenceations sreres read ingocus understandxamineslonstants racriterion ak steps_exa heretoldsbreak-A As_activationysicsleairondsationommshymes.isucticol_rr.usdd.blename&#nnaturelzfromincusc fusion.talg_par,e20 over.diff__eductrialnder8_fig.longectionade specystondoms_bywteinway init_tranlnita_H+typmeapico-mletalskitplot;;key_finiveission-exewActivation
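
    For anyone wondering what the "AI temperature" setting corresponds to underneath, it maps to the sampling temperature parameter of the model API. A rough sketch using the 2023-era openai Python package (the model name and key handling are only examples):

    # Ask the same question at a low and a high sampling temperature.
    import openai

    openai.api_key = "sk-..."  # your API key

    def ask(question: str, temperature: float) -> str:
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": question}],
            temperature=temperature,  # 0.0 to 2.0; higher means more random sampling
        )
        return response["choices"][0]["message"]["content"]

    print(ask("Describe activation energy.", temperature=0.25))  # focused answer
    print(ask("Describe activation energy.", temperature=2.0))   # word salad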

  • This reminded me of the humor setting of the TARS robot in Interstellar.



    I certainly hope to see LENR helping humans to blossom, and I'm here to help it happen.
