The LENR-CANR ChatGPT is ON LINE!

  • Well, I did everything the vendor and ChatGPT herself recommended, but it still cannot find items in the structured text files. What a shame. I was hoping the ChatBot would be a good index system. It is a terrible index system, but still a good way for people to find information and learn about the subject. The problem is, you learn something but when you ask for the source, it can't find the document.

  • Well, I did everything the vendor and ChatGPT herself recommended, but it still cannot find items in the structured text files. What a shame. I was hoping the ChatBot would be a good index system. It is a terrible index system, but still a good way for people to find information and learn about the subject. The problem is, you learn something but when you ask for the source, it can't find the document.

    It's not really meant for indexing, but rather for generative functions and transforming data into smaller stores. You're not going to beat a good old-fashioned database for actual factual lookups. However, if the model is trained well enough, it can fetch from databases and other tools and use them in useful tasks like visual representations of systems.
    Usually for LLMs, the text is summarized and tokenized into some form of branching vector memory, so it doesn't always come back out the same way it went in when queried. I think it is probably better to use it as a tool with agency over a separate static database, making calls for information when needed.

    Does this make sense?
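    As a minimal sketch of that "agency over a separate static database" idea: the model would call a function like the one below instead of answering from memory. The `papers` table, its columns, and the abbreviated records are invented for illustration.

```python
import sqlite3

def fetch_records(conn, keyword):
    """Exact keyword lookup in a static bibliography table. The model
    never has to memorize the records; it only sees what this returns."""
    cur = conn.execute(
        "SELECT record_id, title FROM papers WHERE keywords LIKE ?",
        (f"%{keyword}%",),
    )
    return cur.fetchall()

# A tiny in-memory table standing in for the real bibliography
# (table name and columns are assumptions for this sketch).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE papers (record_id INTEGER, title TEXT, keywords TEXT)")
conn.executemany(
    "INSERT INTO papers VALUES (?, ?, ?)",
    [
        (2079, "Gamow factor cancellation ...", "theory, Gamow"),
        (2080, "Role of Continuum Electrons ...", "theory, ICCF-7"),
    ],
)

print(fetch_records(conn, "Gamow"))   # only record 2079 matches
```

    The lookup is exact and repeatable, which is precisely what the generative model by itself cannot guarantee.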

    Hopefully this is what your ChatGPT guys are trying to do for you, with the goal of having a good prompt conversation. I am attempting to build an open-source research assistant using LangChain and an AutonomousGPT model. I would love to work with you and anybody else who wants to openly apply machine learning to the difficult challenge of fusion.
    LENR-CANR.org has already been invaluable in too many ways to count.
    For that I thank you,
    sincerely, Jed. <3

  • It's not really meant for indexing, but rather for generative functions and transforming data into smaller stores. You're not going to beat a good old-fashioned database for actual factual lookups.

    Yes. On the ChatGPT page I recommended readers go to the Library SQL files. I wrote:


    CHATBOT 1 INDEX


    This contains the ASCII Bibliography. Use this bot to do a quick search for papers. Unfortunately, ChatGPT is still unreliable and it often fails to find papers, so you may wish to search in the Library Search All field instead.

  • This example shows how weak ChatGPT is when dealing with structured data, and how easily it hallucinates about minor details. This is probably caused by tokenization, as Diadon says. Here are two records from the structured text file:



    ++++++++++++++++++++

    BEGIN

    Record 2079

    [Title]: Kim, Y.E. and A.L. Zubarev, <i>Gamow factor cancellation and nuclear physics mechanisms for anomalous low-energy nuclear reactions.</i> J. New Energy, 1996. <b>1</b>(3): p. 145.

    [URL]: None

    [First Author]: Kim, Y. E.

    [All Authors]: Kim, Y. E., Zubarev, A. L.

    [Keywords]: theory, Gamow

    [Abstract]: None

    END


    ++++++++++++++++++++

    BEGIN

    Record 2080

    [Title]: Kim, Y.E. and A. Zubarev. <i>Role of Continuum Electrons and Condensed Matter Mechanisms in Ultra Low Energy Nuclear Reactions</i>. in <i>The Seventh International Conference on Cold Fusion</i>. 1998. Vancouver, Canada: ENECO, Inc., Salt Lake City, UT.

    [URL]: http://lenr-canr.org/acrobat/ENECOtheseventh.pdf#page=186

    [First Author]: Kim, Y. E.

    [All Authors]: Kim, Y. E., Zubarev, A.

    [Keywords]: theory, ICCF-7

    [Abstract]: We examine open problems and uncertainties in the conventional theory of nuclear fusion at low energies, and discuss possible improvements of the conventional theory and alternative theoretical formulations, and condensed matter mechanisms including the Bose-Einstein condensation.

    END



    Here is an exchange with the ChatBot set for GPT 4:


    ME: Can you find records with the keyword "Gamow"?

    BOT: Yes, I found some records with the keyword "Gamow" in my dataset. Here are a few examples:

    1. Record 2079
    - Title: Kim, Y.E. and A.L. Zubarev, <i>Gamow factor cancellation and nuclear physics mechanisms for anomalous low-energy nuclear reactions.</i> J. New Energy, 1996. <b>1</b>(3): p. 145
    - Authors: Kim, Y. E., Zubarev, A. L
    - Keywords: theory, Gamow

    2. Record 2078
    - Title: Kim, Y.E. and A.L. Zubarev. <i>Optical Theorem Formulation and Nuclear Physics Mechanisms for Gamow Factor Cancellation in Low-Energy Nuclear Reactions</i>. in <i>Sixth International Conference on Cold Fusion, Progress in New Hydrogen Energy</i>. 1996. Lake Toya, Hokkaido, Japan: New Energy and Industrial Technology Development Organization, Tokyo Institute of Technology, Tokyo, Japan
    - Authors: Kim, Y. E., Zubarev, A. L
    - Keywords: theory, gamow, ICCF-6 . . .


    These are just a few examples. If you need more information or have any other questions, feel free to ask.


    ME: Does Record 2080 reference "Gamow"?

    BOT: Yes, Record 2080 does reference "Gamow." Here are the details:

    Record 2080
    - Title: Kim, Y.E. and A. Zubarev. <i>Role of Continuum Electrons and Condensed Matter Mechanisms in Ultra Low Energy Nuclear Reactions</i>. in <i>The Seventh International Conference on Cold Fusion</i>. 1998. Vancouver, Canada: ENECO, Inc., Salt Lake City, UT
    - Authors: Kim, Y. E., Zubarev, A.
    - Keywords: theory, Gamow

    Please let me know if you need more information or have any other questions.


    ME: Where the delimiter is "++++++++++++++++++++ BEGIN," does Record 2080 reference "Gamow"?

    BOT: I apologize for the confusion earlier. Here is the record for Record 2080 with the delimiter "++++++++++++++++++++ BEGIN":

    ++++++++++++++++++++
    BEGIN
    Record 2080
    [Title]: Kim, Y.E. and A. Zubarev. <i>Role of Continuum Electrons and Condensed Matter Mechanisms in Ultra Low Energy Nuclear Reactions</i>. in <i>The Seventh International Conference on Cold Fusion</i>. 1998. Vancouver, Canada: ENECO, Inc., Salt Lake City, UT
    [Authors]: Kim, Y. E., Zubarev, A.
    [Keywords]: theory, Gamow

    This record does reference "Gamow" in the keywords section. If you have any other questions or need more information, please feel free to ask.


    ME: Those are the keywords from the previous record, Record 2079, above.

    BOT: You are correct, and I apologize for the confusion. It seems I made an error in my response. Unfortunately, I don't have any information on Record 2080 mentioning "Gamow" in my dataset. I apologize for the confusion and any inconvenience caused. If you have any other questions or need more information, please feel free to ask.
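    Incidentally, a lookup like this needs no LLM at all. A minimal sketch of a deterministic parser for the structured text file shown above (the `BEGIN`/`END` delimiters and `[Field]: value` layout match the sample records; the two records here are abbreviated stand-ins for the full file):

```python
# Deterministic parser for the structured text file: records are wrapped
# in BEGIN ... END lines, with "[Field]: value" pairs in between.
def parse_records(text):
    records, current = [], None
    for line in text.splitlines():
        line = line.strip()
        if line == "BEGIN":
            current = {}
        elif line == "END":
            records.append(current)
            current = None
        elif current is not None and line.startswith("Record "):
            current["Record"] = line.split()[1]
        elif current is not None and line.startswith("["):
            field, _, value = line.partition("]: ")
            current[field.lstrip("[")] = value
    return records

def with_keyword(records, keyword):
    # Case-insensitive keyword match, so "Gamow" and "gamow" both hit.
    return [r for r in records if keyword.lower() in r.get("Keywords", "").lower()]

# Abbreviated versions of the two records shown above:
sample = """\
++++++++++++++++++++
BEGIN
Record 2079
[Keywords]: theory, Gamow
END
++++++++++++++++++++
BEGIN
Record 2080
[Keywords]: theory, ICCF-7
END
"""

records = parse_records(sample)
print(with_keyword(records, "gamow"))   # matches record 2079 only
```

    Unlike the chatbot, this never attributes one record's keywords to another, because it never paraphrases: it only returns fields that actually appear between a record's delimiters.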

  • If you want to try a chatbot that does not cost "a lot of money" monthly (i.e., one that is free), you could consider an open-source one:

    GPT4All is the Local ChatGPT for your documents… and it is free!
    How to install GPT4All on your Laptop and ask AI about your own domain knowledge (your documents)… and it runs on CPU only!
    artificialcorner.com


    I guess it needs more work to tokenize documents, and gigabytes of storage for the pretrained models.


    If you install GPT4All on a server, you can give the public access to it by using

    GitHub - YannickRobin/gpt4all-nodejs: Simple NodeJS server to provide a chatbot web interface to interact with GPT4All
    github.com


  • The ChatBot is a lousy index system, but its ability to answer questions is sometimes astounding. The following answer is mainly based on papers by Ed Storms, I think, but whatever the source is, it is amazing that the Bot can summarize the data so well.


    ME: With a Seebeck calorimeter, when the heat source is moved within the calorimeter chamber, does that change the apparent heat output?

    BOT: In a Seebeck calorimeter, the voltage generated is proportional to the heat flux through the wall, and it should respond equally to heat flow no matter where the heat passes through the wall or where it originates within the enclosure. However, in practice, Seebeck calorimeters can be slightly position sensitive. To ensure accurate measurements, it is recommended to place the heat source in the same spot in every test, preferably close to the center of the cube.

  • I have made an LENR_AutoGPT (Autonomous Generative Pre-Trained Transformer).
    I think I will call her Athena_ADA (Autonomous Research Assistant) once the bugs have all been worked out and people can use her for research and development purposes beyond just LENR.
    At the moment, I am having trouble getting the LLM chain to stop restarting in a loop and wait for a user prompt.

    So it becomes very expensive if you don't engineer good prompts at the beginning and babysit it to make sure it doesn't fall into a repetitive search loop.


    The Python code is similar to https://godmode.space/, if you have experimented with that.

    Only it runs locally, and you can allow it to read local text and .pdf files.

    It is basically a kind of chained-thought agentic loop, and here is a summary of the idea.
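    A chained-thought agentic loop with a hard iteration cap might be sketched like this; the cap is one way to address the runaway-loop problem described above. The `llm` and `tools` arguments are hypothetical stand-ins for real model and tool calls (OpenAI, Llama, a search API, etc.):

```python
# Sketch of a chained-thought agentic loop with a hard iteration cap.
# `llm` and `tools` are hypothetical stand-ins for real model/tool calls.
def agent_loop(goal, llm, tools, max_steps=5):
    history = [f"GOAL: {goal}"]
    for step in range(max_steps):
        action = llm(history)                    # model picks the next action
        if action["name"] == "finish":
            return action["result"]              # goal reached, exit cleanly
        result = tools[action["name"]](action["arg"])
        history.append(f"{action['name']} -> {result}")
    # Cap reached: stop and hand control back to the user rather than
    # burning tokens in a repetitive search loop.
    return f"Stopped after {max_steps} steps; awaiting user input."

# A stub model that never finishes, to show the cap doing its job:
def stub_llm(history):
    return {"name": "search", "arg": "LENR"}

tools = {"search": lambda query: f"0 results for {query}"}
print(agent_loop("find papers", stub_llm, tools, max_steps=3))
# -> Stopped after 3 steps; awaiting user input.
```

    With a cap like this, the worst-case token spend per run is bounded, instead of depending on someone killing the terminal.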


    As promised, I have published the 2nd iteration as open-source code, located here: LENR_AGPT
    Please keep in mind this is a very early prototype, and if anyone has commits for improvement I would be eternally grateful. I already have a 3rd iteration with added features, like a nice UI for websites, and I am experimenting with Tesseract OCR (Optical Character Recognition) to read .pdf images and text. I am also working on her being able to read an SQL database, JedRothwell 🤞
    It will likely need a Python SQL server connection of some sort.


    I have also removed the Pinecone vector-store memory for ease of use. Using a vector DB greatly increases the AI's memory, but it adds another cost. In the next iteration I will include it, but just comment it out.

    It's just too expensive in OpenAI Tokens to let anyone play with it on a website app at the moment.

    It can read and store unprotected .pdf files as text summaries, and it can complete some complex mathematics using Wolfram Alpha.


    A major downside, as stated above, is that once she starts going, you have to kill the terminal to get her to stop, and she will eat up a lot of tokens from whatever LLM (Large Language Model) you are using.
    The only LLM I have used so far is OpenAI's, but it hooks up to Llama and other LLMs with a few tweaks to the code and the .env file configuration.


    Let me know if anybody needs help using it; I recommend VS Code as the source code editor.
    Any IDE that handles python will work and you can run it in a cloud using

    I have more to come soon. I have collected and learned a lot about the potential of applying machine learning to LENR to help us develop a mathematically sound theoretical model.


    P.S. I left a few easter eggs in the LENR_AGPT_workspace folder.

  • You may also like this one:
    https://github.com/imartinez/privateGPT