ChatGPT test at LENR-CANR.org

  • A person who knows a lot more about AI than I do recommended that I check the "temperature" parameter in the ChatGPT settings. You can look up "AI temperature" to see what that is. For an application such as the one at LENR-CANR.org, a low temperature is recommended. So I asked the vendor about this. He responded:

    "At the moment, our temperature is set to 0, which is the lowest possible value to ensure the chatbot's responses are deterministic."

    Despite this zero setting, the ChatBot generates hallucinations. This technology is still in the early stage of development. It is unreliable. It reminds me of computers in 1968, and microcomputers in 1980, such as the unloved Radio Shack TRS-80, a.k.a. Trash-80.
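    For reference, here is roughly what that setting looks like in code. This is a minimal sketch using the standard OpenAI Python client, not the vendor's actual implementation; the model name and the prompts are invented for illustration.

```python
# Minimal sketch (not the vendor's code): a chat completion with temperature=0,
# the least-random setting. Assumes the OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",   # assumed model; the vendor has not said which one they use
    temperature=0,           # 0 = most deterministic; higher values give more varied answers
    messages=[
        {"role": "system", "content": "Answer only from the LENR-CANR.org library."},
        {"role": "user", "content": "What did the 1989 DOE/ERAB panel conclude?"},
    ],
)
print(response.choices[0].message.content)
```

    Even at temperature 0 the output is only deterministic in the sense that the model picks its most likely token at each step. It does not guarantee that the answer is grounded in the uploaded documents, which is why hallucinations can still appear.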

  • "At the moment, our temperature is set to 0, which is the lowest possible value to ensure the chatbot's responses are deterministic."

    Despite this zero setting, the ChatBot generates hallucinations. This technology is still in the early stage of development.

    Yup, I figured they would have already implemented that, and hopefully they are fine-tuning the model as well.
    These things take time to develop :)

  • On a funny note, my last interactions with the LENR-CANR Chatbot have been less than satisfactory. I think "she" has developed the whimsical attitude of the usual bureaucratic clerk: she gave me more or less the same useless response to my last three queries, pointing me every time to the index of all the ICCF proceedings to look for what I needed.

    I certainly Hope to see LENR helping humans to blossom, and I'm here to help it happen.

  • I think pointing to that is my mistake. I am not sure how to undo it.

    She has certainly figured out that this discourages one from asking further questions, hence my reference to bureaucratic clerks, who have mastered that technique.

    I certainly Hope to see LENR helping humans to blossom, and I'm here to help it happen.

  • Known in the UK as 'Storekeeper's disease.' They never have what you want.

    As shown in Monty Python's cheese shop sketch:


    (Embedded video: youtu.be)

  • The Gizmodo article is unfair. Some experts agree with the advice given by the ChatBot. Some disagree, or even consider that advice harmful under the circumstances. You cannot expect an AI to take sides in a dispute among experts. It can only go with either the majority opinion or the opinions expressed in the database uploaded to it. I assume the people at the NEDA website uploaded their own dataset. They would not use the public version of ChatGPT that has access to the entire internet. The ChatBot can only repeat whatever their database and their recommendations say. It can never come up with an opposing argument, unless that argument is also in the database. In that case, it would say something like: "It should be noted that some experts disagree . . ."


    Based on this, we can conclude that the people at NEDA do not agree with body positivity activist Sharon Maxwell. They think you should weigh yourself frequently. If they thought weighing yourself was a bad idea, they would have said so. Or they would have said, "some experts think . . ." and the ChatBot would have echoed that. A ChatBot never takes sides or holds any opinion. It is not capable of doing that.


    The ChatBot at LENR-CANR.org is similar. It is limited to the data I uploaded. It cannot quote, paraphrase, or recommend anything outside of that dataset. There are only a few references to Huizenga's book, because he did not give me permission to upload it. A few people mentioned the book. The ChatBot cannot summarize his arguments or present his point of view in detail, because they are not in the database. Here is what it says (a sketch of how this kind of dataset-limited answering typically works follows the exchange):


    Summarize Huizenga's book, "Cold Fusion: Scientific Fiasco of the Century."

    BOT: "Cold Fusion: The Scientific Fiasco of the Century" by J.R. Huizenga is a critical examination of the cold fusion phenomenon. Huizenga, who was a member of a DOE/ERAB panel in 1989, focuses on the lack of evidence for cold fusion and the problems faced by the panel. The book provides a detailed history of cold fusion, its development, and the frenzy surrounding it in 1989. Huizenga emphasizes the absence of nuclear reaction products and refuses to accept an exotic nuclear process without ash or radiation. As a result, he categorizes cold fusion as pathological science, alongside polywater and N rays. The book is known for its biased attitude and condescending tone, but it also serves as a valuable source of factual material.
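    Here is a minimal sketch of the retrieve-then-answer pattern a dataset-limited bot like this presumably uses. It is an assumption, not the vendor's published design: documents are ranked against the question (here by crude word overlap; real systems use embeddings), and the model is told to answer only from the retrieved excerpts. Anything absent from the uploaded files never reaches the model.

```python
# Sketch of a dataset-limited, retrieve-then-answer pipeline (assumed, not the
# vendor's actual code). The model only ever sees excerpts drawn from the
# uploaded library, so it cannot present arguments that are not in the library.

def retrieve(question: str, library: dict, top_n: int = 3) -> list:
    """Rank uploaded documents by crude word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        library.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [text for _title, text in ranked[:top_n]]

def build_prompt(question: str, excerpts: list) -> str:
    """Confine the model to the retrieved excerpts."""
    joined = "\n---\n".join(excerpts)
    return (
        "Answer using ONLY the excerpts below. If the answer is not in them, "
        "say you have no information.\n\n"
        f"{joined}\n\nQuestion: {question}"
    )

# Toy library: Huizenga's book is only mentioned in passing, so a summary of it
# cannot be grounded in these excerpts.
library = {
    "iccf_index.txt": "Index of ICCF proceedings: papers on excess heat, tritium, helium.",
    "comment.txt": "Several authors mention Huizenga's book, Cold Fusion: Scientific Fiasco of the Century.",
}
question = "Summarize Huizenga's book"
print(build_prompt(question, retrieve(question, library)))  # this prompt would go to the model
```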

  • Some experts agree with the advice given by the ChatBot.

    So which "experts" agreed that it is a good idea to tell someone with anorexia that they need to lose more weight and constantly weigh themselves?

    "The most misleading assumptions are the ones you don't even know you're making" - Douglas Adams

  • So which "experts" agreed that it is a good idea to tell someone with anorexia that they need to lose more weight and constantly weigh themselves?

    Whoever wrote the documents they uploaded to the ChatBot. You can't blame the ChatBot, any more than you can blame a paper card catalog that lists a badly written book. The problem was caused by a person. Either the person who wrote the help-desk documents, or the person who uploaded the data to the ChatBot.


    It is possible they uploaded only part of their help-desk documents. Perhaps the documents that describe how to deal with anorexia were accidentally left out. They may have been unreadable. When I first set up the LENR-CANR.org ChatBot, I uploaded several files that the bot could not read. The Acrobat format was too old. I asked it questions about those documents and it said it had no information. I asked the vendor, who looked at the internal tokenized files and found they were empty. In other words, uploading data to ChatBots can be tricky. The technology is still unreliable. It is difficult to know what it is doing, or whether the files are properly tokenized.
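    One way to catch that kind of problem before uploading is to check whether a PDF yields any extractable text at all. The sketch below uses the pypdf package; it is not part of the vendor's tooling, and the file names are hypothetical.

```python
# Sketch: flag PDFs that yield little or no extractable text (old Acrobat formats,
# scanned images) before uploading them to the bot. Uses the pypdf package.
from pypdf import PdfReader

def has_extractable_text(path: str, min_chars: int = 200) -> bool:
    """Return True if the PDF yields at least min_chars of text."""
    reader = PdfReader(path)
    text = "".join(page.extract_text() or "" for page in reader.pages)
    return len(text.strip()) >= min_chars

for filename in ["fleischmann1990.pdf", "old_scan.pdf"]:  # hypothetical file names
    if has_extractable_text(filename):
        print(filename, "looks readable")
    else:
        print(filename, "WARNING: little or no text extracted; the bot may see an empty file")
```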


    There are other problems. It seems you cannot remove a document or replace it. The content seems to hang around in the tokenized database even after you delete the source file. You have to reset the entire Bot database and start over from scratch.
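    In practice that means the workaround is to wipe the tokenized store and re-ingest whatever source files remain, rather than trying to delete one document from it. A rough sketch of that workflow, with a placeholder ingest step standing in for the vendor's upload and tokenize process:

```python
# Sketch of the "start over from scratch" workaround. The directory names and the
# ingest() step are placeholders, not the vendor's real interface.
import shutil
from pathlib import Path

SOURCE_DIR = Path("library")    # hypothetical folder holding the current source files
INDEX_DIR = Path("bot_index")   # hypothetical tokenized database

def ingest(source: Path, index_dir: Path) -> None:
    # Placeholder: in a real system this would chunk, tokenize/embed, and store the file.
    (index_dir / (source.name + ".tok")).write_text(source.read_text(errors="ignore"))

def rebuild_index() -> None:
    if INDEX_DIR.exists():
        shutil.rmtree(INDEX_DIR)          # discard the old tokenized data entirely
    INDEX_DIR.mkdir(parents=True)
    for source in sorted(SOURCE_DIR.glob("*.txt")):
        ingest(source, INDEX_DIR)

rebuild_index()  # any document removed from SOURCE_DIR is now gone from the index too
```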

  • The technology is still unreliable. It is difficult to know what it is doing, or whether the files are properly tokenized.

    It resembles PC and Mac compilers in the 1980s. They did not have many debugging facilities. You had to resort to various tricks, such as printing out variables, to know what the program was doing or to find a problem. By the 1990s, compilers had improved. You could step through program code, insert a breakpoint, or display a set of variables. It was much easier to debug. ChatBots need debugging tools and programs that reveal their internal configuration. They need a larger set of control parameters available to the user, such as AI temperature. No doubt these things will be made available as the software improves.
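    Until such tools exist, the closest thing to a debugger an operator can build today is a trace log: record what question was asked, which excerpts the bot was actually shown, and which parameters were in effect, so a bad answer can be inspected after the fact. A minimal sketch (the field names are my own, not any vendor's format):

```python
# Sketch of a trace log for a ChatBot query: one JSON line per question, recording
# what the model was shown and which control parameters were used.
import json
import time

def log_trace(question: str, excerpts: list, temperature: float, answer: str,
              trace_file: str = "bot_trace.jsonl") -> None:
    """Append one record describing a single question-and-answer exchange."""
    record = {
        "time": time.time(),
        "question": question,
        "excerpts": excerpts,        # what the model actually saw
        "temperature": temperature,  # the control parameter discussed above
        "answer": answer,
    }
    with open(trace_file, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: a query where retrieval found nothing useful, which explains the answer.
log_trace("Summarize Huizenga's book", ["(no relevant excerpt found)"], 0.0,
          "I have no information about that in my dataset.")
```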


    I do not think the internal workings of a ChatBot will ever be as transparent as a C++ program. The ChatBot is too complicated, and it was not made by a person. But the internal workings can be made more transparent than they are now. One method that already works is to ask the ChatBot itself to explain what it is doing, or to ask it for advice on how to improve the dataset. I asked ChatGPT how to format a structured dataset, with delimiters and whatnot. Unfortunately, the advice it gave did not work.
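    For what it is worth, here is one common convention for a delimiter-structured dataset: each document becomes a record with labeled fields, and records are separated by a fixed marker. This is only an illustration of what "delimiters and whatnot" means. It is not the format the Bot recommended, and there is no guarantee any particular ingestion pipeline expects it.

```python
# Sketch of a delimiter-structured dataset: labeled fields per record, records
# separated by a fixed marker. The field names and separator are arbitrary choices.
records = [
    {"title": "Example Paper A", "author": "Author One", "text": "Body of paper A..."},
    {"title": "Example Paper B", "author": "Author Two", "text": "Body of paper B..."},
]
SEPARATOR = "\n=====\n"   # arbitrary record delimiter

with open("dataset.txt", "w", encoding="utf-8") as f:
    f.write(SEPARATOR.join(
        f"TITLE: {r['title']}\nAUTHOR: {r['author']}\nTEXT: {r['text']}"
        for r in records
    ))
```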


    It is a novel experience to ask a program, in plain English, how to make that same program work. It is disconcerting that the program confidently gives you instructions that do not work. Maybe that is not such a novel experience. For programming languages such as C++ or Pascal, I have often found detailed instructions in programming manuals for obscure functions that do not work the way the manual says they should. That happens when someone changes the internal code and forgets to update the online documentation. It is a human failing. It is understandable.

    When the Bot tells me to use a delimiter of such-and-such format, placed in a certain way, and that does not work, we are faced with a different problem. The Bot is largely programming itself. If the Bot does not know how to prepare a structured database, no one knows. There may not be a way to do it. The Bot "thinks" it knows, and the methods it describes often work, but they sometimes fail for reasons no human can discern. With better debugging tools we might find out, but that is only one task out of millions. Structured databases may be important enough to warrant human programmer intervention, either in debugging or by directly reprogramming some aspect of the Bot.

    But there are countless other problems with the software, and there is no way a human could address them all. Even with C++ and other human-written programming languages, there are hundreds of built-in features, and no one can test them all to be sure they work in every situation. In 1980, by contrast, every command in a personal computer language could be listed in about 5 pages, and each command had only a few parameters or cases that needed testing. It was completely transparent. But, of course, it could only do a few things, and you had to program every aspect of the job yourself. For example, there were no calendar or Julian date features. You could not ask it how many days there are between Date 1 and Date 2, or what year it is now.


    Here is a programming language manual from 1983. It was a thing of beauty, far better than any previous PC DOS compiler. Do a search for date functions, and you will see they are all user-defined. You will also see that they were Y2K problems waiting to happen. Y2K was not limited to 2-digit year data. There were many other problems, such as badly written do-it-yourself Julian date routines.


    http://bitsavers.informatik.uni-stuttgart.de/pdf/borland/turbo_pascal/TURBO_Pascal_Reference_Manual_CPM_Version_3_Dec88.pdf
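    As a small illustration of that last point, here is the classic do-it-yourself date mistake in modern dress: a routine that assumes every 2-digit year means 19xx works fine throughout the 1980s and 1990s, then produces nonsense the moment one of the dates falls in 2000.

```python
# Illustration of the 2-digit-year pitfall behind many Y2K bugs in home-made
# date routines, next to the correct answer with full 4-digit years.
from datetime import date

def days_between_2digit(y1, m1, d1, y2, m2, d2):
    """The classic mistake: assume every 2-digit year means 19xx."""
    return (date(1900 + y2, m2, d2) - date(1900 + y1, m1, d1)).days

# Dec 31, 1999 to Jan 1, 2000 should be 1 day.
print(days_between_2digit(99, 12, 31, 0, 1, 1))       # -36523: "year 00" was read as 1900
print((date(2000, 1, 1) - date(1999, 12, 31)).days)   # 1: the correct answer
```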

  • Frogfall


    But on the other hand, the Japan Times said today:


    The government said Friday it has issued administrative guidance to ChatGPT operator OpenAI due to its insufficient consideration of protocols to protect personal information.

    The guidance, issued Thursday by the government’s Personal Information Protection Commission and based on the personal information protection law, pointed to the possibility of ChatGPT infringing on privacy by obtaining sensitive personal information without prior consent.

    The commission said that it has not confirmed any specific violation of the law so far.

    It is believed to be the first time for the commission to issue administrative guidance over generative artificial intelligence.

    If U.S.-based OpenAI fails to take sufficient measures in response to the guidance, Japanese authorities may conduct an on-site probe or impose fines.

    The personal information protection law defines information on people’s race, beliefs, social status, medical history and criminal history as sensitive personal information, and in principle requires the consent of individuals before it is obtained.

    The commission urged OpenAI not to collect sensitive personal information from users without their prior consent. It also told the company to make efforts to ensure such information is not included in data collected to train AI, and to take measures if it is found, such as by deleting the data or making it impossible to identify the individuals concerned.

    The commission also took issue with the fact that OpenAI did not warn users in Japanese about the purpose of ChatGPT’s use of personal information, and demanded that an explanation be made in the language.

    It also urged administrative institutions and corporations using generative AI to minimize the use of personal information and to sufficiently check that the law is not violated.

    For general users, the commission warned about the risk of personal information they enter being used in machine learning and leading to outputs of inaccurate information.

    ChatGPT has previously been suspended and investigated by the Italian government over a suspected violation of the law.

  • The biggest risk posed by AI is that it will put a lot of people out of work. And without a strong array of public services and a universal basic income, they will turn to fighting each other for scraps of bread.
