The LENR-CANR ChatGPT is ON LINE!

  • This is why people are so susceptible to believing (or just accepting) bulls*it, as long as it is conveyed in a polite, lucid-sounding, and authoritative manner.

    No need of an LLM for that; I have observed that the same article, rewritten by a mainstream journal, looked much less partial and geeky...
    https://en.wikipedia.org/wiki/The_Cathedral_and_the_Bazaar (just the initial article, not the book)...


    In a way, an LLM is like a friend who has great culture but sometimes just fails, with talent.
    Neural networks are so human... :D I have so much empathy for them.

    “Only puny secrets need keeping. The biggest secrets are kept by public incredulity.” (Marshall McLuhan)
    twitter @alain_co

  • The above study says more about the human participants than it does about AI bots.


    This is why people are so susceptible to believing (or just accepting) bulls*it, as long as it is conveyed in a polite, lucid-sounding, and authoritative manner.

    This report misses the point. It says:


    "ChatGPT, OpenAI's fabulating chatbot, produces wrong answers to software programming questions more than half the time, according to a study from Purdue University. That said, the bot was convincing enough to fool a third of participants."


    I have had ChatGPT generate several code segments, in three programming languages. Nearly all of them had mistakes. Most did not even compile. However, they fooled me zero percent of the time. In every case, I could tell at a glance they were wrong. In most cases, I fixed them with a few minor corrections. For example, a string parsing routine used the wrong length, cutting off the string it was supposed to extract. (It cut off a telephone number, producing "770-555-121".) I don't know why it did that, but it is the kind of mistake I myself often make, and it took only a moment to fix. I told ChatGPT, and she agreed the index was wrong. She regenerated the code, but as it happens she made another mistake.
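    For illustration, here is a minimal Delphi sketch (a hypothetical reconstruction, not the routine ChatGPT actually produced) of how an off-by-one length passed to Copy causes exactly that truncation:


    var
      Line, Phone: string;
    begin
      Line := 'Tel: 770-555-1212';
      // Buggy: a count of 11 drops the last digit, yielding '770-555-121'
      Phone := Copy(Line, 6, 11);
      // Fixed: the full number is 12 characters, starting at index 6
      Phone := Copy(Line, 6, 12);
    end;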


    In another case, it confused two closely related functions, in a way that a programmer might confuse them. The online documentation for the functions was a little unclear. I expect that is what ChatGPT read, and where the confusion originated. I fixed the problem easily.


    This points out a well-known weakness in ChatGPT. It is not capable of numeric comparisons. It cannot even count to ten. It does not know that an event in 2009 came after an event in 1869. I asked it whether it has any temporal comparison abilities, and it said "no." So it is no surprise it computed the wrong length for a substring. ChatGPT will soon be augmented with the Wolfram plugin and other logical and mathematical extensions, so problems like this should be reduced.


    The code that ChatGPT generated was useful even though it had minor errors. In some instances it saved me a lot of time.

    I don't think there has ever been a time when I didn't botch code on the first try... ^^
    Have they increased your efficiency at all, JedRothwell?


    There are a lot of evaluations of LLMs at https://huggingface.co/spaces/…ceH4/open_llm_leaderboard. I think a lot of people are really underestimating the impact these silicon-based neural networks will have on society.

    If you are interested in ChatGPT's correctness in coding, may I recommend this paper, Frogfall: https://arxiv.org/ftp/arxiv/papers/2307/2307.05360.pdf ?

    GPT-4 shows a 10% improvement over GPT-3.5 in most coding proficiency evaluations. The beauty is that one can fine-tune it with embeddings for the particular language and tasks one wishes to do, if one has the know-how and computational power to do so. :D

    Have they increased your efficiency at all, JedRothwell?

    I have not written much new code with ChatGPT. Mainly, I tested it by recreating old programs. I think if I were to write new code, it would speed things up. It does a lot of the busywork. I do not know how well it would design an entire program, but it does a pretty good job on individual procedures and functions.


    It is more convenient and faster than paging through this kind of online documentation:


    System.TArray - RAD Studio API Documentation


    Structured Types (Delphi) - RAD Studio


    Example of ChatGPT as a program manual --


    ME: Describe the TArray


    In Delphi, a TArray is a dynamic array that provides a set of methods and properties for creating, manipulating, and managing arrays. A dynamic array is an array that is resized at runtime. The size of a dynamic array is not fixed and can be changed during the lifetime of the array.


    TArray is a generic type, which means that it is defined with a type parameter that specifies the type of elements it contains. For example, TArray<Integer> is a dynamic array of integers, and TArray<string> is a dynamic array of strings.


    Here are some of the methods and properties of TArray that you can use to work with dynamic arrays in Delphi:

    • Length: Returns the number of elements in the array.
    • Low: Returns the index of the first element in the array.
    • High: Returns the index of the last element in the array. . . .




    ME: Can you have a Tarray of user-defined records?


    Yes, you can have a TArray of user-defined records in Delphi. A record is a data type that represents a composite data structure with a set of named fields, each of which has a specific type. Records are useful for storing data that belongs together and is often used as a simple alternative to classes.


    To create a TArray of user-defined records, you need to define the record type first and then use it as the type parameter of TArray. Here is an example of how to define a record type and create a TArray of records in Delphi:


    type
      TPerson = record
        Name: string;
        Age: Integer;
      end;

    var
      People: TArray<TPerson>;

    begin
      // Create an array of 3 records
      SetLength(People, 3);
      People[0].Name := 'John';
      People[0].Age := 25;
      People[1].Name := 'Jane';
      People[1].Age := 30;
      People[2].Name := 'Bob';
      People[2].Age := 35;
    end;


    You can then use the methods and properties of TArray to manipulate the elements of the array, just like you would with any other type of dynamic array. For example, you can use the Length, SetLength, Clear, Sort, and Reverse methods to work with the array of records.



    var
      People: TArray<TPerson>;

    begin
      // Create an array of 3 records
      SetLength(People, 3);
      People[0].Name := 'John';
      People[0].Age := 25;
      People[1].Name := 'Jane';
      People[1].Age := 30;
      People[2].Name := 'Bob';
      People[2].Age := 35;

      // Sort the array by age in ascending order
      TArray.Sort<TPerson>(People, TComparer<TPerson>.Construct(
        function(const L, R: TPerson): Integer
        begin
          Result := L.Age - R.Age;
        end
      ));
    end;
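    One caveat about that sort example: it compiles only if the right units are in scope, which ChatGPT did not mention. TArray.Sort is declared in System.Generics.Collections, and TComparer<T> in System.Generics.Defaults, so a complete program needs something like:


    uses
      System.Generics.Collections,  // TArray.Sort<T>
      System.Generics.Defaults;     // TComparer<T>.Construct


    Also, Length, Low and High are intrinsic routines that work on any dynamic array -- Length(People), High(People) -- not properties of TArray itself.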

  • Barry at Sandwiches of History has been putting himself in the firing line for a while.


    External Content www.youtube.com

  • What’s the future of generative AI? An early view in 15 charts (McKinsey report, August 25, 2023)


    Generative AI has hit the ground running—so fast that it can feel hard to keep up. Here’s a quick take pulled from our top articles and reports on the subject.

    Since the release of ChatGPT in November 2022, it’s been all over the headlines, and businesses are racing to capture its value. Within the technology’s first few months, McKinsey research found that generative AI (gen AI) features stand to add up to $4.4 trillion to the global economy—annually.


    These charts help us to understand:


    What will the technology be good at, and how quickly?

    What types of jobs will gen AI most affect?

    Which industries stand to gain the most?

    What activities will deliver the most value for organizations?

    How do—and will—workers feel about the technology?

    What safeguards are needed to ensure responsible use of gen AI?

    In this visual Explainer, we’ve compiled all the answers we have so far—in 15 McKinsey charts. We expect this space to evolve rapidly and will continue to roll out our research as that happens.


    What’s the future of generative AI? An early view in 15 charts
    www.mckinsey.com

  • ChatGPT is not very good at logic, or at identifying unusual patterns. It seems to have been trained to write sonnets, but it failed to suss out the logical pattern I gave it in this exchange (a short sketch that checks the pattern mechanically follows the transcript):

    ME:


    Here is a list of well known people:


    Harold Lloyd
    Lloyd George
    George Washington
    Washington Irving


    Add ten more names to this list



    ChatGPT

    Certainly! Here are ten more well-known people to add to your list:


    Marilyn Monroe
    Monroe Doctrine
    Doctrine of Confucius
    Confucius
    Thomas Jefferson
    Jefferson Davis
    Davis Beckham
    David Bowie
    Bowie knife
    Knife Party


    ME:

    That's good, but some of the items on your list are not people. They are objects, such as a Bowie knife. Try again, with only well-known people this time.


    ChatGPT

    Of course! Here are ten more well-known people to add to your list:


    Marilyn Monroe
    George Orwell
    Charles Dickens
    Abraham Lincoln
    Winston Churchill
    Albert Einstein
    Benjamin Franklin
    Martin Luther King Jr.
    Leonardo da Vinci
    Pablo Picasso


    ME:

    That's not quite what I am looking for. The last name of one well-known person should be the first name of the next person on the list.


    ChatGPT

    I see what you mean now. My previous responses didn't follow that pattern correctly. Here's a new list of well-known people that follows the pattern you've mentioned:


    Harold Lloyd
    Lloyd George
    George Washington
    Washington Irving
    Irving Berlin
    Berlin Wall
    Wallis Simpson
    Simpson (Homer Simpson)
    Homer (the ancient Greek poet)
    Poet Laureate


    ME:

    Pretty good! A few of those are not people, but you have the idea.


    ChatGPT

    I apologize for the oversight. Thank you for pointing that out. Here's a corrected list of well-known people that strictly follows the pattern:


    Harold Lloyd
    Lloyd George
    George Washington
    Washington Irving
    Irving Fisher
    Fisher Stevens
    Stevens Johnson
    Johnson & Johnson (company founders: Robert Wood Johnson I, James Wood Johnson, Edward Mead Johnson)
    Edward Johnson
    Johnson Beharry (Victoria Cross recipient)


    ME:

    Johnson & Johnson is funny! It is the same name twice. Clever of you.



    ChatGPT

    I'm glad you enjoyed the play on words with "Johnson & Johnson." . . .
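    Incidentally, the pattern is mechanical enough that a short routine can check a candidate list. A minimal Delphi sketch (my illustration; it treats the last space-separated word as the surname, so entries like "Martin Luther King Jr." would need special handling):


    uses
      System.SysUtils;

    // Returns True when the last word of each entry matches the first
    // word of the next entry, e.g. 'Harold Lloyd' -> 'Lloyd George'.
    function IsValidChain(const Names: TArray<string>): Boolean;
    var
      I: Integer;
      Prev, Next: TArray<string>;
    begin
      Result := True;
      for I := 0 to High(Names) - 1 do
      begin
        Prev := Names[I].Split([' ']);
        Next := Names[I + 1].Split([' ']);
        if not SameText(Prev[High(Prev)], Next[0]) then
          Exit(False);
      end;
    end;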


  • JCMNS 37 is now indexed and added to ChatGPT. ChatGPT is still having problems finding papers, as shown in the example below. It is disappointing.


    👋 Hello! What can I do for you today?

    Are there replications or confirmations of Mastromatteo's work?

    I'm sorry, but the dataset does not provide any information about replications or confirmations of Mastromatteo's work.



    Discuss Mastromatteo's experiments with a laser beam?

    In 2016, Ubaldo Mastromatteo conducted an experiment where he directed a laser beam on a thin film of palladium deposited on a silicon oxide substrate in H2 and D2 atmospheres. He used a 5 mW 650 nm red pointer laser for this purpose. The experiment lasted for 3 months. The surprising outcome was the formation of many new elements in a particular 180 µm diameter spot that were not there before. In some parts of the spot, the palladium had completely disappeared. The experiments were analyzed by SEM and neutrons were potentially observed with CR-39.


    Discuss Biberian's replication of Mastromatteo's laser experiment.


    I'm sorry, but the dataset does not provide any information about Biberian's replication of Mastromatteo's laser experiment.


    Discuss Biberian's paper "Transmutation of Palladium Induced by Laser Irradiation."

    In the paper "Transmutation of Palladium Induced by Laser Irradiation," Biberian, along with Forsley and Mosier-Boss, conducted an experiment similar to Ubaldo Mastromatteo's. They directed a laser beam on a thin film of palladium deposited on a silicon oxide substrate in H2 and D2 atmospheres. The experiment lasted for 3 months and used a 5 mW 650 nm red pointer laser. They found that in a particular 180 µm diameter spot, many new elements appeared that were not there before, and in some parts of the spot, the palladium had completely disappeared. The experiments were analyzed by SEM and neutrons were potentially observed with CR-39.

  • Moved two posts from the news thread to this thread, as it is the more proper place for this discussion around the ChatGPT shenanigans.

    I certainly Hope to see LENR helping humans to blossom, and I'm here to help it happen.

  • I read the paper yesterday and if I am not mistaken, Biberian is the only author, so the bot is also attributing authorship to others very creatively.

    No, this is not a hallucination. The paper is:


    Biberian, J.P., L. Forsley, and P.A. Mosier-Boss, Transmutation of Palladium Induced by Laser Irradiation. J. Condensed Matter Nucl. Sci., 2023. 37: p. 9-22.


  • No, this is not a hallucination. The paper is:


    Biberian, J.P., L. Forsley, and P.A. Mosier-Boss, Transmutation of Palladium Induced by Laser Irradiation. J. Condensed Matter Nucl. Sci., 2023. 37: p. 9-22.


    I stand corrected then, thanks!

    I certainly Hope to see LENR helping humans to blossom, and I'm here to help it happen.

  • I find this story on the BBC website a little disturbing.


    AI used to target kids with disinformation
    The BBC finds YouTube videos in more than 20 languages spreading disinformation disguised as science content.
    www.bbc.co.uk


    Not just because of the subject matter - i.e. that AI-generated "pseudo-info" videos are made, and get seen by kids - but that the BBC story is itself targeted at kids (through the CBBC "Newsround" channel). Granted, the BBC (Auntie Beeb) has been doing this, through the children's TV "Newsround" programme, since the early 1970s - and, I guess, has generally steered a fine line between common sense and what some people might have regarded as indoctrination.


    In some ways this could be seen as a service to parents everywhere across the UK - doing a job of "educating" children in worldly matters, which the parents themselves might have neglected to do (through laziness, or their own ignorance). However, this story seems to be emphasising a world view in which children are supposed to disbelieve everything they see unless, in some way, the information is officially sanctioned.


    Of course we shouldn't be telling kids that everything they read, or see, is to be believed implicitly. After all, the use and misuse of disinformation is an age-old problem. But this BBC story just feels a bit too cringe-worthy to me.

    "The most misleading assumptions are the ones you don't even know you're making" - Douglas Adams


  • This article asks:


    "Do large language models know what they are talking about?
    Large language models seem to possess the ability to reason intelligently, but does that mean they actually know things?"


    Here is my answer, which I have posted here before. Now that I have experience using LLM and tweaking parameters such as AI temperature, I have more confidence in my answer.


    LLM do not know what they are talking about. They do not actually know things. Not in the same sense that people, other primates or dogs know things. They "know" things in the same sense that bees know structural engineering. As I have said, a colony of bees uses its collective brain tissue to construct a nest. The nest is a marvelous structure, as good as the best human structural engineer could come up with using those materials. It is compact, waterproof, multifunctional, defensible against enemies, and well designed to be cooled by bees flapping their wings. It is definitely a product of the bees' intelligence. However, bees have absolutely no conscious knowledge of structural engineering, or defense, or how to cool a structure. Their brains arrive at these designs by instinct. This path is totally different from the conscious approach used by a human structural engineer. Or by an ancient human craftsman making some clever object such as a tent, or a compound bow.


    Birds and airplanes both fly. They both obey the laws of aerodynamics. Birds do this by instinct, and airplanes by the conscious knowledge of aircraft designers, and the conscious and unconscious knowledge of human pilots. The latter resembles our ability to swim, or play baseball. It is something we learn by doing. We know how to do similar things by instinct, but swimming well or flying an airplane is learned behavior, not very natural. (Many pilots have said the initial phase of learning to fly an airplane resembles swimming. It is physically similar because you are moving through fluids, and changing the thrust of fluid and angle of attack to go where you intend to go.)


    Evolution often generates a body shape or an ability in different animals starting from totally different species. Such as fish and whales. They both swim, and they are both adapted to living in the ocean, but they started off looking completely different. LLM can "understand" and "answer" questions in human language, producing the same kind of result that a person does. It does these things using methods that are totally different from what we use. As different as the bees versus a human structural engineer. Sometimes it generates an answer we would say is "wrong," "absurd" or an "hallucination." But these qualities do not exist for the LLM computer program.


    Future AI programs will probably resemble human intelligence more closely than LLM do. They are certain to have more logic. They are already acquiring this, with things like the Wolfram plugin. They will probably have some sort of direct sensations of the real world, such as video, audio and touch. They will use these sensations to generate actionable knowledge of the real, three-dimensional world outside their digital databases. This will greatly reduce wrong and absurd answers, and hallucinations. I do not think it is likely they will have emotions. Not unless someone programs in emotions, which I think would be a mistake, and possibly even dangerous. I doubt their thought processes will resemble human thought. They will still be as alien to us as bees are. But they will understand us better, and we will probably understand them better. They will be enormously useful, in ways we cannot imagine today. As useful as electricity and computers turned out to be, after decades of development.


    I do not see any reason to make AI resemble human intelligence. Probably, it will work better with a hybrid model incorporating LLM and other techniques. Making an AI that works like a human brain would be like trying to make a human-carrying airplane that works by flapping its wings -- an ornithopter -- rather than using propellers. There are such things.


    External Content www.youtube.com


    There is even a human-powered one:


    External Content www.youtube.com


    However, at present there is no practical use for ornithopters. Someone told me they are potentially quieter and perhaps safer than propeller-driven aircraft, so perhaps they will be useful in the future.

