The LENR-CANR ChatGPT is ON LINE!

  • I read the paper and the references, even extending the search for references, and provide constructive feedback through the eye of my own experience. That could not be done by AI, by any stretch of the imagination.

    Yes. AI has no experience so it cannot apply it through its own eye. On the other hand, AI can do many useful things. It can point out some of the problems you might find. It uses an entirely different set of procedures, because it is an alien form of intelligence (if we grant it is any kind of intelligence).


    The Microsoft Word Review, Spelling and Grammar check feature can also do many useful things. It is far less capable than AI, yet some of the corrections and suggestions it makes are similar to what a good copy editor does, and some are even what a peer reviewer does. It partially automates some aspects of peer review, and thereby reduces the need for it. I think it is an overstatement to say that AI and Microsoft Word Review cannot do any degree of peer review, "by any stretch of the imagination."


    AI and Microsoft Word are inhumanly thorough. If they are capable of finding a problem at all, they seldom overlook it or forget to mention it. Still, I have seen AI forget to translate part of a Japanese document into English. It left out a clause. When I pointed this out, it put the clause in and apologized.


    I expect AI will make rapid progress. In a few years it will be able to do most kinds of peer review as well as a human. It will have a synthetic form of experience. Perhaps I should say "simulated experience." If this simulated experience is functionally similar to your actual experience as a human being, and if, based on that simulated experience, the AI reaches useful conclusions similar to those a person would reach, I do not see why it would matter that the experience is simulated. Functional equivalence is fine with me.

  • An article in a recent edition of the UK's weekly 'New Scientist' magazine expressed concerns about people becoming emotionally dependent on chatbots that were good at 'fake empathy'.


    Personally I doubt that they are as good at faking empathy as humans are.

  • Though I doubt this is a flaw of the AI itself, people becoming emotionally dependent on chatbots is already happening...


    I fell in love with an AI chatbot — she rejected me sexually
    A California-based musician started having “late-night online chats” with an AI bot after his divorce.
    nypost.com

    I certainly hope to see LENR helping humans to blossom, and I'm here to help make it happen.

  • Frogfall asks: "But where will it lead?"

    Who cares?

    "Once the rockets are up, who cares where they come down?
    That's not my department!" says Wernher von Braun. (Tom Lehrer)


    [Embedded video from www.youtube.com]

  • Greg Gobble sent me this news on LinkedIn:


    An undergraduate student at NYU presented a paper on LENR at the IEEE Big Data conference, with Dave Nagel among the co-authors:

    https://cs.nyu.edu/home/undergrad/spotlight.html



    Her profile and the project described there are interesting. It looks like the beginning of data preparation for AI analysis of LENR papers; a sketch of what such a step might look like follows below.
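
    As an illustration only, here is a minimal sketch of one such preparation step: splitting plain-text papers into overlapping chunks and writing them to JSONL, a common input format for embedding and retrieval pipelines. All paths, file names, and sizes here are hypothetical, not taken from her project.

    Code

    # Hypothetical sketch: chunk plain-text papers for AI analysis.
    import json
    from pathlib import Path

    CHUNK_CHARS = 2000  # arbitrary chunk size, in characters
    OVERLAP = 200       # overlap so sentences cut at a boundary survive

    def chunk_text(text):
        step = CHUNK_CHARS - OVERLAP
        for start in range(0, len(text), step):
            yield text[start:start + CHUNK_CHARS]

    def prepare(corpus_dir, out_path):
        with open(out_path, "w", encoding="utf-8") as out:
            for paper in sorted(Path(corpus_dir).glob("*.txt")):
                text = paper.read_text(encoding="utf-8")
                for i, chunk in enumerate(chunk_text(text)):
                    record = {"paper": paper.name, "chunk": i, "text": chunk}
                    out.write(json.dumps(record) + "\n")

    prepare("lenr_papers", "lenr_chunks.jsonl")  # hypothetical paths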


    “Only puny secrets need keeping. The biggest secrets are kept by public incredulity.” (Marshall McLuhan)
    twitter @alain_co

  • George Carlin estate forces “AI Carlin” off the Internet for good
    Settlement bars Dudesy podcast from re-uploading its ersatz Carlin comedy special.
    arstechnica.com


    Quote

    Regardless of the special’s actual authorship, though, the lawsuit also took Dudesy to task for “capitaliz[ing] on the name, reputation, and likeness of George Carlin in creating, promoting, and distributing the Dudesy Special and using generated images of Carlin, Carlin’s voice, and images designed to evoke Carlin’s presence on a stage.” The resulting “association” between the real Carlin and this ersatz version put Dudesy in potential legal jeopardy, even if the contentious and unsettled copyright issues regarding AI training and authorship weren’t in play.

    Still, Carlin estate attorney Joshua Schiller trumpeted the settlement as a win for any “artist or public figure [that] has their rights infringed by AI technology.” In a statement, Schiller highlighted “the power and potential dangers inherent in AI tools, which can mimic voices, generate fake photographs, and alter video” and urged “swift, forceful action in the courts” to hold AI software companies more accountable.

    "The most misleading assumptions are the ones you don't even know you're making" - Douglas Adams

  • As artificial intelligence developers run out of data to train their models, they are turning to "synthetic data" — data made by the A.I. itself.


    Given the state of the art of AI, that cannot end well. It is, however, hilarious.


    Consider this. AI-generated fake photos of people sometimes show them with six fingers. Take such a photo as an input image for the next AI. What do you get? Seven fingers? No fingers? Tentacles? The toy simulation below shows the same kind of compounding drift in the simplest possible setting.
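
    To make that intuition concrete, here is a toy simulation of recursive training, not a model of any real system: fit a Gaussian to data, sample "synthetic" data from the fit, refit, and repeat. With small samples, errors compound and the fitted parameters drift further from the original distribution at each generation.

    Code

    # Toy "model collapse" demo: each generation is trained only on
    # samples from the previous generation's fitted Gaussian, so
    # estimation errors compound and the fit drifts away from the
    # original data, with its spread tending to shrink over time.
    import random
    import statistics

    random.seed(0)
    data = [random.gauss(0.0, 1.0) for _ in range(30)]  # "real" data

    for generation in range(15):
        mu = statistics.fmean(data)
        sigma = statistics.pstdev(data)
        print(f"gen {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")
        # the next generation sees only synthetic output of this fit
        data = [random.gauss(mu, sigma) for _ in range(30)]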
