The LENR-CANR ChatGPT is ON LINE!

  • She says she cannot remember exactly what they say because they are tokenized.

    [Embedded YouTube video]

    "The most misleading assumptions are the ones you don't even know you're making" - Douglas Adams


  • [Embedded YouTube video]


    What do you reckon? Legit "impersonation", or ripping off an entertainer's IP?


    Note that impressionist entertainers normally only perform short 'skits' while "in character". An hour-long show is a bit different.


    EDIT: The video above has now been taken offline. I suspect that answers the question....

    "The most misleading assumptions are the ones you don't even know you're making" - Douglas Adams


  • Hahaha. From Slashdot:


    Quote

    The estate of George Carlin has filed a federal lawsuit against the comedy podcast Dudesy for an hour-long comedy special sold as an AI-generated impression of the late comedian. But a representative for one of the podcast hosts behind the special now admits that it was actually written by a human. In the lawsuit, filed by Carlin manager Jerold Hamza in a California district court, the Carlin estate points out that the special, "George Carlin: I'm Glad I'm Dead," (which was set to "private" on YouTube shortly after the lawsuit was filed) presents itself as being created by an AI trained on decades worth of Carlin's material. That training would, by definition, involve making "unauthorized copies" of "Carlin's original, copyrighted routines" without permission in order "to fabricate a semblance of Carlin's voice and generate a Carlin stand-up comedy routine," according to the lawsuit.


    Despite the presentation as an AI creation, there was a good deal of evidence that the Dudesy podcast and the special itself were not actually written by an AI, as Ars laid out in detail this week. And in the wake of this lawsuit, a representative for Dudesy host Will Sasso admitted as much to The New York Times. "It's a fictional podcast character created by two human beings, Will Sasso and Chad Kultgen," spokeswoman Danielle Del told the newspaper. "The YouTube video 'I'm Glad I'm Dead' was completely written by Chad Kultgen." Regardless of that admission, Carlin estate lawyer Josh Schiller told the Times that the lawsuit would move forward. "We don't know what they're saying to be true," he said. "What we will know is that they will be deposed. They will produce documents, and there will be evidence that shows one way or another how the show was created."

    "The most misleading assumptions are the ones you don't even know you're making" - Douglas Adams

  • OK, well, that justifies my arguments with an actualised risk.
    Your cloud providers do not give a stuff about your data and will carelessly donate it to all and sundry! (jesting, tongue in cheek)
    If you want to maximise your security, bring your data home and run your models locally, where your security interests are aligned.
    ;)

  • Re: the fake AI George Carlin video:


    Interestingly, there is a section on the GC Wikipedia page that says the following:


    Quote

    Many online quotes have been falsely attributed to Carlin, including various joke lists, rants, and other pieces. The website Snopes, which debunks urban legends and myths, has addressed these hoaxes. Many of them contain material that runs counter to Carlin’s viewpoints; some are especially volatile toward racial groups, gay people, women, the homeless, and other targets. Carlin was aware of this and debunked the quotes by writing on his website, “Here’s a rule of thumb, folks: nothing you see on the Internet is mine unless it comes from one of my albums, books, HBO specials, or appeared on my website. […] It bothers me that some people might believe that I would be capable of writing some of this stuff.”


    In 2011, “Weird Al” Yankovic referenced the hoaxes in his song “Stop Forwarding That Crap to Me” with the lyric, “And by the way, your quotes from George Carlin aren’t really George Carlin.”


    Given the above, I can see that claiming new material as being “just like from a George Carlin show” could be particularly irksome, and offensive, to the owners of his deceased estate - regardless of who (or what) had written it.


    "The most misleading assumptions are the ones you don't even know you're making" - Douglas Adams


  • Your cloud providers do not give a stuff about your data and will carelessly donate it to all and sundry!

    That's not true. That is nonsense. You pay cloud providers. They give you a contract guaranteeing they will protect the data and keep it secret. If they gave it to all and sundry, no corporation, hospital or other institution would store any data with them, or use their computing services. They would be sued for billions of dollars. It would be an easy win in court. They would be bankrupt in no time.


    This is like saying doctors don't care if their treatments cause harm. Yes, they do care. Not because they are good people but because they will be sued for malpractice and they will lose their licenses and their livelihood.


    When you give data to Facebook, you pay them nothing. I don't know what their contract looks like, but as a practical matter, the data belongs to them, not to you. But when you store data in a cloud provider, it is your property, just as much as your stocks and bonds at a broker remain your property. The broker cannot give them away to "all and sundry."

  • I regret to say the LENR-CANR ChatGPT is now OFF LINE. I deleted it. It was costing a lot and few people were using it.

    I'm sorry to hear that Jed.
    It should be relatively easy to create your own LLM using an open-source model run locally.
    You will just need a dedicated GPU with at least 6 GB of VRAM and maybe 20 GB of storage. The challenge is being able to serve an endpoint for a WebUI.

    Something I am actively working on to sort out.
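
    For anyone curious, here is a minimal sketch of the kind of endpoint I mean, assuming llama-cpp-python and FastAPI (the GGUF model path is just a placeholder, not a specific recommendation):

    ```python
    # Minimal sketch: serve a local GGUF model as a simple HTTP endpoint.
    # Assumes: pip install llama-cpp-python fastapi uvicorn pydantic
    from fastapi import FastAPI
    from pydantic import BaseModel
    from llama_cpp import Llama

    app = FastAPI()

    # Placeholder path -- point this at whatever quantized model you downloaded.
    llm = Llama(model_path="models/mistral-7b-instruct.Q4_K_M.gguf", n_ctx=4096)

    class Prompt(BaseModel):
        text: str
        max_tokens: int = 256

    @app.post("/generate")
    def generate(prompt: Prompt):
        # Plain completion call; a WebUI front end would just POST to this route.
        result = llm(prompt.text, max_tokens=prompt.max_tokens)
        return {"completion": result["choices"][0]["text"]}

    # Run with e.g.: uvicorn server:app --port 8000 (if this file is saved as server.py),
    # then point the WebUI at the /generate route.
    ```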


    I just made this "Low Energy Nuclear Reactors Autonomous Research Agent" (LENR ARA) to play around with if anyone is interested in using and testing it.
    There are many models and experiments I have been doing, so it was pretty easy to throw this OpenAI one together after reading your news.

    It should be able to do several tasks like data visualization, experimental suggestions, and mathematical formulations of the various theories of LENR, but you might need ChatGPT+ Subscription to use it.

    It should be able to make visualizations of the knowledge datasets as well. If anyone has suggestions, feature or integration ideas, or notices missing information, let me know.

    I'm going to attempt to integrate it with lenrdashboard.com to see if I can implement a retrieval function without it using too many tokens (a rough sketch of what I mean is at the end of this post).
    I am trying to move towards local models, but there are hardware restrictions as well as integration problems in delivering endpoints without taking up too many resources.
    Hopefully this cute little GPT I threw together is helpful to the community.
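
    The retrieval sketch mentioned above: embed the document chunks once, then at query time only paste the few chunks nearest to the question into the prompt, so the token count stays small. This assumes sentence-transformers; the embedding model name and the chunk texts are just placeholders:

    ```python
    # Rough sketch: cheap local retrieval so only a few relevant chunks go into the prompt.
    # Assumes: pip install sentence-transformers numpy
    import numpy as np
    from sentence_transformers import SentenceTransformer

    embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small model, runs fine on CPU

    # In practice these chunks would be pulled from the dashboard / paper library.
    chunks = [
        "Excess heat measurements in Pd-D electrolysis cells ...",
        "Helium-4 production correlated with excess heat ...",
        "Calorimetry error analysis and calibration methods ...",
    ]
    chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

    def retrieve(question: str, top_k: int = 2) -> list[str]:
        """Return the top_k chunks most similar to the question."""
        q_vec = embedder.encode([question], normalize_embeddings=True)[0]
        scores = chunk_vecs @ q_vec  # cosine similarity, since vectors are normalized
        best = np.argsort(scores)[::-1][:top_k]
        return [chunks[i] for i in best]

    # Only these few chunks get pasted into the model prompt, keeping token usage low.
    print(retrieve("What correlates with excess heat?"))
    ```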

  • It should be relatively easy to create your own LLM using an open-source model run locally. [...] Hopefully this cute little GPT I threw together is helpful to the community.

    An open-source base model for LENR that others could use for their own models would be good, but it would need a lot more GPUs, I think.

    As LENR-CANR ChatGPT is now no more, I have just removed the CANR page from the forum satellite site, LENR-News.com.

  • An open-source base model for LENR that others could use for their own models would be good, but it would need a lot more GPUs, I think.

    It's a goal of mine 😊


    We can get pretty good results from using a mix of several 7B models, each trained on a specific domain (see the routing sketch at the end of this post).


    If you have tried the Mistral 7B model, it's pretty damn good for a local LLM with only 7 billion parameters.

    I have had some success with many different models besides OpenAI's ChatGPT models, but fine-tuning these models and delivering an endpoint to other users has been a challenge.
    I can do it if I keep the code base simple, but once I start implementing more complex functions with APIs and Python library tools, it becomes hard to get everything working properly.
    All part of the process of engineering a thing, I guess. 😅
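
    The routing sketch mentioned above: one cheap way to combine several domain-specialist 7B models is to embed a one-line description of each specialist and send the question to whichever is closest. The model names and descriptions below are made up for illustration; the chosen name would then be used to call whichever local endpoint hosts that model:

    ```python
    # Sketch: route a question to one of several domain-specialist 7B models.
    # Assumes: pip install sentence-transformers numpy
    import numpy as np
    from sentence_transformers import SentenceTransformer

    embedder = SentenceTransformer("all-MiniLM-L6-v2")

    # Hypothetical specialists -- each would be a separately fine-tuned 7B model.
    specialists = {
        "calorimetry-7b": "heat measurement, calorimetry, calibration, error analysis",
        "materials-7b": "palladium loading, deuterium, metallurgy, surface preparation",
        "theory-7b": "theoretical models, electron screening, phonon coupling, nuclear physics",
    }
    names = list(specialists)
    desc_vecs = embedder.encode(list(specialists.values()), normalize_embeddings=True)

    def pick_model(question: str) -> str:
        """Return the name of the specialist model best matched to the question."""
        q_vec = embedder.encode([question], normalize_embeddings=True)[0]
        return names[int(np.argmax(desc_vecs @ q_vec))]

    # The returned name decides which local model endpoint actually gets the prompt.
    print(pick_model("How should I estimate calorimeter error bars?"))
    ```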
