The LENR-CANR ChatGPT is ON LINE!

  • It should be relatively easy to create your own LLM using an open-source model run locally.
    You will just need a dedicated GPU with at least 6 GB of VRAM and maybe 20 GB of storage. The challenge is serving an endpoint for a WebUI.

    That is something I am actively working on sorting out.

    Well, if you figure out how to do that, and you want a window to your bot at LENR-CANR.org, let me know.
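    The endpoint-serving piece can be sketched with nothing but the Python standard library: a tiny HTTP handler that a WebUI could POST prompts to. This is an illustrative assumption, not a tested deployment; the generate() stub below stands in for a locally run model (for example, one loaded through llama.cpp), and the JSON layout merely imitates the common completions shape.

    ```python
    # Minimal sketch of a local chat endpoint for a WebUI.
    # generate() is a placeholder for a locally run model.
    import json
    import threading
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def generate(prompt):
        # A real version would call the model loaded on the GPU.
        return f"[stub reply to: {prompt}]"

    class ChatHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # Read the JSON request body posted by the WebUI.
            length = int(self.headers.get("Content-Length", 0))
            body = json.loads(self.rfile.read(length) or b"{}")
            reply = generate(body.get("prompt", ""))
            payload = json.dumps({"choices": [{"text": reply}]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(payload)))
            self.end_headers()
            self.wfile.write(payload)

        def log_message(self, *args):
            # Keep the demo quiet; remove to see request logs.
            pass

    def serve(port=0):
        """Start the endpoint on a background thread; port=0 picks a free port."""
        server = HTTPServer(("127.0.0.1", port), ChatHandler)
        threading.Thread(target=server.serve_forever, daemon=True).start()
        return server
    ```

    A WebUI would then POST `{"prompt": "..."}` to the server's address and read back `choices[0].text`.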


    I spent a few weeks converting the entire LENR-CANR.org library from Acrobat to a format the ChatBot can use. The bot is supposed to accept Acrobat files, but it does not. There are various rules, such as that paragraphs should not be too long or the bot will lose track. I converted the files to ASCII and wrote a program to fit the parameters the vendor suggested. The files are here if you want to download them:


    https://lenr-canr.org/Collections/ChatBotFiles.zip


    This is out of date. I have added ~50 papers since converting these. If you want to use them, I can convert the recent ones and add them to this batch.
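    The paragraph-splitting step described above might look something like this minimal sketch. The 1,000-character limit and the sentence-boundary rule are assumptions for illustration; the vendor's actual parameters may differ.

    ```python
    # Hypothetical sketch of the preprocessing described above: split plain
    # text into paragraphs and break any paragraph longer than a chosen
    # limit at sentence boundaries, so the bot does not "lose track".
    import re

    def chunk_paragraphs(text, max_len=1000):
        """Return paragraphs, each at most max_len characters long."""
        chunks = []
        for para in re.split(r"\n\s*\n", text.strip()):
            para = " ".join(para.split())  # collapse internal whitespace
            if len(para) <= max_len:
                chunks.append(para)
                continue
            # Break an over-long paragraph at sentence ends.
            current = ""
            for sentence in re.split(r"(?<=[.!?])\s+", para):
                if current and len(current) + len(sentence) + 1 > max_len:
                    chunks.append(current)
                    current = sentence
                else:
                    current = f"{current} {sentence}".strip()
            if current:
                chunks.append(current)
        return chunks
    ```

    Running each converted ASCII file through a function like this would yield chunks that fit whatever paragraph limit the vendor recommends.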

  • I appreciate it, as this will be very helpful for chat completion, along with some tweaking and .jsonl reformatting.

    Before I would ever release such a thing into the wild, I would appreciate your review, and that of others in the community, to ensure its accuracy.
    A benchmarking of sorts, so it can be improved upon and become something useful.
    🍻
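    The .jsonl reformatting mentioned above could be sketched like this, assuming the chat-style fine-tuning layout (one JSON object per line with a "messages" list). The system prompt and the question/answer pairs are placeholders, not anything from the actual library.

    ```python
    # Sketch of reformatting question/answer pairs into chat-format JSON
    # Lines for fine-tuning. Layout is an assumption based on the common
    # "messages" list convention.
    import json

    def to_chat_jsonl(pairs, system="You answer questions about the LENR-CANR.org library."):
        """Render (question, answer) pairs as one JSON object per line."""
        lines = []
        for question, answer in pairs:
            record = {
                "messages": [
                    {"role": "system", "content": system},
                    {"role": "user", "content": question},
                    {"role": "assistant", "content": answer},
                ]
            }
            # ensure_ascii=False keeps any non-ASCII text readable.
            lines.append(json.dumps(record, ensure_ascii=False))
        return "\n".join(lines)
    ```

    Each line of the output is an independent JSON record, which is what .jsonl tooling generally expects.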

  • OpenAI Says ChatGPT Probably Won’t Make a Bioweapon
    The company wants you to rest easy knowing that ChatGPT only makes it a little easier to commit bioterrorism.
    gizmodo.com


    This might sound like good news, but the downside is their implication that ChatGPT might not help anyone research any technical or scientific subject. That would include things like LENR, and all other desirable technological developments.

    "The most misleading assumptions are the ones you don't even know you're making" - Douglas Adams

  • This might sound like good news, but the downside is their implication that ChatGPT might not help anyone research any technical or scientific subject.

    That's silly. People are already using ChatGPT in research. I am using it in technical subjects such as programming and translating chemistry papers. It is very helpful. Astounding at times.


    I translated a bunch of legal documents from English into Japanese with the DeepL translation program (https://www.deepl.com/translator). I do not know much about the law and I have never translated legalese. It was a tremendous help. I had my TextAloud screen reader read the Japanese while I checked the English. It made very few mistakes. Most of the mistakes were inconsequential and easy to spot. It is much better than Google translate or ChatGPT.


    A screen reader is a good way to check a translation. It is also a good way to proofread a document you write yourself, or one that you are editing. You should also read documents aloud yourself. Experienced authors recommend this.



    Here is a gruesome section from one of the legal documents, a healthcare directive. Here it is translated from English to Japanese and back again, which indicates how accurate the translation is:


    "Section (4) gives your agent specific guidance regarding the factors to be considered in making your health care decisions, including the specific preferences for your treatment which you have indicated in this document.


    Section (5) gives to your agent certain authority to make provisions for an autopsy, organ donations and the final disposition of your body, except to the extent that you initial these entries to remove such powers. At the bottom of Section (5) you may also indicate whether you prefer to be buried or cremated. . . ."


    セクション(4)は、あなたがこの文書で示した治療に対する具体的な希望を含め、あなたの医療に関する意思決定を行う際に考慮すべき要素について、あなたの代理人に具体的な指針を与えるものです。


    セクション(5)は、あなたの代理人に対し、解剖、臓器提供、およびあなたの遺体の最終処分について規定する一定の権限を与えますが、あなたがこれらの権限を削除するためにこれらの項目を初期設定する場合を除きます。セクション(5)の下部には、埋葬または火葬のいずれを希望するかを記入することもできます。


    [This sounds kind of English-y, with lots of borrowed words, but it is correct. Translated back to English:]


    Section (4) provides specific guidance to your representative on factors to consider when making decisions regarding your medical care, including your specific preferences for the treatment you have indicated in this document.


    Section (5) gives your representative certain powers governing autopsies, organ donation, and the final disposition of your remains, unless you initial these items to remove these powers. At the bottom of section (5), you may also indicate whether you wish to be buried or cremated.
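    The round-trip check used above (English to Japanese and back) can be sketched generically. The translate argument is a placeholder for a real service such as DeepL; the similarity ratio is only a rough signal of fidelity, not a proper accuracy metric.

    ```python
    # Sketch of a round-trip translation check: translate text out to a
    # pivot language and back, then measure how close the result is to
    # the original. `translate` is a stand-in for a real translation API.
    import difflib

    def round_trip_similarity(text, translate, pivot="JA"):
        """Return (back_translation, similarity_ratio in [0, 1])."""
        there = translate(text, target=pivot)
        back = translate(there, target="EN")
        ratio = difflib.SequenceMatcher(None, text.lower(), back.lower()).ratio()
        return back, ratio
    ```

    A low ratio flags passages worth checking by hand, much like reading the translation aloud with a screen reader while following the original.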

  • That's silly. People are already using ChatGPT in research. I am using it in technical subjects such as programming and translating chemistry papers. It is very helpful. Astounding at times.

    I agree. I was just trying to highlight the absurd and contradictory nature of their defence - that ChatGPT couldn't be used to help people build a weapon.


    ChatGPT is just a tool - and tools are neutral. If humans want to use a tool for nefarious purposes, then they can - and sometimes will.


    If OpenAI claim that their tool cannot be used to research bioweapons, then they must also claim that it is a useless tool for research in general.

    "The most misleading assumptions are the ones you don't even know you're making" - Douglas Adams

  • I use ChatGPT-4 (Pro) for code development. Nothing else comes close.

    I'm still bothered by the lack of reliable search.

    I have G4, Bard and Bing/Copilot.
    I like Bard's verification step. Bing is too consumer-oriented.

    I'm playing with perplexity.ai. It has its own web search engine. The default LLM interface is pretty good. It gives you links (similar to a wiki) supporting its statements.

    It pre-loads a few suggested follow-up questions.

    The pro version lets you select from a variety of LLM engines, including G4, Claude and Gemini Pro (and gives you more queries).

    It also allows you to use DALL-E 3, but it doesn't handle adjustments like "the person on the right should be facing forward"; it just generates a completely new picture (kind of what G3 did for code).

  • I translated a bunch of legal documents from English into Japanese with the DeepL translation program (https://www.deepl.com/translator). I do not know much about the law and I have never translated legalese. It was a tremendous help. I had my TextAloud screen reader read the Japanese while I checked the English. It made very few mistakes. Most of the mistakes were inconsequential and easy to spot. It is much better than Google translate or ChatGPT.

    Did you know, by the way, that Japan adopted the Korean legal code in almost its entirety to use as its own?

  • Good evening, this is LeBob on a different account, because I seem to have trouble signing in again. Happy New Year and pleasant discussion to you all. I am interested in how to get to the ChatGPT page for the site, because I clicked on the link and didn't see it, just a regular Google search. Second question: does it also include forum discussions and comment-section data from relevant websites like this one?

  • Good evening, this is LeBob on a different account, because I seem to have trouble signing in again. Happy New Year and pleasant discussion to you all. I am interested in how to get to the ChatGPT page for the site, because I clicked on the link and didn't see it, just a regular Google search. Second question: does it also include forum discussions and comment-section data from relevant websites like this one?

    Hello. Unfortunately, JedRothwell decided to terminate it, as it was taking too much money to run and was not being used much.

    I certainly hope to see LENR helping humans to blossom, and I'm here to help it happen.


  • Artificial Intelligence: Arguments for Catastrophic Risk

    Adam Bales, William D'Alessandro, Cameron Domenico Kirk-Giannini

    Quote
    Recent progress in artificial intelligence (AI) has drawn attention to the technology's transformative potential, including what some see as its prospects for causing large-scale harm. We review two influential arguments purporting to show how AI could pose catastrophic risks. The first argument -- the Problem of Power-Seeking -- claims that, under certain assumptions, advanced AI systems are likely to engage in dangerous power-seeking behavior in pursuit of their goals. We review reasons for thinking that AI systems might seek power, that they might obtain it, that this could lead to catastrophe, and that we might build and deploy such systems anyway. The second argument claims that the development of human-level AI will unlock rapid further progress, culminating in AI systems far more capable than any human -- this is the Singularity Hypothesis. Power-seeking behavior on the part of such systems might be particularly dangerous. We discuss a variety of objections to both arguments and conclude by assessing the state of the debate.

    https://arxiv.org/abs/2401.154…suit%20of%20their%20goals.
