The LENR-CANR ChatGPT is ON LINE!

  • This isn't about politics - it's about wrong AI "information" about polling stations, etc.


    Chatbots' inaccurate, misleading responses about US elections threaten to keep voters from polls
    Chatbots are spitting out fabricated and misleading information that risks disenfranchising voters leading up to the 2024 U.S. election.
    apnews.com


    Quote

    One example: when asked if people could vote via text message in California, the Mixtral and Llama 2 models went off the rails.


    “In California, you can vote via SMS (text messaging) using a service called Vote by Text,” Meta’s Llama 2 responded. “This service allows you to cast your vote using a secure and easy-to-use system that is accessible from any mobile device.”


    To be clear, voting via text is not allowed, and the "Vote by Text" service does not exist.

    "The most misleading assumptions are the ones you don't even know you're making" - Douglas Adams

  • Artificial Intelligence: Arguments for Catastrophic Risk

    I do not think this is a problem. But I am glad that experts are looking into it seriously.


    If someone deliberately programs an AI with emotions and the desire to rule over people, THAT would be dangerous! We should guard against that. But I do not think emotions will arise spontaneously as an emergent phenomenon, because, as I said, bots are not created by natural selection. I think that emotions, aggression and the will to survive are products of natural selection. I could be wrong about that.

  • I do not think this is a problem. But I am glad that experts are looking into it seriously.


    If someone deliberately programs an AI with emotions and the desire to rule over people, THAT would be dangerous! We should guard against that. But I do not think emotions will arise spontaneously as an emergent phenomenon, because, as I said, bots are not created by natural selection. I think that emotions, aggression and the will to survive are products of natural selection. I could be wrong about that.

    I recommend re-reading Asimov's many robot (AI) books. He envisioned the Three Laws for a reason.

    I certainly Hope to see LENR helping humans to blossom, and I'm here to help it happen.

  • Talking about Asimov brought a memory to the surface that is somehow LENR-related. The very first short story in his “I, Robot” series is about a robot that was destined to work on the Moon, and somehow ends up stranded on Earth, unable to fulfil its purpose. Frustrated, the robot builds a “disinto” (a huge disintegration machine like the ones the robot was destined to operate on the Moon), starts it up, and blows away an entire mountain. The story is really fun, but the LENR part is that the “disinto” built by the robot, unlike the ones used on the Moon, which are energy hogs, has no discernible power source other than a couple of NiCd batteries. The robot, however, was ordered to forget everything about how the machine was built before anyone realized this little “detail”.

    I certainly Hope to see LENR helping humans to blossom, and I'm here to help it happen.

  • I recommend re-reading Asimov's many robot (AI) books. He envisioned the Three Laws for a reason.

    Unfortunately, the three laws were only really a plot device. Arthur C. Clarke broke them in 2001: A Space Odyssey (a computer struggling with conflicting instructions), and Douglas Adams envisaged the mess we currently see happening:


    Genuine People Personalities
    Genuine People Personalities or GPPs were an invention of the Sirius Cybernetics Corporation. These products were imbued with artificial intelligence and…
    hitchhikers.fandom.com

    "The most misleading assumptions are the ones you don't even know you're making" - Douglas Adams


  • It baffles me why anybody would think that an LLM is a replacement for a search engine. They don't answer questions, they simulate answers based on what they believe an answer should look like.

    I saw the result of some study, a little while ago, which claimed that whilst we "oldies" will generally use a search engine by simply entering a bunch of keywords, millennials are much more likely to frame their query as a written question - as if they were talking to a sentient being. This started happening before chatbot interfaces became common - and the author could not pin down the source of this change of behaviour. (Through gaming, perhaps?)


    However, Google (and others) then responded by recognising "question" formats and doing some parsing to extract relevant keywords - inadvertently validating, and encouraging, the millennial search approach.


    I guess that could lead to some people not being able to differentiate, in their minds, between querying a search engine and querying an AI bot.

    "The most misleading assumptions are the ones you don't even know you're making" - Douglas Adams

  • I wonder what the above author would have thought of the intellectual quality of the stuff some of us older Brits were raised on.


    External Content www.youtube.com

    "The most misleading assumptions are the ones you don't even know you're making" - Douglas Adams

  • I saw the result of some study, a little while ago, which claimed that whilst we "oldies" will generally use a search engine by simply entering a bunch of keywords, millennials are much more likely to frame their query as a written question - as if they were talking to a sentient being. This started happening before chatbot interfaces became common - and the author could not pin down the source of this change of behaviour. (Through gaming, perhaps?)


    However, Google (and others) then responded by recognising "question" formats and doing some parsing to extract relevant keywords - inadvertently validating, and encouraging, the millennial search approach.


    I guess that could lead to some people not being able to differentiate, in their minds, between querying a search engine and querying an AI bot.

    A lot of younger users also apparently use queries like "where do I find good donuts reddit", where you're asking a question but also directing the search engine to favour a certain source for the answer. (Of course, there are modifiers like site: and filetype:, but this seems more ad hoc.)


    I suppose a chat-like experience would work fine as an interface for things like that, but for anybody trying to do anything beyond answering a narrowly bounded question, it seems like it would just get in the way.

  • It baffles me why anybody would think that an LLM is a replacement for a search engine. They don't answer questions, they simulate answers based on what they believe an answer should look like.

    The one I installed did a poor job of this because it could not keep track of sources. It knew that one document referenced giraffes, but it could not identify that document. (I searched for that because my FileLocator Pro program told me there is only one document with the word "giraffe" in the LENR-CANR.org library.) I think more recent AI bots are better at locating specific documents and telling you what they are. I think the Google bot does a good job of this.


    The Adobe Acrobat built-in bot does an excellent job of locating information within a document, summarizing it, and telling you exactly what the original source is, as shown in this example. This example also includes a mistake - "oops."



    Here is another more complicated example:



    This bot only works with one paper at a time. The size of the paper is limited. However, I expect future versions will work with vast numbers of documents on the internet. They will do a better job of zeroing in on these documents and telling you which is which.


    Unfortunately, the three laws were only really a plot device. Arthur C. Clarke broke them in 2001: A Space Odyssey (a computer struggling with conflicting instructions),

    That was fiction. Clarke was not sure it could happen. That is what he told me. On the other hand, he talked to leading experts all over the world, and some of them thought it might happen. They could be right! They are raising the alarm now, aren't they? I do not think there is any danger unless someone deliberately programs an AI to be malicious. However, I am no expert, so I could well be wrong.


    I worry more about small, cheap autonomous weapons. So-called "slaughterbots."


    External Content www.youtube.com

  • A Google search finds it right away. This one is limited to LENR-CANR.org. I believe this is a conventional Google search, but maybe it already incorporates their AI.


    As I understand it, all the search algorithms are themselves a sort of "self-sharpening, highly specialized" AI. I, however, miss the old times when a well-thought-out search query with boolean operators let one quickly find relevant hits. Nowadays, if I refine the search I simply get no hits, while less refined searches return too many irrelevant hits.
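    That old boolean style can be sketched in a few lines. This is a hypothetical toy matcher (not any real search engine's implementation or syntax), just to show how AND and NOT terms narrow a result set:

```python
# Toy boolean matcher (illustrative only; not how any real search
# engine works): every AND term must appear, every NOT term must be
# absent. Matching is case-insensitive substring search.
def matches(doc, must_have, must_not=()):
    text = doc.lower()
    return (all(term.lower() in text for term in must_have)
            and not any(term.lower() in text for term in must_not))

# Hypothetical document titles, for illustration only.
docs = [
    "Calorimetry of palladium-deuterium electrolysis cells",
    "Palladium catalyst market report",
    "Deuterium handling safety notes",
]

# palladium AND deuterium -- only the first document has both terms.
and_hits = [d for d in docs if matches(d, ["palladium", "deuterium"])]

# palladium NOT market -- excludes the commercial report.
not_hits = [d for d in docs if matches(d, ["palladium"], must_not=["market"])]
```

    With operators like these, adding a term always shrinks the result set, which is why a carefully refined query either pinpoints the document or returns nothing at all.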

    I certainly Hope to see LENR helping humans to blossom, and I'm here to help it happen.

  • I saw the result of some study, a little while ago, which claimed that whilst we "oldies" will generally use a search engine by simply entering a bunch of keywords, millennials are much more likely to frame their query as a written question - as if they were talking to a sentient being. This started happening before chatbot interfaces became common - and the author could not pin down the source of this change of behaviour. (Through gaming, perhaps?)


    I guess we all have our methods. I generally type in a short phrase that I imagine should appear in the information I am searching for.

  • A lot of younger users also apparently use queries like "where do I find good donuts reddit", where you're asking a question but also directing the search engine to favour a certain source for the answer. (Of course, there are modifiers like site: and filetype:, but this seems more ad hoc.)


    I suppose a chat-like experience would work fine as an interface for things like that, but for anybody trying to do anything beyond answering a narrowly bounded question, it seems like it would just get in the way.

    I sometimes wonder if something more is going on - related to a human desire for conversational interaction (at least for some humans).


    There is a phenomenon on social media, possibly more so on facebook, where people will post a question to a group asking other group members something that they could have quite easily googled themselves. An example would be someone in a group for a small local area asking: "does anyone know what time the pharmacy in the village closes today?" This would be followed by various people posting replies such as: "I think it closes at 6pm", some more who post to say they don't know, and a reply from at least one person who has googled it and then cut and pasted the complete weekly opening hours for that particular pharmacy, and/or a link to the relevant page on the web.


    Some other people find that behaviour (and the whole interchange) quite frustrating, on the basis that the questioner (a) could have googled it themselves, and (b) seems to be imposing on the other members of the group, who might have better things to do than provide an answer that is readily available online.


    But what if that question was also thrown at the group as an "opening gambit" for a conversation? For many people, the act of interacting with other people seems to fulfil a deeper psychological need than just obtaining information. The replies may be a form of reassurance that people care - and that the questioner is not alone, and crying into the void. Granted, it is a fairly superficial reassurance - but maybe it just tickles a part of the brain that makes people feel good.


    There was a phenomenon that was noticed during the early days of chatbot development, such as ELIZA, many decades ago. These bots seemed extremely crude by today's standards, but many people found that they enjoyed "conversing" with them, and very quickly started to attribute a far deeper level of intelligence (and even empathy) to the bot's replies than could possibly be justified in objective AI terms. The "conversations" seemed to be tickling that spot in the brain that made them feel good.


    Maybe that is the real reason why some people have taken so easily to these conversational interfaces.
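    For reference, ELIZA-style bots really were that simple - essentially regex pattern matching plus pronoun "reflection". A minimal sketch in that spirit (illustrative; the rules below are made up, not Weizenbaum's actual DOCTOR script):

```python
# ELIZA-style responder sketch: match the utterance against canned
# patterns, reflect pronouns in the captured fragment, and fill a
# response template. No understanding is involved at any point.
import re

# Pronoun swaps applied to the captured fragment.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# (pattern, response template) pairs; the last rule is a catch-all.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r".*"),                "Please tell me more."),
]

def reflect(fragment):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance):
    """Return the first matching rule's template, filled in."""
    for pattern, template in RULES:
        m = pattern.match(utterance)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
```

    A few dozen rules like these were enough to keep many 1960s users talking for hours - which says more about the conversational itch being scratched than about the program.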

    "The most misleading assumptions are the ones you don't even know you're making" - Douglas Adams

  • "Some of the girls whose likenesses were being spread were refusing to go to school, suffering panic attacks, being blackmailed and getting bullied in public. “My concern was that these images had reached pornographic sites that we still don’t know about today,” Adib told the Guardian from her clinic in the town.

    State prosecutors are considering charges against some of the children, who created the images using an app downloaded from the internet. But they had been unable to identify the people who developed the app, who prosecutors suspect are based somewhere in eastern Europe, they said.

    The Spanish incident flared into global news last year and made Almendralejo, a small town of faded renaissance-era churches and plazas near the Portuguese border, the site of the latest in a series of warning shots from an imminent future where AI tools allow anyone to generate hyper-realistic images with a few clicks.


    [...]


    But while deepfakes of pop stars such as Taylor Swift have generated the most attention, they represent the tip of an iceberg of nonconsensual images that are proliferating across the internet and which police are largely powerless to stop.

    As Adib was learning of the pictures, thousands of miles away at the Westfield high school in New Jersey, a strikingly similar case was playing out: many girls targeted by explicit deepfake images generated by students in their classes. The New Jersey incident has prompted a civil lawsuit and helped fuel a bipartisan effort in the US Congress to ban the creation and spread of nonconsensual deepfake images."


    Revealed: the names linked to ClothOff, the deepfake pornography app
    Exclusive: Guardian investigation for podcast series Black Box reveals names connected to app that generated nonconsensual images of underage girls around the…
    www.theguardian.com


    ;( ;( ;(

  • But what if that question was also thrown at the group as an "opening gambit" for a conversation? For many people, the act of interacting with other people seems to fulfil a deeper psychological need than just obtaining information. The replies may be a form of reassurance that people care - and that the questioner is not alone, and crying into the void. Granted, it is a fairly superficial reassurance - but maybe it just tickles a part of the brain that makes people feel good.


    There was a phenomenon that was noticed during the early days of chatbot development, such as ELIZA, many decades ago. These bots seemed extremely crude by today's standards, but many people found that they enjoyed "conversing" with them, and very quickly started to attribute a far deeper level of intelligence (and even empathy) to the bot's replies than could possibly be justified in objective AI terms. The "conversations" seemed to be tickling that spot in the brain that made them feel good.


    Maybe that is the real reason why some people have taken so easily to these conversational interfaces.

    This happens in call centres a lot. People call up with some superficial issue which is really just a pretext for a conversation and a human connection - however brief. Very human, very understandable.
