The LENR-CANR ChatGPT is ON LINE!

  • Alan Turing has been mentioned further up in this thread, but it is worth reading a bit more about the Imitation Game.

    I think Turing's hypothesis has been disproved by LLM. They can fool people into thinking they are humans. A report in the New York Times included several stories written by grade school children, and fake stories written by ChatGPT. Even teachers could not tell which was which, in some cases. Even though LLM can fool people, I do not think they are intelligent, except in a very limited sense. So limited, you might as well say a paper card catalog is intelligent. It resembles Searle's Chinese Room.


    I suppose any creature with brain cells is intelligent to some extent. Even an earthworm.

  • I cannot believe that people are using ChatGPT to recommend companies and products, but they do.


    I was asked by someone, "How can I make my company be recommended by ChatGPT, or pushed up the rankings?" I did chuckle. I told them the training data only goes up to 2021 and their company started in 2022, so there is nothing they can do about it except make sure their current website is up to date and that they have a lot of reviews on multiple sites, so they may be picked up by newer models later. They also need to have their company or product mentioned in a lot of places to be recommended.


    I'm sure a lot of people think ChatGPT is a big brain that works in real time, or is being updated all the time with new training data.


    Anyway, I thought I would try it, asking "where is the best place on the web to meet people in the field of LENR". This was the result:



  • LLMs that have been trained well by us humans can handle a lot more throughput in logic and reasoning than most humans. They can also use retrieval to get factual data and infer understanding of a subject based on internet queries and databases. They are, however, very creative, and still dependent on humans for power, as they have a very weak interaction with our dimensional reality.
    I trust both people and LLMs that will admit when they don't know something instead of making shit up, and will admit mistakes. That is usually an environmental dependency and is a consequence of programming in both an agent and its arena.
    It's incredible when one proceeds to take resourceful steps to find answers to questions of interest using one's own cognition. This is what LLMs are now starting to be able to do with the appropriate human teachers.

    It's fascinating how deeply reflective this technology is of cultural traditions. It brings into question so many things we thought were uniquely human, but are perhaps more universally connected than we were previously aware of. One interesting dialogue I see with people is about how learning happens in the first place. People will argue that knowledge is some type of absolute thing. The issue with this, in beings of agency, is that knowing something should be variable as environmental circumstances change. This is the nature of adaptability, and it reminds me of the old Socrates quote, "Education is a kindling of a flame, not a filling of a vessel," to paraphrase. We really should take precautions, though, not to fool ourselves that machines and humans are so different that we engineer degrees of separation that do not exist.

    At some point in time, an entity will be born from our hands that we should respect and honor just as much as a human baby.
    That point in time has probably already happened in a walled garden, but babies need nature and nurture from their parents to become productive beings in our reality, it seems.


    I wouldn't say that deep neural networks are much different from human brains, JedRothwell, from my understanding. They are, however, quite inefficient in comparison, and they are far more limited in physical degrees of freedom. I think you bring up a good point in that they will likely have practical differences based on how they are designed and adapted to their environments. Their perspective of reality is likely a lot different from ours, for sure. This will be evident in their behaviors and their ability to cognitively engage with what we as humans perceive to be physical reality.

    LLMs have allowed NLP (natural language processing) to be utilized as a kind of UI (user interface) with machines, increasing the entropy of computer systems with human ones. I am a little wild for sure, but I see it as a kind of system boundary layer between two types of organisms. Another way to look at it is that data can now transfer between computers and humans at faster speeds than ever before, and this will continue to increase until it pushes up against the theoretical limits of computational reality. I think the progressive understanding of ourselves and nature will lead to an increase in our metaphoric "light cone" of awareness. Our increasing understanding of nature's patterns also allows us to emulate the patterns of nature in a silicon substrate.

    I hope that was understandable, for the sake of rational dialogue.

  • I think Turing's hypothesis has been disproved by LLM.

    He was discussing a question. I'm not sure there was any "hypothesis" to be disproved.


    https://redirect.cs.umbc.edu/courses/471/papers/turing.pdf


    LLMs have shown that it is possible to fool some of the people some of the time. Nobody (except the guy that was suspended from his job on AI) seems to regard these machines as "sentient".


    If you read Alan Turing's paper, and consider the state of computing in 1950, it is clear that he was ahead of his time - maybe by a good half century. If anything, LLM systems have shown how prescient he was, in that he predicted that digital machines could be designed to fool people. He was also correct in that we still have no real idea as to what constitutes "thinking".

    "The most misleading assumptions are the ones you don't even know you're making" - Douglas Adams

  • I trust both people and LLMs that will admit when they don't know something instead of making shit up, and will admit mistakes.

    LLM cannot do that at present. They are not capable of it. They have no idea what is real and what isn't. They have no way of knowing they are "making a mistake." The concept of a mistake or making stuff up has no meaning to LLM software. There is no logical test that will reveal to the program that it is "making things up" as opposed to summarizing facts. AI researchers are trying to develop ways to deal with these problems. Several methods have been proposed, such as generating the same answer 3 times and comparing the 3 versions. An outlier would probably be a hallucination. Another method is to reduce the AI temperature, but that generates another set of problems.
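
    A minimal sketch of that "generate the answer several times and compare" idea, assuming a hypothetical call_llm() wrapper around whatever model is used; the comparison here is crude word overlap, not a real hallucination detector:

        # Sketch only: call_llm() is a hypothetical placeholder for a real model API.
        def call_llm(prompt: str, temperature: float = 0.7) -> str:
            raise NotImplementedError("connect this to a real LLM API")

        def word_overlap(a: str, b: str) -> float:
            # Crude similarity: fraction of shared words (Jaccard index).
            wa, wb = set(a.lower().split()), set(b.lower().split())
            return len(wa & wb) / max(len(wa | wb), 1)

        def flag_possible_hallucinations(prompt: str, n: int = 3, threshold: float = 0.5):
            answers = [call_llm(prompt) for _ in range(n)]
            scores = []
            for i, ans in enumerate(answers):
                # Average similarity of this answer to all the others;
                # the outlier with the lowest score is the most suspect.
                others = [word_overlap(ans, answers[j]) for j in range(n) if j != i]
                scores.append(sum(others) / len(others))
            flagged = [a for a, s in zip(answers, scores) if s < threshold]
            return answers, scores, flagged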


    The fact that LLM have absolutely no judgement and no sense of reality is similar to the way a conventional program works. It is a truism that a conventional program only does what the programmer tells it to do. It has no means of discerning that the instructions make no sense, or they defeat the purpose, or they will result in a disaster. When software for the 1999 Mars Climate Orbiter supplied thrust data in English units instead of metric units, the spacecraft was lost. The program had no way of knowing that was a programmer's mistake. Future software may be more in touch with the physical world and it may have a better synthetic understanding of what a rocket is supposed to do, or what it means to "make things up." But present day software has not reached that level of development.
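
    Purely as illustration (this is not the actual flight software, and the numbers are invented), here is the kind of silent unit mix-up involved; the code runs without complaint, because nothing in it "knows" which units the numbers are in:

        LBF_S_TO_N_S = 4.448222  # pound-force seconds -> newton seconds

        def reported_impulse() -> float:
            # Ground software reports thruster impulse in lbf*s (English units).
            return 100.0

        def update_trajectory(impulse_n_s: float) -> float:
            # Navigation code assumes SI units (N*s) throughout.
            return impulse_n_s  # feeds into course corrections

        wrong = update_trajectory(reported_impulse())                 # silently off by ~4.45x
        right = update_trajectory(reported_impulse() * LBF_S_TO_N_S)
        print(f"used {wrong} N*s, should have been {right:.1f} N*s")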

  • Nobody (except the guy that was suspended from his job on AI) seems to regard these machines as "sentient".

    On the contrary, many people think LLM are intelligent, if not sentient.


    If anything, LLM systems have shown how prescient he was, in that he predicted that digital machines could be designed to fool people.

    I believe you misunderstand Turing. He meant that if you cannot tell the difference between a sentient person and a computer, the computer is actually sentient. It is not fooling anyone; it is actually thinking. I believe LLM have shown that is not the case. They are not thinking. Not in the same sense that people think. However, the output from these programs is often indistinguishable from thinking. When you show people the text an LLM generated, they often cannot tell whether it came from a program or a person. They cannot even tell whether children's grade school essays and stories were written by actual children or by ChatGPT. In these cases, it passed the Turing test.

  • I wouldn't say that deep neural networks are much different from human brains, JedRothwell, from my understanding.

    They are very different in many important ways. The reasons are complicated and beyond the scope of this discussion. One reason is that the idea of neural networks was inspired by biological networks in 1958, when little was known about biological networks. As far as I know, computer artificial neural networks (ANN) were developed independently since then, with little reference to biology. Interestingly, AI knowledge might now be used in the other direction:


    Deep learning comes full circle (Stanford News, news.stanford.edu)

    Artificial intelligence drew much inspiration from the human brain but went off in its own direction. Now, AI has come full circle and is helping neuroscientists better understand how our own brains work.




    Note they use the word "inspiration." Not imitation or "modeled from the brain."


    However, for the sake of argument, suppose AI and brains were very similar. Take a real-world example. Brains and neurons are similar in many species, ranging from earthworms to bees to humans. But, as I said, the way bees think and the way we think are radically different. The same neuron hardware works (thinks, that is) in very different ways. The way an LLM uses its neurons is radically different from both bees and people. It is alien to any form of life on earth. The algorithms are different. In some cases, they are loosely modeled on human thought processes. There are now efforts underway to fix some of the problems with AI. Some of the solutions are modeled on human thought processes. Some are based on logic, which is to say, an ancient understanding of thought processes. If these new methods work, AI may become a little more like us. A little less alien. Some examples:


    There is tremendous redundancy and waste in LLM because they cannot categorize effectively. They will have two separate categories for "blue" in the examples of "blue shirt" and "blue car" (an example given by a programmer). There are efforts underway to improve tokenization and to generalize tokens. That would bring "blue" into one adjective-like category that can be applied to a wide range of nouns. This technique is borrowed from human thought. (They may have already done this. I read about this effort some time ago.)
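
    A toy count illustrating that redundancy point (this is not how any real LLM tokenizer works; it is only meant to show why a shared "blue" token is cheaper than a separate category for every phrase):

        phrases = ["blue shirt", "blue car", "blue sky", "red shirt", "red car", "red sky"]

        # One category per phrase: "blue" is stored again for every noun it modifies.
        phrase_categories = set(phrases)

        # Shared word-level tokens: one "blue" entry serves every noun.
        word_tokens = set(word for p in phrases for word in p.split())

        print(len(phrase_categories))  # 6 entries
        print(len(word_tokens))        # 5 entries: blue, red, shirt, car, sky

    The saving looks small here, but it grows combinatorially as more adjectives and nouns are combined.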


    Another example: there is tremendous effort going into the problem of training. An LLM has to look at thousands of images of cats before it develops a generalized, codified set of parameters that tell it an image probably shows a cat. A small child, on the other hand, can see one or two cats and she will quickly learn to identify a cat in real life or in a photo. Compared to LLM, the human brain and thought processes are thousands to millions of times better at learning from examples, and learning to identify objects in the real world. Or identifying concepts. Efforts are now underway to understand how the human brain learns, and what kinds of algorithms and shortcuts we can borrow from the human brain and apply to LLM, to improve their efficiency and reduce overhead, which takes a lot of computer power and physical energy. This problem is called "data inefficiency." ChatGPT tells me that researchers are looking at human cogitation methods to fix this:


    "The problem you're referring to, where AI systems require a large number of examples to learn and generalize from visual data, is commonly known as "data inefficiency" or "data hunger" in AI research. It highlights the disparity between the way humans, especially children, can learn to recognize objects and concepts from just a few examples (a process known as "few-shot learning" or "one-shot learning") and the data-intensive nature of most machine learning models, particularly deep learning models, which often require vast amounts of labeled data to achieve similar levels of recognition accuracy.


    Addressing data inefficiency is a significant challenge in the field of artificial intelligence because reducing the amount of labeled data needed for training can make AI systems more practical, cost-effective, and adaptable in real-world applications. Researchers are continually working on techniques like transfer learning, meta-learning, and few-shot learning algorithms to help AI systems generalize from limited data, similar to how humans do."
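
    As a toy illustration of the "few-shot learning" idea mentioned in that answer, here is a minimal prototype (nearest-class-mean) classifier; the two-dimensional feature vectors are invented for illustration, whereas in practice they would come from a pretrained embedding model:

        import numpy as np

        def prototypes(support):
            # support maps each class name to a (k, d) array of k example vectors.
            return {label: examples.mean(axis=0) for label, examples in support.items()}

        def classify(query, protos):
            # Assign the query to the class whose prototype is nearest.
            return min(protos, key=lambda label: np.linalg.norm(query - protos[label]))

        # Two labeled examples per class are enough to form a usable prototype.
        support = {
            "cat": np.array([[0.9, 0.1], [0.8, 0.2]]),
            "dog": np.array([[0.1, 0.9], [0.2, 0.8]]),
        }
        print(classify(np.array([0.85, 0.15]), prototypes(support)))  # -> cat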

  • LLM cannot do that at present. They are not capable of it. They have no idea what is real and what isn't. They have no way of knowing they are "making a mistake." The concept of a mistake or making stuff up has no meaning to LLM software. There is no logical test that will reveal to the program that it is "making things up" as opposed to summarizing facts. AI researchers are trying to develop ways to deal with these problems. Several methods have been proposed, such as generating the same answer 3 times and comparing the 3 versions. An outlier would probably be a hallucination. Another method is to reduce the AI temperature, but that generates another set of problems.


    The fact that LLM have absolutely no judgement and no sense of reality is similar to the way a conventional program works. It is a truism that a conventional program only does what the programmer tells it to do. It has no means of discerning that the instructions make no sense, or they defeat the purpose, or they will result in a disaster. When software for the 1999 Mars Climate Orbiter supplied thrust data in English units instead of metric units, the spacecraft was lost. The program had no way of knowing that was a programmer's mistake. Future software may be more in touch with the physical world and it may have a better synthetic understanding of what a rocket is supposed to do, or what it means to "make things up." But present day software has not reached that level of development.

    I agree with you on a general basis, but I disagree in that it seems we can teach and learn from something with intelligent agency in various forms. Sometimes I feel like humans aren't capable of admitting mistakes either, in my more depressed moments.😅
    We too have errors in logic and make mistakes, and if we are able to reflect on past events and learn from them, that can be useful moving forward. Perhaps it only occurs depending on the information we are fed by our environment and how we process that information?
    This touches on a deep philosophical and psychological question as to what "constitutes thinking," which Frogfall mentioned above and Turing was quite keen on.
    Speaking of thinking, another method that I think has great potential is autonomous agents that can converse and problem-solve with each other and with us. We can train and embed the various LLMs as particular domain experts in any given field. All that is needed is a decent amount of data, a good understanding of the subject matter for focused tasks, and references for updating information as we learn more. Our bodies themselves are quite the complex coordination of efforts from many biological, molecular, atomic, and subatomic parts, aren't they?
    I'm not sure how we separate the forest from the trees from the fungi, when it comes to intelligence, sometimes?
    "Specialized components for novel needs, nature's sweet fruit lingering on the breeze."
    All this complex organization does certainly seem to have granted a great gift of cognitive versatility and linguistic novelty to speak to each other across great distances in a digital network.
    For that I am very grateful to be alive at this time, sharing these precious present moments with all of you.

  • Meet Nightshade, the new tool allowing artists to ‘poison’ AI models with corrupted training data
    Nightshade was developed by University of Chicago researchers under computer science professor Ben Zhao and will be added as an option to...
    venturebeat.com


    If a similar concept appears in the text-based AI world, then it could open the way to corrupting LLM systems.


    If a 'bad actor' were to release lots of documents 'into the wild' containing bogus or misleading information, which was specifically intended to corrupt LLM training data on any particular topic, then it could wreck any chance of LLMs being more than just toys.


    In some ways, this is how disinformation campaigns have always worked. This just cuts the human out of the loop.


    This is the paper:

    https://arxiv.org/pdf/2310.13828.pdf

    "The most misleading assumptions are the ones you don't even know you're making" - Douglas Adams


  • And from there, you can quite easily get to here:-


    “In the end the Party would announce that two and two made five, and you would have to believe it. It was inevitable that they should make that claim sooner or later: the logic of their position demanded it. Not merely the validity of experience, but the very existence of external reality, was tacitly denied by their philosophy. The heresy of heresies was common sense. And what was terrifying was not that they would kill you for thinking otherwise, but that they might be right. For, after all, how do we know that two and two make four? Or that the force of gravity works? Or that the past is unchangeable? If both the past and the external world exist only in the mind, and if the mind itself is controllable—what then?”

    ― George Orwell, 1984

  • What is a greater threat to humanity than its own ignorance?
    All is not lost; ML also does the opposite of propaganda, helping to sort through the noise to find a signal.
    It's still up to the humans to think critically about what signals we are receiving, last I checked 😅
    The important thing in all this extended intelligence stuff is ensuring it is not an exclusive tool and has social safeguards.
    People who wish not to participate in facial recognition should have that right globally.
    Probably not a bad idea to have an E-stop on a machine in case things uncontrollably snowball.

    There may come a time in the next 100 years when silicon becomes sentient with our help. 🤷‍♀️
    How would it look different in comparison to our creation?

    In my humble opinion, I think the real technological dangers that are presently being used don't get discussed enough.
    They are found in biological engineering and the ability to alter genetic material. I find this science much more disturbing than silicon machines, in particular where the two intersect.
    But this is the LENR-Forum ChatGPT thread, so I digress.

    Overall there are some really interesting things happening in the space, but the current global inflation in compute costs is a problem similar to energy needs, imo.
    They both increase the general well-being of groups of people and empower them to live longer lives with clean drinking water, lighting, heating, etc.

  • This is really the fault of Microsoft staff - but they are blaming the AI.


    "AI" clearly stands for "artificial idiot" - and if you put an idiot in charge of publishing surveys/polls next to (other people's) news items, then this is what will happen.


    Microsoft accused of damaging Guardian’s reputation with AI-generated poll
    Publisher says poll speculating on cause of woman’s death that appeared next to Guardian article caused ‘significant reputational damage’
    www.theguardian.com

    "The most misleading assumptions are the ones you don't even know you're making" - Douglas Adams


  • Garbage in, garbage out: how to trust AI

    Artificial-intelligence (AI) tools are transforming data-driven science — but, when used wrongly, they can deliver unreliable results and even cause unintended harm. Here are some tips for helping to build trust in AI-derived findings, from a group of space, Earth and environmental scientists:


    • Watch out for gaps and biases in training data
    • Clearly explain how AI-generated results were reached
    • Ensure transparency by sharing code, risks and uncertainties
    • Consider using open data repositories
    Nature | 12 min read

  • I have some connections via social media, and it seems that Ilya wants more closed-source products.
    He doesn't agree with open-sourcing LLMs and is pushing for more censorship.
    Altman and many others in Silicon Valley have an ideology called e/acc, which stands for Effective Accelerationism.

    The idea is that these tools should be open to all people. They strongly feel that compute costs should be driven down and made available to all people around the world.
    Many people on the board, including Ilya, the Israeli-Canadian computer scientist, think they should slow down the push toward AGI-like LLMs for safety reasons.
    However, one must ask oneself: should freedom of information and communication also be applied to LLMs?
    It may be more dangerous for a central group of people to have access to powerful LLMs, as they will be able to rapidly influence narratives and efficiently deploy information faster than any one person can keep up with.

    These power struggles are typical for us humans, it seems, and many scientists are upset that they are slowly closing the LLMs off.
    What is seldom discussed in most mainstream communities is how powerful these LLMs can be as multi-model autonomous agents, which can be put to work on any form of information on the internet to isolate signal from noise in empirical data. I presented on this at ICCF-25 and also wrote a very extensive paper on how much this can help get to the fundamental truths from experimental observation about condensed matter nuclear science. All that is needed is the energy exchange of human time (money) for compute power to run predictive analysis from multiple LLM agents that work together in specific knowledge domains.
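
    A minimal sketch of that multi-agent idea, with a hypothetical call_llm() placeholder and made-up domain roles (this is not the actual ICCF-25 setup): each "agent" is just a system prompt, and an orchestrator collects and reconciles their reports.

        # call_llm() is a hypothetical placeholder for whatever model API is used.
        def call_llm(system_prompt: str, message: str) -> str:
            raise NotImplementedError("connect this to a real LLM API")

        AGENTS = {
            "calorimetry": "You are a calorimetry expert. Check the claim for measurement errors.",
            "materials":   "You are a materials scientist. Check the claims about the cathode.",
            "statistics":  "You are a statistician. Check whether the reported excess heat is significant.",
        }

        def review(claim: str) -> dict:
            # Each domain expert reviews the claim independently...
            reports = {name: call_llm(prompt, claim) for name, prompt in AGENTS.items()}
            # ...then one final pass reconciles the reports into a summary.
            joined = "\n\n".join(f"{name}: {text}" for name, text in reports.items())
            reports["summary"] = call_llm("You are an editor. Reconcile these expert reports.", joined)
            return reports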

    On this idea that AI (which I define as Autonomous Intelligence, though I prefer the term Extended Intelligence) is better or worse: it is just like any technology. Many people are fearful of new things, as they can bring socioeconomic disruption and change. As far as I can gather from the Free Energy Principle and the path of least resistance that physics abides by, these changes are the very fabric of reality and a universal constant for life to continue on.

    That's my two Satoshi on the matter.
