JedRothwell Verified User
  • Member since Oct 11th 2014

Posts by JedRothwell

    I wouldn't say that Deep Neural Networks are too much different than a human brain, JedRothwell, from my understanding.

    They are very different in many important ways. The reasons are complicated and beyond the scope of this discussion. The idea of neural networks was inspired by biological networks in 1958, when little was known about how biological networks work. As far as I know, computer artificial neural networks (ANN) have been developed independently since then, with little reference to biology. Interestingly, AI knowledge might now flow in the other direction:


    Deep learning comes full circle

    Artificial intelligence drew much inspiration from the human brain but went off in its own direction. Now, AI has come full circle and is helping neuroscientists better understand how our own brains work.


    Deep learning comes full circle | Stanford News (news.stanford.edu)


    Note they use the word "inspiration," not "imitation" or "modeled on the brain."


    However, for the sake of argument, suppose AI and brains were very similar. Take a real-world example. Brains and neurons are similar in many species, ranging from earthworms to bees to humans. But, as I said, the way bees think and the way we think are radically different. The same neuron hardware works (thinks, that is) in very different ways. The way an LLM uses its neurons is radically different from both bees and people. It is alien to any form of life on earth. The algorithms are different. In some cases, they are loosely modeled on human thought processes. There are now efforts underway to fix some of the problems with AI. Some of the solutions are modeled on human thought processes. Some are based on logic, which is to say, on ancient understanding of thought processes. If these new methods work, AI may become a little more like us. A little less alien. Some examples:


    There is tremendous redundancy and waste in LLM because they cannot categorize effectively. They will have two separate categories for "blue" in the examples of "blue shirt" and "blue car" (an example given by a programmer). There are efforts underway to improve tokenization and to generalize tokens. That would bring "blue" into one adjective-like category that can be applied to a wide range of nouns. This technique is borrowed from human thought. (They may have already done this; I read about this effort some time ago.)
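
    To illustrate the idea, here is a toy sketch (not any real model's internals): a single embedding vector for the token "blue" is stored once and reused in every phrase, instead of keeping a separate copy per context.

        import random

        # Toy vocabulary: one small random vector per token. Real models use
        # learned vectors with hundreds of dimensions; 4 is just illustrative.
        embedding = {tok: [random.random() for _ in range(4)]
                     for tok in ["blue", "shirt", "car"]}

        def embed(phrase):
            return [embedding[tok] for tok in phrase.split()]

        # "blue shirt" and "blue car" share the very same vector for "blue":
        assert embed("blue shirt")[0] is embed("blue car")[0]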


    Another example: there is tremendous effort going into the problem of training. An LLM has to look at thousands of images of cats before it develops a generalized, codified set of parameters that tell it an image probably shows a cat. A small child, on the other hand, can see one or two cats and she will quickly learn to identify a cat in real life or in a photo. Compared to LLM, the human brain and thought processes are thousands to millions of times better at learning from examples, and learning to identify objects in the real world. Or identifying concepts. Efforts are now underway to understand how the human brain learns, and what kinds of algorithms and shortcuts we can borrow from the human brain and apply to LLM, to improve their efficiency and reduce overhead, which takes a lot of computer power and physical energy. This problem is called "data inefficiency." ChatGPT tells me that researchers are looking at human cogitation methods to fix this:


    "The problem you're referring to, where AI systems require a large number of examples to learn and generalize from visual data, is commonly known as "data inefficiency" or "data hunger" in AI research. It highlights the disparity between the way humans, especially children, can learn to recognize objects and concepts from just a few examples (a process known as "few-shot learning" or "one-shot learning") and the data-intensive nature of most machine learning models, particularly deep learning models, which often require vast amounts of labeled data to achieve similar levels of recognition accuracy.


    Addressing data inefficiency is a significant challenge in the field of artificial intelligence because reducing the amount of labeled data needed for training can make AI systems more practical, cost-effective, and adaptable in real-world applications. Researchers are continually working on techniques like transfer learning, meta-learning, and few-shot learning algorithms to help AI systems generalize from limited data, similar to how humans do."
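
    One family of few-shot techniques can be sketched in a few lines. Suppose some pretrained encoder turns an image into a feature vector; then a class can be defined by the average of just one or two example vectors, and new images classified by the nearest class prototype. The vectors below are made-up stand-ins for encoder outputs:

        import numpy as np

        def prototype(vectors):
            # Class prototype = mean of a few example embeddings.
            return np.mean(vectors, axis=0)

        def classify(vec, prototypes):
            # Label of the nearest prototype (Euclidean distance).
            return min(prototypes,
                       key=lambda lbl: np.linalg.norm(vec - prototypes[lbl]))

        # Two labeled examples per class, standing in for encoder features:
        protos = {
            "cat": prototype([np.array([1.0, 0.1]), np.array([0.9, 0.2])]),
            "dog": prototype([np.array([0.1, 1.0]), np.array([0.2, 0.9])]),
        }
        print(classify(np.array([0.8, 0.3]), protos))  # -> cat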

    Nobody (except the guy that was suspended from his job on AI) seems to regard these machines as "sentient".

    On the contrary, many people think LLM are intelligent, if not sentient.


    If anything, LLM systems have shown how prescient he was, in that he predicted that digital machines could be designed to fool people.

    I believe you misunderstand Turing. He meant that if you cannot tell the difference between a sentient person and a computer, the computer is actually sentient. It is not fooling anyone; it is actually thinking. I believe LLM have shown that is not the case. They are not thinking. Not in the same sense that people think. However, the output from these programs is often indistinguishable from thinking. When you show people the text an LLM generated, they often cannot tell whether it came from a program or a person. They cannot even tell whether grade school essays and stories were written by actual children or by ChatGPT. In these cases, it passed the Turing test.

    I trust both people and LLMs that will admit when they don't know something instead of making shit up, and will admit mistakes.

    LLM cannot do that at present. They are not capable of it. They have no idea what is real and what isn't. They have no way of knowing they are "making a mistake." The concept of a mistake, or of making stuff up, has no meaning to LLM software. There is no logical test that will reveal to the program that it is "making things up" as opposed to summarizing facts. AI researchers are trying to develop ways to deal with these problems. Several methods have been proposed, such as generating an answer to the same question three times and comparing the three versions. An outlier would probably be a hallucination. Another method is to reduce the AI temperature, but that generates another set of problems.
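
    The "generate it three times and compare" idea looks roughly like this in outline. Here ask_llm is a hypothetical stand-in for a real API call, and real systems also have to compare answers by meaning rather than by exact text:

        from collections import Counter

        def ask_llm(prompt: str, temperature: float = 0.7) -> str:
            # Hypothetical stand-in for a real LLM API call.
            raise NotImplementedError

        def self_consistent_answer(prompt: str, n: int = 3):
            answers = [ask_llm(prompt) for _ in range(n)]
            best, votes = Counter(answers).most_common(1)[0]
            # An answer the model cannot reproduce on resampling is more
            # likely a hallucination, so require at least two matching versions.
            return best if votes > 1 else None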


    The fact that LLM have absolutely no judgement and no sense of reality is similar to the way a conventional program works. It is a truism that a conventional program only does what the programmer tells it to do. It has no means of discerning that the instructions make no sense, or that they defeat the purpose, or that they will result in a disaster. When the software for the 1999 Mars Climate Orbiter mission used English units instead of metric units, the spacecraft was lost. It had no way of knowing that was a programmer's mistake. Future software may be more in touch with the physical world, and it may have a better synthetic understanding of what a rocket is supposed to do, or what it means to "make things up." But present day software has not reached that level of development.
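
    The usual defense against that kind of unit mix-up is to make the conversion explicit at the boundary, because a bare number carries no unit the program can check. A minimal sketch (the input value is illustrative; only the conversion factor is a real constant):

        LBF_S_TO_N_S = 4.44822  # one pound-force second, in newton-seconds

        def impulse_in_si(value_lbf_s: float) -> float:
            # Convert an impulse reported in lbf*s to N*s before any physics.
            return value_lbf_s * LBF_S_TO_N_S

        print(impulse_in_si(100.0))  # 444.822 N*s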

    Alan Turing has been mentioned further up in this thread, but it is worth reading a bit more about the Imitation Game.

    I think Turing's hypothesis has been disproved by LLM. They can fool people into thinking they are humans. A report in the New York Times included several stories written by grade school children, and fake stories written by ChatGPT. Even teachers could not tell which was which, in some cases. Even though LLM can fool people, I do not think they are intelligent, except in a very limited sense. So limited, you might as well say a paper card catalog is intelligent. It resembles Searle's Chinese Room.


    I suppose any creature with brain cells is intelligent to some extent. Even an earthworm.

    This article asks:


    "Do large language models know what they are talking about?
    Large language models seem to possess the ability to reason intelligently, but does that mean they actually know things?"


    Here is my answer, which I have posted here before. Now that I have experience using LLM and tweaking parameters such as AI temperature, I have more confidence in my answer.


    LLM do not know what they are talking about. They do not actually know things. Not in the same sense that people, other primates or dogs know things. They "know" things in the same sense that bees know structural engineering. As I have said, a colony of bees uses its collective brain tissue to construct a nest. The nest is a marvelous structure, as good as the best human structural engineer could come up with using those materials. It is compact, waterproof, multifunctional, defensible against enemies, and well designed to be cooled by bees flapping their wings. It is definitely a product of the bees' intelligence. However, bees have absolutely no conscious knowledge of structural engineering, or defense, or how to cool a structure. Their brains arrive at these designs by instinct. This path is totally different from the conscious approach used by a human structural engineer, or by an ancient human craftsman making some clever object such as a tent or a compound bow.


    Birds and airplanes both fly. They both obey the laws of aerodynamics. Birds do this by instinct, and airplanes by the conscious knowledge of aircraft designers, and the conscious and unconscious knowledge of human pilots. The latter resembles our ability to swim, or play baseball. It is something we learn by doing. We know how to do similar things by instinct, but swimming well or flying an airplane is learned behavior, not very natural. (Many pilots have said the initial phase of learning to fly an airplane resembles swimming. It is physically similar because you are moving through a fluid, adjusting thrust and angle of attack to go where you intend to go.)


    Evolution often generates a body shape or an ability in different animals starting from totally different species, such as fish and whales. They both swim, and they are both adapted to living in the ocean, but they started off looking completely different. LLM can "understand" and "answer" questions in human language, producing the same kind of result that a person does. They do these things using methods that are totally different from what we use. As different as the bees versus a human structural engineer. Sometimes an LLM generates an answer we would say is "wrong," "absurd" or a "hallucination." But these qualities do not exist for the LLM computer program.


    Future AI programs will probably resemble human intelligence more closely than LLM do. They are certain to have more logic. They are already acquiring this, with things like the Wolfram plugin. They will probably have some sort of direct sensations of the real world, such as video, audio and touch. They will use these sensations to generate actionable knowledge of the real, three-dimensional world outside their digital databases. This will greatly reduce wrong and absurd answers, and hallucinations. I do not think it is likely they will have emotions. Not unless someone programs in emotions, which I think would be a mistake, and possibly even dangerous. I doubt their thought processes will resemble human thought. They will still be as alien to us as bees are. But they will understand us better, and we will probably understand them better. They will be enormously useful, in ways we cannot imagine today. As useful as electricity and computers turned out to be, after decades of development.
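
    The plugin approach can be sketched like this: route the parts of a question the model is bad at (here, arithmetic) to an exact external tool. This is a simplified illustration, not the actual Wolfram plugin protocol:

        import re

        def calculator(expression: str) -> str:
            # Exact tool. A real deployment would use a safe math parser,
            # not eval().
            if re.fullmatch(r"[\d\s+\-*/().]+", expression):
                return str(eval(expression))
            return "cannot evaluate"

        def answer(question: str) -> str:
            math_part = re.search(r"[\d\s+\-*/().]{3,}", question)
            if math_part:
                return calculator(math_part.group().strip())  # delegate
            return "(answered by the language model itself)"

        print(answer("What is 37 * 41?"))  # -> 1517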


    I do not see any reason to make AI resemble human intelligence. Probably, it will work better with a hybrid model incorporating LLM and other techniques. Making an AI that works like a human brain would be like trying to make a human-carrying airplane that works by flapping its wings -- an ornithopter -- rather than using propellers. There are such things.


    [Embedded video: www.youtube.com]


    There is even a human powered one:


    [Embedded video: www.youtube.com]


    However, at present there is no practical use for ornithopters. Someone told me they are potentially quieter and perhaps safer than propeller driven aircraft, so perhaps they will be useful in the future.


    If the Ne-22 producing reaction runs to completion and gets to the lowest energy states, the cold fusion reaction to produce Ne-22 would be as follows: 4H2O = CO2 + 3H2 + 22Ne. There is a water shift reaction due to the carbon production and due to reacting oxygen-17.

    I assume there are hydrogen fusion reactions going on, and the neon transmutations are only a byproduct of them. In any case, other forms of cold fusion have COPs much higher than 10 or 20. Some are infinite, with no input energy. Assuming this is cold fusion, if this particular form of it is limited to a COP of 5 or 10, other forms will be used instead.

    I have estimated that the steam power generation cycle will be sustainable (without input from conventional energy) if COP > 5.0.

    I believe that is correct. However, at 5.0 the machine would generate very little useful electricity. It would use all of its energy just to keep itself going. To make useful energy, I believe the ratio would have to be 10 or 20.
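
    Here is the arithmetic behind those numbers, as a sketch. The 20% heat-to-electricity figure is my assumption for a small low-temperature steam cycle:

        def net_electric_ratio(cop: float, eta: float = 0.20) -> float:
            # Net electricity out per unit of electricity in: COP * eta - 1.
            return cop * eta - 1.0

        for cop in (5, 6, 7, 10, 20):
            print(f"COP {cop:>2}: net = {net_electric_ratio(cop):+.1f} x input")

    At COP 5 this gives exactly break-even; at 10 or 20 the machine makes one to three times its input as surplus electricity.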


    I do not know of any reason to think 10 or 20 cannot be achieved.


    At 5.0 you could use the gadget as a space heater, or perhaps a water heater. You would only be making use of the waste heat. I do not think that would be a practical or cost-effective application. Suppose you achieved 6 or 7, producing a little electricity and mostly waste heat. That might be useful in some niche applications, such as powering railroad crossing equipment in Alaska, and keeping the equipment warm enough to operate. I assume the heat would last for years without refueling. Natural gas fired thermoelectric generators are used in Alaska for that purpose. You have to replenish the natural gas from time to time.


    Niche applications are often the best way to develop new technology. New technology is usually not cost effective in direct competition with existing technology in mainstream applications. Electric cars in 2000 could not compete with gasoline models. Tesla succeeded by limiting EVs to the luxury car market, which is a niche market.

    B J Huang's work is important because it addresses your assumptions, which are debatable: namely, that LENR can provide the power density for a heat engine with reasonable Carnot efficiency.

    LENR has already done this. See:


    "Cold fusion has reached temperatures and power density roughly as high as the core of a nuclear fission power reactor."


    [Embedded video]

    A combined cycle plant is more efficient because it converts more heat to energy in multiple heat cycles, not because it uses the same heat more.

    That is another way of saying the same thing. When I say "the same heat" I mean, for example with a triple expansion engine, that the heat source (combustion) produces steam once, which is then run through three cylinders. The steam is first high grade (high pressure, with lots of enthalpy), then medium, then low. The last cylinder is very large.



    Another failing of LENR is reporting heat from electricity without accounting for the efficiency of heat-to-electricity conversion.

    Everyone accounts for this. It is Carnot efficiency. At a given temperature it is the same for all heat engines. It makes no difference whether the heat comes from combustion, fission, or cold fusion.

    What makes combined cycle power plants so efficient? (araner.com)

    The same thing that made triple-expansion marine steam engines so efficient. The same heat is tapped multiple times, first as high grade heat, then medium, then low grade heat.
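
    The benefit of cascading can be put in one formula: if each stage converts a fraction of the heat it receives, stages fed in series give a combined efficiency of one minus the product of the losses. A sketch with illustrative stage efficiencies:

        from math import prod

        def combined_efficiency(etas):
            # Stages fed in series, each converting a fraction eta of the
            # heat it receives: overall efficiency = 1 - product of losses.
            return 1 - prod(1 - eta for eta in etas)

        print(combined_efficiency([0.35]))        # gas turbine alone: ~0.35
        print(combined_efficiency([0.35, 0.30]))  # + steam bottoming: ~0.545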


    When considering the future of LENR one would be wise to consider this benchmark.

    I disagree. There is no need to make cold fusion reactors highly efficient. On the contrary, it would be best to trade off efficiency for low cost. The fuel costs nothing, so you do not save any money by making them more efficient. There is no point in making them more than ~30% efficient, which is about the efficiency of fission reactors, which also have very cheap fuel. Fission reactors cannot be more efficient; the zirconium cladding would melt. But the low temperatures and low efficiency also prolong the life of the generating equipment, and they will do the same for cold fusion generators and space heaters.


    If they were 10% efficient, that would produce a lot of waste heat, which would be tricky to get rid of. The machines would run hot and require large cooling fans. They would be bulky. At 25% efficiency they would be roughly the size of an automobile engine and radiator, per kilowatt of capacity.
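
    The waste-heat arithmetic behind that, as a quick sketch (efficiencies as in the text):

        def waste_heat_per_kw_electric(eta: float) -> float:
            # kW of heat to reject for every kW of electricity produced.
            return (1 - eta) / eta

        print(waste_heat_per_kw_electric(0.10))  # 9 kW of waste heat per kWe
        print(waste_heat_per_kw_electric(0.25))  # 3 kW of waste heat per kWe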


    See:


    https://lenr-canr.org/acrobat/RothwellJmoreaboutw.pdf

    D. Bushnell discusses seeding iron in the ocean in the New York Times


    Bushnell is a cold fusion researcher at NASA (https://www.lenr-canr.org/acrobat/BushnellDlenrandnas.pdf). He co-authored a guest essay in the New York Times today. Russ George has also been involved in various projects to do this.


    See:

    Iron Dust Could Reverse the Course of Climate Change


    Here is a Gift Link to get around the Times paywall:

    Opinion | Iron Dust Could Reverse the Course of Climate Change (Gift Article)
    The U.S. government should find out if a natural process of adding iron-rich dust to the ocean can make a dent in climate change.
    www.nytimes.com

    I read the paper yesterday and if I am not mistaken, Biberian is the only author, so the bot is also attributing authorship to others very creatively.

    No, this is not a hallucination. The paper is:


    Biberian, J.P., L. Forsley, and P.A. Mosier-Boss, Transmutation of Palladium Induced by Laser Irradiation. J. Condensed Matter Nucl. Sci., 2023. 37: p. 9-22.


    JCMNS 37 is now indexed and added to ChatGPT. ChatGPT is still having problems finding papers, as shown in the example below. It is disappointing.


    👋 Hello! What can I do for you today?

    Are there replications or confirmations of Mastromatteo's work?

    I'm sorry, but the dataset does not provide any information about replications or confirmations of Mastromatteo's work.



    Discuss Mastromatteo's experiments with a laser beam?

    In 2016, Ubaldo Mastromatteo conducted an experiment where he directed a laser beam on a thin film of palladium deposited on a silicon oxide substrate in H2 and D2 atmospheres. He used a 5 mW 650 nm red pointer laser for this purpose. The experiment lasted for 3 months. The surprising outcome was the formation of many new elements in a particular 180 µm diameter spot that were not there before. In some parts of the spot, the palladium had completely disappeared. The experiments were analyzed by SEM and neutrons were potentially observed with CR-39.


    Discuss Biberian's replication of Mastromatteo's laser experiment.


    I'm sorry, but the dataset does not provide any information about Biberian's replication of Mastromatteo's laser experiment.


    Discuss Biberian's paper "Transmutation of Palladium Induced by Laser Irradiation."

    In the paper "Transmutation of Palladium Induced by Laser Irradiation," Biberian, along with Forsley and Mosier-Boss, conducted an experiment similar to Ubaldo Mastromatteo's. They directed a laser beam on a thin film of palladium deposited on a silicon oxide substrate in H2 and D2 atmospheres. The experiment lasted for 3 months and used a 5 mW 650 nm red pointer laser. They found that in a particular 180 µm diameter spot, many new elements appeared that were not there before, and in some parts of the spot, the palladium had completely disappeared. The experiments were analyzed by SEM and neutrons were potentially observed with CR-39.

    ChatGPT is not very good at logic, or at identifying unusual patterns. It seems that it has been trained to write sonnets, but it failed to suss out the logical pattern I gave it in this exchange:

    ME:


    Here is a list of well known people:


    Harold Lloyd
    Lloyd George
    George Washington
    Washington Irving


    Add ten more names to this list



    ChatGPT

    Certainly! Here are ten more well-known people to add to your list:


    Marilyn Monroe

    Monroe Doctrine

    Doctrine of Confucius

    Confucius

    Thomas Jefferson

    Jefferson Davis

    Davis Beckham

    David Bowie

    Bowie knife

    Knife Party


    That's good, but some of the items on your list are not people. They are objects, such as a Bowie knife. Try again, with only well-known people this time.


    ChatGPT

    Of course! Here are ten more well-known people to add to your list:


    Marilyn Monroe

    George Orwell

    Charles Dickens

    Abraham Lincoln

    Winston Churchill

    Albert Einstein

    Benjamin Franklin

    Martin Luther King Jr.

    Leonardo da Vinci

    Pablo Picasso


    ME:

    That's not quite what I am looking for. The last name of one well-known person should be the first name of the next person on the list.


    ChatGPT

    I see what you mean now. My previous responses didn't follow that pattern correctly. Here's a new list of well-known people that follows the pattern you've mentioned:


    Harold Lloyd

    Lloyd George

    George Washington

    Washington Irving

    Irving Berlin

    Berlin Wall

    Wallis Simpson

    Simpson (Homer Simpson)

    Homer (the ancient Greek poet)

    Poet Laureate


    Pretty good! A few of those are not people, but you have the idea.


    ChatGPT

    I apologize for the oversight. Thank you for pointing that out. Here's a corrected list of well-known people that strictly follows the pattern:


    Harold Lloyd

    Lloyd George

    George Washington

    Washington Irving

    Irving Fisher

    Fisher Stevens

    Stevens Johnson

    Johnson & Johnson (company founders: Robert Wood Johnson I, James Wood Johnson, Edward Mead Johnson)

    Edward Johnson

    Johnson Beharry (Victoria Cross recipient)


    Johnson & Johnson is funny! It is the same name twice. Clever of you.



    ChatGPT

    I'm glad you enjoyed the play on words with "Johnson & Johnson." . . .
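
    For what it is worth, the pattern ChatGPT kept missing is trivial to check mechanically. A few lines of Python (names taken from the exchange above):

        def is_chain(names):
            # Last word of each name must be the first word of the next.
            return all(a.split()[-1].lower() == b.split()[0].lower()
                       for a, b in zip(names, names[1:]))

        print(is_chain(["Harold Lloyd", "Lloyd George",
                        "George Washington", "Washington Irving"]))  # True
        print(is_chain(["Irving Berlin", "Berlin Wall",
                        "Wallis Simpson"]))         # False: Wall != Wallis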


    My estimate:

    295,200 MJ = 82,000 kWh of raw heat. However, fission reactors are inefficient. They convert only about one-third of the heat into electricity. So that is 27,333 kWh electric. The average American uses 10,632 kWh per year, so that is 2.6 years of electricity.

    Bard's estimate:

    Only ten uranium pellets can power the average household for an entire year.

    These estimates are not far apart. My stats are per capita. I assume Bard found this estimate somewhere on the web, and did not make it up. Bard's estimate is for the average household, which is ~2.5 people. My estimate was 2.6 years of electricity per capita. Or 1 year per household. That would be 16 fuel pellets powering one household for 1 year. Bard says it takes 10 fuel pellets. For a rough average, 10 is reasonably close to 16.
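
    Here is the arithmetic spelled out. All figures are taken from the text above; the ~16 pellets per cubic inch is the packing I assumed:

        heat_kwh = 295_200 / 3.6       # MJ -> kWh: 82,000 kWh of raw heat
        electric_kwh = heat_kwh / 3    # ~27,333 kWh at one-third efficiency
        household_kwh = 10_632 * 2.5   # ~26,580 kWh per household per year
        years = electric_kwh / household_kwh  # ~1.0 household-year per in^3
        print(round(16 / years))       # ~16 pellets per household-year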

    The problem of energy conversion efficiency is shared by all systems that convert heat to any kind of more usable energy (be it mechanical or electrical).

    True, but fission reactors are inherently inefficient for several reasons. Mainly because the core cannot be more than 600 deg C, or the zirconium cladding will melt. This means the primary water has to be much cooler than that, 315 deg C, and the secondary loop is even cooler. So Carnot efficiency is low.
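
    For reference, the Carnot limit with those temperatures (the ~30 deg C cold side is my assumption for a condenser):

        t_hot = 315 + 273.15   # primary loop temperature, kelvin
        t_cold = 30 + 273.15   # condenser temperature, kelvin (assumed)
        print(f"Carnot limit: {1 - t_cold / t_hot:.0%}")  # ~48%; real ~33%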


    What level of enrichment are you using? What level of enrichment was Crawford using?

    I used the energy production numbers from the ANS and the NEI. They also said the pellet is 88% uranium by weight. I do not know what the enrichment level is, but the ANS can probably tell you.


    Whatever the enrichment level is, I expect it is the standard for all power generators. It is probably optimum. I doubt Crawford knows the enrichment level.

    Your estimates seem more accurate and conservative. Your point about energy conversion efficiency seems quite valid as well. Perhaps there was some loss in the translation of the article?

    The original article is in English. He just got the numbers wrong. I sympathize. I often get the numbers wrong, sometimes by an order of magnitude. It is embarrassing, so I check and recheck!


    Maybe he looked it up somewhere. Maybe he asked Bard, which just now gave me the following misinformation:




    One cubic inch of uranium releases 287,912,815,862,066,148.6 joules of energy. This is a theoretical calculation based on Einstein's equation E=MC sq.


    [Oops! Perhaps that is what you would get if you converted the entire mass to energy? The correct answer is 295,200,000,000 J, as far as I know. Bard's estimate is off by a factor of a million.]


    Uranium is an abundant metal that is very energy-dense. A one-inch tall uranium pellet contains the same amount of energy as 120 gallons of oil. One uranium fuel pellet creates as much energy as one ton of coal, 149 gallons of oil, or 17,000 cubic feet of natural gas.


    [Correct-o-mundo.]





    Nuclear energy is more efficient than other energy sources because of uranium's high energy density. Only ten uranium pellets can power the average household for an entire year.
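
    A quick check of the "factor of a million" claim in my bracketed note above, using the two joule figures quoted there:

        bard_joules = 287_912_815_862_066_148.6  # Bard's figure for 1 in^3
        correct_joules = 295_200_000_000.0       # the figure from my estimate
        print(f"off by a factor of ~{bard_joules / correct_joules:.2g}")
        # -> ~9.8e+05, i.e. roughly a million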

    The author of the book "concludes that there is no technological or economic reason why we can’t have flying cars with existing technology—indeed, why we couldn’t have had them already, if sustained work on them had continued past the 1970s." That is wrong on many levels, as I have discussed here. The main problems with flying cars and personal aircraft are:


    Present-day FAA air traffic control cannot handle a large increase in traffic. We would need radically new GPS-based systems. Crawford says if you tried to launch a flying car venture, "you'd be shot down by the FAA." Yes, and for good reason.


    Flying an airplane is difficult and dangerous. It takes a lot of training. Microsoft Flight Simulator is remarkably realistic these days. I have enough experience with it to know that I and most other people could not possibly learn to fly safely. Personal aircraft will only become possible when airplanes are completely autonomous -- like elevators. Progress in autonomous aircraft is being made, but they do not exist yet.


    Flying cars, that is, machines that convert from road vehicles to flying vehicles, are a bad idea. Not good as cars and not safe as airplanes. It makes more sense to rent a car or use a taxi or an Uber to the airport.
