Post ICCF24 thread.

  • A great (and positive) analysis of what a scientific field of LENR could/should be doing

    Yes

    As I studied Metzler's ARPA-E presentation I listed the known groups approaching commercialization and considered which groups have already been doing what Metzler suggests in his concluding remarks.


    This has proved to be a valuable exercise. Applied to an industry-wide group improvement effort... Invaluable!


    Team Google might be enticed to share every bit of research behind their solid state atomic and fusion energy patent development. They are seemingly in pursuit of carving out a large share of the CMNS energy market.


    Other groups that have been doing what Metzler suggests could also be enticed to share data and work together to advance all commercial efforts. The NASA presentation provided a rough overview of work that began in 2012. Detailed research and data from 2022 have not been disclosed. Neither has progress on the GEC NASA Space Act Agreement.


    How might we entice the most advanced teams to help other teams improve products and succeed in rapid market entry and energy market takeover?

    This nascent industry has the potential to face no other competition in the energy marketplace. Competition against each other is detrimental to the CMNS Energy Tech Industries and shortsighted.

  • Nevertheless, the Google program managers had their own idea, and ultimately concluded that there is something anomalous going on, and they were going "to find out what it is".

    I do not think the paper said that. It said the experiments are challenging and worth doing on their own merits. It gave no hint that they think cold fusion is real. The Nature editorial that accompanied the paper was a hatchet job no better than any of the other hatchet jobs Nature has published since 1989. There was no softening, no concessions, and no indication that they know anything about the research.

  • Kazuaki Matsui presented about the New Hydrogen Energy (NHE) program in Japan, which ran for five years during the 1990s. He commented that they never really found anything positive in their experiments. But Melvin Miles, who attended the conference online, posted in the comments that he found a clear excess heat signature in the larger cells he activated at NHE during the last few months of the program. Martin Fleischmann's analysis agreed with that, but apparently it wasn't enough to keep the program going.

    Miles and Fleischmann were very upset about this. Look for the term NHE in Fleischmann's letters:


    http://lenr-canr.org/acrobat/Fleischmanlettersfroa.pdf


    Here we are only 4 decades into the computer revolution, and there's already a museum!

    Eight decades since computers were invented in 1945. By 1950 people knew about them. They began having a large impact around 1960. Bill Gates, I, and many others learned how to use them in the mid-1960s. Microcomputers (later called personal computers) that were cheap enough for ordinary folks to buy emerged in the late 1970s.


    I have not been to the museum in California, but I see from the online materials they have many exhibits from the 1950s on, long before microcomputers emerged.


    Here is part of my own personal computer museum: a planar core RAM module and a DIMM semiconductor RAM module. The planar core is 64 bits x 64 bits = 4,096 bits. This is the kind of memory computers had when Gates and I learned to program them. It is 10.5 cm per side, about 110 cm^2, so the bit density is about 37 bits/cm^2. The DIMM is an old 4 GB RAM module, 34,359,738,368 bits. It is 13.5 cm x 3 cm = 40.5 cm^2, which works out to about 848,388,602 bits/cm^2. Bit density has increased by a factor of roughly 23 million. (A quick check of this arithmetic is sketched below.) Newer RAM DIMM modules are usually 32 GB, I think. There is one with 512 GB.
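
    For anyone who wants to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python, using only the module dimensions and capacities quoted above:

        # Rough bit-density comparison: planar core plane vs. 4 GB DIMM
        core_bits = 64 * 64                       # 4,096 bits
        core_area_cm2 = 10.5 * 10.5               # ~110 cm^2
        core_density = core_bits / core_area_cm2  # ~37 bits/cm^2

        dimm_bits = 4 * 2**30 * 8                 # 4 GB = 34,359,738,368 bits
        dimm_area_cm2 = 13.5 * 3.0                # 40.5 cm^2
        dimm_density = dimm_bits / dimm_area_cm2  # ~848,388,602 bits/cm^2

        print(f"Core plane: {core_density:.1f} bits/cm^2")
        print(f"DIMM: {dimm_density:,.0f} bits/cm^2")
        print(f"Ratio: {dimm_density / core_density:,.0f}x")  # ~22.8 million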


  • I do not think the paper said that. It said the experiments are challenging and worth doing on their own merits. It gave no hint that they think cold fusion is real. The Nature editorial that accompanied the paper was a hatchet job no better than any of the other hatchet jobs Nature has published since 1989. There was no softening, no concessions, and no indication that they know anything about the research.

    Yes, you are correct. I am paraphrasing; it should not be in quotes. I did not look up exactly what they said, but essentially that is what they said. They are not stopping. Google will continue, because they believe there is something there. The presenter Ben Barrowes remarked on the Google paper and how it made him think he could propose something because of that exact sentiment. He got that sentiment from the paper. He is not alone; Scott Hsu said as much. This is how the Google Nature paper had an impact, despite not being what the LENR community had hoped for.

  • They are not stopping. Google will continue, because they believe there is something there.

    I do not get the sense they believe there is something there. I get the sense they are playing political games, but I cannot imagine what games, or what purpose they serve. Judging by the Nature editorial that was published with the paper, this was a crude hatchet job intended to make people think cold fusion does not exist. That seems like an expensive hatchet job, so I do not know why they would bother. They could have accomplished the same thing with the usual bullshit editorial in Nature.


    If they continue with the program the way they have done up to now, they will waste another $10 million (or however much it was), and accomplish nothing. They did the same experiment 400 times, with no hint of a positive result. Peter Hagelstein wondered what they learned between experiment 399 and 400. He had other unkind things to say about them.


    The Google paper in Nature was a masterpiece of obfuscation, misdirection, and confusion. I think I know what they did -- sort of -- because I asked them, and I heard through the grapevine.


    This reminds me of the first NHE program, which was a travesty conducted by people who knew less about cold fusion than I did (which is to say, hardly anything on a professional PhD level). It was a waste of money. It gave cold fusion a bad name. When Miles visited their labs, did an experiment, and produced excess heat, they refused to look at it, and they did not mention it in their final reports. Matsui did not mention it in his presentation, which upset Miles. The final report and Matsui's summary should have said: "Miles concluded that his experiment produced excess heat, but we disagree [for thus and such reasons]." For them to not even mention it is way out of bounds in academic science. You do not invite one of the world's top electrochemists to your lab, have him work for months, and then refuse to mention his results in your final report. If he came for a 3-day visit that would be another matter.


    With friends like Google and Matsui, we don't need enemies.

  • I do not get the sense they believe there is something there. I get the sense they are playing political games, but I cannot imagine what games, or what purpose they serve. Judging by the Nature editorial that was published with the paper, this was a crude hatchet job intended to make people think cold fusion does not exist. That seems like an expensive hatchet job, so I do not know why they would bother. They could have accomplished the same thing with the usual bullshit editorial in Nature.


    If they continue with the program the way they have done up to now, they will waste another $10 million (or however much it was), and accomplish nothing. They did the same experiment 400 times, with no hint of a positive result. Peter Hagelstein wondered what they learned between experiment 399 and 400. He had other unkind things to say about them.


    The Google paper in Nature was a masterpiece of obfuscation, misdirection, and confusion. I think I know what they did -- sort of -- because I asked them, and I heard through the grapevine.

    Rightfully so, Trevithick/Google took a beating from Hagelstein and others for doing the same Parkhomov experiment 400 times and failing each time. But IMO the good they have done for the field outweighs that. As mentioned, some of the new presenters at this ICCF expressed gratitude to Google for either spurring them on to do their own research, or giving them the cover they needed to start.


    Trevithick has stayed involved with the community even after the Nature article. That tells me something. He attended ICCF23 right after that Nature paper came out, and hosted a discussion panel (Nagel, Duncan, Schenkel) at this ICCF24. In his talk he proudly pointed out that some on Team Google (Project Charleston) decided to stay active in the field. Indeed, Schenkel was on the panel, and his results were the one bright spot in the Nature paper.


    That does not spell "I do not get the sense they believe there is something there" to me.

  • Miles and Fleischmann were very upset about this. Look for the term NHE in Fleischmann's letters:


    http://lenr-canr.org/acrobat/Fleischmanlettersfroa.pdf


    Eight decades since computers were invented in 1945. By 1950 people knew about them. They began having a large impact around 1960. Bill Gates, I, and many others learned how to use them in the mid-1960s. Microcomputers (later called personal computers) that were cheap enough for ordinary folks to buy emerged in the late 1970s.

    That is a good question. When do we define the start of the revolution? It isn't one moment. I guess I was thinking about it in terms of the societal applications that were the final straw in breaking the old institutions, when us ordinary folks got involved.

  • Trevithick has stayed involved with the community even after the Nature article.

    He was involved long before that, as well. McKubre and others speak well of him. I do not blame him, exactly. I suppose the people at Nature would only accept the paper written in that uninformative style, accompanied by the infuriating editorial. It seems that Nature said something like: "we'll publish your paper as long as it casts doubt on the people you are replicating, and as long as we can include an editorial with flat-out lies about your research." I think it is a violation of academic ethics to agree. I would never agree to that. Perhaps Trevithick et al. decided that even with those conditions, the paper would do more good than harm. Perhaps it did . . . But I wouldn't agree to it.

  • When do we define the start of the revolution?

    The computer revolution began in early 1943 at the Moore School of Electrical Engineering, with the ENIAC project. The revolution became an assured success, and future progress became inevitable, in the summer of 1944, on a passenger railroad platform in Aberdeen, Maryland, when Herman Goldstine had a chance meeting with John von Neumann. Goldstine was a young mathematician working on ENIAC. Years later, Goldstine remembered that he was understandably nervous upon meeting the world-famous mathematician on the platform at the Aberdeen station. He recalled:


    "Fortunately for me, von Neumann was a warm friendly person who did his best to make people feel relaxed in his presence. The conversation soon turned to my work. When it became clear to von Neumann that I was concerned with the development of an electronic computer capable of 333 multiplications per second, the whole atmosphere of our conversation changed from one of relaxed good humor to one more like the oral examination for the doctor's degree in mathematics."


    von Neumann soon wrote "First Draft of a Report on the EDVAC" and other seminal papers in collaboration with Goldstine. He defined the von Neumann architecture and the methods of stored-program control. Later, at Princeton's Institute for Advanced Study, von Neumann gathered a group of mathematicians and engineers and oversaw the IAS computer, which was a huge step forward. It was replicated at Los Alamos and elsewhere, in various versions including the JOHNNIAC. von Neumann did not just oversee this project; he personally designed key parts of the machine and solved many difficult conceptual and engineering problems.


    The IAS computer had an immediate and profound impact on the whole human race: it was used to develop the hydrogen bomb. I gather they couldn't have built the damn thing without it. By the mid- to late 1950s computers were essential to things like the air traffic control system, the air defense system, nuclear weapons and reactors, and many large businesses. In 1956 there were ~600 computers. By 1966, after the introduction of the IBM 360, there were 30,000; 90,000 by 1970. (Computers in Business (textbook), 1972 edition, p. 52, 53.) People did not see them, but they already had a profound effect on industry, government, the military and so on.


    Sometimes, one event and one person changes the whole world. Sometimes you can pinpoint the day, the hour and the person who changed history.


    I guess I was thinking about it in terms of the societal applications that were the final straw in breaking the old institutions, when us ordinary folks got involved.

    That was mainly due to the microprocessor, introduced in 1971, and the internet, which was mainly designed in 1973 by Robert Kahn and Vinton Cerf. The internet succeeded largely thanks to support from the politician and later V.P. Al Gore. People made fun of Gore for saying this, but it is true. See:


    https://web.eecs.umich.edu/~fessler/misc/funny/gore,net.txt

  • Very nice presentation by Dr. McKubre. The YouTube format is, I think, the future, because it brings dynamism to papers that are often heavy going, especially for beginners.



    [Embedded YouTube video]

  • I thought the Telex started it in the 1930s

    https://en.wikipedia.org/wiki/Telex

    It's funny that people here should talk about computer systems. In 1986 I came up with what might be the end evolution of computers. It's called reconfigurable computing nowadays. It is a multi-billion-dollar industry (I didn't get the money). I still have ideas I'm working on that will change the computer industry; I just need to raise a little money.


    Who is Steve Casselman? These are some notes (1987 SBIR, bit.ly link)

    https://bit.ly/FPGA-Fabric-Eats-The-World


    I'm a visionary. I see how things work all in one flash. Just as I saw how computer systems would evolve and how we would one day program hardware with software languages, I saw how coherent alpha-beta phase waves were one way to explain cold fusion. Only quantum mechanics can explain the weirdness that is cold fusion. My explanation is simple, straightforward, and recorded on video. In 2012 I "saw" how it all went together, just like I did with reconfigurable computing in 1986.


    Cheers

    Steve

  • Steve_Casselman


    Thank you for the mini-autobiography, I'm impressed. Can you describe some of your cold fusion work?

    Currently, it's all thought experiments. I predicted alpha-beta phase waves in 2012. In 2017, phase waves were recorded in this paper published in Nature Communications: https://www.nature.com/articles/ncomms14020


    More importantly, they recorded a "mystery" where the phase of the area they were recording suddenly flipped from alpha to beta through what they surmised was a "lattice realignment". This is also predicted by my theory. They just charged and discharged their samples. In my proposed experiment you charge the sample and then plate it to lock in the hydrogen. Then you stimulate it at a very low frequency to generate phase waves. When I presented this to McKubre in 2013, he told me a story about how somebody's kid cranked down the frequency on the cube target and the next day the target had melted.


    I'm trying to get the experiment done. I don't have the calorimetry chops or the money or the time to do it myself. I'm doing my day job which is creating a hardware/software codesign project. It's a runtime loadable microcoded algorithmic state machine that is programmed in a subset of C.

  • This work should interest you.


    Currently, it's all thought experiments. I predicted alpha-beta phase waves in 2012. In 2017, phase waves were recorded in this paper published in Nature Communications: https://www.nature.com/articles/ncomms14020


    More importantly, they recorded a "mystery" where the phase of the area they were recording suddenly flipped from alpha to beta through what they surmised was a "lattice realignment". This is also predicted by my theory. They just charged and discharged their samples. In my proposed experiment you charge the sample and then plate it to lock in the hydrogen. Then you stimulate it at a very low frequency to generate phase waves. When I presented this to McKubre in 2013, he told me a story about how somebody's kid cranked down the frequency on the cube target and the next day the target had melted.


    I'm trying to get the experiment done. I don't have the calorimetry chops or the money or the time to do it myself. I'm doing my day job which is creating a hardware/software codesign project. It's a runtime loadable microcoded algorithmic state machine that is programmed in a subset of C.
