The Playground

  • Another groundhog day for the ECW community... ;)


    Rossi about to drop the current "product" for the next evolution in the pipeline...? Again and again... which idiot would buy an ECat that requires a power connection to the wall plug, if he can use the very same version without a grid connection??? ^^


    Frank's latest realization and question to himself on ECW:

    "If there is no difference in cost between the grid and gridless SKLeps, and there is no difference in performance, I can’t see any reason to order a grid version, as it adds unnecessary complexity. My guess is that the vast majority of customers will opt for the gridless SKLeps and probably the gridless SKLeps will become obsolete in time."

  • Frank's latest realization and question to himself on ECW:

    "... I can’t see any reason to order a grid version, as it adds unnecessary complexity..."

    Ah, the poor, sweet Summer-child!

    There is no complexity in the 'grid' version - AR's Blooper with the Coarse Voltage control instantly revealed that to the World: the slightest smidgeon of Voltage adjustment made the demo 'circuit' go straight from accepting 0.5 mA to handling 0.6 A of input without breaking a sweat ...and it hadn't even reached the upper limit of its operating input range (i.e. 12 V). The totality of the grid 'circuit' is likely to be nothing more complicated than a series resistor (as I think was mentioned at the time), if even that.
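
    As a purely illustrative back-of-envelope sketch (the 10 V forward voltage and 3 ohm series resistance below are guesses for the sake of the example, not known values from the demo), a bare LED string behind a small series resistor behaves in exactly this twitchy way:

        # Rough illustration only: a guessed LED-string forward voltage plus a small
        # series resistor reproduces the steep current rise described above.
        def led_string_current(v_supply, v_forward=10.0, r_series=3.0):
            """Idealised LED string: no current below v_forward, then (V - Vf) / R."""
            return max(0.0, (v_supply - v_forward) / r_series)

        for v in (10.0, 10.5, 11.0, 11.8):
            print(f"{v:5.1f} V -> {led_string_current(v) * 1000:7.1f} mA")
        # 10.0 V -> 0.0 mA ... 11.8 V -> 600.0 mA: a tiny Voltage tweak, a huge current change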


    Also, it is highly unlikely that there is any difference between the existing 'grid' version and the new 'gridless' version, since the 'grid' version in the existing demo is completely isolated from any galvanic connection to any of the power-carrying grid wires, or to ground


    I'm happy to be proved wrong but I sincerely believe now that Leonardo Corp. is not in business to sell energy-generation product to the public

    Gie me ae spark o' nature's fire, That's a' the learning I desire

    R. Burns


    A few days ago I was thinking briefly about the circuit and possible buck converters and MPPT, but really, all that is in the PSU, so why add anything at all? The unmonitored PSU accounts for all the power consumption in the circuit, and it's best to keep it that way.
    A big enough resistor to protect the LED panel, within reason… and nothing else.

    Just don’t touch that dial!

    The 220 V input version might be exciting!
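
    For what it's worth, a back-of-envelope sizing of such a protective resistor might look like the sketch below; every number in it is an assumption for illustration, not a spec of the actual LED panel or PSU:

        # Back-of-envelope sizing for "a big enough resistor" - all values assumed.
        v_supply = 12.0   # assumed maximum PSU output (V)
        v_panel = 10.0    # assumed LED-panel forward voltage (V)
        i_max = 0.6       # assumed maximum safe panel current (A)

        r_series = (v_supply - v_panel) / i_max   # ohms needed to cap the current
        p_resistor = i_max ** 2 * r_series        # power the resistor must survive
        print(f"R >= {r_series:.1f} ohm, dissipating ~{p_resistor:.2f} W at full current")
        # -> roughly 3.3 ohm and ~1.2 W with these guessed numbers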

  • "... The 220 V input version might be exciting!"

    Yes, I think AR lucked out in two ways:

    a) his LED matrix illuminated 'just' sufficiently for his 'demo' purposes at around 10 V, within its specified operating range;

    b) the existing PSU supplied by FA - although not the one specified by AR (due to unavailability), which would only show current from 0.01 A and above - showed greater resolution, but has an inherent threshold current of around 25 mA (or 12 mA, depending on the max current rating) below which the current is shown as 0.000 A, meaning that you can draw up to approx. 250 mW at 10 V before the Current & Watt readings begin to show.
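
    The arithmetic behind that ~250 mW figure, for both possible threshold values (assuming the display simply reads 0.000 A below its threshold):

        # Quick check of how much power can flow before the PSU display shows anything.
        v = 10.0                             # approximate panel voltage in the demo (V)
        for i_threshold in (0.025, 0.012):   # 25 mA or 12 mA threshold, per the post above
            print(f"{i_threshold * 1000:.0f} mA threshold -> up to ~{v * i_threshold * 1000:.0f} mW undisplayed")
        # 25 mA -> ~250 mW, 12 mA -> ~120 mW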


    LOL @ the 220 V version - happy memories of accidentally connecting my 12" bass guitar speaker across 220 V, when I was a teenager ...it never sounded so loud, before - or after! ;)

    Gie me ae spark o' nature's fire, That's a' the learning I desire

    R. Burns

  • Thanks Shane!


    Gatekeeping always serves vested interests - of whatever flavour: personal, institutional, or governmental


    The accepted framework of science has evolved over centuries (or more); meanwhile, important discoveries have continued regardless


    For the benefit of all, the birth of new ideas should be mentored and encouraged by wise people, not policed by other competitors in a 'race to glory'


    A worthwhile experiment, including its report, is a true creative exercise which should extend human knowledge - it shines a spotlight on yet another truth about our universe. All these truths belong to all of us, everywhere. Our grateful thanks, and a suitable reward, should be given to those who diligently investigated on our behalf; the knowledge gained belongs to us all, however, and should therefore reside in public-domain libraries, not behind undeserving paywalls


    I could be wrong, and often am ...but at my age IDGAF ;)

    Gie me ae spark o' nature's fire, That's a' the learning I desire

    R. Burns

  • The rise and fall of peer review - by Adam Mastroianni (substack.com)


    The author claims peer review came about largely after WW2 ...

    That is what I read elsewhere.


    This paper says:


    (Only one of Einstein’s papers was ever peer-reviewed, by the way, and he was so surprised and upset that he published his paper in a different journal instead.)


    I read something like that in a biography of Einstein. I don't recall it said he published elsewhere, but he was upset. He was puzzled, as well. "What's this?" was his initial response.


    My impression is that most cold fusion researchers have a low opinion of peer review. However, Mel Miles and others say that when it is done right, it can be very helpful. Miles says reviewers have often caught errors or problems in his papers, and he appreciates it.


    I am not a peer-reviewer. Not by any stretch of the imagination. However, I am a copy editor and translator of cold fusion papers. I have edited hundreds. That is like being the physics department secretary. You correct the professor's spelling and "it's" versus "its." You also tell the prof his sentence makes no sense. In many cases I have written "this makes no sense" or "I think that is wrong." I often make many suggestions for authors who are not native speakers of English. I know how helpful this can be. When I write papers in Japanese, I really appreciate it when Japanese speakers correct my work. Many researchers appreciate my assistance. A few of them hate it. They do not want me to edit their papers. So I don't, and they end up publishing gobbledygook.


    People who learn a second language sometimes make subtle mistakes their entire lives. Martin Fleischmann spoke and wrote superb English. Better than 99.9% of native speakers I would say. He once told me he would like to give a lecture on electrochemistry in iambic pentameter. Yet despite his mastery of the language he sometimes made mistakes with idiomatic expressions such as "put that in your pipe and smoke it." His wife would casually correct him. She had been doing that for decades so he wasn't upset -- it hardly slowed him down. My wife has been correcting my Japanese for decades. It is like when your wife reaches over to pull your shirt collar out of your sweater.

    Editorial work is as important as the technical “peer review”. A scientific paper needs to be not only technically right but also readable.


    When I have peer reviewed, I focus on the technical aspects, but I also add my personal engineering point of view. As an example, if I see something that is technically correct and ingenious, but I fail to see how it could be implemented in practice or economically, I ask the author to add a comment on his perspective of how this aspect could be approached.

    I certainly Hope to see LENR helping humans to blossom, and I'm here to help it happen.

  • The rise and fall of peer review - by Adam Mastroianni (substack.com)


    The author claims peer review came about largely after WW2, and has been a flop. It may even do more harm than good. Long read, but interesting. He has plenty of studies to support his belief.



    Peer review is necessarily imperfect. I agree it often does not catch bad errors. I agree doing it is a real pain, and uses up valuable time (one reason why it is not always done as well as it should be).


    But....


    An incorrect argument against peer review


    The argument above is simply logically false. The fallacy is to conflate "peer review lets through a lot of rubbish" with "peer review does not, on balance, filter out more rubbish than useful papers". I think the utility of a filter - and also the way that even when using a filter this does not constitute gatekeeping - may not be understood by all, so I will say a bit about it.
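
    To put some purely invented numbers on that fallacy (none of the rates below are real measurements of peer review), a filter can let through plenty of rubbish and still sharply improve the quality of what gets through:

        # Toy numbers only: why "lets rubbish through" does not imply "useless filter".
        submitted_good, submitted_bad = 100, 900        # assumed mix of submissions
        p_accept_good, p_accept_bad = 0.8, 0.2          # assumed reviewer pass rates

        accepted_good = submitted_good * p_accept_good  # 80
        accepted_bad = submitted_bad * p_accept_bad     # 180 - still "a lot of rubbish"

        before = submitted_good / (submitted_good + submitted_bad)  # 10% good
        after = accepted_good / (accepted_good + accepted_bad)      # ~31% good
        print(f"good fraction: {before:.0%} before review, {after:.0%} after")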


    Many here, I have noticed, have an absolutist view about experimental science, having some bar for what constitutes a correct experiment/paper and then fully believing, or fully disbelieving, the content. The same idea extended (some years ago) to "Rossi's demos have been validated by genuine Profs, therefore his stuff is likely correct". Distinguished professors are as capable of being fooled as anyone (perhaps more so, because they do not expect it from someone who speaks philosophy of science), and detailed analysis of their fields of expertise showed a lack of relevant experimental expertise, except for Levi - who was very clearly not independent.


    This all-or-nothing mentality perhaps would not register the (simple) probabilistic defect in the "peer review is bad" argument outlined above.


    Common mistakes when evaluating science


    And before saying my bit about peer review I would like to make a link with how conspiracy-based contrafactual anti-climate-science memes are taken up by many clever people - typically engineers. From here an interesting comment:


    After a couple of years debating on sceptics sites I have encountered a number of engineers.


    They tend to have worked in technologies based on mature sciences. Their equations and physical constants are set in stone. Their instruments measure to the limits of precision they require. By training and experience they have little experience with the uncertainties of observational science.


    This makes them suckers for the uncertainty meme pushed by propagandists such as Judith Curry.


    long version from David Brin: Skeptics vs Deniers.


    I am not identifying anti-consensus climate-deniers with those supporting LENR. Merely saying that it is common for non-experimental scientists to misunderstand anything where results are probabilistic. LENR advocates here might agree and note that this is why LENR theories are dismissed. But it can work the other way too! Anyway, my point about peer review is that scientific papers represent arguments of better or worse quality about things which themselves are (and need to be) uncertain. So there will be roughly nine good-but-wrong papers for every one good-and-right paper, whenever we look at something not understood. And which one of those ten, if any, is right often remains unclear for a long time. A case in point would be the mechanism behind ball lightning.


    (Disclosure: I have a foot in both camps, being an engineer who has significant maths and theoretical physics training and a mathematical interest in uncertainty.)


    The argument about Peer Review


    When doing research you do not read all the contents of whole Journals. You take your stuff, do a quick scan for what might be relevant. Spend a bit more time to narrow down a shortlist, add any candidates to the "this is of interest" collection. And then, when reading "this is of interest" properly, you follow citations backward (from a paper to something it cites) and forward (from a paper to something that cites it). You leverage the fact that worthwhile papers get cited by other people doing related work.
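
    A toy sketch of that backward/forward walk - the "papers" and citation links below are invented, and real tooling would query a citation index rather than a hard-coded table:

        # Minimal sketch of backward/forward citation following over a made-up graph.
        cites = {
            "A": ["B", "C"],   # hypothetical papers mapped to the papers they cite
            "B": ["C"],
            "C": [],
            "D": ["A"],
        }

        def backward(paper):   # what this paper cites
            return cites.get(paper, [])

        def forward(paper):    # what cites this paper
            return [p for p, refs in cites.items() if paper in refs]

        print(backward("A"))   # ['B', 'C']
        print(forward("C"))    # ['A', 'B']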


    That is a lot of reading. And notice that the gatekeeping does not much exist. When following stuff which is interesting you do not pay attention to whether the Journal that published has high or low standards, or even whether the work referenced is a Journal or someone's Doctoral or Master's thesis. Trust me, you can get anything published in a form that can be referenced and found, and interesting work is not always that which is published in peer-reviewed Journals or even admitted to high quality peer-reviewed conferences.


    However when not following clear interest, and browsing, it helps (the reader's sanity if nothing else) to be reading stuff that is at least well-written in form, with clear introduction, conclusions, argument, appropriate references, etc. Without those content-agnostic elements the value of the work, regardless of whether it is actually correct, is much lower. That is because "is it correct?" is the question that gets answered last. Maybe never. And "is it useful?" depends on all those other things which peer review helps with.

    But - as noted above - peer review is no silver bullet - it increases overall quality on average. Bad stuff will get through (it is quite possible to write stuff that sounds convincing and well argued, but which any expert could pick big holes in; that such stuff is easily published is shown by the antivax papers that get through). Good stuff will also be refused. Luckily, there are 1000s of Journals out there making independent decisions, so no refusal will stop good work forever if it is good. The horror stories you hear are when some highly novel idea that no-one understands does not get published in the best Journal, and is difficult to publish anywhere. But then that is not all bad. If no-one understands it the authors need to explain it better, add links or extra evidence, etc. All of that is needed so that most people will be able to pay attention to it. And anyway "bad" and "good" are not always so clear. Certainly "good" takes a long time to be known if by "good" you mean "correct".


    I am open to the idea that some crowd-sourced vehicle such as ResearchGate - combined with overall citation or download metrics (Rossi shows the perils of paying too much attention to downloads as a quality metric) - could in the end be a better - more cost-effective and more accurate - filter than publication in a Journal. Were funding bodies to use such metrics you can be sure that if they can be gamed - they will be. So I don't see that as a silver bullet either. The UK govt has (rightly) de-emphasised the number of papers published in the most recent REF exercise.


    So what is (harmful) gatekeeping?


    What happens on ECW, or less often here: clearly sound arguments which the readership dislikes, and which remain relevant to the discussion, are shut down; the discussion then carries on, relying on elements affected by those rejected-but-unresolved arguments, without any reference to them. That is a bit of a mouthful, but the English language is not good at making statements about uncertainty.


    On ECW, the contrafactual element would be "Rossi is not a crook" (there are many others, but that about summarises it).


    Here the contrafactual elements are much more subtle - but the one that drove me away from arguments about LENR was when ascoli's simple "foamgate" argument - expounded at length - was shut down without any acknowledgment that ascoli's questions had not been addressed, and that if his/her argument was false it could be addressed simply. Those two things together do not quite prove the argument true - but they make it something that should stick in everyone's minds as "hmmm, something here is not resolved, and important". (I confess to being reluctant to engage with it myself initially for various reasons - but I am a bit obsessive about unanswered arguments when they can be answered).


    Foamgate does not disprove all those early LENR experiments. In fact it casts significant doubt on only one experiment - the F&P boil-off demo (and its clone replication).


    So why does that one thing being not resolved matter? I argued consistently during foamgate that it does not matter. There are many LENR experiments, and each can be considered on its merits. But many here have (look back for evidence) claimed that the F&P boil-off experiment was very important and had irrefutable public evidence that the observed heat production was beyond any chemical explanation. That claim is stated explicitly in F&P's paper. If the claim stands, and the experiment was replicated (which it pretty well was), it proves LENR - or at least an effect that is either very novel physics or LENR. So the integrity of that experiment matters to many here.


    I like to follow clear arguments on their internal merit without much context. It makes me more willing to engage with LENR without being a believer than many, who dismiss it on (maybe false) contextual grounds. I do eventually add context into my overall decision-making, as we all do. I am an outlier here because I add the context of


    "LENR is unproven and has as yet no predictive theory, so a large number of disparate artifacts, errors, misinterpretations, and very occasional (Rossi) fraud looks more plausible than LENR"


    whereas others add:


    "LENR is proven beyond doubt by so many positive experiments - there must be some novel physics or which LENR is the least complex (less of a stretch than hydrinos, for example)".


    At that point my judgements and those of most others here part company. Any rational person would require a higher standard of evidence in the first context than the second. Any Bayesian scientist would agree that context cannot be separated from judgements, no matter how much we would like it to be. And to those who argue bias - on either side - it can exist; however, even without bias, context will alter judgements.
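
    A minimal Bayesian sketch of that point (the likelihood ratio and the two priors below are invented purely for illustration): the same piece of evidence moves a sceptical prior and an already-convinced prior to very different posteriors:

        # Illustrative only: identical evidence, different priors, different conclusions.
        def posterior(prior, likelihood_ratio):
            """Posterior odds = prior odds * likelihood ratio, converted back to a probability."""
            odds = (prior / (1 - prior)) * likelihood_ratio
            return odds / (1 + odds)

        lr = 5.0                       # assumed evidential strength of one positive experiment
        for prior in (0.01, 0.5):      # sceptical context vs already-convinced context
            print(f"prior {prior:.0%} -> posterior {posterior(prior, lr):.0%}")
        # prior 1% -> ~5%, prior 50% -> ~83%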


    Happy Christmas to everyone - try to avoid tribal identification! (My clarion call this year to all including myself).


    THH


    PS - where I agree with at least some others at ICCF24 is that a replicable, certain experiment showing LENR is needed in order for mainstream scientists to take notice, at which point there would be a very major effort and likely successful commercialisation. You can see this with the NASA work, where some aspects of LENR-lite (fusion induced at higher than expected levels because of electron shielding and coherent effects) have passed this benchmark, but with as yet no clarity on whether they will end up being commercial, because "LENR-lite" is significantly less easy to commercialise than LENR.

  • And I'd agree with Jed's comments here about Peer Review (did not see this till after I'd posted).
