
Posts by THHuxleynew

    Sigh...


    You have not been reading my posts - or you would know that what I said (and linked) shows the straw man that this paper argues against.


    The distribution of the structural charge density of the deuteron is examined and its symmetry is analyzed on the basis of experimental data obtained from electron scattering experiments, in which the total symmetry of the deuteron structure is evidenced. Since according to the conventional nuclear model the deuteron is formed of a neutron and a proton, whose structural charge density is substantially different, it follows that the juxtaposition of the distribution of their charge density is asymmetrical, thus being in deep disagreement with that of the deuteron which is symmetrical. This incongruence is thus analyzed. The conventional model of a proton juxtaposed to a neutron is unable to provide a credible explanation of the symmetry of the deuteron charge distribution since it is composed of two different particles, one neutral and the other one charged, and with a highly dissimilar structural charge density. Consequently, an explanation for the structural symmetry of deuteron is proposed, based on a revised approach.


    The "conventional nuclear model" is of course only an approximation: and both (standard model) theory and experiment show that (quite reasonably) the constituent quarks in a nucleus bind to each other tightly beyond the nucleon triplets. Also the approach here loses the fact that those constituent quarks - even if binding together mainly as nucleons, have probability distributions, so that classic analysis cannot predict the resulting symmetry.


    This description of the deuteron is based on the author's previous description of a neutron (section 3) and depends on it.


    But that is wrong: I have previously given a fundamental experimental reason why electrons cannot be localised to a nucleus. The size/momentum characteristics of electrons have been studied in great detail experimentally and correspond exactly to theory.
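
    As an aside, here is a minimal back-of-envelope version of the standard argument (my own sketch, with textbook constants - not necessarily the specific reference linked earlier): localising an electron to nuclear dimensions forces its momentum, and hence its kinetic energy, far above anything seen in nuclear processes.

    ```python
    # Uncertainty-principle estimate (illustrative sketch, not from this thread):
    # confining an electron to a region of nuclear size forces a huge momentum spread.
    HBAR_C_MEV_FM = 197.3       # hbar*c in MeV*fm (standard value)
    ELECTRON_MASS_MEV = 0.511   # electron rest energy in MeV

    def min_kinetic_energy_mev(confinement_fm: float) -> float:
        """Minimum kinetic energy (MeV) of an electron confined to `confinement_fm` fm,
        using delta_p ~ hbar/delta_x and E^2 = (pc)^2 + (mc^2)^2."""
        pc = HBAR_C_MEV_FM / confinement_fm            # momentum times c, in MeV
        total_energy = (pc**2 + ELECTRON_MASS_MEV**2) ** 0.5
        return total_energy - ELECTRON_MASS_MEV        # subtract the rest energy

    print(min_kinetic_energy_mev(1.0))   # ~197 MeV for a 1 fm region
    print(min_kinetic_energy_mev(5.0))   # ~39 MeV even for a large (5 fm) nucleus
    ```

    Either way the answer is tens to hundreds of MeV, while nuclear beta decays release only a few MeV - the usual reason electrons are not taken to be nuclear constituents.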


    Sardin can be excused for not explaining this discrepancy, because his writing (as read here) excludes any possible QM modelling of the nucleon constituents. It is as though he is stuck in the 19th century, with QM not yet accepted. Even Einstein - who hated it - had to accept all those experiments.


    I find the credibility gap here extraordinary. It is as though people here look at a complex proof and wonder at it. Then, when the first part contains a clear contradiction, and this is pointed out, they just ignore that on the grounds that the rest of it is so compelling.


    Unfortunately - in physics as in maths - if you start off with a contradiction (something clearly unreal) you can prove anything.


    Finally - the actual (experimentally measured) structures, even of neutrons, are very complex. So proposing a simple structure (even if it were not theoretically flawed) does not work:

    New insights into the structure of the neutron
    An international research team has measured neutron form factors with previously unattained precision.
    www.sciencedaily.com


    So - in summary - criticism of the standard model is invalid because:

    • The standard model's predictions for nuclear structure are complex, and cannot be determined without QM, which is not considered here
    • The model considered here is contrary to experiment (the known properties of the electron)
    • The model considered here is contrary to experiment (even neutrons, let alone deuterons, have an experimentally measured, very complex structure)
    • The model here does not show the correct quark-like constituents, as shown experimentally by deep inelastic scattering and other experiments


    I am not a great person to be reviewing this (not an expert). But I know enough about the review process to know that such major flaws would need to be addressed by the author before it could be published in any non-predatory journal. Rossi would take it, though.


    As for those 3 quarks: it is only 3 quarks in one sense of the word. Perhaps this analogy can give some idea of the complexity, but also the emergent simplicity, of the standard model:


    (Virtual) gluons are part of nucleons in the same sense that (virtual) photons are part of the atom as a whole. You don't need to talk about them because they are implied in the strong and electro-magnetic interaction respectively, and unlike the electrons and protons and quarks, they are uncountable - that is, a protium atom always has one electron and one proton made of three quarks, but there isn't even any meaningful number of the virtual photons and gluons. Don't think of virtual particles as "this could have been a real gluon, but isn't" - they are excitations in their corresponding quantum field that don't follow the rules for particles in that field.

    Don't try too hard finding a "yes-no" answer to anything in physics - most of the answers are more like "Yes, but...". There can be hundreds of implied conditions in anything you learn about anything (one good reason why you need to slowly build up on strong foundations, rather than just skipping to some random interesting physics topic) - e.g. do relative velocities combine additively? Yes (as long as we're talking about e.g. cars on a road). No (if you're talking about e.g. high energy particles hitting Earth's atmosphere). This exists in probably every single physics question, so the ", but..." is always implied - little need to keep carefully reminding people of it beyond their introduction to science in general.

    For the first part, I'd just add to the already existing great answers: three quarks aren't the only explanation, of course. Another way of looking at the problem is that the (e.g.) proton is made out of thousands of quarks and anti-quarks that are constantly created and annihilated, and if you add them all up at any given instant, you get three more quarks than anti-quarks. Except for their energy (which contributes to the mass of the proton as a system), they almost entirely cancel out except for the three "extra" quarks. There are many ways of looking at this picture as well - some consider the "cancelling" quarks to be real particles, some consider them to be virtual particles and some see an interplay between virtual quarks and gluons. Needless to say, all those alternatives predict the exact same outcomes for any typical chemistry question - we're talking about differences that are either tiny for any practical purposes, or possibly even just a mostly meaningless semantic debate (does a falling tree make a sound if nobody hears it?).
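
    As a concrete illustration of the velocity-addition aside in the quote above (my own sketch, with made-up numbers): the relativistic combination formula reduces to simple addition at everyday speeds, but not at particle-physics speeds.

    ```python
    # Relativistic velocity addition for collinear velocities:
    # u_combined = (u + v) / (1 + u*v / c^2). Numbers below are illustrative only.
    C = 299_792_458.0  # speed of light, m/s

    def combine_velocities(u: float, v: float) -> float:
        """Relativistically combine two collinear velocities (m/s)."""
        return (u + v) / (1.0 + u * v / C**2)

    # Two cars approaching at 30 m/s each: result is 59.999999999... m/s,
    # indistinguishable from the naive 60 m/s.
    print(combine_velocities(30.0, 30.0))

    # Two particles approaching at 0.9c each: the naive sum would be 1.8c,
    # but the correct combined speed is ~0.994c - still below c.
    print(combine_velocities(0.9 * C, 0.9 * C) / C)
    ```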



    THH








    You can find regimes where QCD modelling has not got very far - true.


    Here is the problem:


    In formulation, QCD and QED are strikingly similar. Both are gauge-invariant quantum field theories. The key difference is that photons in QED are neutral; so they can’t interact directly with each other. The gluon is the QCD analog of the photon; it carries the strong force between quarks. But quite unlike photons, gluons do carry color charge, the analog of electric charge. So gluons interact directly with each other as well as with quarks. (See the article by Frank Wilczek in Physics Today, August 2000, page 22; https://doi.org/10.1063/1.1310117.)

    That seemingly innocent change has dramatic consequences for phenomenology. It is the root of QCD’s daunting complexity. Electrons, positrons, and photons can be separated and isolated at macroscopic distances. Quarks, antiquarks, and gluons cannot. This prohibition, called color confinement, assures that all the elementary particles (the hadrons) composed of quarks, antiquarks, and gluons come in precise color-neutral combinations. Loosely speaking, this means that they come either in quark–antiquark pairs (the mesons) or in triplets of quarks (the baryons). Several recently discovered “pentaquark” baryons appear to combine a quark triplet with a quark–antiquark pair (see page 19 of this issue.)

    Why only color-neutral combinations? In QCD, quarks can have three colors. Conventionally, they are labeled red, blue, and green, but of course they have nothing to do with optics. Antiquarks have the corresponding anticolors. Triplets of quarks containing equal portions of the three colors are color neutral.

    Try to pry loose one of the three valence quarks in a proton. Before going much farther than the radius of the proton (about 1 fm or 10⁻¹³ cm), you’ve done enough work to create a new quark–antiquark pair. Pairs promptly appear, choose new partners, and you find a meson in one hand and a proton or neutron in the other. No isolated quarks!
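
    A rough back-of-envelope illustration of that last quoted paragraph (my numbers, using the commonly quoted QCD string tension of roughly 0.9 GeV/fm, not figures from the article):

    ```python
    # Work done pulling a quark against the (approximately constant) QCD string tension.
    # Illustrative sketch only; the string tension is the commonly quoted ~0.9 GeV/fm value.
    STRING_TENSION_GEV_PER_FM = 0.9   # approximate QCD string tension
    PION_MASS_GEV = 0.14              # lightest quark-antiquark bound state

    def separation_work_gev(distance_fm: float) -> float:
        """Energy (GeV) stored in the colour flux tube after separating a quark by distance_fm."""
        return STRING_TENSION_GEV_PER_FM * distance_fm

    work = separation_work_gev(1.0)       # ~0.9 GeV after about one proton radius
    print(work, work / PION_MASS_GEV)     # several times the pion mass, so creating a new
                                          # quark-antiquark pair is cheaper than pulling further
    ```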


    However, in many other regimes the calculations converge better, and we have got better at doing very complex calculations.


    By 2004 (30 years after it was formulated), lattice QCD really starts to make a mark:




    The most important theoretical advance in recent years has been the development of improved actions, that is, improved methods of formulating QCD on the lattice. As in classical field theory, the QCD action is the integral, over space and time, of the Lagrangian density. In lattice calculations, this four-dimensional integral is approximated by summing over discrete lattice points in spacetime.

    With substantial computational resources at NSF and DOE national centers during the past three years, lattice gauge theorists have used an improved “staggered fermion” (ISF) action to generate, and make publicly available, a large set of gauge-field configurations (see box 2).[5] Staggered fermion actions, introduced by John Kogut and Leonard Susskind in 1976, are so called because the algorithm spreads the fermion spins over adjacent lattice points.

    The newly available gauge-field configurations include the vacuum-polarization (quark loop) effects of u, d, and s quarks. Several lattice-QCD collaborations, working together,[6] have recently used these configurations to determine a variety of hadronic quantities to an unprecedented accuracy of 3%. All of those quantities had been measured previously in the laboratory. Figure 2 plots the ratio of the simulated value to the experimental one for each observable. The only inputs were a few experimentally known hadron masses that were used to determine the lattice spacing and the masses of five of the quark flavors. The t quark is too heavy to contribute. The rest is pure prediction.
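
    To make the quoted idea of "approximating the action integral by a sum over discrete lattice points" concrete, here is a toy sketch for a 1D scalar field (my own illustration: real lattice QCD uses a 4D gauge action, link variables and fermion determinants, none of which appear here):

    ```python
    import math

    # Toy 1D "lattice field theory": the continuum action
    #   S = integral dt [ 1/2 (dphi/dt)^2 + V(phi) ]
    # is approximated by a sum over lattice points with spacing a.
    # Purely illustrative - not the staggered-fermion QCD action of the quoted work.

    def potential(phi: float) -> float:
        return 0.5 * phi**2               # simple harmonic potential as a stand-in

    def lattice_action(field: list[float], a: float) -> float:
        """Discretised action: a * sum_i [ 1/2 ((phi[i+1]-phi[i])/a)^2 + V(phi[i]) ]."""
        total = 0.0
        for i in range(len(field) - 1):
            kinetic = 0.5 * ((field[i + 1] - field[i]) / a) ** 2
            total += a * (kinetic + potential(field[i]))
        return total

    # The same smooth configuration phi(t) = sin(t) on t in [0, 10], sampled at two spacings:
    coarse = [math.sin(i * 1.0) for i in range(11)]   # a = 1.0
    fine   = [math.sin(i * 0.5) for i in range(21)]   # a = 0.5
    print(lattice_action(coarse, 1.0), lattice_action(fine, 0.5))
    # Both land near the continuum value (5.0 for this field), with the finer lattice closer -
    # the same continuum limit the lattice-QCD collaborations take, with vastly better actions.
    ```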

    Well, TG is no longer funding research on LENR that I know of, although some components of the team are clearly continuing on. One being Schenkel at Lawrence Berkeley, who attended the ICCF. What Matt (now with DCVC venture) said at ICCF24 was a recap of their Nature paper. Old news IMO. And as has been pointed out here, they appear to have gotten bogged down by putting so much of their effort into Parkhomov.


    On the bright side, which I default to, the ICCF did present a number of individuals, and teams, now claiming repeatability...defined as being able to replicate "at will", or every time. We may have that reference experiment after all, and not just one.

    All you need is for everyone to agree with what these teams say about themselves (for at least one of them).

    I meant bulk Pd-D in general. As opposed to, say, Arata, who did gas loading of nanoparticles using electrolysis. That was replicated, but only once as far as I know. I did not mean the boil off experiment. Only one or two others did it, so I would not call that portion of the experiment a reference.


    (By "reference" I mean a standard that is well documented with instructions that many other people have replicated. Other people may define the word differently.)

    Exactly - but we know more now than then - surely a bulk Pd-D experiment could be documented in detail with expected results. If it is highly sample-dependent, a protocol for identifying a large source of good samples could be established. Such a source would allow replication and testing with the main loss of replicability removed.

    Also, from my vantage point, I think the debate exposed a weakness within the community that needs to be addressed. Many times we have heard it said that LENR is “proven”, as evidenced by the many “replicable experiments”. However, when those experiments were held up to closer examination, there were always caveats; too many to justify the claim that LENR is a proven science. I think there needs to be some clarification as to what “replicable" and "proven” mean, when applied to LENR.

    Yes - my view exactly.

    "Which experiment do we use?" Seemed simple, but like now, the more they dug into the literature, the less obvious it was.

    If you look at the ICCF24 presentation, and also some of their papers, the plot thickens.


    They definitely tried initially to understand and replicate the Pd-D experiments.


    They thought - in line with LENR field thought - that high loading was important.


    They therefore spent a long time trying to obtain high loading.


    They found:

    (1) high loading is very difficult - loading as high as some of that claimed in good experiments proved impossible (0.95 max, and even that very rare, they found)

    (2) measuring loading accurately was extremely challenging. They tried several ways. I think they discovered the way it was typically measured in LENR papers was wrong.


    That is all interesting, and deserves airing. Were they incompetent at getting high loading? Or were they just measuring better?


    There is then a black hole - did they try those Pd-D experiments? If not, why not?


    THH

    That's an odd thing to say. Science is not about beliefs. I would expect you to say "I don't think the experiments done in Ni-H systems are properly designed to affirm that they are evidence of excess heat". That would be more in line with the THHuxleynew we know.

    I am happy to say it the PC way:


    "My understanding, based on the way the Ni-H results have scaled with reactor insulation, the clamed temperature dependence, and the types of calorimetry, is that they can on balance be best described as systems in which nuclear reactions do not play an important part"


    I feel so much better now!

    My long ago physics lecturer said of the electron, when asked about its structure, 'it's an electron for goodness sake, an almost imaginary convenience for describing some aspects of physics'.

    That is because he was old fashioned and thought that something exhibiting strong QM behaviour had to be described by wave-particle duality (an unpleasant thing) rather than by the beautiful anti-commutative maths of QM.


    But - we know from experiment which particles are, to the limits of what we can see (which is very, very small), point particles (electrons, quarks, other leptons) and which are seen to be composite particles (baryons, mesons).


    Understand that a "point particle" is not really a particle at all, but a QM wave function.

    It depends what you mean by 'lower'. I and many others have found that 300-350C is a very productive temperature.

    I was actually looking at the dependence of excess heat on temperature. All the Ni-H work, if you believe it, shows a positive relationship.


    Mind you - I don't believe the Ni-H work, and although F&P claim the same, I do not believe their results either (at least as described in the paper which Curbina says he will delete my posts if I mention).


    (I think these tangential references I make would not be necessary if people here gave me a different F&P gold standard reference).

    Well, we should ask only the LF people involved with the nucleus, to see whether they arrive at the same way of understanding.

    Trying to explain the nucleus is like speaking with God :) Grateful for egos, even mine 8)

    Since no-one here likes to look at QCD calculations, and since what we know from experiment is that nuclei are a whole mess of quarks interacting (not just in triplets as nice clean nucleons), no-one here is going to understand it.


    And if you do look at the QCD stuff it does not help much, because it is in a regime where the high order terms are too complex for us to calculate with any precision. Think of it as being like predicting the exact shape and dynamics of a waterfall. It is a chaotic system and that cannot be done, even though there are some emergent features that can be calculated easily enough.


    Emergent features like the moments (of different fields) seen from a long way away.


    Anyone who says they can avoid that complexity?


    They are not looking at the experimental evidence.


    However, it is possible we will find better ways to calculate QCD - so all is not lost - though the known complexity of the structure will limit neat solutions.

    Just so everyone knows, I warned once that I would delete posts relating to that, and I have simply done exactly as warned. I don't mind keeping on doing that; we are not going back to that topic again. The closed thread contains all the arguments from both sides that will ever be.

    Well, I was promised otherwise - that correct references, where logically necessary to reply to others, were allowed. Delete me on this thread and see what will happen!


    (yes - I am a prima donna....) :)

    You always “circle back” to insist on obvious potential sources of error that imply researchers are clueless about basic experimental methodology. I have yet to see you reading the 2020 paper about D flow through PdAg and make any specific comment about what they report instead of listing all the potential errors they could have made.

    You are right Curbina - and I must reserve judgement on that one because I don't think I've read it. Perhaps others could say why they do not propose it as a reference experiment, or why they do?


    But researchers are sometimes clueless - and more than the normal share seem to be doing LENR. Not to say there are not some very competent researchers as well.


    The main reason is that there is in principle no way, given what we now know of LENR (almost nothing clearly predictive), to distinguish between hunting for subtle experimental artifacts and hunting for LENR. Most of the accounts I have read focus on "maximising the effect". Those that seek to do something else have not yet resulted in much prediction of use - or it would be used to construct a good reference experiment.


    Maybe this can be done - it is just that no-one seems to like other people's reference experiments. That says something.


    A bit like the Tory party at the moment... ( :) whoops - I did not mean to say that! Just slipped out).


    THH


    PS - Jed and I agree on Pd-D, but I want a good modern description, taking into account all the understanding we now have of how LENR works, to deliver a more reliable experiment. Not 100%, but with correct initial sample screening and checking (built into the protocol), high enough to be useful.

    Curbina.


    THH doesn't wish to be better informed, or visit a lab to see a working system (threads passim), since then he might face an existential conflict of doubt about his scepticism.

    You have never yet said what "seeing" an experiment would give me that reading good write-ups would not.


    I have always found that given a complex system with a demo, all my immediate responses are uninformed unless I am very familiar with it. Were I an expert, things might be different, although then I'd maybe not trust my conventional expert ideas when presented with a system that might be anomalous because it broke normal rules in some subtle way.


    Anyway - I'd not have been a good mark for Rossi - and while I have every respect for Alan as a conscientious and competent experimenter (neither word applies to Rossi) the same deficit of "seeing things work" applies.


    THH

    I can see where this is going...right back to Foamgate. How about we keep it at "You are wrong, I am right" and vice versa, and leave it at that? No more deep dives.


    It can be like the old "Tastes great...less filling!" beer commercials. Never a real resolution to the debate as it is really about personal opinion. Same with the FP boil-off.

    Shane:


    I am not the one claiming that the F&P experiment is a reference experiment. I'd hope that no-one here is. If you keep quiet about it - I will. I have sympathy with Jed when he says Pd-D is the way to go for a reference experiment; it is just that Simplicity is the one I have always been given here, when I ask. I am hoping for something better. Surely in those 180 replications (or whatever) there is one very carefully done and well written?


    Perhaps Jed, and others, would put forward McKubre's sequence of experiments as that? It is a shame, for a reference experiment, that they showed relatively low levels of excess heat compared with energy in: that means you need very accurate calorimetry to see an effect if you replicate them. But I'd guess they are the best you have?

    This is the principle of lattice confinement fusion, as it runs within crystalline metal lattices, which contain atoms naturally oriented into lines already. Here we can observe many paradoxes, such as cold fusion yield decreasing with temperature, because low temperatures help atoms to maintain their order better.

    Except that nearly all the experimental results I have seen show CF increasing with temperature.


    I agree - a lot of CF mechanisms would be more plausible at much lower temperatures.

    THHuxleynew , The idea that coherence can only be attained at lower temperatures is what mainstream science maintains. There's however a patent from Lockheed Martin granted in 2011 for a method of producing coherent matter beams that shows a way to induce coherence by restricting movement instead.


    What Alan Smith suggests, and what experiments like SAFIRE and many others show, even if the mainstream insists on ignoring them, is that self-organization and the self-attainment of coherence are things that Nature likes to do.

    I don't think Mother Nature pays much attention to patents. Coherence is a known thing - technologically you are right that you can get high-temperature coherence in various ways, for example topological superconductors (I have not looked at that patent so do not know whether it is one of those ways - undoubtedly reducing degrees of freedom gives you the possibility of higher energy gaps). You need to restrict energy levels so that you have a larger gap between the coherent state and any adjacent state - that gap must be larger than thermal noise to start to get coherent behaviour. Which is why, if coherence underlies an LENR mechanism, you would expect lower temperatures to be a good idea.


    With 0.1 neV coupling in a distributed coherent state, you get a 0.1 neV change from whatever is the ground state by altering the relative spin (or whatever is doing the coupling) between components. So it is difficult to see how you can get a large gap. Nuclei are pretty well defined, isolated things. With electrons the wave functions interpenetrate and all sorts of complex things are possible. With nuclei you have very long-range (comparatively) couplings where each nucleus sees the other nuclei as a set of quantum numbers for spin, charge, etc.
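
    To put a rough number on "the gap must be larger than thermal noise" (my own back-of-envelope sketch, using the standard Boltzmann constant and the 0.1 neV coupling figure from this discussion):

    ```python
    # Compare a coupling-sized energy gap with the thermal energy kT.
    # Illustrative only; the 0.1 neV figure is the one quoted in this thread.
    K_B_EV_PER_K = 8.617e-5   # Boltzmann constant in eV/K
    COUPLING_EV = 0.1e-9      # 0.1 neV expressed in eV

    def thermal_energy_ev(temperature_k: float) -> float:
        """Thermal energy kT in eV at the given temperature (K)."""
        return K_B_EV_PER_K * temperature_k

    print(thermal_energy_ev(300.0) / COUPLING_EV)   # ~2.6e8: room-temperature kT swamps the gap
    print(COUPLING_EV / K_B_EV_PER_K)               # ~1.2e-6 K: temperature at which kT ~ gap
    ```

    On this naive picture a 0.1 neV gap only beats thermal noise at microkelvin temperatures, which is why lower temperatures look like the natural regime for coherence-based mechanisms.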


    It is always good to distinguish between fundamental limits (like the speed of light, or the fact that coherence requires a single state) and technological limits.


    THH

    Everyone I know agrees that the original F&P Pd-D experiment is a reference experiment. It is well defined. The control parameters are clear. It was widely replicated at high signal to noise ratios. What else does a reference experiment need to be?

    Perhaps you could link a write-up of this?


    Simplicity was obviously not suitable, and it would be interesting to see here a proper write-up of an F&P experiment, without holes, that could be used now for that purpose.


    Or, if F&P never wrote it up, perhaps someone else has?


    Anyway, I am with you in that D/Pd electrolysis seems to me to be the best bet. We just need to remove all the holes in the experiment, and use the best understanding we now have of how to get a higher probability of it working. All that needs to be incorporated in the reference experiment spec.


    THH

    Using meters close to their resolution limit is not accurate and cannot be trusted - check the meter accuracy spec, which will include a +/- (% of reading + least-significant digits) term. If it does not have this, all bets are off as to accuracy.
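
    A hypothetical worked example of reading such a spec (made-up meter numbers, just to show why readings near the bottom of a range cannot be trusted):

    ```python
    # Hypothetical DMM spec: +/-(0.5% of reading + 2 least-significant digits).
    # All numbers here are invented for illustration, not taken from any real meter.
    PCT_OF_READING = 0.005   # 0.5%
    LS_DIGITS = 2            # counts of the least-significant digit

    def reading_uncertainty(reading: float, resolution: float) -> float:
        """Absolute uncertainty for a reading on a range whose resolution is one LS digit."""
        return PCT_OF_READING * reading + LS_DIGITS * resolution

    # A 10 A reading with 0.001 A resolution: about 0.052 A, i.e. ~0.5% - fine.
    print(reading_uncertainty(10.0, 0.001) / 10.0)
    # A 0.015 A reading near the bottom of the same range: about 0.002 A,
    # i.e. ~14% of the reading - far too coarse to trust.
    print(reading_uncertainty(0.015, 0.001) / 0.015)
    ```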


    In addition, dc meters (and most cheap power meters) often have poor time resolution: fine for 50Hz but may give erratic results for spikes.


    Don't trust anything related to Rossi and electrical measurements!


    I'd use a good storage scope for current and voltage. A PicoScope is actually quite good for this sort of thing.

    What he did not say was that getting coherence from a 0.1 neV coupling requires noise coupling below the 0.1 neV level. Kim (following Hagelstein) had the reasonable view that lower temperatures would enhance coherent effects and therefore help. His experiments did not, however, show this.

    The Chinese worked with SS jacketed TCs.

    Indeed. Many people will do this, in which case all should be good for that one thing as long as the TC seal is good (you need to look in detail at the spec for long-term exposure of the part where the wires come out). But I'd expect this to be good.


    My point is you need to check every possible error mechanism for applicability to every experiment - not just find that one error does not apply to every experiment.


    It is hard work.