So what energy does this additional electron-proton state of yours have?
Why e.g. in the energy spectra of stars don't we see its required additional lines, when this new state de-excites to a known state, or the reverse?
This is obvious nonsense! Fusion is the least kinetic process you can think of. It is only true that most experiments so far have seen fusion upon kinetic impact of mass. We see/measure LENR at room temperature!
You start with two nuclei at a large distance and need to bring them to the range of nuclear forces (~fm), which due to Coulomb repulsion requires investing huge energy (MeV scale).
How does their distance r(t) evolve in time? Where does this energy come from?
If you are saying that there are some additional energy levels of atoms, not known to atomic physics ... if they had lower energy than the ground-state atom (like these "hydrinos"), everything should de-excite to such a state, as physics searches for the lowest one - fortunately nothing like this is happening.
Otherwise, such states are still extremely easy to confirm or disprove, e.g. through excitation or absorption spectra. The lack of the corresponding energy lines, added or removed, means that there are no such states.
E.g. in stars all such hypothetical additional energy levels would be populated, and well seen in the energy spectrum - but nothing like this is happening: we see only the known lines.
If your belief is based on a theory requiring additional states which have been disproved in countless ways, don't be surprised that this field is not treated seriously.
If you indeed see fusion at room temperature, electrons are absolutely crucial there - not to bind with nuclei, which again requires investing huge energy (e.g. p + e + 782 keV -> n), but to remain localized between the two nuclei, as in the perfect "+-+" configuration collapsing to a point.
The lowest energy state for an electron + proton pair is the ground-state hydrogen atom: 13.6 eV below the configuration where they are far apart.
Otherwise e.g. the space vacuum wouldn't be filled with hydrogen, but with this something different of lower energy - all hydrogen would just collapse to it.
To bind them, i.e. form a neutron, we would need to invest
m_n - m_p - m_e ~ 782 keV,
which is many orders of magnitude higher than the energies used in chemistry: at 1000 K the thermally available k_B T is barely ~0.1 eV.
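This arithmetic can be checked directly - a minimal sketch in Python, assuming the standard rest masses (the MeV/c^2 values below are my inputs, not from the post above):

```python
# Energy needed to bind p + e into a neutron, from standard rest masses (MeV/c^2).
m_n = 939.565   # neutron
m_p = 938.272   # proton
m_e = 0.511     # electron

deficit_keV = (m_n - m_p - m_e) * 1000
print(f"p + e -> n requires investing ~{deficit_keV:.0f} keV")

# Thermal energy scale at 1000 K for comparison: k_B * T
k_B = 8.617e-5  # Boltzmann constant in eV/K
thermal_eV = k_B * 1000
print(f"thermally available at 1000 K: ~{thermal_eV:.2f} eV")
```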
Fusion of two nuclei is a very kinetic process - they have trajectories which need to end at ~fm distance so that the nuclear force can take over.
E.g. to bring two protons to 2 fm we again need k e^2/r ~ 720 keV.
Where would you get such energy at ~1000 K? To get it thermally you would need ~10^9 K instead.
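A quick numerical check of these two numbers - a sketch assuming the standard combination k e^2 ~ 1.44 MeV*fm:

```python
# Coulomb barrier for bringing two protons to r = 2 fm,
# and the temperature at which k_B*T reaches that scale.
ke2_MeV_fm = 1.44          # k*e^2 in MeV*fm
r_fm = 2.0                 # separation in fm

E_coulomb_eV = ke2_MeV_fm / r_fm * 1e6   # barrier in eV
print(f"Coulomb barrier at 2 fm: ~{E_coulomb_eV/1e3:.0f} keV")

k_B = 8.617e-5             # Boltzmann constant in eV/K
T_equiv = E_coulomb_eV / k_B
print(f"equivalent temperature: ~{T_equiv:.1e} K")
```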
To make such fusion realistic, you need an electron between them: according to Coulomb, the perfectly symmetric "+ - +" configuration would collapse into a point.
To convince the mainstream of the hypothetical possibility of nuclear fusion at low temperature, we need a concrete mechanism for crossing this huge Coulomb barrier, and electron assistance seems the only possibility (? I still haven't seen any other ?)
However, it requires that the electron remains localized between the two nuclei - while this is natural when considering its trajectory, the mainstream requires swelling the electron into a huge wavefunction, making such localization practically impossible - hence this possibility is currently not treated seriously.
To change that, it is crucial to show that such "swelling of the electron" doesn't always have to occur - that its charge has a trajectory.
However, such a trajectory picture requires "local realism", against which there is the Bell-violation counter-argument: the standard view of "local realism" contains a misunderstanding we need to repair first - e.g. to maintain the electron localization that screens for fusion of two nuclei.
The most obvious is Mermin's inequality - for binary A, B, C it is literally "tossing 3 coins, at least 2 are equal": Pr(A=B) + Pr(A=C) + Pr(B=C) >= 1.
However, the QM formalism allows this sum to drop below 1.
We can repair the misunderstood "evolving 3D" "local realism" by replacing it with a time-symmetric "4D local realism":
- in spacetime the basic object is the trajectory, hence we should use ensembles of trajectories - e.g. Feynman path integrals are equivalent to QM,
- we have time/CPT symmetry in Lagrangian mechanics, which we successfully use from QFT to GR,
- in the Born rule rho ~ psi^2, one psi comes from the past (propagator from -infinity), the second from the future (propagator from +infinity), as in TSVF: https://en.wikipedia.org/wiki/Two-state_vector_formalism
Here is an example of a construction violating this inequality, just by assuming a uniform distribution among paths ( https://en.wikipedia.org/wiki/Maximal_Entropy_Random_Walk ):
Description of the construction (details: page 9 of https://arxiv.org/pdf/0910.2724 ):
The considered space is the graph on the left, over all 2^3 = 8 values of ABC: in 000 and 111 we have to stay, in the remaining vertices we can jump to a neighbor.
The presented measurement at time 0 ignores C - we have 4 possible outcomes (red squares) determining AB exactly.
Assuming a uniform probability distribution among paths (from -infinity to +infinity in time, as in TSVF), we get Pr(A=B) = (1^2 + 1^2) / (1^2 + 2^2 + 2^2 + 1^2) = 2/10.
Proceeding analogously for the remaining pairs, we finally get Pr(A=B) + Pr(A=C) + Pr(B=C) = 6/10.
Using an ensemble of paths toward only one time direction, we would have first powers instead of the Born rule - and the inequality would be satisfied.
Alternative construction: https://arxiv.org/pdf/1907.00175
I am not aware of any evidence for violation of energy or momentum conservation - they are at the heart of the Lagrangian mechanics we successfully use from QFT to GR.
Regarding virtual particles - they are used in Feynman diagrams in the perturbative approximation of QFT. Even assuming QFT is fundamental, perturbative QFT/Feynman diagrams is still an effective picture - a practical approximation ... leading to countless divergences, usually removed by hand.
But it is an extremely universal practical tool, defining objects as point particles through their interactions - a very general algebra on particle-like objects ... like an algebra properly concluding that "apple + apple = two apples" without any insight into what an apple is ... which can also handle non-point objects like fields, by approximating them with a series of virtual particles.
The basic example is the Coulomb interaction, e.g. proton - electron, which in the perturbative (approximation of) QFT is handled with a series of point-like photons, instead of a continuous EM field.
This brought the dangerous common misconception that the EM field is always quantized, while it is just a continuous field, whose optical photons are quantized due to discrete atomic energy levels ... but e.g. a linear antenna produces cylindrically symmetric EM radiation - whose energy density drops like 1/r to 0 - and cannot be quantized into individual discrete photons localizing finite portions of energy.
We have lots of quasi-particles, especially in solid state physics - starting with phonons: classically just Fourier/normal modes of the lattice, but perturbative QFT treats them as real particles ... point-like.
There is also virtual pair creation - while we imagine pair creation as a zero-one process, it is in fact continuous: the field can perform a tiny step toward pair creation, represented as real (virtual) pair creation in perturbative QFT. The continuity of this process is nicely seen when using topological charge as charge:
A particle like the electron is more than just a wave packet - it is, among others, a stable localized (nearly singular) configuration of electric and magnetic fields:
It doesn't lose these localized properties when approaching a proton to form an atom - becoming the huge probability cloud of a quantum orbital, which is a proper but only effective description, averaging over some hidden dynamics.
Experimenters can perform real acrobatics on the magnetic dipoles of these electrons, like Larmor precession or even spin echo: https://en.wikipedia.org/wiki/…on_paramagnetic_resonance
The wave coupled to the internal clock of the electron (de Broglie's, zitterbewegung; experimental confirmation: https://link.springer.com/article/10.1007/s10701-008-9225-1 ) has to become a standing wave to minimize the energy of the atom - described by Schrodinger, giving the quantization condition. It is nicely seen in Couder's walking-droplet quantization, with nice videos.
If LENR is possible, a non-thermal way to overcome the huge Coulomb barrier between nuclei is needed - the only mechanism I could imagine is a localized electron staying between the nucleus and the proton due to attraction - screening their repulsion.
QM is profoundly different from local models. You cannot get out of this, and it comes from experiment not theory.
Of course the idea of locality, which we are fixated on, does not apply naturally in the quantum domain. That has profound consequences for the structure of spacetime - it is just that we have not yet properly worked out the connections!
You are saying that when a proton and an electron are far apart they are "classical", corpuscular ... but when they meet they become "quantum", wave-like ... so at which moment/distance does this switch happen?
Where exactly is the classical-quantum boundary?
Do we really need such a switch? Maybe they are both at the same time - like in the popular Couder's walking droplets with wave-particle duality (a Veritasium video with 2.5M views, a great webpage with materials and videos, a lecture by Couder, my slides also with other hydrodynamical analogues: Casimir, Aharonov-Bohm). Among others, they claim to recreate:
This way QM is just one of two perspectives/descriptions of the same system - as we already had for coupled oscillators, or their lattices: crystals, which can be described classically or through normal/Fourier modes, treated as real particles in QFT ...
The potential energy is negative: deuteron + electron -> deuterium + 13.6 eV.
The main problem with discussing the dynamics of electrons beneath the probability clouds is the general belief that violation of Bell inequalities forbids using such local and realistic models.
While the original Bell inequality might leave some hope for violation, here is one which seems completely impossible to violate - for three binary variables A, B, C:
Pr(A=B) + Pr(A=C) + Pr(B=C) >= 1
It has an obvious intuitive proof: tossing three coins, at least two of them need to give the same value.
Alternatively, choosing any probability distribution pABC among these 2^3 = 8 possibilities, we have:
Pr(A=B) = p000 + p001 + p110 + p111, etc.
Pr(A=B) + Pr(A=C) + Pr(B=C) = 1 + 2 p000 + 2 p111 >= 1
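This identity (and hence the inequality) can be verified numerically - a small sketch checking it against random distributions over the 8 outcomes:

```python
import random

def pairwise_sum(p):
    """Pr(A=B) + Pr(A=C) + Pr(B=C) for a distribution p over (A,B,C)."""
    pr_ab = sum(v for (a, b, c), v in p.items() if a == b)
    pr_ac = sum(v for (a, b, c), v in p.items() if a == c)
    pr_bc = sum(v for (a, b, c), v in p.items() if b == c)
    return pr_ab + pr_ac + pr_bc

outcomes = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
for _ in range(1000):
    w = [random.random() for _ in outcomes]
    p = {o: wi / sum(w) for o, wi in zip(outcomes, w)}
    s = pairwise_sum(p)
    # the identity derived above, and the resulting inequality
    assert abs(s - (1 + 2 * p[0, 0, 0] + 2 * p[1, 1, 1])) < 1e-12
    assert s >= 1
print("identity and inequality hold for 1000 random distributions")
```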
... however, it is violated in QM, see e.g. page 9 here: http://www.theory.caltech.edu/…ill/ph229/notes/chap4.pdf
If we want to understand why our physics violates Bell inequalities, the above one seems the best to work on, being the simplest and having an absolutely obvious proof.
QM uses the Born rule for this violation:
1) Intuitively: the probability of a union of disjoint events is the sum of their probabilities: pAB? = pAB0 + pAB1, leading to the above inequality.
2) Born rule: the probability of a union of disjoint events is proportional to the square of the sum of their amplitudes: pAB? ~ (psiAB0 + psiAB1)^2.
This Born rule allows the inequality to be violated down to 3/5 < 1, using psi000 = psi111 = 0, psi001 = psi010 = psi011 = psi100 = psi101 = psi110 > 0.
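Here is a sketch of this computation: summing amplitudes over the unmeasured variable before squaring (Born rule), with the six equal amplitudes chosen above:

```python
# Born-rule violation of Pr(A=B) + Pr(A=C) + Pr(B=C) >= 1:
# psi000 = psi111 = 0, the remaining six amplitudes equal.
psi = {(a, b, c): 0.0 if (a, b, c) in [(0, 0, 0), (1, 1, 1)] else 1.0
       for a in (0, 1) for b in (0, 1) for c in (0, 1)}

def pr_equal(i, j):
    # Pr(X_i = X_j) when the third variable is not measured:
    # its amplitudes are summed BEFORE squaring.
    total, equal = 0.0, 0.0
    for pair in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        abc = [0, 0, 0]
        abc[i], abc[j] = pair
        k = 3 - i - j              # index of the unmeasured variable
        amp = 0.0
        for v in (0, 1):
            abc[k] = v
            amp += psi[tuple(abc)]
        prob = amp ** 2
        total += prob
        if pair[0] == pair[1]:
            equal += prob
    return equal / total

s = pr_equal(0, 1) + pr_equal(0, 2) + pr_equal(1, 2)
print(f"Pr(A=B) + Pr(A=C) + Pr(B=C) = {s:.4f}")  # 3/5 < 1
```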
I have just refreshed https://arxiv.org/pdf/0910.2724 adding section III about violation of this inequality using an ensemble of trajectories: proper statistical physics shouldn't see particles as just points, but rather as their trajectories, to consider e.g. a Boltzmann ensemble - as in Feynman's Euclidean path integrals or their thermodynamical analogue: MERW (Maximal Entropy Random Walk: https://en.wikipedia.org/wiki/Maximal_entropy_random_walk ).
For example, looking at the [0,1] infinite potential well, the standard random walk predicts a uniform probability density rho = 1, while QM and the uniform ensemble of trajectories predict the different rho ~ sin^2 with localization, and the square, as in the Born rule, has a clear interpretation:
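This sin^2 localization can be reproduced numerically - a sketch (my construction, not taken from the paper) using the fact that MERW's stationary density on a graph is the squared dominant eigenvector of its adjacency matrix:

```python
import numpy as np

# MERW on a discretized [0,1] infinite well: a path graph of N sites.
# For adjacency matrix A with dominant eigenvector psi (A psi = lambda psi),
# MERW's stationary probability is rho_i ~ psi_i^2 -- a Born-rule square.
N = 200
A = np.zeros((N, N))
for i in range(N - 1):             # nearest-neighbor jumps only
    A[i, i + 1] = A[i + 1, i] = 1.0

vals, vecs = np.linalg.eigh(A)
psi = np.abs(vecs[:, -1])          # dominant eigenvector
rho = psi**2 / np.sum(psi**2)      # MERW stationary density

# Compare with the quantum ground-state density ~ sin^2(pi x):
x = (np.arange(N) + 1) / (N + 1)
rho_qm = np.sin(np.pi * x) ** 2
rho_qm /= rho_qm.sum()
print("max deviation from sin^2:", np.max(np.abs(rho - rho_qm)))
```

In contrast, a generic random walk on the same graph would give a nearly uniform stationary density, without this localization.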
Considering ensembles (uniform, Boltzmann) of paths also allows Bell to be violated in a similar way as in QM (through the Born rule) - this is a realistic model, and in fact required if we e.g. think of general relativity, where we need to consider the entire spacetime: particles are their paths.
It is not local in the "evolving 3D" picture, but it is local in the 4D spacetime/Einstein's block universe view - where particles are their trajectories, and it is ensembles of such objects we should consider.
511 keV is just the rest mass of the electron - required e.g. to build it from photons (EM waves) during pair creation.
Hence, energy conservation doesn't allow the energy of the electric field of the electron to exceed 511 keV.
However, the naive E ~ 1/r^2 assumption for a point charge gives infinite energy if integrating from r = 0.
We would get 511 keV from the energy of the E ~ 1/r^2 electric field if integrating from r ~ 1.4 fm.
Hence, energy conservation alone requires some femtometer-scale modification of the E ~ 1/r^2 electric field around the electron.
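The r ~ 1.4 fm above follows from the energy of the Coulomb field outside radius r, U(r) = k e^2 / (2r), set equal to 511 keV - a minimal check, assuming k e^2 ~ 1.44 MeV*fm:

```python
# Radius at which the energy of the external E ~ 1/r^2 field
# alone reaches the electron rest energy: U(r) = k*e^2/(2r) = 511 keV.
ke2_MeV_fm = 1.44      # k*e^2 in MeV*fm
m_e_MeV = 0.511        # electron rest energy in MeV

r_fm = ke2_MeV_fm / (2 * m_e_MeV)
print(f"field energy reaches 511 keV at r ~ {r_fm:.2f} fm")
```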
Is there experimental evidence forbidding deformation/regularization at this scale? (It doesn't need e.g. 3 smaller fermions or an electric dipole.)
Exactly, as this Feynman lecture says: "the dependence on E is uniquely determined by dimensional analysis", getting sigma ~ 1/E^2.
This is the line I was referring to.
We are interested in the size of the non-Lorentz-contracted electron, so we need to extrapolate this sigma ~ 1/E^2 line to the energy of a non-Lorentz-contracted electron, getting sigma ~ 100 mb, or ~2 fm size.
Could stable pseudo-slits be made by (a series of) calibrated interference of alternating high-frequency magnetic fields?
(A magnetic field grate.)
Sounds like an optical lattice ( https://en.wikipedia.org/wiki/Optical_lattice ) - made by laser beams ... but I don't think you could test electron size this way - electrons are light and repel each other.
I assume you mean the electron size at the lowest possible speed?
By resting I meant just not Lorentz-contracted (gamma = 1, v = 0), which affects the size we are interested in.
Regarding requirements for the electron size: the energy of the E ~ 1/r^2 electric field is infinite if integrating from r = 0.
In pair creation the field of an electron-positron pair is created from 2 x 511 keV of EM radiation - the energy of the electric field cannot exceed 511 keV - for this we would need to integrate from r ~ 1.4 fm instead of 0.
So energy conservation requires deforming the electric field at the femtometer scale - no 3 smaller fermions as Dehmelt writes, no electric dipole, just a deformation of the E ~ 1/r^2 electric field so that its energy doesn't exceed 511 keV.
Regarding collision evidence, I have put a plot of the GeV-scale electron-positron collision cross-section in my previous post.
Interpreting it, we need to keep in mind that there is enormous Lorentz contraction there, e.g. gamma ~ 1000 for 1 GeV.
We are not interested in the size of a 1000-fold contracted electron, but of a resting electron (gamma = 1).
Extrapolating the line (no resonances) in that plot to the gamma = 1 resting electron, you get a ~100 mb cross-section, corresponding to r ~ 2 fm.
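The ~100 mb to ~2 fm conversion is just geometric, sigma = pi r^2 - a minimal sketch of that unit arithmetic:

```python
import math

# Converting the extrapolated ~100 mb cross-section to a radius via sigma = pi*r^2.
sigma_mb = 100.0
sigma_fm2 = sigma_mb * 0.1          # 1 barn = 100 fm^2, so 1 mb = 0.1 fm^2
r_fm = math.sqrt(sigma_fm2 / math.pi)
print(f"sigma = {sigma_mb:.0f} mb  ->  r ~ {r_fm:.1f} fm")
```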
See a discussion exactly about this: https://www.scienceforums.net/…ies-for-size-of-electron/
In the double-slit experiment we have material built of ~10^-10 m atoms - huge compared to e.g. the ~10^-15 m scale deformation of the electric field required not to exceed 511 keV.
Maybe the behavior of positronium would allow some bounds on the size of the electron?
All we can say is that no evidence of finite size has yet been seen, and that bounds the size to 10^-19 m or so.
Please elaborate - I couldn't find any details in this link ... nor previously, looking through the literature for a few days ... nor while trying to discuss it in a few forums.
Sure, Wikipedia points to Dehmelt's 1988 paper with e.g. 10^-22 m ( http://iopscience.iop.org/arti…031-8949/1988/T22/016/pdf ) - figure on the left below.
As we can see, he took two particles composed of 3 fermions (proton and triton) and fitted a parabola to these two points (!) - to get r = 0 for g = 0 for an electron built of three smaller fermions ... which allowed him to conclude these tiny sizes ... but it's a "proof" by assuming the thesis.
Additionally, while the g-factor is said to be 1 classically, that assumes equal densities of mass and charge - without this assumption we can get any g by changing the mass/charge distribution - see the formula in one of these links with longer explanations, e.g. https://www.scienceforums.net/…ies-for-size-of-electron/
On the right we can see the cross-section for electron-positron collisions.
Naively we would like to interpret the cross-section as the area of the particle - the question is: the cross-section at which energy should we use?
As there is Lorentz contraction affecting the collision, and we are interested in the size of a resting electron, we should extrapolate the line without resonances (sigma ~ 1/E^2) to the gamma = 1 resting electron ... this way we get r ~ 2 fm for the electron.
Can you defend Dehmelt's fitting of a parabola to two points?
Or maybe you know some other experimental evidence for e.g. this 10^-19 m radius?
It is crucial to understand the size of the electron here - while "everybody knows that the electron is point-like", I have spent a few days searching for experimental evidence bounding its size ... and found literally nothing.
There is the usually referenced Dehmelt paper extrapolating from the g-factor by fitting a parabola to two points (seriously!), both composed of 3 fermions (proton and triton), and concluding for the electron as if it were composed of 3 fermions.
The cross-section for electron-positron collisions can be naively interpreted as the area of the particle. The question is: the cross-section at which energy should we use? As we are interested in the size of a resting electron, to undo the Lorentz contraction we should extrapolate to gamma = 1 - but this suggests a huge ~2 fm radius.
Some discussions with images and formulas:
Is there a single experiment really bounding the size of the electron?
Wyttenbach: the 0.6 violation here is universal - it doesn't depend on dimensionality.
I have posted it on a forum dedicated to the Bell theorem - there are some additional materials about this example there:
The most important counterargument against considering the dynamics of electrons, whose assistance seems crucial for fusion, is the Bell theorem.
I have recently rewritten my paper about the connection between QM and MERW ( https://en.wikipedia.org/wiki/Maximal_Entropy_Random_Walk ) - showing that repairing diffusion to agree with the (Jaynes) entropy maximization principle also repairs the disagreements between predictions of stochastic models and QM, like the stationary probability distribution being exactly as in the quantum ground state.
Completely rewritten fresh version: https://arxiv.org/pdf/0910.2724v2.pdf
MERW also turns out to have the Born rule: amplitudes describe the probability distribution toward the past and future half-spacetimes; to randomly get some value we need to "draw it" from both time directions, hence the probability is the (normalized) square of the amplitude.
In this fresh paper I have yesterday added a simple derivation of Bell inequalities and their violation with the Born rule (also in MERW) - this picture contains a complete proof of Bell's theorem:
Top: assuming some probability distribution among the 8 possibilities, we always get the above inequality.
Bottom: an example of its violation using just the quantum Born rule: probability is the normalized square of the amplitude.
As the LENR community seems more open and farsighted, I have decided to try to bring attention to this generally ignored, but potentially enormously consequential off-topic subject: applying the physics P-symmetry to biology - synthesizing living mirror versions of organisms.
Recently a race has started for such synthesis of a mirror cell, built with mirror versions of molecules (called enantiomers) - in 2016 the Chinese synthesized a mirror polymerase, see e.g. Nature: "Mirror-image enzyme copies looking-glass DNA. Synthetic polymerase is a small step along the way to mirrored life forms":
The direct economic motivation is that such organisms will allow mass production of mirror versions of standard biomolecules (like new drugs) - known examples are e.g. aptamers, or L-glucose (a perfect sweetener). Further possible applications come from the fact that such organisms would be incompatible with most of our pathogens; for example, a mirror-human would be immune to most diseases.
However, here is a 2010 WIRED article: "Mirror-Image Cells Could Transform Science — or Kill Us All":
Yes, there are also extreme potential dangers, starting with the toxicity of the metabolites of such mirror cells. But the "KILL US ALL" part comes from the potential for ecological disaster, e.g. from introducing mirror (photosynthesizing) cyanobacteria into our environment - due to the lack of standard natural enemies, they would probably dominate our ecosystem, which, according to the WIRED article, could eradicate our standard life in a few centuries.
Such a first mirror cell could even be 3D printed (frozen) with some AFM. And it might turn out to be extremely dangerous already, especially if it adapted to digest our D-glucose and, for example, settled in the colon producing toxic metabolites.
Therefore, this topic requires public attention and discussion - before (not after) we find out that mirror cells are already living e.g. in some Chinese lab in 10-20 years.
Wyttenbach, the shape of this high-energy tail seems crucial, especially for scattering experiments ... but maybe also for the probability of fusion.
Oks traces the start of this dispute about the tail's shape to Gryzinski's papers from the 1960s - showing disagreement of the quantum considerations especially for plasma and scattering scenarios, repaired by him using a classical approximation with radial ("free-fall") trajectories of the electron.
In a 2001 paper Oks was able to repair the quantum considerations by considering a wavefunction singular at the center - with the electron affecting the nucleus, which again seems essential for the probability of fusion.