Jarek Member
  • Member since May 9th 2015

Posts by Jarek

    The Pauli exclusion principle doesn't look like a fundamental one, but rather an effective one.
    One reason for this principle is the Coulomb repulsion between electrons. The popular naive view of e.g. the ground state of helium is that it has two independent 1s electrons ... but doing the calculation right, using a two-electron wavefunction psi(x1,x2) with the electric repulsion included, these electrons are strongly anti-correlated: they stay on opposite sides of the nucleus. Classical synchronization: http://gryzinski.republika.pl/teor5ang.html
    Another argument that 3 electrons would not fit in one orbital is that electrons are tiny magnets. There are only two ways to place two magnets in stable motion: parallel or anti-parallel alignment - otherwise there would be an additional twisting force. While two anti-parallel magnets attract each other (force ~1/r^4), allowing the energy to be lowered (the electron can stay closer to the nucleus), there is no way to add a third electron to such a stable synchronous motion.
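    The ~1/r^4 force scaling mentioned above can be checked numerically. This is a minimal sketch of my own (units chosen so mu0/(4 pi) = 1, dipoles anti-parallel and perpendicular to the separation axis), not a calculation from Gryzinski:

```python
import numpy as np

# Interaction energy of two magnetic point dipoles m1, m2 separated by r_vec:
#   U = [ m1.m2 - 3 (m1.rhat)(m2.rhat) ] / r^3   (with mu0/(4 pi) = 1)
# For anti-parallel dipoles side by side, U ~ -1/r^3, so the attractive
# force F = -dU/dr scales as 1/r^4.

def dipole_energy(m1, m2, r_vec):
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    return (np.dot(m1, m2) - 3 * np.dot(m1, rhat) * np.dot(m2, rhat)) / r**3

def force(m1, m2, r, h=1e-6):
    # radial force via a central difference of the energy
    U = lambda x: dipole_energy(m1, m2, np.array([x, 0.0, 0.0]))
    return -(U(r + h) - U(r - h)) / (2 * h)

m_up   = np.array([0.0, 0.0,  1.0])
m_down = np.array([0.0, 0.0, -1.0])

F1, F2 = force(m_up, m_down, 1.0), force(m_up, m_down, 2.0)
print(F1 < 0)          # True: anti-parallel dipoles attract
print(round(F1 / F2))  # 16 = 2^4: halving the distance quadruples the force twice
```

    Doubling the separation reduces the force by a factor of 16, i.e. the 1/r^4 law.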


    Also, bosons are only an idealization - e.g. a Bose-Einstein condensate definitely has a nonzero volume, so its particles are not literally in the same state.


    Regarding energy quantization, see the Couder paper I have linked - orbits are already quantized for classical objects with wave-particle duality. The picture is that to stay in resonance with the field, the particle needs to choose closed orbits, such that the number of ticks of some clock is an integer while performing the orbit. QM describes such a field, but in Couder's picture there is also a trajectory of the particle hidden there, and the same can be true for QM.
    This clock is external in Couder's experiments, internal for real particles: de Broglie's clock/zitterbewegung. It was actually observed in experiment, see e.g. Hestenes' paper: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.169.7383&rep=rep1&type=pdf

    (embedded YouTube video)

    Regarding the probabilities of electron capture, I haven't seen such calculations; I have only recently started looking closely at this approach, and generally it seems a very complex question involving nuclear physics.
    Different electrons in a multi-electron atom have (in ways I don't yet understand) different parameters while passing near the nucleus (e -> p scattering):
    - velocity (energy),
    - minimal distance (~impact parameter),
    - angular momentum,
    - change of incoming angle,
    - angle between spin and angular momentum,
    - angles to spin of the nucleus.
    These factors describe the details of the short moment when nuclear interaction can take place - I don't know how they determine the probability of the nucleus capturing the passing electron.
    I think the big difference can be the orbital angular momentum: electron capture from a p orbital seems completely negligible in QM, while it seems probable in the free-fall model.
    However, I haven't seen such an experiment. The Zeeman or Stark effect could help here to distinguish angular momentum.


    Regarding spherical harmonics, Gryzinski uses purely classical trajectories.
    I personally see QM, and hence spherical harmonics, as a mix of two ingredients:
    1) the result of adding thermal noise (additional interactions, e.g. with the neighborhood) to these classical trajectories and averaging over time - statistical mechanics suggests assuming a Boltzmann distribution among possible paths, which gives exactly the quantum probability densities (euclidean path integrals / Maximal Entropy Random Walk),
    2) a description of the state of the field everything is happening in, resonance with which requires the Bohr-Sommerfeld quantization condition, like in Couder's quantization for walking droplets: http://www.pnas.org/content/107/41/17515.full
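    The Maximal Entropy Random Walk fact from point 1) can be verified in a few lines. This is a minimal sketch of my own on a toy graph (a 1D "box" of 5 nodes): assuming a Boltzmann distribution over paths, the stationary probability of node i comes out as psi_i^2, where psi is the dominant eigenvector of the adjacency matrix - the discrete analogue of the quantum ground-state density:

```python
import numpy as np

def merw_stationary(A):
    # dominant eigenpair of the (symmetric) adjacency matrix
    w, V = np.linalg.eigh(A)
    lam = w.max()
    psi = np.abs(V[:, np.argmax(w)])   # Perron vector, entries nonnegative
    # MERW transition matrix: P_ij = (A_ij / lam) * psi_j / psi_i
    P = (A / lam) * psi[None, :] / psi[:, None]
    rho = psi**2 / np.sum(psi**2)      # candidate stationary distribution
    return P, rho

# path graph of 5 nodes
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
P, rho = merw_stationary(A)

print(np.allclose(P.sum(axis=1), 1))  # True: P is a valid stochastic matrix
print(np.allclose(rho @ P, rho))      # True: rho = psi^2 is stationary
```

    The stationary distribution concentrates toward the middle of the path, just like the quantum ground state in an infinite well.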

    That suggests a test of Gryzinski's explanation. Most electron captures are considered to be K or L shell for a reason I forget - maybe the Auger transitions that follow?


    This concerns both electron capture ( https://en.wikipedia.org/wiki/Electron_capture ) and internal conversion ( https://en.wikipedia.org/wiki/Internal_conversion ).
    So here is the best paper regarding electron capture I could find: http://www.sciencedirect.com/s…icle/pii/0020708X8290151X
    It uses 55Fe and says that M capture produces too low an energy to be directly observed by their scintillators.
    However, it reports that P_K + P_L was noticeably smaller than 1 - the difference was about 1.6% there (3% for 131Cs).
    As I have already cited, for internal conversion the higher shells (including M and N) also have a non-negligible contribution.


    So there is no doubt that higher-shell electrons also spend some time extremely close to the nucleus - close enough for the nuclear force to act.
    In contrast to Bohr, in Gryzinski's picture electrons pass the nucleus at a distance of ~10^-13 m about 10^16 times per second, suggesting a chance of interaction with the nucleus, especially for larger nuclei.
    However, predicting the probabilities is a difficult nuclear physics question: electrons from different shells have different velocities (and angular momenta) while passing the nucleus - and velocity seems a crucial factor for the probability of interaction.
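    The ~10^16 passes per second figure can be sanity-checked with a back-of-envelope Bohr-model estimate (my own check, not Gryzinski's numbers): the orbital frequency of a ground-state electron, i.e. how often it completes one orbit (one "pass" per radial period in a free-fall picture):

```python
import math

alpha = 7.2973525693e-3      # fine-structure constant
c     = 2.99792458e8         # speed of light, m/s
a0    = 5.29177210903e-11    # Bohr radius, m

v = alpha * c                # ground-state orbital speed (~2.19e6 m/s)
f = v / (2 * math.pi * a0)   # orbits per second

print(f"{f:.2e} orbits/s")   # ~6.6e15/s, consistent with the ~1e16/s above
```

    So the order of magnitude quoted above is right for hydrogen; inner-shell electrons of heavier atoms orbit even faster.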


    The electron capture Wikipedia article mentions this dependence on chemical bonds:
    "Chemical bonds can also affect the rate of electron capture to a small degree (in general, less than 1%) depending on the proximity of electrons to the nucleus. For example, in 7Be, a difference of 0.9% has been observed between half-lives in metallic and insulating environments.[8] This relatively large effect is due to the fact that beryllium is a small atom whose valence electrons are close to the nucleus."
    Where [8] is http://link.springer.com/artic…40%2Fepja%2Fi2006-10068-x


    ps. Regarding the dynamics of electrons in an atom: while QM wants an immediate wavefunction collapse (neglecting any hidden dynamics), there is a nice Science 2010 paper, "Delay in photoemission": http://science.sciencemag.org/content/328/5986/1658
    "We used attosecond metrology to reveal a delay of 21±5 attoseconds in the emission of electrons liberated from the 2p orbitals of neon atoms with respect to those released from the 2s orbital by the same 100–electron volt light pulse."


    There is obviously some crucial hidden dynamics of electrons in atoms - which we not only don't understand, but in the view of mainstream QM are even forbidden to ask about.

    From https://en.wikipedia.org/wiki/Internal_conversion :
    "Most internal conversion (IC) electrons come from the K shell (the 1s state), as these two electrons have the highest probability of being within the nucleus. However, the s states in the L, M, and N shells (i.e., the 2s, 3s, and 4s states) are also able to couple to the nuclear fields and cause IC electron ejections from those shells (called L or M or N internal conversion). Ratios of K-shell to other L, M, or N shell internal conversion probabilities for various nuclides have been prepared"
    Pointing here: http://www.nucleide.org/DDEP_WG/introduction.pdf


    I will think about it.

    axil,
    There might be various factors affecting nuclear reactions - e.g. the presence of neutrinos is a popular hypothesis.
    However, here we are specifically discussing "electron capture by the nucleus" - so the main factor should be the presence, near the nucleus, of an electron to capture.


    There is a crucial question: which shell electrons dominate in this electron capture?
    In Bohr's model and QM the inner-shell electrons should dominate; in the free-fall atomic model any electron can be captured.
    Spectroscopy should allow us to determine it: if it's an inner-shell electron, the vacancy will be quickly filled by an outer electron - producing a photon.

    (...)as in the electron capture type of decay(...)


    It agrees with Gryzinski's picture: while in Bohr's model and QM the higher-shell electrons remain relatively far from the nucleus, in the free-fall atomic model they fall and narrowly miss the nucleus (~10^-13 m minimal distance), so the atomic/molecular shell structure might influence the rate of electron capture by the nucleus - especially for huge nuclei like 226Ra here.


    I am just investigating his related argument regarding the screening constant: https://en.wikipedia.org/wiki/Slater%27s_rules
    It generally says that in multi-electron atoms the effective nuclear charge seen by an electron is reduced by a screening constant:
    Z_eff = Z - s
    What is surprising here is that outer-shell electrons screen the charge of the nucleus for inner-shell electrons, which seems to contradict the Bohr and QM picture, in which they should stay at larger distances.
    In the free-fall atomic model, outer-shell electrons spend some time very close to the nucleus, so they can screen it for the inner-shell electrons - see http://gryzinski.republika.pl/teor6ang.html
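    For reference, the standard Slater recipe for the screening constant s can be sketched in a few lines. This is my own minimal implementation for s/p electrons only, following the Wikipedia article linked above (groups (1s)(2s,2p)(3s,3p)...; same group shields 0.35 each, 0.30 within 1s; shell n-1 shields 0.85; deeper shells 1.00):

```python
def zeff_sp(Z, shells, n):
    """Z_eff = Z - s for an s/p electron in principal shell n.
    shells: dict mapping principal number -> electron count in its (ns,np) group."""
    if n == 1:
        s = 0.30 * (shells[1] - 1)
    else:
        s = 0.35 * (shells[n] - 1)                               # same group
        s += 0.85 * shells.get(n - 1, 0)                         # shell n-1
        s += 1.00 * sum(c for m, c in shells.items() if m <= n - 2)  # deeper
    return Z - s

# sodium, Z = 11: (1s)^2 (2s,2p)^8 (3s)^1
na = {1: 2, 2: 8, 3: 1}
print(round(zeff_sp(11, na, 3), 2))  # 2.2  = 11 - (0.85*8 + 1.00*2)
print(round(zeff_sp(11, na, 1), 2))  # 10.7 = 11 - 0.30*1
```

    Note that in these standard rules outer electrons contribute zero screening for inner ones - which is exactly the point of the argument above.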

    The assumed foundations can completely change "the physical predictions of the theory" - as in the case of interest here: QM and the probability of LENR.


    The problem here is crossing the Coulomb barrier to get the two nuclei together - which would require billions of kelvins to reach this energy thermally with non-negligible probability, as in the conventional view of fusion.
    In hypothetical LENR this Coulomb barrier is claimed to be frequently crossed at ~1000 K, for which the only non-magical explanation seems to be an electron staying between the two nuclei - for example, the pure Coulomb force says that a symmetric p - e - p initial configuration should collapse into a point.
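    The p - e - p collapse claim is easy to check for the static snapshot (a toy calculation of my own, pure Coulomb only): with protons at +/-d and the electron at the midpoint, the attraction to the electron (distance d) always beats the repulsion from the other proton (distance 2d), so the net force on each proton points inward at every separation:

```python
def net_force_on_proton(d, k=1.0, e=1.0):
    """Net Coulomb force on the proton at +d; positive = outward, negative = inward."""
    repulsion  = +k * e * e / (2 * d) ** 2   # from the other proton at -d
    attraction = -k * e * e / d ** 2         # from the electron at the origin
    return repulsion + attraction            # = -(3/4) k e^2 / d^2

for d in (1.0, 0.1, 1e-5):
    assert net_force_on_proton(d) < 0        # inward at every separation

print(net_force_on_proton(1.0))              # -0.75, i.e. -(3/4) k e^2 / d^2
```

    Of course this says nothing about whether the electron actually stays at the midpoint during the collapse - which is exactly the dynamical question discussed below.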


    So the main LENR question is the probability of an electron staying between the two collapsing nuclei for a sufficiently long time.
    However, the QM mainstream doesn't take seriously the possibility of an electron localizing between two nuclei collapsing down to the range of nuclear forces (~10^-15 m) - because of the smearing of the electron density cloud, the Heisenberg uncertainty principle, etc. I can't see how to perform a quantum calculation in which this electron would not escape and smear its probability around.
    In contrast, if we ask about a concrete ("classical") trajectory of the electron, there is no longer a problem with it staying between the two nuclei - as in the symmetric p - e - p collapse.
    If we agree that QM is an effective theory describing e.g. averaged trajectories, then asking for local trajectories of electrons makes LENR no longer theoretically impossible.


    So this discussion of QM foundations is a matter of to be or not to be for LENR ...

    Regarding "This is just the situation, which the quantum mechanics doesn't cover.": it was a situation described in the quantum formalism - which clearly doesn't fulfill energy conservation.
    In effective theories of the statistical-physics type, which QM strongly resembles, there can be conservation of expected energy - energy is most likely conserved.
    But a really fundamental theory, effectively described by QM at some scale, should have real, ultimate energy conservation - like any Lagrangian mechanics.


    Let's go back to the problematic Stark effect for Lyman-gamma (4->1) ...
    I have decided to perform the calculations (pdf file) as described there for n=3.
    So we need the matrix <n,l,m|z^hat|n,l',m'> for fixed n (assuming degeneracy) and all n^2 possibilities for l and m.
    The possible shifts are given by the eigenvalues of this matrix (times a*E*e).
    For n=3 we get the eigenvalues {-9, -9/2, 0, 9/2, 9} - this fits Frerichs' results, assuming we don't see the 0 line.


    For n=4 we get the eigenvalues {-18, -12, -6, 0, 6, 12, 18} - visually this seems to fit Frerichs' results, assuming we don't see the {-6, +6} lines.
    However, he got (10^8/lambda): {102630.5, 102684.2, 102823.6, 102964.4, 103021.7};
    after subtracting the average value we get {-194.38, -140.68, -1.28, 139.52, 196.82}.
    The proportions suggest that the external lines should be at ~140 * 1.5 = 210, so the observed ones are noticeably narrower than predicted.
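    As a cross-check of my own: the same eigenvalues can be obtained without diagonalizing the matrix, since in parabolic quantum numbers the first-order (linear) Stark shift of level n is (3/2)*n*q in units of a*E*e, with q = n1 - n2 running over -(n-1), ..., n-1:

```python
def stark_shifts(n):
    """Distinct linear Stark shifts of level n, in units of a*E*e."""
    return sorted({1.5 * n * q for q in range(-(n - 1), n)})

print(stark_shifts(3))  # [-9.0, -4.5, 0.0, 4.5, 9.0]
print(stark_shifts(4))  # [-18.0, -12.0, -6.0, 0.0, 6.0, 12.0, 18.0]
```

    These reproduce the n=3 and n=4 eigenvalue sets above, and in particular the equal spacing that the Frerichs data seem to contradict.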


    How to repair this discrepancy?
    Maybe someone has some more recent experimental results for Lyman-gamma (4->1)?


    update:
    I have just found a 1984 paper ( http://journals.aps.org/pra/ab…/10.1103/PhysRevA.30.2039 ) which starts with "Recent measurements of the Stark profiles of the hydrogen Lyman-alpha and -beta lines in an arc plasma have revealed a sizeable discrepancy between theoretical and experimental results" ...

    Ok, so imagine there are 1000 such photons; there is still a nonzero probability that all of them will turn out to have the lower energy, or that all of them will be measured to have the higher energy.
    How can we even talk about energy conservation here?

    Imagine you have a source of photons in the state |psi> = (|a>+|b>)/sqrt(2), where |a> and |b> are eigenstates of the Hamiltonian with different energies.
    What is the energy of such a superposition before measurement?
    Now perform measurements, separately on each photon - sometimes you will get the higher energy, sometimes the lower.
    Is energy conserved here? In other words: is the energy of the superposition before measurement always the same as after measurement?
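    The thought experiment above can be made concrete numerically (my own sketch, with assumed eigenvalues Ea = 1, Eb = 2): each measurement returns Ea or Eb with Born-rule probability 1/2, so only the expectation (Ea+Eb)/2 is fixed, while individual outcomes scatter:

```python
import numpy as np

rng = np.random.default_rng(0)
Ea, Eb = 1.0, 2.0                            # assumed energy eigenvalues
amps = np.array([1.0, 1.0]) / np.sqrt(2)     # amplitudes of |a>, |b>
probs = np.abs(amps) ** 2                    # Born rule: [0.5, 0.5]

# measure the energy of 100000 independently prepared photons
outcomes = rng.choice([Ea, Eb], size=100_000, p=probs)

print(round(probs @ np.array([Ea, Eb]), 12))  # 1.5: the fixed expectation <E>
print(outcomes.std() > 0)                     # True: individual outcomes fluctuate
```

    So at best the *expected* energy is conserved, which is exactly the statistical-physics-style conservation discussed in these posts.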

    Does Copenhagen fulfill energy conservation?
    Imagine you perform a measurement of the momentum of a photon, and thus of its energy - Copenhagen says you get a random measurement outcome.
    So there is some probability that this photon has the lower energy, some that it has the higher - how can we even talk about energy conservation for a theory predicting random energies?


    A theory predicting only probabilities, like thermodynamics or statistical physics, is an effective theory - it tries to predict the most probable evolution of our limited knowledge, using some law of large numbers. And this also applies to QM.
    We should search for the fundamental theory - the one whose effective description at some scale is QM.
    We use Lagrangian mechanics from QFT to GRT - so this fundamental theory should most likely be a Lagrangian theory.
    Lagrangian theories are deterministic, so the Bell theorem does not apply as a counter-argument ("super-determinism").
    We need a field (e.g. EM) with localized constructs like charges/particles - localized entities of fields are called solitons; using topological solitons we get quantization of charge, rest mass of particles, Coulomb attraction/repulsion, finite energy of the electric field of a charge (not true for a point charge) and many other properties.
    We need to understand the trajectories of these particles/charges/solitons - the long-time average should give the quantum probability clouds (and it does), the short-time behavior is nearly Kepler-type, and experiments suggest that nearly zero angular momentum dominates: free-fall radial trajectories.

    This is not about one of them (classical or QM) being wrong and the other being right.
    Both of them are applied as approximations:
    - practically used QM neglects e.g. the neighborhood of the atom, interaction with which causes wavefunction collapse - whose results are not predicted by QM (just their probabilities). This lack of information (about the neighborhood) is treated in a statistical-physics manner - QM has the statistical mechanics of the state of this neighborhood built in. If we could consider the QM of larger and larger systems, finally the Wavefunction of the Universe, there would no longer be a neighborhood and so no wave collapse - it would become a deterministic theory ... but we are far from being able to work with it in practice,
    - practically used classical mechanics neglects e.g. the field everything is happening in, which leads e.g. to the requirement of the Bohr-Sommerfeld quantization condition to find resonance with the field (as in the picture of Couder's walking droplets: http://www.pnas.org/content/107/41/17515.full ).


    The QM and classical pictures are just different perspectives on the same system - for example, we can look at coupled pendula through their positions (classical) or their normal modes (quantum). Increasing the number of pendula to infinity, we get a crystal with classical positions or "quantum" phonons.
    Considering classical trajectories, we need to add the neighborhood in a statistical way, as noise; averaging such perturbed classical trajectories, we get a Boltzmann distribution among trajectories, which leads exactly to the quantum density clouds (euclidean path integrals / Maximal Entropy Random Walk).
    Adding the field and wave-particle duality, e.g. for the electron - that it is both a localized entity (indivisible charge) and waves coupled around it (caused by an internal periodic motion: de Broglie's clock/zitterbewegung) - we get interference in the double-slit experiment: the particle goes along a single trajectory, its coupled "pilot" wave goes along all trajectories, leading to interference. See the double-slit experiment for Couder's walking droplets: https://hekla.ipgp.fr/IMG/pdf/Couder-Fort_PRL_2006.pdf
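    The pendula analogy can be made concrete (my own toy example, with an assumed coupling strength k = 0.5): two identical pendula coupled by a spring are coupled in position coordinates, but decouple into independent oscillators in normal-mode coordinates:

```python
import numpy as np

k = 0.5                          # assumed coupling strength
K = np.array([[1 + k, -k],
              [-k, 1 + k]])      # equations of motion: x'' = -K x (positions)

# diagonalizing K gives the decoupled ("quantum-like") normal-mode picture
w2, modes = np.linalg.eigh(K)    # squared frequencies and mode shapes

print(np.allclose(w2, [1.0, 1.0 + 2 * k]))             # True: w^2 = 1 and 1 + 2k
print(np.allclose(np.abs(modes[:, 0]), 1/np.sqrt(2)))  # True: in-phase mode x1 = x2
```

    The same system, seen through positions (coupled) or through normal modes (independent oscillators) - which is the point of the paragraph above.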


    We shouldn't see QM and classical as opposing - instead, they are complementary tools/perspectives, and we should learn to choose the better of them for approximating real systems.

    @Eric,
    This seems indeed a fascinating story about the foundations of modern physics ... a paper from the best journal in 1934 showing disagreement with the quantum calculations, which still seems the only one with Lyman-gamma ... and just 3 citations. I cannot find any other experimental paper with Lyman-gamma.
    It seems an inconvenient problem brushed under the carpet ... followed by shameless use of the Stark effect in all textbooks claiming perfect agreement ... without even referring to experiment.


    Gryzinski didn't even use semiclassical methods, but purely classical ones: Bohr plus a classical spin of the electron (magnetic dipole moment + gyroscope), plus possibly a precession of this spin to explain the Bohr-Sommerfeld quantization condition (1987 paper).
    And in nearly 30 papers published in journals at the level of Phys. Rev. (1957-2000) he shows surprisingly good agreement of these classical considerations, sometimes better than the quantum ones - especially for various scattering problems. His papers have ~3000 total citations ( https://scholar.google.pl/scholar?hl=en&q=gryzinski ).


    I don't think classical considerations have to disagree with quantum ones. Adding thermal noise to a classical trajectory and averaging over time, we get exactly the quantum probability distributions (Maximal Entropy Random Walk).
    The question here is to understand the short-time dynamics of electrons - especially if we want two nuclei to cross the Coulomb barrier at 1000 K ... and Gryzinski's work has strong arguments that radial trajectories usually dominate (the zero angular momentum limit of Bohr-Sommerfeld).
    Such radial trajectories can happen between two nuclei, screening their Coulomb repulsion and so making LENR possible to imagine.


    ps. I have also asked this question at physicsforums - maybe they will be able to clarify it:
    https://www.physicsforums.com/…ory-vs-experiment.885330/

    Gryzinski gives many examples where he claims that quantum predictions agree unsatisfactorily with experiment, while simple classical calculations agree much better.
    They mostly concern various scattering scenarios, which is not surprising, as QM seems to describe a dynamical equilibrium.
    However, there are also other examples, like calculating the diamagnetic coefficient, the Ramsauer effect (as outer-shell electrons screening the charge of the nucleus for inner-shell electrons), and also the Stark effect - which is in nearly all QM textbooks as an example of using perturbation theory (alongside the Zeeman effect).


    The Wikipedia article ( https://en.wikipedia.org/wiki/Stark_effect ) has a nice figure with the n-th level splitting into 2n-1 equally spaced sublevels:

    It is hard to find published experimental results - please cite some if you know them.
    A clear one for the Lyman series (2->1, 3->1, 4->1) can be found in the historical "Der Starkeffekt der Lymanserie" by Rudolf Frerichs, published in January 1934 in Annalen der Physik (its editors back then: W. Gerlach, F. Paschen, M. Planck, R. Pohl, A. Sommerfeld and M. Wien); here are its results:


    These are clearly not equally spaced.
    One could expect that such a paper, which top physicists were aware of 80 years ago, should by now have hundreds of citations - either as confirmation of the theoretical calculation in all QM textbooks, or as a surprise that should be understood and repaired...
    In contrast, it has now just 3 citations: https://scholar.google.pl/scholar?cites=15476592679702358817


    This was pointed out in Gryzinski's 2002 book (unfortunately in Polish, and it seems there is no published paper for it), alongside a few lines of classical calculations (Bohr-Sommerfeld), leading to this picture (top: QM, bottom: classical, blue: experiment):



    Could anyone comment on that?


    update: I have looked at the two more recent, English citations of the Frerichs 1934 paper: the 1992 one concerns much higher levels (10->30, getting nearly equally spaced sublevels) and refers to only one experimental paper for the Lyman series (->1, Frerichs') and 3 papers for the Balmer series (->2). The second (1996) concerns Lyman-alpha (2->1).
    There is something really strange going on with this Lyman-gamma ...

    I have found two of Gryzinski's articles related to fusion - and both are publicly available (I can privately share some others):


    1982, "Intense ion beam generation in "RPI" and "SOWA" ion-implosion facilities" - regarding this coaxial plasma gun:
    https://hal.archives-ouvertes.…jphys_1982_43_5_715_0.pdf


    And 1979, "Theoretical description of collisions in plasma: classical methods": https://hal.archives-ouvertes.fr/jpa-00219441/document
    It is a good starting point for LENR considerations.
    It uses the Binary Encounter Approximation (BEA) - exactly as I have written:
    treat the two essential particles directly (e.g. p/D/T + e for LENR)
    and the rest effectively - which in his considerations was an oscillating multipole:
    C_n(r^hat)/r^n * sin(omega*t) + C_m(r^hat)/r^m
    where the C are multipole functions (dipole, quadrupole or octupole) - this pulsating multipole approximation works especially well for modeling scattering on noble gases - see his 1975 papers.


    ps. Some of his papers can be downloaded here: http://www.newkvant.narod.ru/
    ps2. His work is continued e.g. by prof. Victor V. Vikhrev (also a plasma physicist) - here is a recent, freely available paper of his:
    https://www.researchgate.net/p…bining_in_hydrogen_plasma

    Not in the billion dollar realm, but very much a particle accelerator.


    A similar kind of "accelerator" is currently used in the Dense Plasma Focus: https://en.wikipedia.org/wiki/Dense_plasma_focus
    A nice animation and explanation:

    (embedded YouTube video)


    Gryzinski's group has been working on a similar approach to fusion since 1957 (they say they started it); here is an article about their group (in Polish, but with lots of pictures): http://web.archive.org/web/201…umenty/ptj/sadowski10.pdf
    One of their articles: http://jphys.journaldephysique…hys_1982__43_5_715_0.html
    E.g. a prototype of the coaxial plasma gun (1958-60), where both the anode and cathode are multiple rods:

    Those are eV-scale photons in a very special (superfluid) medium; here we are talking about MeV-scale photons in ordinary solid matter (which has eV-scale phonons, excitons, plasmons, etc.) - we know how such photons behave, and the only reasonable way to "store" them is in nuclei (isomers) or other nuclear reactions.