Electron-assisted fusion

  • Ok, so imagine there are 1000 such photons; there is still a nonzero probability that all of them will turn out to have lower energy, or that all of them will be measured to have higher energy.
    How can we even talk about energy conservation here?
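    Just to put a number on how unlikely the all-low outcome is, here is a toy calculation under the illustrative assumption (not from the original post) that each photon is independently measured below its mean energy with probability 1/2:

```python
# Toy model: each of 1000 photons independently comes out "low" with p = 1/2.
# The probability that ALL of them come out low is nonzero but astronomically small.
p_all_low = 0.5 ** 1000

print(p_all_low)  # on the order of 1e-301
```

    So the scenario is formally allowed, which is exactly why the question about strict (rather than average) energy conservation arises.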

  • This is just a situation that quantum mechanics doesn't cover. The theory deals only with statistically averaged probability functions.
    Of course, some deterministic interpretations of QM would involve hidden variables and mechanisms that would allow energy to leak into extra dimensions or similar constructs - but the energy balance within QM itself is strictly conserved.


    BTW, the buckling motion of graphene could be used to generate electricity from ambient thermal energy. Such an energy source could run low-power devices such as remote wireless sensors. At room temperature the layer separation varies by as much as 10 nm over that time – a distance about 40 times the separation between neighbouring carbon atoms in graphene (synopsis). There are other already-known examples of graphite-based generators. IMO physicists' interest in researching these devices doesn't correspond to either their practical or their theoretical significance, because they fear being accused of attempting to violate energy conservation laws.

  • Regarding "This is just a situation that quantum mechanics doesn't cover": it was a situation described in the quantum formalism - one which clearly doesn't fulfill energy conservation.
    In effective theories of the statistical-physics type, which QM strongly resembles, there can be conservation of expected energy - that is, energy is most likely conserved.
    But a truly fundamental theory, effectively described by QM at some scale, should have real, exact energy conservation - like any Lagrangian mechanics.


    Let's go back to the problematic Stark effect for Lyman-gamma (4->1) ...
    I have decided to perform the calculations (pdf file) as described here for n=3.
    So we need the matrix <n,l,m| ẑ |n,l',m'> for fixed n (assuming degeneracy) and all n^2 combinations of l and m.
    The possible shifts are given by the eigenvalues of this matrix (times a*E*e).
    For n=3 we get eigenvalues {-9, -9/2, 0, 9/2, 9} - this fits Frerichs' results assuming we don't see the 0 line.
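    These eigenvalues have a well-known closed form in parabolic quantum numbers: the distinct linear Stark shifts for level n are (3/2)·n·q with q = n1 - n2 running over -(n-1), ..., n-1 (a standard textbook result, quoted here as a sketch to reproduce the values above without building the full matrix):

```python
from fractions import Fraction

def stark_shifts(n):
    """Distinct linear Stark shifts of hydrogen level n, in units of a*E*e:
    (3/2)*n*q with q = n1 - n2 = -(n-1), ..., n-1 (parabolic quantum numbers)."""
    return [Fraction(3, 2) * n * q for q in range(-(n - 1), n)]

print([str(s) for s in stark_shifts(3)])  # ['-9', '-9/2', '0', '9/2', '9']
print([str(s) for s in stark_shifts(4)])  # ['-18', '-12', '-6', '0', '6', '12', '18']
```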


    For n=4 we get eigenvalues {-18, -12, -6, 0, 6, 12, 18} - visually it seems to fit Frerichs' results assuming we don't see the {-6, +6} lines.
    However, he got (10^8/lam): {102630.5, 102684.2, 102823.6, 102964.4, 103021.7};
    after subtracting the average value we get {-194.38, -140.68, -1.28, 139.52, 196.82}.
    The proportions suggest that the outer lines should sit at ~140 * 1.5 = 210, so the observed splitting is essentially narrower than predicted.
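    The subtraction and the ratio check above are easy to reproduce (using only the five measured values quoted in this post):

```python
# Frerichs' measured Lyman-gamma line positions, in units of 10^8/lambda.
lines = [102630.5, 102684.2, 102823.6, 102964.4, 103021.7]

mean = sum(lines) / len(lines)
shifts = [x - mean for x in lines]
# shifts ~ [-194.38, -140.68, -1.28, 139.52, 196.82]

# Theory predicts an outer/inner shift ratio of 18/12 = 1.5;
# the observed ratio on one side is only ~1.41, i.e. narrower than predicted.
ratio = shifts[-1] / shifts[-2]
print(round(ratio, 3))
```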


    How to repair this discrepancy?
    Maybe someone has some more recent experimental results for Lyman-gamma (4->1)?


    update:
    I have just found a 1984 paper ( http://journals.aps.org/pra/ab…/10.1103/PhysRevA.30.2039 ) which starts with "Recent measurements of the Stark profiles of the hydrogen Lyman-alpha and -beta lines in an arc plasma have revealed a sizeable discrepancy between theoretical and experimental results" ...

  • @Eric Walker,
    Nevanlinna of Cobraf just posted a link to a paper (2012) that you might want to have a look at. Currently the link is the second post down, and it may expire at some point.
    This is regarding experiments on "decay rate changes". Maybe you have seen it already, but it has interesting implications for earlier experiments. It looked like your cup of tea.


    http://translate.google.ca/translate?hl=en&sl=it&tl=en&u=http%3A%2F%2Fwww.monetazione.it%2Fforum%2Ftopic.php%3Freply_id%3D123584914%26topic_id%3D5747%26ps%3D20%26pg%3D1%26sh%3D0

  • But all these foundation arguments do not alter one iota the physical predictions of the theory.



    This comment is irrefutably wrong. Think, Thomas, of the implications of ANY fundamental assumptions in ANY theory. Begin at the beginning, if you will, with say the parallel postulate in Euclidean geometry... that only one parallel line can be drawn through a point to a given line. Riemannian, Lobachevskian and perhaps other geometries evolve or devolve necessarily from a single elementary change - i.e. that an infinite number of parallels, or alternatively NO parallels, may be drawn to a given line. Immense implications flow from the alteration of a single Euclidean postulate at the base of the most fundamental theory of the structure of space.


    Your comment there is uncharacteristically loony, since these absolutely elementary changes have absolute implications in the real world. That is general relativity v. special relativity v. QM v. Newton, and so on.


    To those interested, please read later productions by Hans Reichenbach or Rudolf Carnap.

  • The assumed foundations can completely change "the physical predictions of the theory" - as in the case of interest here: QM and the probability of LENR.


    The problem here is crossing the Coulomb barrier to get the two nuclei together - in the conventional view of fusion this would require billions of kelvins to reach the needed energy thermally with non-negligible probability.
    In hypothetical LENR this Coulomb barrier is claimed to be frequently crossed at ~1000 K, for which the only non-magical explanation seems to be an electron staying between the two nuclei; for example, the pure Coulomb force says that a symmetric p - e - p initial configuration should collapse into a point.
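    The "billions of kelvins" figure can be sanity-checked with a back-of-envelope estimate: equate kB·T to the Coulomb energy of two protons at the range of the nuclear force. This is a crude sketch only (real thermonuclear rates involve tunneling and the tail of the Maxwell distribution, so actual ignition temperatures are far lower):

```python
import math

e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
kB = 1.380649e-23        # Boltzmann constant, J/K
r = 1e-15                # rough range of the nuclear force, m

E_barrier = e**2 / (4 * math.pi * eps0 * r)  # ~1.4 MeV Coulomb energy at contact
T = E_barrier / kB                           # ~1.7e10 K: "billions of kelvins"

print(f"{E_barrier / e / 1e6:.2f} MeV, {T:.2e} K")
```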


    So the main LENR question is the probability of an electron staying between the two collapsing nuclei for a sufficiently long time.
    However, the QM mainstream doesn't take seriously the possibility of an electron localizing between two nuclei that collapse down to the range of the nuclear force (~10^-15 m) - because of the smeared electron density cloud, the Heisenberg uncertainty principle, etc. I don't see how to set up a quantum calculation such that this electron would not escape and smear its probability around (?)
    In contrast, if we ask about a concrete ("classical") trajectory of the electron, there is no longer any problem with its staying between the two nuclei, as in the symmetric p - e - p collapse.
    If we agree that QM is an effective theory describing e.g. averaged trajectories, then asking about local trajectories of electrons makes LENR no longer theoretically impossible.


    So this discussion of QM foundations is a question of to be or not to be for LENR ...

  • Nevanlinna of Cobraf just posted a link to a paper (2012) that you might want to have a look at.


    Here are the details of the paper, for later reference (or for when the link goes down): Thomson et al., "The apparent change of radioactivity with temperature in a 226Ra decay chain," J. Radioanal. Nucl. Chem. (2012), doi:10.1007/s10967-011-1403-5. Includes Miley and Swartz as coauthors (as was mentioned above).


    Quote

    Radioactive decay rates are to a large extent believed to be independent of the chemical environment. This is the physics basis implicitly assumed in applications such as radioisotope dating. While this statement is a good approximation for most radioactive decays, there are cases where a slight variation of 0.5% or more can be observed, as in the electron capture type of decay. There are renewed interests in possible decay-rate changes with external parameters such as temperature, with controversy as to the phenomenon’s authenticity. In this paper, we study the variation of radioactivity counts that significantly change (up to 50% or more) with temperature. We carefully studied the characteristics of the change and found that the presence of a gaseous decay daughter can pose a serious challenge to a bona fide account of the intrinsic nuclear decay rate. After a careful solution to rate equations of the relevant isotopes under our experimental conditions, we found that most of the radioactivity change could be accounted for by the diffusion and loss of gaseous daughters under the heat, without a supposed change in the intrinsic nuclear decay rate. We hence demonstrate that an accurate determination of the decay constant has to consider the possible diffusion of volatile components in the decay chain. This is especially important in cases involving significant temperature change.

  • (...)as in the electron capture type of decay(...)


    It agrees with Gryzinski's picture: while in the Bohr and QM pictures higher-shell electrons remain relatively far from the nucleus, in the free-fall atomic model they fall in and nearly miss the nucleus (~10^-13 m minimal distance), so the atomic/molecular shell structure might influence the rate of electron capture by the nucleus - especially for huge nuclei like the 226Ra here.


    I am just investigating his related argument regarding the screening constant: https://en.wikipedia.org/wiki/Slater%27s_rules
    It generally says that in multi-electron atoms the effective nuclear charge seen by an electron is reduced by a screening constant:
    Z_eff = Z - s
    What is surprising here is that outer-shell electrons screen the charge of the nucleus for inner-shell electrons, which seems to contradict the Bohr and QM picture, in which they should stay at larger distances.
    In the free-fall atomic model outer-shell electrons spend some time very close to the nucleus, so they can screen the inner-shell electrons - see http://gryzinski.republika.pl/teor6ang.html
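    As a concrete illustration of the Z_eff = Z - s bookkeeping, here is a minimal sketch of the textbook Slater's rules restricted to an ns/np electron (the shell-by-shell electron counts are passed in by hand; the worked example is a 2p electron in carbon, a standard textbook case):

```python
def slater_zeff(Z, same_group, n_minus_1, deeper):
    """Z_eff = Z - s for an ns/np electron via textbook Slater's rules:
    0.35 per other electron in the same (ns,np) group,
    0.85 per electron in shell n-1, 1.00 per deeper electron."""
    s = 0.35 * same_group + 0.85 * n_minus_1 + 1.00 * deeper
    return Z - s

# Worked example: 2p electron in carbon (Z=6, config 1s^2 2s^2 2p^2):
# 3 other electrons in the (2s,2p) group, 2 electrons in the 1s shell.
zeff_C_2p = slater_zeff(6, same_group=3, n_minus_1=2, deeper=0)
print(zeff_C_2p)  # 3.25
```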

  • Radioactive decay is caused by the activity of virtual particles in the vacuum. If that activity is increased, the rate of radioactive decay will also increase. Charge separation will produce a zone of positive vacuum energy and a corresponding zone of negative vacuum energy. If the unstable isotope is in the zone of positive vacuum energy, its rate of decay will increase.


    See “Effects of Vacuum Fluctuation Suppression on Atomic Decay Rates”.


    At: http://arxiv.org/pdf/0907.1638v1.pdf

  • Gameover said:

    Quote

    I would not trust that the explanation they give is what they actually think is happening.


    Why assume that they are saying anything other than what they actually are saying? The volatile-daughter-product problem is important to clear up before testing for real rate changes. If it inconveniently explains away some earlier work as possibly caused by this issue, then so be it. Better than chasing false anomalies. The method Thomson et al. used can be applied to look at older experiments from a new perspective. If some older work survives this "challenge", then all the better for that work. If it is easily explained away by the newer analysis method, then time wasted is reduced.

  • axil,
    There might be various factors affecting nuclear reactions; e.g. the presence of neutrinos is a popular hypothesis.
    However, what is specifically discussed here is "electron capture by the nucleus" - so the main factor should be the presence, near the nucleus, of an electron to capture.


    There is a crucial question: which shell's electrons dominate in this electron capture?
    In the Bohr and QM pictures the inner-shell electrons should dominate; in the free-fall atomic model any electron can be captured.
    Spectroscopy should allow us to determine this: if it's an inner-shell electron, the vacancy will be quickly filled by an outer electron - producing a photon.

  • From https://en.wikipedia.org/wiki/Internal_conversion :
    "Most internal conversion (IC) electrons come from the K shell (the 1s state), as these two electrons have the highest probability of being within the nucleus. However, the s states in the L, M, and N shells (i.e., the 2s, 3s, and 4s states) are also able to couple to the nuclear fields and cause IC electron ejections from those shells (called L or M or N internal conversion). Ratios of K-shell to other L, M, or N shell internal conversion probabilities for various nuclides have been prepared"
    Pointing here: http://www.nucleide.org/DDEP_WG/introduction.pdf


    I will think about it.

  • @gameover,
    That was not sarcasm at all. It is entirely possible, maybe even very likely, that the effects caused by volatile daughter products were not considered in any detail in early "decay rate change" experiments.


    Consider for example some of the electrolysis experiments using U metal.

  • @gameover,
    The report does not state which gas was included in the blank cell. One might assume it was air, if not reported otherwise.
    I suggest asking the authors for any clarifications on aspects of the experiment, and about any extra statements they might wish to make about the conclusions.

  • That suggests a test of Gryzinski's explanation. Most electron captures are considered to be K- or L-shell because of something I forget; maybe the Auger transitions that follow?


    This concerns both electron capture ( https://en.wikipedia.org/wiki/Electron_capture ) and internal conversion ( https://en.wikipedia.org/wiki/Internal_conversion ).
    Here is the best paper regarding electron capture I could find: http://www.sciencedirect.com/s…icle/pii/0020708X8290151X
    It uses 55Fe and says that M capture produces too low an energy to be directly observed by their scintillators.
    However, it reports that P_K + P_L was essentially smaller than 1 - the difference was about 1.6% here (3% for 131Cs).
    As I have already cited, for internal conversion the higher shells (including M and N) also make a non-negligible contribution.


    So there is no doubt that higher-shell electrons also spend some time extremely close to the nucleus - close enough for the nuclear force to act.
    In contrast to Bohr, in Gryzinski's picture electrons pass the nucleus at a distance of ~10^-13 m about 10^16 times per second, suggesting a chance of interaction with the nucleus, especially for larger nuclei.
    However, predicting the probabilities is a difficult nuclear-physics question: electrons from different shells have different velocities (and angular momenta) while passing the nucleus, and velocity seems a crucial factor for the probability of interaction.
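    The ~10^16 figure is at least consistent in order of magnitude with the classical Bohr ground-state orbital frequency, which one can take as a rough proxy for how often a bound electron revisits the nucleus region (an order-of-magnitude sketch only, using standard Bohr-model values):

```python
import math

a0 = 5.29177e-11  # Bohr radius, m
v1 = 2.18769e6    # Bohr ground-state orbital speed, m/s

# Orbits (i.e. nucleus passes) per second for the ground-state Bohr orbit:
nu = v1 / (2 * math.pi * a0)
print(f"{nu:.2e} passes/s")  # ~6.6e15, consistent with the ~1e16 quoted above
```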


    The electron capture Wikipedia article mentions this dependence on chemical bonds:
    "Chemical bonds can also affect the rate of electron capture to a small degree (in general, less than 1%) depending on the proximity of electrons to the nucleus. For example, in 7Be, a difference of 0.9% has been observed between half-lives in metallic and insulating environments.[8] This relatively large effect is due to the fact that beryllium is a small atom whose valence electrons are close to the nucleus."
    Where [8] is http://link.springer.com/artic…40%2Fepja%2Fi2006-10068-x


    ps. Regarding the dynamics of electrons in atoms: while QM wants immediate wavefunction collapse (neglecting any hidden dynamics), there is a nice Science 2010 paper, "Delay in photoemission": http://science.sciencemag.org/content/328/5986/1658
    "We used attosecond metrology to reveal a delay of 21±5 attoseconds in the emission of electrons liberated from the 2p orbitals of neon atoms with respect to those released from the 2s orbital by the same 100–electron volt light pulse."


    There is obviously some crucial hidden dynamics of electrons in atoms - dynamics we not only don't understand, but are even forbidden to ask about in the mainstream QM view.

  • So there is no doubt that higher-shell electrons also spend some time extremely close to the nucleus


    I don't think there was ever a doubt about this. QM says that any s-shell electron, no matter how far out, spends some time in the nucleus. I guess the question for me is: does Gryzinski's picture lend itself to the actual ratios of K, L, M, etc. captures that are observed, with most being K-capture? There is some definite evidence that has led to this conclusion, although I do not recall what it was. There's an interesting possibility that Gryzinski's account, with his literal orbitals, is at variance not only with the Copenhagen interpretation but with the existing calculations as well.


    Does Gryzinski do away with the spherical harmonics?

  • Regarding the probabilities of electron capture, I haven't seen such calculations; I have only recently started looking closely at this approach, and generally it seems a very complex question involving nuclear physics.
    Different electrons in a multi-electron atom (which I don't yet understand) have different parameters while passing near the nucleus (e -> p scattering):
    - velocity (energy),
    - minimal distance (~impact parameter),
    - angular momentum,
    - change of incoming angle,
    - angle between spin and angular momentum,
    - angles to the spin of the nucleus.
    These factors describe the details of the short moment when the nuclear interaction can take place - I don't know how they determine the probability of the nucleus capturing the passing electron.
    I think the big difference could be the orbital angular momentum: electron capture from a p orbital seems completely negligible in QM, while it seems probable in the free-fall model.
    However, I haven't seen such an experiment (?) The Zeeman or Stark effect could help here to distinguish angular momentum.


    Regarding spherical harmonics: Gryzinski uses purely classical trajectories.
    I personally see QM, and hence the spherical harmonics, as a mix of two ingredients:
    1) the result of adding thermal noise (additional interactions, e.g. with the neighborhood) to these classical trajectories and averaging over time - statistical mechanics suggests assuming a Boltzmann distribution among possible paths, which gives exactly the quantum probability densities (Euclidean path integrals / maximal entropy random walk);
    2) a description of the state of the field everything is happening in, resonance with which requires the Bohr-Sommerfeld quantization condition, as in Couder's quantization for walking droplets: http://www.pnas.org/content/107/41/17515.full
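    Point 1 can be illustrated numerically: in the maximal entropy random walk (MERW) on a graph, the stationary probability of node i is ψ_i², where ψ is the dominant eigenvector of the adjacency matrix - a Born-rule-like density. A small pure-Python sketch on a 5-node path graph (power iteration is shifted by 2I because path graphs are bipartite, so plain power iteration would oscillate):

```python
import math

n = 5
# Adjacency matrix of a 5-node path graph: 0 - 1 - 2 - 3 - 4
A = [[1 if abs(i - j) == 1 else 0 for j in range(n)] for i in range(n)]

# Dominant eigenvector of A via power iteration on A + 2I (same eigenvectors,
# shifted eigenvalues; the shift avoids the bipartite +/- lambda oscillation).
psi = [1.0] * n
for _ in range(2000):
    new = [sum(A[i][j] * psi[j] for j in range(n)) + 2 * psi[i] for i in range(n)]
    norm = math.sqrt(sum(x * x for x in new))
    psi = [x / norm for x in new]
lam = sum(psi[i] * A[i][j] * psi[j] for i in range(n) for j in range(n))

# MERW transition probabilities S_ij = A_ij * psi_j / (lam * psi_i);
# its stationary distribution is rho_i = psi_i^2 (quantum-like density).
S = [[A[i][j] * psi[j] / (lam * psi[i]) for j in range(n)] for i in range(n)]
rho = [p * p for p in psi]
rho_next = [sum(rho[i] * S[i][j] for i in range(n)) for j in range(n)]
# rho_next equals rho up to numerics: psi^2 is stationary, peaked mid-chain.
```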

  • I personally see QM, and hence the spherical harmonics, as a mix of two ingredients:


    My understanding of one reason the spherical harmonics are used is to explain the filling of atomic (and nuclear) shells. Because electrons are fermions, you can't have more than two occupying the s orbital of a shell, for example, and no more than two in any one of the three p orbitals of a shell (six p electrons in all). The s electrons of a shell all have one energy and the p electrons have another (ignoring fine structure). These patterns are borne out by the energy transitions seen in atomic spectra, and the number of electrons becomes apparent when the spectral lines are split: you see several lines clustered around this energy, and a different number clustered around that energy. So the spherical harmonics do a great job of describing how many electrons can fill each energy level.
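    The counting above follows directly from the spherical harmonics: subshell l has (2l+1) orbitals, each holding two electrons by the Pauli principle. A one-liner check:

```python
# Subshell capacities implied by the spherical-harmonic picture:
# (2l+1) orbitals per subshell l, times 2 spin states (Pauli exclusion).
def capacity(l):
    return 2 * (2 * l + 1)

caps = {name: capacity(l) for l, name in enumerate("spdf")}
print(caps)  # {'s': 2, 'p': 6, 'd': 10, 'f': 14}
```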
