Direct experimental proof of SO(4) physics

  • As an example of the different effects and laws that might pertain at
    different energy levels, consider what different nonlinear effects happen
    to one's head when one collides with a wall at walking speed (5 km/hr)
    and at 200 km/hr. :)

  • Nonlinear terms in Maxwell's equations may well have meaning during GeV
    collisions of heavy Pb ions, as mentioned in

    https://arxiv.org/pdf/1904.01243

    Perhaps in 1870 Maxwell never envisaged machines that would collide heavy
    ions at 5000 MeV energy levels. It is quite possible that at more common
    MeV levels the linear Maxwell equations are quite sufficient to describe
    photon interactions and other events, especially at the energy levels seen
    in LENR energy exchanges, in the region 4 eV - 2.2 MeV.


    It's a bit like the anomalous Rutherford scattering seen at higher
    collision energies due to Poissonic forces as opposed to Coulombic forces
    (Schaeffer, 2016).


    To give a more detailed reply to the speculative parts of this post:


    Your reference https://arxiv.org/pdf/1904.01243.pdf details exactly my point. Classical (Maxwell's equations) photon interaction in free space is linear, and therefore cannot lead to scattering. QED predicts nonlinearity (and scattering) via virtual electron-positron exchange.


    Your reference considers the nonlinearity as formulated by the Euler-Kockel-Heisenberg Lagrangian.
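    For reference, the standard textbook form of that Lagrangian (in natural
    Heaviside-Lorentz units with ħ = c = 1 and m_e the electron mass) adds
    quartic field terms to the free Maxwell Lagrangian:

```latex
\mathcal{L}_{\mathrm{EH}}
  = \tfrac{1}{2}\left(\mathbf{E}^{2}-\mathbf{B}^{2}\right)
  + \frac{2\alpha^{2}}{45\,m_{e}^{4}}
    \left[\left(\mathbf{E}^{2}-\mathbf{B}^{2}\right)^{2}
    + 7\left(\mathbf{E}\cdot\mathbf{B}\right)^{2}\right]
```

    The quartic terms make the resulting field equations nonlinear, which is
    exactly what permits light-by-light scattering; the expansion is valid for
    fields slowly varying on the Compton scale and well below the critical
    field strength.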


    Some QED history relevant to photon-photon scattering:


    https://arxiv.org/pdf/1711.05194.pdf

    (you need this to find the references in my quote below; the quote comes from pages 5-6)


    However, the physical mechanism for photon-photon interaction processes
    and their quantitative details remained beyond consideration for a couple
    of further years. In 1933, Otto Halpern [26] finally proposed that virtual
    electron-positron pairs could be at the origin of photon-photon
    collisions. While this clarified the qualitative picture to be applied for
    the description of photon-photon scattering, only the early development of
    quantum electrodynamics (QED) provided physicists with the theoretical
    tools for a quantitative answer: It was finally given by two students of
    Werner Heisenberg, Hans Euler and Bernhard Kockel, who calculated in 1935
    the leading nonlinear corrections to the Maxwell equations in vacuum [47].
    Within the framework of QED, it turned out that photon-photon scattering
    as a characteristic feature of a nonlinear electrodynamic theory has a
    very low probability for all practical purposes. Once the theoretical
    picture had been established the challenge emerged to demonstrate
    experimentally consequences of the violation of the linearity of Maxwell
    equations in vacuum, i.e., of the violation of the superposition principle
    for light (or, speaking more generally, for electromagnetic fields). The
    processes of Delbrück scattering (elastic scattering of a photon in the
    Coulomb field of a nucleus) and photon splitting (in an external field)
    have meanwhile experimentally been confirmed (cf. [181, 182]). However, up
    to the present day demonstration of such a fundamental physical phenomenon
    as photon-photon scattering remains a problem at the edge of current
    experimental and observational capabilities (for some recent developments
    see [183, 184]).


    My point was exactly that this was a QED theoretical prediction - the Maxwell equation nonlinearity comes only from the consideration of QED effects.

    Furthermore, a quantitative prediction made in the 1930s has been found to be correct by direct measurement only 80 years later, using the LHC.


    (There has however been other indirect evidence).


    I should also point out that this QED predicted nonlinearity (caused by additional QFT effects) comes from the theoretical framework without considering space curvature due to energy density (Chapter 7 here). QED incorporates special relativity, and hence Lorentzian spacetime, but not general relativity. However, QED tensor equations could be done in curved spacetime. It is just the dynamic interactions between electromagnetic energy density and space curvature that are intractable. Luckily that is an incredibly small effect even under LHC extreme photon scattering conditions.


    So curved space would provide additional nonlinearity (predicted theoretically in the case of static curvature), but I know of no lab experiment which measures that in the context of photon-photon scattering. The effect is tiny - still, there might be indirect measurements based on the properties of the early universe as revealed by the cosmic microwave background? The trouble with such indirect evidence is that it relies on other, possibly uncertain, assumptions.

  • As an example of the different effects and laws that might pertain at
    different energy levels, consider what different nonlinear effects happen
    to one's head when one collides with a wall at walking speed (5 km/hr)
    and at 200 km/hr. :)



    Absolutely: so in that case (leaving aside all the biological complexity and the fact that your brain would no doubt stop working) the obvious behaviour change here is the transition between elastic and inelastic deformation of the skull. That can be calculated, and exists because the response of material objects to shear and stress is inherently nonlinear.


    But W is rejecting QED (which predicts specific nonlinearities, calculated in the 1930s and confirmed directly in one specific case this year) and claiming a behaviour change at some known energy (or frequency). Now, QED has a specific coherent mechanism for the nonlinearity, which as you know comes inevitably from QFT and is consistent with (in fact derived from) the leptonic parts of the Standard Model. It comes with QM and all that stuff, once you apply it to fields.


    Suppose, like W, you reject QED and hope to replace it by something else. Then you need some other mechanism for the nonlinearity - and you have a fudge factor unless you can calculate precisely what the nonlinearity is from the theory. Note that QED needs no additional input to do this other than the electron mass and charge.
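    To make the "no fudge factors" point concrete: the standard low-energy QED
    result for the unpolarized photon-photon cross section (valid for photon
    energies well below the electron rest energy) contains nothing beyond the
    fine-structure constant α (i.e. the electron charge) and the electron
    mass:

```latex
\sigma_{\gamma\gamma\to\gamma\gamma}
  \;=\; \frac{973}{10125\,\pi}\,\alpha^{4}
        \left(\frac{\hbar}{m_{e}c}\right)^{2}
        \left(\frac{\hbar\omega}{m_{e}c^{2}}\right)^{6},
  \qquad \hbar\omega \ll m_{e}c^{2}
```

    The steep (ω/m_e)⁶ suppression is why the effect is utterly negligible at
    eV-to-MeV scales and only becomes measurable with GeV-scale photon fluxes
    such as those in the Pb-Pb collisions discussed above.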


    I'm quite content for W to have as many fudge factors as needed in his theory for it to match experiment, but I'd like them to be explicit, together with the (perhaps hand-waving) theoretical explanation for them. Also, I'd note that since QED calculates these nonlinearities and is validated by experiment, W's theory will need to follow QED in the predictions it makes in this area.


    In this case it is just a question of defining at what frequency behaviour changes. W's post above made it sound like he thought this was obvious: but it is not obvious to me.

  • From the arxiv paper:

    So far the ATLAS and CMS collaborations obtained first evidence of
    photon-photon scattering for invariant masses Wγγ > 6 and 5 GeV,
    respectively. Due to the experimental cuts on transverse photon momenta
    pt,γ > 3 GeV, the resulting statistics is so far rather limited. The ATLAS
    result is roughly consistent with the Standard Model predictions for
    elementary cross section embedded into state-of-art nuclear calculation
    including realistic photon fluxes as the Fourier transform of realistic
    charge distribution.


    Any idea what roughly consistent means?


    RB - I'd guess that you have read the whole of the diphoton scattering LHC paper, not just the sentence you quote? In which case, while a rhetorical question is of course good rhetoric, a better argument would be to make a precise point.


    What they mean is that they have observed diphoton scattering but do not yet have enough data to tie down all the other noise components and therefore generate high-accuracy quantitative measurements of the cross-section.


    What they have is +/- 50% accuracy and agreement with theoretical predictions:


    ATLAS measured a fiducial cross section of σ = 70 ± 24 (stat.) ± 17
    (syst.) nb, and theoretical calculations (including experimental
    acceptance) gave 45 ± 9 nb [7] and 49 ± 10 nb [8].
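    As a quick sanity check on "roughly consistent", a sketch that combines
    the quoted uncertainties in quadrature (treating stat. and syst. as
    independent Gaussian errors - an assumption) and computes the discrepancy
    between measurement and each prediction in units of the combined error:

```python
import math

def pull(meas, err_meas, theory, err_theory):
    """Discrepancy in units of the combined standard deviation."""
    return abs(meas - theory) / math.hypot(err_meas, err_theory)

# ATLAS: sigma = 70 +/- 24 (stat.) +/- 17 (syst.) nb
err_atlas = math.hypot(24, 17)   # ~29.4 nb combined experimental error

# Predictions quoted from refs [7] and [8]
for theory, err in [(45, 9), (49, 10)]:
    print(f"{theory} nb prediction: {pull(70, err_atlas, theory, err):.2f} sigma")
```

    Both predictions come out well under one combined standard deviation from
    the measurement, which is presumably what "roughly consistent" means here.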


    Don't worry - the estimates will get better as they tie down as-yet badly quantified background sources, and also as they collect more data!



  • As an example of the different effects and laws that might pertain at
    different energy levels, consider what different nonlinear effects happen
    to one's head when one collides with a wall at walking speed (5 km/hr)
    and at 200 km/hr. :)


    I think you are using "linear" to just mean "simple" and "nonlinear" to somehow mean "more true". While it is true that linear equations have a particular type of simplicity to them, the definition of linear is much more constrained than that. And "nonlinear" just means anything that isn't linear. Just because some equations are nonlinear doesn't mean they are a truer or more faithful model of reality.

  • Well, there is no known element W that decays releasing protons. That's not what I meant; I was referring to the possibility of secondary fusion reactions between He3 + He3 releasing 2 protons, or He3 + D releasing one proton, leading to an accumulation of reactive protons within the transition-metal lattice in cold fusion. He3, tritium and D would be constrained within the lattice structure, leading to further interactions with high-energy protons (and neutrons). This could be one explanation of the Fleischmann & Pons runaway experiment: if there was a gradual accumulation of He3 within the Pd lattice over weeks of electrolysis of D2O, once the He3 had reached a critical density within the cathode there could then have been a chain reaction with high-energy protons, leading to massive energy release and meltdown (through the floor!). And mostly aneutronic, so no toxic radiation.


    I read your rotatoral collapse theory as being a mechanism, involving micro-magnetism, for distributing the energy of fusion reactions through the lattice. All I'm trying to figure out is what these fusion reactions are in LENR, as there seems little consensus as to what is happening - i.e. it's nuclear, but some workers like R. Godes (Brillouin) ascribe initial D formation to electron capture by protons, then run it through to He4 and energy release as before (I can't see why they never tried D instead; it would probably raise their COP by a factor of 2). Other workers like, say, Iwamura or Takahashi don't have any fixed theory in mind and experiment, showing: yes, we obtain excess heat and measure all sorts of interactions between nanostructures of transition metals - an equally valid empirical approach. Then there's all the transmutation data and related SPAWAR studies, all of which can be taken as neutron and proton release from CF experiments, and the 'strange radiation', which I proposed was probably a complex mix of just about every particle or ray you could think of except maybe betas.


    Then we go right back to the beginning of cold fusion, which was originally predicted by Andrei Sakharov as muon-catalysed fusion, then demonstrated by direct experiment. There are patents now claiming electron 'clusters' can do the same thing, whilst Holmlid is beavering away on laser-generated muons to achieve essentially the same thing. All I'm saying is: hey guys, maybe it's all down to a critical density of protons, neutrons and He3 (or, dare I say it, UDD may in fact be He3?).

  • (I can't see why they never tried D instead of H; it would probably raise their COP by a factor of 2.)


    They might well be using deuterium, but using the fact that it is a hydrogen isotope to name it thus. This is a very common thing to do in the LENR business. Rossi IMHO might well be using deuterium - nobody really knows - and re-painting a gas cylinder another colour is a trivial task.

  • Since frequency has dimensions 1/T, that requires some natural time (or frequency).

    QED predicts nonlinearity (and scattering) via virtual electron-positron exchange.


    Frequency can be expressed independently of time if you use wave numbers. Wave numbers are given as oscillations per unit of path. In dense space, where we have no time, we use wave numbers and quotients of wave numbers, e.g. 3/5. This also defines energy relations.


    You did not understand my point and the experiment with vg >> c. Anything you measure = information = energy (-discrimination).


    If you study the vg >> c paper, then you will notice that people had to insert a delay line to match the faster-than-c signal again with the source signal (that is at c). This has nothing to do with your thought experiment.

    If you understood how dense mass works, then you could immediately see that mass in higher dimensions than 3D,t (e.g. SO(4)) moves faster than c at any point in space. My first assumption was that photons also follow the SO(4) orbit because it follows 3 rotations in the experiment. But this would (most likely) limit the (phase/rotation-induced) speed to 32c.


    "Virtual electron-positron exchange" is an artifact of a badly defined theory. Such an exchange would only be possible at v > c, and that's the reason they call it virtual. In reality there is never a particle exchange. What we see is an oscillation of e.g. 2 SU(2)xSU(2) coupled systems. But from a mathematical point of view it could yield the same solution if by mere luck the two coupled systems are symmetric (which they rarely are).


    There is no doubt that photons can follow/flow on open SO(4) orbits. As all mass in the same SO(4) system couples by "X", we need no prediction to say that such a coupling is nonlinear. It's just natural understanding of true physics laws.


    If you stick to 3D,t you will never understand the true laws of physics.

  • Frequency can be expressed independently of time if you use wave numbers. Wave numbers are given as oscillations per unit of path. In dense space, where we have no time, we use wave numbers and quotients of wave numbers, e.g. 3/5. This also defines energy relations.


    That merely changes the question to finding a natural length at which the transition happens. Without such a natural length (or time), "large" has no meaning. So after several iterations you are still not answering my question: what do you mean by "large"?


    In fact I showed awareness of this in my original post where I converted from Planck length to Planck time.
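    That conversion is just division by c, which is also why switching to wave
    numbers does not evade the dimensional question: any candidate natural
    length becomes a natural time by the same division. A minimal sketch
    (CODATA-style constant values assumed):

```python
import math

hbar = 1.054_571_817e-34   # J*s  (reduced Planck constant)
G    = 6.674_30e-11        # m^3 kg^-1 s^-2  (Newtonian constant)
c    = 2.997_924_58e8      # m/s  (speed of light)

l_planck = math.sqrt(hbar * G / c**3)   # natural length, ~1.6e-35 m
t_planck = l_planck / c                 # natural time,  ~5.4e-44 s

print(f"Planck length: {l_planck:.3e} m")
print(f"Planck time:   {t_planck:.3e} s")
```

    The same factor of c turns a wave number into a frequency, so quoting wave
    numbers instead of frequencies merely relabels the missing natural scale.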


    You did not understand my point and the experiment with vg >> c. Anything you measure = information = energy (-discrimination).


    If you study the vg >> c paper, then you will notice that people had to insert a delay line to match the faster-than-c signal again with the source signal (that is at c). This has nothing to do with your thought experiment.

    If you understood how dense mass works, then you could immediately see that mass in higher dimensions than 3D,t (e.g. SO(4)) moves faster than c at any point in space. My first assumption was that photons also follow the SO(4) orbit because it follows 3 rotations in the experiment. But this would (most likely) limit the (phase/rotation-induced) speed to 32c.


    W: I think I cannot help you further. I understand fully that you believe that an experiment showing an e-m wave packet with group velocity > c (as is predicted by classical wave theory) implies that some physical quantity moves FTL. The issue here is that e-m wave packets created by interference of "sculpted" time-varying-phase wave sources do not correspond to single photons. The interference is created from a complex time-varying source that can only be represented in free space as multiple photons.


    Similarly with electrons: you could in principle create an electron pulse moving FTL if you could create time-varying-phase wavelike (non-localised) electrons and let them interfere. In that case the actual constituent electrons would not be moving FTL, even though the multi-electron pulse itself does.


    This is an old "paradox" which has been refuted many times and comes from a misunderstanding of what group velocity means. It is quite subtle when combined with QM wave packets, which can represent single particles that are localised. That however does not mean that all places where wave amplitude peaks correspond to single particles (obviously, since waves can interfere). My analogies are relevant and will help most people (but, it seems, not you) to understand why this is, as will the discussion I linked.
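    The envelope-vs-constituents point can be illustrated numerically. A
    sketch (not any specific experiment's setup): superpose plane waves with
    an artificial linear dispersion ω(k) = k₀c + v_g(k − k₀), where v_g = 3c
    is chosen by hand, as can occur for a narrow band in an anomalously
    dispersive medium. The envelope peak moves at 3c even though every
    constituent plane wave is perfectly ordinary:

```python
import numpy as np

# Units where c = 1; vg = 3c is the deliberately superluminal group velocity.
c, k0, vg, sig = 1.0, 40.0, 3.0, 4.0

x  = np.linspace(-10.0, 40.0, 5001)               # spatial grid
ks = np.linspace(k0 - 3 * sig, k0 + 3 * sig, 300) # wavenumber samples
amp = np.exp(-((ks - k0) ** 2) / (2 * sig**2))    # Gaussian spectrum

def packet(t):
    """Field at time t: sum over k of amp(k) * exp(i(k x - omega(k) t))."""
    omega = k0 * c + vg * (ks - k0)
    phase = np.exp(1j * (ks[:, None] * x[None, :] - omega[:, None] * t))
    return (amp[:, None] * phase).sum(axis=0)

t0, t1 = 0.0, 10.0
peak0 = x[np.argmax(np.abs(packet(t0)))]   # envelope peak at t0
peak1 = x[np.argmax(np.abs(packet(t1)))]   # envelope peak at t1
speed = (peak1 - peak0) / (t1 - t0)
print(f"envelope peak moves at {speed:.2f} c")
```

    No single Fourier component (and no signal front) moves faster than c;
    only the interference pattern does. The same algebra with electron
    wavefunctions underlies the electron remark above.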


    Foundations are important: so if new ideas are based on basic misconceptions they cannot be trusted. That is the case for your posts here in the specific ways that I've highlighted.

    You would make surer progress if you listened to criticism and restated your ideas: I'm not claiming these issues necessarily invalidate your work. But for as long as you persist in claiming things that are just untrue - like that e-m group velocity > c implies FTL movement of some physical object, or that "large" frequency is well defined - no-one with a physics background will be able to follow your work.

  • You would make surer progress if you listened to criticism and restated your ideas


    Now where is that restatement of that 6-significant-figure precision for QED/QCD

    that THHuxleynew finds in Durr et al's 2015 arxiv paper?

    The problem is that THHuxleynew has entanglement with listening to criticism,

    and that QED/QCD fails the six-figure precision test for nuclear modelling.


    Not just QM's QED/QCD, to be fair...

    Mills' GUTCP, Magnitskii's Ether, a few others.


    It might be good to go beyond the limits of 3D,t when dealing with the nucleus.

    I'm thinking of publishing an article... Perhaps Bruce-H can help.

    QED is not QED, RIP QED 2020...

  • QM wave packets, which can represent single particles that are localised.


    There is no such reality: single particles consist of a rotating flux of em-mass. QM is just a simplification of physical facts. All known particles can be represented by relativistic mass and perturbative mass, something that has nothing in common with a single wave!!

    The correct SO(4) modelling of the electron/proton pair fully reproduces the experiments, whereas QM totally fails at the nuclear/close-to-nuclear orbit level.


    QM at the nuclear level is just mathematical fantasy... QM wave/particle only "exist" (make sense) in the limits of a pure Coulomb gauge and disappearing internal magnetic forces.

    That merely changes the question to finding a natural length at which the transition happens. Without such a natural length (or time), "large" has no meaning. So after several iterations you are still not answering my question: what do you mean by "large"?


    This is simple to answer and is already given by information theory. For particles, time disappears at distances smaller than the de Broglie radius. I showed that this can be proven by simple math, as the first step of "massification" (fusion) is getting rid of the de Broglie potential!