Einstein was right? QM is ??


  • That depends on whether you are specifying the mass or the mass difference - I was giving the figure for the mass accuracy.


    Perhaps not fair - but no more so than Mills when he specifies 11-digit accuracy for the AMM!


    THH



  • Yes Stefan - I actually meant to delete that because I redid everything and agreed with him. He is not claiming more accuracy than is warranted for his data. However, his value, which fits the old data very well, is way out for the new data.


    As far as QED goes, the issue is this. It is tough doing the calculations for more accurate theoretical values: you need lots of computer time (for the Monte Carlo simulations), and all those thousands of Feynman loop diagrams are easy to get wrong - in fact I referenced a 2017 correction to a previous calculation, where an error was discovered in one of the loop diagrams. So people tend to do better (more complex, longer-compute-time Monte Carlo) runs when there is some point to it, because the extra precision can be checked against experiment.


    One thing you might consider is that the errors - either statistical or due to not considering higher-order terms - can all be bounded, and these bounds are calculated and published together with the methodology (which allows checking of the error bounds as well as of the values).
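    To make the "statistical errors can be bounded" point concrete, here is a toy Monte Carlo sketch (my own illustration with an arbitrary one-dimensional integrand - not the actual QED loop integration): the standard error of the mean is the bound that shrinks as you spend more compute on the run.

    import math
    import random

    def mc_integrate(f, n_samples, seed=0):
        """Estimate the integral of f over [0, 1] by plain Monte Carlo.

        Returns (estimate, standard_error); the standard error is the
        statistical error bound reported alongside the value.
        """
        rng = random.Random(seed)
        values = [f(rng.random()) for _ in range(n_samples)]
        mean = sum(values) / n_samples
        variance = sum((v - mean) ** 2 for v in values) / (n_samples - 1)
        return mean, math.sqrt(variance / n_samples)

    # Toy integrand standing in for a (vastly more complicated) loop integral.
    f = lambda x: 1.0 / (1.0 + x * x)   # exact integral over [0, 1] is pi/4

    for n in (10_000, 1_000_000):
        est, err = mc_integrate(f, n)
        print(f"n={n:>9}: {est:.6f} +- {err:.6f}  (exact {math.pi / 4:.6f})")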


    Why is fudging not possible? Because:

    (a) different groups use different methods and get the same answer - or if not they debug

    (b) answers for different quantities are all compared with experiment. Even though all depend on alpha you can compare two different values to check the theory.

    (c) this stuff is done by multiple independent people who check each other's methodology. There is no way that fudge factors can survive for long, because the method must be justified, and finding errors in other people's work is very well rewarded in the academic system. As I've said, you will find quite a lot of people publishing "there was an error, we have fixed it" papers. And you will also find "we have got this result but it does not fit experiment - we are not sure why" published. The "not sure why" leads to further investigation, and either errors are found or some confounding phenomenon was not considered (well, that is also an error of course).


    Compare that with Mills' stuff, where I have very little confidence that it is not fudged at the level of choosing from a variety of theories until the correct value drops out.


    Also, with Mills' stuff, remember that to first order his semi-classical approach is provably identical to QED (that quite often happens). It may be that at second order it is also identical to QED - though I would need to do a lot of work to see whether that is true. I would not expect it to model any of the electroweak or hadronic terms in the QFT expansion - but those are pretty small. Maybe it is correctly modelling 1st and 2nd order QED (both of those are analytical in QED). Or maybe not - I have spent enough time on this for now and have lost the will to check the correspondence with the second-order QED terms.


    So I see little merit in the matching of Mills' calculation for this one well-investigated number, and a clear red flag in the difference from the more recent, more accurate values. For other numbers, again, I think any match can be explained by numerology (choosing hand-waving reasons for coefficients to retrofit a close match) and by analytical equivalence between QED and semi-classical models at low order.


    I'd like to know whether Mills has generated any of these results demonstrably before the experimental results were known. That would not rule out semi-classical equivalence, but would rule out fudge factor numerology!


    I would of course be a bit more charitable if Mills' theory were not incompatible in many other ways (e.g. entanglement!). There is obviously room for cleverer ways to do those Feynman loop diagrams, as we have sort of started to see with the Amplituhedron! A cleverer way to do them might lead to a much more enlightening and less complex way of visualising what is going on.


  • What you explain can be easily solved in a hidden variable theory. If you have a uniform distribution on

    ((1,1),(1,1))

    ((1,0),(1,0)),

    ((0,1),(0,1)),

    ((0,0),(0,0)).


    then you will get what you described, with a hidden variable, i.e. probability theory. But the crux is that quantum theory mixes the wave functions in such a way that you can use Bell's theorem to show that a probability theory cannot reproduce the quantum mechanical one. QM is _not_ probability theory but another system of combining the fields into values.


    See Wikipedia for the derivation of Bell's theorem. There one deduces the observed correlation through a simple QM calculation from the wave functions. I don't see how this derivation adds anything new to Schrödinger with initial conditions - the derivation uses the wave function at time t. So you are saying that there are now new, more complex measurements that add to this, so that Schrödinger is not enough. That would be remarkable.
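    A minimal numerical sketch of this point (my own toy example; the CHSH settings and the {0,1} -> {-1,+1} mapping are my choices): the four-state hidden-variable table above reproduces the perfect correlation when both sides use the same setting, but no local hidden-variable assignment can push the CHSH combination past 2, whereas the QM singlet correlation E(a, b) = -cos(a - b) reaches 2*sqrt(2).

    import math
    from itertools import product

    # Hidden-variable model from the table above: the source emits (x, y),
    # uniform over {0,1}^2, and BOTH particles carry the same (x, y).
    # Setting 0 reads x, setting 1 reads y, on either side.
    def lhv_correlation(alice_setting, bob_setting):
        total = 0.0
        for x, y in product((0, 1), repeat=2):        # uniform hidden variable
            a = (x, y)[alice_setting]                 # predetermined outcomes
            b = (x, y)[bob_setting]
            total += (2 * a - 1) * (2 * b - 1)        # map {0,1} -> {-1,+1}
        return total / 4

    def chsh(corr):
        """CHSH combination |E(0,0) + E(0,1) + E(1,0) - E(1,1)|."""
        return abs(corr(0, 0) + corr(0, 1) + corr(1, 0) - corr(1, 1))

    print("LHV: E(same setting)   =", lhv_correlation(0, 0))  # perfect correlation
    print("LHV: CHSH              =", chsh(lhv_correlation))

    # Bell/CHSH bound for ANY deterministic local assignment: at most 2.
    best = max(abs(a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1)
               for a0, a1, b0, b1 in product((-1, 1), repeat=4))
    print("LHV: max possible CHSH =", best)                    # = 2

    # QM singlet: E(a, b) = -cos(a - b), at the standard CHSH angles.
    angles_a = (0.0, math.pi / 2)
    angles_b = (math.pi / 4, -math.pi / 4)
    qm_correlation = lambda i, j: -math.cos(angles_a[i] - angles_b[j])
    print("QM:  CHSH              =", chsh(qm_correlation))    # 2*sqrt(2) ~ 2.83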

  • Now let's go back to entanglement. What specifically is wrong and ignorant... clearly both wrong and ignorant, as you have stated, about Mills' explanation of the entanglement phenomenon? Perhaps you would be kind enough to explain that in the context of the 1998 Durr et al. findings.


    I have never looked at the 1998 Durr et al. findings. The key entanglement experiments are different from this one.


    I'm a bit resistant to looking at it because it may not be an experiment that shows non-local (non-classical) entanglement. It would be more efficient for you to look at one of the many experiments that do.


    Wrong and ignorant because if you do even a simple Google search you will find lots of good entanglement experiments which progressively close all possible loopholes (they are pretty implausible, but still), and the loophole that Mills suggests does not survive complementary measurements where the measurement chosen is statistically independent of the way the wave functions are prepared (which can obviously be enforced pretty easily).


    I've noticed, with the HUP comment, that Mills takes an experiment giving more detail on a QM issue (in this case the HUP inequality) and interprets it completely wrongly, out of context - as I've stated at length earlier - and I hope you would agree.


    I expect the same for the Durr experiment he quotes, which means looking at it would be time wasted, given that my point is proven by many other experiments.


    If you really, really think it is important to do this, then when I have regained the will to live I will look at the details of 1998 Durr et al.

  • That depends on whether you are specifying the mass or the mass difference - I was giving the figure for the mass accuracy.

    Which was rather different from Stephan Durr's figure of 1.51 +- 0.28.


    Perhaps you would like to discuss that with Durr.


    The same conclusion... NOT 6-digit precision, as you maintained on your first skim-reading,


    and a pretty crude precision after all those seven years of teraflop supercomputing, and at least five grants from the EU taxpayer...

    There are a few other models with better precision which don't require supercomputing.


    Perhaps there is a better use for those supercomputers...



  • Stefan - I agree hidden variable theories can solve this - though they get very messy because every conjugate observation requires new hidden variables.


    But they must be non-local hidden variables. That is not compatible with Mills' semi-classical stuff.


    It is compatible e.g. with pilot-wave interpretations (or Cramer, etc.) - but those are inherently non-local too. I don't like the pilot-wave/Cramer ones much because of the difficulty in working out when the transactions actually happen - I'm not sure it can be well defined - but maybe it can, and I don't mind them too much. Since there is no difference in experimental predictions between different interpretations, as long as (unlike Copenhagen) they are well defined, it does not much matter...


    More later.

  • when I have regained the will to live

    The problem with accusing Mills of being "wrong and ignorant" is that it is going to require a significant commitment of time and energy to support this statement. Thanks for that effort on the g, but you have not shown QED wrong and ignorant..


    Perhaps you could consider in detail Mills' derivation of the g-factor. GUTCP 2018 appears to be updated a bit in the g-factor section.

  • If you are going to accuse Mills of "wrong and ignorant", please take the time to read his explanation.


    OK - well I will, I guess, have to do that: however, you should then please agree with me that the Mills statement you highlighted about the HUP is wrong (because it says the HUP is proved wrong, when in fact a non-proven and inexact relation is proved wrong - a relation which was previously known to be theoretically wrong from QM, and where a paper one year later shows that a tighter relationship can be formulated which is provably correct) and ignorant (of the references I highlighted, which explain why it is wrong). See my previous post for all the needed refs.


    This is not splitting hairs: Mills concludes from that statement that QM is experimentally proven wrong, whereas in fact it shows no such thing - quite the reverse.


    You could just accept this: or read my refs which prove this and then accept it.

  • Rather it shows that a never-proven, informal, commonly used inequality is inexact, as was known previously (2003), proven from QM theory. So, far from disproving the QM HUP, this is in line with what has been proven.

    How does QED do? A 2017 improved QED 10 loop calculation is


    As mentioned earlier (many times), QM is not and never will be a fundamental theory, as its original gauge (Coulomb) is now in complete discrepancy with reality! The QED paper THH mentions (now for the tenth time) is simply formulated mathematical fraud, as the weights they use have 5 digits on average, which is not enough to claim 10-digit precision. Such an approach only works if you have e.g. one million measurements and take an average. Summing up terms with 5-digit precision to claim a 10-digit result is simply mathematical nonsense. This is typical for mathematicians who never learned the rules of computation. E.g. the minimal error in the sum of 10000 values given with 5 digits is at least 0.00001 * log2(10000), which is about 14 times larger than the error of a single value. (This error is just for one summation!) But some formulas use values with less than 2 digits of precision...
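    For what it's worth, how rounding errors behave in a long sum is easy to check numerically. Here is a small simulation (the values, their magnitude, and the 5-decimal-digit rounding are my own illustrative assumptions, not the actual QED weights):

    import random

    def sum_rounding_error(n_values, digits=5, trials=200, seed=1):
        """Compare a sum of values rounded to `digits` decimals with the exact sum.

        Each rounding error is at most 0.5 * 10**-digits; in a long sum they
        partially cancel, so the typical total error grows roughly like
        sqrt(n) times the per-value error, while the worst case grows like n.
        """
        rng = random.Random(seed)
        errors = []
        for _ in range(trials):
            values = [rng.random() for _ in range(n_values)]
            exact = sum(values)
            rounded = sum(round(v, digits) for v in values)
            errors.append(abs(rounded - exact))
        return sum(errors) / trials, max(errors)

    mean_err, max_err = sum_rounding_error(10_000)
    print(f"mean |error| = {mean_err:.2e}, worst |error| = {max_err:.2e}")
    print(f"per-value rounding error is at most {0.5e-5:.1e}")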


    For any theory to be fundamental, its base form must perfectly match reality! Currently QM/QED/QCD have no working gauge. Currently only SO(4) models can do this in certain cases. QM is just a good engineering tool for non-magnetic interactions.


    Entanglement:


    On the other side, Mills does not give much more insight into what entanglement really is. The simplest form of entanglement everybody knows is the spin-pairing of e.g. the 4-He electrons. This effect can only be explained by SO(4) physics. Mills' approach to the 4-He energies, which I studied in deep detail, is much better than QM, but he uses some cheating for the reduced mass in a place where there is no (simple) reduced mass...

    What we know today is that SO(4) physics gives the absolutely exact ionization energy of hydrogen/deuterium and an almost exact one for 4-He. The problem with 4-He is that the nucleus itself is not totally symmetric; it also has a quadrupole moment. Thus the perturbation of the SO(4) orbits must be adjusted by the weights that cause the quadrupole effect. Doing this is just boring work I might do some time.


    The electron g-factor Mills calculates is the best you can do with 3D,t Maxwell physics together with 4D ad hoc rules for mass conversion. The Mills value can be corrected by the 4D perturbation to get 2 more digits, which is already beyond the precision we can trust.


    The problem with Mills is that he never gets the absolutely correct value and explains this away with wrong arguments, like alpha only being known to low precision. But the good, or let's say excellent, physically correct explanation of the electron g-factor outweighs such deficits.

    Physics will only make progress if it first looks at Mills and then switches to SO(4) space with the two "X"-coupled "2-potential" approach.


    Now the most serious problem: starting in 2005, or at the latest 2014, NIST itself has more and more fudged the values of the famous "table", just to hide discrepancies within the SM... I deleted the latest NIST table to avoid making wrong use of it. Please don't use the new alpha, 4-He/alpha-particle mass values, or the 4-He charge radius, as there is no true experimental basis for the corrections made.

  • I presume that everyone understands that Mills' theory cannot be compatible with the double-slit experiment (despite all its efforts), because the flabelliform shapes of electron orbitals (which Mills' theory denies) are just a spherical analogy of the flabelliform patterns of the double-slit experiment - and they even result from the very same mechanism (the diffraction of the pilot wave). That doesn't mean that the non-radiation condition doesn't apply, or that some aspects of Mills' theory cannot be relevant for overunity phenomena - but Randell Mills seems to want to have non-radiating orbitals everywhere, though it's not apparent why exactly. The existence of the hydrino requires having them only for sub-quantum energy levels, which quantum mechanics cannot describe anyway.


    Another question is why Mills needs to have spherical orbitals and non-radiating conditions at all, once he assumes that the hydrino is the most stable form of hydrogen. The non-radiating condition applies to forbidden energy levels, which are metastable. But the hydrino is supposed to be stable - or its production wouldn't release energy. In other words, Mills' theory looks confused to me and is full of logical inconsistencies. The mathematical results of a theory are important, but we shouldn't miss the forest for the trees.

  • Regarding the formal results of Mills' theory, many of them apparently work - but we shouldn't forget that, say, epicycle theory also worked well for predictions of eclipses and conjunctions, even though it was based on quite the opposite (actually topologically inverted) perspective. A similar holographic duality exists in physics, for example between string theory and loop quantum gravity, which look seemingly very different (the former is based on vibrating strings like quantum mechanical orbitals, whereas the latter is based on bubbles of spin foam, in a similar way to the orbitals of Mills' theory). So Mills' theory may be merely holographically dual to quantum mechanics (i.e. topologically inverted in momentum space-time): once we try to imagine how quantum mechanics would look for subquantum states, it would converge to spherical Rydberg orbitals - just much smaller ones than the real Rydberg orbitals. So in many aspects Mills' theory can still provide correct predictions, just in temporal momentum space instead of the real one, as it has the sign of the time dimension inverted. It's a sort of epicycle model of quantum mechanics.

  • Aether Wave Theory, based on the dense (luminiferous) aether model, handles the vacuum as a sort of dense gas or supercritical fluid, inside which particles wiggle like pollen grains in water, being shaken and kept in never-ending motion by vacuum fluctuations. That means that if we were to constrain the motion of some particle into a smaller volume, it would resist its fate by more intensive wiggling - the so-called degeneracy pressure, according to the uncertainty principle of quantum mechanics. This explains why we cannot have subquantum states (even though the energy of the electron would undoubtedly decrease as it approaches the atomic nucleus, due to the Coulomb force): the vacuum fluctuations don't allow the shrinking of electron orbitals and would immediately expand them back like an elastic bubble or balloon.


    In this aspect the dense aether model essentially contradicts Mills' theory, as Randell Mills believes that with shrinking size of the electron orbital its energy only increases due to the Coulomb force, so that hydrino formation would lead to a release of energy. From the dense aether model it follows instead that, even without an atomic nucleus at the center, the electron orbital would be immediately expanded back from a subquantum state to its fundamental quantum state. So the energy of the Coulomb field would be compensated by the degeneracy pressure nearly completely, and the formation of the hydrino (even if it could somehow be stabilized, for example by the Gauss non-radiating condition) would actually be endothermic.


    This is also the reason why I don't believe that hydrinos form dark matter: they would already have decayed back into hydrogen.

  • It is interesting to note that Mills' device of modelling an entangled system results in a solution that can't be reproduced with probability theory and hidden variables, due to Bell's inequality. This is expected, because he does not use any probability theory or hidden variables.

  • From: entanglement


    "

    According to some interpretations of quantum mechanics, the effect of one measurement occurs instantly. Other interpretations which don't recognize wavefunction collapse dispute that there is any "effect" at all. However, all interpretations agree that entanglement produces correlation between the measurements and that the mutual information between the entangled particles can be exploited, but that any transmission of information at faster-than-light speeds is impossible.

    "


    which is what I've tried to say all along.

  • A useful lecture on the QED result for the AMM.


    zeta(3) = 1.20205... (it is the sum of 1/n^3 for n = 1..infinity)


    Evaluating this we get -0.04863637 as the alpha^2 coefficient (I think).


    The alpha^3 coefficient (see below) is approximately +0.03813, and so makes a further correction about 1/100th of the value of the alpha^2 correction.


    Let us compare this with Mills' coefficients:




    -(4/3)(1/2pi)^2 = -0.033773 (alpha^2 coefficient)


    (2/3)(1/2pi) = +0.106 (alpha^3 coefficient)


    ----------



    The alpha^3 coefficient is also analytical. From the Laporta and Remiddi 1996 reference:


    https://arxiv.org/abs/hep-ph/9602417


    We have evaluated in closed analytical form the contribution of the three-loop non-planar `triple-cross' diagrams contributing to the electron (g-2) in QED; its value, omitting the already known infrared divergent part, is

    a_e(3-cross) = 1/2 pi^2 Z(3) - 55/12 Z(5) - 16/135 pi^4
    + 32/3 (a4 + 1/24 ln(2)^4) + 14/9 pi^2 ln(2)^2
    - 1/3 Z(3) + 23/3 pi^2 ln(2) - 47/9 pi^2 - 113/48.
    This completes the analytical evaluation of the (g-2) at order alpha^3, giving

    a_e(3-loop) = (alpha/pi)^3 { 83/72 pi^2 Z(3) - 215/24 Z(5)
    + 100/3 [( a4 + 1/24 ln(2)^4 ) - 1/24 pi^2 ln(2)^2 ]
    - 239/2160 pi^4 + 139/18 Z(3) - 298/9 pi^2 ln(2)
    + 17101/810 pi^2 + 28259/5184 } = (alpha/pi)^3 (1.181241456...).
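    As a sanity check, the quoted closed form can be evaluated numerically. A short script (mine; it assumes "a4" denotes the standard polylogarithm value Li4(1/2) = sum over n of 1/(2^n n^4), as is usual in these papers) reproduces the quoted 1.181241456 and the zeta(3) value above, and prints Mills' two coefficients for comparison:

    import math

    def zeta(s, terms=200_000):
        """Direct partial sum of the zeta series - plenty for the digits needed here."""
        return sum(1.0 / n**s for n in range(1, terms + 1))

    # a4 = Li4(1/2) = sum_{n>=1} 1 / (2^n n^4)   (assumed meaning of "a4" above)
    a4 = sum(1.0 / (2**n * n**4) for n in range(1, 200))

    pi, ln2 = math.pi, math.log(2.0)
    z3, z5 = zeta(3), zeta(5)

    # Laporta & Remiddi 1996: full 3-loop contribution to a_e = (g-2)/2,
    # i.e. the coefficient of (alpha/pi)^3, as quoted above.
    c3 = (83/72 * pi**2 * z3 - 215/24 * z5
          + 100/3 * ((a4 + ln2**4 / 24) - pi**2 * ln2**2 / 24)
          - 239/2160 * pi**4 + 139/18 * z3 - 298/9 * pi**2 * ln2
          + 17101/810 * pi**2 + 28259/5184)

    print(f"zeta(3)            = {z3:.6f}      (quoted: 1.20205...)")
    print(f"3-loop coefficient = {c3:.9f}  (quoted: 1.181241456)")

    # Mills' coefficients quoted above, for comparison:
    print(f"-(4/3)(1/2pi)^2 = {-(4/3) * (1 / (2 * pi))**2:.6f}")
    print(f" (2/3)(1/2pi)   = {(2/3) * (1 / (2 * pi)):.6f}")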


  • Repeating statements without reason is not a logical argument.


    So: the Mills passage you quote contains two statements. The first was about the Durr et al. double-slit experiment and its interpretation. I've agreed not to dismiss that until I've read (in detail) both Durr's whole paper and Mills' whole comment on it.


    The second statement is Mills saying that QM (and QED, the SM, etc.) is proven false because the HUP (Heisenberg Uncertainty Principle) on which it rests has been disproven. Mills gives as evidence of this a 2012 paper by Rozema et al.


    I hold by my statement that this specific statement (made by Mills) is both wrong and ignorant.


    Specifically, it is wrong because the 2012 paper does not disprove the HUP - merely an unproven and inexact inequality that is commonly taught. As further evidence I point out that this experimental "disproof" was predicted from QM theoretical work some 10 years earlier. Finally, to show that this is a lacuna - not some real problem with the HUP - I cited a 2013 paper, commenting on the 2012 result, which derives a proof of a tighter inequality that formalises the HUP. This inequality is consistent with all experiments.


    It is ignorant because, when making claims that a major theory in active use and development, with many experimental successes, is wrong, it is usual to look at the literature before jumping to judgement. In this case the abstract of the 2012 paper that Mills cites makes it pretty clear that this result does not contradict the HUP, nor QM theory, only an inexact but commonly taught inequality. Furthermore, anyone non-ignorant would, before making such a strong claim, check the citations of that paper, as I did. Those contain the correct and proven inequality corresponding to the HUP.


    Mills uses this result as evidence that the whole edifice of QFT is unsound. That usage is clearly wrong: so this is not a trivial mistake of no significance.


    You will find details in #23.


    I'd welcome your rational rebuttal of this argument. "I see no sign of wrong and ignorant" is quite properly asking me to provide my evidence from the quoted passage; however, I did this previously in #23 and have repeated the salient point here. If you persist with "I see no sign" without a rebuttal of my argument, then I think it is fair to say that you are ignoring clear evidence.


    You might feel it is unfair to accuse an author in such extreme terms - wrong and ignorant. Normally I'd agree. In this case the mistake is so obvious, and the conclusion drawn from it so extravagant (that accepted and highly successful physics is fundamentally wrong), that I think it is entirely fair.



  • Stefan - I don't think anyone has said anything contrary to that.


    However - it does not quite get at the nub of the matter - which is the spooky way that non-locality surfaces. True, it (provably) can't be used for FTL signalling. But equally, it does exist.


    The way to think of this is that two entangled photons - even if thousands of light-years apart - behave as a single object when measured. Complementarity then means that the apparently random results from measurements at far-distant locations can be correlated.


    This spookiness is subtle and emerges directly from the maths of QM. It shows something deep about the structure of the universe - that non-local things are everywhere. It is consistent with the modern (Van Raamsdonk et al.) ideas that the structure of spacetime (and the speed of light as an information propagation limit) emerges from entanglement as a fundamental process.


    So we have here a 70-year-old issue that tantalisingly hints at something new.


    How you interpret the spookiness is up for grabs: the different interpretations cannot be experimentally resolved (there might be some possibilities, with some interpretations, but none to my knowledge have panned out).


    There is a meta-observation - which many don't like - that might push you towards a many-worlds type answer. That is, if it turns out to be vanishingly unlikely that the universe could evolve to generate complex structures and life, then the anthropic principle makes that quite OK, as long as we have many worlds.


    On the issue of wavefunction collapse: it is non-measurable, because decoherence produces the same results. The fact that measured microscopic systems can be processed coherently in a way that restores the "multiple results" speaks against any form of wavefunction collapse, because there is in principle nothing to stop complex processing that restores coherence after a measurement. Thus I don't like anything that requires collapse. But pilot-wave etc. ideas can be compatible with a many-worlds, no-collapse interpretation.


    It is a mark against Mills that he seems not to have engaged with this whole issue: the physical world certainly does engage with it!
