Posts by THHuxleynew

RB - the issue here was the calculation of the electron anomalous magnetic moment.


Why is that more relevant? It is calculated from QED, a simpler and stunningly accurate theory that Mills dismisses as wrong. Mills has a closed-form determination of it from alpha; QED has a closed-form determination up to alpha^3, with Monte Carlo simulations for the higher-order terms.


    Although the correct (QED) determination does have electroweak (dependent on muon mass) and QCD (hadronic) components, which complicate the pure QED calculation, these are relatively small.


For this best-calculated, best-tested value, QED is consistent within experimental error; Mills' theory is wrong by 200X the standard deviation.


    You might think Mills would want to propose some higher order correction, and refine his theory? But it seems he is not able to do this, and instead comments on the good fit using 1987 data.


It is not proper behaviour for a scientist. Although, of course, Mills has a commercial interest in the matter, wants to promote investment in his company, and is not in any conventional sense a scientist (although he does have some experimental work published).


    QED 4 loop confabulation: https://arxiv.org/pdf/1704.06996


This is a guy who is giving 100-digit accuracy for one component of the calculation (the QED 4-loop one). It is, I guess, good to have an exact numerical solution for this, but at even 11 sig figs the QCD and electroweak components become significant, and practically I can't see the merits of this over other numerical techniques, in particular efficient Monte Carlo based techniques.


    Still, it is a lot of effort...


Mills' vague accusations about QM


I've read Box I.1 in Mills' 1000 page book. He makes major charges against conventional theories in one or two sentences with no (or almost no) references and no details. It is difficult to take this seriously. Each of these one-liners would, if real, be the subject of 10s or even 100s of papers arguing and clarifying it. In fact there are a lot of papers that do just that on many topics within QM, arguing corners of it. Mills does not engage with this, and dismisses existing work without evidence.


RB - you say some of these one-liners seem interesting to you. Perhaps then you could do the literature survey that Mills avoids, show where the existing theory as described in research papers is wrong, and give full reasons? Or find someone else who has done that?


    Otherwise this type of sound bite science is in my view contemptible, because it makes serious and weighty accusations without evidence or checking.


Let us do this for RB's interesting criticism 1.


It appears to be based on https://link.springer.com/article/10.1023/A:1004605626054

    Journal of Low Temperature Physics

August 2000, Volume 120, Issue 3–4, pp 173–204

    On the Fission of Elementary Particles and the Evidence for Fractional Electrons in Liquid Helium

H. J. Maris

    We consider the possibility that as a result of interactions between an elementary particle and a suitably designed classical system, the particle may be divided into two or more pieces that act as though they are fractions of the original particle. We work out in detail the mechanics of this process for an electron interacting with liquid helium. It is known that when an electron is injected into liquid helium, the lowest energy configuration is with the electron localized in a 1s state inside a spherical cavity from which helium atoms are excluded. These electron bubbles have been studied in many experiments. We show that if the electron is optically excited from the 1s to the 1p state, the bubble wall will be set into motion, and that the inertia of the liquid surrounding the bubble can be sufficient to lead to the break-up of the bubble into two pieces. We call the electron fragments “electrinos.” We then show that there is a substantial amount of experimental data in the published literature that gives support to these theoretical ideas. The electrino bubble theory provides a natural explanation for the photoconductivity experiments of Northby, Zipfel, Sanders, Grimes and Adams, and possibly also the ionic mobility measurements of Ihas, Sanders, Eden and McClintock. Previously, these experimental results have not had a satisfactory explanation. In a final section, we describe some further experiments that could test our theory and consider the broader implications of these results on fractional particles.


    Let us do a citation check in google scholar:


2001 [Mills has a citation in which he critiques QM, but does not contribute to the "fractional charge" mystery]

    The Schrödinger equation was originally postulated in 1926 as having a solution of the one electron atom. It gives the principal energy levels of the hydrogen atom as eigenvalues of eigenfunction solutions of the Laguerre differential equation. But, as the principal quantum number n⪢1, the eigenfunctions become nonsensical. Despite its wide acceptance, on deeper inspection, the Schrödinger solution is plagued with many failings as well as difficulties in terms of a physical interpretation that have caused it to remain controversial since its inception...

2001 Jackiw, Rebbi, Schrieffer

    We argue that electrons in liquid helium bubbles are not fractional, they are in a superposed state.


    2001 Rae, Vinen

    It has recently been suggested that a bubble in liquid helium containing an electron could be excited into a state where the electron is divided between two smaller half bubbles, and that these “electrinos” would have increased mobility. This proposal is discussed critically, and it is concluded that, if such a state were to form, it would quickly collapse into an incoherent quantum superposition of two separated ground-state bubbles. All the measurable properties of this state are identical with those of a single bubble.


    2003 Maris

    We present calculations of a number of properties of electron bubbles in liquid helium. The size and shape of bubbles containing electrons in different quantum states is determined based on a simplified model. We then find how the geometry of these bubbles changes with the applied pressure. The radiative lifetime of bubbles with electrons in excited states is calculated. Finally, we use a quantum Monte Carlo method to determine the properties of a bubble containing two electrons. We show that this object is unstable against fission.


    2008 Moroshkin, Hofer, Weiss

    The studies of defects formed by impurity particles (atoms, molecules, exciplexes, clusters, free electrons, and positive ions) embedded in liquid and solid 4He are reviewed. The properties of free electrons and neutral particles in condensed helium are described by the electron (atomic) bubble model, whereas for the positive ions a snowball structure is considered. We compare the properties of the defects in condensed helium with those of metal atoms isolated in heavier rare gas matrices.


    2008 Maris

    An electron injected into liquid helium forces open a small cavity that is free of helium atoms. This object is referred to as an electron bubble, and has been studied experimentally and theoretically for many years. At first sight, it would appear that because helium atoms have such a simple electronic structure and are so chemically inert, it should be very easy to understand the properties of these electron bubbles. However, it turns out that while for some properties theory and experiment are in excellent quantitative agreement, there are other experiments for which there is currently no understanding at all.



Maris 2003 looks like a good one to dig into: Maris posed the initial anomaly and went on working on it.


When an electron is injected into liquid helium, it forces open a cavity free of helium atoms, referred to as an electron bubble. In a recent paper1 (referred to as I), we considered what happens when an electron bubble is illuminated by light. If the electron is excited from the lowest energy 1S state of the initially spherical bubble to the 1P state,2 the bubble shape will change. At high temperatures, the liquid contains many thermal excitations (phonons and rotons) and the damping of the motion of the bubble wall is large. One can therefore expect that the bubble will slowly relax to a new equilibrium shape. It was shown that this equilibrium shape resembles a peanut. However, at lower temperatures, the liquid contains few excitations and so the damping of the bubble wall becomes small. As a result, the bubble will change shape rapidly and after the equilibrium shape has been reached, the liquid surrounding the bubble will still be in rapid motion. The inertia associated with the liquid may then be sufficiently large to cause the waist of the peanut to shrink to zero, thus dividing the bubble into two parts. What happens after this point was not definitely established, and is under experimental investigation. Elser3 has argued that before the division of the bubbles takes place the wave function of the electron will cease to deform adiabatically as the bubble shape develops and that, as a result, all of the wave function will end up in one of the parts. This part would then expand and become a conventional 1S electron bubble and the other part, containing no wave function, would collapse. A different argument has been presented by Rae and Vinen.4 They claim that if the bubble divides into two baby bubbles each containing half of the wave function, this state would quickly collapse into an incoherent quantum superposition of two separated ground-state bubbles which would have properties no different from ordinary 1S bubbles.

When electron bubbles are introduced into helium, a space charge field is set up which drives the bubbles out of the liquid. This limits the number density of the bubbles. As a result, conventional optical studies of the bubbles are extremely difficult.5, 6 Several experiments have shown that the absorption of light results in a change in the mobility of the bubbles;7–9 the origin of this change in mobility is not clearly established. Very recently, a new experimental method for the study of the bubbles has been developed.10 In this experiment, a negative pressure is applied to the liquid. If the pressure is negative with respect to a critical pressure Pc, an electron bubble in the liquid will become unstable and explode. The explosion pressure Pc is different for each quantum state. Thus, a measurement of the pressure required to make a bubble explode provides a means to identify the quantum state. This provides the basis for a new method to study the properties of electron bubbles in excited states.



So, basically, no fractional charge particles. Maris proposed (speculatively) that when a one-electron bubble splits, it is possible that the wave function would be stable in a coherent split between the two halves, leading to two bubbles sharing an electron!


It is an interesting idea, and Maris explored it (he is the expert on He electron bubbles). However, others pointed out that such a coherent structure would have a very short lifetime, even in liquid helium. Maris has gone on to explore lots more about these electron bubbles without finding evidence for fractional charge. One of the issues is that bubble mobility varies with electron excited state, and after the first paper he found a neat way to probe the energy state of the bubble's electron by varying the pressure and looking at when the bubble exploded. With this more powerful tool he has published a lot more on bubbles, but no more speculation about fractionally charged bubbles, because better experimental work found no evidence for them.


Maris 2008 (more He bubbles) - abstract quoted above.


    Mauracher et al 2014

    Helium droplets provide the possibility to study phenomena at the very low temperatures at which quantum mechanical effects are more pronounced and fewer quantum states have significant occupation probabilities. Understanding the migration of either positive or negative charges in liquid helium is essential to comprehend charge-induced processes in molecular systems embedded in helium droplets. Here, we report the resonant formation of excited metastable atomic and molecular helium anions in superfluid helium droplets upon electron impact. Although the molecular anion is heliophobic and migrates toward the surface of the helium droplet, the excited metastable atomic helium anion is bound within the helium droplet and exhibits high mobility. The atomic anion is shown to be responsible for the formation of molecular dopant anions upon charge transfer and thus, we clarify the nature of the previously unidentified fast exotic negative charge carrier found in bulk liquid helium.


It looks like more recent work has explained anomalies previously noted in these systems.


    Summary of Mills/RB interesting criticism 1.


This is certainly interesting work on superfluid helium systems, which can expose all sorts of weird effects. Jumping from this to fractionally charged electrons is done only by Mills. Fractionally charged bubbles after fission due to coherence, as was tentatively proposed by Maris but not supported by better later experiments, do not seem likely. Maris has continued doing this work.


Related: a bit more Google searching finds this fascinating work on splitting wave packets (with very fast time resolution, since as noted above such coherent splits cannot last long).


    https://phys.org/news/2015-05-electron.html



There is a pattern here. Science is complex, and all sorts of anomalies have multiple possible solutions. If you are Mills you leap to the solution that requires reformulating the whole of physics in a way that requires new particles and no longer correctly predicts fundamental constants or the non-local results of QM.


If you are anyone else you do real experimental work to understand the phenomena, come up with a whole load of other explanations, and eventually work out which one is right based on better empirical evidence!

It might involve reading a lot of GUTCP 2018.


GUTCP 2018 updates the previously posted GUTCP 2016 - following whose working, Mills shows a 1000X error-bound discrepancy with experiment for the electron AMM.


    GUTCP 2018 can be found here (since RB has not linked it): https://brilliantlightpower.com/book-download-and-streaming/


Let's look at the 2018 arguments on the same topic.



    Mills claims good agreement with experiment. Actually, he does not give (and should give) the error bounds in his analytic value inherited from those on his value for alpha. I'll do that for him:

    alpha-1 = 137.03603(82) (6E-6 fractional error) =>


    g/2 - 1 also has 6E-6 fractional error


    The Mills equation value is thus:

    1.001 159 652 137(6000)



    That is silly, let us write it more normally as:


    Mills: 1.001 159 652(6)

    Experiment: 1.001 159 652(0)


Indeed the Mills value coincides with experiment, but it carries an error of approx 1 part in 10^5 on ae,

so this is not a good correspondence - except that by luck Mills has got values that coincide exactly in the 9th decimal digit of ae, even though the error bounds from alpha are 6 on this digit!
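As a sanity check on that error propagation, here is a minimal Python sketch using only the numbers quoted above (to first order ae is proportional to alpha, so fractional errors carry straight across):

```python
import math

# Mills' quoted 1987-era value of the inverse fine structure constant
inv_alpha = 137.03603
inv_alpha_err = 0.00082                  # the (82) on the last two digits

frac_err = inv_alpha_err / inv_alpha     # ~6.0e-06 fractional error
print(f"fractional error in alpha: {frac_err:.1e}")

# To first order ae ~ alpha/2pi, so ae inherits alpha's fractional error
ae_first_order = (1.0 / inv_alpha) / (2.0 * math.pi)
ae_err = ae_first_order * frac_err
print(f"ae (first order) : {ae_first_order:.12f}")
print(f"error bound on ae: {ae_err:.1e}")   # ~7e-09: about 6-7 on the 9th decimal digit of ae
```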


    For his 2018 comparison, Mills chooses values for alpha and ae from 1987!




    He makes some other arguments about constant values: again using CODATA values from 1998


For an argument that relies on precise agreement with experiment this is disingenuous. Presumably he realises the lack of agreement with more accurate values, and uses this 1987 (30-year-old) data for his headline comparison for that reason, and likewise the 1998 (20-year-old) CODATA values.


    Mills makes a valid argument that some derivations of alpha come from ae and the QED theoretical value - since this is significantly more precise than experimental alpha. Using such a value to validate QED would be circular. However, other experimental values are derived independently of ae.


    I am happy to review the modern literature on this matter (unlike, it would seem, Mills).


Particularly interesting and relevant is "Measuring the fine structure constant as a test of the standard model":


    Measurements of the fine-structure constant α require methods from across subfields and are thus powerful tests of the consistency of theory and experiment in physics. Using the recoil frequency of cesium-133 atoms in a matter-wave interferometer, we recorded the most accurate measurement of the fine-structure constant to date: α = 1/137.035999046(27) at 2.0 × 10−10 accuracy. Using multiphoton interactions (Bragg diffraction and Bloch oscillations), we demonstrate the largest phase (12 million radians) of any Ramsey-Bordé interferometer and control systematic effects at a level of 0.12 part per billion. Comparison with Penning trap measurements of the electron gyromagnetic anomaly ge − 2 via the Standard Model of particle physics is now limited by the uncertainty in ge − 2; a 2.5σ tension rejects dark photons as the reason for the unexplained part of the muon’s magnetic moment at a 99% confidence level. Implications for dark-sector candidates and electron substructure may be a sign of physics beyond the Standard Model that warrants further investigation.


    This gives us α = 1/137.035999046(27) measured independently of QED and ae.


    Using this value, and the CODATA value for ae we can compute Mills' equation with confidence.


    That reference gives a useful comparison of different values for alpha:



Note (in the figure from that paper) the good correspondence between the most accurate QED ae (red) and recoil (green) measurements that directly test the QED derivation of ae.


    Against this Mills derivation can be tested.


    1.001 159 652 137 (Mills derivation from alpha-1 = 137.03603)


    correct value of alpha-1 =

    137.035999046


    error 0.000030954

    fractional error = 2.25E-7


    Change in Mills calculated ae due to error in alpha (alpha is fractionally higher than the quoted Mills value)

    0.000 000 000 259


    Modern Mills value of ae:

    0.001 159 652 396


    Best current value of ae:

    0.001 159 652 181 643 (764)


    So this is an error of 200X the uncertainty in alpha and ae, where alpha is measured independently of QED and ae.
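For transparency, a short Python sketch of that arithmetic, using only the values quoted above:

```python
mills_ae_1987  = 0.001159652137       # Mills' derivation using alpha-1 = 137.03603
inv_alpha_1987 = 137.03603
inv_alpha_now  = 137.035999046        # recoil measurement, (27) on the last digits
inv_alpha_err  = 0.000000027

# alpha is fractionally higher than the value Mills quoted
frac_shift = (inv_alpha_1987 - inv_alpha_now) / inv_alpha_now
print(f"fractional shift in alpha: {frac_shift:.3e}")     # ~2.26e-07

# ae scales linearly with alpha to first order, so shift Mills' value accordingly
mills_ae_now = mills_ae_1987 * (1.0 + frac_shift)
print(f"updated Mills ae: {mills_ae_now:.12f}")           # ~0.001159652399

best_ae     = 0.001159652181643
best_ae_err = 0.000000000000764

# Combine the alpha-propagated uncertainty with the experimental one
alpha_contrib = mills_ae_now * (inv_alpha_err / inv_alpha_now)
sigma = (alpha_contrib**2 + best_ae_err**2) ** 0.5
print(f"discrepancy / sigma: {(mills_ae_now - best_ae) / sigma:.0f}")
```

(The exact multiple depends on how the two uncertainties are combined; either way the discrepancy is a couple of hundred standard deviations.)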


It is a pity that Mills does not address this issue. If he did, he would be able to consider what assumptions in his derivations lead to inaccurate calculations (just as anyone else would do).


    THH

    You actually did not make a specific statement

    You actually said

    Mills's critique of conventional science is so clearly both wrong and ignorant

it was one of your BROAD SWEEPING GENERALISATIONS..

    Besides actually Mills does not disagree with all conventional science

    just the fuzzy recent bits characterised by QED since around 1930

    Now the task for you is to justify this generalisation for all his critique

    otherwise you may well find yourself WRONG and IGNORANT.


    Dear all on this thread. I'm not going to answer RB again on this point: it feels like Groundhog Day.


He persists in claiming I did not give specific reasons for my statement that Mills (in what he said about QM and therefore conventional science) was wrong and ignorant. He then interprets my justified and correct statement out of context in the broadest possible way and accuses me of making sweeping generalisations - something that is easy to do when you isolate individual quotes from context and repeat them ad infinitum.


    I gave these specifics in #23 and repeated them in summary again in #61.


I will further, here, point out why such ignorance of conventional theoretical physics, and wrong ideas about it, impacts Mills' other work. Mills has not been able to write up his theoretical ideas (nor any part of them, to my knowledge) in a way that allows critique by other scientists. Therefore we do not have the normal checks and balances, which we would have for a published author of a radical new theory, against his work being wrong or indeed just rubbish. In fact, radical new theories are seldom the result of just one person; the initial ideas get expanded by many people.


I would suggest that Mills' inability to get his ideas published comes from his ignorance. Specifically, when you critique, or propose alternate solutions to, existing work it is always important first to be expert in the work you are seeking to replace - for obvious reasons. Mills has shown himself, in the specifics I've quoted, to be non-expert and to have severe misunderstandings. Anyone proposing something new without awareness of what is currently done will rightly get little attention. After all, existing theory encodes so much validating experiment: some new and better theory needs to be compatible with all that existing experiment. If you are not aware of all the validated consequences of existing theory you are handicapped.


There is no proof that Mills is completely wrong. There never can be. And, if his ideas have some kernel of truth in them, they might be helpful. It does not seem likely, and could never be known until Mills or someone else puts effort into fully understanding existing QFT and its experimental support, as well as understanding Mills' new theory (which, however, may not be possible if it is incoherent).


    THH



    Stefan - I don't think anyone has said anything contrary to that.


    However - it does not quite get at the nub of the matter - which is the spooky way that non-locality surfaces. True, it (provably) can't be used for FTL signalling. But equally, it does exist.


    The way to think of this is that two entangled photons - even if 1000s of light-years apart - behave as a single object when measured. Complementarity then means that the apparently random results from measurements at far distant locations can be correlated.


This spookiness is subtle and emerges directly from the maths of QM. It shows something deep about the structure of the universe - that non-local things are everywhere. It is consistent with the modern (Van Raamsdonk et al) ideas that the structure of spacetime (and the speed of light as an information propagation limit) emerges from entanglement as a fundamental process.


So we have here a 70-year-old issue that tantalisingly hints at something new.


    How you interpret the spookiness is up for grabs: the different interpretations cannot be experimentally resolved (there might be some possibles, with some interpretations, but none to my knowledge that have panned out).


There is a meta-observation - which many don't like - that might push you towards a many-worlds type answer. That is: if it turns out to be vanishingly unlikely that the universe could evolve to generate complex structures and life, then the anthropic principle makes that quite OK, as long as we have many worlds.


On the issue of wavefunction collapse: it is non-measurable, because decoherence produces the same results. The fact that measured microscopic systems can be processed coherently in a way that restores the "multiple results" speaks against any form of wavefunction collapse, because there is in principle nothing to stop complex processing that restores coherence after a measurement. Thus I don't like anything that requires collapse. But pilot wave etc ideas can be compatible with a many-worlds no-collapse interpretation.


    It is a mark against Mills that he seems not to have engaged with this whole issue: the physical world certainly does engage with it!


    Repeating statements without reason is not a logical argument.


So: the Mills passage you quote contains two statements: the first was about the Durr et al double-slit experiment and its interpretation. I've agreed not to dismiss that until I've read (in detail) both Durr's whole paper and Mills' whole comment on it.


    The second statement is Mills saying that QM (and QED, SM, etc) is proven false because the HUP (Heisenberg Uncertainty Principle) on which it rests is disproven. Mills gives as evidence of this a 2012 paper by Rozema et al.


    I hold by my statement that this specific statement (made by Mills) is both wrong and ignorant.


Specifically it is wrong - because the 2012 paper does not disprove the HUP, merely an unproven and inexact inequality commonly taught. As further evidence I point out that this experimental "disproof" was predicted from QM theoretical work some 10 years earlier. Finally, to show that this is a lacuna - not some real problem with HUP - I cited a 2013 paper, commenting on the 2012 result, which derives a proof of a tighter inequality that formalises HUP. This inequality is consistent with all experiments.
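For reference, the distinction can be stated precisely (standard notation: $\varepsilon(Q)$ is the measurement error on $Q$, $\eta(P)$ the disturbance to $P$, $\sigma$ the intrinsic spread in the measured state). The naive, never rigorously proven error-disturbance relation, which the 2012 experiment violates, and the Ozawa (2003) inequality, which it confirms, are usually written as:

$$\varepsilon(Q)\,\eta(P) \ge \frac{\hbar}{2} \qquad \text{(naive error-disturbance relation: violated)}$$

$$\varepsilon(Q)\,\eta(P) + \varepsilon(Q)\,\sigma(P) + \sigma(Q)\,\eta(P) \ge \frac{\hbar}{2} \qquad \text{(Ozawa 2003: confirmed)}$$

The intrinsic preparation uncertainty relation, $\sigma(Q)\,\sigma(P) \ge \hbar/2$, which is what QM actually proves, is untouched by all of this.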


It is ignorant because when making claims that a major theory in active use and development, with many experimental successes, is wrong, it is usual to look at the literature before jumping to judgement. In this case the abstract of the 2012 paper that Mills cites makes it pretty clear that this result does not contradict HUP, nor QM theory, only an inexact but commonly taught inequality. Furthermore anyone non-ignorant, before making such a strong claim, would - as I did - check citations of said paper. Those contain the correct and proven inequality corresponding to HUP.


    Mills uses this result as evidence that the whole edifice of QFT is unsound. That usage is clearly wrong: so this is not a trivial mistake of no significance.


    You will find details in #23.


I'd welcome your rational rebuttal of this argument. "I see no sign of wrong and ignorant" is quite properly asking me to provide my evidence from the quoted passage; however, I did this previously in #23 and have repeated the salient point here. If you persist with "I see no sign" without a rebuttal of my argument, then I think it is fair to say that you are ignoring clear evidence.


    You might feel it is unfair to accuse an author in such extreme terms - wrong and ignorant. Normally I'd agree. In this case the mistake is so obvious, and the conclusion from it so extravagant (that accepted and highly successful physics is fundamentally wrong) that I think it is entirely fair.

Useful lecture on the QED result for the AMM.


zeta(3) = 1.20205... (it is the sum of 1/n^3 for n = 1..infinity)

The standard QED second-order coefficient is 197/144 + pi^2/12 + (3/4)zeta(3) - (pi^2/2)ln 2 = -0.328478965 as the coefficient of (alpha/pi)^2; evaluating this we get:

-0.0332828

as the alpha^2 coefficient.


The alpha^3 coefficient (see below) is approximately +0.0381 and so makes a further correction of about 1/100th of the value of the alpha^2 correction.


Let us compare this with Mills' coefficients:




    -(4/3)(1/2pi)^2 = -0.033773 (alpha^2 coefficient)


    (2/3)(1/2pi) = +0.106 (alpha^3 coefficient)
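A quick numerical sketch comparing the two sets of coefficients side by side (the QED numbers are the standard analytic ones; the third-order value is from Laporta and Remiddi, quoted below):

```python
import math

pi = math.pi
zeta3 = 1.2020569031595943           # zeta(3)
ln2 = math.log(2)

# Standard QED coefficients of (alpha/pi)^n, converted to coefficients of alpha^n
qed_c2 = (197/144 + pi**2/12 + 0.75*zeta3 - (pi**2/2)*ln2) / pi**2
qed_c3 = 1.181241456 / pi**3

# Mills' coefficients, as quoted above
mills_c2 = -(4/3) * (1/(2*pi))**2
mills_c3 = (2/3) * (1/(2*pi))

print(f"alpha^2 coefficient: QED {qed_c2:+.6f}   Mills {mills_c2:+.6f}")
print(f"alpha^3 coefficient: QED {qed_c3:+.6f}   Mills {mills_c3:+.6f}")
```

The second-order coefficients are close but not equal; the third-order ones differ by a factor of nearly 3, which is where the disagreement with modern data comes from.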


    ----------



    The alpha^3 coefficient is also analytical. From the Laporta and Remiddi 1996 reference:


    https://arxiv.org/abs/hep-ph/9602417


    We have evaluated in closed analytical form the contribution of the three-loop non-planar `triple-cross' diagrams contributing to the electron (g-2) in QED; its value, omitting the already known infrared divergent part, is

    a_e(3-cross) = 1/2 pi^2 Z(3) - 55/12 Z(5) - 16/135 pi^4
    + 32/3 (a4 + 1/24 ln(2)^4) + 14/9 pi^2 ln(2)^2
    - 1/3 Z(3) + 23/3 pi^2 ln(2) - 47/9 pi^2 - 113/48.
    This completes the analytical evaluation of the (g-2) at order alpha^3, giving

    a_e(3-loop) = (alpha/pi)^3 { 83/72 pi^2 Z(3) - 215/24 Z(5)
    + 100/3 [( a4 + 1/24 ln(2)^4 ) - 1/24 pi^2 ln(2)^2 ]
    - 239/2160 pi^4 + 139/18 Z(3) - 298/9 pi^2 ln(2)
    + 17101/810 pi^2 + 28259/5184 } = (alpha/pi)^3 (1.181241456...).
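As a check, the quoted closed form can be typed straight into Python's mpmath (here a4 is the polylogarithm Li_4(1/2)); a sketch:

```python
from mpmath import mp, mpf, pi, zeta, log, polylog

mp.dps = 30                         # 30 decimal digits of working precision
ln2 = log(2)
a4 = polylog(4, mpf(1)/2)           # Li_4(1/2)

# Coefficient of (alpha/pi)^3, transcribed from Laporta & Remiddi above
C3 = (mpf(83)/72 * pi**2 * zeta(3)
      - mpf(215)/24 * zeta(5)
      + mpf(100)/3 * ((a4 + ln2**4/24) - pi**2 * ln2**2/24)
      - mpf(239)/2160 * pi**4
      + mpf(139)/18 * zeta(3)
      - mpf(298)/9 * pi**2 * ln2
      + mpf(17101)/810 * pi**2
      + mpf(28259)/5184)

print(C3)                           # should reproduce 1.181241456...
```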

    If you are going to accuse Mills of "Wrong and ignorant"

    please take the time to read his explanation


OK - well I will, I guess, have to do that: however, you then please agree with me that the Mills statement you highlighted about HUP is wrong (because it says HUP is proved wrong when in fact only a non-proven and inexact relation is disproved, which was already known to be theoretically wrong from QM, and where a paper one year later shows that a tighter relationship can be formulated which is provably correct) and ignorant (of the references I highlighted which explain why it is wrong). See my previous post for all needed refs.


This is not splitting hairs: Mills concludes from that statement that QM is experimentally proven wrong, whereas in fact it shows no such thing - quite the reverse.


    You could just accept this: or read my refs which prove this and then accept it.



    Stefan - I agree hidden variable theories can solve this - though they get very messy because every conjugate observation requires new hidden variables.


    But they must be non-local hidden variables. That is not compatible with Mills semi-classical stuff.


It is compatible e.g. with pilot wave interpretations (or Cramer etc) - but those are inherently nonlocal too. I don't like the PW/Cramer ones much because of the difficulty in working out when the transactions actually happen - I'm not sure it can be well defined - but maybe it can, and I don't mind them too much. Since there is no difference in experimental predictions between different interpretations, as long as (unlike Copenhagen) they are well defined, it does not much matter...


    More later.

    Now lets go back to entanglement,

    What specifically is wrong and ignorant ...

    clearly both wrong and ignorant

    as you have stated about Mill's

    explanation of the entanglement phenomenon

Perhaps you would be kind enough to explain that in the context of the 1998 Durr et al findings


    I have not ever looked at the 1998 Durr et al findings. The key entanglement experiments are different from this.


    I'm a bit resistant to looking at it because it may not be an experiment that shows non-local (non-classical) entanglement. More efficient for you to look at one of the many experiments that does.


Wrong and ignorant because if you do even a simple Google search you find lots of good entanglement experiments which progressively close all possible loopholes (they are pretty implausible, but still), and the loophole that Mills suggests does not survive complementary measurements where the measurement chosen is statistically independent of the way the wave functions are prepared (which can obviously pretty easily be enforced).


I've noticed with the HUP comment that Mills takes an experiment giving more detail on a QM issue (in this case the HUP inequality) and interprets it completely wrongly, out of context - as I've stated at length earlier - and I hope you would agree.


    I expect the same for the Durr experiment he quotes, which means looking at it will be time wasted, when my point is proven by many other experiments.


    If you really really think it is important to do this when I have regained the will to live I will look at the details of 1998 Durr et al.



Yes Stefan - I actually meant to delete that, because I redid everything and agreed with him. He is not claiming more accuracy than is warranted for his data. However, his value, which fits the old data very well, is way out for the new data.


As far as QED goes the issue is this. It is tough doing the calculations for more accurate theoretical values: you need lots of computer time (for the Monte Carlo simulations) and all those thousands of Feynman loop diagrams are easy to get wrong - in fact I referenced a 2017 correction to a previous calculation where an error was discovered in one of the loop diagrams. So people tend to do better (more complex and longer compute time Monte Carlo) runs when there is some point to it, because the extra precision can be checked against experiment.


    One thing you might consider is that the errors - either statistical or due to not considering higher order terms - can all be bounded and these bounds are calculated together with methodology (which allows checking of error bounds as well as checking values).


    Why is fudging not possible? Because:

    (a) different groups use different methods and get the same answer - or if not they debug

    (b) answers for different quantities are all compared with experiment. Even though all depend on alpha you can compare two different values to check the theory.

(c) this stuff is done by multiple independent people who check each other's methodology. There is no way that fudge factors can survive for long, because the method must be justified, and finding errors in other people's work is very well rewarded in the academic system. As I've said, you will find quite a lot of people publishing "there was an error, we have fixed it" papers. And you will also find "we have got this result but it does not fit experiment - we are not sure why" published. The "not sure why" leads to further investigation and either errors are found or some conflating phenomenon not considered (well, that is also an error of course).


Cf. that with Mills' stuff, where I have very little confidence that it is not fudged, at the level of choosing from a variety of theories till the correct value drops out.


Also with Mills' stuff remember that to first order his semi-classical approach is provably identical to QED (that quite often happens). It may be that at second order it is also identical to QED - though I would need to do a lot of work to see whether that is true. I would not expect it to model any of the electroweak or hadronic terms in the QFT expansion - but those are pretty small. Maybe it is correctly modelling 1st and 2nd order QED (both of those are analytical in QED). Or maybe not - I have spent enough time on this for now and lost the will to check the second order correspondence with second order QED terms.


So I see little merit in the matching of Mills' calculations for this one well-investigated number. And a clear red flag with the difference from the more recent, more accurate values. For other numbers, again, I think any match can be explained by numerology (choosing hand-waving reasons for coefficients to retrofit a close match) and analytical equivalence between QED and semi-classical models at low order.


    I'd like to know whether Mills has generated any of these results demonstrably before the experimental results were known. That would not rule out semi-classical equivalence, but would rule out fudge factor numerology!


I would of course be a bit more charitable if Mills' theory were not incompatible in many other ways (e.g. entanglement!). There is obviously room for cleverer ways to do those Feynman loop diagrams, as we have sort of started to see with the Amplituhedron! A cleverer way to do them might lead to a much more enlightening and less complex way of visualising what is going on.


That depends on whether we are specifying the mass or the mass difference - I was giving the figure for mass accuracy.


    Perhaps not fair - but no more than Mills does when he specifies 11 digit accuracy for AMM!


    THH

Forget Mills for the moment; fitting his idea to all possible cases is not something one easily and quickly performs without spending a considerable amount of time. But let's focus on this. Isn't quantum entanglement just initiating the wave function and then later calculating correlations by using the wavefunctions at the different sites? Can you in a few words explain what's more in entanglement? Are there experiments where one needs more than Schrödinger or Dirac to explain the results?


    Yes - if that were all there was to entanglement then it would not be non-local.


    The issue is quite subtle, and to do with how the two measurements (of the two entangled photons) relate to each other.


    First - here is a really good "anyone can understand it" detailed backgrounder on entanglement:


    https://www.quantamagazine.org…ent-made-simple-20160428/


    Key concept: To get the (non-local => non-classical) weirdness you need entanglement AND complementarity.


Now - having read that - why does Mills's idea not work?


The problem is this. Suppose you have two complementary properties: color (red or blue) and shape (round or square). (In diphoton experiments these correspond to measurements of spin in two directions at right angles to each other.)


    QM makes sure that if you measure one property the other will be random and vice versa.


But, if you have two entangled photons P1 and P2, and measure one property (e.g. color) at P1, and the same property at P2, the measurements must agree. Classically that is fine: you can say that the two photons start in the same state, red or blue.


However, the same is always true if you measure the other property (shape) on P1 and P2. Again that is fine: you can prepare a system with either two square or two round photons.


    The nonclassical issue is this. You can decide which measurement you are doing independently of preparing the photons. Whichever measurement you do (color or shape) you always get agreement if the two measurements are the same. And, if you measure the two opposite properties (color and shape) for P1 and P2 you get random results (no correlation).


    Because those statistics are always true, and you can choose the measurements you want to do independently of the photon generation source, there is no way that a local description of the system can generate the observed probabilities.


The 2018 experiment used light originating 8 billion years ago to determine which experiment was done, and still got these non-local statistical correlations. Pretty difficult to see how that can be produced by any classical method.


    Now, I've skipped over things a bit but you can see the argument in more detail following the proof of Bell's Theorem - which has been validated many many times by different experiments.
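To make the numbers concrete, here is a small Python sketch of the CHSH form of Bell's theorem. The quantum prediction for polarization-entangled photon pairs reaches 2*sqrt(2) ~ 2.83, while the toy local-hidden-variable model below (hypothetical, purely for illustration) pins at the classical bound of 2:

```python
import math
import random

def E_quantum(a, b):
    # QM correlation for polarization-entangled photons measured at angles a, b
    return math.cos(2 * (a - b))

def chsh(E):
    # Standard CHSH angle choices (radians)
    a1, a2 = 0.0, math.pi / 4
    b1, b2 = math.pi / 8, 3 * math.pi / 8
    return E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

print(f"quantum CHSH value: {chsh(E_quantum):.4f}")    # 2.8284 = 2*sqrt(2)

# A toy local hidden variable model: each pair carries a shared polarization
# angle lam, and each side answers +/-1 deterministically from lam alone.
def E_lhv(a, b, n=100_000):
    total = 0
    for _ in range(n):
        lam = random.uniform(0, math.pi)
        out_a = 1 if math.cos(2 * (a - lam)) >= 0 else -1
        out_b = 1 if math.cos(2 * (b - lam)) >= 0 else -1
        total += out_a * out_b
    return total / n

print(f"LHV CHSH value: {chsh(E_lhv):.4f}")            # stays at (or below) 2
```

No assignment of pre-existing answers can push the CHSH combination past 2; experiment and QM agree on ~2.83, which is the whole point.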


Here is a description of that - and the loophole which has now, courtesy of quasar light, been effectively closed:

    https://www.quantamagazine.org…l-test-loophole-20170207/

The problem is that you have never refuted this statement by referring to the actual data in the Durr et al 2016 arxiv paper that you cited as having 6 digit significance for the n-p mass difference


    Yes, but I rowed back on that shortly after: and you ignored that and kept on repeating this weird 6 digit mantra!


I am however quite interested in the general topic of who makes better predictions: Mills or QED/QCD. The problem here is QCD, from which it is thoroughly difficult to get highly accurate results for calculational reasons.


    However, we have QED - the "world's most accurately tested ever theory". I'm fascinated by, for example, the weird 2pi values that enter into Mills claimed calculation for the anomalous magnetic moment of the electron.


Stefan - have you looked at this value's derivation (it is a cubic in alpha)? I'd like to go through it in detail to understand how the anomalous 2pis get there (the alpha^2 part of the alpha cubed term not divided by 2pi).


    My reference for Mills is: http://zhydrogen.com/wp-content/uploads/2013/04/test6.pdf




1 + alpha/2pi is simply stolen from the first-order (in alpha) QED expansion coefficient, which is known analytically to be exactly this. Mills' semiclassical derivation based on the Poynting Power Theorem agrees with QED to this order, which should not surprise us.


Let us work this out. The current experimental value for ae (the above value is supposed to be ae+1):

ae = 0.001 159 652 181 643(764) (from 2011, via Wikipedia)


A consistent value from "Control of a Single-Electron Quantum Cyclotron: Measuring the Electron Magnetic Moment" (2011) is also given in Wikipedia.


Also alpha is known as roughly:

alpha-1 = 137.035999049(90) (from 2010, 2011; refs 3, 4 in https://arxiv.org/pdf/1705.05800.pdf)

Also cf. the 2014 CODATA value, 137.035999139(31), which is consistent and only a tiny bit more accurate.


For convenience we calculate alpha/2pi = 0.001 161 409 733



Using just the first-order QED-only term (ignoring higher-order QED, hadronic and electroweak components):

alpha/2pi = 0.001161409733

That is 6 sig figures on g/2, but since the first figure is 1 this is really only 5 sig figures. If you take the value of ae (g/2 - 1), which is the anomalous part, of course it is only 3 sig figures accurate.


So our starting point is this first-order analytical QED approximation. How much extra accuracy do Mills' next two terms give us?

Subtracting the first-order term from the real value we get:

-0.0000017576 - this is the number that Mills has to use numerology to hit!
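In code, a sketch of that subtraction (values as quoted above):

```python
import math

inv_alpha = 137.035999049            # the 2010/2011 value quoted above
ae_exp    = 0.001159652181643        # experimental ae quoted above

first_order = (1.0 / inv_alpha) / (2.0 * math.pi)
residual = ae_exp - first_order

print(f"alpha/2pi: {first_order:.13f}")   # 0.0011614097330
print(f"residual : {residual:.13f}")      # ~ -0.0000017576
```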




ae = 0.001 159 652 180 73(28)


    This differs by 1 in the 12th decimal place.


Mills' reference from 2006:

He quotes alpha-1 = 137.03604(11) from R. C. Weast, CRC Handbook of Chemistry and Physics, 68th edition (CRC Press, Boca Raton, FL, 1987–88), pp. F-186–F-187. This is consistent with the 2010 value but 3 sig figures less accurate.


He quotes ae = 0.001 159 652 188(4) from R. S. Van Dyck Jr., P. Schwinberg, and H. Dehmelt, Phys. Rev. Lett. 59, 26 (1987). This is slightly inconsistent with the current value, being two SD too high.


Mills calculates

ae = 0.001 159 652 120

which he compares with (his 2006 experimental value)

ae = 0.001 159 652 188(4)

Excellent agreement.


Mills (in 2006) notes that values for the fine structure constant are variable. Indeed his alpha-1 value, 137.03604(11), has a fractional error of about 8E-7.

Propagating this error to ae, the fractional ae error is (to first order) the same as the fractional alpha error, which is the same as the fractional alpha-1 error.

That gives a Mills calculated ae (from his 2006 alpha data) of:

0.001 159 652 120(930)


[EDIT - I meant to delete this: The calculated value is coincidentally far better than would be expected if his formula were precisely correct, given his stated error in alpha!

Mills spends some time discussing different values for alpha: but he is cheating! He talks about the remarkable agreement between his value and the correct value, when he cannot have a value of alpha that justifies this level of accuracy. So his 11 significant figures of accuracy for ae+1 is the same as 8 significant figures of accuracy for alpha.]



    Let us see what happens if we use more recent values. The key value is that of alpha - which is less precise than ae by 2 sig figs.


    Using the current (CODATA 2014) value for alpha of

137.035 999 139(31)


We have (alpha/2pi) = 0.001 161 409 732 41(25)


    We get a Mills value for ae of:

    +0.001 161 409 732 41(25) (1st order - same as QED 1st order)

    -0.000 001 798 496 75 (2nd order)

    +0.000 000 041 231 02 (3rd order)

    +0.001 159 652 466 68(25) (total)



    versus CODATA value for ae of

    +0.001 159 652 180 91(26)



Using recent values, Mills is wrong by a factor of 500X the error bound.
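The table above is easy to reproduce; here is a sketch in Python (Mills' coefficients as quoted earlier - this just checks the arithmetic, not the theory):

```python
import math

inv_alpha = 137.035999139                 # CODATA 2014
alpha = 1.0 / inv_alpha
x = alpha / (2.0 * math.pi)               # alpha/2pi = 0.00116140973241...

term1 = x                                 # 1st order (identical to QED's 1st order)
term2 = -(4.0/3.0) * x**2                 # Mills alpha^2 term
term3 = (2.0/3.0) * (1.0/(2.0*math.pi)) * alpha**3   # Mills alpha^3 term
mills_ae = term1 + term2 + term3

codata_ae = 0.00115965218091              # CODATA value quoted above

print(f"Mills ae  : {mills_ae:.14f}")     # ~0.00115965246668
print(f"CODATA ae : {codata_ae:.14f}")
print(f"difference: {mills_ae - codata_ae:.2e}")    # ~2.9e-10, hundreds of X the error bound
```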


Mills' calculation is precise, if his theory is correct. So something must give.


Mills claims that the CODATA values for alpha may be wrong (by a factor of 1000X larger than the error bound?) because they involve QED. We may have to investigate this if anyone here (Stefan, RB?) feels that is a plausible claim.


Alternatively Mills must now invoke otherwise unspecified errors to explain his lack of correspondence with experiment.


    How does QED do? A 2017 improved QED 10 loop calculation is https://arxiv.org/pdf/1712.06060.pdf


    ae = +0.001 159 652 182 03 (72)




But evaluating these calculations precisely is complex: the values given all depend on alpha, with error to first order the same (fractional) as alpha. Nevertheless this is 1000X better than Mills, unless you conclude that the CODATA value of alpha is wrong by 1000X the stated error.


    Stefan: there are quite possibly mistakes in this, though I've put some effort into it, let me know if you find any!



    THH


I agree with this as an explanation of the phenomenon Mills says it describes, Stefan. Unfortunately, as I have said above and given many references to, the very large number of key entanglement experiments which are based on statistical measurements of isolated entangled diphoton pairs with a dynamically changing measurement orientation cannot be so explained. Weird that Mills does not know of these.


I can only think that Mills singles out this one experiment (which does not at all disprove classical mechanisms for entanglement) because, looked at in isolation, he can say this.


But he does not reference the large literature on hidden-variables interpretations of QM and the experiments which make them look highly unpleasant! That is the core of his argument, so it is weird that he is unaware of the vast literature.


If anyone (RB, Stefan?) would like to go through this I will pick out one of these classic experiments, we can apply Mills's explanation to it, and see where it fails. There has been a lot of explanation of this, so I can probably find some good semi-technical Quanta etc articles as well.


    THH

    RB: perhaps you would like to read the statements below and agree, or give your reasons for not agreeing, the following:


(1) the 2012 experiment does not disprove HUP. Rather it shows that a never-proven informal inequality in common use is inexact, as was previously (2003) shown from QM theory. So far from disproving the QM HUP, this is in line with what has been proven.


(2) Further, an exact proof of a similar HUP inequality which is compatible with all experiments has been given in the 2013 paper commenting on the 2012 result.


(3) Mills' classical explanation of double-slit results cannot explain the measurement statistics from the very many diphoton experiments where statistics from measurements on entangled photons clearly show entanglement.


    (4) A more sophisticated objection to such being true entanglement, which relies on some classical effect between the choice of measurement orientation and photon state, has been proven (2018) to require a classical causal link between quasar light generation 8 billion years ago and entangled diphoton generation in a lab now. That seems unlikely, to put it mildly.


    (5) Mills interpretation of the 2012 experiment is contradicted by the abstract of the 2012 paper itself, and further destroyed by the 2013 followup.


Informally: Mills has been making extravagant claims that "QM is wrong" for a long time, not one of which is correct, and some of which, as above, expose a deep inability to read and understand the experimental and theoretical literature even at the level accessible to many here. I'm not saying Mills is incapable of understanding QM (though that seems quite possible), but in that case he is determined not to read the literature.

    Plea to all those reading this thread.


QM foundations is fun - and does sort of hint at new physics. Even if you don't agree with the Van Raamsdonk et al ideas I linked above, understanding in detail Bell's theorem, the various suggestions for ways round it, and the various distinct interpretations of QM is necessary to have an appreciation of the merits/demerits of QM (let alone QFT), and this is helpful in evaluating non-standard proposals.

Here is Randell Mills' explanation of entanglement in classical terms, centring on the experimental interpretation by Durr et al 1998.



RB - is it not strange that you (and Mills?) consider entanglement only a property of microwave double-slit experiments? That you (and he) ignore the very long and inventive literature on entanglement experiments and the (futile) attempts to explain them classically?


    Mills is not unique in attempting to find local explanations for entanglement phenomena. There is a very long history of such attempts, (for example hidden variables theories of QM).


The above explanation cannot account for any of the very many entangled photon pair experiments (see the Wikipedia history of entanglement for many references to such).


Other extravagantly complex loopholes in such experiments that might allow a classical explanation have been pushed back by better technology: for example this experiment where the choice of which measurement to make is determined by quasar light (and therefore emitted 7.8 billion years ago). A classical explanation of the observed statistical relationships would have to somehow correlate this event 7.8 billion years ago with the lab diphoton generation now.


    More specifically, critiquing Mills, he has shown (it was commented on here a while ago) a very large misunderstanding in his comments on modern experiments.


    For example the 2012 Rozema et al experiment that he references.


    Here is the writeup on arxiv for open access: https://arxiv.org/abs/1208.0034


Let us look at the abstract, to see whether it bears out Mills's interpretation.


    While there is a rigorously proven relationship about uncertainties intrinsic to any quantum system, often referred to as "Heisenberg's Uncertainty Principle," Heisenberg originally formulated his ideas in terms of a relationship between the precision of a measurement and the disturbance it must create. Although this latter relationship is not rigorously proven, it is commonly believed (and taught) as an aspect of the broader uncertainty principle. Here, we experimentally observe a violation of Heisenberg's "measurement-disturbance relationship", using weak measurements to characterize a quantum system before and after it interacts with a measurement apparatus. Our experiment implements a 2010 proposal of Lund and Wiseman to confirm a revised measurement-disturbance relationship derived by Ozawa in 2003. Its results have broad implications for the foundations of quantum mechanics and for practical issues in quantum mechanics.


Oh dear - it seems not to say what Mills says it does. Rather, the informal, approximate, and never formally proven relationship can be shown in some cases to be not precise. Oh - and the result here confirms a revised measurement-disturbance relation calculated by Ozawa in 2003 from - wait for it - quantum mechanics!


    Whereas the QM precise statements about non-commutative measurement operators for position and momentum or energy and time remain precisely correct and have been proven over and over.


It has over 200 citations: perhaps we can find some more educated comments on it from these?


Oh look: No. 2 in the list looks at the issue of what can be proven and gives a comprehensive answer, with a correct and proven version of the quantitative relationship!


    https://journals.aps.org/prl/a…03/PhysRevLett.111.160405

    Proof of Heisenberg’s Error-Disturbance Relation

    Paul Busch, Pekka Lahti, and Reinhard F. Werner

    Phys. Rev. Lett. 111, 160405 – Published 17 October 2013


    While the slogan “no measurement without disturbance” has established itself under the name of the Heisenberg effect in the consciousness of the scientifically interested public, a precise statement of this fundamental feature of the quantum world has remained elusive, and serious attempts at rigorous formulations of it as a consequence of quantum theory have led to seemingly conflicting preliminary results. Here we show that despite recent claims to the contrary [L. Rozema et al, Phys. Rev. Lett. 109, 100404 (2012)], Heisenberg-type inequalities can be proven that describe a tradeoff between the precision of a position measurement and the necessary resulting disturbance of momentum (and vice versa). More generally, these inequalities are instances of an uncertainty relation for the imprecisions of any joint measurement of position and momentum. Measures of error and disturbance are here defined as figures of merit characteristic of measuring devices. As such they are state independent, each giving worst-case estimates across all states, in contrast to previous work that is concerned with the relationship between error and disturbance in an individual state.


Mills's critique of conventional science is so clearly both wrong and ignorant that those taken with his theories should keep quiet about this and hope it is not noticed - because it makes Mills look very bad. Like a PhD student with a single (erroneous) idea in their head, who never did a proper literature survey to find the many obvious counterexamples, and ended up with a whole thesis clearly provably wrong. Having wrong ideas is no sin: in fact every scientist worth their salt must have these as part of the discovery process. However, persisting with wrong ideas that have been easily disproven by many others shows bad professional practice and is a real shame, because it results in much wasted effort. You might say that Mills is a classic example of this.


    RB: would you perhaps prefer to repeat your 6 figure THH mantra again (in the sure knowledge that I'll not reply unless you do it on a new thread) rather than making easily disprovable statements about physics?

I guess if you take Mills as your reference you will make a lot of those, based on the above comment of his that is so seriously wrong.

Quantum entanglement appears to be based on Stephan Durr et al's 1998 interpretation of microwave interferometry with Rb


    A valid interpretation of this microwave experiment which does not require quantum entanglement is possible.

    In these complex physics experiments alternative explanations often need to be eliminated by followup experiments.

    I doubt whether this has been done by Durr



I can't quite believe you mean this. Quantum entanglement occurs in every single quantum computer, of which many commercial and academic examples exist. You can test programs for one of these online using Microsoft Q#.


    You may believe that quantum computers will never do anything useful - that is possible - but they exist and work, manipulating entangled quantum states. To state otherwise shows a great ignorance of 21st century technology.
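For anyone who wants to see the mechanics without access to a quantum machine, here is a minimal statevector sketch in plain Python/numpy (rather than Q#): a Hadamard plus a CNOT prepares a Bell pair, and sampled measurements of the two qubits always agree:

```python
import numpy as np

# Single-qubit Hadamard, identity, and the two-qubit CNOT gate
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# Start in |00>, apply H to the first qubit, then CNOT
state = np.zeros(4)
state[0] = 1.0
state = CNOT @ (np.kron(H, I) @ state)
print(state)          # [0.707 0 0 0.707]: the Bell state (|00> + |11>)/sqrt(2)

# Sample computational-basis measurements
probs = np.abs(state) ** 2
samples = np.random.choice(["00", "01", "10", "11"], size=10, p=probs)
print(samples)        # only "00" and "11" appear: the qubits are perfectly correlated
```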


Wikipedia lists

Frank Jensen: Introduction to Computational Chemistry. Wiley, 2007, ISBN 978-0-470-01187-4.

as a textbook reference for the ionisation energies of atoms, which can only correctly be calculated using entanglement (since electron orbitals contain entangled electrons).


    Most entanglement experiments use entangled pairs of photons generated from downconversion. This allows extreme nonlocality to be demonstrated.


    https://en.wikipedia.org/wiki/Quantum_entanglement#History


Entanglement was first predicted in the 1930s, with very many experiments demonstrating it from the 1970s onwards. I'm not sure where you get your "single experiment in 1998" from. It has been demonstrated again and again using entangled photons.


Because many people are so unimaginative that they don't like the idea of the fabric of the universe being inherently different from the macroscopic world, there has always been a strong attempt to find other explanations for entanglement. These are necessarily pretty weird (Bell's theorem indicates that). No alternative explanation has survived the tests of the many followup experiments (see above link for history).