Einstein was right? QM is ??

  • Now, perhaps you are an expert on quantum entanglement (I have my doubts).

    For the forum's resident 6-figure-precise mathematician, high energy physics specialist,

    CCS/ATER expert, gamma spectrum interpreter...

    Here is Randell Mills's explanation of entanglement in classical terms,

    centring on the experimental interpretation by Durr et al. 1998...

  • Quantum entanglement appears to be based on Stephan Durr et al.'s 1998 interpretation of microwave interferometry with Rb.


    A valid interpretation of this microwave experiment which does not require quantum entanglement is possible.

    In these complex physics experiments, alternative explanations often need to be eliminated by follow-up experiments.

    I doubt whether this has been done by Durr.



    I can't quite believe you mean this. Quantum entanglement occurs in every single quantum computer, of which many commercial and academic examples exist. You can test programs for one of these online using Microsoft Q#.


    You may believe that quantum computers will never do anything useful - that is possible - but they exist and work, manipulating entangled quantum states. To state otherwise shows a great ignorance of 21st century technology.
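    To make that concrete, here is a minimal sketch in plain Python/NumPy (illustrative only, not Q# and not any vendor's API) of the Hadamard-plus-CNOT circuit used to prepare an entangled Bell pair:

```python
# A minimal illustrative sketch (plain NumPy, not Q# or any vendor API):
# the Hadamard + CNOT circuit that creates the Bell state (|00> + |11>)/sqrt(2),
# the basic entangled state every quantum computer manipulates.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                # controlled-NOT gate
I2 = np.eye(2)

psi0 = np.kron([1, 0], [1, 0])                 # two qubits in |00>
psi = CNOT @ np.kron(H, I2) @ psi0             # -> (|00> + |11>)/sqrt(2)

probs = np.abs(psi) ** 2                       # Born rule
print(dict(zip(["00", "01", "10", "11"], probs.round(3))))
# {'00': 0.5, '01': 0.0, '10': 0.0, '11': 0.5}: the outcomes are perfectly
# correlated, and the state cannot be factored into two single-qubit states.
```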


    Wikipedia lists

    Frank Jensen: Introduction to Computational Chemistry. Wiley, 2007, ISBN 978-0-470-01187-4.

    as a 2017 textbook reference for the ionisation energy of atoms, which can only correctly be calculated using entanglement (since electron orbitals contain entangled electrons).


    Most entanglement experiments use entangled pairs of photons generated from downconversion. This allows extreme nonlocality to be demonstrated.


    https://en.wikipedia.org/wiki/Quantum_entanglement#History


    Entanglement was first predicted in the 1930s, with very many experiments demonstrating it from the 1970s onwards. I'm not sure where you get your "single experiment in 1998" from. It has been demonstrated using entangled photons.


    Because many people are so unimaginative that they don't like the idea of the fabric of the universe being inherently different from the macroscopic world, there has always been a strong attempt to find other explanations for entanglement. These are necessarily pretty weird (Bell's theorem indicates that). No alternative explanation has survived the tests of the many follow-up experiments (see above link for history).

  • Here is Randell Mills's explanation of entanglement in classical terms,

    centring on the experimental interpretation by Durr et al. 1998...



    RB - is it not strange that you (and Mills?) consider entanglement only a property of microwave double-slit experiments? That you (and he) ignore the very long and inventive literature on entanglement experiments and (futile) attempts to explain them classically?


    Mills is not unique in attempting to find local explanations for entanglement phenomena. There is a very long history of such attempts (for example, hidden-variables theories of QM).


    The above explanation cannot account for any of the very many entangled single-photon-pair experiments (see the Wikipedia history of entanglement for many references to such).


    Other extravagantly complex loopholes in such experiments that might allow classical explanation have been pushed back by better technology: for example this experiment where the choice of which measurement to make is determined by quasar light (and therefore emitted 7.8 billion years ago). A classical explanation of the observed statistical relationships would have to somehow correlate this event 7.8 billion years ago to the lab diphoton generation now.


    More specifically, critiquing Mills, he has shown (it was commented on here a while ago) a very large misunderstanding in his comments on modern experiments.


    For example the 2012 Rozema et al experiment that he references.


    Here is the writeup on arxiv for open access: https://arxiv.org/abs/1208.0034


    Let us look at the abstract, to see whether it bears out Mills's interpretation.


    While there is a rigorously proven relationship about uncertainties intrinsic to any quantum system, often referred to as "Heisenberg's Uncertainty Principle," Heisenberg originally formulated his ideas in terms of a relationship between the precision of a measurement and the disturbance it must create. Although this latter relationship is not rigorously proven, it is commonly believed (and taught) as an aspect of the broader uncertainty principle. Here, we experimentally observe a violation of Heisenberg's "measurement-disturbance relationship", using weak measurements to characterize a quantum system before and after it interacts with a measurement apparatus. Our experiment implements a 2010 proposal of Lund and Wiseman to confirm a revised measurement-disturbance relationship derived by Ozawa in 2003. Its results have broad implications for the foundations of quantum mechanics and for practical issues in quantum mechanics.


    Oh dear - it seems not to say what Mills says it does. Rather, the informal, approximate, and never formally proven relationship can be shown in some cases to be not precise. Oh - and the result here confirms a revised measurement-disturbance relation calculated by Ozawa in 2003 from - wait for it - quantum mechanics!


    Whereas the QM precise statements about non-commutative measurement operators for position and momentum or energy and time remain precisely correct and have been proven over and over.
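    For reference, the rigorously proven relationship the abstract alludes to is the Robertson inequality ΔA·ΔB ≥ |⟨[A,B]⟩|/2. Here is a minimal numerical check, using spin-1/2 operators as a finite-dimensional stand-in (position/momentum need infinite dimensions, but the inequality has the same form):

```python
# Illustrative check of the rigorously proven Robertson relation
#   Delta(A) * Delta(B) >= |<[A, B]>| / 2
# using spin-1/2 operators as a finite-dimensional stand-in for x and p.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
comm = sx @ sy - sy @ sx                        # [sigma_x, sigma_y] = 2i sigma_z

rng = np.random.default_rng(1)
for _ in range(5):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi = v / np.linalg.norm(v)                 # a random pure state

    ev = lambda op: np.real(np.vdot(psi, op @ psi))     # <psi|op|psi>
    dx = np.sqrt(max(ev(sx @ sx) - ev(sx) ** 2, 0.0))   # Delta sigma_x
    dy = np.sqrt(max(ev(sy @ sy) - ev(sy) ** 2, 0.0))   # Delta sigma_y
    bound = abs(np.vdot(psi, comm @ psi)) / 2           # |<[A, B]>| / 2

    print(f"{dx * dy:.4f} >= {bound:.4f}:", dx * dy + 1e-12 >= bound)
```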


    It has over 212 citations: perhaps we can find some more educated comments on it from these?


    Oh look: No. 2 in the list looks at the issue of what can be proven and gives a comprehensive answer, with a correct and proven version of the quantitative relationship!


    https://journals.aps.org/prl/a…03/PhysRevLett.111.160405

    Proof of Heisenberg’s Error-Disturbance Relation

    Paul Busch, Pekka Lahti, and Reinhard F. Werner

    Phys. Rev. Lett. 111, 160405 – Published 17 October 2013


    While the slogan “no measurement without disturbance” has established itself under the name of the Heisenberg effect in the consciousness of the scientifically interested public, a precise statement of this fundamental feature of the quantum world has remained elusive, and serious attempts at rigorous formulations of it as a consequence of quantum theory have led to seemingly conflicting preliminary results. Here we show that despite recent claims to the contrary [L. Rozema et al, Phys. Rev. Lett. 109, 100404 (2012)], Heisenberg-type inequalities can be proven that describe a tradeoff between the precision of a position measurement and the necessary resulting disturbance of momentum (and vice versa). More generally, these inequalities are instances of an uncertainty relation for the imprecisions of any joint measurement of position and momentum. Measures of error and disturbance are here defined as figures of merit characteristic of measuring devices. As such they are state independent, each giving worst-case estimates across all states, in contrast to previous work that is concerned with the relationship between error and disturbance in an individual state.


    Mills's critique of conventional science is so clearly both wrong and ignorant that those who back his theories should keep quiet about this and hope it is not noticed - because it makes Mills look very bad. Like a PhD student with a single (erroneous) idea in their head who never did a proper literature survey to find the many obvious counterexamples and ended up with a whole thesis clearly provably wrong. Having wrong ideas is no sin: in fact every scientist worth their salt must have these as part of the discovery process. However, persisting with wrong ideas that have been easily disproven by many others shows bad professional practice and is a real shame because it results in much wasted effort. You might say that Mills is a classic example of this.


    RB: would you perhaps prefer to repeat your 6 figure THH mantra again (in the sure knowledge that I'll not reply unless you do it on a new thread) rather than making easily disprovable statements about physics?

    I guess if you take Mills as your reference you will make a lot of those, based on the above comment of his that is so seriously wrong.

  • Plea to all those reading this thread.


    QM foundations is fun - and does sort of hint at new physics. Even if you don't agree with the Raamsdorf et al ideas I linked above, understanding in detail Bell's theorem, the various suggestions for ways round it, and the various distinct interpretations of QM is necessary to have an appreciation of the merits/demerits of QM (let alone QFT), and this is helpful in evaluating non-standard proposals.

  • Quote

    perhaps you are an expert on quantum entanglement (I have my doubts). Even so, that will be QE as defined conventionally in terms of spacetime

    I'm an expert in the dense aether model only, and this model defines entanglement as a process/state of phase synchronization of pilot waves.



  • RB: would you perhaps prefer to repeat your 6 figure THH mantra

    The problem is that you have never refuted this statement by referring to the actual

    data in the Durr et al 2016 arxiv paper that you cited

    as having 6 digit significance for the n-p mass difference


    1.5 +- 0.3 MeV or... as Durr states, 1.51 +- 0.28 MeV


    Until you do I shall keep reminding you and the forum of your baseless assertion.

    If it irritates you please address the issue directly.


    This is an issue of the precision of quantum mechanics in the modelling of

    nuclear physics, which is what this topic is about, and I shall remind you periodically on this thread about

    it. If you don't like it, complain to the moderators as you have done on another thread.

  • The initial state of the QM wave function is settled at the start of the entanglement, and the correlation seen at the measurements

    is a consequence of how the initial state was set up. The measurements use the wave function squared. Mills shows similarly

    how the fields are set up in an initial state and then progress normally, and the measurement is in the fields squared. This

    is also how the entanglement is set up in all experiments and quantum computers. No mystery at all: it all boils down to the fact

    that the initial state introduces a dependency that is maintained in the progression of the fields. So in this interpretation there is no

    mystery at all. The mystery becomes acute when one uses other interpretations of the meaning of the wavefunction than just this,

    which shows that those interpretations are most likely wrong. Most likely you could make those entanglements in quantum computers

    with Mills's device as well. So yes, entanglement is a proven phenomenon; it is just badly named, and it is very simple,

    straightforward and non-mysterious if you see it from the correct direction.
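    Read as the standard quantum-mechanical bookkeeping, this is easy to make concrete. A minimal sketch (plain NumPy, illustrative only - it is the textbook QM calculation, not Mills's field equations):

```python
# The correlation is fixed by the initial joint state; every later
# prediction is |amplitude|^2 (Born rule) in that same state.
import numpy as np

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)     # (|00> + |11>)/sqrt(2)

def basis(theta):
    """Single-qubit measurement basis rotated by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

def joint_probs(theta_a, theta_b, state=bell):
    """P(a, b) when site A measures at theta_a and site B at theta_b."""
    U = np.kron(basis(theta_a), basis(theta_b))
    return np.abs(U @ state) ** 2              # outcomes ordered 00, 01, 10, 11

print(joint_probs(0.0, 0.0).round(3))          # [0.5 0. 0. 0.5] -> same basis: perfect correlation
print(joint_probs(0.0, np.pi / 8).round(3))    # correlation falls off as cos^2 of the angle difference
```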

  • if you see it from the correct direction

    Mills's critique of conventional science is so clearly both wrong and ignorant

    Stefan.. you are so polite :) and actually try to engage with Mills's statement rather than throwing in ad hominems holus-bolus.

    Entanglement is a phenomenon that does not need Durr's entangled explanation.

    I think Mills devotes about three or four pages in GUTCP to explaining the experiment, Durr's viewpoint and... his own.

    I am getting used to reading GUTCP gradually. It covers a huge swathe of physics..

    I have found some bits useful.

    The December 2018 version I have yet to download:

    https://brilliantlightpower.com/book-download-and-streaming/


    As an alternative to THHuxley's badmouthing, it is useful to read Professor Engelmann's review:


    https://brilliantlightpower.com/engelman-review/


  • The calculations needed are actually quite tough, and you need books and references to follow all the steps. If you feel steps are missing,

    you might see more details if you download the latest version of GUTCP.

  • RB: perhaps you would like to read the statements below and agree with, or give your reasons for not agreeing with, the following:


    (1) The 2012 experiment does not disprove the HUP. Rather, it shows that a never-proven informal inequality in common use is inexact, as was already known (proven from QM theory by Ozawa in 2003). So far from disproving the QM HUP, this is in line with what has been proven.


    (2) Further, an exact proof of a similar HUP inequality, which is compatible with all experiments, was given in the 2013 paper commenting on the 2012 result.


    (3) Mills's classical explanation of double-slit results cannot explain the measurement statistics from the very many diphoton experiments where measurements on entangled photons clearly show entanglement.


    (4) A more sophisticated objection to such being true entanglement, which relies on some classical effect between the choice of measurement orientation and photon state, has been proven (2018) to require a classical causal link between quasar light generation 8 billion years ago and entangled diphoton generation in a lab now. That seems unlikely, to put it mildly.


    (5) Mills's interpretation of the 2012 experiment is contradicted by the abstract of the 2012 paper itself, and further destroyed by the 2013 follow-up.


    Informally: Mills has been making extravagant claims that "QM is wrong" for a long time, not one of which is correct, and some of which, as above, expose a deep inability to read and understand the experimental and theoretical literature even at the level accessible to many here. I'm not saying Mills is incapable of understanding QM (though that seems quite possible), but he is in that case determined not to read the literature.


  • I agree with this as an explanation of the phenomenon Mills says it describes, Stefan. Unfortunately, as I have said above and given many references for, the very large number of key entanglement experiments which are based on statistical measurements of isolated entangled diphoton pairs with a dynamically changing measurement orientation cannot be so explained. Weird that Mills does not know of these.


    I can only think that Mills singles out this one experiment (which does not at all disprove classical mechanisms for entanglement) because, looked at in isolation, he can say this.


    But he does not reference the large literature on hidden-variables interpretations of QM and the experiments which make them look highly unpleasant! That is the core of his argument, so it is weird that he is unaware of the vast literature.


    If anyone (RB, Stefan?) would like to go through this I will pick out one of these classic experiments, we can apply Mills's explanation to it, and see where it fails. There has been a lot written explaining this, so I can probably find some good semi-technical Quanta etc. articles as well.


    THH

  • and while we are interpreting experimental results

    does Durr et al.'s 2015 supercomputer modelling by QCD/QED


    actually show 6 digit accuracy for the n-p mass difference?

    or 2 digit or 3 digit accuracy?


    I think you know the reference.. since you first cited it.:)

  • Feel free .. but lose the ad hominems.. people might actually read your stuff


    You need to up your own game before taking on that tone. THH's posts are always well-written and rational. He seldom indulges in ad hominem attacks as far as I can see. In contrast, your posts are frequently poorly organized and are sometimes so ungrammatical and fractured that I have difficulty figuring out what you are talking about. And you indulge in a lot of name calling*. That is not just unpleasant, it actually obscures whatever point you are trying to make.


    *Edit: In reviewing this, I think that "taunting" is a better term for what I had in mind here than "name calling".

  • Forget Mills for the moment; fitting his idea to all possible cases is not something one easily and quickly performs without spending a considerable amount of time. But let's focus on this. Isn't quantum entanglement

    just initiating the wave function and then later calculating correlations by using the wavefunctions at the different sites? Can you in a few words explain what more there is in entanglement? Are there experiments where one needs

    more than Schrödinger or Dirac to explain the results of the experiments?

  • The problem is that you have never refuted this statement by referring to the actual

    data in the Durr et al 2016 arxiv paper that you cited

    as having 6 digit significance for the n-p mass difference


    Yes, but I rowed back on that shortly after: and you ignored that and kept on repeating this weird 6 digit mantra!


    I am however quite interested in the general topic of who makes better predictions: Mills or QED/QCD. The problem here is QCD, from which it is thoroughly difficult to get highly accurate results for calculational reasons.


    However, we have QED - the "world's most accurately tested ever theory". I'm fascinated by, for example, the weird 2pi values that enter into Mills's claimed calculation for the anomalous magnetic moment of the electron.


    Stefan - have you looked at this value's derivation (it is a cubic in alpha). I'd like to go through it in detail to understand how the anomalous 2pis get there (the alpha^2 part of the alpha cubed term not divided by 2pi).


    My reference for Mills is: http://zhydrogen.com/wp-content/uploads/2013/04/test6.pdf




    1 + alpha/2pi is simply stolen from the first order (in alpha) QED expansion coefficient, which is known analytically to be exactly this.

    Mills' semiclassical derivation based on the Poynting Power Theorem agrees with QED to this order, which should not surprise us.


    Let us work this out. Current experimental value for ae (the above value is supposed to be ae+1):

    ae = 0.001 159 652 181 643(764) (from 2011 Wikipedia)


    Also: a value from "Control of a Single-Electron Quantum Cyclotron: Measuring the Electron Magnetic Moment" (2011) is given in Wikipedia and is consistent.


    Also, alpha is known roughly as:

    α−1 = 137.035999049(90) (from 2010/2011 refs 3, 4 in https://arxiv.org/pdf/1705.05800.pdf)

    Also cf. the 2014 CODATA value, 137.035999139(31), which is consistent and only a little more accurate.


    For convenience we calculate alpha/2pi = 0.001 161 409 733



    Using just the first order QED-only term (ignoring higher order QED, hadronic and electro-weak components):

    alpha/2pi = 0.001 161 409 733


    That is 6 significant figures on g/2, but since the first figure is 1 this is really only 5 significant figures. If you take the value of ae (g/2 - 1), which is the anomalous part, it is of course only accurate to about 3 significant figures.


    So our starting point is this first-order analytical QED approximation. How much extra accuracy do Mills's next two terms give us?

    Subtracting the first-order term from the real value we get:


    -0.0000017576 - this is the number that Mills has to use numerology to hit!
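    To keep the arithmetic transparent, here is a minimal sketch of this step in plain Python, using only the values quoted above:

```python
# Recompute this subtraction from the values quoted above
# (alpha^-1 = 137.035999049, ae = 0.001 159 652 181 643).
import math

alpha = 1 / 137.035999049
first_order = alpha / (2 * math.pi)                # Schwinger term, alpha/2pi
ae_exp = 0.001159652181643                         # experimental anomaly quoted above

print(f"alpha/2pi = {first_order:.13f}")           # ~0.0011614097332
print(f"residual  = {ae_exp - first_order:.4e}")   # ~ -1.7576e-06: what the
                                                   # higher-order terms must supply
```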




    ae = 0.001 159 652 180 73(28)


    This differs by 1 in the 12th decimal place.


    Mills's references are from 2006:


    He quotes alpha-1 = 137.03604(11) from

    R.C. Weast, CRC Handbook of Chemistry and Physics, 68th edition (CRC Press, Boca Raton, FL, 1987–88), pp. F-186–F-187.

    This is consistent with the 2010 value but 3 significant figures less accurate.


    ae = 0.001 159 652 188(4) from


    R.S. Van Dyck Jr., P. Schwinberg, and H. Dehmelt, Phys. Rev. Lett. 59, 26 (1987).

    This is slightly inconsistent with the current value, being two SD too high.


    Mills calculates

    ae = 0.001 159 652 120

    which he compares with (his 2006 experimental value)

    ae = 0.001 159 652 188(4)


    Excellent agreement


    Mills (in 2006) notes that values for the fine structure constant are variable. Indeed his alpha-1 value, 137.03604(11), has a fractional error of about 10^-7.


    Propagating this error, the fractional error in ae is (to first order) the same as the fractional alpha error, which is the same as the fractional alpha-1 error.


    that gives a Mills calculated ae error (from his 2006 alpha data) of:

    0.001 159 652 120(100)


    [EDIT - I meant to delete this: The calculated value is coincidentally 50X better than would be expected if his formula were precisely correct, given his stated error in alpha!


    Mills spends some time discussing different values for alpha: but he is cheating! He talks about the remarkable agreement between his value and the correct value, when he cannot have a value of alpha that justifies this level of accuracy! So his 11-significant-figure accuracy for ae+1 is the same as 8-significant-figure accuracy for alpha.]



    Let us see what happens if we use more recent values. The key value is that of alpha - which is less precise than ae by 2 sig figs.


    Using the current (CODATA 2014) value for alpha of

    137.035 999 139(31)


    We have (alpha/2pi) = 0.001 161 409 732 41(25)


    We get a Mills value for ae of:

    +0.001 161 409 732 41(25) (1st order - same as QED 1st order)

    -0.000 001 798 496 75 (2nd order)

    +0.000 000 041 231 02 (3rd order)

    +0.001 159 652 466 68(25) (total)



    versus CODATA value for ae of

    +0.001 159 652 180 91(26)
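    A minimal sketch of that comparison in plain Python, taking the second- and third-order terms exactly as quoted above:

```python
# Re-do the sum of the three terms quoted above against the CODATA 2014
# numbers (alpha^-1 = 137.035999139, ae = 0.00115965218091).
import math

alpha = 1 / 137.035999139
t1 = alpha / (2 * math.pi)        # 1st order: reproduces +0.001 161 409 732 41
t2 = -0.00000179849675            # 2nd-order term as quoted above
t3 = +0.00000004123102            # 3rd-order term as quoted above
mills_total = t1 + t2 + t3

ae_codata = 0.00115965218091
print(f"Mills total : {mills_total:.14f}")              # ~0.00115965246668
print(f"discrepancy : {mills_total - ae_codata:.2e}")   # ~ +2.9e-10, versus the
                                                        # ~2.5e-13 alpha-propagated error
```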



    Using recent values Mills is wrong by a factor of 500X the error bound


    Mills's calculation is precise, if his theory is correct. So something must give.


    Mills claims that the CODATA values for alpha may be wrong (by a factor of 1000X larger than the error bound?) because they involve QED. We may have to investigate this if anyone here (Stefan, RB?) feels that is a plausible claim.


    Alternatively, Mills must now invoke otherwise unspecified errors to explain his lack of correspondence with theory.


    How does QED do? A 2017 improved QED 10 loop calculation is https://arxiv.org/pdf/1712.06060.pdf


    ae = +0.001 159 652 182 03 (72)




    But evaluating these calculations precisely is complex: the values given all depend on alpha - with error to first order the same (fractionally) as alpha. Nevertheless this is 1000X better than Mills, unless you conclude that the CODATA value of alpha is wrong by 1000X the stated error.


    Stefan: there are quite possibly mistakes in this, though I've put some effort into it; let me know if you find any!



    THH

  • THH, thanks for the interesting thoughts!


    The derivation of the g factor is something I looked into, and as far as I can tell Mills uses an OK approach in the beginning that one can follow and apply through the integrals of the fields. I can only see a possible fudge of a factor of 2 in those fields. But as you say, the 2 pi are a weird factor and I think that most of us will fail to understand those. What I can tell is that he views the fields in different frames of reference, one in the lab and one at c speed, which are really strange and I have not seen any references for this view elsewhere. But if you look into it, it has a structure that is reused many times to yield correct results all over the place. You can do this fudge one time or not and you get a few bits of extra precision from it, which is way too small to explain the correctness of the formula. Anyhow, I have an idea of what these frames of reference are. In the c frame you could consider the solution as a standing wave that basically has a radial component (moves in and out radially) with a period of r. But when you spin off a photon and move it, it will in the lab frame (not in the c frame) circulate the orbit and hence the period is 2 pi r. That is at least my hand-waving attempt to explain this strange fact (there doesn't seem to be room enough to fudge, so it looks to have some kind of unknown validity).


    I agree that QED is exact in its results, but here I miss the correct statistical approach to fix more digits. It looks like the theory always follows the new experimental accuracy. Also, as far as I understand, the expansions need a decision about which terms to add and which not to add; at least that is the critique that Mills makes, i.e. fudging.


    Anyhow, I think you made a little mistake in the claim that Mills was once over-exact in his value. You had alpha-1 = 137.03604(11); that's +/- 2 (2 sd) on the eighth figure, and it is quite likely

    to get the eighth figure correct by chance, which is basically what he has.


    The accuracy is pretty high in Mills's derivation, and I am open to both QED and Mills as valid approaches to derive the result; this indicates that perhaps QED is based on a more exact formulation in certain setups but uses a far more complicated model compared to Mills's theory. Perhaps W will one day see how his further work can shed light on what this correspondence really is, because he has ideas of how to enrich Mills's theory to improve accuracy. I certainly view QED as a non-fudge formula.

  • THH's posts are always well-written and rational.

    The problem is that you have never refuted this statement by referring to the actual

    data in the Durr et al 2016 arxiv paper that you cited

    as having 6 digit significance for the n-p mass difference


    You haven't, THHuxleynew... not really.

    You rowed back and said that it had 3.5 digit precision.

    When I wrote to Stephan Durr he said his data showed


    1.51 +- 0.28 MeV


    which is not 3.5 digit precision in my book.

    It's something like 2 digit significance: 1.5 +- 0.3.

    Now 2 digit precision is a whole lot different from 6 digit precision.

    Clearly the QED/QCD modelling of a nuclear parameter,

    after seven years of supercomputer modelling and teraflops,

    is rather crude.
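    As a back-of-envelope illustration only (a rough rule of thumb, not a claim about the paper's own error analysis), the number of digits a quoted uncertainty can support may be estimated like this:

```python
# Rough rule of thumb: how many figures does a quoted uncertainty support?
import math

def supported_digits(value, uncertainty):
    """log10(value/uncertainty): roughly how many figures are meaningful."""
    return math.log10(abs(value) / uncertainty)

print(f"{supported_digits(1.51, 0.28):.1f}")      # ~0.7 -> one to two significant figures
print(f"{supported_digits(1.51, 1.51e-5):.1f}")   # ~5.0 -> what ~6-figure agreement would need
```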


    Now let's go back to entanglement.

    What specifically is wrong and ignorant...

    "clearly both wrong and ignorant"

    as you have stated about Mills's

    explanation of the entanglement phenomenon?

    Perhaps you would be kind enough to explain that in the context of the 1998 Durr et al. findings, as Mills has done over several pages in GUTCP.

  • Forget Mills for the moment; fitting his idea to all possible cases is not something one easily and quickly performs without spending a considerable amount of time. But let's focus on this. Isn't quantum entanglement

    just initiating the wave function and then later calculating correlations by using the wavefunctions at the different sites? Can you in a few words explain what more there is in entanglement? Are there experiments where one needs

    more than Schrödinger or Dirac to explain the results of the experiments?


    Yes - if that were all there was to entanglement then it would not be non-local.


    The issue is quite subtle, and to do with how the two measurements (of the two entangled photons) relate to each other.


    First - here is a really good "anyone can understand it" detailed backgrounder on entanglement:


    https://www.quantamagazine.org…ent-made-simple-20160428/


    Key concept: To get the (non-local => non-classical) weirdness you need entanglement AND complementarity.


    Now - having read that - why does Mills's idea not work?


    The problem is this. Suppose you have two complementary properties: color (red or blue) and shape (round or square). (In diphoton experiments these correspond to measurements of spin in two directions at right angles to each other.)


    QM makes sure that if you measure one property the other will be random and vice versa.


    But, if you have two entangled photons P1 and P2, and measure one property (e.g. color) at P1, and the same property at P2, the measurements must agree. Classically that is fine: you can say that the two photons start in the same state, red or blue.


    However, the same is always true if you measure the other property on P1 and P2 (shape). Again that is fine: you can prepare a system with either two square or two round photons.


    The nonclassical issue is this. You can decide which measurement you are doing independently of preparing the photons. Whichever measurement you do (color or shape) you always get agreement if the two measurements are the same. And, if you measure the two opposite properties (color and shape) for P1 and P2 you get random results (no correlation).


    Because those statistics are always true, and you can choose the measurements you want to do independently of the photon generation source, there is no way that a local description of the system can generate the observed probabilities.


    The 2018 experiment used light originating 8 billion years ago to determine which measurement was done and still got these non-local statistical correlations. Pretty difficult to see how that can be explained by any classical method.


    Now, I've skipped over things a bit, but you can see the argument in more detail by following the proof of Bell's Theorem - which has been validated many, many times by different experiments.
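    For anyone who wants to see that bound in numbers, here is a minimal sketch of the standard textbook CHSH setup (illustrative only, not tied to any particular experiment):

```python
# Minimal sketch of the CHSH form of Bell's theorem: the quantum prediction
# for an entangled photon pair reaches 2*sqrt(2) ~ 2.83, while any local
# hidden-variable ("shared instruction set") model is bounded by 2.
import numpy as np

def E(a, b):
    """QM correlation for entangled photons measured at polarizer angles a, b."""
    return np.cos(2 * (a - b))

a1, a2 = 0.0, np.pi / 4            # Alice's two polarizer settings
b1, b2 = np.pi / 8, 3 * np.pi / 8  # Bob's two polarizer settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(f"quantum CHSH value S = {S:.3f} (local-realist bound: 2)")   # ~2.828
```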


    Here is a description of that - and the loophole which has now, courtesy of quasar light, been effectively closed:

    https://www.quantamagazine.org…l-test-loophole-20170207/
