Fact Check, debunking obviously false information

  • I like this quote from the paper Robert linked:

    ...Prior to discovery of the Lamb Shift, calculating the mass of an electron produced an infinite, and therefore meaningless, result. Theorists used the experimentally measured mass of an electron to replace the infinite mass and renormalized the quantum electrodynamics equations...


    Woohaa...the predictive power of QED.


  • Pre-Lamb QED was discrepant with the Lamb shift

    Bethe adjusted QED to explain the Lamb shift on a train trip after it was announced by Lamb at a summer conference.


    Indeed - Bethe adjusted QED so that it correctly calculated the Lamb shift in 1947 (the same year the Lamb shift was accurately measured), not 1960.


    Anyway you seem all over the place here. Previously you were claiming QED was invented in 1960, so could not predict the Lamb shift in 1947. Now it is invented in 1940?? And "traditional QED" is non-relativistic? Sounds wrong to me!


    :)


    More seriously, QED has had a long history because it was difficult to work out how to make QM work with relativistic fields. When you did this naively, you got infinities (dealt with, unsatisfactorily but provably correctly, by renormalisation). And having done that you have a theory where everything is a perturbation theory expansion.


    QED has made spectacularly successful predictions throughout its history. And there have been no fudge factors (other than every calculation resting on alpha). All the corrections were expected from the original QED as higher order corrections. The more recent improvements come from incorporating QCD as well. The more you need to take higher orders in alpha, the more work you do, and also the more QCD corrections become significant. So while calculating to higher precision means ever more work, no new assumptions are being added.
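
    To make "higher order corrections" concrete, here is a minimal numeric sketch (my illustration, not from the post above) of the textbook QED series for the electron's anomalous magnetic moment a_e = (g-2)/2 as a power series in alpha/pi, with published coefficient values quoted to a few digits. Each extra order costs far more computation but introduces no new adjustable parameter:

      # Sketch: a_e = C1*(alpha/pi) + C2*(alpha/pi)^2 + C3*(alpha/pi)^3 + ...
      # C1 = 1/2 is Schwinger's exact result; C2 and C3 are the known
      # higher-order values, truncated here for illustration. alpha itself
      # is measured independently of g.
      import math

      alpha = 1 / 137.035999
      x = alpha / math.pi
      coeffs = [0.5, -0.328478965, 1.181241456]

      a_e = sum(c * x ** (n + 1) for n, c in enumerate(coeffs))
      print(f"a_e through order alpha^3: {a_e:.12f}")
      # prints ~0.001159652..., already close to the measured 0.00115965218...;
      # still higher orders (plus small hadronic/electroweak terms) close the gap.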


    W above ridicules the QED expansions. I guess he thinks physics needs to be soluble with closed-form equations, and if it is not then it is no good? Well, in the known structure of QFT, fields are quantised and correspond to particles, and the vacuum (ground) state has virtual particles appearing and disappearing all the time. With this structure, there will always be higher order corrections coming from more complex loops.


    • If W is arguing against QFT - that fields are in reality quantised - then he needs to explain all of the experimental data that supports this in some other way. The start of such an explanation would be a well-defined set of equations of motion for the alternate theory.
    • If he is not arguing against QFT then as soon as he gets round to formulating a Lagrangian and working out from first principles the dynamics of his system he will find higher order expansions in alpha are necessary, because of higher order loops, which always exist.


    THH


  • Renormalisation is conceptually complex - and was originally a "trick" that worked but seemed weird (cancelling infinities).


    It is not in fact weird - but the underlying physical meaning took a good deal more time to discover.


    Here from stackexchange is a good summary posed as Q&A:


    Now what has this got to do with your lattice idea of spacetime? Maybe you mean that at low energies, perturbation is not needed in the calculations? But without perturbation, the magnetic moment of an electron would be different from the measured value. Also, how does your statement that "Renormalization is the procedure of figuring out what a quantum field theory with a given symmetry looks like at low energies" fit with the idea of cancelling out infinities?


    Great question! The idea of figuring out what a quantum field theory with a given symmetry looks like at low energies makes sense - let's call this the Wilson-Kadanoff renormalization group. The idea of cancelling infinities is nonsensical. So the latter is simply a calculational trick, while the former provides the conceptual foundation. Historically, the trick was discovered first, and was accepted even though it was nonsensical because of its successful experimental predictions. However, Feynman, Dirac and many physicists continued to worry about the nonsensical subtraction of infinities. Around 1970, the discovery of the Wilson-Kadanoff renormalization group gave a conceptual basis to the calculational trick (i.e. no infinities are actually subtracted), and physicists stopped worrying about the subtraction of infinities.


    A slightly technical, but quite readable if you are patient, history is given in the first chapter of Zinn-Justin's book. More technical details are found in

    http://arxiv.org/abs/hep-th/9210046v2

    http://web.physics.ucsb.edu/~mark/ms-qft-DRAFT.pdf (chapter 29)



    Source https://www.physicsforums.com/…ives-to-qft.574308/page-2

  • Anyway you seem all over the place here

    Good quantum wriggle, THHuxleynew.

    QED has developed over time through trial and error. Even the name has evolved since Dirac.

    It was not predictive... it NEVER predicted the Lamb shift as THHNew asserted


    Brett Holverstott was entirely accurate when he stated:

    "The theory has never been compatible with special or general relativity; it didn't predict electron spin, and it failed to predict a host of subtle changes in electron energy levels such as the Lamb Shift, Fine Structure, and Hyperfine Structure."


    Although THHnew dismissed Holverstott:

    Forgive me for not viewing Brett Holverstott, writing a puff for Mills, as a reliable scientific source.

    And, indeed, QED (you know that theory that predicts virtual particles, etc) did indeed predict the Lamb effect


    In reality THHuxleynew is writing a puff piece for QED... contrary to history

    when he states

    QED (you know that theory that predicts virtual particles, etc) did indeed predict the Lamb effect


    THHnew is not a reliable scientific source... no matter how he spins it

    https://pdfs.semanticscholar.o…beb075eda16b0a494ede3.pdf

    Fact check... No one predicted the Lamb effect. Not Dirac... not Bethe...

    Not Tomonaga, not Dyson, not Wigner, not Feynman, not Schwinger...

    Not even THHuxleynew...

  • THHnew is not a reliable scientific source... no matter how he spins it


    How tiring RobertBryant's juvenile personal vendetta against THH is.


    RB... what "Technical Content" does your badgering add? It is a pointless twist of words (whether 100% accurate or not) about whether something was instituted in 1940, 1960 or never. YOUR point is simply an attempt to make THH look bad and has ZERO "Technical" merit.


    A one-time correction is quite enough for most mature and level-headed debaters. However, you have taken a personal grudge against someone who simply puts forward his opinions and views, most often with supporting links. You do not like those views, so you personally attack and hang on to some rather mundane points ad nauseam.


    It is fine that you do not agree with someone; however, you cannot simply leave it at that. You, like many Rossi believers, have to personally attack those who do not support your view or whom you feel are a threat to it. Almost all of your repetitive, juvenile badgering is meaningless "NO technical content" that does not add to or improve the discussion. You simply try to personally tarnish THH because he does not agree with your standing. You seem unable to stand someone holding a view contrary to yours when they make a substantial case. So instead of defending your position, you attack them personally.


    Quite sad... why not grow up and act your age? Disagree, post your supporting data and then let your facts do the talking. Your repetitive, juvenile badgering only reflects badly on you, not THH.

  • I think a lot of the internet dislike of SM comes from people who have never fully understood QFT and QED. They are not simple, although also not impossibly complex. Then, further dislike comes from the fact that:

    • QCD calculations are so difficult, and quark confinement is counter-intuitive and different from other forces.
    • In QED renormalisation sounds like a fudge (and has often been described as such, and seems weird). It is not simple to understand, though quite simple to do.


    I strongly recommend this description of renormalisation.


    At the end, and relevant to the "fudge" issue, these paragraphs on regularisation:


    Since the quantity ∞ − ∞ is ill-defined, in order to make this notion of canceling divergences precise, the divergences first have to be tamed mathematically using the theory of limits, in a process known as regularization.

    An essentially arbitrary modification to the loop integrands, or regulator, can make them drop off faster at high energies and momenta, in such a manner that the integrals converge. A regulator has a characteristic energy scale known as the cutoff; taking this cutoff to infinity (or, equivalently, the corresponding length/time scale to zero) recovers the original integrals.

    With the regulator in place, and a finite value for the cutoff, divergent terms in the integrals then turn into finite but cutoff-dependent terms. After canceling out these terms with the contributions from cutoff-dependent counterterms, the cutoff is taken to infinity and finite physical results recovered. If physics on scales we can measure is independent of what happens at the very shortest distance and time scales, then it should be possible to get cutoff-independent results for calculations.

    Many different types of regulator are used in quantum field theory calculations, each with its advantages and disadvantages. One of the most popular in modern use is dimensional regularization, invented by Gerardus 't Hooft and Martinus J. G. Veltman, which tames the integrals by carrying them into a space with a fictitious fractional number of dimensions. Another is Pauli-Villars regularization, which adds fictitious particles to the theory with very large masses, such that loop integrands involving the massive particles cancel out the existing loops at large momenta.

    Yet another regularization scheme is the Lattice regularization, introduced by Kenneth Wilson, which pretends that our space-time is constructed as a hyper-cubical lattice with fixed grid size. This size is a natural cutoff for the maximal momentum that a particle could possess when propagating on the lattice. After doing calculations on several lattices with different grid sizes, the physical result is extrapolated to grid size 0, i.e. our natural universe. This presupposes the existence of a scaling limit.

    A rigorous mathematical approach to renormalization theory is the so-called causal perturbation theory, where ultraviolet divergences are avoided from the start in calculations by performing well-defined mathematical operations only within the framework of distribution theory. The disadvantage of the method is the fact that the approach is quite technical and requires a high level of mathematical knowledge.
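
    As a toy illustration of the cutoff idea (my sketch, not from the article quoted above): each "loop integral" below diverges logarithmically as the cutoff grows, yet a suitably subtracted combination converges to a finite, cutoff-independent answer, which is the essence of cancelling divergent terms against counterterms.

      # Toy regularization: I(m, L) = integral from 0 to L of k dk / (k^2 + m^2)
      #                             = 0.5 * ln(1 + L^2 / m^2)
      # diverges as the cutoff L -> infinity, but the subtracted quantity
      # I(m1, L) - I(m2, L) tends to the finite value 0.5 * ln(m2^2 / m1^2).
      import math

      def loop_integral(m, cutoff):
          return 0.5 * math.log(1 + cutoff ** 2 / m ** 2)

      m1, m2 = 1.0, 2.0
      for cutoff in (1e2, 1e4, 1e6, 1e8):
          raw = loop_integral(m1, cutoff)               # grows without bound
          subtracted = raw - loop_integral(m2, cutoff)  # settles down
          print(f"cutoff={cutoff:.0e}  raw={raw:7.3f}  subtracted={subtracted:.6f}")
      print("limit:", 0.5 * math.log(m2 ** 2 / m1 ** 2))  # ln 2 ~ 0.693147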


    Conclusion (of the whole link):


    The early formulators of QED and other quantum field theories were, as a rule, dissatisfied with this state of affairs. It seemed illegitimate to do something tantamount to subtracting infinities from infinities to get finite answers.

    Dirac's criticism was the most persistent. As late as 1975, he was saying:

    "Most physicists are very satisfied with the situation. They say: 'Quantum electrodynamics is a good theory and we do not have to worry about it any more.' I must say that I am very dissatisfied with the situation, because this so-called 'good theory' does involve neglecting infinities which appear in its equations, neglecting them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it is small - not neglecting it just because it is infinitely great and you do not want it!"

    Another important critic was Feynman. Despite his crucial role in the development of quantum electrodynamics, he wrote:

    "The shell game that we play ... is technically called 'renormalization'. But no matter how clever the word, it is still what I would call a dippy process! Having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent. It's surprising that the theory still hasn't been proved self-consistent one way or the other by now; I suspect that renormalization is not mathematically legitimate."

    The general unease was almost universal in texts up to the 1970s and 1980s.


    Beginning in the 1970s, however, inspired by work on the renormalization group and effective field theory, and despite the fact that Dirac, Feynman and various others never withdrew their criticisms, attitudes began to change, especially among younger theorists. Kenneth G. Wilson and others demonstrated that the renormalization group is useful in statistical field theory applied to condensed matter physics, where it provides important insights into the behaviour of phase transitions. In condensed matter physics, a real short-distance regulator exists: matter ceases to be continuous on the scale of atoms. Short-distance divergences in condensed matter physics do not present a philosophical problem, since the field theory is only an effective, smoothed-out representation of the behaviour of matter anyway; there are no infinities since the cutoff is actually always finite, and it makes perfect sense that the bare quantities are cutoff-dependent.

    If QFT holds all the way down past the Planck length (where it might yield to "string theory" or something different), then there may be no real problem with short-distance divergences in particle physics either; all field theories could simply be effective field theories. In a sense, this approach echoes the older attitude that the divergences in QFT speak of human ignorance about the workings of nature, but also acknowledges that this ignorance can be quantified and that the resulting effective theories remain useful.

    In QFT, the value of a physical constant, in general, depends on the scale that one chooses as the renormalization point, and it becomes very interesting to examine the renormalization group running of physical constants under changes in the energy scale. The coupling constants in the Standard Model of particle physics vary in different ways with increasing energy scale: the coupling of quantum chromodynamics and the weak isospin coupling of the electroweak force tend to decrease, and the weak hypercharge coupling of the electroweak force tends to increase. At the colossal energy scale of 10^15 GeV (far beyond the reach of our civilization's particle accelerators), they all become approximately the same size (Grotz and Klapdor 1990, p. 254), a major motivation for speculations about grand unified theory. Instead of a worrisome problem, renormalization has become an important theoretical tool for studying the behaviour of field theories in different regimes.
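
    To make the "running" concrete, here is a rough one-loop sketch. At one loop each inverse coupling changes linearly with the logarithm of the energy scale; the beta-function coefficients and Z-mass starting values below are standard textbook numbers quoted approximately, so treat the output as qualitative:

      # One-loop running: d(1/alpha_i)/d(ln mu) = -b_i / (2*pi).
      # SM one-loop coefficients (GUT normalization for the U(1)) and
      # approximate inverse couplings at the Z mass.
      import math

      b = {"U(1)": 41 / 10, "SU(2)": -19 / 6, "SU(3)": -7.0}
      inv_alpha_MZ = {"U(1)": 59.0, "SU(2)": 29.6, "SU(3)": 8.5}
      MZ = 91.19  # GeV

      def inv_alpha(group, mu):
          return inv_alpha_MZ[group] - b[group] / (2 * math.pi) * math.log(mu / MZ)

      for mu in (1e3, 1e9, 1e15):
          row = ", ".join(f"1/alpha_{g} = {inv_alpha(g, mu):5.1f}" for g in b)
          print(f"mu = {mu:.0e} GeV: {row}")
      # Around 10^13 to 10^17 GeV the three inverse couplings come within a few
      # units of one another -- the near-convergence mentioned in the text.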

    And here is a great intro to QCD


    More on the topic of QED dislike.



    For those wanting to follow up the sometimes heated arguments about QED, here is a useful link https://www.researchgate.net/p…have_the_jewel_of_physics


    Q:

    Quote from Wikipedia 2016:

    “Quantum electrodynamics.... Richard Feynman called it "the jewel of physics" for its extremely accurate predictions ... the presence of diverging integrals having no mathematical meaning. To overcome this difficulty, a technique called renormalization has been devised, producing finite results .... theory being meaningful after renormalization is that the number of diverging diagrams is finite... quantum electrodynamics displaying just three diverging diagrams.”

    So, renormalization has not removed all the infinities from QED??? How then can Richard say that QED is extremely accurate??? Theoretically QED is a monster, but practically a “jewel”. In fact, the infinities remained:

    Wikipedia: “An argument by Freeman Dyson shows that the radius of convergence of the perturbation series in QED is zero... From a modern perspective, we say that QED is not well defined as a quantum field theory to arbitrarily high energy.[24] The coupling constant runs to infinity at finite energy, signalling a Landau pole. The problem is essentially that QED appears to suffer from quantum triviality issues.”

    So, Richard Feynman was not objective. Renormalization does not completely remove the infinities from the theory. Conclusion: there is no single Quantum Field Theory which is renormalized, or even quantized. So, theoretically, we have no Quantum Field Theory. Please go along the path of David Bohm's theory of Quantum Physics. Thank you.


    A (selected):


    "Can you elaborate?"


    Feynman invented the technology for doing QFT-calculations in general, not only in QED (or just some specific processes in QED, like the electron anomalous magnetic moment). It is known as Feynman diagrams.

    * It is manifestly invariant, combining many Lorentz symmetry breaking contributions of the old Hamiltonian formalism into single, explicitly invariant (or sometimes covariant) terms.

    * It provides a visual representation of long and complicated algebraic expressions, unambiguously defined by each diagram, which makes it easy to identify expressions which are equal, or which cancel each other. Hence it makes it possible to do visual calculations (i.e., incredibly complicated algebraic computations in your head).

    * It supports intuition, interpretation and understanding of the physical mechanisms behind the processes under scrutiny, making it easy to decide which processes are possible in a QFT model (and give a fast quantitative estimate of its likelihood, if it is possible).

    * It provides a clear way to communicate ideas and results. [Q: What is your model? A: Let me draw the Feynman rules. Q: Why do you say this must happen? A: Look at this Feynman diagram. Q: Isn't your model already ruled out by experiment, because then that diagram implies unobserved high probability for that process? A: Uhmm... -- oh shit! Let me think some more...]

    I could continue, but you should have gotten an idea by now.


    "To my knowledge, this was accomplished only by tweaking the electron g factor..."


    This is not correct. The electron (and muon) anomalous magnetic moments can be computed without direct introduction of new parameters. It requires knowledge of the dimensionless fine structure constant, which can be obtained from completely different experiments. For higher order corrections, it also requires knowledge of the ratio between the muon and electron masses (again to be taken from experiments), and the measured energy-dependent cross-section of electron-positron annihilation to hadrons.


    Dmitri> we have no Quantum Theory of Field.

    We do have many mathematically well defined Quantum Field Models (to be simulated on a computer, they must be well defined), defined on space-time lattices. It can be argued that we have no well defined Relativistic Quantum Theory of Fields. But it can be argued against that also: There is no field of physics with better agreement between theory and experiment than QED, best exemplified by the electron (g-2) anomaly.

    QED is not only a Jewel in physics, it is The Finest Jewel in Physics. There may be other fields which shine equally well on the experimental side, or on the theory side, but not to the same extent on both sides at once.


    Dmitri> Please go along the path of the David Bohm's theory of Quantum Physics.

    I would hardly call it a theory; it is at best a way to interpret what has been calculated much more simply in other theories. And it cannot be made to fit with special relativity. You cannot compute the electron (g-2) in Bohm's theory; not even in principle.

    So it is a bad theory, and a wrong theory. Stay away from that path.

  • A one time correction is quite enough for most mature and level headed debaters.


    I agree, however politicians know that psychologically, when people operate at the level of emotions and hunches, repetition => belief.


    So I try not to repeat points and correct posters once, leaving it when they have seen the error of their ways. But I would not make a good politician.

  • How tiring RobertBryant's juvenile personal vendetta against THH is.

    B2 to the rescue again.

    Predictable.

    The problem is that THHuxleynew has stated erroneously that

    QED predicted the Lamb effect


    No amount of spin on T2's part or juvenility on B2's part will change the fact that QED did NOT predict the Lamb effect


    "Fact Check, debunking obviously false information"


    No one predicted the Lamb effect in 1947... not Lamb... not Retherford.

    It was a surprise to them as it was to everybody else

    Holverstott was correct and history is correct.


    If T2 and B2 are content with rewriting history to suit their personas, can they keep it to their own private world? Because this THHistorynew has been debunked.


  • Perfect case in point.


    THH responded in #303 in emotionless, regular type, without personal attack.


    Your emotional shouting back shows the exact case in point and you have posted several times on it, unprovoked.

    The point brings little to no technical merit and is clearly simply a vendetta that seems to consume you.


    So if I ask, what difference does the "Lamb prediction" disagreement make? None. You simply seem to have this irrepressible need to show THH wrong, even on minor issues, and cannot keep personal vendetta out of it.


    Who cares if it was 1947 or 1960? If THH is wrong on this, who cares! If you are wrong, who cares? Seemingly only you!


    Not the way a mature debate would or should proceed.


    Oh well...

  • Robert Bryant is actually correct. THH was definitive in stating that QM predicted the Lamb shift (it did not, just as it didn't predict spin). THH said BH wrote a puff piece not worthy of scientific merit, and that Brett is wrong about QM's dates and times and predictions. In short, the travesty is to discredit without challenging the underlying evidence BH presents.


    Robert called him out on his statement and he was right to do that.


    It is almost tragi-comic when Bob#2 says "what difference does it make?" and that it is a "technical point" and a "minor issue."


    Perhaps in the grand scheme dates/times and predictions/explanations can be mixed and matched at our pleasure. Perhaps how nature works is also a "minor issue."

  • If THH is wrong on this, who cares!

    what difference does the "Lamb prediction" disagreement make


    The assertion that QED predicted the Lamb shift just shows up THHnew's extreme bias.


    The QED theory is not predictive and never was.

    The Lamb shift was a real surprise to Dirac, one of the prime movers of the theory, and to Bethe.

    It was way out of left field.

    Basically QED plays catchup with the reality of physics because it actually

    does not have the correct modelling.

    The modifications to QED in its newest NRQED incarnation, made to get ever closer postdictions of the helium spectral data, are heuristic nonsense with no basis in physics:

    https://arxiv.org/pdf/1704.06902.pdf

    THHuxleynew offers this up as evidence of prediction.

    Another historically erroneous position... the helium spectra have been known since long before NRQED.

    What justification is there for throwing in terms which have the fine structure constant to the seventh power? How many arbitrary constants are hidden in the complication?


    Does Bob2 have an answer, or is he just going to indulge in OT and juvenile personal attacks?

    In the end... the shortcomings of the QED/QCD models need to be admitted so that better models with predictive power can be found.


  • All: Huxley isn't being straight with you here. He's giving you a false narrative.


    The Wikipedia history of quantum field theory article tells you how in the 1930s QFT was “plagued by several serious theoretical difficulties”, and the situation was dire, desperate, and gloomy. The problem of infinities or “divergence” was the big one. It stems from the point-particle electron. This was proposed by Yakov Frenkel in 1925. He said electrons “have no extension in space at all. Inner forces between the elements of an electron do not exist because such elements are not available”. This was adopted and promoted by Heisenberg and Pauli and the rest of the Copenhagen school despite the following:

    • Gustav Mie’s 1913 foundations of a theory of matter. That’s where Mie said electrons “are not, as has been believed for twenty years, foreign particles in the ether, but they are only places at which the ether takes on a particular state”. Mie’s chapter 2 is Knot Singularities in the Field.
    • The 1917 Einstein-de Haas effect which demonstrated that spin angular momentum is indeed of the same nature as the angular momentum of rotating bodies as conceived in classical mechanics.
    • Arthur Compton’s 1921 paper on the magnetic electron. He referred to the Parson electron or magneton which featured a rotation with a “peripheral velocity of the order of that of light”. Compton said “we may suppose with Nicholson that instead of being a ring of electricity, the electron has a more nearly isotropic form”.
    • The 1922 Stern-Gerlach experiment which demonstrated that the spatial orientation of the electron's angular momentum is quantized. They used silver atoms, which have an outer electron.
    • Louis de Broglie's 1923 letter to Nature on waves and quanta. He said he’d “been able to show that the stability conditions of the trajectories in Bohr’s atom express that the wave is tuned with the length of the closed path”. His 1924 thesis was on the theory of quanta.
    • Erwin Schrödinger's 1926 paper quantization as a problem of proper values, part I. He said things like “a closer definition of the surface harmonic can be compared with the resolution of the azimuthal quantum number into an ‘equatorial’ and a ‘polar’ quantum’” and the “main difference is that de Broglie thinks of progressive waves, while we are led to stationary proper vibrations”.
    • Erwin Schrödinger's 1926 paper quantization as a problem of proper values, part II. He talked about wavefunction and phase and geometrical optics, and on page 18 said classical mechanics fails for very small dimensions of the path and for very great curvature.
    • Erwin Schrödinger's 1926 paper quantization as a problem of proper values, part 3. He says “since then I have learned what is lacking from the most important publications of G E Uhlenbeck and S Goudsmit”. He referred to the angular momentum of the electron which gives it a magnetic moment, and said “the introduction of the paradoxical yet happy conception of the spinning electron will be able to master the disquieting difficulties which have latterly begun to accumulate”.
    • Franco Rasetti and Enrico Fermi’s 1926 paper on the rotating electron. They said the electron “has almost always been considered to be a material point up to now”. They also said this: “it was only in recent years that Uhlenbeck and Goudsmit made the hypothesis that the reason for some spectroscopic phenomena – in particular, the anomalous Zeeman effect – was to be found in a structural element of the electron. Those authors assumed precisely that the electron is animated with a rotational motion around itself, in such a way that it possesses a quantity of a real motion, namely, a magnetic moment”. Rasetti and Fermi also said that “despite the grave energetic difficulties that have been pointed out, one can conclude that the hypothesis of the rotating electron must not be abandoned”. Unfortunately for quantum electrodynamics, it was.
    • Charles Galton Darwin's 1927 PRSA paper on the electron as a vector wave. He said we must regard the electron as a wave, and its motion in free space or weak fields can be treated by the ordinary theory of waves. He said “it is possible to regard the wave of the electron as in ordinary space”.
    • Robert Oppenheimer’s 1930 note on the theory of the interaction of field and matter. Oppenheimer said “the theory, is, however, wrong, since it gives a displacement of the spectral lines… which is in general infinite”.
    • Landau and Rudolf Peierls' 1931 extension of the uncertainty principle to relativistic quantum theory. They talked of absurd results and the complete failure of the theory, and said “it would be surprising if the formalism bore any resemblance to reality”.
    • Max Born and Leopold Infeld's 1935 paper on the quantization of the new field theory II. On page 12 they said this: “the inner angular momentum plays evidently a similar role to the spin in the usual theory of the electron. But it has some great advantages: it is an integral of the motion and has a real physical meaning as a property of the electromagnetic field, whereas the spin is defined as an angular momentum of an extensionless point, a rather mystical assumption”. On page 17 they said this: “the rest-mass occurring in our theory is not, as in Dirac’s, an absolute constant of the system but the total internal energy, depending on rotation and internal motion of the parts of the system. An external field will influence not only the translational motion, but also these internal motions”. On page 23 they said this: “in the classical theory we got the result S = D x B = E x H”. They’re talking about the Poynting vector. That's light going round and round.

    Renormalization was a kludge. A clumsy fix that was only necessary because the Copenhagen school somehow managed to successfully promote their point-particle electron. Despite all the evidence and papers to the contrary. It's been downhill ever since. Or should I say it's been all downhill since 1925. That was when Pauli shot down Ralph Kronig’s electron spin using the straw-man claim that the electron’s surface would have to be moving faster than light.

  • This is important. Note what Born and Infeld said. “in the classical theory we got the result S = D x B = E x H”. They were talking about the Poynting vector, and I said that's light going round and round. Now read this excerpt from something I've written previously:


    Take a look at what Feynman said in the Feynman lectures: “Suppose we take the example of a point charge sitting near the center of a bar magnet, as shown in Fig. 27–6. Everything is at rest, so the energy is not changing with time. Also, E and B are quite static. But the Poynting vector says that there is a flow of energy, because there is an E × B that is not zero. If you look at the energy flow, you find that it just circulates around and around. There isn’t any change in the energy anywhere – everything which flows into one volume flows out again. It is like incompressible water flowing around. So there is a circulation of energy in this so-called static condition. How absurd it gets!”


    [Image: Fig 27-6 from The Feynman Lectures by Michael A Gottlieb and Rudolf Pfeiffer]


    Only it isn’t absurd at all. The Wikipedia Poynting vector in a static field article talks about a circular flow of electromagnetic energy. It shows the Poynting vector marked with an S. It goes around and around. The article says this: “While the circulating energy flow may seem nonsensical or paradoxical, it is necessary to maintain conservation of momentum. Momentum density is proportional to energy flow density, so the circulating flow of energy contains an angular momentum”. Feynman also said this: “we know also that there is momentum circulating in the space. But a circulating momentum means that there is angular momentum. So there is angular momentum in the field”. You bet there’s angular momentum in the field.
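
    As a quick numeric check of that circulation (my toy sketch, not from Feynman or Wikipedia): put a point charge on the axis of an ideal magnetic dipole and evaluate E × B at an off-axis point. The only nonzero component is the azimuthal one, i.e. the energy flow really does go around and around the axis.

      # Direction of the static Poynting vector S ~ E x B for a charge on the
      # axis of a magnetic dipole (cf. Feynman's Fig 27-6). Units and constants
      # are dropped; only the direction of S matters here.
      import numpy as np

      q_pos = np.array([0.0, 0.0, 0.5])  # point charge on the dipole (z) axis
      m = np.array([0.0, 0.0, 1.0])      # dipole moment at the origin

      def E_field(r):                    # Coulomb field of the charge
          d = r - q_pos
          return d / np.linalg.norm(d) ** 3

      def B_field(r):                    # ideal dipole field
          n = np.linalg.norm(r)
          rhat = r / n
          return (3 * rhat * np.dot(m, rhat) - m) / n ** 3

      r = np.array([1.0, 0.0, 0.3])      # a field point in the x-z plane
      S = np.cross(E_field(r), B_field(r))
      print("S =", S)  # only S[1] (the azimuthal, y component) is nonzero:
                       # the energy flow circulates around the z-axis.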


    That's because the electron is a 511keV photon going round and round. When you've read all those papers and looked at the evidence, it's obvious.


    [Image: strip5electron-e1568465579109.png]

  • If you look at the Clifford torus that is the topological representation of the SO(4) space you realize, as Wyttenbach has said already, that this Moebius strip representation is the 3D projection of the 4D path of the electron. You can trace that Moebius shape as a path over the surface of the torus. So the wave is the 2D shadow, the Moebius strip is the 3D and the torus is the 4D. It is the same phenomenon being seen from different perspectives without finding the relation, until you see the torus as the volume where the electron “lives”.

  • If you look at the Clifford torus that is the topological representation of the SO(4) space you realize, as Wyttenbach has said already, that this Moebius strip representation is the 3D projection of the 4D path of the electron. You can trace that Moebius shape as a path over the surface of the torus. So the wave is the 2D shadow, the Moebius strip is the 3D and the torus is the 4D. It is the same phenomenon being seen from different perspectives without finding the relation, until you see the torus as the volume where the electron “lives”.

    I just don't "get" this 4D path thing. Sorry. Maybe it's because I've lived with the Williamson / van der Mark electron for so long:


    [Image from Is the electron a photon with toroidal topology? by John Williamson and Martin van der Mark]

    It's a three-dimensional steady-state vortex solution with no singularities. Like a Hopf fibration:

    [Public domain image by David A Richter, see Wikimedia Commons; caption: Some of the flow lines along a Hopf fibration]

    I have difficulty getting to grips with a fourth dimension. I think of Charles Galton Darwin's the electron as a vector wave where “it is possible to regard the wave of the electron as in ordinary space” and I struggle to think of it as anything else.


  • You are (mostly) incorrect here. Here is the original:


    THH: SM does not guess quantised spin
    RB:

    I guess SM predicts quantised spin... somewhere...

    By SM... the so-called Standard Model

    I guess THHNew is referring to the QED/QCD/SU??? composite of ninety years of post-Bohr trial and error


    Brett Holverstott refers to the "Quantum bubble"


    Sean Carroll describes quantum mechanics as an “incredibly successful theory” but the frequency with which we hear this refrain does not make it more true.

    It is in fact a hot mess defined by constant failure and revision in the face of new experimental data.


    The theory has never been compatible with special or general relativity; it didn’t predict electron spin, and it failed to predict a host of subtle changes in electron energy levels such as the Lamb Shift, Fine Structure, and Hyperfine Structure.

    Yes it can calculate the excited state energies for hydrogen, but not for helium or anything that comes after.


    https://medium.com/@brett.holv…antum-bubble-8e9c3d9d1d92


    Certainly Dirac didn't predict the Lamb shift. After the Lamb shift etc., Feynman, Schwinger etc. formulated an explanatory theory called QED in the 1960s? unless THHuxleynew's virtual reconstruction of history can be trusted.


    I agree QED does not predict spin (and never said it did). SM is formed from symmetries SU(3) colour × SU(2) spin and isospin × U(1) charge. It unifies isospin and spin - pretty clever. So it does not guess quantised spin, as I said.


    Now as for what QED predicts. The problem here is that most authors use the word prediction to mean "calculations give the experimental value" without concern about time. Mainly because, if they are proper calculations from a proper theory they can't be fudges so it does not matter. Modern QED does predict all those things, with great precision.


    Lamb Shift, Fine Structure, and Hyperfine Structure. Yes it can calculate the excited state energies for hydrogen, but not for helium or anything that comes after.

    It calculates helium excited energies as well, and quite a lot else. Because we are now very good at doing complex calculations.


    Wyttenbach views a coefficient calculated (often numerically) as a complex integral as a fudge factor. That is a very blinkered view of physics which does not allow much about the real world to be calculated. The QED Ci factors in the alpha series expansion for g are textbook work for students (the easy ones) and calculated by multiple teams using different numerical methods and compared (the difficult ones). They are not fudge factors. If you view every complex integral as invalid how can you do science?


    QED does predict the Lamb shift - and it is capable of doing this to very great accuracy - but it is true that the discovery of the Lamb shift motivated the development of the real QED (dealing with those loops that make infinities) that calculates it. So I agree - no prescient prediction - but exquisite correspondence with measurement. I agree people here should be suspicious of calculations that do not precede measurements, but not too suspicious. For example the impressive QED prediction of g (ignoring QCD corrections) is maths, not fudge factors. It says what g is in terms of maths that anyone can reconstruct from a very small number of axioms (I use that to distinguish this from Mills style ad hoc formulae which are hand waving with vague justifications). The predictions of g, Lamb shift, hyperfine structure, ionisation energies, etc all use the same maths and require very little plugged in in terms of experimentally determined parameters. (I can quantify that if anyone challenges it).


    The question of "how many fudge factors does QED have" is complicated by the fact that there are, for the classic g calculation, meson, electroweak, and hadronic corrections that affect things at very high precision. Calculating those requires additional values plugged in, like the ratio of muon to electron mass, coupling constants, etc. But these values are all independently determined by (different) experiments. Just like alpha (on which everything depends), which is also independently measured. So the QED g calculation has no fudge factors.


    It is tough however to find quantitative experimental results where QED has shown greater accuracy than experiment before the experiment. That is because it is a lot of effort to make precise QED calculations, and people don't bother to do it till it is needed to match experiment. So generally the QED accuracy is pushed up until the theoretical uncertainty is just a bit lower than the experimental uncertainty and any anomalies can be detected.


    It comes down to whether you think the whole of Standard Model science is many different scientists all collectively deluded, or whether you think the stuff that is accepted has been tested a lot by people who were not sure it was right, before it was accepted. If you look at the history - or even the current papers - you see a lot of interest in detecting errors - and occasionally they are detected. Independent groups of theoreticians end up with the same results - and they are not colluding.


    In terms of predictions long before experiment the clearest are the neutral currents predicted by Weinberg theory and found, and eventually the W & Z bosons were found, predicted long before.


    I'm fighting on this thread against people who do not understand the merits of the Standard Model, and erroneously think its successes are caused by fudge factors. I often think such views are psychological projection.


    That is not to say it is complete, or satisfactory. Things not understood:


    • Why is magnetic moment the sum of angular and spin components? what connects these two things. Almost certainly we need a deeper understanding of QM (from a quantum gravity theory) to get this. Resorting to classical ideas (spinning charge in some understood continuous geometry) seems all wrong because we know at the size of an electron that the world is quantum, not classical, and that electrons are as near as we know point particles. There is no reason to think that classical notions of rotation help. On the other hand degrees of freedom seem a very basic to QM thing, and geometry in terms of number of dimensions, and dimensions "curled up" that is not seen by us, are all possible I guess. So topology that has macroscopic visualisations is in the frame.
    • What is QM? For me, and many, it seems very likely that the whole of QM, entanglement, virtual particles, QFT, is fundamental to physics, and when there is a unification with GR will explain GR. This world is so different from macroscopic physics that intuition from that does not work, one reason why I am unhappy with semi-classical theories, or theories that attempt to model QM in some classical way. It is understandable that people look for that, but I see it as anthropomorphism. No reason for fundamental physics to have any intuitive connection with macroscopic ideas and geometry, now that we are pretty sure that spacetime is an emergent phenomenon.
    • Where do the particle masses come from? random symmetry breaking, or something more? Do we invoke anthropic principle in a multiverse to determine values that give rise to non-trivial physics and astronomy? I would not mind doing that - others would.
    • Why those symmetry groups? Are there others hidden to us?
    • Lots of other stuff


    But it is no good turning your back on fundamental structure understood and experimentally backed (like QFT) when trying to answer these questions because any answer needs to be realistic.


    Back to BH - did I do him an injustice? His conclusions:


    The theory has never been compatible with special (false)

    or general relativity; (misleading - no-one has quantum gravity yet; unifying strong/weak/em is a pretty big deal)

    it didn’t predict electron spin, (true)

    and it failed to predict a host of subtle changes in electron energy levels such as the Lamb Shift, Fine Structure, and Hyperfine Structure. (it predicts them, but theoretical and experimental work typically happens at about the same time)

    Yes it can calculate the excited state energies for hydrogen, (true)

    but not for helium or anything that comes after. (false)

    Now, we have two key experiments that demonstrate that quantum mechanics fails to account for the behavior of the electron. (false - but perhaps they should be discussed elsewhere)

  • The problem here is that most authors use the word prediction to mean


    THHuxley trying to wriggle again.

    QED does predict the Lamb shift


    No it doesn't; it never did.

    This is exactly what Holverstott said.

    Holverstott said that QED did not predict the Lamb shift


    When most authors use prediction it means predict... in the future.

    Of course THHnew takes a contrary view to suit his idiosyncratic narrative


    I predict that THHuxley will attempt some more wriggling to justify an erroneous interpretation of history. QED has been postdictive throughout its varied 90-year history.


    Predict

    "to say that an event or action will happen in the future, especially as a result of knowledge or experience:

    https://dictionary.cambridge.org/dictionary/english/predict

  • I'm fighting on this thread against people who do not understand the merits of the Standard Model, and erroneously think its successes are caused by fudge factors. I often think such views are psychological projection.

    The psychological projection is coming from you, Huxley. Now go and read those papers instead of studiously ignoring them. See post 315.


    Quote

    That is not to say it is complete, or satisfactory. Things not understood:

    They are understood. But you refuse to admit that understanding because it shows the Standard Model to be wrong.


    Quote

    Why is magnetic moment the sum of angular and spin components? what connects these two things. Almost certainly we need a deeper understanding of QM (from a quantum gravity theory) to get this. Resorting to classical ideas (spinning charge in some understood continuous geometry) seems all wrong because we know at the size of an electron that the world is quantum, not classical, and that electrons are as near as we know point particles. There is no reason to think that classical notions of rotation help. On the other hand degrees of freedom seem a very basic to QM thing, and geometry in terms of number of dimensions, and dimensions "curled up" that is not seen by us, are all possible I guess. So topology that has macroscopic visualisations is in the frame.

    Because a particle like an electron is a rotational spin ½ wave that looks like a standing wave. That deeper understanding was in the 1920s papers. Now it's been expunged and wilfully ignored and hidden behind paywalls. There is no evidence whatsoever that the electron is a point particle. On Wikipedia you can read that “observation of a single electron in a Penning trap shows the upper limit of the particle’s radius is 10^-22 meters”. But when you follow up on the references and read Hans Dehmelt’s 1989 Nobel lecture you realise that the upper limit is merely an extrapolation. It’s an extrapolation from a measured g value, which relies upon “a plausible relation given by Brodsky and Drell (1980) for the simplest composite theoretical model of the electron”. When you track back to Brodsky and Drell you can read the anomalous magnetic moment and limits on fermion substructure. And what you read is this: “If the electron or muon is in fact a composite system, it is very different from the familiar picture of a bound state formed of elementary constituents since it must be simultaneously light in mass and small in spatial extension”. The conclusion is effectively this: if an electron is composite it must be small. But there’s no actual evidence that it’s composite. So it’s a non-sequitur to claim that the electron must be small. Meanwhile all the evidence points to the wave nature of matter. Not the point-particle nature of matter.


    Quote

    What is QM? For me, and many, it seems very likely that the whole of QM, entanglement, virtual particles, QFT, is fundamental to physics, and when there is a unification with GR will explain GR. This world is so different from macroscopic physics that intuition from that does not work, one reason why I am unhappy with semi-classical theories, or theories that attempt to model QM in some classical way. It is understandable that people look for that, but I see it as anthropomorphism. No reason for fundamental physics to have any intuitive connection with macroscopic ideas and geometry, now that we are pretty sure that spacetime is an emergent phenomenon.

    Virtual particles are virtual, see the peculiar notion of exchange forces part I and part II by Cathryn Carson. Einstein explained GR. That world is not at all different from macroscopic physics. Electrons go round in circles in a magnetic field like boomerangs go round in circles, because of precession. Not because magnets twinkle. Spacetime is not an emergent phenomenon. It's a mathematical abstraction that models space at all times, and therefore is static.


    Quote

    Where do the particle masses come from? random symmetry breaking, or something more?

    From a wave's resistance to change-in-motion when it's in a closed path. Hence the mass of a body is a measure of its energy-content. Hence E=mc².


    Quote

    Do we invoke anthropic principle in a multiverse to determine values that give rise to non-trivial physics and astronomy? I would not mind doing that - others would.

    No.


    Quote

    Why those symmetry groups? Are there others hidden to us? Lots of other stuff

    Understand the photon first. Then pair production. Then the electron. Not pretty patterns in the ephemera. You will never understand gunpowder by gazing upwards at the New Year's Eve fireworks.


    Quote

    But it is no good turning your back on fundamental structure...

    That's exactly what you're doing.


    ALL: Watch Huxley studiously ignore all the papers and evidence given in my post 315. That ought to tell you everything you need to know.
