Randell Mills GUT - Who can do the calculations?

  • The problem in this reasoning is that precession does not increase the spin from ħ/2 to ħ in the (x,z) plane. The spin is a vector proportional to an area, in fact proportional to the magnetic moment squared. Mills says that the x and z projections of the precessing ħ vector give (1/2)ħ and (1/4)ħ, which is correct (see GUT-CP figure 1.25). The resulting total spin in the (x,z) plane is then ((1/2)² + (1/4)²)^(1/2) = 0.559016994374947 ħ. Another problem is that Mills talks of 60 degrees, which is correct if you use spherical-harmonics modelling, but this is the opening angle of the precession cone that circulates around the central axis, so the half-angle is 30 degrees! In fact (1/4)ħ along x is not a "stable value": it is the magnitude of the x component of the spin precession in the (x,z) plane, and on average it is "0", since spin does not add dissipative energy to a particle. The added flux just compensates the magnetic field, and if the field vanishes, the two opposing fields annihilate!
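    The arithmetic quoted above can be checked directly; a minimal sketch, using only the projection values and angles stated in the post:

```python
import math

# Projections of the precessing hbar vector as quoted from GUT-CP
# figure 1.25 (in units of hbar):
s_x = 1.0 / 2.0   # x projection
s_z = 1.0 / 4.0   # z projection

# Magnitude of the resultant spin vector in the (x, z) plane
s_xz = math.sqrt(s_x**2 + s_z**2)
print(f"|S_xz| = {s_xz:.15f} hbar")  # 0.559016994374947 hbar, as quoted

# If 60 degrees is the opening angle of the precession cone,
# the half-angle about the central axis is 30 degrees.
opening_angle = 60.0
print(f"cone half-angle = {opening_angle / 2} degrees")
```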


    To get the maximal possible ħ you have to add all three dimensions. But as mentioned earlier, only orthogonal spin values can be summed! The mistake Mills made is either to assume that the magnitude of the S vector is a constant ħ, which is wrong, or that S circulates at a constant angle... otherwise at φ = 45 degrees S would be > 1...

    If things start to precess I would expect torque-free precession. But I can't see that being used. In recent versions of GUTCP there is a section that shows why the 60 degrees is needed: it is a balancing of torques.

  • If things start to precess I would expect torque-free precession. But I can't see that being used. In recent versions of GUTCP there is a section that shows why the 60 degrees is needed.

    Where ?


    If it's torque-free then he should explain the magnitudes of (X,Y) when R passes by... This only works if he thinks that the coordinates are rotating, but then there is no precession movement at all.


    Anyway, the charge surface of the electron is a 4D torus too, and things look quite a bit different, but the current loops are much easier to prove. No precession (axis torque movement) could be exactly what happens in 4D when the energy is added in 2D to the perturbative (non-relativistic) mass. There will be a one-time short tilt.

  • Where ?


    If it's torque-free then he should explain the magnitudes of (X,Y) when R passes by... This only works if he thinks that the coordinates are rotating, but then there is no precession movement at all.


    Anyway, the charge surface of the electron is a 4D torus too, and things look quite a bit different, but the current loops are much easier to prove. No precession (axis torque movement) could be exactly what happens in 4D when the energy is added in 2D to the perturbative (non-relativistic) mass. There will be a one-time short tilt.

    No, I assumed torque-free precession of the loops and tried to see if that could give something. If you download the latest version and look at the section discussing the precession, you will find a note about torque balance that I found interesting.


    I also revisited the g-factor. The g-factor Mills derives is independent of mass, and yes, the muon seems to have the same g-factor as the electron.

  • No, I assumed torque-free precession of the loops and tried to see if that could give something. If you download the latest version and look at the section discussing the precession, you will find a note about torque balance that I found interesting.


    This is all clear - why it must be torque-free... but what happens when the S vector passes through the (X,Z) plane, as it should?

  • This is all clear - why it must be torque-free... but what happens when the S vector passes through the (X,Z) plane, as it should?


    Not sure; I will put this on ice for now. But I have also reflected on Mills' circular movements and the speed-of-light reference frame.

    I can't really understand his derivation, and it is weird that such a transform is derived in third-party sources. There are specific rules to follow, and I think they are correct, but for another reason. I envision that when the outside interacts with the electron it has to insert the fluxon or photon as it encircles the electron with wavelength 2πr. But if instead we consider the intrinsic fields and physics, we have the photon performing a standing wave, and hence here the wavelength is r. Please see

    light-on-circle

  • I envision that when the outside interacts with the electron it has to insert the fluxon or photon as it encircles the electron with wavelength 2πr.


    According to 4D rules a photon makes one rotation whereas the electron makes two (at light speed). A photon captured by the electron must acquire a second-dimension rotation at light speed, but then we would have to divide its mass/energy by alpha.


    But... the recent electron modelling for the hydrogen model shows that the electron is split into relativistic mass and perturbative mass. Adding the photon mass to the perturbative/non-relativistic mass indeed requires a division by 2π, which tells us that there is a deceleration in one dimension. This also works fine in the neutron and other models.

  • It is a loooong way from a cold plasma with a few ions and electrons at a temperature of 1/10th of a kelvin in a high-tech lab to a commercial product... quite a bit different from one we have seen that seems to be made of Home Depot parts :)

  • https://phys.org/news/2019-01-…rs-super-dense-stars.html


    Cold plasma may be a new way to make ultra dense hydrogen (aka metallic hydrogen) and/or hydrinos.

    Laser cooling of hydrogen remains a challenge, see here:

    https://physics.aps.org/articles/v9/115


    If this technical challenge is met, I agree with you that laser cooling of H could lead to the generation of UDH, and this despite the very low pressure of laser cooling experiments.

  • According to 4D rules a photon makes one rotation whereas the electron makes two (at light speed). A photon captured by the electron must acquire a second-dimension rotation at light speed, but then we would have to divide its mass/energy by alpha.


    The "two rotations" concept is consistent with experimental pion data.


    Mass(π+) = 139.6 MeV/c²

    Compton wavelength = h/(mc) = 8.8 fm

    Reduced Compton wavelength = Compton wavelength / (2π) = 1.4 fm

    Half the reduced Compton wavelength = 0.7 fm


    The experimental pion mean charge radius is around 0.68 fm.
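    The pion numbers above are easy to reproduce; a quick sketch, assuming the CODATA value of ħc and the PDG π⁺ mass (the post rounds the mass to 139.6 MeV/c²):

```python
import math

hbar_c = 197.3269804   # MeV*fm (CODATA)
m_pi   = 139.57039     # MeV/c^2, charged-pion mass (PDG)

lambda_compton = 2 * math.pi * hbar_c / m_pi   # h/(m c)
lambda_reduced = hbar_c / m_pi                 # hbar/(m c)
half_reduced   = lambda_reduced / 2

print(f"Compton wavelength          = {lambda_compton:.2f} fm")  # ~8.88 fm
print(f"reduced Compton wavelength  = {lambda_reduced:.2f} fm")  # ~1.41 fm
print(f"half the reduced wavelength = {half_reduced:.2f} fm")    # ~0.71 fm
# compare: experimental pion mean charge radius ~0.68 fm
```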

  • Cosmological update angle on GUTCP

    From Brett Holverstott... now 8:15 am East Australian time


    https://medium.com/@brett.holv…-with-a-bang-1da364efcaa3


    "But Mills’s cosmological work was based on an entirely new theory of quantum gravity, based in turn on his entirely new theory of atomic and particle physics.

    Due to this language barrier, his work remains largely unknown in the scientific community, but the 1995 edition of his treatise in the Library of Congress will be a patent reminder of his prediction."


    The GUTCP Millsian language is a headache for me

    But paracetamol or acetaminophen relieves it a bit

  • "But Mills’s cosmological work was based on an entirely new theory of quantum gravity, based in turn on his entirely new theory of atomic and particle physics.

    Due to this language barrier, his work remains largely unknown in the scientific community, but the 1995 edition of his treatise in the Library of Congress will be a patent reminder of his prediction."


    Unluckily Mills stopped his work too early, after the important - exciting - first insight he got. So he failed to discover the universal laws of mass-energy production given by the transport of magnetism from 3D,t to SO(4). But this in no way invalidates his findings about the cosmos. However, his writing about the strong force is definitely wrong.


    Mills is the only physicist in the last 40 years that made real progress in the large. So he is the grandfather of NPP2.0.

  • Mills is the only physicist in the last 40 years that made real progress in the large


    Despite the accuracy of Mills' modelling and predictions of atomic properties, the last few decades have seen almost zero acceptance or acknowledgement.


    The barrier to acceptance is not, as Brett Holverstott states, a language barrier.


    It's Planck's funeral barrier... which is even higher than the Coulomb barrier.

  • Can you calculate something with nuclear QM to at least 6-digit precision, as we would need for an explanation?

    The neutron is heavier than the proton.

    The measured neutron mass excess NME = 1.293 332 05(48) MeV.

    Fyodor et al., using a supercomputer and QM inspiration, get NME = ZERO!!

    ZERO ± 47 MeV.

    ZERO ± 47,000,000 eV... how can QM pick out 782,333 eV?

    Neutron mass = 936 ± 25 MeV

    Proton mass = 936 ± 22 MeV

    https://physicsworld.com/a/pro…ed-from-first-principles/

    The problem is that the accuracy of QM predictions is so low that reality is difficult to detect.
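    The precision gap being argued here can be made explicit; a rough comparison, using only the numbers quoted in the post (the measured NME and the 2008 lattice nucleon masses):

```python
# Measured neutron mass excess, as quoted: 1.293 332 05(48) MeV
nme_measured = 1.29333205   # MeV
nme_uncert   = 0.00000048   # MeV

# Lattice nucleon mass and error bar, as quoted from the 2008 result
lattice_mass   = 936.0      # MeV
lattice_uncert = 25.0       # MeV (neutron; proton quoted as +-22 MeV)

print(f"measurement: {nme_uncert / nme_measured:.1e} relative uncertainty")
print(f"lattice:     {lattice_uncert / lattice_mass:.1e} relative uncertainty")

# The quoted lattice error bar (~25 MeV) is roughly 20x larger than the
# entire neutron-proton mass difference (~1.3 MeV).
print(f"error bar / mass difference = {lattice_uncert / nme_measured:.0f}x")
```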



  • Here is the 6 digit precision asked for by Wyttenbach.


    https://arxiv.org/pdf/1406.4088.pdf


    The same neutron-proton mass difference. But 2015 not 2008


    Here, we provide a fully controlled ab initio calculation for these isospin splittings. We used 1+1+1+1 flavor QCD+QED with 3 HEX (QCD) and 1 APE (QED) smeared clover-improved Wilson quarks. Up to now, the most advanced simulations have included up, down, and strange quarks in the sea but neglected all electromagnetic and up-down mass difference effects. Such calculations have irreducible systematic uncertainties of O(1/(N_c·m_c²), α, m_d − m_u), where N_c = 3 is the number of colors in QCD. This limits their accuracy to the percent level. We reduced these uncertainties to O(1/(N_c·m_b²), α²), where m_b is the bottom quark mass, yielding a complete description of the interactions of quarks at low energy, accurate to the per-mil level.


    We may dislike a universe in which some fundamental calculations (QCD) are just difficult. Difficult is not however impossible, and modern supercomputers are a big help.


    We know QFT works beautifully to exceptional high accuracy in QED - where the perturbative expansions converge very rapidly and everything is simple.


    The same framework applied to QCD happens to lead to perturbative expansions that converge much more slowly - making everything more difficult.


    The great thing about QCD is that there are a very large number of experimental results, all of which can in principle be determined by the same theory. Progress in solving the equations gets better numerically all the time, and the ab initio results, and relationships between results, continue to predict experiment.


    Before anyone here talks about fudge factors I'd like to point out that QCD has made many, many predictions before experimental results existed (which were correct).


    http://www.physics.gla.ac.uk/HPQCD/summary_plots/


    and


    https://arxiv.org/pdf/1312.5694.pdf (for some of the details, or see links in HPQCD page)


    Perhaps Wyttenbach could state which predictions were made before results (with relevant dates)? That is of course the only real proof of the pudding.


    The other issue with QCD is that these are not hand-waving results. The methods are strictly derived from the theory, and the uncertainties (due to higher-order terms in expansions not considered, or low probability interactions not considered), are known and can be bounded.


    I'm all for some better theory describing nature than the standard model. It seems clear that there is something more fundamental along the lines of quantum generated spacetime if nothing else. But QED/QCD/standard model is so very successful and predictive that such a candidate must surely simplify to that.


    THH

  • Here is the 6 digit precision asked for by Wyttenbach.

    There is no 6-digit precision... NOT AT ALL. THH is mistaken.

    If one reads the arXiv article by Durr et al., 2015, including the supplementary material, it gives a choice of values for the n − p mass difference:

    1.51, 2.52 and −1.00 MeV


    Which is more correct?? Who knows.


    The average of these calculated results is about 1.0 MeV, or 1,000,000 eV.


    The experimental measured result is 782,332.9 eV.


    Go figure... the accuracy of supercomputers... lost in quantum.

    Definitely not 6-digit.
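    The averaging claimed above can be checked; a small sketch, using the three splittings quoted from the supplementary material and the measured n − p mass difference quoted earlier in the thread (1.293 332 05 MeV):

```python
# Lattice n-p splittings as quoted in the post (MeV)
splittings = [1.51, 2.52, -1.00]
mean = sum(splittings) / len(splittings)

# Measured neutron-proton mass difference, quoted earlier in the thread
measured_delta_m = 1.29333205   # MeV

print(f"mean of quoted lattice values = {mean:.2f} MeV")   # ~1.01 MeV
print(f"measured n-p mass difference  = {measured_delta_m:.6f} MeV")
print(f"spread of quoted values       = {max(splittings) - min(splittings):.2f} MeV")
```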
