Randell Mills GUT - Who can do the calculations?

  • About the pilot wave, I'm told it is incompatible with the experiments around the Bell inequality, which rule out hidden-variable "excuses" in quantum physics.

    By the way, a BBC documentary explains that problem very well, even for kids.

  • The Randell Mills theory essentially contradicts the pilot wave theory (and quantum mechanics as a whole, for that matter). He doesn't interpret the orbitals as the fancy wave-interference patterns of pilot waves (or wave functions) of electrons - but as hollow, more or less spherical shells.

  • R. Mills' theory says that a freely travelling electron is a disk and a bound electron in an atom is a hollow shell. In the double-slit experiment, he says, there is a free electron (a disk), the walls of the slit (protons and bound electrons - hollow shells), and an electromagnetic wave which has an effect on the free electron and causes the wavelike distribution on the target screen. Where does the EM wave come from? I suppose it is not from the electrons, as they have a "closed" shape. Is it from the protons?

  • About the pilot wave, I'm told it is incompatible with the experiments around the Bell inequality, which rule out hidden-variable "excuses" in quantum physics.

    I think it's more subtle than this. According to Wikipedia, Bell had discovered Bohm's work (and presumably liked it), and he wondered whether the explicit non-locality of the de Broglie–Bohm theory could be removed. That inspired Bell's famous inequality. The experiments looking into Bell's inequality are believed to rule out "local hidden variables." That was not the answer Bell was hoping for, as I understand the story. But since the de Broglie–Bohm theory was already non-local, the experimental support for Bell's inequality did not rule it out. I believe the current understanding in physics is that the math of traditional quantum mechanics and of the pilot wave theory produces the same predictions in most cases and diverges in only a few corner cases, for which there are currently no experiments to decide between the two approaches, making them indistinguishable in practice.

    Take this description with a big grain of salt. Perhaps someone else can better elucidate this matter.
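    To make the "rules out local hidden variables" point concrete, here is a minimal numerical sketch using the standard textbook CHSH form of Bell's inequality (nothing here is specific to this thread): any local-hidden-variable model is bounded by |S| <= 2, while the quantum prediction for entangled photons at the usual optimal analyser angles reaches 2*sqrt(2).

```python
import math

# Quantum correlation for polarisation-entangled photons measured at
# analyser angles a and b (standard textbook result): E = cos(2(a-b))
def E(a, b):
    return math.cos(2 * (a - b))

# The usual optimal CHSH angles: a=0, a'=45, b=22.5, b'=67.5 (degrees)
a, a2 = 0.0, math.pi / 4
b, b2 = math.pi / 8, 3 * math.pi / 8

# CHSH combination: any local-hidden-variable model gives |S| <= 2
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(S)   # ~2.828 = 2*sqrt(2): the quantum value exceeds the classical bound
```

    The experiments measure S directly; values above 2 are what rule out the local-hidden-variable models, while explicitly non-local theories like de Broglie–Bohm are untouched by this bound.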

  • To be honest, I don't even understand why or how Maxwell's theory of light should work for a massive particle like the electron.

    the fundamental basis of the entire theory is that the electron *is* light. *is* a photon. there *is* nothing but light, in the entire universe. bar nothing. bit mind-blowing, eh? :)

    how the hell could a photon become an electron? that surely violates something somewhere, doesn't it? actually... it's the wrong question. how the hell could a photon become looped up its own... backside so that, from the *outside* we have, through lack of knowledge and empirical data, *defined* that photon as something *other* than a "looped photon" and called it instead something completely different, aka "an electron"?

    ... you see the difference? you also see where i'm going with the question?

    on http://erm.lkcl.net i begin with two definitions (working hypotheses) of a particle. one simple, one much more technically accurate (as best i have been able to derive):


    The working simple definition, under exploration, of a particle:

    A looped frictionless non-radiating photon that indefinitely sustains itself.

    The working more comprehensive definition, under exploration, of a particle:

    A phasor-phased-harmonic array of one (or more) photons with a positive, self-stabilising feedback loop between their own centripetal force and their own electro-magnetic field that sustains a perfect non-radiating, frictionless balance with self-correcting feedback on its radius.

    does that make sense?

    a photon *happens* to not be going in a straight line but instead *happens* to be attracted (influenced by) its OWN electro-magnetic field, which *happens* by some amazing mathematical coincidence to be *self-sustaining*... luckily for us.

    does this definition also help to go some way towards explaining why the non-radiating condition is absolutely, absolutely, absolutely FUNDAMENTALLY critical? because without that non-radiating boundary condition the particle - the photon within the particle which unfortunately misleads us into MISNAMING it as a particle - would quite literally go "fizzle" in a straight line.

    the very fact that particles do NOT go "fzzzzzblblblbblbl" like an open balloon tells us that there must be a non-radiating condition. we can then *use* that mathematical condition to reverse-engineer the equations... oh wait! Dr Mills has done most of the work already! :)

  • As you mention, we are not in a church. Hopefully we can use reason to talk through one specific difficulty with orbitspheres. I understand orbitspheres to be infinitesimally thin spherical shells comprised of great circles of circulating electric current. The key concept here is that the orbitsphere is a two-dimensional surface and not a volume. Have I misunderstood anything at this point?

    it's a convenient word to encapsulate and refer to a mathematical construct / concept. the next phase is to then integrate those two dimensional surfaces and come up with a 3D shape... which happens to be... for example... ooo i dunno... a mass or energy figure for an E.M. field or something. bottom line, an orbitsphere is *not* an actual physical object.
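    as a toy illustration of "integrating a 2D surface to get a bulk figure" - the shell radius and the uniform-density assumption below are mine, purely for illustration, and are not taken from GUTCP:

```python
import math

# Toy model: a 2-D shell of radius a0 carrying the electron's charge as
# a uniform surface density (illustrative values only, not from GUTCP).
e = 1.602176634e-19           # elementary charge, C
a0 = 5.29177210903e-11        # Bohr radius, m - an illustrative shell radius

sigma = e / (4 * math.pi * a0 ** 2)   # uniform surface charge density, C/m^2

# Integrate sigma over the sphere: dA = r^2 sin(theta) dtheta dphi.
# The density is phi-independent, so each theta-ring contributes 2*pi at once.
N = 400
dtheta = math.pi / N
total = 0.0
for i in range(N):
    theta = (i + 0.5) * dtheta
    total += sigma * a0 ** 2 * math.sin(theta) * dtheta * (2 * math.pi)

print(total)   # ~1.602e-19 C: the 2-D surface integrates back to a bulk total
```

    the same kind of surface integral, with the appropriate density, is how a two-dimensional construct yields a 3D mass, charge or energy figure.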

  • I think this is one of the propositions in need of testing rather than simply assuming to be true, which is thankfully an effort you are seeking to help out with.

    the test - if i am reading you correctly - is to read the work and follow the derivations, from scratch, and check for yourself (or trust someone else) whether there are any "magic constants" or anything *other* than the (one) constant. there are several: c, the planck constant, alpha, e, pi and... well, the rydberg constant is mentioned somewhere but that can be derived. so it's not *strictly* true that there is *one* magic constant. i wish that the planck constant h wasn't so damn inaccurate (8dp as of current CODATA, sigh).
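    for what it's worth, the claim that the rydberg constant can be derived from the others is easy to check numerically, using the standard relation R_inf = alpha^2 * m_e * c / (2h) with CODATA 2018 values:

```python
# Derive the Rydberg constant from the other constants mentioned:
# R_inf = alpha^2 * m_e * c / (2 * h), CODATA 2018 values
alpha = 7.2973525693e-3      # fine-structure constant
m_e   = 9.1093837015e-31     # electron mass, kg
c     = 299792458.0          # speed of light, m/s (exact)
h     = 6.62607015e-34       # Planck constant, J s

R_inf = alpha ** 2 * m_e * c / (2 * h)
print(R_inf)   # ~1.09737e7 per metre, matching CODATA's tabulated Rydberg constant
```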

    unfortunately.... 30+ separate papers in a 1,700-page book is a bit of a monster. thus it becomes more a problem that many of us *just don't have time*. of those *with* the time, only a few dedicated people in the world are actually willing to read the damn book, and even fewer are prepared to make the effort to understand it!

    thus we have Much Pontification And Gnashing Of Teeth.... within a mere matter of a couple of hours of reading just two threads on this forum, two people who have clearly demonstrated closed minds, in direct contravention of the spirit of scientific enquiry, are *already* on my "block" list... *sigh*...

  • Abd : That is more or less exactly what I want to do here.

    i have - because of my own inability to fully grasp the mathematics (or, more, in recognition of the fact that it would take a decade that i don't have to get up-to-speed) - decided to take a different approach.

    rather than try to replicate a 1700-page work, which is an awful lot, i have decided to apply black-box "reverse-engineering" techniques and attempt to verify it against known data... *without* actually worrying about "what's in the box". this on the basis that (barring glaring errors), if you have a model that matches a significant *number* of pieces of "known data" to a high degree of accuracy, then regardless of whether you believe *or disbelieve* the model, the probability that it's wrong decreases drastically with each and every additional datapoint of "known data" against which that model is compared.

    this is, fundamentally, the whole basis of black-box reverse-engineering, which happens in turn to be the basis of knowledge derivation, and it's *really important* to recognise and accept that comparison of two "things" and looking at the difference *is* itself "new knowledge"... which in no way *actually* needs to depend critically on the "correctness or otherwise" of the two things being compared.

    where i feel that most people go wrong on this is two-fold:

    (a) an unwillingness to accept the entire approach outlined in the paragraph above. as in: you could show them the tables of atomic numbers and electron orbits predicted by Mills, assign even an ultra-small probability, based on the error bars, to whether Mills' theory is sound or not, then multiply those up and, due to the sheer number of correct data points, come up with an astronomically, overwhelmingly LARGE statistical confirmation of the hypothesis that there must in fact be something in what Dr Mills is doing... AND THEY STILL WOULD DENY IT.

    for such people.... sadly i have to conclude that there is nothing that anyone can do, except to respect their decision.

    (b) an unwillingness, even after accepting (formally or informally) the evidence of correct data-fitting (to within reasonable error bars), to accept the actual theory at face value... such that the next logical step is to REPLICATE the entirety of the theory.

    this latter is extremely worthy and laudable: hell, there's no way that i, within my lifetime, would be able to consider doing the same. yet at the same time.... i feel that it is still a worthwhile approach to attempt to take Dr Mills' work *out* of the exclusive realm of hydrinos, and into other areas such as particle physics theories. so that is where my focus will be. i will *assume* that Dr Mills' work has faults, yet is fundamentally, statistically, quite clearly "sound", and branch off equations that demonstrate a match against particles such as the kaon, the mesons, the proton, the neutron and so on.

    the basic hypothesis being that, if this also proves to be successful *and accurate*, it provides *another* string to the bow against which detractors of Mills' work can safely be ignored, whilst at the same time attracting sufficient *POSITIVE* attention that people of sound judgement and sufficient mathematical ability will be willing to work on it.

    that's the plan, anyway :)
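    the statistical argument in (a) above can be sketched in a few lines; the per-datapoint likelihood ratio and the datapoint count below are invented purely for illustration, not measured values:

```python
import math

# Invented-for-illustration numbers: suppose each independently-fitted
# datapoint is 3x more likely under "the model is sound" than under chance.
likelihood_ratio_per_point = 3.0
n_datapoints = 30

combined = likelihood_ratio_per_point ** n_datapoints
print(combined)    # ~2.06e14: modest per-point evidence compounds multiplicatively

# log-odds form, which avoids overflow for large datapoint counts
log_odds = n_datapoints * math.log10(likelihood_ratio_per_point)
print(log_odds)    # ~14.3 orders of magnitude in favour
```

    the point being that even unimpressive per-point odds, multiplied across many independent datapoints, become astronomically large - provided the datapoints really are independent, which is the part any serious critic would attack.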

  • To note, I tend to be a bit critical of some of the bashing of QM - it really does have value. For example, QM is non-radiating if you interpret the wavefunction as a representation of a physical field.

    Also, something you can hear - that QM is all fudge factors - does not really match my interpretation of how QED is derived. The basics are that you assume space is filled with a soup of waves and that, locally, to get the momentum you take the spatial derivative, to get the energy you take the time derivative, etc. Then you say that Einstein's special relativity should be satisfied and voila, Klein-Gordon appears, and as a small extension also the final QED equations. So what we have done is model the world as a soup of waves and constrain it to satisfy some known fundamental laws; there really is not much fudging here, and it does yield a few good predictions.

    two things, i feel: i mentioned the first one already: QM, by moving Maxwell's Equations purely to the frequency domain, cannot take into account a static (constant) value. my understanding is that you cannot take an FFT of a DC value. i feel somehow that this may be absolutely crucial: time will tell.

    the second is that, unfortunately, the solutions for equations that fall out of QM require partial differential equations - Feynman diagrams of 12th order - and they're *still* not accurate and are near-impossible to solve without supercomputers. i think in the last such calculation that someone worked on, if you printed the total equations out in 10-point type you'd fill a hundred A4 pages with the g/2 "solution".
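    for a sense of scale: the *first* term of that series is one line of arithmetic (the schwinger term, a_e ~ alpha/2pi); it's the higher-order diagrams chasing the remaining ~0.15% that fill the hundred pages. a quick check:

```python
import math

alpha = 7.2973525693e-3          # fine-structure constant (CODATA 2018)

# Leading (one-loop) term of the QED series for the electron anomaly
# a_e = (g-2)/2: the Schwinger term alpha/(2*pi) - one line of arithmetic
a_e_first_term = alpha / (2 * math.pi)
print(a_e_first_term)            # ~0.0011614

# versus the measured anomaly; the remaining ~0.15% is what the
# monstrous higher-order diagram calculations are chasing
a_e_measured = 0.00115965218     # approximate experimental value
print(a_e_first_term / a_e_measured)
```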

    the best way i can illustrate this is with the mathematical joke, if people know it: "a bird flies between two trains at 120mph; the trains are 120 miles apart, travelling at 60mph each, and on a collision course. how far does the bird fly before it's squashed by the two trains?"

    now, when a friend told this joke to a colleague, he answered immediately, "why, 120 miles, of course!" and my friend said, "oh, have you heard the joke before, then?" and his colleague answered, "no, i just took the sum of an infinite series and did the math in my head just now".

    the difference between QM and GUTCP is very much like that. everyone involved in the Standard Model is doing the "sum of an infinite series", where Dr Mills has noticed: "oh, the time is the same for the two trains as for the bird, therefore the distance travelled can be calculated from the speed of the bird's travel, with the time taken from the trains' expected collision".
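    the joke's two routes to the same answer can be checked in a few lines (the 60-leg cutoff below is just an arbitrary convergence limit):

```python
# Two trains 120 miles apart, each doing 60 mph, on a collision course;
# the bird shuttles between them at 120 mph until they meet.

# The shortcut: the trains close at 120 mph, so they meet in 1 hour,
# and the bird simply flies at 120 mph for that hour.
shortcut = 120.0 * (120.0 / (60.0 + 60.0))

# The "sum of an infinite series" way: add up each leg of the shuttling.
gap = 120.0
bird_total = 0.0
for _ in range(60):                # 60 legs: far more than enough to converge
    t_leg = gap / (120.0 + 60.0)   # bird meets the oncoming train (closing 180 mph)
    bird_total += 120.0 * t_leg    # distance flown on this leg
    gap -= (60.0 + 60.0) * t_leg   # both trains advanced meanwhile

print(shortcut, bird_total)        # both give 120.0 miles
```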

  • Inner shell electron capture is a process mediated by the weak interaction requiring an electron to be present in the nucleus. It is not known to directly involve the electromagnetic interaction. So an explanation that uses the terms "electromagnetism" or "photons," etc., does not shed light on the matter, unless we postulate that electron capture happens via a completely different pathway than known up to now. Or perhaps I have misunderstood your description. It is not clear how a photon would drag a charge (i.e., electron) with it, or why there would be a photon being dragged in at all.

    within the definition of what i hypothesise a particle to be (self-looped self-attracted photon), i am also struggling to come up with an explanation for what charge could actually be. the best hypothesis that i can come up with is that it is a side-effect of the looped photon "protecting its existence" i.e. that it is a reaction of the self-looped self-attracted self-protection photon to the "influence" - through Maxwell's Equations - of nearby E.M. fields.

    to understand and appreciate this distinction it is very, very important to remember that there is - must be - a MASSIVE difference between "what we normally think of as being a photon, on account of them going in a straight line most of the time" and "a photon wot has literally got its knickers in a twist, and i do mean literally".

  • The "ground" state hydrogen atom has 511k eV stored as potential and kinetic energy in the orbiting electron so there is a large reservoir of energy that can be released. At the ground state, the electron is in force balance and the effective central charge is 1. If a photon is absorbed, the effective central charge felt by the electron will be fractional (1/n) and it will go to a higher orbit. Conversely, if an energy hole/trapped photon is absorbed, the effective central charge felt by the electron will be integral (n) and it will go to a lower orbit.

    Sorry, don't understand this part. A photon/trapped photon is a standing wave in the electron's orbitsphere. What force is it creating? Maybe you could read Mills' equation for the photon? Chap. 4 in GUTCP.

    apologies for referring to you in the 3rd person, wyttenberg. optiongeek: he is still thinking that the photon is (must be) travelling in a straight line. wyttenberg: the photon curves. it attracts itself so strongly - or more likely "phase-cancels-itself-out-and-recreates-itself-such-that-it-APPEARS-to-be-curving" - that it is extremely difficult to answer your question without actually quoting the entirety of one of the chapters of Dr Mills' book at you. that's.... just the way that it is, i am afraid.

    to help avoid the stupid, stupid "of course photons cannot go in anything other than a straight line, what the f*** are you talking about, you stupid moron" reaction, and to understand and accept the concept of "light really can curve", i suggest reading up on the following topics:

    * ido kaminer et al's work on optical tweezers.

    * experiments in the field of optics last year that showed that not only can light be bent but it SLOWS DOWN at the same time

    * waveguide experiments in which it has now been shown that "braided light" is um.... slower than... umm.... light. and, also fascinatingly, *retains information*.

    the work by ido kaminer is particularly important, as they deliberately set out to find the mathematical equations that would allow a phased-array beam (a coherent multi-beam X-ray laser, i think, in the actual experiments) to go in a semi-circle WITHOUT FALLING APART, i.e. each "turn" of each part of the beam was by EXACTLY the same amount in each case, such that as they "turned" (actually... phase-cancelled such that the result was the *appearance* of quotes turning quotes) the ENTIRETY of the beam REMAINED TOGETHER. what was even more fascinating than that was that the phase of each component of the beam rotated by HALF the angle through which the beam "turned". which would mean that if such a beam successfully managed a 360-degree revolution, its phase (or specifically the phase of every contributing component) would be 180 degrees off compared to the original. meaning that it would take TWO revolutions to get back to the same state.

    ... and what... exactly... does that remind you of? let me give you a hint: it begins with "spin" and ends with "half".... :)
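    the "spin-half" hint can be checked directly with the standard SU(2) half-angle phase factor - ordinary spin-1/2 mathematics, not anything specific to the beam experiment:

```python
import cmath
import math

def su2_phase(theta):
    # Phase factor a spin-1/2 (spinor) state acquires under a rotation
    # by angle theta about its quantisation axis: exp(-i * theta / 2)
    return cmath.exp(-1j * theta / 2)

one_turn = su2_phase(2 * math.pi)    # a full 360-degree revolution
two_turns = su2_phase(4 * math.pi)   # two full revolutions

print(one_turn)    # ~ -1: one revolution leaves the phase 180 degrees out
print(two_turns)   # ~ +1: it takes two revolutions to return to the start
```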

  • Prof. Bakker has responded to my email which I copy here:

    "This is absolutely right. Photons can absolutely be superimposed if they have the same wavelength (size), and the energy absorbed/emitted by an electron is added to/removed from the one photon trapped in the orbitsphere."

    this is the fundamental basis on which i have the goal of extending Dr Mills' work into particle physics. i researched the ways in which superposition can occur in a constructive (non-destructive) fashion, and they are really quite specific. it's not enough that they be the same wavelength and energy; there are other conditions as well, for which you have to look up phrases like "phasor", "jones matrices", "optical vortex knots" and "mobius light", then, once you've skim-read the various articles, look up a paper by Castillo from 2008 relating Jones Matrices onto a Poincare Sphere - SU(2) - using spinors.

    the summary - the key, basically - is that the phase of the E.M. field of each or any of the photons that you wish to superimpose *must* be at right-angles. then, when you add them using phasor or jones-matrix mathematics, the sum is something that is phase-shifted by a specific (easily-)calculated amount, and the result remains at *exactly* the same wavelength as the two contributing photons.
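    the phasor arithmetic for two equal-wavelength contributions at right-angles in phase is easy to verify numerically: the sum keeps exactly the same frequency, with amplitude sqrt(2) and a fixed 45-degree phase shift (the frequency value below is arbitrary):

```python
import numpy as np

# Two equal-amplitude, equal-frequency phasors, 90 degrees apart in phase
omega = 2 * np.pi                     # arbitrary angular frequency
t = np.linspace(0.0, 2.0, 1000)       # two periods of the carrier

p1 = np.exp(1j * omega * t)                  # phase 0
p2 = np.exp(1j * (omega * t + np.pi / 2))    # phase +90 degrees

total = p1 + p2

# Phasor algebra predicts sqrt(2) * exp(i*(omega*t + pi/4)): the same
# frequency, amplitude sqrt(2), and a fixed 45-degree phase shift
predicted = np.sqrt(2) * np.exp(1j * (omega * t + np.pi / 4))

print(np.allclose(total, predicted))   # True
```

    this is just 1 + i = sqrt(2) * exp(i*pi/4) written out as time series; the point is that the sum is indistinguishable in frequency from either contributor.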

    this insight is one that both Dr Mills *and* the proponents of the Standard Model are entirely missing: Dr Mills because he assumes that the angles of the up and down quarks within a proton are 120 degrees (they're not... they have to be 90 for the superposition to work...), and the Standard Model because they reject any and all possibility of *considering* what a particle actually is.

    if that latter is hard to understand, ask any proponent of the Standard Model the following question: "what is a particle actually made of?" and when they don't properly answer, keep on asking the question until they finally tell you the truth, which is that they don't know, and the theory doesn't even put forward any hypotheses. this is, i think, what Dr Mills is referring to when he says that QM, by moving to the frequency domain, considers particles to be mathematical constructs, not actual *things* about which it is easy to logically reason.