Validation of Randell Mills GUTCP - a call for action

  • >> Researchers in the US have successfully teleported information encoded into particles of light over 100 kilometres of optical fibre, smashing the previous distance record of 25 km.


    This is a common bogus interpretation: the information is fixed at a common source, and because both signals are correlated at the beginning they are correlated at the end. I repeat: if this statement were true,
    physicists would not do Aspect-type experiments. The idea that the information is fixed at the beginning, something Einstein assumed, has been put to the test, and one typically does this with
    Bell's inequality. The issue, though, is that while "normal" hidden variables can be dismissed for a plain particle formulation, for fields governed by nonlocal laws, like quantum mechanics, or for systems
    that follow an approximately nonlocal law with fields, the conclusion that the system is fixed at the beginning can still hold. As I said before, Mills has reproduced Aspect's tests, and all experiments in this business are
    variations on this.
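
    The Bell-test logic under discussion can be illustrated numerically. For the singlet state, quantum mechanics predicts the correlation E(a, b) = -cos(a - b) between analyzers at angles a and b, and with the standard CHSH settings this violates the bound |S| <= 2 that any "fixed at the source" local hidden-variable model must obey. A minimal sketch:

```python
import numpy as np

# Singlet-state correlation for analyzers at angles a and b:
def E(a, b):
    return -np.cos(a - b)

# Standard CHSH analyzer settings (radians)
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ≈ 2.83 > 2: beyond any local hidden-variable bound
```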

  • When I state that the formulas produce the right answer, it simply means that when you use the formula and calculate the value, you usually get close to the experimental value. Finding out whether this is true is trivial. ... I can only say that most of Mills's results seem to follow through a method originating from the basic assumptions. The easiest thing to verify is the ionisation energy of the hydrogen atom to about three digits of accuracy, which you do by skipping the contribution from the magnetic spin of the nucleus. Mills's derivations seem to be mostly correct up to a point where there is a change of reference system - here I lose Mills and don't understand how he gets the new expressions.
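
    For anyone who wants to repeat that easiest check: the target value is the standard Bohr/Rydberg result with the reduced-mass correction (this is the textbook derivation, not Mills's), reproducible in a few lines from CODATA constants:

```python
# Hydrogen ionization energy from the Bohr/Rydberg formula,
# E = mu * e^4 / (8 * eps0^2 * h^2), with the reduced mass mu of the
# electron-proton system. Measured value: about 13.598 eV.
m_e  = 9.1093837015e-31     # electron mass, kg
m_p  = 1.67262192369e-27    # proton mass, kg
e    = 1.602176634e-19      # elementary charge, C
h    = 6.62607015e-34       # Planck constant, J s
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m

mu = m_e * m_p / (m_e + m_p)                # reduced mass
E_ion = mu * e**4 / (8 * eps0**2 * h**2)    # joules
print(E_ion / e)  # ≈ 13.598 eV
```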


    What I was hoping to find out was to what extent you yourself have gone through these calculations to see if the results that are obtained match up with those that Mills advertises, and how much you're simply repeating what you've read somewhere. From your description above I take it that (1) you yourself have verified the ionization energy of hydrogen to about three digits of accuracy; and (2) you have not verified some calculations involving a change of reference systems. Is there anything else you can add to this description of what you've personally verified of Mills's claims to have derived very accurate formulae with few parameters?


    The uncertainty principle is bogus. It just reflects the error of the QM model: when approximating the GUTCP fields using a soup of waves, you make an error. I think the uncertainty principle is basically a measure of where this modelling breaks down and the QM prediction becomes fuzzy.


    "Error" in the context of the uncertainty principle is a potentially misleading term. It is error in the statistical sense. The Heisenberg uncertainty principle says that the product of the standard deviations of different pairs of related observables is greater than some minimum value. It says something about what you see in actual experiments when you measure things. If you arrange to measure the position of a photon with great accuracy (the standard deviation in measurements is small), there will be a large spread in the measurements of the energy (and momentum), which is to say that the standard deviation of the latter will be large. In describing this odd experimental phenomenon, the uncertainty principle relates what is seen in experiments very well, because what is seen in experiments is that these various pairs of related observables are fidgety like that, in the actual data that are obtained. Any theory that seeks to somehow make predictions that are more accurate than the product of the standard deviations of these conjugate variables is departing from what is seen in experiments rather than being more accurate.


    (There is an experiment by Shahriar Afshar that claims to violate this principle of complementarity. Although the results are not in dispute, the conclusions are controversial and subject to debate.)

  • @Other calculations ...


    The ionisation energies of all atoms included in the book seem to be correctly calculated, and modulo that reference-frame trick I find the derivation quite OK. To get the last decimals it looks a bit tweaked,
    though, and I think a general method solving an energy minimization problem is needed to convince me that the solutions are the right ones - but say the first 2-3 digits seem to be okay as a derivation.
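
    As a minimal illustration of what such an energy-minimization check looks like (the standard textbook estimate for hydrogen, not Mills's procedure): minimizing the localization energy against the Coulomb attraction recovers the Bohr radius and the ionization energy.

```python
import numpy as np

hbar = 1.054571817e-34      # reduced Planck constant, J s
m_e  = 9.1093837015e-31     # electron mass, kg
e    = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m
k    = e**2 / (4 * np.pi * eps0)

def E(r):
    # localization (kinetic) energy vs. Coulomb attraction
    return hbar**2 / (2 * m_e * r**2) - k / r

# crude grid search around the angstrom scale
r = np.linspace(1e-11, 3e-10, 200001)
i = np.argmin(E(r))
print(r[i])         # ≈ 5.29e-11 m, the Bohr radius
print(E(r[i]) / e)  # ≈ -13.6 eV, minus the ionization energy
```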


    The g-factor calculation is also OK modulo the reference-frame trick that I don't understand. I could follow this to 5 digits or so of accuracy.


    One of the proofs of non-radiation has serious flaws if you ask me. I posted my own proof of it here before and can do so again if you ask.


    The mass ratios between the muon, the electron, etc. come out to the right values, and the derivation seems to be okay here again. There is a formula for the mass of the electron as well, but that looks bogus to me because it is circular -
    something that proponents of Mills's theory often miss.


    I do note that there is a consistency in the trick I can't follow - so it can't just be a random tuning of the formula to the answer.


    @"Error" in the context of the uncertainty principle is a potentially misleading term.
    Well it is as you say an error with structure, I'm fully aware of it. In my book this can mean exactly what you say - I'm actually an expert when it comes to statistics so I view errors as not only "white noise" but colored. But this
    does not change my point in the argument you still get error bars and people stop trying to refine the model when they discover that. My point is also that it is possible that by modelling fields with hefty oscilating waves which dissapears
    to a great degree when you take the norm of the Phi you will pay with an error structure that should match something like Heisenbergs inequalty.

  • External Content youtu.be


    In plasma physics, a "self-sustaining" plasma is unknown to that science. Yet we clearly see the ignition turned off (time stamp 04:30) and back on minutes later (time stamp 06:20), and the plasma does become self-sustaining. The way this happens is through a positive feedback condition in the hydrino reaction. Can anybody explain how this positive feedback loop works? What keeps that feedback mechanism constant and regulated?
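
    One generic way such regulation can work (a purely illustrative toy model in arbitrary units - nothing below is a claim about the actual hydrino mechanism): a positive feedback that saturates, balanced against a superlinear loss such as T^4 radiation, settles at a stable operating point and returns there after perturbations.

```python
# Toy model: feedback-driven heating that saturates, vs. T^4 radiative losses.
# All functions and constants are arbitrary illustrative choices.
def gain(T):
    return 5.0 * T**2 / (1.0 + (T / 3.0)**2)  # saturating positive feedback

def loss(T):
    return 0.05 * T**4                         # radiative-style loss

T, dt = 1.0, 1e-3
for _ in range(200_000):       # simple Euler integration of dT/dt = gain - loss
    T += dt * (gain(T) - loss(T))
print(T)  # settles where gain(T) == loss(T): a stable, self-regulating point
```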

  • Well, it is, as you say, an error with structure; I'm fully aware of that. In my book this can mean exactly what you say - I'm actually an expert when it comes to statistics, so I view errors as not only "white noise" but colored. But this does not change my point in the argument: you still get error bars, and people stop trying to refine the model when they discover that. My point is also that it is possible that by modelling fields with heavily oscillating waves, which disappear to a great degree when you take the norm of the Phi, you pay with an error structure that should match something like Heisenberg's inequality.


    I don't recall anyone claiming that the Heisenberg uncertainty principle prevents one from obtaining a precise expectation value for an observable, to the extent that one's measurement instruments and the physical process allow. Perhaps you can ground this objection by making it more concrete? What is a concrete example of someone saying that because quantum mechanics/the Heisenberg uncertainty principle, it is not possible to obtain a good expectation value, where in fact Mills's calculations show that you can?

  • Amazing work stefan! Finding out that Mills's proof has flaws but giving a different proof that yields the same results is better than I hoped :thumbup: . Building on these results, there is this paper, which tries to find out whether the solution of the non-radiation condition as found by Mills (and stefan :-)) makes it possible to construct stable models for fundamental particles based only on Maxwell's laws. The results are positive and use a completely different approach than Mills's (at least to my untrained eyes). Even better, it gives a reference to a paper by some people who found parts of this in 1990: Bergman, D. L., and Wesley, J. P., "Spinning Charged Ring Model of Electron Yielding Anomalous Magnetic Moment," Galilean Electrodynamics, vol. 1, no. 5, pp. 63-67 (Sept./Oct. 1990)


    I have taken the liberty of adding a live link to the paper above. Alan.

    http://www.commonsensescience.…f_electron_yields_new.pdf

  • " I don't recall anyone claiming that the Heisenberg uncertainty principle prevents one from obtaining a precise expectation value for an observable, to the extent that one's measurement instruments and the physical process allow. Perhaps you can ground this objection by making it more concrete? What is a concrete example of someone saying that because quantum mechanics/the Heisenberg uncertainty principle, it is not possible to obtain a good expectation value, where in fact Mills's calculations show that you can?
    "


    As an example, if I don't misremember, the calculation of the fusion rate in hot fusion is off without using Heisenberg's inequality.
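
    For context, the standard hot-fusion rate does hinge on quantum tunneling: at keV thermal energies the nuclei cannot classically surmount the Coulomb barrier at all, and the WKB (Gamow) factor exp(-2*pi*eta) supplies the finite rate. A rough sketch for two protons:

```python
import math

e    = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m
hbar = 1.054571817e-34      # reduced Planck constant, J s
m_p  = 1.67262192369e-27    # proton mass, kg
mu   = m_p / 2              # reduced mass of two protons

def gamow(E_keV):
    """WKB barrier-penetration probability exp(-2*pi*eta) for p-p at energy E."""
    E = E_keV * 1e3 * e                            # energy in joules
    v = math.sqrt(2 * E / mu)                      # relative velocity
    eta = e**2 / (4 * math.pi * eps0 * hbar * v)   # Sommerfeld parameter
    return math.exp(-2 * math.pi * eta)

print(gamow(1.0))    # ~2e-10: keV-scale fusion happens only by tunneling
print(gamow(100.0))  # ~0.11: the barrier matters far less at higher energy
```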

  • stefan wrote:
    Well, it is, as you say, an error with structure; I'm fully aware of that. In my book this can mean exactly what you say - I'm actually an expert when it comes to statistics, so I view errors as not only "white noise" but colored. But this does not change my point in the argument: you still get error bars, and people stop trying to refine the model when they discover that. My point is also that it is possible that by modelling fields with heavily oscillating waves, which disappear to a great degree when you take the norm of the Phi, you pay with an error structure that should match something like Heisenberg's inequality.



    I don't recall anyone claiming that the Heisenberg uncertainty principle prevents one from obtaining a precise expectation value for an observable, to the extent that one's measurement instruments and the physical process allow. Perhaps you can ground this objection by making it more concrete? What is a concrete example of someone saying that because quantum mechanics/the Heisenberg uncertainty principle, it is not possible to obtain a good expectation value, where in fact Mills's calculations show that you can?


    The Heisenberg uncertainty relation recently underwent a slight revision regarding special effects with spin, which can be measured without extensively disturbing the momentum (used with quantum-entangled electrons).
    For me the “H” relation raises a different question. As Heisenberg originally postulated: at a given time we cannot simultaneously measure the momentum and the space coordinates of a particle. But who has already reflected on the role of time? Is physics at small dimensions (below 10^-15 m) really bound to a single arrow of time? Isn't macroscopic time in the “small space” just a mathematical crutch, which luckily mostly fits the problem?

  • As an example, if I don't misremember, the calculation of the fusion rate in hot fusion is off without using Heisenberg's inequality.


    Earlier you wrote: "But this does not change my point in the argument you still get error bars and people stop trying to refine the model when they discover that." I will withhold judgment on whether this is true until you are able to come back with something that shows that people are using the Heisenberg uncertainty principle as a crux that allows them to give up on obtaining more accurate expectation values.

  • @Epimetheus


    Mills has two proofs in his book about non-radiation. In the first one it looks like he takes the convolution of factors, one of which has a zero. But it is unclear to me (and generally wrong)
    that the resulting convolution has a zero as well. Do you follow me, or did I make a mistake here?
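
    The general point - that a zero of one factor need not survive convolution - is easy to check numerically:

```python
import numpy as np

F = np.array([1.0, 2.0, -1.0, 0.0, 1.0])  # has a zero at index 3
G = np.array([1.0, 1.0, 2.0, 1.0, 1.0])   # no zeros

C = np.convolve(F, G)
print(F[3])  # 0.0
print(C[3])  # 4.0: the zero of F does not survive the convolution
```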

  • @stefan


    I only found the proof in the appendix (p. 1685 ff.). I have some experience with Fourier/Laplace transforms, but what Mills is doing is a bit above my level of expertise :) . At first I also had a problem with the convolution part because he was doing it in frequency space. But going back to the beginning shows that the functions he convolved are multiplications in the time domain, so that is fine. I have problems understanding the infinite series of one of the functions - an infinite series seems weird to me, but I often read "infinite cycles" in GUTCP, so it might be linked to that term.


    But to your question: I do not remember a statement about zeros regarding convolution. I always did it the other way around: switching from the time to the frequency domain to avoid the convolution. If I had a better understanding of vector fields and Maxwell's equations, perhaps I would look deeper into it, but with my current state of knowledge I would need weeks to feel comfortable with all these mathematical tricks. I don't have that time, so I switched to easy mode with my validation efforts. But I won't stop looking for someone who can look over this.

  • There is a big difference between "peer reviewed" and "to a level to secure funding".
    There is also a big difference between research "with no necessary measurable benefits" and "must clearly demonstrate practical results to get funding".
    I suggest that both methods have their flaws, as do both current theories.

  • I am not quite sure what you are implying. Are you saying that Mills presents his results in the best-looking way to secure funding? In my eyes this is a necessity if you cannot finance yourself. I am presenting my work results in the best possible way, as are my boss and his boss. You are also doing it on your website - there are probably a lot of things that still don't work in your theory framework.


    Or are you saying that Mills is (partially) faking theoretical or experimental results? That is one point this validation is about: finding out whether his equations give values that fit the experiments. I have not found a false statement or faked equation so far. The equations I used worked.


    Is your own work promotional or academic? Or have you found a better way of doing it?

  • I am not quite sure what you are implying. Are you saying that Mills presents his results in the best-looking way to secure funding? In my eyes this is a necessity if you cannot finance yourself. I am presenting my work results in the best possible way,…


    Having previously been actively involved in discussions on the Mills theory via the Yahoo "Society for Classical Physics" group, managed by BrLP (Mills), I saw many examples of information control of differing views, particularly a complete denial of low-energy fusion. There is also an ongoing history of legal responses to differing views, and of over-claiming on commercial potential.
    Many of the ideas presented by Mills may eventually be accepted, but there are also some quite significant issues. Maths is great, but to be valid it must accurately match reality.


    I don't have a problem with any theorist not being entirely correct, but when a researcher threatens legal action to defend a theory that is at best only partly correct, I do have significant concerns, particularly when the researcher is trying to claim IP ownership over the whole LENR sector based on a partly flawed theory.


    AlainCo: duplicate post removed. I hope this latest is the best

  • Does the quaternion form of the Maxwell equations make any difference to theories that are based on the standard representation?



    Mills's theory is based on Heaviside's form of the equations, so I would guess that the quaternion form could make the non-radiative condition (on which Mills's hydrino model depends) less probable.

  • Hello,


    I am new to this forum, and I am also trying to better understand Mills's GUTCP.


    I understand the Occam's razor argument, which only holds if the theoretical results stand up to reality. I therefore support the idea of testing the results of the equations against experimental results.


    However, the equations should have a physical foundation and sound reasoning. Please comment on whether my current understanding of Mills's theory is right or not:


    The charge of the electron is smeared out on the surface of a sphere. This sphere is constructed out of ring currents: the charge is somehow distributed as a constant charge density on rings, which circulate with a certain frequency around the core of the atom.


    This I can accept so far.


    Now Mills claims he has a means of pinning down the radius of this circle by the non-radiating condition.


    This I do not understand at all: A constant ring current is not time dependent. Therefore the sources are constant in time.


    Now, using the Helmholtz equation and the resulting integral over the Green's function and the sources (see the lower part of https://de.wikipedia.org/wiki/Helmholtz-Gleichung), I conclude:


    The potentials Phi and A are constant in time. Therefore the magnetic field B and the electric field E are constant in time. Therefore only the (time) Fourier component with frequency 0 exists. No photon is emitted.


    So there is no special condition for the radius of this ring current. (I believe I remember that Thomson already got the same result, but maybe my memory fools me.)
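
    The Fourier step of that argument is trivial to illustrate numerically: a source that is constant in time has all of its spectral weight at frequency 0, so there is no nonzero-frequency component that could radiate.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1024, endpoint=False)
source = np.ones_like(t)               # a time-constant source, schematically
spectrum = np.abs(np.fft.rfft(source))

print(spectrum[0])         # 1024.0: all spectral weight at frequency 0
print(spectrum[1:].max())  # ~0: no nonzero-frequency components
```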


    Thanks for your attention and your comments


    Best regards
