New journal article from Brilliant Light Power

  • @THH: The Dirac equation does not include the magnetic energy stored in the field. Maybe you should ask somebody competent in Maxwell physics to get a better judgement. If QM included the magnetic energy, the equations would no longer be separable, because there is no symmetry between electric and magnetic forces.


    The even more severe mistake of nuclear physicists is the use of Minkowski space for nuclear models. But that needs an even deeper understanding of mathematical structures ...


    To make it clear: there are some mathematicians out there who simply laugh at the math physicists use. But that is unfair, because as an engineering model it often works quite well.


    This is not, as you have written it, clear.


    The Dirac equation does not itself include the necessary QED calculations. Not surprisingly: we know that QM implies virtual particles, so the total energy now becomes complex, but calculable.


    QED is a great theory making precise predictions (even though the calculations are perturbative and necessarily approximate) with wonderful match to experimental data. Mills may claim to have something better than QED. My point is that no-one is competent to say that until, at least, they have a full understanding of QED, unless Mills' stuff is provably more predictive than QED.


    I think many people argue something like "QED is complex and weird, so some other (easier to understand) theory must be better". But that is not a true argument. QED is complex computationally and conceptually, but very logically derived from QM (which we know for many other reasons must be true). So the additional complexity of QED - given QM is known - is computational and conceptual, not a matter of lots of new assumptions. There is a computational trick to do renormalisation which introduces a parameter, but that parameter can be determined from multiple different experiments.


    My conclusion is the same as that of any experimenter: with Mills I get 3 digits more precision, simply by using his rules. Thus it's not up to me to prove that he is right. The others have to (dis)prove it.


    You'd have to make a comparison with QED and experiment for accuracy. QED is phenomenally accurate for computable cases, and what is computable gets more complex all the time. Furthermore, the 3 digits from Mills are in some cases post hoc. Thus Mills is working out how to apply his stuff knowing the correct answer, and his rules are not clearly grounded in other known correct theory - they are free for him to bend to fit the data. 3 digits could, for example, come from a linear combination of known constituents, which might be good enough but has nothing to do with the specific theory.


    If he made predictions provably different from the best known values, which after more accurate experiments turned out to be much closer to the truth, that would be important and would lead one to see that there must be important truth in his stuff. But, remember, they have to be better than the best experimental and theoretical values from other methods at any given time. Mills would have these values and would naturally check his results against them and correct errors if his results proved obviously different. You have shown no evidence of Mills actually doing better than figures he could get from others (theoretical or experimental).

  • Electron capture is made more mysterious, not less, in Mills's model. Mills's model does not fit what we already know in this instance, unless we are also to abandon our understanding of how the weak interaction works.


    According to Mills' logic there is no weak force, at least not in the sense of a primary (force) phenomenon. Mills did a huge task and I will not blame him for not covering everything to the deepest possible level. K-shell captures, according to Mills, should be modeled as an effect of multipole forces & energy transitions (39.52 for transition possibilities).

    For alpha decay he has some beefed-up formulas in the 2016 version, showing that his first guess is in line with the expected decay rates. Whether it is better than the standard treatment has to be answered by the literature.


    The Dirac field couples with electromagnetic fields; I think you need to give me some background on why you think a resonant coupling can't work. What I mean by this is that, just as with resonant couplings, you see nothing when there is no resonance, i.e. it is only for special setups of the EM fields that you will get a measurable effect, so it is quite possible that we missed it. Also, Mills' model is an attractive idea due to its simplicity: everything is electromagnetic theory, and the weak and strong forces are actually special electromagnetic phenomena.


    On the nuclear level, we seem to have only resonances = ratios of energies/frequencies. There is a lot of ongoing work (not using the classical standard assumption, which never had success). Mills' theory ends (fails = needs extensions) as soon as we go to the first true nucleus. Adding a neutron to a proton can still be calculated (=> correct deuterium mass), but for higher compounds the model breaks down = becomes less accurate. This is no surprise, as Maxwell also needs some strict assumptions, which are no longer satisfied in a soup of protons & neutrons. (E.g. we sometimes live in a light-like field, where some masses move at light speed if we assume they are masses - e.g. the electron of a neutron - and in other cases - the proton - the mass is mechanically non-kinetic.) In Mills' model you have to carefully separate the light-like frame parts from the others, because for each (nested) frame of reference you have to use different measures. There are nested relativistic frames... and who knows where they exactly sit.


    Regarding the proof of Mills' convolution formulas: I once tried a different approach based on more logical reasoning. What we know is that in a synchronized system, waves must interfere at the turning point of the second derivative if we want perfect non-resonance (no added curvature => no added momentum). This implies that the waves must meet orthogonally, in phase, at all intersection points. The two great circles are by definition orthogonal, and different Legendre polynomials are orthogonal too, thus their convolution always satisfies this criterion. In the case of a locked-in photon, the third circle always violates the first criterion. Thus the proof is only challenging for outer non-S orbits.
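
    For reference, the orthogonality relation being invoked for the Legendre polynomials is the standard one (whether it licenses the convolution step is exactly what a full proof would have to establish):

```latex
\int_{-1}^{1} P_n(x)\,P_m(x)\,dx \;=\; \frac{2}{2n+1}\,\delta_{nm}.
```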




    I would put it differently. It is hard to follow several volumes of word salad, because the individual details are disjoint and do not provide an actual mathematical argument.


    Eric: Can you give us a specific example of word salad? I agree that if you are used to standard terminology, then you expect different words.


    For an alpha particle scattering off of lead, at ~25 MeV the scattering angle starts to depart significantly from the Rutherford prediction as a result of the nuclear interaction. There are probably tens of thousands of other such experimental phenomena that will also need to be examined anew if we're to set aside the nuclear and weak interactions and attempt to explain them as derivative of the electromagnetic force.
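
    As a rough cross-check on that figure (my own back-of-envelope numbers, not taken from any paper cited here): the head-on distance of closest approach for a pure Coulomb interaction is

```latex
r_{\min} \;=\; \frac{Z_1 Z_2 e^2}{4\pi\varepsilon_0 E_k}
\;=\; \frac{2 \times 82 \times 1.44\ \mathrm{MeV\,fm}}{25\ \mathrm{MeV}}
\;\approx\; 9.4\ \mathrm{fm},
```

    which is already comparable to the sum of the nuclear radii, R_Pb + R_alpha ≈ 1.2 (208^(1/3) + 4^(1/3)) fm ≈ 9 fm. So at roughly that energy the alpha begins to touch the nuclear surface, and the Coulomb-only (Rutherford) prediction has to break down.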


    Another example: if the electromagnetic force is infinite in range, and the weak interaction is derivative, why does the weak interaction work at only 0.1 percent of the diameter of a proton?



    @Why is QED not able to calculate the proton charge radius? The measured deviation is larger than your 1%...

  • Can you give us a specific example of word salad?


    I have in mind specifically the narrative text that joins all of the equations I copied in this post. In physics, "word salad" conveys the impression of technical jargon which, although recognizable from the relevant field, is being used improperly and does not join together into a coherent thought. Is my impression incorrect that this exposition is likely to be word salad? This can be shown to be the case if someone enterprising were to connect the dots in the exposition with explicit steps.


    @Why is QED not able to calculate the proton charge radius? The measured deviation is larger than your 1%...


    I'm not really familiar with the proton radius problem. There appear to be various studies which look at this question and raise different possibilities, such as this one, which suggests a "mismatch of renormalization scales." (Are they wrong? Because if they're not wrong, it seems there's not really a problem.) But more to the point, how does your question bear upon what has been discussed up to now in this thread?

  • I'm not really familiar with the proton radius problem. There appear to be various studies which look at this question and raise different possibilities, such as this one, which suggests a "mismatch of renormalization scales." (Are they wrong? Because if they're not wrong, it seems there's not really a problem.) But more to the point, how does your question bear upon what has been discussed up to now in this thread?


    Eric Walker: Just logic: if you say "the weak interaction works at only 0.1 percent of the diameter of a proton", then the weak force seems to be known more exactly than the dimension of the source of the force. For me that is just an indication of another kind of salad.

  • Eric Walker: Just logic: if you say "the weak interaction works at only 0.1 percent of the diameter of a proton", then the weak force seems to be known more exactly than the dimension of the source of the force. For me that is just an indication of another kind of salad.


    The proton charge radius is determined experimentally, as is the range of the weak interaction. The point you brought up had to do with the theoretical QED calculation. Pointing out that the QED calculation is off does not impugn the experimental determination of the charge radius. But I also am not wedded to the exact specifics of the range of the weak interaction, for I am simply referring to a claim made here, by someone much more knowledgeable on the topic than I am.


    My point stands if the weak interaction has a range far less than the radius of an n=137/137 orbitsphere (i.e., no shrinkage). So the claim of 0.1 percent of the proton diameter is not critically important, even though your point did not succeed in calling it into question. What matters is that the electron orbital must be within range of the weak interaction with the proton in order for electron capture to occur.
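
    For scale (my own numbers, and on the assumption that the n = 137/137 orbitsphere radius is simply the Bohr radius a_0, i.e. an unshrunk ground-state orbital, and taking the weak range to be of order the W boson's reduced Compton wavelength, ~2.5×10⁻¹⁸ m): the orbital sits roughly seven orders of magnitude outside the weak range,

```latex
\frac{a_0}{\lambda_W} \;\approx\; \frac{5.3\times 10^{-11}\ \mathrm{m}}{2.5\times 10^{-18}\ \mathrm{m}} \;\approx\; 2\times 10^{7},
```

    so in the standard picture it is the (tiny) overlap of the electron density with that weak-range region around the proton that controls the capture rate.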

  • The proton charge radius is determined experimentally, as is the range of the weak interaction. The point you brought up had to do with the theoretical QED calculation. Pointing out that the QED calculation is off does not impugn the experimental determination of the charge radius. But I also am not wedded to the exact specifics of the range of the weak interaction, for I am simply referring to a claim made here, by someone much more knowledgeable on the topic than I am.


    Eric Walker: Mills calculates the fundamental vector boson (Z0) as a resonance of the muon. The energy calculation is off by 0.02% (GUT-CP 37.46). The follow-up decay to the W is more complex and the result is a bit more off, about 0.2%.

  • Eric Walker: Mills calculates the fundamental vector boson (Z0) as a resonance of the muon. The energy calculation is off by 0.02% (GUT-CP 37.46). The follow-up decay to the W is more complex and the result is a bit more off, about 0.2%.


    Ok. But how does this bear upon the earlier discussion? Perhaps you're suggesting that the Mills calculation is more accurate than the QED calculation?

  • Eric Walker: Just logic: if you say "the weak interaction works at only 0.1 percent of the diameter of a proton", then the weak force seems to be known more exactly than the dimension of the source of the force. For me that is just an indication of another kind of salad.


    I can't understand that. The source of the weak force is quark-quark flavour-swapping interactions mediated by virtual intermediate vector bosons (IVBs). The quark diameter is not known but has been determined experimentally to be less than 0.43×10⁻¹⁶ m. The length scale of the weak interaction can be determined simply from the lifetime of the virtual IVBs. This is very small given their high mass (Heisenberg bounds ΔT and ΔE), hence the weak force cannot act over longer distances. Whether the weak length scale is smaller than the quark size is not known, but if quarks have finite size, and therefore some constituent structure beyond the Standard Model, then presumably this would be what carried the weak force. So in this (unknown whether true) case there is still no contradiction.
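
    To put a rough number on that (the standard textbook estimate, nothing specific to this thread): the range goes as the reduced Compton wavelength of the exchanged boson,

```latex
\lambda_W \;\sim\; \frac{\hbar}{m_W c} \;=\; \frac{\hbar c}{m_W c^2}
\;\approx\; \frac{197\ \mathrm{MeV\,fm}}{8.04\times 10^{4}\ \mathrm{MeV}}
\;\approx\; 2.5\times 10^{-3}\ \mathrm{fm} \;\approx\; 2.5\times 10^{-18}\ \mathrm{m},
```

    i.e. a few tenths of a percent of the proton charge radius (~0.84 fm), which is roughly where the "0.1 percent of the diameter" figure comes from.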


    BTW the experimental discovery of predicted IVBs was a predictive triumph for the standard model. I don't see any such triumph from Mills.

  • I wrote:


    The proton charge radius is determined experimentally, as is the range of the weak interaction.


    It looks like I might have been wrong about the experimental basis of the range of the weak interaction. According to Wikipedia, Fermi's original formulation for the weak interaction proposed a contact force with no range to account for beta decay. With respect to the modern understanding of the weak interaction, one notable contributor at PhysicsForums writes: "it looks like it was more the accumulated general success of the model rather than any direct measurement."


    So the understanding of the range of the interaction might well go back to a purely theoretical consideration of the masses of virtual W and Z bosons and how far they can travel given their masses, along the lines of THH's suggestion.

  • I did a rewrite of the Stack Exchange question and am at -1 now, one up from -2. I found some bugs and tried to make things clearer, i.e. more well defined. Also, I clearly separated the commentary from the question so that the question is clearly stated. Now only ignorant trolls would downvote it, e.g. people with little knowledge of math but a dislike for Mills; we should be clear that those will try to downvote this question now if they can, which is downright stupid.


    Anyway, the outcome is not important, and if it is not upvoted it shows that Stack Exchange is flawed. I can simply ask people at my old institution.

  • Stefan, how about this further editing of your question?


    Quote

    I have a question about the following scenario (question to follow): Let G be a covering system G = (I, μ, {S^1_α}_{α∈I}, {n_α}_{α∈I}) associated with a unit 2D sphere S^2, embedded in R^3, as follows. Let I be an index set, let every S^1_α satisfy S^1_α ∈ Geo, the set of geodesics of S^2, and let n_α be a unit normal associated with the geodesic S^1_α. We define the positive measure μ, a measure on I × S^1, so that ∫_{I×S^1} dμ = 1. Furthermore, if we take F : S^2 ← P(I × S^1), P(·) the power set, as the mapping of a point of the sphere towards a discrete point of the geodesics covering that point, then we also constrain the measure μ to satisfy ∀p ∈ S^2 : Σ_{β∈F(p)} dμ(β) = 1/(4π) dS^2. Finally we define total(G) := ∫_{I×S^1} n_α dμ. Now due to the triangle inequality we have that, for all coverings of type G, |total(G)| ≤ 1.


    Am I correct in thinking that there exists at least one covering of type G, and furthermore that sup_G |total(G)| = 1/2 and there is an example that attains that supremum?


    See how it is very short? Is there any way you can make it even shorter?
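
    As an aside, the |total(G)| ≤ 1 bound in the setup is just the triangle inequality for the vector-valued integral, since the n_α are unit vectors and μ has total mass 1:

```latex
\bigl|\mathrm{total}(G)\bigr|
\;=\; \Bigl|\int_{I\times S^1} n_\alpha \, d\mu\Bigr|
\;\le\; \int_{I\times S^1} |n_\alpha| \, d\mu
\;=\; \int_{I\times S^1} d\mu \;=\; 1 .
```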

  • Stefan, how about this further editing of your question?



    See how it is very short? Is there any way you can make it even shorter?

    I can leave out details, and that results in the executive summary. So that is the short version. But I also want to make a fully mathematical definition and get it correct at a PhD level; then you get the quote above. I could leave it out, but I'm planning to engage professors in this, and then this definition is a bonus point in their eyes. Making things well defined mathematically leads to a few more details than the summary, which is fine for a person to grasp what it is all about.

  • But I also want to make a fully mathematical definition and get it correct at a PhD level; then you get the quote above.


    I agree that you wouldn't necessarily describe the problem in the same way for a professor you're taking the question to. But you can have more than one formulation of the problem: one for Mathematics Stack Exchange which is concise (even more concise than what I have above), and another one that is for the people you know that you're going to talk to. You can have both descriptions at once, because there are many electrons and many pixels in the world.

  • I agree that you wouldn't necessarily describe the problem in the same way for a professor you're taking the question to. But you can have more than one formulation of the problem: one for Mathematics Stack Exchange which is concise (even more concise than what I have above), and another one that is for the people you know that you're going to talk to. You can have both descriptions at once, because there are many electrons and many pixels in the world.


    Would you like to nuke the example? Would you like to nuke the summary? Mathematicians like to section things into Theorems and Definitions, and I like that too. I think that we should keep that tradition.

  • Would you like to nuke the example? Would you like to nuke the summary? Mathematicians like to section things into Theorems and Definitions, and I like that too. I think that we should keep that tradition.


    You're projecting your experience with mathematicians onto a website that has its own conventions. To see why I'm recommending brevity, take a look at some of the higher voted questions on Mathematics Stack Exchange. They're generally quite short, they get to the point right away, they assume the reader is mathematically literate and willing to unpack a dense setup, and they do not follow the pattern of the question you posed, even after you revised it. When you go to a website and ask for help, you should follow the conventions of the website.

  • stefan


    Anyone trying to follow this will need to reference the correctly formatted question:

    https://math.stackexchange.com…uniform-geodesics#2316201


    I have a problem with your question, which is that I don't understand how you define the mapping F:


    Furthermore, if we take F : S^2 ← P(I × S^1), P(·) the power set, as the mapping of a point of the sphere towards a discrete point of the geodesics covering that point, a function that we constrain G so that it exists.


    F would appear to be a function with domain I × S^1 (the set of all points on all great circles) and range S^2 (the surface of a sphere embedded in R^3). I think you need the arrow in the opposite direction, so that with F you are mapping a point on S^2 to the set of all great circles that go through that point? I don't understand how this constrains G, because it will exist (though it may be trivial) for all G. I also expect that you actually require the existence of the Euclidean metric on R^3, which induces a manifold structure on the embedded S^2 - because you assume differential structure below.


    The constraint on Mu is central to this definition:


    Then we also constrain the measure μ to satisfy ∀p ∈ S^2 : Σ_{β∈F(p)} dμ(β) = 1/(4π) dS^2


    Mu is defined to be a positive measure on I × S^1. S^1 is embedded in R^2 and inherits a manifold structure from that embedding. I, however, is a set with no metric structure. That is OK; the fact that Mu is a measure will impose a measure (but neither a metric nor even a topology) on I × S^1. I think, however, you want this measure to be compatible with the implicit embedded local metric on S^1? And maybe (see below) you want some extra structure on I.


    Beta is the set of all great circles through p and is a subset of P(I X S1). Mu(beta) is therefore the measure on this subset. I think you are actually using the induced manifold structure of S2 here (p in S2) and saying in this constraint that the measure must be locally symmetric wrt any rotation of S2 (basically, small patches on S2 with the same area will have the same measure).


    For this to be a proper question we need it to be expressed much more tightly. Also, I suspect that we don't need S^1. The whole problem becomes simpler to think about if you just ask questions about the n_α. For example, the set of n_α corresponding to great circles going through a point p makes up the great circle whose normal is itself p. A pleasing and well-known symmetry.
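
    In symbols (my notation: write C_n for the great circle with unit normal n), that symmetry is just:

```latex
p \in C_n \iff n \cdot p = 0,
\qquad\text{hence}\qquad
\{\, n \in S^2 : p \in C_n \,\} \;=\; \{\, n \in S^2 : n \cdot p = 0 \,\} \;=\; C_p .
```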


    Now I'm still unclear about the symmetry here induced on G. The obvious symmetry satisfying this condition leads to Total = 0. It would help elucidate this to show constructively (without introducing concepts from physics like angular momentum) the existence of any non-trivial solution with Total not equal to 0. You just need to give the solution as math and show it complies.


    When you have tied this up I still doubt your result is true, unless you have the full Euclidean induced manifold structure on S^2 and something similar on I. After all, the hand-waving physical model you wish to hook to this question certainly has I isomorphic to S^2 and a natural Euclidean metric (that is - the set of great circles on S^2 is isomorphic to their normals, which are isomorphic to S^2. In fact this isomorphism makes a duality). I also expect (but don't know) that it can be shown true much more trivially with a much simpler isomorphic structure than you have set up here. I'm uneasy about mixing a measure with a manifold, which seems plain weird.


    You can't ask people to think about this without some more work on what this constraint really means. This, together with the above loose points in the definition, makes for the bad rating, I believe.
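
    On the constructive-example point: for what it's worth, here is a minimal numerical sketch of the kind of thing I mean (my own construction, not taken from Mills or from the question as posted), assuming the coverage constraint just means that the induced point distribution on S^2 is uniform. Drawing the normals uniformly from the upper hemisphere gives great circles that cover the sphere uniformly, while the measure-weighted sum of the normals comes out near (0, 0, 1/2), consistent with the conjectured supremum of 1/2:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 2000, 200            # number of great circles, sample points per circle

# Unit normals drawn uniformly on the upper hemisphere (z >= 0).
n = rng.normal(size=(N, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
n[:, 2] = np.abs(n[:, 2])

# Orthonormal basis (u, v) of each great-circle plane, then M points per circle.
a = np.where(np.abs(n[:, [0]]) < 0.9, [1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
u = np.cross(n, a)
u /= np.linalg.norm(u, axis=1, keepdims=True)
v = np.cross(n, u)
phi = rng.uniform(0.0, 2.0 * np.pi, size=(N, M, 1))
pts = u[:, None, :] * np.cos(phi) + v[:, None, :] * np.sin(phi)  # points on S^2

# Coverage check: for a uniform distribution on S^2 the z-coordinate is
# uniform on [-1, 1], so mean(z) ~ 0 and var(z) ~ 1/3.
z = pts[..., 2].ravel()
print("mean z = %+.4f  var z = %.4f  (uniform coverage: 0 and 0.3333)" % (z.mean(), z.var()))

# total(G): the measure-weighted integral of the unit normals.  Each
# (circle, point) pair carries equal weight and n_alpha does not depend on
# the point, so the integral reduces to the mean of the normals.
total = n.mean(axis=0)
print("total(G) ~", np.round(total, 3), " |total(G)| ~ %.3f" % np.linalg.norm(total))
```

    The z-statistics only test a necessary condition for uniform coverage, but the point stands: a non-trivial covering with Total ≠ 0 is easy to exhibit, and this family lands right at |Total| ≈ 1/2.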

  • I can't understand that. The source of the weak force is quark-quark flavour-swapping interactions mediated by virtual intermediate vector bosons (IVBs). The quark diameter is not known but has been determined experimentally to be less than 0.43×10⁻¹⁶ m.


    THHuxleynew: From the old physics viewpoint you can't understand Mills' logic. But keep in mind that the Standard Model failed and that there exist proofs (according to Geneste) that the math is flawed. If you measure a quark radius (which is just a guess, because you can't separate most quarks and only measure their excitation fields), then the underlying math used to gauge the experiment also determines the outcome of the measurement. In fact, according to Mills' Maxwell calculations, there are no basic "particles" like the W, Z, Higgs etc. bosons.

    Particles, in the classical understanding, have a rest mass and can be separated from each other. In basic Maxwell physics most particles are "resonances = basic particle plus captured photon". Mills still assumes that quarks are basic particles. But if you expand his thinking, then even quarks could be viewed as a special feature of the underlying energy flow. At the very end only the proton and electron remain as classical standard particles, which of course, depending on the type of measurement, can show sub-particles with particle-like nature. The rest are photons and resonances.


    F would appear to be a function with domain I × S^1 (the set of all points on all great circles) and range S^2 (the surface of a sphere embedded in R^3). I think you need the arrow in the opposite direction, so that with F you are mapping a point on S^2 to the set of all great circles that go through that point? I don't understand how this constrains G, because it will exist (though it may be trivial) for all G. I also expect that you actually require the existence of the Euclidean metric on R^3, which induces a manifold structure on the embedded S^2 - because you assume differential structure below.


    The simplest approximation for a great circle density can be made as follows. Orthogonal great circles have two crossing points. Each point is cut by an infinite number of circles, where the number of circles per point is given by the length × 2 × the point-density on a circle − 2 (the crossing points count once..). You have to prove that you only walk once over a point (generating two circles) and that all finite walks lead to a symmetric distribution of the circles. All patterns following geodesic rules are regular. If you take the midpoint of a "spherical quadrant", then divide the quadrant by 2/2 and apply the rule again, you will never meet the same point again and you will cover the whole surface of the sphere.

    The line density will always be identical for all decreasing (total and sub-) areas in S^2. (The border rule, e.g., is north/west. You only need to cover/show it for half the sphere.)

    (If you throw away all n−1 circles/points and look at the nth generation, it will be even simpler.)



  • Thanks, this is good feedback.

    So what I'm struggling to say rigorously is:


    We consider a subset of all geodesics, and specifically a subset where only a discrete number of selected geodesics goes through a point p on the sphere. So for a point there is a selection of geodesics {S_a_1, S_a_2, ...}, and for each of those geodesics you have a point which covers p, i.e. you have pairs {(a_1, p_1), (a_2, p_2), ...}, a subset of (I × S^1), i.e. an element of the power set. Now each of these points has a measure, or if you like an infinitesimal weight, μ(a_1, p_1), μ(a_2, p_2), ..., and they should sum up to 1/(4π) dS, and then you can integrate over the surface and recover a total mass of 1. So this is a fancy way of saying that the sum of the coverings is uniform. In order for there to be a covering of type G I assume that this mapping F should exist, and it does in Mills' example (but it is hidden). Your good point is that perhaps I must better motivate the existence of F.
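
    In symbols, what I am trying to say is roughly this (to be checked):

```latex
F(p) = \{(\alpha_1, p_1), (\alpha_2, p_2), \dots\} \subset I \times S^1,
\qquad
\sum_{\beta \in F(p)} d\mu(\beta) \;=\; \frac{dS(p)}{4\pi}
\quad \text{for every } p \in S^2,
```

    so that integrating the right-hand side over the sphere recovers ∫_{I×S^1} dμ = 1, i.e. the coverage is uniform in exactly the sense described above.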


    > Mu is defined to be a positive measure on I X S1. S1 is embedded in R2 and inherits a manifold structure from that embedding. I however is a set with no metric structure. That is OK, the fact that Mu is a measure will impose a measure (but not

    > metric nor even topology) on I X S1. I think however you want this measure to be compatible with the implicit embedded local metric on S1? And maybe (see below) you want some extra structure on I.


    I am sloppy, or rusty, whatever you like to call it, and do find it tricky to get the formulation right; I understand your points here.


    > Beta is the set of all great circles through p and is a subset of P(I X S1). Mu(beta) is therefore the measure on this subset. I think you are actually using the induced manifold structure of S2 here (p in S2) and saying in this constraint that the measure must be locally symmetric wrt any rotation of S2 (basically, small patches on S2 with the same area will have the same measure).

    Yes


    > For this to be a proper question we need it to be expressed much more tightly. Also, I suspect that we don't need S1. The whole problem becomes simpler to think about if you just ask questions about the nalpha. For example the set of

    > nalpha corresponding to great circles going through a point p is makes the great circle whose normal is itself p. A pleasing and well-known symmetry.


    > Now I'm still unclear about the symmetry here induced on G. The obvious symmetry satisfying this condition leads to Total = 0. It would help elucidate this to show constructively (without introducing concepts from physics like angular momentum)

    > the existence of any non-trivial solution with Total not equal to 0. You just need to give the solution as math and show it complies.


    I tried to indicate an example for which Total is not equal to 0; I indicated the construction, and it is included in Mills' text. Shall I add page after page with a proper deduction? Can't I refer to his book?


    > When you have tied this up I still doubt your result is true, unless you have the full euclidean induced manifold structure on S2 and something similar on I. After all the hand-waving physical model you wish to hook to this question certainly has I

    > isomorphic to S2 and a natural euclidean metric (that is - the set of great circles on S2 is isomorphic to their normals, which are isomorphic to S2. In fact this isomorphism makes a duality). I also expect (but don't know) that it can be shown true

    > much more trivially with a much simpler isomorphic structure than you have set up here. I'm uneasy about mixing a measure with a manifold which seems plain weird.


    The generality of measures is perhaps too much; perhaps I should be working with manifolds, as you say.


    What about this: we want that for every measurable set A on S^2 we have a set F(A) ∈ P(I × S^1) such that F(A) is μ-measurable and μ(F(A)) = n(A), n being the uniform measure on S^2?


  • I think that we can't skip the loops because you need to map points on several loops to a point on the sphere.

  • THHuxleynew wrote:

    > After all the hand-waving physical model you wish to hook to this question


    The hand waving can be made rigorous with a clear model and assumptions, and leads to the correct ionization energy to 3 correct figures. This is not the difficult part. The real difficulty is to understand whether hbar/2 is special or just a tuning. Everything is precise. I think that this is a deep result and I feel that it has importance. I'm sure that if QM is right, this is a deep connection for understanding intrinsic spin; perhaps we can find similar systems that cater more to QM.

  • I tried to indicate an example for which Total is not equal to 0; I indicated the construction, and it is included in Mills' text. Shall I add page after page with a proper deduction? Can't I refer to his book?


    The merit of what you are doing (unlike anything I've seen in Mills' book) is that what you say (with some tweaking not yet complete) is precise and can be properly understood with no hand waving.


    I need a similar description of Mills' claimed example - using maths not angular momentum and loops.


    As far as hbar/2 goes: hbar comes from the equations and this factor is simple; it could no doubt be adduced from many models, classical or QM, right or wrong.


  • It is not rigorous, but it's at least without angular momentum and clean; see the commentary in the question. My next step will be to try to make that example rigorous and link to it, because it will be too much to put in directly.

  • Phew, another version is finished, still very abstract but more concrete than before, and it should be well defined. Next it would be nice to show Mills' example in rigorous mathematics, with lots of steps.


    I suggest we get your question tightened up before moving on to make one of Mills's derivations explicit. There are too many questions, and I am pessimistic that attempting the harder exercise will be useful at this point. You should not refer to Mills's book in your Mathematics Stack Exchange question, if this was not already obvious.


    With regard to your Mathematics Stack Exchange problem, I am still trying to understand what you're describing. I am new to differential geometry, so please forgive me if these questions are pretty basic.


    1. Why are you making the index set "I" explicit? Normally it's sufficient to leave the index set implicit, along lines like these: Let A_i ∈ R^3, i = 1...n, be the set of geodesics on a spherical shell S ∈ R^3. What is it that the index set I is getting you that you wouldn't be able to do otherwise?


    2. When you refer to a unit normal of a geodesic, what do you have in mind? A geodesic in this context is an arc of a great circle, connecting two points p ∈ R^3. The arc has many unit normals along its length, one at each point. Which one of these unit normals do you have in mind? Bear in mind as well that there are an infinite number of geodesics, one for each pair of points along the great circle. So we have two degrees of freedom here which make things confusing.


    3. You write F : S^2 → ∏^k (I × S^1). By this I gather you mean that your map F takes a given spherical shell and produces a k-product of (index, geodesic) pairs? What does it mean to express the range as a product of (index, geodesic) tuples? Why is k a superscript?

  • By unit normal he means (I hope, and should have corrected this earlier) the vector perpendicular to the plane of the geodesic (it is the only natural way to map great circles on S^2 onto the set of unit vectors).


    The key issue however is what manifold structure is put onto I - this is needed for what stefan wants to even vaguely make sense.


    The k-tuple product has got me beat - it is new and I don't thus far have a clue about it. Previously it was the power set of I × S^1, which made sense.


    BTW - I don't think the index set I can be countable, if you think about what it represents physically which is the set of all great circles on S2.


    Maybe I'm not understanding what he is trying to do? Uniform covering implies manifolds and therefore uncountable sets...

  • I don't think the index set I can be countable


    Yes, I agree. That example that I gave was bugging me. Really we're probably talking about an integration (product?) over infinitesimals here, and not a summation (product?) of a countable set. I agree, then, the index is probably misguided, given that there are an arbitrary number of great circles that intersect at a point on a sphere.


    I'm still curious to know what is intended by the cross product of I x S1 that appears in the range. Given the proposed theorem, I would have figured that the real numbers would be the final range, and not a complex structure of some kind.

  • Quote

    From the old physics viewpoint you can't understand Mills' logic.


    Yes you can - Mills' logic is actually based on very reductionist old physics - Maxwell's theory in particular - something like Alfvén's Plasma Universe model in astrophysics. Unfortunately, Mills' theory is rather an analogy of the geocentric model in modern physics - i.e. progress turned on its head. Such an outcome has its meaning in the time-reversal geometry of the dense aether model, but it's important to understand its limits.

  • I assumed that for a point p on the sphere there is a function F : p -> (f_1(p), ..., f_k(p)). Hence the superscript k - that is usually the cardinality. I assumed that there was a fixed number of immersions. This formulation can most probably be generalized. Later I say that for small enough measurable sets A, dS^2(A) = \mu(f_1(A)) + ... + \mu(f_k(A)). I will probably need to work through the example thoroughly in order to understand whether this is enough. Perhaps we need to make use of differential geometry, as I think THHuxley indicated.
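
    So the finite-k condition I have in mind would read something like this (with σ the normalized uniform measure on S^2):

```latex
F(p) = \bigl(f_1(p), \dots, f_k(p)\bigr) \in (I \times S^1)^k,
\qquad
\sigma(A) \;=\; \sum_{i=1}^{k} \mu\bigl(f_i(A)\bigr)
\quad \text{for all sufficiently small measurable } A \subset S^2 .
```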

  • The explanations of figures 1.14 and 1.16 are not congruent. 1.14 shows a rotation about the axis (result in fig. 1.6, or as a density in 1.18); 1.16 shows a rotation of the axes, not around the axis, resulting in infinitely many circles with the same origin. 1.14 will not lead to total coverage of the sphere, while 1.16 will.


    But the Y_0^0 is a convolution of the BECVF & OCVF, which finally leads to the assumption that 1.16 is the convolution result of the basic BECVF current loop, with the final density given in figs. 1.19/1.20.


    stefan: Now the question is: in which step are you interested? The weights for the final convolution, or for the intermediate BECVF & OCVF steps?