Electron-assisted fusion

  • A message received on 8/10/13 from DGT...


    For the history:


    We first noticed such magnetic noise linked with thermal anomalies 18 months ago, when we were running tests with R4 reactors using isoperibolic calorimetry. One day, while huge thermal anomalies were observed in the active reactor, our cell phones went crazy and the landline telephone system was full of noise! We tried to reproduce the noise with Ar, with no success. So we started looking at this seriously, not just as a noise problem. At that time we were looking for any possible gamma radiation as the main source of lab danger. We realized that we, as well as all before us, were looking the wrong way. I think that you know the rest of the story. Beyond public health protection from magnetic radiation, we faced the problem of protecting the control electronics sitting very close to the reactor. That was (and still is) a major challenge for NI engineers too.

  • Why were they fiddling around with telephones while huge thermal anomalies were occurring?


    Shouldn't they instead have been paying strong attention to the reactor?


    If memory serves, all the telephones in the building that were in use were behaving badly, at times making them unusable. They isolated the telephone noise problem to the reactor in operation.


    I found a post covering the subject, as follows:


    AlainCo
    03-09-2013, 09:14


    On vortex-n, Abd ul-Rahman Lomax reports some more details about those claims:
    http://tech.groups.yahoo.com/group/newvortex/message/628


    The 1.6 Tesla report, repeated by Kim in his ICCF-18 presentation, is mentioned. That should seriously be discounted, and here is why: first of all, yes, if it were a 1.6 T static field, at the reported distance from the presumed source, it would be *astonishing.* In fact, so astonishing that it would likely be impossible.
    However, I asked. It was not a static field.
    That's a "peak" measurement. Okay, that's still amazing, eh? Except for one problem. To get that measurement, the mu-metal shields were removed from the reactor. The reactor can be viewed as a high-power spark transmitter. This thing puts out RF noise that shut down the Defkalion phone system and seriously interfered with their data-acquisition boards, according to reports. That result is essentially meaningless. Kim repeated it without really thinking deeply about it, that's fairly obvious. He simply saw it as an anomaly. I expect that there will be more data released. It is not impossible that there are magnetic effects involved; it is a complicated issue.


    Much information here:


    1.6 tesla is a peak value, thus the field is varying.
    To get that reading, the mu-metal shield was removed; so the shielding is mu-metal (not a simple Faraday cage).
    It shut down the DGT phone system and interfered with the acquisition board.


    Abd concludes prudently that this result is meaningless. There were huge electromagnetic effects, but the instruments may have been disturbed by those huge E-M effects, giving false readings.


    So 1.6 tesla is dubious, yet some very powerful EM effect is clear.
    The huge interference seems more certain than the magnetic field itself.
    Note also that the mu-metal shielding protects from that effect, meaning that the source of interference is strongly magnetic (otherwise simple Faraday shielding would be enough).


    Abd continues:


    However, what Kim said is simply standard scientific practice, as I've mentioned above.
    Reports are presumed true unless controverted.
    There was no deception. There was a peak measurement with a gaussmeter. It's an observation, and we treat it as a fact. That is, we believe, by default, that a gaussmeter actually displayed that figure, as described.
    It's obvious that Kim did not consider it in detail, or he'd have been more careful.
    Hadjichristos confirmed that, yes, this was a measurement in Teslas, not Gauss. (If it had been Gauss, the original reading at the initiation of spark would have been consistent with geomagnetic background, 0.6 Gauss.) Closely questioned, Hadjichristos did not recall the specific model of gaussmeter ... he was sitting on a Greek beach, on vacation. He indicated that more data would be released.
    We have no indication of any deception here, only, possibly, of some shortcoming in analysis.
    We'll know more when there is better data, and we'll know even more if there is independent confirmation of the field value. That may be possible, by the way, without an "independent demonstration."


    The peak reading was allegedly in the refractory period after spark stimulation was turned off. So if the measurement is made carefully, there would be an absence of the possible RF interference from spark stimulation. One would want to see how the field varies with position, something easily done with a handheld Gaussmeter.


    The measurement was done when there was no spark, so whatever the artifact is, it is linked to the LENR reaction, not the sparks.


    He continues


    Others mentioned tools being yanked from "across the room." Yup. A 1.6 T field could certainly do stuff like that. What's missing here? The 1.6 T field was a report from a different test, done privately by DGT, in which the shielding was removed. Further, it was, as I mention above, a peak measurement, and itself subject to RF noise, almost certainly. The reactor in the July Defkalion demo was shielded, heavily. The conversation becomes what Hadjichristos wore, a white lab coat.


    Those defending the 1.6 T field generally seem to neglect that this was a measurement at 20 cm from the presumed source. If it were a stable field, it would be *dangerous*. (Al Potenza redefines this as 20 inches.)


    Abd reminds us that 1.6 T at 20 cm means much more near the reactor, and is dangerous (it is not far from what you have inside an MRI machine - any metal is forbidden inside the room).
    The identified danger is metallic objects flying across the room. I would add that a changing field would mean induction, interference...
    This reduces the credibility of the measurement, which is probably disturbed by huge magnetic and electromagnetic anomalies (maybe not 1.6 T, but surely some huge electromagnetic field).


    Abd continues in another message:
    http://tech.groups.yahoo.com/group/newvortex/message/629


    It was taken with the mu-metal shield, which normally encases the Hyperion, removed. Someone somewhere referred to the shield as a "Faraday cage," and pointed out that this would not inhibit magnetic fields.


    That's correct, AFAIK. But what was missed was this:


    The Hyperion when it was not shielded, and certainly they started that way, shut down their phone system, and interfered with their National Instruments data collection system. The Hyperion is a nifty spark transmitter, putting out lots of very noisy RF. Basically, they are generating a big spark, a plasma discharge, inside the device; that is their "stimulation."


    So, did the Gaussmeter measure a DC field? No. It was set, we were told, at Peak. We don't have the data on the make and model of meter yet. But an ordinary Gaussmeter might not do well at all with a massive RF source 20 cm. away, and unshielded.


    The magnetic anomaly reported may mean *nothing*. It's a bit of a shame that Kim didn't question this, but my guess is that he really didn't think about the absolute value of the field reported and simply was passing on a reported anomaly. His report was quite unclear as to the meaning. I.e., what was the background, with no spark stimulation? How, then, did the reading behave during the stimulation? The 1.6 Tesla reading was supposedly during the refractory period after spark stimulation, or was it? A peak reading would depend on the *period* measured. What was that?


    and in conclusion:


    In asking about the reported field, I pointed out what many skeptics have mentioned: if there were a DC field of 1.6 T at 20 cm, the field would be picking up metal objects and slamming them against the source. That field is more intense, at that distance, than the field from the superconducting magnets of a CAT scanner, which are dangerous in that way. There is a great picture on-line of a chair up against a scanner, at about head height. It's been pointed out that neodymium magnets, the strongest permanent magnets made, have that field *at the surface.* Magnetic field strength, at least for DC fields, declines with the cube of the distance. So if we extrapolate the field at 20 cm to very close to the source, that field is *enormous.*
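    The cube-law extrapolation in that paragraph is easy to check numerically. Here is a minimal sketch (my own illustration, assuming a simple dipole-like 1/r^3 falloff for a static field):

```python
# Sanity check of the cube-law extrapolation above (my own illustration,
# assuming a simple dipole-like 1/r^3 falloff for a static field).
def dipole_field(B_ref, r_ref, r):
    """Scale a dipole-like field B_ref, measured at distance r_ref, to distance r."""
    return B_ref * (r_ref / r) ** 3

B_at_20cm = 1.6                          # tesla, the reported peak reading
print(dipole_field(B_at_20cm, 20, 2))    # 1600.0 -- at 2 cm the implied field is enormous
```

    Even at 2 cm from the presumed source this extrapolates to about 1600 T, which underlines why a true DC field of 1.6 T at 20 cm would be considered impossible.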


    What a peak field, the actual measurement, means is far from clear. The most likely explanation is a meter that is not reading correctly because of RF noise. But if that's a real peak reading, I'll avoid speculating; I'll leave it to RF engineers and the like.


    Some people came running in all directions, like headless chickens, with various theoretical speculations. "Aha! I *knew* that magnetic fields are involved!"


    Simmer down!


    People, we can see, tend to interpret new evidence according to what we already believe.


    Finally, the general impression is that the 1.6 tesla claim is probably an artifact, caused by a magnetic effect that is in any case huge, unrelated to the sparks, that is shielded by the mu-metal, and which without shielding disturbs all the electronics around.


    We need more work to confirm or correct this, but surely something surprising is there! Probably smaller than the initial claim, but surely larger than what skeptics imagine.

  • Gameover remembers.


    https://matslew.wordpress.com/…on-reactor-demo-in-milan/


    Quote

    UPDATE: I forgot to say that according to CTO John Hadjichristos there are HUGE magnetic fields inside the reactor as a result of the reaction, in the order of 1 Tesla if I remember right, possibly due to extremely strong currents over very short distances. Hadjichristos says the field is shielded by double Faraday cages, probably the reactor body and the external metal cover outside the heat insulation.


    https://www.lenr-forum.com/old-forum-static/t-2185.html




    Yeong E Kim ICCF18 paper


  • I have found two of Gryzinski's articles related to fusion - and they are both publicly available (I can privately share some others):


    1982 "Intense ion beam generation in 'RPI' and 'SOWA' ion-implosion facilities" - regarding this coaxial plasma gun:
    https://hal.archives-ouvertes.…jphys_1982_43_5_715_0.pdf


    And 1979 "Theoretical description of collisions in plasma: classical methods": https://hal.archives-ouvertes.fr/jpa-00219441/document
    which is a good starting point for LENR considerations.
    It uses the Binary Encounter Approximation (BEA), exactly as I have written:
    treat the two essential particles directly (e.g. p/D/T + e for LENR)
    and the rest effectively - which from his considerations is an oscillating multipole:
    C_n(r^hat)/r^n * sin(omega*t) + C_m(r^hat)/r^m
    where the C are multipole functions (dipole, quadrupole or octupole). This pulsating multipole approximation works especially well for modeling scattering on noble gases - see his 1975 papers.
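    Written out in LaTeX (my transcription of the line above; the exact multipole coefficients are in Gryzinski's papers), the pulsating-multipole effective potential is:

```latex
V_{\mathrm{eff}}(\vec{r}, t) = \frac{C_n(\hat{r})}{r^{n}} \sin(\omega t) + \frac{C_m(\hat{r})}{r^{m}}
```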


    PS: some of his papers can be downloaded here: http://www.newkvant.narod.ru/
    PS2: his work is continued e.g. by Prof. Victor V. Vikhrev (also a plasma physicist); here is one of his recent freely available papers:
    https://www.researchgate.net/p…bining_in_hydrogen_plasma

  • What if the DGT fields were huge muon radiation? That would generate a lot of noise in copper lines, look like RF noise, show nothing on a Geiger counter, etc.


    About a month ago I got a hint to ground the reactor's RF shield. I ran maybe 5 m of wire from the reactor, at first left unconnected to it, and then started to connect the next section. When I touched the copper end of that 5 m wire, it gave a nasty electric shock. When the grounding line was ready and the RF shield grounded, it had no effect on the RF. It is not RF that makes noise like RF.
    The best explanatory theory currently is muons.


    Me365 reports RF radiation that goes through 1 cm of Al.

  • Gryzinski gives many examples where he claims that quantum predictions give unsatisfactory agreement with experiment, while simple classical calculations give much better agreement.
    They mostly concern various scattering scenarios, which is not surprising, as QM seems to describe dynamical equilibrium.
    However, there are also other examples, like calculating the diamagnetic coefficient, the Ramsauer effect (as outer-shell electrons screening the charge of the nucleus for inner-shell electrons), and also the Stark effect - which appears in nearly all QM textbooks as an example for using perturbation theory (alongside the Zeeman effect).


    The Wikipedia article ( https://en.wikipedia.org/wiki/Stark_effect ) has a nice figure with the n-th level splitting into 2n-1 equally spaced sublevels:
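    For reference, the standard textbook linear (first-order) Stark result for hydrogen behind such figures: in an applied field F, the shift of the sublevels of level n is

```latex
\Delta E^{(1)}_{n,k} = \frac{3}{2}\, e\, a_0\, F\, n\, k,
\qquad k = n_1 - n_2 \in \{-(n-1), \dots, n-1\}
```

    where a_0 is the Bohr radius and n_1, n_2 are the parabolic quantum numbers - hence the equal spacing of the predicted sublevels.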

    It is hard to find published experimental results - please cite if you know some.
    A clear one for the Lyman series (2->1, 3->1, 4->1) can be found in the historical "Der Starkeffekt der Lymanserie" by Rudolf Frerichs, published January 1934 in Annalen der Physik (its editors back then: W. Gerlach, F. Paschen, M. Planck, R. Pohl, A. Sommerfeld and M. Wien); here are its results:


    These are clearly not equally spaced.
    One would expect that such a paper, whose results top physicists were aware of 80 years ago, should now have hundreds of citations: either as confirmation of the theoretical calculation found in all QM textbooks, or as a surprise that should be understood and repaired...
    In contrast, it now has just 3 citations: https://scholar.google.pl/scholar?cites=15476592679702358817


    This was pointed out in Gryzinski's 2002 book (unfortunately in Polish, and it seems there is no published paper on it), alongside a few lines of classical (Bohr-Sommerfeld) calculations, leading to this picture (top: QM, bottom: classical, blue: experiment):



    Could anyone comment on that?


    update: I have looked at two of these citations of the Frerichs 1934 paper (the more recent, English ones): the 1992 one concerns much higher levels (10->30, getting nearly equally spaced sublevels) and refers to only one experimental paper for the Lyman series (->1, the Frerichs one) and 3 papers for the Balmer series (->2). The second (1996) concerns Lyman-alpha (2->1).
    There is something really strange going on with this Lyman-gamma ...

  • A clear one for the Lyman series (2->1, 3->1, 4->1) can be found in the historical "Der Starkeffekt der Lymanserie" by Rudolf Frerichs, published January 1934 in Annalen der Physik (its editors back then: W. Gerlach, F. Paschen, M. Planck, R. Pohl, A. Sommerfeld and M. Wien); here are its results:


    This is an interesting history. I know nothing about Gryzinski, except that he seems to want to explain atomic spectra using a semi-classical approach, rather than quantum mechanics? With regard to the Frerichs paper, it seems that until other groups replicate the effect, it's not something to use to support a theory. Do you know what Frerichs et al. were doing that was different?


    It also seems to me that a physical interpretation like Gryzinski's would only conflict with the Copenhagen interpretation of quantum mechanics, which is opinionated about what is going on under the hood, rather than with quantum mechanics as a whole?

  • @Eric,
    This indeed seems a fascinating story about the foundations of modern physics ... a paper from the best journal in 1934 showing disagreement with quantum calculations, which still seems to be the only one with Lyman-gamma ... and just 3 citations. I cannot find any other experimental paper with Lyman-gamma.
    It seems an inconvenient problem brushed under the carpet ... and then shameless use of the Stark effect in all textbooks, claiming perfect agreement without even referring to experiment.


    Gryzinski didn't even use semiclassical, but pure classical mechanics: Bohr plus classical spin of the electron (magnetic dipole moment + gyroscope), plus eventually precession of this spin to explain the Bohr-Sommerfeld quantization condition (1987 paper).
    And in nearly 30 papers published in Phys. Rev.-level journals (1957-2000) he shows surprisingly good agreement of these classical considerations with experiment, sometimes better than the quantum ones - especially for various scattering problems. His papers have ~3000 total citations ( https://scholar.google.pl/scholar?hl=en&q=gryzinski ).


    I don't think classical considerations have to be in disagreement with quantum ones. Adding thermal noise to a classical trajectory and averaging over time, we get exactly the quantum probability distributions (Maximal Entropy Random Walk).
    The question here is to understand the short-time dynamics of electrons - especially if we want two nuclei to cross the Coulomb barrier at 1000 K ... and Gryzinski's work gives strong arguments that the radial trajectory usually dominates (the zero-angular-momentum limit of Bohr-Sommerfeld).
    Such radial trajectories can happen between two nuclei, screening their Coulomb repulsion and so making LENR possible to imagine.
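    The MERW claim above can be illustrated in a few lines (my own sketch, not from Gryzinski's work): on a finite 1-D lattice, the stationary distribution of the Maximal Entropy Random Walk is the squared dominant eigenvector of the adjacency matrix, which coincides with the quantum ground-state density of a particle in a discrete box:

```python
import numpy as np

# MERW on a path graph of N sites: stationary distribution pi_i = psi_i^2,
# where psi is the dominant eigenvector of the adjacency matrix.
N = 50
A = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)  # path-graph adjacency

lam, vecs = np.linalg.eigh(A)
psi = vecs[:, -1]                   # dominant eigenvector (largest eigenvalue)
pi = psi**2 / np.sum(psi**2)        # MERW stationary distribution

# Quantum ground state of the discrete infinite well: sin^2 profile
x = np.arange(1, N + 1)
quantum = np.sin(np.pi * x / (N + 1)) ** 2
quantum /= quantum.sum()

print(np.max(np.abs(pi - quantum)))  # ~0 (machine precision): the two coincide
```

    The agreement is exact here because the dominant eigenvector of the path-graph adjacency matrix is the same sine profile as the discretized particle-in-a-box ground state.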


    ps. I have also asked this question at physicsforums - maybe they will be able to clarify it (?)
    https://www.physicsforums.com/…ory-vs-experiment.885330/

  • I'll be interested, Jarek, to see the reply that is given to your followup question on that forum. What is clear from the answer from Khashishi is that there are the spectra, and then there's the matter of interpreting them, and that the latter is subtle and requires sophisticated technique.

  • The argument here is:


    (1) QM predictions are wrong


    (2) classical predictions are reasonably good


    That has very little merit in determining which theory is fundamentally best, because the "quantum predictions" are not that. They are one particular approximation, true only under various assumptions. Most physical predictions for complex systems involve approximations and assumptions. So when comparing results you are often testing the validity of those assumptions and approximations rather than that of the underlying theory.


    Luckily such approximate data is not required to establish the quantum nature of electrons. Electron beam effects uniquely quantum in nature have been observed (double slit experiment etc). Wikipedia's source for this is Feynman lectures vol 3 - not very helpful perhaps, but I'm sure a bit of digging will find the original.


    It is not necessary to confuse this with foundations arguments about Copenhagen etc. There are various ways out of the Copenhagen mess, the one most preferred, because simplest, now is Everett many worlds. And others are perhaps cleverer. But all these foundation arguments do not alter one iota the physical predictions of the theory.

  • This is not about one of them (classical or QM) being wrong and the other being right.
    Both of them are applied as approximations:
    - practically used QM neglects e.g. the neighborhood of the atom, interaction with which causes wavefunction collapse, whose outcomes QM does not predict (only their probabilities). This lack of information (about the neighborhood) is treated in the manner of statistical physics: QM has the statistical mechanics of the state of this neighborhood built in. If we could consider the QM of ever larger systems, ultimately the wavefunction of the Universe, there would no longer be a neighborhood and so no wave collapse - it would become a deterministic theory ... but we are very far from being able to work with it in practice,
    - practically used classical mechanics neglects e.g. the field everything is happening in, which leads e.g. to the requirement of the Bohr-Sommerfeld quantization condition to find resonance with the field (as in the picture of Couder's walking droplets: http://www.pnas.org/content/107/41/17515.full ).


    QM and classical pictures are just different perspectives on the same system; for example, we can look at coupled pendula through their positions (classical) or their normal modes (quantum). Increasing the number of pendula to infinity, we get a crystal with classical positions or "quantum" phonons.
    Considering classical trajectories, we need to add the neighborhood in a statistical way: as noise. Averaging such perturbed classical trajectories, we get a Boltzmann distribution among trajectories, which leads exactly to the quantum density clouds (Euclidean path integrals / Maximal Entropy Random Walk).
    Adding the field and wave-particle duality, e.g. for the electron - that it is both a localized entity (indivisible charge) and coupled waves around it (caused by internal periodic motion: de Broglie's clock/zitterbewegung) - we get interference in the double-slit experiment: the particle takes a single trajectory, while its coupled "pilot" wave takes all trajectories, leading to interference. See the double-slit experiment for Couder's walking droplets: https://hekla.ipgp.fr/IMG/pdf/Couder-Fort_PRL_2006.pdf
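    The coupled-pendula analogy above can be made concrete (a minimal sketch with assumed, purely illustrative parameters): diagonalizing the coupling matrix turns the "classical" position description into independent normal modes:

```python
import numpy as np

# Two identical pendula coupled by a spring. In position coordinates the
# motion is coupled; diagonalizing gives independent normal modes, the
# "quantum-like" description.
omega0_sq = 1.0   # g/L, squared frequency of a single pendulum (assumed)
k = 0.3           # coupling strength (assumed)

# Equations of motion: x'' = -K x, with symmetric coupling matrix K
K = np.array([[omega0_sq + k, -k],
              [-k,            omega0_sq + k]])

freqs_sq, modes = np.linalg.eigh(K)   # squared mode frequencies and mode shapes
print(freqs_sq)   # [1.0, 1.6]: in-phase (omega0^2) and out-of-phase (omega0^2 + 2k)
print(modes)      # columns proportional to (1, 1) and (1, -1), up to sign
```

    In the mode basis each coordinate oscillates independently, which is exactly the "different perspective on the same system" point made above.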


    We shouldn't see QM and classical as opposing; instead, they are complementary tools/perspectives, and we should learn to choose the best of them for approximating real systems.

  • It is not necessary to confuse this with foundations arguments about Copenhagen etc. There are various ways out of the Copenhagen mess, the one most preferred, because simplest, now is Everett many worlds. And others are perhaps cleverer. But all these foundation arguments do not alter one iota the physical predictions of the theory.


    We're half agreeing and half disagreeing. I was pointing out that Gryzinski, with his physical interpretation of electrons literally falling in orbits around the nucleus rather than interpreting them in the usual manner as a probability field, might still be consistent with the mathematical application of QM (to be determined), although perhaps not with the Copenhagen interpretation, which seems to prefer inscrutable things such as probability fields at this scale.

  • Does Copenhagen fulfill energy conservation?
    Imagine you perform a measurement of the momentum of a photon, and so of its energy - Copenhagen says that you can get a random measurement value.
    So there is some probability that this photon has lower energy, some that it has higher - how can we even talk about energy conservation for a theory predicting random energies?


    A theory predicting only probabilities, like thermodynamics or statistical physics, is an effective theory: it tries to predict the most probable evolution of our limited knowledge, using some law of large numbers. This also applies to QM.
    We should search for the fundamental theory, for which the effective description at some scale is QM.
    We use Lagrangian mechanics from QFT to GRT - this fundamental theory should most likely be a Lagrangian theory.
    Lagrangian theories are deterministic, so using Bell's theorem as a counter-argument does not apply ("super-determinism").
    We need a field (e.g. EM) with localized constructs like charges/particles - localized entities of fields are called solitons. Using topological solitons we get quantization of charge, rest mass of particles, Coulomb attraction/repulsion, finite energy of the electric field of a charge (not true for a point charge), and many other properties.
    We need to understand the trajectories of these particles/charges/solitons - their long-time average should give the quantum probability clouds (and it does), their short-time behavior is nearly Kepler-type, and experiments suggest that nearly zero angular momentum dominates: free-fall radial trajectories.

  • Does Copenhagen fulfill energy conservation? Yes, the whole of quantum mechanics is just a pile of energy-balance equations.


    Imagine you perform a measurement of momentum of a photon, and so of its energy - Copenhagen says that you can get a random value of measurement. The uncertainty principle says instead that this random value will be correlated with the photon's other quantities by the energy conservation law.

  • Imagine you have a source of photons in the state |psi> = (|a>+|b>)/sqrt(2), where |a> and |b> are eigenstates of the Hamiltonian for different energies.
    What is the energy of such a superposition before measurement?
    Now perform measurements, separately on each photon - sometimes you will get the higher energy, sometimes the lower.
    Is energy conserved here? In other words: is the energy of the superposition before measurement always the same as after the measurement?
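    A toy numerical version of this question (my own sketch): photons prepared in (|a> + |b>)/sqrt(2) with E_a != E_b. Each single measurement returns E_a or E_b with probability 1/2; only the ensemble average matches the expectation value <psi|H|psi>:

```python
import numpy as np

# Simulate energy measurements on photons in the state (|a> + |b>)/sqrt(2).
rng = np.random.default_rng(0)
E_a, E_b = 1.0, 2.0                       # assumed eigenenergies, for illustration
expectation = 0.5 * E_a + 0.5 * E_b       # <psi|H|psi> = 1.5

outcomes = rng.choice([E_a, E_b], size=100_000, p=[0.5, 0.5])
print(outcomes[:5])        # each single outcome is E_a or E_b, never 1.5
print(outcomes.mean())     # ~1.5: matches <psi|H|psi> only on average
```

    This is exactly the tension the question raises: individual outcomes fluctuate, and only the expectation value is conserved across the ensemble.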

    Now perform measurements, separately on each photon - sometimes you will get the higher energy, sometimes the lower.
    Is energy conserved here? In other words: is the energy of the superposition before measurement always the same as after the measurement?


    All measurements involve an exchange of energy and need a finite time frame to complete. Thus the result of a single measurement is always of a statistical nature. Reasoning about one photon makes no sense!
    There is one small exception regarding spin information, which, in entangled systems, can be determined without disturbing the system.
    Thus if the system feature you intend to measure is decoupled from the measurement, then you can reason about a single photon.
