
Posts by Longview

    My surmise: a single (i.e., one-shot), very brief, supercritical discharge, that is, one far above the critical current density for sustained superconductivity, might "sneak through" via some mechanism. Perhaps the excess "supercritical" current pulse is transiently transmitted via surface "image" positive charges traveling by ion-to-ion "hand-offs" among relatively immobile positive ions, remaining at least charge-balanced by mobile above-surface electron flights. That is likely to be expected, and has likely been seen many times.


    More difficult and more interesting, showing only when the superconducting critical current density is exceeded but before thermal destruction of the SC state, and at very high transient field gradients: one might observe, near the condensed surface, the equivalent of "vacuum" or diffuse-plasma pair (e+ and e-) production. This might be further enhanced by oscillating square waves at the surface (there are many examples in LENR, including the Lipinski patent). Hypothetically, at least, positron and/or electron escape would be relatively easy in the context of such surface-condensed lepton / ion charge mobilities.


    As many here know, that is not really relevant for sustained power use or production. On the other hand, it could easily provide useful information on materials and processes for further R&D. An "interface" or surface-associated field gradient of great intensity? Yes. The possibility of released leptonic charges? Yes. Over-unity spikes? Sure, from several sources, including store/release effects and measurement errors in charge-noisy, EMF-noisy environments.


    But net, overall energy gain? Perhaps not as easily, if at all. Still, it is here that the information and "theorizing" provided by "Eros" is worthy of some attention, and perhaps of further, somewhat less "hopeful" empirical effort.

    I always appreciate your deep and sensible knowledge of New Physics, dear Wyttenbach. But you are not the only one speaking and writing in that way of neutrons, without the evidence most can grasp. I prefer deposing a theory by experimentation rather than with words. Much of the W-L discussion seems fatal to itself. It is becoming reminiscent of the alchemical era, when the best theories contained ideas such as "phlogiston". I think the S of W-L-S did drop out; it just did not hold up in any evidentiary sense. No one is complaining about that loss, probably not even the author. And thus, by experiment, it is seen that hypothetical superconducting regions are not able to shield or absorb, say, 1-angstrom gammas, which otherwise can pass through a lot of anything... So what experiment will depose W-L?


    Experiment is vital to science. The rest is artful mathematics, attractive ideas and useful recipes.

    In David French's podcast on CFN: David French on the Cold Fusion Now! podcast (#008)

    he claimed the USPTO had indicated to him, four years ago, that they would grant an LENR patent if the inventor proved it works. From what I read of Swartz's argument, he said many things, but provided no proof other than his own results and general LENR lore to support his technology.




    David French and other experts in patent law have often told me that you should never include theory in a patent. Theory is a disadvantage with no redeeming benefits. If the theory is wrong, the patent may be invalid. Whereas if there is no theory, the patent will be just as strong as it would be with one. You cannot patent "a force of nature," which I gather means a law of physics or a theory.


    The problem there, and a problem that David French may not be mentioning: so-called "pioneer patents". These are very rare, and likewise might get the grant of monopoly only through quite strict adherence to protocol. In a truly pioneer patent, citation of the theory may be essential (that is why I cited Gould). The reason is simple: the examiners very likely have no idea of the underlying physics, chemistry, biology, QED, etc. Hence the necessity, in those very rare "pioneer" patents, of presenting the theory; and if no existing theory truly applies, then at least a coherent new theory can wow the examiner enough that she or he can later say to the courts, "they presented apparently coherent theory supporting their application and claims". After all, the fundamental motive of a patent system is "full disclosure", and the rationale should be that it accelerates the development of new technology.


    The Lipinskis may have harmed or slowed their short-term prospects... but the long term might be quite a different story.


    Gould shows us how extraordinary such pioneer patent grants are in every way: violating all the rules and procedures, yielding huge revenues, while dragging down large, successful, mature implementers of technology such as the one-time leader Spectra-Physics and eventually all the rest... perhaps assuring that a "Gould v." can never be relived. But don't count on the Supremes in this regard; they have deeply surprised the corporate "handlers", the "Nation", the "public", and the World... many times before.

    I imagine many of us have seen David French lecturing, apparently knowledgeably, on the ostensible "post-SAWS" view of the USPTO. But in one such lecture at some meeting (at the 2013 ICCF?), it is certainly Mitchell Swartz's voice heard attempting a description in the background, off-camera. Swartz there attempts to offer strong personal counter-evidentiary claims to dispute directly the nuances in "Practice and Procedure" that French attests as the USPTO's new "reform" modification. Mitch is there, IMHO, unfortunately effectively cut off by the skilled rhetoric and misdirection of French, Esq.


    At least it is a good example of one reason to have an attorney, certainly when a few $trillion or so might be at stake.

    Last I knew, many PTO examiners are ad hoc "volunteers" who now come from various backgrounds thought to be relevant, and hence may not be completely up on Langmuir, Schwinger, Boehme, Pauli, Bose, and surely not up on the latest from P. Hagelstein, F. David, R. Mills, G. Egely, Mel Miles, and so on.

    As most know here, many prominent AHE / LENR reports might be explained by some plausible mechanism for how the 782 keV/c^2 mass-energy deficit (of a free neutron relative to a proton plus an electron) is made up.
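
    For those who like to check the arithmetic, here is a minimal sketch (the masses are standard CODATA values, rounded; the script is illustrative only):

```python
# Quick check of the ~782 keV figure: the mass-energy of a free neutron
# versus a proton plus an electron (CODATA values, in MeV/c^2).
m_n = 939.56542  # neutron
m_p = 938.27209  # proton
m_e = 0.51100    # electron

deficit = m_n - (m_p + m_e)
print(f"n - (p + e) mass difference: {deficit:.3f} MeV/c^2")  # ~0.782
```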


    This was discussed earlier here on LF, in some detail and from a quite different perspective, now linked at:


    deBroglie's equation and heavy electrons


    [I deduce we no longer need to advise "newbies" that the W-L theory ostensibly explains "growing" a proton into a neutron by the addition of such a heavy electron.]

    As many here may know, and IMHO, deBroglie offers a quite different explanation for "heavy" electrons, and that explanation might be, at least under "classical" QM, quite inconsistent with the usual relativistic interpretation. I believe the relativity-based explanations may be quite difficult to reconcile with observations. Recently the theoreticians seem to be seeking, and more recently demanding, difficult-to-imagine scenarios. And if Dr. Hagelstein's and other JCMNS articles analyzing the energetics are correct, it may be quite impossible, or insufficiently productive, to seek "relativistic" explanations of increased mass suitable to accomplish LENR. This has apparently now evolved into a "received view" that is often used to brush away any low-velocity heavy-electron explanations of LENR / CF.

    A little background: I was primed long ago, well before my science career, for discussions of QM via deBroglie by P.W. Atkins' Physical Chemistry (W.H. Freeman, 1978), especially the second section, "Structure". And now I see a much later influence layered on the Atkins text: the one-volume version of the 1993 second edition of the McGraw-Hill Encyclopedia of Physics (Sybil Parker, editor). That tome is where I first saw the hints that led me to focus on one of the two core deBroglie equations, namely lambda = h/p, where "p" can be taken as classical momentum (that is, m times v, as I understand it). Some key passages are there, ca. p. 1112. I won't review them here, except to say that the by-now-classical QM uncertainty relationship, the reciprocal "complementarity" of position (distributed as "wavelength", lambda) with momentum (distributed as "p"), can apparently be further decomposed (classically, Newton: p = mv) to allow not only velocity uncertainty but also mass fluctuation, at least in one vectorial pairing (in that article, the "x" axis).
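
    To make lambda = h/p concrete, here is a small sketch; note that the ~1e6 m/s "conduction band" velocity is my own illustrative assumption, not a figure from the encyclopedia article:

```python
# de Broglie wavelength, lambda = h / p, with classical momentum p = m * v.
h   = 6.62607015e-34    # Planck constant, J*s
m_e = 9.1093837015e-31  # electron rest mass, kg

def de_broglie_wavelength(mass_kg, velocity_m_s):
    """Wavelength in meters for a particle of given mass and velocity."""
    return h / (mass_kg * velocity_m_s)

# Example: an electron at a typical metallic Fermi velocity
# (~1e6 m/s, an illustrative assumption).
v = 1.0e6
print(de_broglie_wavelength(m_e, v))         # ~7.3e-10 m, about 7 angstroms

# Increasing the *mass* at fixed velocity shortens lambda proportionally,
# exactly as increasing the velocity at fixed mass would: the p = m*v
# decomposition discussed above.
print(de_broglie_wavelength(2.53 * m_e, v))  # the "heavy" electron case
```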

    With the greatest respect for some of the living giants in the CF field, such a strictly relativistic explanation of mass gain seems, at least at first view, quite contrived. So much so that I suspected it a "strawman": having electrons in the conduction band travel anywhere near the vacuum velocity of light.

    But here is another piece the older folks know well, or maybe not: Cherenkov (Cerenkov / Tcherenkov) radiation, the emission of photons induced by a charged particle transiting a transparent medium faster than the phase velocity of light in that medium. For example, the velocity of light in pure water is about 0.76 of c in free space. Thus, with ANY superluminal transmission (i.e., above c in the medium) of a massive particle, we might "have our cake and eat it too": an increased relativistic mass at a relatively modest velocity compared to vacuum c, AND the possibility of bonus energetic photon(s) via Cherenkov emission.
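
    A quick numerical check of that ~0.76 figure (n for water is the standard visible-light value):

```python
# Cherenkov threshold: a charged particle radiates when v > c/n, the
# phase velocity of light in the medium. Check the ~0.76 figure for water.
n_water = 1.33  # refractive index of pure water, visible light
beta_threshold = 1.0 / n_water
print(f"v/c threshold in water: {beta_threshold:.3f}")  # ~0.752
```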


    [For "very newbies", Tcherenkov, Cherenkov, Cerenkov radiation, is the blue light seen in reactors and radwaste storage pools, the ostensible source of that blue light, for decades now, and I see at our never-to-be-trusted for controversial information, is Cherenkov's1934 explanation which led to Cherenkov's 1958 Nobel.]

    With respect to what has been somewhat difficult to envision, this claimed relativistic mass gain of 0.782 MeV/c^2 (giving a total of ~2.53 times the nominal CODATA electron rest mass of 0.511 MeV/c^2): I am deducing from ICCF-21 chats and other conversations with some of our most famous LENR scientists, and others perhaps not so famous, that if this is indeed possible, it is enabled by the much lower velocity of light c (from our external perspective, of course!) in some solid media, such as boron nitride (BN), graphene, diamond-like coatings, PTFE, Schott and Nikon high-refractive-index glasses and polymers, as well as barium glass, uranium glass, leaded glass, and so on.
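
    For the record, the velocity such a strictly relativistic gain would demand is easy to compute (a sketch, using only the figures quoted above):

```python
import math

# What velocity would a relativistic explanation require? Total
# mass-energy = rest mass + claimed gain; gamma = total / rest.
m_e_rest = 0.511  # MeV/c^2
gain     = 0.782  # MeV/c^2

gamma = (m_e_rest + gain) / m_e_rest     # ~2.53, as stated above
beta  = math.sqrt(1.0 - 1.0 / gamma**2)  # v/c
print(f"gamma = {gamma:.2f}, v/c = {beta:.3f}")  # ~2.53, ~0.919
```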


    c is similarly reduced in high-refractive-index materials such as boron nitride and boron carbide, as well as in spodumene-like products, including high-mp, low-cost, quite transparent materials such as Corning's "Visions" ware (likely related to their "Vycor" glassware), and traditional transparent refractories such as fused or amorphous quartz, zirconia, etc.


    As one simple example, here is a little boron nitride paper:

    https://ntrs.nasa.gov/archive/….nasa.gov/19870015584.pdf


    Therein is reported a refractive index of 1.65 to 1.67; taking the mean gives c = 1.8 × 10^8 m/s in the material, vs. ~3.0 × 10^8 m/s in vacuo.
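
    Checking those numbers (a sketch; the index values are the ones reported in the paper):

```python
# In-medium light speed for boron nitride, from the reported index.
c_vac = 2.998e8              # m/s
n_BN  = (1.65 + 1.67) / 2.0  # mean of the reported range

c_BN = c_vac / n_BN
print(f"c in BN: {c_BN:.2e} m/s")  # ~1.81e8, matching the ~1.8e8 above

# Cherenkov threshold in BN, versus the ~0.919 v/c computed earlier for
# the 2.53x heavy electron: 1/1.66 ~ 0.602, comfortably below 0.919.
print(f"threshold v/c in BN: {1.0 / n_BN:.3f}")
```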


    Finally, we see a conundrum of an almost ever-present structural theme in the more functional CF and LENR "cells" resolved and/or explained without resort to "crazy" stuff: the near-universal presence of phase, state, field, or other structural interfaces / junctions in such cells. That may provide a plausible explanation of how a "situational transluminal velocity" could accomplish something constructive, even when the band gap between the valence and conduction bands is too large for metallic behavior. And, proactively, we can now see that (perhaps) photons in transparent media are conjoined with electrons in nearby metallic media. Is there another parallel to Hagelstein's evocation and explanation of the Karabut data, or to other theorists also seeking to explain the possible direct thermalization / "phononization" [to use an awkward term] of MeV photons?


    Ponderables: What is the likelihood of a "heavy" conduction-band electron making an excursion (channeled, stripline- or TIR-fiber-style) into adjacent non-conducting / insulating dielectric material? What happens to Cherenkov photons in that case? Are they plasmons? Can they participate as bosonically additive entities (that is, can they coherently elaborate optical pumping that might, through Thomson-Compton or other mechanisms, further accelerate already electrostatically accelerated electrons)?


    As always, I admonish: "do your own due diligence". Think of the energy and power densities that might accidentally occur. We don't want to lose these genii!

    The tables shown suggest that particular melting points, the mp to bp range, and perhaps other colligative properties govern the efficient formation of Taylor cones.


    But, I should add that some ICCF chat suggested that metals with very low melting points might be the best candidates for turning something akin to transient Taylor cones into magnetic vector potential launchers.

    It is less the "marks" that I found to be the interesting part. The self-generation of immense field-effect "needles", and their self-magnification and self-repair, are the prime reasons I posted Wilson's pdf. Those interested, please look at the later part of the slide show concerning "Taylor cones". Those later images show needle-like protrusions spontaneously built of conductive or semi-conductive metal, extending remarkably, as needles, out toward the opposing electrode.


    These cones or needles build only on the negatively charged electrode. They can enhance huge vector potential discharges. A careful reading and acceptance of the claims could justify >782 keV electrons occurring at the tips of a population so discharged, since the base voltage is up to 100 kV (classically giving electrons a maximum energy of 100 keV). An optimist might be buoyed by the claim that these Taylor cones appear to be continuously reconstituted by the ambient charge gradient itself. The Taylor cones are dependent on the semiconducting nature of the materials from the positively charged base electrode.

    The upshot of my comment above re: THHuxleynew, and with respect to this thread, is that there may very well be specific photonic energies that promote specific reactions, surely including some often identified as LENR. That is effectively equivalent to a very "narrow temperature range", if we are hunting for electron "pushing", whether in bonds or perhaps as part of, say, initiating a "vector potential" discharge. Here is an elaborated hierarchy of photon sources, subject to more expert review:


    Incoherent broadband sources, rough order of increasing spectral narrowness:

    1. Incandescent lamps

    2. Halogen lamp (hotter, at least)

    3. Fluorescent photon sources (CFL, phosphored white fluorescent bulb, some "white" LEDs)

    4. Color phosphor sources (some "neon" sign tubing, some old color fluorescent bulbs)

    5. Specific-color LEDs (many have quite narrow peaks, say 15 to 40 nm FWHM, last I looked, and they now extend out to near 200 nm in the UV)

    6. "Neon" style gas discharge lamps, with single narrow spectral peaks defined by the gas content (H, He, Na, Ar, Ne, Hg and many others)



    Coherent narrow bandwidth sources, rough order of increasing coherence and narrowness:

    7. Gas discharge "super-radiant" lasers, e.g. TEA types

    8. Linear gas discharge lasers (e.g. some CO2 IR and some deep UV)

    9. Diode lasers

    10. Gas discharge pumped lasers

    11. Diode pumped lasers

    12. Fabry-Perot cavity style "classical" (tuned / mirror) lasers

    that thesis will work only if the laser energy is much higher than the typical Planck thermal energy.


    Don't you have that backwards? We know that 1 eV corresponds to a 1240 nm photon, in the near infrared, about half the energy per photon of red visible light. And if what I see at, say, the Physics Stack Exchange is correct, 1 eV is also equivalent to 11,604 K [that is six places of a ten-digit CODATA value, BTW]. Another competing and quite distinct interpretation is that 1 eV represents not temperature (a bulk, statistical property) but bond energy, i.e., ~1.6 × 10^-19 joules. One implication of the two incommensurate figures is that an incoherent thermal source at modest temperature (say a 3400 K color-temperature halogen lamp) is FAR LESS effective at specific photochemical activation than a laser having the requisite minimum per-photon energy for the same bond-specific activation, or, say, a specific bond scission.
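
    Both equivalences are easy to verify from the standard constants (a sketch):

```python
# Check the equivalences quoted above: 1 eV as a photon wavelength,
# and 1 eV as an equivalent temperature (E = k_B * T).
h   = 6.62607015e-34   # Planck constant, J*s
c   = 2.99792458e8     # speed of light, m/s
k_B = 1.380649e-23     # Boltzmann constant, J/K
eV  = 1.602176634e-19  # 1 eV in joules

wavelength_nm = h * c / eV * 1e9
temperature_K = eV / k_B
print(f"1 eV photon: {wavelength_nm:.0f} nm")  # ~1240 nm, near infrared
print(f"1 eV / k_B:  {temperature_K:.0f} K")   # ~11604 K
```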

    Regardless of those incommensurate interpretations at the Stack Exchange, it appears, THHuxley, that you are implying that laser energy is somehow less adequate to the task of doing electronic bond work than mere heat? Quite the contrary: for several easily understood reasons, laser energy is much more capable in this regard (per watt) than incoherent and/or thermal radiation having, say, a Boltzmann mean or "tail" above the same threshold energy. Photon fluence rates, and, for higher-order reactions, photon coincidence rates, coherence, and energy specificity are ALL typically vastly better on a per-watt basis than for any similarly powered incoherent, broadband, incandescent, fluorescent, or even LED source.
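
    A rough illustration of the disparity, using only a Boltzmann factor rather than a full Planck integral (so a deliberate simplification, not a radiometric calculation):

```python
import math

# Rough illustration: the relative population of thermal quanta above a
# threshold energy falls off as exp(-E / kT), so a 3400 K lamp has very
# little output above a few eV.
k_B_eV = 8.617333e-5  # Boltzmann constant, eV/K
T = 3400.0            # halogen lamp color temperature, K

for E_threshold in (1.0, 2.0, 3.0):  # eV; illustrative bond energies
    print(E_threshold, "eV:", math.exp(-E_threshold / (k_B_eV * T)))
# ~3e-2 at 1 eV, ~1e-3 at 2 eV, ~4e-5 at 3 eV -- while a laser at the
# threshold energy puts essentially every photon above it.
```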

    A possibly instructive example from another technology-rich area of innovation: genetic engineering patents used to be "easy", before the whole genomes of many crop plants and model vertebrates were "fully" sequenced beginning in the late 90s. Before about 1990, few if any patent examiners had the basic skills to understand the relevant tools and technologies of "molecular biology", that is, cloning / DNA / RNA / immunology / PCR / molecular phylogeny / virology / bio-informatics, and so on. Back then, virtually no one at the USPTO had hands-on experience with any aspect of the then very new processes involved. By contrast, there are now many, many "MolBio" technicians turned PhD or JD or DPh or MBA/biologist or engineer/MD etc. who have such knowledge. Innovations over the decades have refined techniques into "packages", and devices now allow "plug and play"... so a genetic engineering patent is now often only refining some aspect of that tech "at the edges", so to speak.

    Thus, since about 2001 and ever more so to the present, genes themselves have become much more controversial and are again questioned as representing any form of patentable technology. Remember the 1970s role of gene transfer in overturning the supremacy of the old patent, trademark, and law-of-the-sea appeals court in the US. In a decision not long ago, the Supremes ruled that DNA sequence per se was not patentable or subject to IP protection (copyright), but foolishly they decided that RNA, as mRNA, rRNA, anti-sense RNA, iRNA, or other forms of artificial AND natural regulatory RNA, was still eligible. Apparently the Supremes' goal was (with no expertise in the disciplines affected) to marginally disenfranchise individual inventors in that major and burgeoning bio-medical area, and by contrast to massively and prejudicially reward a particular genre of biomedical research. That is, the focus has shifted from motivating inventors as individuals to rewarding corporations whose efforts push that particular "selected" direction. A path already seen as likely to be very dangerous by many, and by now yet another rather myopic tendency to allow "tilting the game table" at an incredibly persistent and steep angle.

    I used to believe, just as some here have written, that "there are too many CF theories". Now I believe there may be only a few GOOD theories, possibly too few. Thus for me, anyway, it is absolutely essential that basic research be done in conjunction with theory development; that is the only way to weed out bad theories. That is in preference to rhetorical assertions and sui generis argument from comfortable dogma. To have didactically declared "impossibility" in some cases now appears just wrong, particularly when based on standard models from collisional physics that seemingly still predate, and seem uninformed by, the Enewetak disaster; predate condensed matter physics; predate solid state theory in many cases; predate nuclear catalysis (one way of describing muonic fusion); and predate McKubre's excellent evidence (re)confirming the F-P AHE.


    To Mitchell Swartz's credit, he offers theory, or at least he confidently inspired me to believe that he "sees" and understands what is happening well enough to drive innovation in his own devices...


    The field (for me: Lowered Activation Energy N R, LAENR) has now attracted some new and skilled theorists, but frankly many of the younger theorists seem far removed from the needed "hands-on" engineering development. Their theories can be so specialized as to be daunting for even the most skilled constructors and empirical analysts, and unfortunately far from allowing the requisite "falsifiability" necessary to advance experimental hypothesis testing.


    Theorists might note that deep physics may well need a GUT that is broader, not deeper.



    put damned theory in their patents.

    I know you have good intentions here, Alain. But "damned" vs. "good" theory can be just a matter of a few months or years, or a few famous supporters, or a few less moronic reporters or armchair commentators. The reason R. Gordon Gould triumphed in the end was that he had excellently documented notes, a full and timely disclosure to coworker witnesses, and a well-grounded quantum mechanical theory behind his rather speculative 1957 notes on building an optical maser... or "laser", as it became known. Over the decades after the expiration of the "original" Bell Labs laser patents, Gould and his assigns were granted some 46 retroactive new patents, a largely "one and only" genre of practice from the courts, never before seen at the USPTO. After a couple of hundred million US dollars in royalties (in 1990s dollars), the Gould family and the assignees were neither damned nor crying. It is a good legacy of "theory". Similar theory, but adding further "reduction to practice" beyond Gould's prescriptive notes, perhaps wrongly gave Schawlow and Townes the Nobel in physics for the same device, with Bell Labs as assignee.


    The Lipinskis have a patent; Swartz does not. The Lipinskis have an enunciated and quite unusual theory, and they deny any LENR attestation. Swartz has theory, but it is, by his own word, not novel (which should be good!) and perhaps not as well enunciated. Further, Swartz's application is unfortunately identified with a SAWS proscription... which, btw, also implies possible defense-related applications, just as Gould's Fabry-Perot coherent laser cavity was and is.


    So I would suggest that, generally, theory is a strategic tool that can help or harm a patent application. This may be where good counsel is important, though not necessarily vital: a friend of mine had his pro se patent application for a novel product personally shepherded through the patent process by a sympathetic USPTO patent examiner!


    [Longview is not a patent attorney nor patent agent, do your own due diligence].

    From our infamous online encyclopedia, specifically the less controversial article on Oliver Heaviside: "That same year he patented, in England, the coaxial cable. In 1884 he recast Maxwell's mathematical analysis from its original cumbersome form (they had already been recast as quaternions) to its modern vector terminology, thereby reducing twelve of the original twenty equations in twenty unknowns down to the four differential equations in two unknowns we now know as Maxwell's equations. The four re-formulated Maxwell's equations describe the nature of electric charges (both static and moving), magnetic fields, and the relationship between the two, namely electromagnetic fields.
    Between 1880 and 1887, Heaviside developed the operational calculus using p for the differential operator, (which Boole had previously denoted by D), giving a method of solving differential equations by direct solution as algebraic equations. This later caused a great deal of controversy, owing to its lack of rigour. He famously said, "Mathematics is an experimental science, and definitions do not come first, but later on." On another occasion he asked somewhat more defensively, "Shall I refuse my dinner because I do not fully understand the process of digestion?"

    Oliver Heaviside, 1850 to 1925. Zeus46 has this exactly right. Even non-specialists know that Maxwell's equations as given today are not in their original form. Heaviside long ago did "the work" of a particular and useful simplification. A lesson there, I suspect. I understand electrical engineers still use Heaviside's methods for their simplicity... thus he is long revered, and surely not for obscuration...
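
    For reference, the four equations in their modern Heaviside vector form (a standard SI-units textbook rendering, not Heaviside's own notation):

```latex
\begin{aligned}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0} &
\nabla \cdot \mathbf{B} &= 0 \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t} &
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J}
  + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
\end{aligned}
```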

    Below are two suggestions, since we are drifting quite a ways from ICCF-21. The Swartz-as-appellant Leonie Brinkema adjudication, and his by-now retrospectively criticized pro se effort, is a quite important and separable non-ICCF-21 topic, and not necessarily of any interest for just those who want to write or read material directly related to ICCF-21. It is important to keep the ICCF-21 topic going, since some here seem, or pretend, to know little to nothing about it.


    However, in view of the substantial topic drift, I suggest one or two new threads, with appropriate reassignment of several recent posts:


    1. A new thread in which to discuss Mitchell Swartz' latest and perhaps his earlier legal efforts.

    2. Another to discuss the many other important legal issues relating to IP in LENR, perhaps in yet a third thread.

    Super... but has he? And how long ago did he offer that?


    @ S-O-T: Please re-read my post; I admit my writing is not as easy as it could be. That offer was, as I wrote, "at Boulder ICCF" (the subject of this thread!), that is, between June 3 and 8 this year, and more precisely about June 5, 2018. I am not presently prepared to take advantage of Mitchell's offer. And my original offer, by email, was to conduct my experiments in a context he defined. The specific questions I had in mind did not include validating his device, although that might incidentally have been evident as a by-product of the primary motive: refinement of technique and testing of theories.


    If one's motive were to steal his innovations, I'm sure that sort of risk would be apparent to him. When one has put as much effort, thought, and innovative synthesis into something as he appears to have, one might get concerned about intellectual property theft.


    @Shane: It is an unfortunate state of IP affairs today. Individual or "small entity" inventors are the last gasp of the patent system envisioned and realized by Jefferson in the US, back in (was it?) 1803, which set in motion a period when inventors and their patents drove much of, at least, the consumer side of what has now evolved into an ongoing "tech revolution". Now thousands of active patents are operative in a single technology... much of "high tech" is often no longer as easily begun, or even approached, in a garage, and highly bureaucratic corporations often drive "innovations" that are merely marketing ploys. They must cross-license thousands of patents just to produce a popular product.


    The essential motivator of the "patent", that is, the temporary monopoly granted in exchange for full functional disclosure, which once surely motivated the individual innovator, is now a very difficult path, made essentially impossible for the merely comfortable individual human innovator. Often the best one can hope for today is that one's patent becomes at least a "publication" for other advancement purposes in some bureaucracy (academe, government, industry). Such "publication" can at least establish priority for prizes and notoriety, and even the ability to blather far beyond one's presumed expertise...

    Having watched Dr. Swartz's presentations at the MIT cold fusion extracurricular classes, my impression has been that his NANOR and PHUSOR devices likely do work in a predictable manner. Their small size would, perhaps uniquely, and more safely, allow them to fit into smaller experimental formats.


    So I, too, had once attempted, by email to Mitchell Swartz, to explain my own experimental need for such a "lab rat" module, as it was apparently being offered by him or his firm. But, at least initially, there was no response. Mitchell and I go back a bit further, as we both have some history as cancer researchers (mine very basic research, his more clinical) and we share a common professional organization membership or two. I happened on Mitchell at the Boulder ICCF... it was a fortuitous opportunity to discuss the situation face to face. After a fairly detailed biographical discussion, and perhaps a bit of review of my project, he graciously said, "I will help you".


    My motives at this age, probably not far from Dr. Swartz's own, are not to get help for myself, even though I may need it =O ... but hopefully to help push the whole LENR / LANR / CANR / CF theory / experiment effort forward.