Longview Verified User
  • Male
  • from Earth
  • Member since Nov 17th 2014

Posts by Longview

    "is that Lithium ions should not be transferred between two battery terminals at a rate exceeding two moles per minute."


    Seems to be a missing parameter here. Moles per minute per unit of area? For example, per square cm or square meter... In any case, it is a large number of charges transferred regardless of the areal dimensions of the transfer surface. Given that 1 coulomb is an ampere-second, and that there are ~96 thousand coulombs per mole of singly charged ions, this suggested "limit" works out to thousands of amperes, and hence to huge, and unlikely to be realistic, implied currents and power densities.
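
    As a quick sanity check of the implied current (my own arithmetic, assuming singly charged Li+ ions and no per-area normalization):

    Python:

    # Implied current at "two moles per minute", assuming singly charged Li+ ions
    FARADAY = 96485.0                              # coulombs per mole of elementary charge
    moles_per_minute = 2.0
    current_A = moles_per_minute * FARADAY / 60.0  # coulombs per second = amperes
    print(round(current_A))                        # ~3216 A -- enormous for any single cell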

    Here is a very crude guess from a rank amateur:


    Curbina: What about somehow incorporating solid superacids in one "control" and solid "superbases" (if they exist) in the experimental? The former should regenerate hydrogen from H-, at least in a gas phase rather than an aqueous milieu. The latter would at least be consistent with the overwhelming numerical count of protocols emphasizing that naked protons are not often seen in LENR results (FP electrolytes are almost always strong bases, for example... IIRC).

    UGC Lipinski Replication


    The Lipinski / UGC materials referenced at the thread linked above might be appropriate for a replication using resources available to the likes of Google. If nothing else, their results deserve independent confirmation. Further, their results, if replicable, show that much lower impact energies (a few hundred eV) may produce useful energetic output (Q as high as several thousand) from at least a lithium target.

    I understand your position and argument: that "10e-4" is literally DEFINED as 10 x 10^-4. Fortunately, it is not likely to be my problem at this late age. I rarely use the notation you mention, other than foolishly pasting it in from some other source. I would now not recommend it to anyone.


    Authors of computer programs, and hence their followers, court disaster by failing to follow the rules for exponential notation set up over the last 100 years; but what's new? Personally I recommend the use of the "^" symbol or the superscript equivalent; then no one will be left wondering why "-4" suddenly refers to one thousandth rather than one ten-thousandth. The leading "10" suggests to me that the base of exponentiation in that particular case is ten, rather than 2 or "e" or some other base.


    IMHO, the convention you are advocating is far from intuitive, and could crash a laboratory exercise or an important engineering effort. An order of magnitude is a lot of error to court because of poor attention to mathematical notational fundamentals.


    By the way, "e" has quite another meaning, as we surely all know. Borrowing that centuries-old designation to mean "exponent" is yet another invitation to unintended blunder.

    Python:

    >>> 1e-4
    0.0001
    >>> 10e-4
    0.001


    If true, this suggests the writers of Python may have come up seriously short in the exponents-of-ten business. For a simple example, 10 raised to the minus one power is by definition simply 1/10 = 0.1; likewise, and by definition, 10 raised to the minus 2 power is simply 1 divided by 10 squared, that is 1/10^2, or 0.01, or "one hundredth".

    As we all know, 1 raised to any positive power is simply 1, and by inference 1 raised to any negative power is simply 1/1, 1/1^2, 1/1^3, etc., and hence also simply "one". Such notation, ostensibly from Python above, seems at odds with basic exponential notation and its related algebra.
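
    For concreteness, and leaving aside which convention ought to win, here is how the two readings differ by exactly one power of ten (a quick check of my own, assuming standard Python 3, where the "^"-style operator is spelled "**"):

    Python:

    # "NeM" in Python's float literals means N x 10^M, not N^M and not 10^M
    print(10e-4)    # 0.001   -- parsed as 10 x 10^-4, i.e. one thousandth
    print(10**-4)   # 0.0001  -- "ten to the minus four", one ten-thousandth
    print(1e-4)     # 0.0001  -- parsed as 1 x 10^-4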


    We're going to see more of this, I fear. What concepts and programmatic idiosyncrasies, rather than actual old-fashioned engineering training, might have led to the failures of the Boeing 737 MAX MCAS/ACAS system?

    Jed Rothwell wrote:


    "I mentioned in passing that in the distant future if someone discovers fragmented computer data recorded in DNA...."


    I see a strong risk of misallocating resources by pursuing DNA as an information storage medium prematurely, and perhaps at all.


    While reading DNA is relatively easy, and relatively fast today at hundreds of base pairs per second, writing to DNA is still very slow and challenging relative to, say, magnetic or optical media. DNA synthesis speeds for "writing" are surely many orders of magnitude slower than the gigabits-per-second write speeds seen in optical or magnetic media. Even that comparison flatters DNA, since it is inferred from known replication speeds rather than from true "writing". Those replication velocities have been studied, and the processivity numbers known for DNA polymerases do not exceed roughly 1000 bases per second; and note that replication is not ab initio writing but simply copying an existing script. That also neglects the error correction that would be necessary, and which would substantially slow the overall write speed.
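
    A rough back-of-the-envelope comparison (my own illustrative figures, not measured benchmarks):

    Python:

    # Compare an optimistic polymerase-like "write" rate with a commodity storage channel
    dna_bases_per_s = 1000                     # ~upper end of the polymerase rates cited above
    dna_bits_per_s = dna_bases_per_s * 2       # 2 bits of information per base
    media_bits_per_s = 1e9                     # ~1 Gbit/s for magnetic/optical media
    print(media_bits_per_s / dna_bits_per_s)   # 500000.0 -- five to six orders of magnitude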

    The potential information density is impressive (order of 10^22 bits in 100 ml), but I suspect no more so than for any system allowing small molecules to serve as the "letters" of such information storage.
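
    That order of magnitude is easy to check crudely (my own estimate, assuming 2 bits per base pair, ~650 daltons per base pair, and roughly 1 g of DNA per ml):

    Python:

    # Crude upper bound on DNA information density
    AVOGADRO = 6.022e23
    bp_mass_g = 650.0 / AVOGADRO      # ~1.1e-21 g per base pair
    bits_per_gram = 2.0 / bp_mass_g   # ~1.9e21 bits/g before any redundancy or packaging
    print(bits_per_gram * 100)        # ~1.9e23 bits in ~100 g (~100 ml); 10^22 is conservative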

    Nothing is easy. I do not think DNA relies on high redundancy in nature. If it did, cancer and other diseases would be more common. But for data storage, even one copy is as reliable as the best human data storage, according to what I have read. It can be copied much faster than any other medium, because copying is done in parallel, not serial. If you want redundancy, you could make a million copies, or a billion, and compare them. Toss out any with errors. Store in a cool dry place reliably for far longer than recorded history has lasted so far.


    It is a large task to read enough to properly understand the situation. Redundancy is the only reason we can now, at last, easily read many-thousand-year-old DNA, typically isolated from bones. Without hundreds or even thousands of copies, there would be no chance of reading such ancient sequences reliably, because of the errors that accumulate in the DNA of dead tissues.


    Redundancy is apparently used in nature, hence diploid and polyploid genomes. Your point is well taken in referring to the DNA of a single living organism: there, each cell does a fair job of retaining the fidelity of the original genomic sequence through perhaps as many as 40 to 60 or so doublings in a lifespan (the Hayflick limit). Our cells have numerous repair mechanisms to correct the many distinct types of naturally occurring errors. But even with all these editing and repair mechanisms, a residue of mutations still accumulates to a surprising frequency over a lifespan. And of course there are well-studied mechanisms that induce cells with excessive, and likely irreparable, DNA damage to "commit suicide", i.e. apoptosis. The failure of such mechanisms is a major player in the progression of cancer from benign to malignant and thence to metastatic.


    Damage to DNA from passive storage is well known and well characterized. Careful refrigeration, that is "snap freezing" to encourage a glass transition rather than cleavages from crystallization, followed by storage at minus 78 C or lower and preferably at liquid nitrogen temperature, is a way to assure very long term storage of DNA. But it is likely not a practical approach to archiving non-genetic and/or non-biological information.

    Properly stored, DNA will last for hundreds of thousands of years, perfectly intact.


    Not quite so easy as it may seem. It has taken several decades of effort to begin to reliably sequence truly "ancient DNA" (the first Neandertal mitochondrial sequences, in the early 1990s, were the "easy" part). The error levels are high, even in dehydrated storage: mainly deaminations of cytosine, and depurinations (loss of whole bases from the strands). The main DNA virtue is that there are typically thousands of copies in an ancient specimen, so inferentially it is redundant enough to be reliable. As it is, DNA in the form of chromatin, that is with histones and nucleosomal structures intact, can be sequenced with substantial redundancy. So we can now easily look at the sequences of Neandertal, Homo luzonensis, Denisova, Homo floresiensis and so on. That took a huge effort, and is only now becoming more routine. One reason is that the data for a huge number of potentially contaminating species (both microbial and museum curatorial) have already been gathered multiple times, for many thousands of species, so any contamination is more readily identified and discounted appropriately. I am not saying that DNA is bad, but it is not easy, and it certainly relies on high redundancy for reliability. Magnetic, optical or electronic domains also have problems. Time and entropy degrade all information storage...

    Those described are hard copies, at least as of the last time I purchased some. Not electronic versions... which we can never "own", as I am sure you know. (And thanks, Jed, for your translations!)


    The big A is a temptation that I avoid. Seek alternatives and one can find them. Of course the alternatives often cost a bit more... We will pay later, when all competition is gone.

    With respect to avoiding "graphite soup", one might consider the electrodes of old-fashioned carbon-zinc batteries. Those old "dry cells" have central electrodes that seem to be a very rugged form of amorphous carbon, with possible graphite admixture.

    To reconcile the "incredible density" with ordinary physical chemistry, can we assume that the loss of one, or even two, degrees of freedom in a surface environment might explain the formation of ultra-dense deuterium? The packing on a solid surface might then enable not only charge-charge shielding, but also relatively fixed positional proximities not seen in any ordinary gaseous context.

    I suspect there is a nuance and a lesson in de Broglie's relationship, lambda = h/p, which classically says that lambda (that is, the wavelength, or rms positional uncertainty) equals Planck's constant divided by the Newtonian momentum p = mv. I suspect it is important to keep in mind that this relationship is "empirical", and is likely as fundamental as E = mc^2, and possibly even more so.
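
    As a simple numeric illustration of lambda = h/p (my own back-of-the-envelope, non-relativistic, and not tied to any particular LENR scenario):

    Python:

    # Non-relativistic de Broglie wavelength, lambda = h / (m * v)
    H = 6.626e-34      # Planck's constant, J*s
    M_E = 9.109e-31    # electron rest mass, kg

    def de_broglie_wavelength(mass_kg, velocity_m_s):
        """lambda = h / p, with Newtonian momentum p = m * v."""
        return H / (mass_kg * velocity_m_s)

    # An electron at 1e6 m/s (modest and safely non-relativistic):
    print(de_broglie_wavelength(M_E, 1e6))   # ~7.3e-10 m, i.e. roughly atomic dimensions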


    The Widom-Larsen theory thus considers that the electrons are moving fast, so that they get heavy.


    This is quite an orthogonal, or even opposite, implication from a relativistic mass increase with increased v. That is, strong confinement implies a proportionate mass increase; taken to the limit, slowing to zero velocity implies an infinite average mass. Effective mass for electrons has many implications, and perhaps none apply here. But Widom-Larsen is a good place to start. That theory does imply that effective mass can become useful mass in making up the mass-energy deficit normally realized as a neutrino in the reverse reaction.
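
    To put a rough number on that deficit (my own bookkeeping, taking the forward reaction as e- + p -> n + nu, neglecting the neutrino mass, and using standard rest energies):

    Python:

    # Rest energies in MeV (standard tabulated values)
    M_N, M_P, M_E = 939.565, 938.272, 0.511
    deficit = M_N - M_P                            # extra mass-energy the electron must supply
    print(round(deficit, 3), round(deficit / M_E, 2))   # ~1.293 MeV, i.e. an effective mass of ~2.5 m_e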


    I am certain it is not useful to devolve to Kepler and the Bohr atom, for many reasons well discussed in physics since the "ultraviolet catastrophe", beginning around 1900.

    Since the MEAN mass of the electron is precisely known, perhaps the only possibility is that the mass becomes time-variant about that mean. Possibly related to this, an examination of "heavy electron" theory will show that mass variation for electrons or "effective electrons" is generally vectorial; that is, an increased effective mass in one direction is accompanied by decreases along at least one of the other spatial dimensions. I should note that the "de Broglie equation" has more complex variants that specify the variables with subscripts of x, y and z, and there are relativistic versions as well. The general idea holds nevertheless.
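
    For reference, the textbook band-theory definition behind that directional ("vectorial") effective mass is the inverse curvature of the energy band (standard solid-state convention, not specific to any LENR model):

    \left(\frac{1}{m^*}\right)_{ij} = \frac{1}{\hbar^{2}} \frac{\partial^{2} E(\mathbf{k})}{\partial k_i \, \partial k_j}

    So a band that curves strongly along one direction of k gives a light effective mass along that direction, while a flatter direction gives a heavy one.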


    I see in David Bohm's text "Quantum Theory" (1989 Dover Reprint of the original from Prentice-Hall, 1951) what I should have known already, quoting Bohm, page 69:

    "De Broglie's derivation has the advantage, however, that it shows the relation E = h[nu] and P = h/[lambda] are relativistically invariant."