Mizuno reports increased excess heat

  • @Jed,

    Properly stored is the key. Most people use the cheapest storage media available, store them somewhere inappropriate (like the attic or crawl space), and do not have enough backups. Things are improving, but that does not help the material from 1980 to now that is slowly demagnetizing, oxidizing and depolymerizing, and lives on floppy disks whose labels fell off.


    (Sorry for OT again...)

    Perhaps it is time for me to convert that stack of IBM 360 computer cards I have stored in the attic... oh, and that cigar box of paper tape. (I really do still have such things.)

  • Most people use the cheapest storage media available, store them somewhere inappropriate (like the attic or crawl space) and do not have enough backups.

    I think most people nowadays use a removable hard disk, stored next to the computer, and off-site cloud storage. Both are extremely reliable and long-lived. Floppy disks, mag tape and the like degraded quickly, but a hard disk that is seldom used remains intact. I have some from 20 years ago that work perfectly. I know this because when I do a backup, I tell the program to go back over the data and confirm it is correct.


    I do not know how reliable solid-state disks are, but I have been using one for several years with no problem. I assume they will eventually replace mechanically spinning disks.



    Not quite so easy as it may seem. It has taken several decades of effort to begin to reliably sequence truly "ancient DNA" (the first Neandertal mitochondrial sequences, in the early 1990s, were "easy"). The error levels are high, even in dehydrated storage: mainly deaminations of cytosine, and depurinations (loss of whole bases from the strands). The main virtue of DNA is that there are typically thousands of copies in an ancient specimen, so inferentially it is redundant enough to be reliable.

    Prof. Church and others say that data stored in DNA and kept in a cool dry place should remain intact for hundreds of thousands of years, far longer than any present-day storage method, including paper or letters cut into stone.


    I am not saying that DNA is bad, but it is not easy, and it certainly relies on high redundancy for reliability.

    Nothing is easy. I do not think DNA relies on high redundancy in nature. If it did, cancer and other diseases would be more common. But for data storage, even one copy is as reliable as the best human data storage, according to what I have read. It can be copied much faster than any other medium, because copying is done in parallel, not serial. If you want redundancy, you could make a million copies, or a billion, and compare them. Toss out any with errors. Store in a cool dry place reliably for far longer than recorded history has lasted so far.
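    The compare-and-discard scheme described above amounts to majority voting across copies. Here is a minimal sketch (with made-up sequences) of how a per-position consensus over redundant DNA reads recovers the original:

```python
from collections import Counter

def consensus(copies):
    """Recover a stored sequence by per-position majority vote
    across redundant copies (all the same length). A base survives
    as long as fewer than half the copies are corrupted there."""
    return "".join(
        Counter(column).most_common(1)[0][0]
        for column in zip(*copies)
    )

# Three copies of the same (made-up) stored sequence, two with errors:
copies = [
    "ACGTACGTAC",
    "ACGTACGAAC",  # error at position 7 (0-based)
    "ACCTACGTAC",  # error at position 2 (0-based)
]
print(consensus(copies))  # ACGTACGTAC
```

    With a million or a billion copies, as suggested above, the vote becomes overwhelmingly reliable at every position.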

  • This paper explains well that the D-D route seems more easily reachable than H-H.

    Now if you consider the Aston curve, H appears to be more competitive with lithium; that is not new.

    As for engineering, it becomes more complex than D-D, because you have to "mix" two different "materials" together and then put them under IR.

    It shouldn't be as easy as, say, making an alloy with lithium inside.

    I still think that is what Rossi did; the E-Cat HT design seems to have been devised to fulfill this.


    Please, there is no need for more empty comments about Rossi.


  • Nothing is easy. I do not think DNA relies on high redundancy in nature. If it did, cancer and other diseases would be more common. But for data storage, even one copy is as reliable as the best human data storage, according to what I have read. It can be copied much faster than any other medium, because copying is done in parallel, not serial. If you want redundancy, you could make a million copies, or a billion, and compare them. Toss out any with errors. Store in a cool dry place reliably for far longer than recorded history has lasted so far.


    It is a large task to read enough to properly understand the situation. Redundancy alone is what allows us, at last, to easily read many-thousand-year-old DNA, isolated typically from bones. Without hundreds or even thousands of copies, there would be no chance of reading such ancient sequences reliably. The errors are ones that accumulate in the DNA of dead tissues.


    Redundancy is apparently used in nature, hence diploid and polyploid genomes. Your point is well taken in referring to the DNA of a single living organism, where each cell does a fair job of retaining the fidelity of the original genomic sequence through perhaps as many as 40 to 60 doublings in a lifespan (the Hayflick limit). Our cells have numerous repair mechanisms to correct the many distinct types of naturally occurring errors. But even with all these editing and repair mechanisms, a residue of multiple mutations still accumulates to a surprising frequency over a lifespan. And of course there are well-studied mechanisms that induce cells with excessive, and likely irreparable, DNA damage to "commit suicide", i.e. apoptosis. The failure of such mechanisms is a major player in the progression of cancer from benign to malignant and thence to metastatic.


    Damage to DNA from passive storage is well known and well characterized. Careful refrigeration helps: "snap freezing" encourages a glass transition rather than cleavage from crystallization. Storage at minus 78°C, and preferably at liquid nitrogen temperature, is then a way to ensure very long-term preservation of DNA. But it is likely not a practical approach to archiving non-genetic and/or non-biological information.

  • It is a large task to read enough to properly understand the situation. Redundancy alone is what allows us, at last, to easily read many-thousand-year-old DNA, isolated typically from bones. Without hundreds or even thousands of copies,

    I am talking about reading books and data, recorded in DNA recently, by people, and stored in buildings designed to keep the DNA from being corrupted. You are talking about DNA from thousands of years ago. Obviously that is more challenging, and it requires many copies.


    I mentioned in passing that in the distant future, if someone discovers fragmented computer data recorded in DNA, with multiple copies, they might be able to recover it using techniques similar to what we use today to recover ancient DNA. So, even if the computer records become fragmented, they might be recovered. This is not possible with magnetically recorded data on a smashed hard disk discovered 20,000 years from now. So, DNA is probably the most robust and recoverable storage medium there can be.


    I assume people will routinely make many copies of DNA data, because DNA costs nothing and takes up very little space. As I said, ~100 ml will hold all of the data in the world as of 2019. If you want 10 copies, use a liter of the stuff. If you only have a few terabytes of data, you might as well make thousands of copies, or millions.
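    As a sanity check on the "~100 ml" figure, here is a back-of-envelope calculation. All of the numbers in it are illustrative assumptions drawn from commonly cited estimates, not figures from this thread:

```python
# Back-of-envelope check of the "~100 ml holds the world's data" claim.
# All figures are illustrative assumptions, not numbers from this thread:
#   ~455 exabytes per gram of DNA (a widely cited theoretical maximum)
#   ~1.7 g/ml density of dry DNA
#   ~45 zettabytes global datasphere in 2019 (a common industry estimate)
EB_PER_GRAM = 455.0
GRAMS_PER_ML = 1.7
WORLD_DATA_ZB_2019 = 45.0

capacity_zb = 100 * GRAMS_PER_ML * EB_PER_GRAM / 1000  # 100 ml; EB -> ZB
print(f"100 ml of DNA ~ {capacity_zb:.0f} ZB")
print(f"that is ~{capacity_zb / WORLD_DATA_ZB_2019:.1f}x the 2019 datasphere")
```

    Under those assumptions, 100 ml comfortably exceeds the 2019 datasphere, which is consistent with the claim above.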


  • Jed Rothwell wrote:


    "I mentioned in passing that in the distant future if someone discovers fragmented computer data recorded in DNA...."


    I see a strong risk of misallocation of resources pursuing the path of DNA as an information storage medium prematurely, and perhaps at all.


    While reading DNA is relatively easy, and relatively fast today at hundreds of base pairs per second, writing to DNA is still very slow and challenging relative to, say, magnetic or optical media. DNA synthesis speeds for "writing" are surely many orders of magnitude slower than the write speeds of gigabits per second seen in optical or magnetic media. And even that overstates the case, because true "writing" speed can presently only be inferred by comparison to known replication speeds for DNA. These replication velocities have been studied, and do not exceed the ~1000 bases per second processivity known for DNA polymerases. Note that this is not ab initio writing but simply copying an existing script. It also neglects the error correction that would be necessary, and which would substantially slow the overall write speed.
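    To put "many orders of magnitude" in numbers, here is a rough per-channel comparison. The 2-bits-per-base encoding and the 1 Gbit/s conventional channel are illustrative assumptions:

```python
# Rough per-channel comparison of DNA "write" speed vs. a conventional
# storage channel. Assumed figures, for illustration only:
#   polymerase processivity ~1000 bases/s, 2 bits encoded per base
#   conventional write channel ~1 Gbit/s
dna_bits_per_s = 1000 * 2
conventional_bits_per_s = 1e9

ratio = conventional_bits_per_s / dna_bits_per_s
print(f"conventional write is ~{ratio:.0e}x faster per channel")
```

    That is roughly five to six orders of magnitude per channel, before any error correction slows DNA writing further.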

    The potential information density is impressive (order of 10^22 bits in 100 ml), but I suspect no more so than any system allowing small molecules to be the "letters" of such information storage.

  • Here is an example of how closely the blower power correlates with the air speed measured by the anemometer:



    Air temperature and other factors have no visible impact on the air speed. No doubt they do have an effect, but the fluctuations in electric power are so large they swamp these effects. You would need much more sensitive instruments to detect them.

  • Here is an example of how closely the blower power correlates with the air speed measured by the anemometer:



    Air temperature and other factors have no visible impact on the air speed. No doubt they do have an effect, but the fluctuations in electric power are so large they swamp these effects. You would need much more sensitive instruments to detect them.


    Jed, I checked the engineering rules of thumb for ducted fans and found that thermal mass transport (and hence the convective heat transport per degree C reported by Mizuno's calorimeter) is directly proportional to blower power, and that air temperature and pressure have no effect. The only thing that would have an effect is a difference in humidity, since the specific heat of water vapor differs from that of air. Thus, at 6.5 watts of blower power, the same amount of mass is transported per second past the thermometers, which, given the same specific heat, transports the same amount of heat for the same temperature difference. This is a nice feature of the Mizuno calorimeter.
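    The rule of thumb above is just the convective transport relation P = m_dot x c_p x dT. A minimal sketch, using an assumed placeholder mass flow (not a value measured in the Mizuno calorimeter):

```python
# Convective heat transport past the outlet sensors: P = m_dot * c_p * dT.
# The mass-flow number below is an assumed placeholder for illustration,
# not a value measured in the Mizuno calorimeter.
C_P_AIR = 1005.0  # J/(kg*K), specific heat of dry air near room temperature

def convected_power(mass_flow_kg_s, delta_t_c):
    """Watts carried by the air stream for a given mass flow and
    inlet-to-outlet temperature rise."""
    return mass_flow_kg_s * C_P_AIR * delta_t_c

# Example: 5 g/s of air warmed by 10 C carries roughly 50 W.
print(f"{convected_power(0.005, 10.0):.2f} W")
```

    Since the blower power fixes the mass flow, the measured power depends only on the temperature rise, as the post says.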

  • The direct radiation artifact


    [NB - it has two variants - direct, and indirect, as shown below. In both cases direct radiation from a hot reactor surface might cause false high power indications]


    Here is an artifact that might explain both anomalous high power measurements, and differences between control and reactive reactors, for Mizuno experiments.


    It can easily be checked and shown not to apply. My problem is that I cannot find such a check in the published data (but maybe I am missing something, and Jed can help me). I'm hoping that it can be eliminated by definite checks already done; if not, it is important to investigate further.


    For a reactor placed in front of the exit tube from the calorimeter, the reactor surface is visible from the exit. Therefore an RTD measuring exit gas temperature will view the reactor surface. At high reactor surface temperatures, radiation from the visible part of the reactor will heat the RTD above the temperature of the air in which it sits. Just as both radiation and forced convection contribute to the heat passage from reactor to calorimeter insulation, the same is true between the reactor and the RTD. The RTD is much further away, and therefore sees a smaller part of the reactor surface. However, the calculated output power is very sensitive to the RTD temperature. Even a 5°C increase in temperature would lead to a large false positive output.
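    To see why a small RTD offset matters: the calorimeter multiplies the outlet-minus-inlet temperature difference by m_dot x c_p, so any radiative offset on the outlet RTD is scaled by the same factor. A sketch with an assumed placeholder mass flow (not the published airflow):

```python
# Error propagation: the computed output power multiplies the outlet RTD
# temperature rise by m_dot * c_p, so a radiative offset on that RTD
# appears directly as false excess power. Mass flow is an assumption.
C_P_AIR = 1005.0   # J/(kg*K)
MASS_FLOW = 0.005  # kg/s (placeholder, not the published airflow)

for rtd_offset_c in (1.0, 2.0, 5.0):
    false_watts = MASS_FLOW * C_P_AIR * rtd_offset_c
    print(f"{rtd_offset_c:.0f} C offset -> ~{false_watts:.0f} W false excess")
```

    Under that assumed flow, a 5°C radiative offset alone would register as tens of watts of apparent output.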


    An insulated acrylic box is used for airflow calorimetry. It is 400 mm × 750 mm, height 700 mm. During a test, the inside of the plastic box is covered with 1.91 m² of reflective padded aluminum insulation (shanetsu.com, Fig. 7). This minimizes losses to radiation. These losses are low in any case, because the cooling air keeps the inside of the box at ∼36°C (16°C above ambient). Similar insulation from a US vendor (US Energy Products) has an R-value of 11, so the insulation radiates ∼3 W (16°C / 11 × 1.9 m²). The air inlet and outlets are circular, 50 mm in diameter. The inlet is located near the bottom of one side, and the outlet is on the top surface. The outlet is connected to a pipe, which makes the airflow more uniform across all parts of the cross section of the outlet, to increase the accuracy of the airflow measurement. The power to the blower is continuously monitored.

    The blower is operated at 6.5 W. The outlet air temperature is measured with two RTDs. They are installed in the center of the pipe, one in the stream of air before it reaches the blower, and one after it, to measure any heat added to the stream of air by the blower motor. The difference between the two is less than 0.1°C. However, there are indications that heat from the blower motor is affecting both of them. A calibration with no input power to the reactors shows that when the blower power is stepped from 1.5 to 5 W, the outlet RTDs are ∼0.35°C warmer than inlet (Fig. 8). This is a much larger temperature difference than the moving air in the box alone could produce.
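    The ~3 W figure in the quoted passage can be reproduced directly from its own numbers, taking the quoted R-value in whatever units the paper intends (no unit conversion attempted here):

```python
# Reproducing the quoted heat-loss arithmetic: P = (dT / R) * A.
# The R-value of 11 is used in the same (unstated) units as the quote,
# so the result simply mirrors the paper's ~3 W figure.
delta_t_c = 16.0  # box interior above ambient, C
r_value = 11.0    # as quoted
area_m2 = 1.9     # reflective insulation area

loss_w = delta_t_c / r_value * area_m2
print(f"~{loss_w:.1f} W")
```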

    The calibration would be performed using the control reactor heater. It therefore would not detect this radiation effect. Of these two RTDs, the one after the blower would be affected much less than the one before the blower. However, we do not know which RTD, before or after, was used for the measurements.


    From Figure 5 we see the geometry here. If I understand the above text, you can see the blower mounted flush with the calorimeter wall, and a 50 mm white pipe projecting horizontally from the blower. An RTD before the blower is not visible and would likely "see" the surface of the middle reactor. It would not necessarily "see" the side-mounted reactor, depending on which flange it was mounted on and its orientation, and how much the insulation, when added, obscured this; but in any case the angle subtended by the side reactor surface would be smaller. Even if some metal, such as the blower itself or a flange, obscured the reactor surface, that metal would itself warm up significantly over the test, and radiation from the metal back to the RTD would make the reading higher than the air temperature.


    We cannot know. What is likely from the geometry shown in figure 5 is that any RTD or other surface at the top of the box before the blower would read high due to radiation from the middle reactor, and not so high (perhaps not high at all) due to radiation from the side reactor.


    Jed, over to you. Continuous checks using an "after blower" RTD would show this effect for direct radiation, but not if the whole blower heated up from direct radiation and then both radiation and convection from the blower surface heated the exit air. In that case the effect would be unpredictable in how it changed "before" and "after" RTD readings.


    I realise this class of anomaly is easily controlled. My problem is that it is also sensitive to changes in the exact geometry of the calorimeter, position of the RTD, and where the reactor is situated in the calorimeter. None of these things are clearly stated in the published material.


    Replicators would do well to be aware of these possible artifacts. Because both indirect (from the blower heating up) and direct (from radiation onto the RTD or its mounting bracket) effects are possible, replicators should site the air exit further from the reactor surface and use a double baffle (e.g. made of insulation) to protect the blower and associated RTDs from any temperature rise caused by radiation from a reactor.


    I'll link this post in the replication thread.


    Best wishes, THH


  • Right, that deals with direct radiation after the blower, which as I've stated would be negligible. It does not deal with either direct or indirect heating via radiation onto the blower, or onto any RTD before the blower. The direct route would affect an RTD before the blower, the indirect route one after the blower. The RTD checking cannot be relied upon because it would be done with a differently located control reactor.


    Maybe the 2017 report clarifies this matter? My problem is that with the reactor redesign other things like small details of calorimeter geometry, or reactor location relative to exhaust port, might also change, so even if it does we cannot be sure the later design is OK.


    I'm not saying that this artifact necessarily exists - there are many ways to deal with it - just that we do not know whether it exists!

  • Reading more carefully, the outside/inside RTD comparison is done with no power to the reactors and therefore provides no information about the level of the direct radiation artifact (in either its direct or indirect version).


    I'm really not sure how this can be evaluated from available information: however with the same setup it would be easy to check by adding a double baffle made of foiled insulating material in front of the outlet, which would eliminate any such effect. Jed might know whether this was in fact done.


    Exit sensor after blower means that we have the indirect effect (from blower heating up) not the direct effect.

  • For a reactor placed in front of the exit tube from the calorimeter, the reactor surface is visible from the exit. Therefore a RTD , measuring exit gas temperature will view the reactor surface. At high reactor surface temperatures the radiation from the visible part of the reactor


    There are several reasons why this hypothesis is incorrect.


    1. The reactor surface is not visible to the RTDs. As noted by Robert Bryant above, the outlet RTD is placed after the blower. (Actually, there are 2 RTDs, both after the blower.) There is a layer of reflective insulation between the RTDs and the reactor surface.


    2. The temperature of the outlet air has been confirmed with thermometers and hand-held thermocouples. The reflective insulation also blocks the reactor from these instruments.


    3. This hypothesis cannot explain why the outside of the reactor box is measurably warmer when there is excess heat.


    4. Why would this happen with excess heat, but not with calibrations? If the excess heat is an artifact, as THHuxley postulates, it would be exactly the same 50 W coming from the same heater in both the excess heat tests and the calibration. It would be ordinary heat from a resistance heater inside the reactor. Why would it have a drastically different effect on the RTDs in the two tests?


    5. Assuming there is anomalous heat (which makes the THH hypothesis wrong), again, why would it drastically change the performance of an RTD? Heat is heat. Once it passes through 1/8" of stainless steel, I cannot imagine it would look different to RTDs.


  • See above.

  • Exit sensor after blower means that we have the indirect effect (from blower heating up)

    The blower heats up just as the calorimeter walls heat up, and both transfer heat to the air flow by convection, and the RTD measures the temperature increase.

    Are you saying that the blower transfers heat to the outlet RTD by conduction or by radiation?

    Are you saying that the blower is many degrees different in temperature from the adjacent acrylic calorimeter wall?

  • Exit sensor after blower means that we have the indirect effect (from blower heating up) not the direct effect.


    Again, you assume there is no anomalous heat. So the heat in both cases is 50 W coming from the resistance heater inside the reactor. Why would this same 50 W heat the blower in one instance, but not the other? It is the same level of heat, coming from the same heater, in the same place.


    If there is no excess heat, and both the calibration and the excess heat test are at 50 W, how can they heat the blower up to 40 to 250 W?


    If the blower heated itself up by 40 W to 250 W, this would be recorded in the data, and it would burn up. There is no way it could produce this much heat itself. It consumes at most 5 W.

  • THH wrote:


    Quote

    The control reactor is to one side, the active reactor directly underneath and close to the blower. Thus the active reactor surface radiation would heat up the blower via the exhaust hole, unless this was baffled.


    As stated in the paper and shown in the photos, they are equidistant from the walls and the blower, which is in the center. But again, how can a 50-W calibration add 40 to 250 W to a blower? That violates the conservation of energy.


    Furthermore, if you have any experience with insulation, you will know that all 50 W of heat cannot penetrate straight through the insulation and go into the fan only. It will go everywhere in the box. Even if there were no insulation, and the calibration reactor was placed directly under the fan, the heat from the calibration reactor would go in all directions. Only a tiny fraction of it would heat the fan. It could not magically transfer 50 W to the fan directly, which then heats the air by 10 deg C extra, indicating 250 W. The air is definitely heated. This is confirmed by two RTDs and handheld thermometers and thermocouples.



    Quote

    Umm... I don't assume there is anomalous heat


    Then why would the calibration be any different at all from the apparent anomalous heat? Why would it affect the RTDs and handheld instruments differently? Why would it heat the air with far more net energy than the 50 W going into the system? Whether the 250 W of heat emerges from the reactor or the blower, where would it come from?