Posts by kirkshanahan

    Quote: "Needless to say, the D in the lattice could not reach the surface in that time (the diffusional relaxation time is ~ 10^5 s) while the rate of diffusion of oxygen through the boundary layer could lead at most to a rate of generation of excess enthalpy of ~ 5mW."


    10^5 seconds = 27 hours. Sorry I was not more specific.


Thank you for clarifying. In my estimation, this number is ludicrous unless one is talking about total unloading to 0.0 D/M. It is hard to get the last bit out. It is easy and fast to drop from D/M = 0.9+ to 0.7 (seconds to minutes), and somewhat slower to drop to the mid-plateau region (D/M ~ 0.3-0.4) and below for the reasons F&P note, but it would take only minutes to a low number of hours at worst. Once the beta-to-alpha phase transition is complete, the desorption rate always increases again. The only modifying factor one needs to invoke is surface contamination. Some contaminants can poison the D+D->D2 recombination reaction, which would slow the unloading, but no information on this is presented. What F&P assert is an unreferenced number, and it disagrees with my experience with similar materials.


    Jed and other trolls won't accept this of course. I'm not going to argue with them.



    You don't believe Fleischmann?


    I believe F observed a deformation and jumped to the conclusion it was melting, and then got the melting temperature wrong.


    No, plugs do not get "deformed" for no reason in test tubes. Nothing was putting pressure on it. No one squeezed it with a pliers. The only reason it could change shape would be because it melted. There are no other widely variable conditions that could cause this.


    And you know this how? Your infinite scientific acumen?


    The rest of the post is just trolling....

    Not rapid. It takes 27 hours. See p. 12:


    There is no "27' or 'twenty' even on pg. 12 or in the entire document. More made-up stuff JR? Please quote what you are claiming supports your contention so I can actually find what you are talking about in the paper if you expect a reply.


    In that case, why didn't the plug melt in the control runs?


    How would I know, JR? My point is that it is unproven that it 'melted' at all. It is much more likely it deformed, and the conditions for doing that are widely variable. Without more information, we cannot make any use of this observation; i.e., it is an anecdote, stimulating to some, boring to others.


    The question then becomes: Why did the boiling continue after the power was cut to zero? There has to be some other source of heat. Most people would say that.


    Have you ever boiled a pot of water on a stove? When you got the water boiling and cut off the heater, did it immediately stop boiling? In my experience the answer is "No, not immediately." There is a thing called thermal inertia. It comes from the parts of the container and contents that got heated to a point greater than they would be at when in equilibrium with 100C water. It would all depend on those details, which again we don't have...
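    For a rough feel for the magnitudes involved, here is a back-of-the-envelope Python sketch. The glass mass, its average temperature excess above 100C, and the rest are assumed illustrative numbers, not F&P's actual cell parameters:

    # Rough estimate of how much water the heat stored in a hot cell wall can
    # keep boiling after input power is cut. All numbers are assumed for illustration.
    m_glass = 0.3        # kg of glass in contact with the liquid (assumed)
    c_glass = 840.0      # J/(kg*K), typical specific heat of borosilicate glass
    dT = 20.0            # K, assumed average excess of the glass above 100 C
    L_vap = 2.26e6       # J/kg, latent heat of vaporization of water

    stored_heat = m_glass * c_glass * dT     # J released as the glass cools back to 100 C
    water_boiled = stored_heat / L_vap       # kg of water that heat can still vaporize
    print(f"{stored_heat:.0f} J stored -> {water_boiled*1000:.1f} g of water boiled off")
    # ~5 kJ of stored heat boils off roughly 2 g of water, i.e. visible boiling
    # continues for a short while with zero input power.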


    You, however claim that just because something remains hot that does not mean there is a source of heat, and you claim that you can heat a 10 kg sample of metal, leave it in air, and it will still be hot 3 days later. So obviously you have no trouble believing that a plastic plug will melt even though the power turns off, the cell cools rapidly, and the plug remains submerged.


    Back to the stupidity trolling again JR? No comment.

    So you have personal experience of measuring hydrogen loading in the lower portion of a half-immersed electrode?


    No, I have experience loading Pd foils of the same approximate size in gas loading experiments, which is equivalent to the situation of the UPPER half of the electrode. Since the upper half remains attached to the lower half, the H in the lower half _will_ move into the upper half as the H in the upper half exits it to attain equilibrium with the gas space, which constantly changes due to outflow through the vent and D2+O2 reactions on the exposed upper half of the electrode. Note that this actually starts happening when the very first bit of the electrode is exposed and continues until it is all exposed.


    As an aside I will also remind you of the care CFers take to make sure the Pt counterelectrode, which normally is wound around supports and encloses the Pd cathode, is wrapped uniformly. They do that to prevent electric field variances at the Pd electrode surface that allow 'leaks' to develop. IOW, the leaky areas would be seeing a reduced field strength which translates to lower loading in that area. The 'leak' is exactly the same as I am describing above.


    Also note that the H formed at the covered part of the electrode will be absorbed now due to the depletion from the upper half. When the electrode was covered and fully loaded, most of the H just formed D2 and bubbled away.


    I consider it unlikely that the designated magic loading number of >0.85 or so will be maintained under these conditions, but the problem is that F&P are claiming a HAD after the electrode is fully exposed. It definitely will not remain at >0.85 D/M for more than a very few minutes, while the HAD is supposed to have gone on for hours.


    But also note that as I noted in my discussion with Ascoli65 the question is indeterminate with regards to whether the electrode is fully exposed (0V) or not.


    So once again, the point is that the experiment was not very useful due to missing information. Par for the course...

    Is this statement based on some real mass diffusivity calculations you've done, or is it all just a bit hand-wavey?


    It is based on personal experience with gas loading of a variety of hydrides, including Pd, supported thick-film Pd, and Pd alloys. They unload very rapidly when well activated.


    Edit: I should add that when Ed Storms measured loading by weighing loaded electrodes, he did so by immersing them in liquid nitrogen. The H atom recombination reaction goes to near zero rate at <120K or so. If they didn't unload fast, Ed wouldn't have needed to do that.

    Ascoli65,


    You also might enjoy http://lenr-canr.org/acrobat/GoodsteinDwhateverha.pdf It describes the skewed reviewing that happens in the field. Goodstein wasn't aware of my 2002 publication in 2000 (even though the original manuscript came out then, see http://lenr-canr.org/acrobat/ShanahanKapossiblec.pdf). I would guess he would be really interested to see the blatant use of a strawman argument in http://lenr-canr.org/acrobat/MarwanJanewlookat.pdf used to discredit my explanation of the FPHE. To see what I really said you have to look up J. Env. Monitor., 12, (2010), 1756-1764. The 'MarwanJane...' paper immediately followed mine.

    Ascoli65


    You know there is another consideration not routinely brought up in considering these experiments, namely unloading. For the case of Pd and assuming the bulk loading level matters (which I don't but most CFers do), as the electrolyte level drops below the top of the metal Pd cathode the uncovered portion no longer has the electrolytic force present to keep the H or D in the metal. You have converted to a gas loading situation, with the external H2 pressure being that which is supplied by the electrolysis and limited by the atmospheric pressure plus any flow restrictions. That means that the H concentration in the Pd will drop precipitously in the uncovered region, and the H from the covered region will migrate to the 'lower pressure' unloaded areas. Thus the total H conc in the metal will start dropping. It should rapidly get below the magic H/M of 0.9 or so, down to the 0.7 range, which McKubre says won't do cold fusion.


    (Further, the H in the uncovered Pd is readily able to react with the O2 from the electrolysis, unloading it even faster.)


    By the time the electrolyte level gets to the bottom of the electrode, the H or D concentration should be minimal, and CF should have stopped. So, what would be driving a HAD? Nothing except some data interpretation error IMO.

    Ascoli65


    Your comments made me take a second look and I found something interesting. As I noted in my whitepaper, the ICCF3 paper that JR referred to was later published in a slightly modified form in Phys. Lett. A, 176 (1993) 118. They presented Figures 6B and D from ICCF3 as Figs. 8a and b in PLA93. I compared the B and D figures in my whitepaper, but comparing the D and b figures (i.e. supposedly the same data) I find a discrepancy. The cell voltage at the end of the run in Fig. 6D is at 0 V exactly, while in Fig. 8b it shows as a few volts positive! (See attachment. Note that the blue shaded boxes are 'select' boxes and were drawn with the top of the box at exactly 0 V.)


    Applying Gene Mallove's criteria, we can call F&P frauds and con men based on this!!


    (Note: Gene Mallove disputed the legitimacy of the MIT authors clipping their CF study results to omit baseline shifts up and down at the start and end of their Figure. They also took the center of the noisy trace as 0. Gene thought this was fraudulent. In fact it is SOP, since baseline shifts like that are a common problem. However, in F&P's data that we discuss here, it makes a difference, because 0 V means no conductivity and no ohmic heating. A positive V, on the other hand, implies an active heat source remains in the cell. The final point is the same one made by myself and Ascoli65: F&P did a poor job when writing this paper.)

    You have a genius for missing the point. Suppose the temperature is 200 - 215 deg C, as you say. The point is, the plug melts in tests with Pd-D when there is excess heat, but it does not melt in control tests with Pt-H or Pd-H, when there is no excess heat. The exact temperature does not matter.


    And you are an idiot savant at missing the point, twice now. The melting point is not the issue, the deformation temperature is. That value is 126C based on literature references. So, the 'damage' F&P saw was likely caused at ~125C, not at over 300C. 125C at the plug in that apparatus after electrolysis stopped is reasonable, not exceptional. No excess heat required.


    Your other comments also miss the point or they are mistaken.


    By the way, information not found in this paper can be found in others, so perhaps you should misconstrue them as well


    No, they don't.


    I asked for references and you pull the old 'go find it yourself', while asserting vehemently (in all your posts) that what you say is true. My experience with you is that you make things up when you need them, so like those who choose to ignore Rossisays, I choose to ignore Jeddisays. I will consider actual references that support your assertion. But so far you are zero out of one.

    This material is melts at 300°C.


    Then it would not be Kel-F as F&P claim in their paper. Kel-F's melting point is more like 200-215C, as I showed in the references in my prior post. F&P specifically say this: "furthermore the Kel-F supports of the electrodes at the base of the cells melt so that the local temperature must exceed 300ºC. "

    That says nothing about what the material's melting point actually is, just that it is less than 300C. But given that Kel-F melts at 215C or so, how can they jump to 'exceed[s] 300C'? The best they could really say is 'exceeds 215C'.


    Furthermore, you obviously missed the point of listing the deformation temperatures. You will note they are listed at two pressures, and both are 126C, well below the melting point, both being determined via an ASTM method. Please also note that they are determined to be the same temperature at two different applied pressures. That suggests the value may remain the same at even lower applied pressures. In other words, the deformation that F&P noted could easily and with high probability occur at ~125C. That temperature is certainly reasonable to obtain in their experiments, especially in the static environment of 'after boil-off'.


    They obviously weren't plastics chemistry experts and seemingly mis-identified the phenomenon that resulted in the deformation. There is only one way to resolve this, and that is to get more data, which is unlikely to happen. So for all intents and purposes we are now left with...wait for it... the Fleischmann and Pons Melted Plug Anecdote! (you may recall that anecdotes aren't science...)




    This paper does not say what you say it does. Surprise, surprise...


    Checking the paper for occurrences of 'pt', I found 16, but only 4 were for 'Pt'. The rest were for 'pt' as part of another word. Also, only one occurrence of 'platinum'. All were with respect to a supposed 'control', or non-excess-heat run. It is acknowledged that F&P *thought* Pt cathodes were inactive, but Storms proved they aren't. However, that whole point is barely relevant because everyone also acknowledges that getting the Fleischmann-Pons-Hawkins Effect (FPHE) is difficult to do. It is likely more difficult on Pt but not impossible. So having a null run is expected.


    The key point with regards to the referenced paper is that *one* 'excess heat' experiment is discussed in detail ("We examine next the results for one Pd cathode"), and *no* other results are presented other than a null Pt 'calibration' run. I noted this problem in another instance in my whitepaper (http://coldfusioncommunity.net…4/SRNL-STI-2012-00678.pdf) w.r.t. the Pd data presented, where I overlaid two thermal histories from this paper (and its Phys. Lett. A version). They were identical, yet F&P only talked about one of them showing excess heat, and specifically after the electrical connection was disrupted, in what everyone calls a 'Heat-After-Death' instance. But why only claim XSH in one case when they show data for two identical thermal histories? Makes no sense... How hard would it have been to put up a table saying 'We ran x experiments and observed XSH in the following'? This paper is notable for what it *doesn't* report.


    {Note to those who read JR's referenced paper: F&P discuss one run from a set of 4 using similar Pd electrodes with slightly different current-time profiles. Figures 6A-D show the thermal histories. It is obvious from the text that the in-figure caption of 6D is wrong in that it refers to 'electrode 2'. That should be 'electrode 4' instead. The 'x' in 'Demo9_x' is the electrode number.}

    During a test with Pt and/or ordinary water, the boiling stops immediately, and the cell begins to cool. After the test, there is a little unboiled liquid left at the bottom of the cell, and the Kel-F plug is intact. Whereas with anomalous excess heat:


    1. Boiling continues until all the water is gone.

    2. The cell does not cool. It often gets hotter after all the water is gone.


    Reference, please.

    3. The Kel-F plug melts.

    Some tidbits on Kel-F specs:


    Wikipedia https://en.wikipedia.org/wiki/Polychlorotrifluoroethylene

    "This results in having a relatively lower melting point among fluoropolymers, around 210–215 °C."


    https://www.aetnaplastics.com/…aetnaproduct/18/PCTFE.pdf

    ASTM D648 Heat Deflection Temp at 264 psi: 258 °F / 126 °C

    ASTM D3418 Melting Temp: 415 °F / 212 °C


    http://www.complast.com/kel-f/neoflon.htm

    Melting Point: 210-212 °C

    Deflection Temperature at 66 psi (ASTM D-648): 126 °C


    https://en.wikipedia.org/wiki/Heat_deflection_temperature
    The heat deflection temperature or heat distortion temperature (HDT, HDTUL, or DTUL) is the temperature at which a polymer or plastic sample deforms under a specified load.

    Regarding Kaolin, this is not important in this experiment - the paper is just good for activation, but it works in the same way with a porous plastic such as foams. Please read my previous posts

    Pot, kettle.



    Functional Fillers for Plastics. DeArmitt, Chris. Applied Plastics Engineering Handbook (2011), 455-468. doi:10.1016/B978-1-4377-3514-7.10026-1.


    Fillers are an extremely diverse group of materials. They can be minerals, metals, ceramics, bio-based, gases, liquids, or even other polymers. There are four fundamentals that influence the properties of fillers: filler concentration; particle size and size distribution of the filler; distribution and dispersion; and shape and aspect ratio. Nearly all common fillers are stiffer than, that is, have a higher modulus than, typical polymers. Therefore, adding filler tends to increase the tensile and flexural modulus of the polymer. In addition, the vast majority of polymers are excellent thermal and electrical insulators. Outstanding electrical insulation leads to extensive use in wire and cable insulation as well as numerous other applications. Although a few polymers are intrinsic conductors of electricity, for most polymers conductivity must be induced through the use of conductive fillers. Similarly, plastics are superior thermal insulators, and even more so when foamed. There are applications where plastic with exceptionally high thermal conductivity is called for. One notable example is heatsinks for laptop computers. Plastics allow complex, efficient shapes that fit within the strict confines of a laptop when appropriate fillers are added.



    INTRODUCTION TO FOAMS AND FOAM FORMATION

    Michael O. Okoroafor, Kurt C. Frisch, in Handbook of Plastic Foams, 1995


    INTRODUCTION


    Cellular plastics or plastic foams, also referred to as expanded or sponge plastics, generally consist of a minimum of two phases, a solid–polymer matrix and a gaseous phase derived from a blowing agent. The solid–polymer phase may be either inorganic, organic or organometallic. There may be more than one solid phase present, which can be composed of polymer alloys or polymer blends based on two or more polymers, or which can be in the form of interpenetrating polymer networks (IPNs), which consist of at least two crosslinked polymer networks, or a pseudo– or semi–IPN formed from a combination of at least one or more linear polymers with crosslinked polymers not linked by means of covalent bonds.


    Other solid phases may be present in the foam in the form of fillers, either fibrous or other–shaped fillers which may be of inorganic origin, e.g. glass, ceramic or metallic, or they may be polymeric in nature.



    FOAMING

    Dominick V. Rosato, ... Matthew V. Rosato,

    in Plastic Product Material and Process Selection Handbook, 2004


    Each plastic can include fillers and/or reinforcements to provide certain improved desirable properties.


    In addition to the basic plastics in liquid and bead forms with foaming agents, fillers, additives that include cell controllers and fire-retardants, catalysts, surfactants, styrene monomer, systems that vary viscosity from liquid to paste form, and other additives are used.


    THERMOSETTING FOAMS

    Kaneyoshi Ashida, ... Kadzuo Iwasaki, in Handbook of Plastic Foams, 1995


    Epoxy Resins as Matrix Resin.

    Burton and Handlovits used conventional epoxy resins as the matrix resin, and fiber glass, wollastonite and inorganic fillers as the reinforcement.




    https://en.wikipedia.org/wiki/Filler_(materials)


    Types of fillers


    In the past, fillers were used predominantly to cheapen end products, in which case they were called extenders. Among the 21 most important fillers, calcium carbonate holds the largest market volume and is mainly used in the plastics sector.[2] While the plastic industry mostly consumes ground calcium carbonate (GCC), the paper industry primarily uses precipitated calcium carbonate (PCC) that is derived from natural minerals. Wood flour and sawdust are used as filler in thermosetting plastic.


    In some cases, fillers also enhance properties of the products, e.g. in composites. In such cases, a beneficial chemical interaction develops between the host material and the filler. As a result, a number of optimized types of fillers, nano-fillers or surface treated goods have been developed.

    Please let me know what chemical reaction could induce the counts?

    What else on Earth can do this?

    What else could it be?

    If there is something wrong with the radiation measurement, how can it be fixed with zero physical motion, just by time?


    “Kaolinite is one of the most common minerals; it is mined, as kaolin, in Malaysia, Pakistan, Vietnam, Brazil, Bulgaria, France, the United Kingdom, Iran, Germany, India, Australia, Korea, the People's Republic of China, the Czech Republic, Spain, South Africa, and the United States.


    “The main use of the mineral kaolinite (about 50% of the time) is the production of paper; its use ensures the gloss on some grades of coated paper.”    https://en.wikipedia.org/wiki/Kaolinite


    “Various materials, including Kaolinite, calcium carbonate, Bentonite, and talc can be used to coat paper…”  https://en.wikipedia.org/wiki/Coated_paper

     

    MethodsX  Volume 5, 2018, Pages 362-374

    Radioactivity and radiological hazards from a kaolin mining field in Ifonyintedo, Nigeria

    T.A. Adagunodo, A.I. George, I.A. Ojoawo, K. Ojesanmi, R. Ravisankar

    Abstract

    The concentrations of the radionuclides in the subsurface formation (soils and rocks) solely depend on their geological origin, which enables its variation from point to point on the Crust. Construction materials can possess elevated concentrations of radioactivity if their byproducts are mined from contaminated radionuclide sources. In this article, results of in situ measurements of radioactivity concentrations of 40K, 232Th, and 238U as well as gamma doses and radiological hazards from kaolin mining field were presented and evaluated. Eleven stations were randomly occupied in order to cover the upper axis of a kaolin mining field in Ifonyintedo. The radiometric survey was achieved using Super-Spec (RS-125), equipment capable of measuring activity concentrations and gamma doses. For each location, measurements were taken four times, while its mean and standard deviation values were estimated for better accuracy. The overall mean activity concentrations (for 40K, 232Th and 238U) and gamma dose were estimated as 93.9 Bq kg−1, 65.1 Bq kg−1, 38.2 Bq kg−1, and 59.6 nGyh−1 respectively. The estimated radiological hazards from the measured parameters showed that the overall mean concentrations of Radium Equivalent, External and Internal Hazards, Annual Effective Dose, Gamma and Alpha Indices, and Representative Level index are 138.5 Bq kg−1, 0.37, 0.48, 0.29 mSvyr−1, 0.48, 0.19, and 0.97 respectively. By comparing the mean values of the activity concentrations and their radiological risks with the several world standards from the literature, kaolin deposits in Ifonyintedo are highly rich in thorium.


    […]

    Method details

    Kaolin is one of the types of clay found in nature, with the chemical composition of Al2Si2O5(OH)4 [1]. The name “kaolin” is derived from a Chinese word Gaoling, which literally means “High Ridge”. The industrial usefulness of kaolinite clays can be found in the paper industry [2], paint industry (as filler for paint), rubber and plastic industry [3], and construction industry [4]. They are used in the production of ceramics, cement, porcelain and bricks [5], toothpaste, food additive, and cosmetics [6]. Kaolinite clay has also found application in the agricultural domain (production of spray that repels insects and averts sun burn) and medicine [6]. A recent study from Turkey showed that Kaolin clays are cost-effective when used as pozzolanic additives in cement and concrete [7].


    [2]  J. Velho, C. Gomes, Characterization of Portuguese Kaolins for the paper industry: beneficiation through new delamination techniques, Appl. Clay Sci., 6 (2) (1991), pp. 155-170





    Map of U ground conc in US & Canada

    https://www.thoughtco.com/map-…ctivity-in-the-us-3961098

    Map of Th ground conc in US & Canada (note similarities and differences to U map)

    https://en.wikipedia.org/wiki/…ia/File:NAMrad_Th_let.gif


    https://www.epa.gov/radiation/radionuclide-basics-radium

    “ Radium is a radionuclide formed by the decay of uranium and thorium in the environment.”

    “In the natural environment, radium occurs at trace levels in virtually all rock, soil, water, plants and animals.”

    “As radium decays it creates a radioactive gas, radon. Radon is common in many soils…”



    Soil concentrations of “emanating radium‐226” and the emanation of radon‐222 from soils and plants

    JOHN E. PEARSON & GARY E. JONES

    Tellus Volume 18, Issue 23 First published: August 1966

    https://onlinelibrary.wiley.co….2153-3490.1966.tb00282.x

    “The emanation rate of radon‐222 from soils into the atmosphere varied 1000‐fold among geographical regions but was shown to be uniform in Champaign County, averaging (140 ± 73) × 10−18 curies per square centimeter per second.”

    Please note that it is not just the paper. It is everything where the vapor went. The paper is just the easiest thing to move and measure, and I didn't want to interrupt the experiment.

    But even without the paper radiation is elevated in very close proximity to the cell (near the opening).


    OK. So you have radioactivity in places besides the paper. What makes you think the direction of travel is from your experiment up to the paper and not the reverse? (rhetorical, I don't expect an answer)

    Sorry if this is a silly question but... Did you check a new sheet of paper fresh from the pack?


    Not silly at all. However, the new sheet should be wetted and left to sit for approximately the same time as the sheet over the experimental apparatus.

    Yes, and if it is run long enough so that we can rule out chemistry - say >50 MJ/kg for the entire system mass, calculated from the integrated amounts.


    But don't forget...


    50 MJ/kg = 50 kJ/g


    So if you have a 2 gram sample, that means you produced 100 kJ.


    Now 1 J/s = 1W, so if you were seeing a 1W excess heat, that would require 100,000 seconds = 27.8 hrs. So, if you ran longer, you'd get an even bigger 'energy density'.


    But 1W is of the same size as the calorimeter errors coming from CCS/ATER. You would need to assure yourself your 1W 'signal' was actually a signal.


    Bottom line: Integrating an error gives a bigger error, not a discovery.
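    To make the arithmetic above concrete, here is a minimal Python sketch. The 2 gram sample, the 1 W excess level, and the 1 W error band are illustrative assumptions, not anyone's measured values:

    # How long a given excess power must run to reach a claimed specific energy,
    # and what an unrecognized calorimeter error integrates to over the same time.
    sample_mass_g   = 2.0     # g of active material (assumed)
    target_kJ_per_g = 50.0    # the 50 MJ/kg = 50 kJ/g threshold quoted above
    excess_power_W  = 1.0     # claimed steady excess power (assumed)
    error_W         = 1.0     # CCS/ATER-scale calorimeter error (assumed)

    energy_needed_kJ = sample_mass_g * target_kJ_per_g          # 100 kJ
    run_time_s = energy_needed_kJ * 1000.0 / excess_power_W     # 100,000 s
    print(f"run time = {run_time_s:.0f} s = {run_time_s/3600:.1f} h")   # ~27.8 h

    integrated_error_kJ = error_W * run_time_s / 1000.0          # also 100 kJ
    print(f"integrated error over the same run = {integrated_error_kJ:.0f} kJ")
    # The 'signal' and the integrated error are the same size, and running longer
    # inflates both, which is why integration alone proves nothing.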

    Well, it’s been a month and no response. Maybe McKubre forgot. Here’s a reminder. (I answered this once, but I did this independently so as to hopefully add more detail.)


    Note I use the old Internet '>' symbol to indicate what Dr. McKubre wrote in the following. My responses follow as usual.


    >We obviated the precise issue that Kirk speaks about as follows:


    Hmmm…not likely….


    >1. The electrochemical cell was enclosed (at pressure) in a metal heat integrator (“isothermal wrap” in THH's words).


    And then placed inside the calorimeter? Or is the metal wrap the cell boundary?


    >2. Nothing left the cell except wires and a gas pipe for initial H2 or D2 gas charging.


    The primary unmeasured heat loss pathways.


    >3. A complimentary Joule heater was intimately wound into the metal heat integrator axially symmetric to the electrochemical cell.


    OK. Did you ever compare the results using this heater for calibration vs. electrolytic calibration? Ed Storms found they differed slightly, and his Joule heater was immersed in the electrolyte.


    >4. The calorimetry fluid submerged and completely enveloped the integrator bathing externally all surfaces and picking up heat from wherever sourced


    So, question from 1.) answered. Cell with wrap inside calorimeter.


    >(BTW there are 7 conspicuous heat sources in FPHE calorimeters, not just 2):


    Of course, but considering two is a minimal picture to understand how the CCS happens. With 7 you have lots more possibilities I suppose.


    >a. The anode (I * V anode)

    >b. The electrolyte (I2 * R electrolyte)

    >c. The cathode (I * V cathode)


    I.e., the standard non-electrolysis power. Once Vth is exceeded one gets electrolysis, which gives H2 + O2 with their energy content given by I * Vth
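    As a minimal Python sketch of that bookkeeping (the current and cell voltage are made-up example values; 1.54 V is the commonly quoted thermoneutral voltage for heavy-water electrolysis):

    # Split of the electrical input power in an open electrolysis cell.
    I      = 0.4     # A, cell current (assumed)
    V_cell = 4.0     # V, measured cell voltage (assumed)
    V_tn   = 1.54    # V, approximate thermoneutral voltage for D2O electrolysis

    P_input   = I * V_cell            # total electrical power into the cell
    P_gas     = I * V_tn              # chemical energy leaving as D2 + O2 if vented
    P_thermal = I * (V_cell - V_tn)   # power that appears as heat in the cell
    print(P_input, P_gas, P_thermal)  # 1.6 W, ~0.62 W, ~0.98 W
    # If some D2 + O2 recombines at the electrode instead of venting, part of
    # P_gas reappears as heat inside the cell - the ATER scenario.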


    >d. Any excess power


    If any….


    >e. The recombiner (I * [V cell-V thermoneutral])


    As above (a,b,c)…


    >f. The complimentary Joule heater that kept the sum of input power constant (I2 * R heater)


    Yup…


    >g. The wires (I2 * R wire). Note that since V was measured at the calorimeter boundary only the wires inside the calorimeter contribute to this term, and it is fully measured


    Yup, and not considered an issue.


    >5. The thermal efficiency of our early design was ~98%, later improved to 99.3%.


    Good for you.


    >6. Only the missing 0.7 to 2% (that is lost primarily by thermal conduction to the ambient down wires and the pipe) needs to be “calibrated”.


    As I noted above…



    >7. Calibration of the first law parameters (I, V, ∂m, ∂t) were performed independently of the calorimeter.


    Hmmm…this will be an ideal situation in most cases, meaning that you will include the terms you think of and won’t include terms you don’t think of. Especially since you are deriving these things 'independent' of the calorimeter. They would necessarily be based in some theory, not actual measurements. But I believe this to be an irrelevant point to the CCS issue. Your model is still a lumped parameter one, meaning you have no capability in it to detect when a CCS has occurred and thus you are incapable of compensating for it.



    >8. At constant input power the presence of excess heat can be inferred qualitatively by a rise in temperature of the outgoing fluid (normally water). Our largest excess power levels were ~300% in input power. Our largest statistical significance (Excess power / measurement uncertainty) is 90 sigma.


    Ah yes, the old 90 sigma ploy… My work with Ed’s data showed that a 1-3% change in calibration constants produces up to 780 mW of apparent excess heat signal. Ed liked to say his calorimeter had a 70-80 mW error band, but that is just the instrument noise. It doesn’t include the impact of CCS/ATER. CCS/ATER upped his error band by 10x. What this shows is that one needs to compute error bars correctly. I have been bashing the Beiting report on the same basis in another thread here on L-F. How did you compute yours?
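    For readers who haven't followed the CCS argument, here is a minimal lumped-parameter Python sketch. The input power and the calibration constants are illustrative assumptions, not Ed's or McKubre's actual numbers:

    # A calorimeter calibrated with heat deposited in one place uses P = k_cal * dT.
    # If ATER shifts some heat deposition to a spot with different capture
    # efficiency, the true constant changes, but the stale k_cal is still used.
    P_in   = 20.0     # W, steady input power (assumed)
    k_cal  = 0.510    # W/K, constant obtained during calibration with no ATER (assumed)
    k_true = 0.500    # W/K, true constant after the heat distribution shifts ~2% (assumed)

    dT = P_in / k_true                  # temperature rise the shifted source actually produces
    P_apparent = k_cal * dT             # power computed with the stale calibration constant
    excess_mW = (P_apparent - P_in) * 1000.0
    print(f"apparent excess = {excess_mW:.0f} mW")   # ~400 mW with no new heat source
    # A few-percent shift in the constant shows up as hundreds of mW of apparent
    # 'excess heat' (the sign depends on which way the efficiency moves).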



    >9. We tested our assertion that heat was measured equally independent of its source position two ways:

    >a. Finite element calculation (this is a complex matter not handled by two term algebra) which modeled the entire calorimeter up to its isothermal boundary: submerged in a water bath held at constant temperature ±0.003°C; in a room held constant to ±1°C


    I have previously said finite element simulation is the ultimate form of my simplified 2-zone model. Instead of 2 zones, you’d have thousands. So, good for you that you used FE. But as a modeler, I know the secret. The model can only model what you put into it. So, what %recombination at the electrode values did you use in your FE sim runs? (Note I suggest here several values are needed, ranging from 0 to 100%)


    >b. Experimentally testing the influence of current to the cell and the complimentary Joule heater

    >over a wide range in blank cells (H2O, Pt or poorly loaded Pd cathodes, early before initiation of the FPHE)


    This would not simulate a change in heat distribution from the base case of 0% ATER. What you are simulating is 0% ATER + heater changes + electric current/voltage changes. It might answer my prior question about how close the Joule heater and electrolytic calibration came out.



    >10. The calorimeters were proven to be heat-source position-independent already by 1991 when I stopped worrying about this effect for our calorimeters. The fact that long long long hours of calorimetry were performed (>100,000), covering wide variations of cell and heater power, with calorimetric registration of zero excess heat sadly but conveniently reinforces our conviction that the Shanahan hypothesis that heat excess can be incorrectly measured (always positively?) by the displacement of heat sources – plays no significant role in our calorimeters.


    None of that suggests you tried altering the heat distribution inside the cell, which is what ATER does, and thereby induces a CCS. You have simply verified that you did a lot of work while not understanding this problem.


    (BTW- What's the deal with the "always positive" comment? Do you calibrate with active electrodes? No? So why are you confused by the fact that the FPHE produces a one-sided signal when you always calibrate at 0%ATER?)


    >11. This last conclusion, equally rigorously supported by their designers and authors, applies to the two other modes of calorimetry with which I am closely familiar: F&P’s partially mirrored dewar design; the heat flow calorimetry of Violante and Energetics (using heat integrating plates).


    Since the last conclusion was irrelevant, so are these other examples.



    >There are more insidious potential error sources possible particularly in electrochemical calorimetry.


    I don’t disagree, but I’m not talking about them. I am specifically talking about recombination heat appearing at the electrode.


    >Ed discovered one in simple isoperibolic calorimetry for which the thermal barrier was the (pyrex) cell wall (changing wall hydraulics). Others exist and we should always be alert and open to suggestion. On the other side I suggest that the suggestors pay close attention to the literature, make quantitative calculation modeling the physical processes that drive the putative mechanism, and do not make global claims of “it is all wrong because…”.


    Which is exactly what I did, and I have not found anyone who correctly models the cell-calorimeter setup to allow for the impact of ATER to be seen.


    >It is not that I claim that Kirk’s suggested semi-mechanism has never applied to LENR calorimetry. The effect he describes did play a role in the NRL / Coolescence Seebeck calorimeters when the recombiner is more or less well coupled to the predominant heat-flow path.


    In your cell and Ed’s and others I have looked at, the root cause of the CCS is not the predominant heat flow path, but the heat flow path of the unmeasured heat lost, which is what gives you less than 100.000000000…% heat capture efficiency.


    >But this was recognized by them. It is not that his “discovery” is never significant, or never could be.


    Ahhh..blade to my heart there Mike…


    >It is that the mechanism is well known, was historically anticipated, and is irrelevant to most of the calorimeters with which I am familiar.


    Ummm...since you haven’t demonstrated any comprehension of what I assert the problem is, this statement is inaccurate. You simply can’t evaluate that until you a) understand what I am saying, and b) check it.


    >Even if he could show one case quantitatively, it would not affect the whole of our understanding.


    I did show one case quantitatively (actually two, but from the same calorimeter; the first just had some feedback noise in it, so Ed reran it and I used the second run for my paper. The first run did the same thing after you subtracted the feedback problem out, but that is more complex to explain, so I just left it out. Actually, maybe it was 20 cases (10 voltage sweeps in each run). Tricky to count that right?). And it clearly should affect the whole of your understanding, but clearly hasn’t.



    >Here endeth the lesson. I will answer only relevant technical questions for clarification (and then probably slowly).


    But you haven’t 'moved' at all, and it’s been almost a month since the ICCF ended, and you’ve posted twice elsewhere on L-F since then. Just planning on continuing to ignore me?


    Oh, P.S. – Why is it you guys keep trying to redefine my acronyms? When I wrote the original paper I scoured the literature to see what acros had been used so that I wouldn’t create confusion over acronyms. You yourself wrote of the Fleischmann-Pons Heat Effect but you used ‘FPE’ as the acronym. I therefore chose ‘FPHE’ to represent the Fleischmann-Pons-Hawkins Effect to underscore that I was talking about a non-nuclear effect. Now you’re trying to steal my acronym and redefine CCS. Not polite at all…


    Oh, P.P.S. Still waiting for your special qualifications…

    stefan


    Yes, I am definitely describing stepwise regression, primarily because some posters here didn't seem to understand how to do it and kept asking for help. Presumably their questions are now answered. If not, oh well, I tried...


    I am aware of 'R' (the language) but don't know it. I wondered if I should mention that it shouldn't be confused with the R of the R^2 values...
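    Since the how-to question keeps coming up, here is a minimal forward-stepwise sketch in Python on synthetic data. The candidate terms (powers of T) and the adjusted R-squared stopping rule are just one reasonable choice, not the only way to do it:

    # Forward stepwise regression: add candidate terms one at a time, keeping a
    # term only if it improves adjusted R-squared. Synthetic data, not Beiting's.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    T = np.linspace(1.0, 10.0, 40)                        # e.g. temperature rise
    P = 0.8*T + 0.05*T**2 + rng.normal(0, 0.1, T.size)    # synthetic "output power"

    candidates = {"T": T, "T^2": T**2, "T^3": T**3, "T^4": T**4}
    chosen, best_adj = [], -np.inf
    while True:
        improved = False
        for name in [n for n in candidates if n not in chosen]:
            X = sm.add_constant(np.column_stack([candidates[c] for c in chosen + [name]]))
            adj = sm.OLS(P, X).fit().rsquared_adj
            if adj > best_adj:
                best_adj, best_name, improved = adj, name, True
        if not improved:
            break
        chosen.append(best_name)

    print("terms kept:", chosen, " adjusted R^2 =", round(best_adj, 4))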


    Getting back to the main points I was making (and out of the statistical minutiae discussions), the first and most important point is that Beiting did not compute his error in output power appropriately. He started out well in that he used the propagation-of-error (POE) equation, but then he only computed it for the T variable, which is often the least significant contributor to the error. The calibration constants he computed from his calibration experiments are experimentally determined numbers that come with their own error, which must then be propagated through the computations to the output power. So he needs to fix that omission.


    Next, the error he lists on his calibration curves seems to be the 'standard error of y', and not the errors of the coefficients, which are the ones used in the POE. While Beiting says he used more points, my hand-digitized version gives roughly the same values as he reports, which leads me to believe he is not being clear about what he means. He needs to clarify this.


    Finally, he does not report on why he chose the cubic equation. I found the quartic and quadratic to be almost as good (or maybe better), and I showed that this (the model choice) induced a difference in computed output power that encompassed the reported excess heats, i.e. his 'signal' could be due to a math problem, not a real heat source. More info is needed from him to validate the choice of calibration equation form.
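    A minimal Python sketch of both points, on synthetic calibration data rather than Beiting's actual numbers:

    # (1) Calibration coefficients carry their own uncertainty (the covariance
    #     matrix from the fit), which must be propagated into the output power.
    # (2) The choice of polynomial order shifts the predicted power.
    import numpy as np

    rng = np.random.default_rng(1)
    T_cal = np.linspace(1.0, 10.0, 25)
    P_cal = 0.8*T_cal + 0.05*T_cal**2 + rng.normal(0, 0.05, T_cal.size)  # calibration runs

    T_meas = 7.3                     # one measured point from an "active" run (assumed)
    for order in (2, 3, 4):
        coeffs, cov = np.polyfit(T_cal, P_cal, order, cov=True)
        powers = T_meas ** np.arange(order, -1, -1)     # [T^n, ..., T, 1]
        P_pred = powers @ coeffs
        # first-order propagation of coefficient error: var = J C J^T with J = dP/dcoeff
        sigma_P = np.sqrt(powers @ cov @ powers)
        print(f"order {order}: P = {P_pred:.4f} +/- {sigma_P:.4f}")
    # Both the spread between orders and the propagated sigma belong in the real
    # error bar; quoting only the temperature-term error understates it.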


    All that was the first part of a critical review of his report, which was announced here as 'one of the best ever'. While Beiting does at least use the POE process, he didn't do it right.


    There is a Part II and a Part III also, but I'm tired of trolls. The Part I analysis says the report is typical of the field, i.e. inconclusive.

    Seems some need more spoon-feeding...OK...last shot...if this isn't good enough then you need to take a statistics course or two (or three...).


    From the Minitab reference I pointed to previously:


    Adjusted R-squared and Predicted R-squared: Generally, you choose the models that have higher adjusted and predicted R-squared values. These statistics are designed to avoid a key problem with regular R-squared—it increases every time you add a predictor and can trick you into specifying an overly complex model.” [emphasis added]


    “low p-values indicate terms that are statistically significant.”


    “The p-value for each term tests the null hypothesis that the coefficient is equal to zero (no effect). A low p-value (< 0.05) indicates that you can reject the null hypothesis. “


    From https://dss.princeton.edu/onli…terpreting_regression.htm


    “The t statistic is the coefficient divided by its standard error. The standard error is an estimate of the standard deviation of the coefficient, the amount it varies across cases. It can be thought of as a measure of the precision with which the regression coefficient is measured. If a coefficient is large compared to its standard error, then it is probably different from 0.”


    “Your regression software compares the t statistic on your variable with values in the Student's t distribution to determine the P value, which is the number that you really need to be looking at. “


    Use P to assess whether a term belongs in the model, and t to assess the numerical precision of its coefficient – both are important.
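    A minimal Python illustration of where those t and P values come from, using synthetic data and an ordinary least squares fit:

    # The OLS summary lists, for each coefficient, its standard error,
    # t = coef / std_err, and the p-value from the t distribution, along with
    # R-squared and adjusted R-squared.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    x = np.linspace(0.0, 5.0, 30)
    y = 2.0 + 1.5*x + rng.normal(0, 0.5, x.size)

    X = sm.add_constant(np.column_stack([x, x**2]))   # include a superfluous x^2 term
    fit = sm.OLS(y, X).fit()
    print(fit.summary())
    # Expected behaviour: the x term gets a large |t| and a tiny p-value, while the
    # superfluous x^2 term gets a p-value well above 0.05 and adjusted R-squared
    # barely improves (or drops) when it is included.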



    {addition}


    From the Minitab site:


    "T and P are inextricably linked. They go arm in arm, like Tweedledee and Tweedledum. Here's why.


    When you perform a t-test, you're usually trying to find evidence of a significant difference between population means (2-sample t) or between the population mean and a hypothesized value (1-sample t). The t-value measures the size of the difference relative to the variation in your sample data. Put another way, T is simply the calculated difference represented in units of standard error. The greater the magnitude of T (it can be either positive or negative), the greater the evidence against the null hypothesis that there is no significant difference. The closer T is to 0, the more likely there isn't a significant difference."
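    The description above, in a few lines of Python with synthetic numbers: a 1-sample t is just the difference between the sample mean and the hypothesized value, expressed in units of the standard error of the mean.

    # Compute a 1-sample t by hand and compare with scipy's result.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    sample = rng.normal(10.3, 1.0, 25)     # data whose true mean is 10.3 (assumed)
    hypothesized = 10.0                    # null-hypothesis value (assumed)

    se = sample.std(ddof=1) / np.sqrt(sample.size)     # standard error of the mean
    t_by_hand = (sample.mean() - hypothesized) / se
    res = stats.ttest_1samp(sample, hypothesized)
    print(t_by_hand, res.statistic, res.pvalue)        # the two t values agree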