Posts by kirkshanahan

    @Cy


    The use of Pd coatings to promote hydrogen absorption is well known. When I joined the hydrogen processing group in 1995, a PhD materials scientist told me his thesis work was loading niobium with hydrogen, which could only be done if it was Pd coated. He had already been working for several years.


    In ~1995 I submitted a funding proposal that suggested I would coat TiFe, a hydride former, with Pd to accelerate the kinetics and ease the activation process.

    @Cy - I started into study of cold fusion in 1995, when I joined the hydrogen processing group here at the Savannah River National Laboratory. The claims made by CF pundits directly threaten my safety, and the safety of my coworkers. Therefore, I try to stay abreast of the field and I have studied specific aspects of interest in the past, which led to my publications in the field. I personally work with almost all of the materials people claim do LENR. Previously I worked for 8 years with pure tritium. I am currently setting up a small gas handling manifold for work on materials with small amounts of tritium in them. I have much relevant experience beyond that as well. Is that 'big long enough' for you?

    Claytor used Femtotechs to produce those results. Femtotechs are susceptible to interference from other chemicals in the gas phase, which is why Tom notes the extremes he went to in order to (supposedly) clean the surfaces with a plasma before he ran the experiments. However, being a pathoskeptic, I wanted to know how he assured himself the experimental conditions didn't cause 'additional' cleaning, i.e., how did he know his cleaning was adequate? So I emailed him. He replied, 'read the paper' (paraphrase). I replied, 'I did, but I still have a couple of questions, which are...'. He never replied.



    Some time later I asked Dale Tuggle about the paper, expressing that I was a little skeptical. He replied that I was right to be so. No further discussion ensued.


    Replication answers all.


    ref: https://www.osti.gov/servlets/purl/102234

    Not reading Russian, is it possible that the different forms of titanium are just absorbing different amounts of tritium from the deuterium?


    Isotope effects are possible but Ti doesn't show a strong isotope effect in most cases. I'd look elsewhere if I were you.


    Might be a surface area effect. The sieved sample would have fewer small particles, which have a larger surface/volume ratio, i.e., more m^2 per gram. Thus they might absorb more. You'd need to look at the whole of the particle size distribution results and follow up with some other fractions, I would guess.
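    To put a number on the surface/volume point, here's a minimal sketch. It assumes smooth spherical particles and a Ti density of 4.506 g/cm^3, both idealizations of mine; real powder is rougher and more irregular, so the absolute numbers are only indicative.

    ```python
    # Specific surface area of an idealized spherical particle.
    # Assumptions (mine, for illustration): smooth spheres, Ti density 4.506 g/cm^3.
    def specific_surface_area_m2_per_g(diameter_um, density_g_cm3=4.506):
        # For a sphere: SSA = area/mass = (pi d^2) / (rho * pi d^3 / 6) = 6 / (rho * d)
        d_cm = diameter_um * 1e-4                  # micrometers -> centimeters
        ssa_cm2_per_g = 6.0 / (density_g_cm3 * d_cm)
        return ssa_cm2_per_g * 1e-4                # cm^2/g -> m^2/g

    # A 1 um particle carries 100x the surface per gram of a 100 um particle:
    for d_um in (1, 10, 100):
        print(d_um, "um ->", round(specific_surface_area_m2_per_g(d_um), 4), "m^2/g")
    ```

    The SSA scales as 1/diameter, so the fine tail of the size distribution can dominate the total surface area, and hence the uptake kinetics, even when it is a small mass fraction.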

    The tritium story seems to have world-wide acceptance now


    World-wide? Doubtful. Within the LENR community. Of course.


    Which is why I was bringing up the method for detecting tritium. The 'industry standard' is the method described in:


    "Tritium Analysis in Palladium with an Open System Analytical Procedure", K. Cedzynska, S. Barrowes, H. E. Bergeson, L. C. Knight, and F.G. Will, Fus. Tech., 20, (1991), 108


    In fact, the paper above was recommended to me by a colleague for some work we were doing jointly. In the conclusions, they mention this is the method used by Wolf, one of the first to claim T detection in F&P cells.


    They also say: "Unfortunately, in evaluating the applicability of this analytical procedure for reliable tritium determination, we find the open-system technique to be sometimes subject to artificially high count rates (due to color effects in the solution and, possibly, to metal contaminants in the palladium), and also to artificially low count rates (due to possible loss of gaseous tritium during the various steps involved in the open-system procedure)."


    Elsewhere in that section they note that "improper analytical procedures" could introduce tritium contamination. They go on to say: "the use of reliable techniques should be mandatory" and "A reliable technique ... has been developed and will be described in the near future".


    IOW, the 'industry standard' method has some issues, at least according to Will, et al.


    Will later published this:


    "Closed-System Analysis of Tritium in Palladium", K. Cedzynska and F. G. Will, Fus. Tech., 22, (1992) 156,


    that describes the improved method. However, that method uses a microdistillation, which is a real pain to do, very labor intensive. Plus, it also incorporates a catalytic gas recombination to catch lost tritium. As such it isn't normally used, since most samples in the nuclear business are well above the T limits where problems can arise with the interferents Will lists in his 1991 publication. The last sentence of the paper is: "Application of the closed-system procedure...is advisable to ascertain that there is no possibility for tritium contamination." IOW, "the best technique is this one". But the 'best' is not the 'standard', because of the work load of the best technique.


    The 'standard' technique requires the use of quench compensation to offset the sample coloring by dissolved Pd in these analyses. This is why contaminants can alter the results. But everyone knew from the earliest days that Pt could be found on the Pd cathode, i.e., the electrolyte was dissolving some of the anode and depositing it on the cathode. The reverse of that was never investigated to my knowledge, an interesting omission. Dilute enough Pd won't show visible color but can still affect the tritium measurement. Other elements or particles can do that too, especially nanoparticles. For example, solutions of gold nanoparticles are red-colored. If you have nanoparticles floating around, the quench correction would need to be adjusted. When might that happen? How about during codep, when sometimes macroscopic chunks break off (as per comments made in this forum recently). Nanosized deposits forming in solution or breaking off the cathode are clearly possible.
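    To illustrate the quench-correction point with made-up numbers: an LSC result is counts divided by a counting efficiency taken from a quench calibration curve, so any error in the assigned efficiency propagates directly into the reported activity, in either direction depending on whether the correction over- or under-compensates. The efficiencies below are illustrative, not from any real calibration.

    ```python
    # Why a mis-estimated counting efficiency skews an LSC result.
    # All numbers are illustrative; no real quench curve is being modeled.
    def true_activity_dpm(measured_cpm, counting_efficiency):
        # LSC basics: disintegrations per minute = counts per minute / efficiency
        return measured_cpm / counting_efficiency

    cpm = 500.0
    eff_from_curve = 0.40  # efficiency the standard quench curve would assign
    eff_actual     = 0.30  # hypothetical true efficiency with extra, unrecognized quench

    reported_dpm = true_activity_dpm(cpm, eff_from_curve)  # what the analysis reports
    actual_dpm   = true_activity_dpm(cpm, eff_actual)      # what the sample contains
    error_pct = 100.0 * (reported_dpm - actual_dpm) / actual_dpm  # ~-25% here
    ```

    Flip the two efficiencies and the same arithmetic gives an artificially high result instead, which is the kind of sensitivity Will et al. were flagging.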


    The point is, the CF researchers never talk about this. They just say, as Ed did, "We used LSC." There's a lot more to it than that.



    Oh BTW, Will was the director of the National Cold Fusion Institute (set up in Utah after the F&P announcement) when he published these papers.

    Jed,


    Please look at Figure 20 of T. Mizuno / Journal of Condensed Matter Nuclear Science 25 (2017) 1–25 (ref 1 of the ICCF22 preprint). It shows the inlet and outlet T's on a run from the same calorimeter. Two points from that Figure: (1) the inlet temperature tracks the outlet temperature in the first (major) part of the trace (up to a little past 7 hours), and (2) the noise levels of the two signals are slightly different but remain the same throughout the whole run. Now look at the Figure I posted. The purple curve clearly shows regions of increased noise levels that don't track the output temperature, and the noise level of those regions is more like 0.8-0.9 degrees, not <0.1. In fact the noise levels near the end of that run show a ~0.2-0.3 degree span, so they change during the run. I can see the inlet T noise in Fig. 20 might be the 0.1 degrees, but the outlet is larger. What is most important though is the loss of tracking that we see in the spreadsheet plots. That indicates an 'external' source of signal that causes a dip in Tin, which translates to a positive jump in Wout. One common cause of that behavior is electrical noise.
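    For what it's worth, the two diagnostics I'm describing (local noise level and inlet/outlet tracking) are easy to compute from the spreadsheet columns. A rough sketch, assuming evenly sampled temperatures in plain Python lists (the function names and window size are my own choices):

    ```python
    # Two quick diagnostics for paired inlet/outlet temperature traces.
    # Assumes evenly sampled plain-Python lists of temperatures.
    import statistics

    def rolling_noise(series, window=20):
        # Local noise level: standard deviation over a sliding window.
        return [statistics.stdev(series[i:i + window])
                for i in range(len(series) - window + 1)]

    def tracking(inlet, outlet):
        # Pearson correlation; near +1 means the inlet "tracks" the outlet.
        n = len(inlet)
        mi, mo = sum(inlet) / n, sum(outlet) / n
        cov = sum((a - mi) * (b - mo) for a, b in zip(inlet, outlet))
        var_i = sum((a - mi) ** 2 for a in inlet)
        var_o = sum((b - mo) ** 2 for b in outlet)
        return cov / (var_i * var_o) ** 0.5
    ```

    Plotting the rolling noise of Tin would show directly whether the 0.8-0.9 degree regions are real, and the tracking coefficient computed over segments would show where the inlet stops following the outlet.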


    Room temperature drift should really not be a problem. That’s why one subtracts the Tin from the Tout instead of assuming a constant Tin, to correct for drifts. But as Ruer noted, one likes the Tin to be pretty unchanging, certainly not tied to the outlet T. That is a problem. (BTW, the Ruer quote is from page 58 of ICCF22_Abstracts.pdf.)


    Does this invalidate the calorimetry? I don’t know. It is a red flag is all.


    Replication will answer all.

    As long as I am at it….


    A.) Basics of air flow calorimetry - Jacques Ruer SFSNMC [email protected]

    “Any fluctuation of the inlet temperature results in an error of the heat flow measurement. The heat storage capacity of the whole system introduces a time lag on the readings”


    I find the change in inlet temp on the calorimeter as described in the 2017 paper (I believe, see following) to be very suspicious.


    B.) In the spreadsheet “Mizuno 2017 120 W input excess heat”, the delta-T seems to show an abrupt jump when the input power is turned off (~21,500 sec), with a concomitant decrease in noise level. (The delta-T plot is on the spreadsheet. Note the change at the ~21,500 sec point.) This seems to be primarily due to the outlet T, which shows some significant negative deviations from the expected values starting at that point. See below. The inlet T starts decreasing when the power is turned off. The outlet T is very messy. This all says ‘electrical noise’ to me, but that’s just me…



    Actually, there are a couple of interesting items arising from the Figure Jed just posted. There are a couple of 'flyers' or 'outliers' based on the line drawn through the data, but I go back to my original question - Is the line right? If one fits a straight line to the data, the two outliers near the center-high end will be less so, but the point at low reactor T becomes way off. However, what if the correct 'line through the data' has a threshold response level, i.e. no signal (or just noise) until a particular value is reached (which would be the X-axis intercept of the line)? That paints a different picture, and the points that define the lines are not well-replicated. I think it might be worth considering, but that's just me...
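    A minimal sketch of the two candidate models I mean. The slope, intercept, and threshold values would have to come from actual fits; none of the numbers here are from the Figure.

    ```python
    # Two candidate response models for signal vs. reactor T.
    # Parameter values are placeholders, not fits to the posted Figure.
    def straight_line(x, slope, intercept):
        return slope * x + intercept

    def threshold_line(x, slope, x0):
        # No response (just noise) until x exceeds the threshold x0;
        # x0 is the X-axis intercept of the rising portion.
        return slope * (x - x0) if x > x0 else 0.0
    ```

    Fitting both to the same points and comparing residuals would show which description the data actually prefer; with only a handful of poorly replicated points, they may not be distinguishable.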

    How about you skeptics stay away from this kind of stuff, and stick to keeping us honest about the science?


    Your bias is showing Shane. You are consistently reading stuff into what I have said today that isn't there. Please stop.


    I have no opinion or comment to offer on the Taubes affair.


    You may not have refreshed yourself enough to note that Ed published a paper claiming he showed that spiking could not have been the cause of a particular tritium result. I indicated I thought he was referring to Bockris but wasn't sure; Ed can correct me on that. I further indicated that his method did not prove that. That is Ed's method. Nothing to do with Bockris or Taubes.


    With Jed's comment above, I have now indicated that the methodology used to detect tritium is never adequately described. I will add here that there is reason (probably two, in reality) to be suspicious of 'industry standard' (to use Ed's term) results.


    We shouldn't base any belief/disbelief on questioned results.

    @AF


    You got "vel m/sec 3.99". I haven't checked your calcs but I find this interesting. In Fig. 9 of the new Mizuno/Rothwell paper, this translates to an input power of ~3.6 W. Then on Figure 8, we see that there is a 'flyer' data point at that velocity, if one assumes the smooth line drawn on the Figure is correct. It is about 15% low. That makes me wonder if there is really a plateau in the data, up to higher input powers where it might suddenly jump up to the data regime shown at ~5 W input power. IOW, I am wondering if the smooth line is the correct model, and whether or not Figure 8 indicates some unexpected behavior on the part of the fan. Just something to consider.


    Edit: I should have added that if the line is right, the flyer point, being a real point, indicates that the 2sigma limits are something like +/- 30%, which suggests lots of variation in fan performance.
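    The arithmetic behind the +/-30% remark is just this, with illustrative values plugged in (treating the single flyer's ~15% offset as a rough 1-sigma estimate, which is admittedly crude):

    ```python
    # Arithmetic behind the "+/-30%" remark. Values are illustrative;
    # treating one flyer's offset as a 1-sigma estimate is a crude assumption.
    def percent_deviation(measured, predicted):
        return 100.0 * (measured - predicted) / predicted

    dev = percent_deviation(3.39, 3.99)  # a point ~15% below the line
    two_sigma_band = 2.0 * abs(dev)      # roughly +/- 30%
    ```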

    Are you willing to say that everyone who detected tritium also engaged in fraud or was too incompetent to know where the tritium came from?


    Of course not. Stop trying to make me out as a pathoskeptic. You know better.


    The tritium was measured by the scintillation method, which is the industry standard.


    Well, that's nice, but that's hardly adequate. You have stated that liquid scintillation was used, which means a class of experimental methods using LSC devices and cocktails was used. Which ones? (Cocktails, instruments, prep methods, etc.)


    So, you assume Bockris got some...


    I assume nothing here. I simply pointed out that the appropriate way to define how a particular time profile (in this case of T in water) came to be is not to assume an exposure profile and show it doesn't match. That simply eliminates one of an infinite number of possibilities. The correct way is to back-calculate the exposure profile necessary to obtain the results obtained, and see if that could happen accidentally or deliberately due to contamination.
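    The back-calculation idea can be sketched with a toy well-mixed-cell model. The model and every number below are mine, purely to illustrate the inversion; they have nothing to do with anyone's actual tritium data.

    ```python
    # Toy illustration of back-calculating a source term instead of assuming one.
    # Model (mine, illustrative): well-mixed cell with first-order loss,
    #   c[k+1] = c[k] + dt * (source[k] - loss_rate * c[k])
    def back_calculate_source(c, dt, loss_rate):
        # Invert the model: solve each step for the source that must have been present.
        return [(c[k + 1] - c[k]) / dt + loss_rate * c[k] for k in range(len(c) - 1)]

    # Sanity check: a profile generated with source=5, dt=1, loss_rate=0.1
    # gives back the constant hidden source.
    profile = [0.0, 5.0, 9.5, 13.55]
    sources = back_calculate_source(profile, 1.0, 0.1)
    ```

    The recovered source history is what you then examine: does it look like a steady contamination leak, a one-time spike, or something else? That question can't be answered by testing a single assumed profile.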


    BTW, that generic approach is exactly what I did in my reanalysis of your Pt results. I assumed no excess energy, back-calculated what had to have happened to get the signal you got, and postulated a reasonable mechanism to do so. The fact that the changes required were very small was a bonus, especially when a systematic trend was found in them.