Document: Isotopic Composition of Rossi Fuel Sample (Unverified)

  • But where Jed falls off the rails is where he denies the presence of a "nuclear signature." First of all, de novo helium is a nuclear product,


    I am aware of this. However, helium is very difficult to detect, because the background is high. If you do not get measurable heat, not even, say, 50 mW, you will not get enough helium to measure with confidence.

  • My source confirmed that I may release the file:
    xa.yimg.com/df/newvortex/analy…0JsCucYkWtg&type=download
    If that doesn't work, it is in the filespace for newvortex: groups.yahoo.com/neo/groups/newvortex/files


    I cannot access any of these. Could you post a readable document, please?


    Edit: Why should these samples be more reliable than the samples given to Sven K in 2011? Rossi himself says they were faked!


    Edit 2: The engineer provided a readable version. The only news is info on who did the analysis. So, a sample has been analysed.

    I can give dozens -- hundreds! -- more examples from science, technology, business, banking, agriculture . . . You name it, I know of examples. I happen to have many books about folly, mistakes and failure.


    Since you can list the follies, they have obviously been identified as such. Of course scientists are sometimes wrong. That is why constructive criticism is so important in the scientific method. There are many poor and misleading LENR papers - you have said that yourself. Can we dismiss LENR altogether because of that? I don't think you would say that!

    8Be will normally spontaneously fission within about a femtosecond, and that would generate a very hot gamma


    No, the fission as such has no gammas, since 4He has no accessible excited states. There is, however, a very small gamma branch to the 8Be ground state, which then fissions into 2 alphas. The gammas in the process are created by reactions of the two ~9 MeV alphas.

  • On the 18th, Abd wrote:
    “There are questions about the heat/helium correlation, within the community. The exact value is disputed. Some dispute it entirely, because of asserted weaknesses in the reports. Hey, ask Joshua Cude, he will point all that out. Or Kirk Shanahan, the last standing peer-reviewed published skeptic still writing about cold fusion.”


    So…let’s talk about the supposed heat-helium correlation from the He POV. Abd is really big on Miles’ work at China Lake, where he stuck closeable flasks in the exiting gas stream of an open F&P-type electrolysis cell so he could collect and analyze gas samples. The data in the Figure Abd and I were discussing previously (Figure 47 in Storms’ book) is of interest.


    I’d like to address a couple of Abd’s prior comments about my comments regarding Figure 47 in my last paper in J. Envir. Mon. First Abd makes a big deal out of the fact that I digitized the plot when the data was in Table 7 on the next page. Mea culpa! I did miss that in the rush to write this stuff up. But really, who cares where the data came from, as long as it is the data? It seems to me that Abd is trying to set up the old “Shanahan made a mistake so he can never be right…” conclusion, which is of course false. It’s good that my ‘error’ was caught, but I never claimed to be perfect. In fact I claimed to make mistakes all the time, which is why I publish and post, to get feedback on what I’m doing and saying…


    Next…in the paper I mention Figure 47 and point out that it is not a correlation plot; in fact, its correlation coefficient indicates complete randomness. I should have continued on to say that the correct way to look at it then is as an attempt to display achieving a constant under different conditions, which is what Abd said the plot was. So the correct thing to do is to just look at the mean value and the standard deviation. Supposedly, this value indicates an approach to the theoretical yield expected from the standard value of 23.82 MeV/fusion, or 2.6x10^11 He atoms/watt-sec.
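As a quick check on that constant, the theoretical yield follows directly from the Q value (a sketch; the MeV-to-joule conversion factor is the standard CODATA value):

```python
MEV_TO_J = 1.602176634e-13  # joules per MeV (CODATA)

# Q value used in the post for d + d -> 4He
q_fusion = 23.82  # MeV per fusion

# One watt-second = 1 J; each fusion yields one He atom
he_atoms_per_watt_sec = 1.0 / (q_fusion * MEV_TO_J)  # ~2.62e11 atoms/W-s
```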


    So what is that number then? Including all 10 points we get the average = 1.83x10^11 +/- 1.25x10^11 (1 sigma). The 2 sigma band on that average then goes from -0.67 to +4.33 x10^11, i.e. it is essentially indistinguishable from 0. There is a ‘flyer’ data point in the set. Excluding that point gives 1.49 +/- 0.67 x10^11, for a 2 sigma band of +0.15 to 2.83 x10^11, i.e. almost the whole available range (0 - 2.62x10^11). I say it is impossible to make any firm conclusions from that. Storms tries to talk about what fraction of He that might have been produced would have been trapped in the solid, but that is just hand waving to a predetermined conclusion.
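These means and standard deviations can be reproduced from column 3 of the Table 7 data given later in this post (a quick numerical check):

```python
import statistics

# Column 3 of Storms' Table 7: #He atoms per watt-second for the ten runs
he_per_ws = [1.90e11, 2.40e11, 4.90e11, 1.60e11, 2.50e11,
             1.40e11, 7.00e10, 7.00e10, 1.20e11, 1.00e11]

mean_all = statistics.mean(he_per_ws)   # ~1.83e11
sd_all = statistics.stdev(he_per_ws)    # ~1.25e11 (1 sigma, sample std dev)

# Drop the single 4.9e11 'flyer' point
no_flyer = [x for x in he_per_ws if x != 4.90e11]
mean_nf = statistics.mean(no_flyer)     # ~1.49e11
sd_nf = statistics.stdev(no_flyer)      # ~0.67e11
```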


    But more interestingly, I got to wondering how the # of He atoms per watt per second was calculated. From the data in the table, plus info from Miles’ papers, I can see that the concentration of He atoms measured is converted to # He atoms in the collection flask. The table gives the supposed excess power (1st column of Table 7 and of the Table below) measured for that run, so we can divide the # atoms by the Pex to get #atoms/Watt (2nd column). Then we obviously have to divide by some time to get the #atoms/Watt/sec, but Miles does not give this number. But it is trivial to compute it from the published #atoms/W/sec (column 3) (multiply by Pex, divide by #He atoms in 500cc, then take the reciprocal), and we see some interesting hidden data (column 4), a number that represents some sort of time value associated with that run.
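For the record, here is that back-calculation, using the table values as published (my reading of the column order; a sketch, not Miles' own procedure):

```python
# (Pex in watts, #He atoms in 500 cc, #He atoms per watt-second)
rows = [
    (0.10, 1.34e14, 1.90e11),
    (0.05, 1.05e14, 2.40e11),
    (0.02, 9.70e13, 4.90e11),
    (0.055, 1.02e14, 1.60e11),
    (0.04, 1.09e14, 2.50e11),
    (0.04, 8.40e13, 1.40e11),
    (0.06, 7.50e13, 7.00e10),
    (0.03, 6.10e13, 7.00e10),
    (0.07, 9.00e13, 1.20e11),
    (0.12, 1.07e14, 1.00e11),
]

# Hidden time per run: #atoms / (Pex * #atoms-per-watt-second) -> seconds
times = [n_he / (pex * rate) for pex, n_he, rate in rows]
# e.g. first run: 1.34e14 / (0.10 * 1.90e11) ~ 7.05e3 s
```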


    Miles never explains how he transitions from the #He atoms/Watt to #He atoms/Watt-sec, thus we don’t know exactly what this time represents, but one would think it is the duration of the excess heat event. I can’t think of anything else it might rationally be, but maybe I just missed something here. In his papers, Miles does say he typically left the flask in line for up to 2 days (8.64x10^4 sec) and that the time to flow 500cc of exiting gas through the flask under ‘nominal’ conditions is ~4140 (or maybe it was 4410) sec, and those two numbers bracket the computed times, so maybe we are right in assuming they are the amount of time that excess power signals were observed. If so, it is interesting to look at a plot of the #He atoms produced vs. that time. What we see is a decreasing # as time increases. A linear regression gives y = -2.714e+9*t + 1.316e+14 with an R^2 value of 0.750 (meaning R = 0.866). But even better is the exponential fit of y = 1.417e+14 * exp(-3.132e-05 * t) with an R^2 of 0.846 (=> R ~ 0.92) (y = # He atoms in 500cc flask in both equations). These correlation coefficients are high enough that one could start believing them!
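Both fits can be reproduced from the table values (note the exponential coefficient must be negative for a decreasing trend; small differences from the quoted coefficients come from the rounding of the table entries):

```python
import math

t = [7.05e3, 8.75e3, 9.90e3, 1.16e4, 1.09e4,
     1.50e4, 1.79e4, 2.90e4, 1.07e4, 8.92e3]      # computed run times, s
y = [1.34e14, 1.05e14, 9.70e13, 1.02e14, 1.09e14,
     8.40e13, 7.50e13, 6.10e13, 9.00e13, 1.07e14]  # He atoms in 500cc

def linfit(xs, ys):
    """Ordinary least squares: returns slope, intercept, R^2."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    slope = sxy / sxx
    return slope, my - slope * mx, sxy * sxy / (sxx * syy)

slope, icept, r2 = linfit(t, y)      # ~ -2.7e9, ~1.32e14, ~0.75

# Exponential fit via log-linear regression: ln(y) = ln(A) + k*t
k, lnA, r2exp = linfit(t, [math.log(v) for v in y])
A = math.exp(lnA)                    # ~1.42e14; k is negative, ~ -3.1e-5
```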


    This directly implies that the longer one runs the lower the # He atoms found. What this looks like to me is a slow dilution of a single introduction of He atoms into the sample. But wouldn’t we expect a continuous or at least increasing He atom production as long as the 'excess heat' is being produced? So, we are left with the questions “What is really going on here? What are these times? How do they relate to the experiment?”. Miles never explains this. Any comments Abd, any inside information?


    In the Ruby Carat video of the Miles interview, he says that he used a sealed system, but in fact he didn’t. The gases exited through an oil bubbler. As long as the internal pressure remains ~equal to the external pressure (i.e. when the pressure differential (dP) is within what can be compensated by the oil level moving up or down) the system will remain ‘sealed’. (You also have the problem of atmospheric gases dissolving in the oil and then releasing into the apparatus; that is probably minor, but the measured He levels are on the order of 1-10 ppb, so who knows!) But what if the dP goes too high or too low? Then either oil is sucked into the experiment, perhaps with a little air behind it, or the oil is pushed far enough out that you can get direct gas-to-atmosphere flow. Both of these conditions allow for some degree of contamination. Of course, arguments will now ensue about whether that is enough to give the obtained He.


    When might such a problem occur? How about when a nearly stoichiometric mixture of hydrogen and oxygen explodes? You will get a pressure surge then (maybe enough to blow out the oil?). Immediately following that, the internal experimental pressure should drop significantly (perhaps sucking oil+air back in?). This kind of event would give a spike-like introduction of air, with its He content, which would then slowly be diluted away after normal operating conditions returned. Kinda looks like what we see in the Miles data. Speculation, I know, but do you have a better explanation for the #He atoms vs time plot? I’d love to hear it.


    The real bottom line is that the data set is too small to be trustworthy for anything except speculative conclusions. One’s conclusions will depend on whether the flyer data are trustworthy or not (there actually are one or two more points that are set off from the rest besides the one that I clipped out in the analysis above). The fits above are highly influenced by the flyers, which means we need more data supporting this set before we can conclude anything firmly.


    In any case, given the fictional nature of the excess heat values, and the vagaries of the He data, the supposed He-heat correlation is really just wishful thinking.


    Storms’ Table 7 + Shanahan computed times
    Pex (W)    #He atoms in 500cc    #He atoms/W-sec    sec [KLS]


    0.1 1.34E+14 1.90E+11 7.05E+03
    0.05 1.05E+14 2.40E+11 8.75E+03
    0.02 9.70E+13 4.90E+11 9.90E+03
    0.055 1.02E+14 1.60E+11 1.16E+04
    0.04 1.09E+14 2.50E+11 1.09E+04
    0.04 8.40E+13 1.40E+11 1.50E+04
    0.06 7.50E+13 7.00E+10 1.79E+04
    0.03 6.10E+13 7.00E+10 2.90E+04
    0.07 9.00E+13 1.20E+11 1.07E+04
    0.12 1.07E+14 1.00E+11 8.92E+03


    Edit: Table headings format got all mucked up. Headings are: Col 1. Pex (watts), Col 2. #He atoms in 500cc, Col 3. #He atoms per watt-sec, Col. 4 seconds (computed by KLS).

  • Hermes suggests the obvious, thinking that nobody else has thought of it, and offensively, like "why are you so stupid as to think," and then he says the opposite of what I think.


    Lomax, it is you who makes "blatant typos", and then you blame me for being offensive over words I neither articulated nor implied.


    Anyway, let me make another obvious statement. Yours was not a typo but another misconception of physics. Your repeated inappropriate references to gammas and nuclear transitions demonstrate this. The real problem, Lomax, is that you are so obsessed with being right at any cost that you feel the need to distort the facts, manipulate, and insult those who dare to disagree. For you LENR is a religion, and any evidence which contradicts your beliefs must be dismissed, ridiculed or ignored.


    There are many people in this forum who sincerely want to learn. There are some who are honest enough to admit they are wrong (e.g. Jed). But your only purpose seems to be to show off. You come here claiming to facilitate research. But when challenged about your experience and qualifications for such a role, you take offence and evade the questions. Suit yourself. People will draw their own conclusions.


    On one side are those who inquire, examine, experiment, research, propose ideas and subject them to scrutiny, change their minds when shown to be wrong and live with uncertainty while placing reliance on the collective, self-critical, responsible and rigorous use of reason and observation to further the quest for knowledge.


    On the other side are those who espouse a belief system which pre-packages all the answers, who have faith in it, who trust the repeated mantras of authorities, priests and prophets, and who either think that the hows and whys of the universe are explained to satisfaction by their faith, or smugly embrace ignorance. If the "gods" proposed d+d fusion that is what it must be. If historically they used calorimetry, however unsuitable this may be today, that is the sacred path we must follow.


    Your review paper is an example of pre-packaged answers. It contains no discussion of possible artifacts, no discussion of alternative models to explain helium production other than deuterium fusion. In other words, it presents an unbalanced view where reality is only portrayed in the extremes of black and white.

  • Could you please explain this so a nuclear physicist can understand? What's the Hagelstein limit?


    Peter Hagelstein calculated that if 4He were formed in a deuterated environment, its recoil in any 24 MeV reaction would be such as to accelerate other deuterons, which in turn would cause low-level hot fusion, including 2.45 MeV neutrons. As we don't see the expected number of fast neutrons, the implication is that we don't have many fast alphas either. Hagelstein's calculation has not been verified experimentally, but there is no evidence that it is wrong, AFAIK.

  • Peter Hagelstein calculated that if 4He were formed in a deuterated environment its recoil in any 24 MeV reaction ...


    Just to add to that, there are several things Peter Hagelstein expects to find, including the aforementioned neutron-producing side channel. Taken together, he estimates they place an upper bound of sorts, at 10-20 keV, on the energies of any particles in a system undergoing LENR (what the upper bound looks like, e.g., whether it is a strict one, has not been specified, as far as I can tell). This derived limit is one of the things that leads Hagelstein and others such as Ed Storms in the direction of attempting to figure out how a 24 MeV quantum might be split up into many small pieces.

  • Calorimetry is the be all, end all proof of cold fusion.


    What is the intention of this statement?


    If my aim is to remove (transmute) Cs137, then, in a first approach, I do not care about calorimetry. Nevertheless the presented reaction would be a LENR one.


    For me, another proof has long since been given, and the critiques are just historical. D-D fusion produces helium in the expected amounts, without calorimetry... Read the Stringham papers.


    Only if we go down the "fools' path", following those who don't like to share information but do like to get ever more research money, do we need the "business proof" of calorimetry.


    Maybe you could tell the folks about the COPs of NANOR9/10, the current leaders of the pack?

  • If my aim is to remove (transmute) Cs137, then, in a first approach, I do not care about calorimetry. Nevertheless the presented reaction would be a LENR one.


    If you do not use calorimetry as a diagnostic, you will have no way of knowing whether the reaction is occurring. It probably will not occur, especially if you are inexperienced. So you will look for transmutations and find nothing. Because there was no reaction.


    You might repeat that 10 times, or 100 times. You might spend 3 years at it, or 10 years. You might have no reaction and no transmutations the whole time, but you will never know, because you have no diagnostics. No indication the reaction has turned on, or if it does turn on, no indication of the intensity. This is like throwing darts in the dark.

  • If you do not use calorimetry as a diagnostic, you will have no way of knowing whether the reaction is occurring. It probably will not occur, especially if you are inexperienced. So you will look for transmutations and find nothing. Because there was no reaction.


    Wyttenbach wrote:
    If my aim is to remove (transmute) Cs137, then, in a first approach, I do not care about calorimetry. Nevertheless the presented reaction would be a LENR one.



    If you do not use calorimetry as a diagnostic, you will have no way of knowing whether the reaction is occurring. It probably will not occur, especially if you are inexperienced. So you will look for transmutations and find nothing. Because there was no reaction.


    With this you are way off. Mass spectrometry / XMS, as Iwamura did, is enough to prove transmutations.


    Without calorimetry, how would you know it is the expected amount of helium? Expected in what ratio, to what? This makes no sense.


    Stringham uses calorimetry.


    This is of course right. But if you know that all helium is captured, then you could rely on the He measurements. But I agree that having two measures is much better.
    I only wanted to make clear that not everybody needs calorimetry.

  • This is like throwing darts in the dark.

    This should be made clear. If you keep looking for something, you are likely to find it. That could be because what you are looking for is real, or because you have tried enough times that you stumbled across some artifact. So you might hit something with a dart in the dark, but you won't necessarily know much until you have something reproducible or reliable in some way.


    That is why I point to heat/helium, because heat/helium is a confirmed result from many experiments, showing the correlation of what would be expected to normally be independent variables. Error in one would not be expected to consistently correlate with error in the other. So this is far stronger evidence than any isolated result, or even a pile of results that don't show correlation.


    (Some, seeing this without knowing the experimental background, may think that "heat" means the experiment was hotter and therefore might leak more helium. However, XP does not mean hotter. It means that some heating was detected, and in some approaches the experiment is maintained at constant temperature. Further, this "leakage" would somehow have to manage, across many different protocols with different cell materials, to settle near the theoretical value for energy released from deuterium conversion to helium .... not plausible.)


    Arguments that heat is not "proven" completely neglect the correlation, which confirms, roughly, the heat measurements, just as the heat measurements confirm the de novo helium production.

  • Mass spectrometry /XMS as Iwamura did is enough to prove transmutations.


    Some people at the NRL disagree. They say the Mitsubishi transmutations are probably contamination. The only way to prove the transmutations come from cold fusion is to detect anomalous excess heat. Iwamura did measure anomalous heat in some of his early experiments. He later stopped doing calorimetry.

  • (some, seeing this without knowing the experimebtal background, may think that "heat" means the experiment was hotter and therefore might leak more helium. However, XP does not mean hotter. It means that some heating was detected, and in some approaches, the experiment is maintained at constant temperature . . .


    In other cases, the overall heat generated in the cell was less during anomalous heat production than it was during a run with no heat production, because electrolysis power was higher in the latter case.

  • Some people at the NRL disagree. They say the Mitsubishi transmutations are probably contamination.


    What NRL thinks is biased anyway. The Iwamura claims were replicated many times, with on-the-fly, in-reactor buildup of the substrate and measurements through tight windows. Initial measurements were made, and continuous measurements were also taken (but not published, for business reasons..).


    In that particular case I would point to contaminated thinking/reasoning.

  • Some people at the NRL disagree. They say the Mitsubishi transmutations are probably contamination. The only way to prove the transmutations come from cold fusion is to detect anomalous excess heat.


    Excess heat is a very poor way to detect transmutation. Generally speaking, nuclear methods are many orders of magnitude more sensitive. For example, 10^6 gammas per second would be an enormous signal far beyond the background, and yet calorimetry would never detect it at all. As for transmutation, MS, XRF, and NAA are all appropriate methods and should be used together.
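The point about calorimetric sensitivity is easy to quantify (assuming, say, 1 MeV per gamma):

```python
MEV_TO_J = 1.602176634e-13  # joules per MeV

rate = 1.0e6     # gammas per second: an enormous signal for any gamma detector
e_gamma = 1.0    # MeV, an assumed typical gamma energy

# ~1.6e-7 W: orders of magnitude below typical calorimeter resolution
power = rate * e_gamma * MEV_TO_J
```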


    In the specific case of Iwamura's transmutations he showed that the increase in Praseodymium was time correlated with the decrease in Caesium. As this is much better than the correlation of excess heat with helium evolution, it would be inconsistent to accept the latter yet reject the former.

  • Very well. Why don't you postulate a model which explains heat and helium production (not something trivial like radium decay) from materials that are or might be present? Please calculate how the alpha decay rate can be enhanced by the required amount to produce measurable heat.


    Here is a model: https://goo.gl/5rWWFN. The numbers are not very promising. Assuming an overly-optimistic amount of platinum that is actually participating in any alpha decay process, the screening needed is ~ 21 electrons. Please take a look and see if I've done anything clearly wrong.


    Obviously the screening required is a huge (surely unrealistic) amount. Here's one thought as to why we might be able to take it down by a large amount: presumably my adaptation of the Gamow theory to allow for electron screening is assuming isotropy. But if you had anisotropic screening of some kind, e.g., some kind of impingement upon the nucleus coming from a specific direction, perhaps a little bit of screening will go a much longer way than under the assumption of isotropy.
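The linked notes are behind a shortened URL, so here is an independent toy version of the screened-Gamow idea, not the author's actual calculation: treat screening as a uniform depression of the Coulomb barrier, so the effective tunneling Q becomes Q + Ue. Numbers are for 190Pt alpha decay with a point-Coulomb barrier and isotropic screening.

```python
import math

HBARC = 197.327   # hbar*c, MeV*fm
E2 = 1.43996      # e^2/(4*pi*eps0), MeV*fm
AMU = 931.494     # MeV per atomic mass unit

def two_G(q, z1=2, z2=76, a1=4, a2=186, r0=1.2):
    """WKB exponent 2G for tunneling through a point-Coulomb barrier,
    integrated from the contact radius to the classical turning point."""
    mu = AMU * a1 * a2 / (a1 + a2)               # reduced mass, MeV
    rc = r0 * (a1 ** (1 / 3) + a2 ** (1 / 3))    # contact radius, fm
    b = z1 * z2 * E2 / q                          # outer turning point, fm
    x = rc / b
    f = math.acos(math.sqrt(x)) - math.sqrt(x * (1 - x))
    return (2.0 / HBARC) * math.sqrt(2.0 * mu) * z1 * z2 * E2 / math.sqrt(q) * f

Q = 3.25  # MeV, alpha-decay Q of 190Pt -> 186Os

def enhancement(ue_mev):
    """Decay-rate boost from screening energy Ue (rate ~ exp(-2G))."""
    return math.exp(two_G(Q) - two_G(Q + ue_mev))

# Each ~100 keV of effective screening buys roughly an order of magnitude
```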


    Some interesting notes:

    • If you have a fission activity one order of magnitude larger with an average Q value of 15 MeV, you get the 0.5 W that Miles et al. saw.
    • Using the same starting values, two other elements are quite interesting: samarium and neodymium. (There are others as well.)
  • If you have a fission activity one order of magnitude larger with an average Q value of 15 MeV, you get the 0.5 W that Miles et al. saw.
    Using the same starting values, two other elements are quite interesting: samarium and neodymium. (There are others as well.)


    OK Eric. Please spell out exactly which reactions you refer to and how you make your calculations. :) Do you have any evidence that anisotropy would be relevant?

  • OK Eric. Please spell out exactly which reactions you refer to and how you make your calculations.


    By "reactions," I think you're referring to the platinum fission reactions in my suggestion above that "If you have a fission activity one order of magnitude larger with an average Q value of 15 MeV, you get the 0.5 W that Miles et al. saw."


    At this link I list some fission reactions yielding stable daughters that, if they could be brought about somehow, would be exothermic. For platinum, the Q values range from 106 MeV to 3 MeV. For palladium, the Q values range from 19 MeV to 86 keV. The 15 MeV value I mentioned above comes from my having examined the palladium reactions sometime back and getting that number stuck in my head. As you know, in the barrier tunneling calculation, a reaction with a higher Q value is more likely than one with a low Q value. So on qualitative grounds a 15 MeV average value per reaction is not completely unrealistic.


    I’m not sure yet how to model a screened fission reaction. Perhaps using the WKB approximation? I assume the calculation bears some resemblance to the Gamow calculation for alpha decay, except that the daughters are stochastic rather than predetermined. In the notes I linked to above, I estimate that in an experiment by Miles et al. they observed an implied rate of 4He generation of 2.25e+10 4He per second in one of their better runs. If in addition to helium generation you had fission reactions at an average of 15 MeV per reaction and an activity of one order of magnitude higher (i.e., 2.25e+11 s^-1), you’d get ~ 0.5 W power:


    (2.25e+11 reaction/s * 15 MeV/reaction) = 0.54073461 joules/s ~ 0.5 W
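That arithmetic checks out:

```python
MEV_TO_J = 1.602176634e-13  # joules per MeV

activity = 2.25e11   # fission reactions per second (one order above the He rate)
q_avg = 15.0         # MeV per reaction, the assumed average

power = activity * q_avg * MEV_TO_J  # ~0.54 W
```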


    Do you have any evidence that anisotropy would be relevant?


    My argument at this point is qualitative and not more than suggestive: (1) First, consider that if your assumption is isotropy, and that’s not needed, then you’re spending ~ 21 electrons for screening isotropically across 4 pi steradians. If exposing the nucleus and Coulomb barrier to a gradient of screening from one side was sufficient, or perhaps a narrow hole is punched through the barrier by a current of electricity, you could reduce the electron count by a huge fraction. (That is to say, making this assumption about anisotropy is convenient and helps the numbers to become slightly more plausible.) (2) The fact that the Coulomb barrier is a barrier preventing positively charged fragments from escaping the parent nucleus is a counterintuitive thing to try to understand. Without further information, one would assume the opposite: the only thing holding the nuclei together in the parent nucleus is the strong force, and the only role that the Coulomb potential plays is to repel the positively charged nuclei. But the Gamow calculation proceeds from a very different assumption — although protons repel one another, it is also the case that there is something about the Coulomb force that prevents positively charged fragments from escaping. In that light I imagine the Coulomb barrier as a kind of Faraday cage. As long as the Faraday cage is flawless, there’s a low probability of something escaping through it. If there is an imperfection in it, e.g., a little bit of screening from one side, the probability of "RF" (i.e., a fragment) escaping increases considerably.

  • I’m not sure yet how to model a screened fission reaction.


    Good work Eric. I checked the first Pt fission reaction and the energy you calculate is correct. :) You can use the code I published earlier to calculate the Gamow factor for every possible fission, and you will get some nice asymmetric "fission yield" curves. (For actinide fission, Gamow theory predicts a peak near doubly magic 132Sn, and this is confirmed experimentally.)


    The only problem is the fission energy for Pt is around 100 MeV or so (compared with 200 MeV for uranium). This means there is a formidable Coulomb barrier to overcome. The Gamow factor is -47 for Pt190 compared to -35 for Pu239 (for spontaneous fission). Pt fission will be a trillion times slower - quite unobservable.
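For comparison, here is a naive touching-spheres WKB estimate of the symmetric-fission barrier exponent (my own sketch, not Hermes's published code; this crude model ignores deformation along the fission path and so grossly overstates the absolute barriers, but it does show the Q-value effect Hermes describes):

```python
import math

HBARC = 197.327  # hbar*c, MeV*fm
E2 = 1.43996     # e^2/(4*pi*eps0), MeV*fm
AMU = 931.494    # MeV per atomic mass unit

def two_G_fission(q, z_frag, a1, a2, r0=1.2):
    """WKB exponent for two touching fragment spheres tunneling apart."""
    mu = AMU * a1 * a2 / (a1 + a2)
    rc = r0 * (a1 ** (1 / 3) + a2 ** (1 / 3))
    b = z_frag * z_frag * E2 / q
    x = rc / b
    f = math.acos(math.sqrt(x)) - math.sqrt(x * (1 - x))
    return (2.0 / HBARC) * math.sqrt(2.0 * mu) * z_frag * z_frag * E2 / math.sqrt(q) * f

# Symmetric fission: 190Pt -> 2 x (Z=39, A=95) at Q ~ 100 MeV
g_pt = two_G_fission(100.0, 39, 95, 95)

# Symmetric fission: 239Pu -> (Z=47, A=119) + (Z=47, A=120) at Q ~ 200 MeV
g_pu = two_G_fission(200.0, 47, 119, 120)

# g_pt is far larger than g_pu: Pt fission is enormously more suppressed
```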


    The next good point you allude to is that if Pt fission does occur, then stable products are most likely. That's unexpected! :) On the other hand, there will likely be less frequent channels which do create radioactive daughters. So I think the model is wrong, but I do like your thinking.


    Every model I have evaluated predicts radioactive products. It would be very exciting if someone were to find the predicted radioactivity. Failing that, it would be nice if a model predicted radioactivity where it has already been observed. In particular, platinum (anodes) should be analysed.

  • As a next step I will take a look at the Gamow factor applied to the fission daughters and see where that goes.


    The only problem is the fission energy for Pt is around 100 MeV or so (compared with 200 MeV for uranium).


    Perhaps palladium will see higher fission activity than platinum, then (see the second list in this link). I'll know more once I add the Gamow factor to fission reactions.


    The next good point you allude to is that if Pt fission does occur, then stable products are most likely. That's unexpected! On the other hand there will likely be less frequent channels which do create radio-active daughters. So I think the model is wrong but I do like your thinking.


    Yes — it was an edited list of platinum and palladium fission reactions leading to stable products rather than a model. A prerequisite of a model would be something that says how likely a particular branch is. With your additional advice I can start to make some models. How unrealistic the general approach is will depend upon the activity that is predicted from the unstable branches.

  • Perhaps palladium will see higher fission activity than platinum, then (see the second list in this link). I'll know more once I add the Gamow factor to fission reactions.


    Alas no. Palladium is much less fissionable than Platinum. If you like I can give you a list of all fission products with Gamow factors for any isotope you care to name. IMHO fission is too slow, too crude to account for LENR. We need something more probable yet more delicate. I think you know what I have in mind! But let's discuss that privately.

  • @Eric & Hermes
    Two comments and a question:


    N/Z ratio and stability
    If you fission a heavy nucleus both fragments will end up away from the line of stability - they will have the N/Z (number of neutrons/number of protons) ratio of the parent which is too high. The fragments will thus necessarily be radioactive and decay by beta-minus decay (exactly as actinide fission).
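Peter's N/Z point in numbers, taking symmetric fission of 190Pt as a hypothetical example (my illustration):

```python
# Parent: 190Pt, Z=78, N=112
z_parent, n_parent = 78, 190 - 78
parent_ratio = n_parent / z_parent           # ~1.44

# Symmetric split: each fragment is Z=39, A=95 (95Y), inheriting the parent N/Z
z_frag, a_frag = 39, 95
frag_ratio = (a_frag - z_frag) / z_frag      # ~1.44, same as parent

# The only stable yttrium isotope is 89Y (N=50)
stable_ratio = (89 - 39) / 39                # ~1.28

# The fragment carries ~6 extra neutrons and must beta-minus decay
excess_neutrons = (a_frag - z_frag) - (89 - 39)
```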


    Element analysis versus isotope analysis
    There is a danger of only doing elemental analysis: an element can be transported to a different place in the sample by chemical reactions. Element ratios could then be changed without any nuclear reactions. Isotope analysis would be much better. There could be effects also in this case, but they will be very small since isotopes of a given element have very similar properties.


    Alpha production
    Why do you talk about production of alphas when you are discussing fission? Alpha decay is not considered to be fission in nuclear physics.

  • Hi Peter,


    If you fission a heavy nucleus both fragments will end up away from the line of stability - they will have the N/Z (number of neutrons/number of protons) ratio of the parent which is too high. The fragments will thus necessarily be radioactive and decay by beta-minus decay (exactly as actinide fission).


    There are two things that are different in this case — (1) I'm looking at the question of heavy screening of nuclei lighter than the actinides and seeing what kind of activity might be expected (if you really crank up the electron screening); and (2) we're supposing spontaneous fission rather than fission following upon neutron capture. Although the fragments may fall away from the line of stability, there are many branches that lead to stable daughters, so much will depend upon what the predicted rates look like.


    There is a danger of only doing elemental analysis


    I'm not doing elemental analysis, per se. I'm attempting to do modeling of decay rates of isotopes of various elements under electron screening, and this is why I've been talking about elements such as "platinum" and "palladium". Really I'm talking about isotopes of these elements.


    Why do you talk about production of alphas when you are discussing fission? Alpha decay is not considered to be fission in nuclear physics.


    Alpha decay arises by the same mechanism as spontaneous fission (tunneling through the Coulomb barrier), so it's natural to deal with both. Alpha decay is being used to look at the helium in LENR helium/heat experiments, and fission is being used to look at the heat. As Hermes suggests, perhaps it's all very unpromising!

  • There are two things that are different in this case — (1) I'm looking at the question of heavy screening of nuclei lighter than the actinides and seeing what kind of activity might be expected (if you really crank up the electron screening); and (2) we're supposing spontaneous fission rather than fission following upon neutron capture. Although the fragments may fall away from the line of stability, there are many branches that lead to stable daughters, so much will depend upon what the predicted rates look like.


    The line of stable nuclei is bent all the way down to light nuclei (where fission is endothermic), so fission will always yield neutron rich nuclei. Even if some end up in stable nuclei, most will be radioactive with easily detected radiation. No radiation, no fission! And how does fission help you to explain reactions with Ni?


    I'm not doing elemental analysis, per se. I'm attempting to do modeling of decay rates of isotopes of various elements under electron screening, and this is why I've been talking about elements such as "platinum" and "palladium". Really I'm talking about isotopes of these elements.


    There are experimental papers where only elemental analysis was performed. What I meant was that these results may be unreliable.


    Alpha decay arises by the same mechanism as spontaneous fission (tunneling through the Coulomb barrier), so it's natural to deal with both. Alpha decay is being used to look at the helium in LENR helium/heat experiments, and fission is being used to look at the heat. As Hermes suggests, perhaps it's all very unpromising!


    Both alpha decay and fission are more complex than just barrier penetration! I agree with Hermes that fission is probably not the explanation for LENR.

  • The line of stable nuclei is bent all the way down to light nuclei (where fission is endothermic), so fission will always yield neutron rich nuclei. Even if some end up in stable nuclei, most will be radioactive with easily detected radiation.


    This is reasoning from first principles, and it deserves a proper analysis. :)


    And how does fission help you to explain reactions with Ni?


    I don't imagine fission and alpha decay explain Ni LENR. My suspicion for Ni: once you get down into the medium and light nuclei, any heat goes back to a combination of induced electron capture/beta decay and things going on with heavier impurities.


    There are experimental papers where only elemental analysis was performed. What I meant was that these results may be unreliable.


    Ah, yes. I got the impression somewhere that elemental analysis is cheaper. But papers that rely on it alone are almost useless for this kind of investigation.


    Both alpha decay and fission are more complex than just barrier penetration!


    Not alpha decay and fission — spontaneous alpha decay and spontaneous fission. In light of this clarification, can you elaborate on what you have in mind? Obviously when heavy nuclei are left in excited states for one reason or another you get both of these processes. But those are not cases I'm considering right now. Considerations relating to spin are also important, and I'm not worrying about them right now, but I don't think they'll be sufficient to change the general direction of any analysis.
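    For reference, the barrier-penetration factor both sides of this exchange refer to can be sketched in a few lines. This is the standard textbook thick-barrier Gamow penetrability for alpha emission (point charges, sharp-radius formula R = 1.2·A^(1/3) fm); the nuclide and Q value below are illustrative, not anything computed in this thread:

```python
import math

HBARC = 197.327   # hbar*c in MeV*fm
E2 = 1.44         # e^2/(4*pi*eps0) in MeV*fm
AMU = 931.494     # atomic mass unit in MeV

def alpha_penetrability(z_daughter, a_daughter, q_mev):
    """Thick-barrier Gamow penetrability exp(-2G) for alpha emission."""
    mu = 4.0 * a_daughter / (4.0 + a_daughter) * AMU            # reduced mass (MeV/c^2)
    b = 2.0 * z_daughter * E2 / q_mev                           # outer turning point (fm)
    r = 1.2 * (a_daughter ** (1.0 / 3.0) + 4.0 ** (1.0 / 3.0))  # touching radius (fm)
    x = r / b
    two_g = (2.0 * b * math.sqrt(2.0 * mu * q_mev) / HBARC) * (
        math.acos(math.sqrt(x)) - math.sqrt(x * (1.0 - x)))
    return math.exp(-two_g)

# 190Pt -> 186Os + alpha (Q ~ 3.25 MeV), a known, extremely slow natural decay
p = alpha_penetrability(76, 186, 3.25)
print(p)  # a very small penetrability, consistent with a ~1e11-year half-life
```

    The point of the sketch: even modest changes to the effective barrier (which is what heavy electron screening would do) move exp(-2G) by many orders of magnitude, which is why the predicted rates dominate any such analysis.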

  • Miles never explains how he transitions from the #He atoms/Watt to #He atoms/Watt-sec, thus we don’t know exactly what this time represents, but one would think it is the duration of the excess heat event. I can’t think of anything else it might rationally be, but maybe I just missed something here. In his papers, Miles does say he typically left the flask in line for up to 2 days (8.64 x10^4 sec) and that the time to flow 500cc of exiting gas through the flask under ‘nominal’ conditions is ~4140 (or maybe it was 4410) sec, and those two numbers bracket the computed times, so maybe we are right assuming they are the amount of time that excess power signals were observed. If so, it is interesting to look at a plot of the #He atoms produced vs. that time. What we see is a decreasing # as time increases. A linear regression gives y = -2.714e+9*t+1.316e+14 with an R^2 value of 0.750 (meaning the R = 0.866). But even better is the exponential fit of y= 1.417e+14 * exp( 3.132e-05 * t) with an R^2 of 0.846 (=> R~0.92) (y = # He atoms in 500cc flask in both equations). These correlation coefficients are high enough that one could start believing them!


    This was a long post by Kirk. I have answered in detail on newvortex.
    https://groups.yahoo.com/neo/g…onversations/messages/815


    Previously, Kirk looked for a correlation in the scatter of measurements of what may be a constant, so, of course, he found none. I calculated a correlation coefficient for the full Miles data from Storms; combined with the six control experiments, the coefficient was 0.89. And that is a small part of the full heat/helium data, which is work that has been done in many labs. And yes, it can be improved, that's obvious.


    Here, he misread the papers. The time is 4440 seconds (I provide sources) and this was a constant. The time was not related to the excess power measurements, though he did take samples while he was seeing excess power (which, in this work, was relatively rare). Miles is not completely explicit in the papers, but one does derive more from studying them. All the work assumes that there is electrolysis under way, and that the collection time is the time for 500 ml of gas to be evolved; he gives the time as 4440 seconds and uses that time in energy calculations. So he is later comparing energy (as average power for the period of collection) with atoms of helium (as found by the lab, working blind, not knowing the power release history).


    Then in the measurements in question, he subtracted the background level of 0.51 x 10^14 atoms per 500 cc.
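    The arithmetic described in the last two paragraphs reduces to a few lines. In this sketch, only the 4440-second collection time and the 0.51 x 10^14-atom background come from the discussion itself; the measured atom count and average excess power are placeholders, and the comparison figure assumes the standard D+D→4He value of 23.85 MeV per helium atom:

```python
E_PER_HE_J = 23.85e6 * 1.602176634e-19   # joules per 4He atom if D+D -> 4He

def atoms_per_joule(measured_atoms, avg_power_w,
                    background_atoms=0.51e14, collect_time_s=4440.0):
    """Net helium atoms per joule of excess energy over the collection period."""
    net_atoms = measured_atoms - background_atoms   # subtract the 0.51e14 background
    energy_j = avg_power_w * collect_time_s         # average power x 4440 s
    return net_atoms / energy_j

# Placeholder inputs, for illustration only (not Miles's actual numbers):
ratio = atoms_per_joule(measured_atoms=1.0e14, avg_power_w=0.1)
theory = 1.0 / E_PER_HE_J   # ~2.6e11 atoms/J expected for pure D+D -> 4He
print(ratio, theory)
```

    The interest of the heat/helium correlation is precisely how the measured ratio compares with that ~2.6 x 10^11 atoms/J benchmark.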


    The time is dependent only on electrolysis power and gas evolution, and this is after the cell has stabilized, I'm sure, so that all generated gas is being evolved. Miles was measuring gas evolution with the bubbler. As to long periods, he would, I think, run the cell for a relatively long time, and then, seeing XP, take a sample. That way the atmosphere in the flask would be what was recently evolved. It's a bit sloppy, but ... he had little funding by that time and did what he could. Much more precise work was done later.


    (I am not sure whether he held the time and current constant or held the collected volume constant (i.e., evolved gas as shown by the bubbler), but they would be close to each other. A major difference would have been a serious problem.)


    Kirk has been correlating his errors, it seems. I'll look again if Kirk thinks I missed something.

  • Quote from Abd Ul-Rahman Lomax: “My source confirmed that I may release the file:
    xa.yimg.com/df/newvortex/analy…0JsCucYkWtg&type=download
    If that doesn't work, it is in the filespace for newvortex. groups.yahoo.com/neo/groups/newvortex/files”
    I…

    The filespace for newvortex should work for anyone who has a yahoo account and who joins the mailing list, which is free. Others later took the file from my space and posted it to E-Catworld. The newvortex filespace also has all the significant Rossi v. Darden files, in one place. You will need "full featured access," i.e., have a yahoo account and be logged in. Go to https://groups.yahoo.com/neo/groups/newvortex/info and join the list. If you don't want to receive list mail -- including notifications of files -- set your subscription to "special messages," which would still allow the list moderators to send a special message to all members. We haven't done it in many years....


    Your subscription should be immediate, unless yahoo requires you to respond to a confirmation mail. I forget, but moderators do not have to approve subscriptions.

  • Your review paper is an example of pre-packaged answers. It contains no discussion of possible artifacts, no discussion of alternative models to explain helium production other than deuterium fusion. In other words, it presents an unbalanced view where reality is only portrayed in the extremes of black and white.

    Perfect. Thanks. That paper was a polemic. The initial goal was to convince the physicist who was reviewing it. He was originally very negative, so I rewrote it. It worked.


    This particular topic is quite mature, and it's time for action, not more words. I'm perfectly familiar with, and often write in, the academic style that considers all the exceptions and possible problems; the result is long papers that will not be read by most people. That paper was a call to action, and that is working, too. The suggested research is being done.


    I have discussed possible artifacts with heat/helium ad nauseam. If someone wants to do this, join newvortex and raise the issues. I have a direct line to the people doing the research, and if you actually come up with something not already under consideration, you might actually make a difference.


    Or you might learn something. Or both.

  • You can use the code I published earlier to calculate the Gamow factor for every possible fission and you will get some nice asymmetric "fission yield" curves.


    Once I have all of the theoretical fission branches for a given parent nuclide, the Gamow factors and the activities for each branch, how do I aggregate activities in order to calculate the aggregate power for, e.g., a mole of 190Pt? To get the power output, do I simply sum up the activities of the individual fission branches multiplied by their Q values, or is there a further normalization step that is required across the branches before the output for each branch can be summed together?