Document: Isotopic Composition of Rossi Fuel Sample (Unverified)

  • As a next step I will take a look at the Gamow factor applied to the fission daughters and see where that goes.


    The only problem is that the fission energy for Pt is around 100 MeV or so (compared with 200 MeV for uranium).


    Perhaps palladium will see higher fission activity than platinum, then (see the second list in this link). I'll know more once I add the Gamow factor to fission reactions.
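
    Roughly, the kind of calculation I have in mind looks like this (a minimal sketch only: the symmetric 190Pt split, the ~100 MeV Q value, and the constant-shift treatment of screening are all illustrative assumptions, not results):

    ```python
    import numpy as np

    HBAR_C = 197.3269804   # hbar*c in MeV*fm
    E2_K = 1.439964548     # e^2/(4*pi*eps0) in MeV*fm
    AMU = 931.49410242     # atomic mass unit in MeV/c^2

    def gamow_factor(z1, a1, z2, a2, q_mev, screening_mev=0.0):
        """WKB Gamow factor for two charged fragments tunneling apart.
        Screening is modeled crudely as a constant downward shift of the
        Coulomb barrier -- an exploratory assumption, not settled physics."""
        mu = AMU * a1 * a2 / (a1 + a2)                 # reduced mass, MeV/c^2
        r0 = 1.2 * (a1 ** (1/3) + a2 ** (1/3))         # contact radius, fm
        rc = z1 * z2 * E2_K / (q_mev + screening_mev)  # classical turning point, fm
        if rc <= r0:
            return 0.0                                 # barrier fully suppressed
        r = np.linspace(r0, rc, 4000)
        v = z1 * z2 * E2_K / r - screening_mev         # shifted barrier, MeV
        integrand = np.sqrt(np.clip(2 * mu * (v - q_mev), 0.0, None))
        return np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(r)) / HBAR_C

    # Illustrative only: symmetric split of 190Pt into two Z=39, A=95 fragments,
    # with Q taken as ~100 MeV per the estimate above (not a looked-up value)
    print(gamow_factor(39, 95, 39, 95, q_mev=100.0))
    print(gamow_factor(39, 95, 39, 95, q_mev=100.0, screening_mev=20.0))
    ```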


    The next good point you allude to is that if Pt fission does occur, then stable products are most likely. That's unexpected! On the other hand, there will likely be less frequent channels which do create radioactive daughters. So I think the model is wrong, but I do like your thinking.


    Yes — it was an edited list of platinum and palladium fission reactions leading to stable products rather than a model. A prerequisite of a model would be something that says how likely a particular branch is. With your additional advice I can start to make some models. How unrealistic the general approach is will depend upon the activity that is predicted from the unstable branches.

  • Perhaps palladium will see higher fission activity than platinum, then (see the second list in this link). I'll know more once I add the Gamow factor to fission reactions.


    Alas no. Palladium is much less fissionable than platinum. If you like, I can give you a list of all fission products with Gamow factors for any isotope you care to name. IMHO fission is too slow, too crude to account for LENR. We need something more probable yet more delicate. I think you know what I have in mind! But let's discuss that privately.

  • @Eric & Hermes
    Two comments and a question:


    N/Z ratio and stability
    If you fission a heavy nucleus, both fragments will end up away from the line of stability - they will have the N/Z (number of neutrons/number of protons) ratio of the parent, which is too high. The fragments will thus necessarily be radioactive and decay by beta-minus decay (exactly as in actinide fission).


    Element analysis versus isotope analysis
    There is a danger of only doing elemental analysis: an element can be transported to a different place in the sample by chemical reactions. Element ratios could then change without any nuclear reactions. Isotope analysis would be much better. There could be such effects in this case too, but they would be very small, since isotopes of a given element have very similar properties.


    Alpha production
    Why do you talk about production of alphas when you are discussing fission? Alpha decay is not considered to be fission in nuclear physics.

  • Hi Peter,


    If you fission a heavy nucleus, both fragments will end up away from the line of stability - they will have the N/Z (number of neutrons/number of protons) ratio of the parent, which is too high. The fragments will thus necessarily be radioactive and decay by beta-minus decay (exactly as in actinide fission).


    There are two things that are different in this case — (1) I'm looking at the question of heavy screening of nuclei lighter than the actinides and seeing what kind of activity might be expected (if you really crank up the electron screening); and (2) we're supposing spontaneous fission rather than fission following upon neutron capture. Although the fragments may fall away from the line of stability, there are many branches that lead to stable daughters, so much will depend upon what the predicted rates look like.


    There is a danger of only doing elemental analysis


    I'm not doing elemental analysis, per se. I'm attempting to do modeling of decay rates of isotopes of various elements under electron screening, and this is why I've been talking about elements such as "platinum" and "palladium". Really I'm talking about isotopes of these elements.


    Why do you talk about production of alphas when you are discussing fission? Alpha decay is not considered to be fission in nuclear physics.


    Alpha decay arises by the same mechanism as spontaneous fission (tunneling through the Coulomb barrier), so it's natural to deal with both. Alpha decay is being used to look at the helium in LENR helium/heat experiments, and fission is being used to look at the heat. As Hermes suggests, perhaps it's all very unpromising!
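
    For reference, the shared mechanism is the standard WKB barrier-penetration probability; between an escaping alpha and a pair of fission fragments, only the reduced mass and the charge product change:

    $$P \approx e^{-2G}, \qquad G = \frac{1}{\hbar} \int_{R_0}^{R_c} \sqrt{2\mu \left( \frac{Z_1 Z_2 e^2}{4\pi\varepsilon_0 r} - Q \right)}\, dr,$$

    where $\mu$ is the reduced mass of the two fragments, $R_0$ is the contact radius, and $R_c$ is the classical turning point at which the Coulomb potential falls to the decay energy $Q$.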

  • There are two things that are different in this case — (1) I'm looking at the question of heavy screening of nuclei lighter than the actinides and seeing what kind of activity might be expected (if you really crank up the electron screening); and (2) we're supposing spontaneous fission rather than fission following upon neutron capture. Although the fragments may fall away from the line of stability, there are many branches that lead to stable daughters, so much will depend upon what the predicted rates look like.


    The line of stable nuclei is bent all the way down to light nuclei (where fission is endothermic), so fission will always yield neutron rich nuclei. Even if some end up in stable nuclei, most will be radioactive with easily detected radiation. No radiation, no fission! And how does fission help you to explain reactions with Ni?


    I'm not doing elemental analysis, per se. I'm attempting to do modeling of decay rates of isotopes of various elements under electron screening, and this is why I've been talking about elements such as "platinum" and "palladium". Really I'm talking about isotopes of these elements.


    There are experimental papers where only elemental analysis was performed. What I meant was that these results may be unreliable.


    Alpha decay arises by the same mechanism as spontaneous fission (tunneling through the Coulomb barrier), so it's natural to deal with both. Alpha decay is being used to look at the helium in LENR helium/heat experiments, and fission is being used to look at the heat. As Hermes suggests, perhaps it's all very unpromising!


    Both alpha decay and fission are more complex than just barrier penetration! I agree with Hermes that fission is probably not the explanation for LENR.

  • The line of stable nuclei is bent all the way down to light nuclei (where fission is endothermic), so fission will always yield neutron rich nuclei. Even if some end up in stable nuclei, most will be radioactive with easily detected radiation.


    This is reasoning from first principles, and it deserves a proper analysis. :)


    And how does fission help you to explain reactions with Ni?


    I don't imagine fission and alpha decay explain Ni LENR. My suspicion for Ni: once you get down into the medium and light nuclei, any heat goes back to a combination of induced electron capture/beta decay and things going on with heavier impurities.


    There are experimental papers where only elemental analysis was performed. What I meant was that these results may be unreliable.


    Ah, yes. I got the impression somewhere that elemental analysis is cheaper. But papers that rely on it alone are almost useless for this kind of investigation.


    Both alpha decay and fission are more complex than just barrier penetration!


    Not alpha decay and fission — spontaneous alpha decay and spontaneous fission. In light of this clarification, can you elaborate on what you have in mind? Obviously when heavy nuclei are left in excited states for one reason or another you get both of these processes. But those are not cases I'm considering right now. Considerations going back to spin are also important, and I'm not worrying about them right now, but I don't think they'll be sufficient to change the general direction of any analysis.

  • Miles never explains how he transitions from the #He atoms/Watt to #He atoms/Watt-sec; thus we don't know exactly what this time represents, but one would think it is the duration of the excess heat event. I can't think of anything else it might rationally be, but maybe I just missed something here. In his papers, Miles does say he typically left the flask in line for up to 2 days (8.64 x 10^4 sec) and that the time to flow 500cc of exiting gas through the flask under 'nominal' conditions is ~4140 (or maybe it was 4410) sec, and those two numbers bracket the computed times, so maybe we are right assuming they are the amount of time that excess power signals were observed.


    If so, it is interesting to look at a plot of the #He atoms produced vs. that time. What we see is a decreasing # as time increases. A linear regression gives y = -2.714e+9*t + 1.316e+14 with an R^2 value of 0.750 (meaning R = 0.866). But even better is the exponential fit of y = 1.417e+14 * exp(-3.132e-05 * t), with an R^2 of 0.846 (=> R ~ 0.92) (y = # He atoms in the 500cc flask in both equations). These correlation coefficients are high enough that one could start believing them!
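
    For anyone who wants to check fits of this kind, the procedure is simple enough (a sketch with placeholder arrays; the times and counts actually derived from Miles's tables would go in t and atoms -- these numbers are not his data):

    ```python
    import numpy as np

    # Placeholder arrays: substitute the per-sample times and He counts
    # actually derived from Miles's tables (these numbers are NOT his data)
    t = np.array([4.4e3, 2.0e4, 4.0e4, 6.0e4, 8.6e4])                # seconds
    atoms = np.array([1.30e14, 1.18e14, 1.02e14, 0.91e14, 0.80e14])  # He atoms/500cc

    # Linear fit: atoms = a*t + b
    a, b = np.polyfit(t, atoms, 1)
    r_lin = np.corrcoef(t, atoms)[0, 1]

    # Exponential fit via a straight line in log space: ln(atoms) = k*t + ln(A)
    k, ln_A = np.polyfit(t, np.log(atoms), 1)
    r_exp = np.corrcoef(t, np.log(atoms))[0, 1]

    print(f"linear:      y = {a:.3e}*t + {b:.3e}  (R^2 = {r_lin**2:.3f})")
    print(f"exponential: y = {np.exp(ln_A):.3e}*exp({k:.3e}*t)  (R^2 = {r_exp**2:.3f})")
    ```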


    This was a long post by Kirk. I have answered in detail on newvortex.
    https://groups.yahoo.com/neo/g…onversations/messages/815


    Previously, Kirk looked for a correlation in the scatter of measurements of what may be a constant, so, of course, he found none. I calculated a correlation coefficient for the full Miles data from Storms, combined with the six control experiments; the coefficient was 0.89. And that is a small part of the full heat/helium data, which is work that has been done in many labs. And yes, it can be improved, that's obvious.


    Here, he misread the papers. The time is 4440 seconds (I provide sources) and this was a constant. The time was not related to the excess power measurements, though he did take samples where he was seeing excess power (which, in this work, was relatively rare). Miles is not completely explicit in the papers, but one does derive more from studying them. All the work assumes that there is electrolysis under way, and that the collection time is the time for 500 ml of gas to be evolved; he gives the time as 4440 seconds and uses that time in energy calculations. So he is later comparing energy (as average power for the period of collection) with atoms of helium (as found by the lab, working blind, not knowing the power release history).


    Then in the measurements in question, he subtracted the background level of 0.51 x 10^14 atoms per 500 cc.


    The time is dependent only on electrolysis power and gas evolution, and this is after the cell has stabilized, I'm sure, so that all generated gas is being evolved. Miles was measuring gas evolution with the bubbler. As to long periods, he would, I think, run the cell for a relatively long time, and then, seeing XP, take a sample. That way the atmosphere in the flask would be what was recently evolved. It's a bit sloppy, but ... he had little funding by that time and did what he could. Much more precise work was done later.


    (I am not sure whether he held the time and current constant or held the collected volume constant (i.e., evolved gas as shown by the bubbler), but they would be close to each other. A major difference would have been a serious problem.)


    Kirk has been correlating his errors, it seems. I'll look again if Kirk thinks I missed something.

  • Quote from Abd Ul-Rahman Lomax: “My source confirmed that I may release the file:
    xa.yimg.com/df/newvortex/analy…0JsCucYkWtg&type=download
    If that doesn't work, it is in the filespace for newvortex. groups.yahoo.com/neo/groups/newvortex/files”
    I…

    The filespace for newvortex should work for anyone who has a yahoo account and who joins the mailing list, which is free. Others later took the file from my space and posted it to E-Catworld. The newvortex filespace also has all the significant Rossi v. Darden files, in one place. You will need "full featured access," i.e., have a yahoo account and be logged in. Go to https://groups.yahoo.com/neo/groups/newvortex/info and join the list. If you don't want to receive list mail -- including notifications of files -- set your subscription to "special messages," which would still allow the list moderators to send a special message to all members. We haven't done it in many years....


    Your subscription should be immediate, unless yahoo requires you to respond to a confirmation mail. I forget, but moderators do not have to approve subscriptions.

  • Your review paper is an example of pre-packaged answers. It contains no discussion of possible artifacts, no discussion of alternative models to explain helium production other than deuterium fusion. In other words, it presents an unbalanced view where reality is only portrayed in the extremes of black and white.

    Perfect. Thanks. That paper was a polemic. The initial goal was to convince the physicist who was reviewing it. It worked. He was originally very negative, so I rewrote it.


    This particular topic is quite mature, and it's time for action, not more words. I'm perfectly familiar with, and often write in, the academic style that considers all the exceptions and possible problems, and the result of that is long papers that will not be read by most people. That paper was a call to action, and that is working, too. The suggested research is being done.


    I have discussed possible artifacts with heat/helium ad nauseam. If someone wants to do this, join newvortex and raise the issues. I have a direct line to the people doing the research, and if you actually come up with something not already under consideration, you might actually make a difference.


    Or you might learn something. Or both.

  • You can use the code I published earlier to calculate the Gamow factor for every possible fission and you will get some nice asymmetric "fission yield" curves.


    Once I have all of the theoretical fission branches for a given parent nuclide, the Gamow factors and the activities for each branch, how do I aggregate activities in order to calculate the aggregate power for, e.g., a mole of 190Pt? To get the power output, do I simply sum up the activities of the individual fission branches multiplied by their Q values, or is there a further normalization step that is required across the branches before the output for each branch can be summed together?
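
    To make the question concrete, the naive aggregation would look like the following (a sketch only: the branch decay constants and Q values are placeholders, and whether partial activities can simply be summed this way is exactly what I'm asking):

    ```python
    # Naive aggregation: treat each fission branch as an independent partial
    # decay mode, so decay constants add and each branch contributes
    # activity_i * Q_i to the power. Branch data here are placeholders.
    N_A = 6.02214076e23       # nuclei in one mole of 190Pt
    MEV_TO_J = 1.602176634e-13

    branches = [              # (decay constant [1/s], Q [MeV]) -- hypothetical values
        (1.0e-30, 100.0),
        (5.0e-32,  98.5),
        (2.0e-33, 101.2),
    ]

    power_watts = sum(N_A * lam * q * MEV_TO_J for lam, q in branches)
    print(f"aggregate power for 1 mol of 190Pt: {power_watts:.3e} W")
    ```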

  • @Eric Walker. The Gamow factors can only give a relative rate, not an absolute one. Missing are the probabilities that fission daughters materialize. The code I gave was for alpha emission, but you can imagine that more equal fragments are more probable so possibly some binomial factor is required. It would make a good paper....
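
    For instance, one crude realization of such a weighting (a sketch only; treating the proton split as a fair coin toss per proton is an arbitrary assumption):

    ```python
    from math import comb

    def split_weight(z_parent, z1):
        """Crude binomial weighting: each of the parent's Z protons is assigned
        to either fragment with probability 1/2, so near-symmetric splits
        dominate. The fair-coin assumption is arbitrary."""
        return comb(z_parent, z1) * 0.5 ** z_parent

    # For Z = 78 (platinum): symmetric split vs. an alpha-like split
    print(split_weight(78, 39))   # ~0.09, the most probable split
    print(split_weight(78, 2))    # vanishingly small
    ```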

  • I don't imagine fission and alpha decay explain Ni LENR. My suspicion for Ni: once you get down into the medium and light nuclei, any heat goes back to a combination of induced electron capture/beta decay and things going on with heavier impurities.


    Somebody linked an old Russian paper, which focuses on Ti and the induced vanishing of Ti48! (Ti48 transmutes into a chain of other elements.)


    https://arxiv.org/ftp/physics/papers/0101/0101089.pdf


    and its follow-up paper: http://lenr-canr.org/acrobat/LochakGlowenergyn.pdf

    I believe this would be a far better starting point for calculations, as we know that the experiment is highly reproducible!

  • I have answered in detail on newvortex.


    I don't do 'newvortex'. It's poor form to answer a post here with a post elsewhere.


    I calculated a correlation coefficient for the full Miles data from Storms, combined with the six control experiments; the coefficient was 0.89.


    How does one do that? The Fig. 47 data are for He atoms produced by 'LENR', which is alternatively evidenced by 'excess heat'. If there is no excess heat, there supposedly is no LENR, and there would supposedly be no He. The figure, though, is trying to illustrate that the amount of He produced is consistent with what one would expect from the nuclear energy released in the putative LENR. Folding the controls' data into the plot is an 'apples and oranges' situation. All it really does is bias the data set numerically with more extreme values ('flyers', 'outliers'), and as I noted, the data set is already strongly impacted by that problem. Adding more of that in just hurts; it doesn't help. Of course the correlation coefficient will 'improve'. You're taking a data set that lies as a group far away from the (0,0) point, and adding several points in at (0,0). That will force a line through basically the center of mass of the original data (its average) and (0,0).


    (By the way, it seems that you have taken a data set that 'shows' (not really) that various levels of excess heat all lead to a consistent energy per He atom, as shown by the near-random correlation coefficient, and turned it into one that proves the opposite (which is what a correlation coefficient of 0.89 implies). Is that what you intended?)


    To answer my own question, you do it by 'just doing it'. You just chuck the new points into your calculator and churn out a correlation coefficient, with no regard to whether or not combining the data in one plot is legitimate. The numerics you end up with make no sense, however, as would be expected from such a bogus procedure.
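
    The effect is easy to demonstrate with synthetic numbers (a sketch; these are random points, not the Miles data):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # A cluster of points far from the origin with no built-in correlation
    x = 20 + rng.normal(0, 1, 12)
    y = 10 + rng.normal(0, 1, 12)
    print(np.corrcoef(x, y)[0, 1])    # small: the cluster itself is uncorrelated

    # Append six (0, 0) points, as with the control experiments
    x2 = np.append(x, np.zeros(6))
    y2 = np.append(y, np.zeros(6))
    print(np.corrcoef(x2, y2)[0, 1])  # jumps toward 1: the line is forced
                                      # through the cluster's centroid and (0,0)
    ```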


    The time is 4440 seconds (I provide sources) and this was a constant.


    I noted this number in my post. It is the nominal time to produce 500cc of electrolysis gases, which is computed from the current flow. But the experimental runs were not all at the same current flows; they were varied, sometimes within the same run. I assume the idea is to correlate He production with 'LENR' activity as measured by apparent excess heat. That makes sense, right? So it doesn't make sense to collect gases when no excess heat is showing; you should see only background. Further, isn't the idea that as the 'LENR' proceeds, more He will be produced, perhaps in proportion to the total activity as measured by apparent excess heat? If yes, then you at least want to take the sample during the event; otherwise the He will be flushed out of the sample cell when the 'excess heat' quits.


    Furthermore, if the 'time' is fixed at 4440 seconds, then there is no apparent connection between the two numbers in the data table (He atoms/500cc and He atoms/W-sec). That simply makes no sense. No, the time I computed and posted as the last column in the data table is somehow related to the magnitude of 'LENR' activity. But as I noted, Miles never explains this.


    I believe my procedure was correct. Take Miles' He atoms/Watt/second, divide by the given He atoms/Watt to get 1/seconds, and reciprocate to get the unidentified time in seconds. And plotting the He atoms produced as a function of this time gives a plot that looks like a spike dilution process.
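
    In symbols, the recovered time is just the ratio of the two tabulated quantities:

    $$t\ \mathrm{[s]} = \left( \frac{\text{He atoms/Watt-sec}}{\text{He atoms/Watt}} \right)^{-1} = \frac{\text{He atoms/Watt}}{\text{He atoms/Watt-sec}}$$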


    The time is dependent only on electrolysis power and gas evolution, and this is after the cell has stabilized, I'm sure,


    Exactly, which varies from run to run, so a fixed 4440 is not correct. Your "I'm sure" indicates to me that you're interpolating from what Miles said, and not quoting his actual words. In other words, like me, you're guessing at what Miles did. That's one of my underlying objections to most CF research reports so far: insufficient information is given, forcing readers to guess at what was done, usually when what is being guessed at is very important, as in this case.

  • I don't do 'newvortex'. It's poor form to answer a post here with a post elsewhere.

    Take a flying leap, then. You can read the response there, and you can quote it here. But very long posts are disliked here by some, and this is a highly detailed issue.

  • Eric Walker. The Gamow factors can only give a relative rate, not an absolute one. Missing are the probabilities that fission daughters materialize. The code I gave was for alpha emission, but you can imagine that more equal fragments are more probable so possibly some binomial factor is required. It would make a good paper....


    Yes, indeed. In the model above for the Miles experiment I also incorporated the code from this HyperPhysics page, where Rod Nave obtains an actual decay constant and half-life using the relation P = exp(-2G), where G is his calculation of the Gamow factor. (So now I have both your code and this code living side by side, used in different printouts, but I haven't compared the factors yet to see if they're equivalent. I suspect they are not exactly the same.) Nave assumes an alpha particle knocking around inside the nucleus trying to escape, calculates its velocity and the barrier assault frequency, and then uses the tunneling probability to get the decay constant.
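
    Here is roughly what that side of the calculation looks like (my own restatement of the HyperPhysics approach, not Nave's actual code; the screening shift of the effective Q is my added assumption, and the numbers are order-of-magnitude at best):

    ```python
    import numpy as np

    HBAR_C = 197.3269804    # hbar*c in MeV*fm
    E2_K = 1.439964548      # e^2/(4*pi*eps0) in MeV*fm
    M_ALPHA = 3727.379      # alpha mass in MeV/c^2
    C_FM = 2.99792458e23    # speed of light in fm/s

    def alpha_decay_constant(z_parent, a_parent, q_mev, screening_mev=0.0):
        """lambda = (assault frequency) * exp(-2G), in the HyperPhysics picture
        of an alpha rattling inside the nucleus. The screening shift of the
        effective Q is my own added assumption."""
        z_d = z_parent - 2                                 # daughter charge
        R = 1.2 * ((a_parent - 4) ** (1/3) + 4 ** (1/3))   # crude nuclear radius, fm
        v = C_FM * np.sqrt(2 * q_mev / M_ALPHA)            # alpha speed in the well, fm/s
        f = v / (2 * R)                                    # barrier assault frequency, 1/s
        q_eff = q_mev + screening_mev
        rc = 2 * z_d * E2_K / q_eff                        # classical turning point, fm
        r = np.linspace(R, rc, 4000)
        integrand = np.sqrt(np.clip(2 * M_ALPHA * (2 * z_d * E2_K / r - q_eff), 0.0, None))
        G = np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(r)) / HBAR_C
        return f * np.exp(-2 * G)

    # Example: 212Po, Q ~ 8.95 MeV; half-life = ln 2 / lambda
    lam = alpha_decay_constant(84, 212, 8.95)
    print(np.log(2) / lam)   # should land within a few orders of 212Po's ~0.3 us
    ```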


    When I looked into codes used for calculating fission activity (e.g., FREYA), they all seemed either to rely on databases of existing experimental measurements or, where those data were not available, to interpolate from existing measurements. I could not find screening as a parameter, so something more ab initio would seem to be needed, but perhaps I did not look hard enough. So I might need to do something along the lines you suggest, which I will need to look into more before I fully understand. Do you have a guess as to what incorporating a binomial factor might look like? How would you validate that sensible answers are being obtained?


    Btw, your calculation of the Gamow factor goes to zero for a large number of palladium fission branches when approaching an electron screening of 20, which was what was needed to make Nave's code yield the alpha production seen in Miles's experiment (given one or two arbitrary assumptions about the amount of platinum available). You can adjust the screening down quite a bit and still get promising factors. So if there's something about anisotropy that implies that this much screening is not actually needed to obtain a given activity, there's hope. There were a LOT of branches that give rise to beta activity, but (1) a (very) cursory glance suggested that the half-lives were short and (2) one of my working assumptions is that weak interactions are somehow accelerated by the same mechanism of a superabundance of electrons. So perhaps those very short half-lives go down to nothing for the most part.

  • Abd Ul-Rahman Lomax wrote:


    How does one do that? The Fig. 47 data are for He atoms produced by 'LENR', which is alternatively evidenced by 'excess heat'. If there is no excess heat, there supposedly is no LENR, and there would supposedly be no He.

    Kirk, you have "LENR" on the brain. The goal of the experiment is to measure anomalous heat (the "Fleischmann-Pons Heat Effect") and helium in the electrolytic outgas. So there are excess heat measurements and helium measurements. That Miles reports the XP results in one table with no-XP results in another table does not change that these are all experiments in the same series, data collected with the same procedures. I gave the spreadsheet data. You can use it to calculate your own correlation coefficient. Overall analysis of the Miles work and its statistical significance has always included the "null results." Which are not zero as to helium. While I'd have preferred to use actual XP measurements (which can be negative), Miles did not report them, he reported this as only "no XP."


    I have often critiqued cold fusion experiments for selecting data according to what was considered significant. It's a problem. Some scientists are more thorough, such as McKubre. Others only report "positive results," leading to suspicion of the file drawer effect. Miles did report all his results, except not this detail, of what heat measurements were considered "no heat." I just used zero in the table. In fact, had he reported the (minimal) heat, it might have (slightly) increased the correlation. Or not.


    Your approach would throw out the no-heat measurements, only looking at the correlation from the ones with XP. But that would be artificial. There is no basis for it. The goal of the work is to determine if there is a correlation between XP and helium. Your first shot at this completely missed (in the JEM letter). Now you are still missing it. There is a correlation. Get over it. It is no longer reasonable to reject this without very careful analysis, and probably without contrary experimental work. What scientists will agree on here is the appropriateness of further measurements. Someone who doesn't, who wants to continue arguing that something must be wrong, is not, in that, functioning as a scientist, but as a crackpot and fanatic.


    You get to choose where you live. You have an opportunity to be a part of the future, or to be stuck in the past. That future includes full-on, rational skepticism. We need it. That's science.

  • While I'd have preferred to use actual XP measurements (which can be negative)


    When you say "negative" I suppose you mean null. With no heat. Not endothermic. The experiments never show an endothermic reaction. Miles did use the null tests. Collecting effluent gas from experiments with no heat is very important. It is how Miles established the baseline.


    The sequence of events in these experiments is a little confusing. The gas is collected for two days, and the flow of gas fills the collection flask 40 times during that time. The excess heat from the last of these 40 time segments is recorded for the sample. (Or no heat, if there is none.) The gas flow rate is both computed and measured: "Actual measurements of the gas evolution rate by the displacement of water yielded 6.75 ± 0.25 ml min-1 for cell A and 6.69 ± 0.15 ml min-1 for cell B." Shanahan's concerns about the flow rate and volume are answered in the papers. As usual he has not read or understood what the authors say.
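
    In fact, the quoted rates are consistent with both the 4440-second figure and the "40 times" figure, as a quick check shows:

    ```python
    # Consistency check using the measured evolution rate quoted above
    rate_ml_per_min = 6.75                       # cell A
    fill_time_s = 500 / rate_ml_per_min * 60     # ~4444 s, matching the ~4440 s figure
    fills_in_two_days = 2 * 24 * 3600 / fill_time_s
    print(fill_time_s, fills_in_two_days)        # ~4444 s, ~38.9 fills (i.e. about 40)
    ```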


    You may find my summary plus the quotes from Miles easier to follow than the original papers. Starting on p. 4 here:


    http://lenr-canr.org/acrobat/RothwellJintroducti.pdf
