The church of SM physics

  • Due to the low counts per second with this system, a background spectrum takes a while to come into focus.
    This is why I was wondering about channels and bins earlier: to get an idea of how much averaging is obtained over 10 minutes. I was also wondering how a more sensitive detector might work for background subtraction, but then normalization of CPS (CPM) becomes a probable point of argument.

  • Are the features indicated by red lines examples of the sort of thing that you would identify as "grapes"? Or do you have something else in mind?

    Exactly.
    They (or at least several strong ones) should line up with the usual background gamma energies. So the spectrum can still be aligned with calibration energies (using the typical characteristic gammas) to ensure the spectrum is not distorted or drifting for some reason, even after averaged background subtraction.
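The alignment check described above amounts to a simple linear energy calibration against known background lines. A minimal sketch (the channel numbers are hypothetical; the Pb-212 and K-40 energies are standard background gammas):

```python
# Two-point linear energy calibration from known background lines.
# Channel numbers here are made up; the energies are the standard
# Pb-212 (238.6 keV) and K-40 (1460.8 keV) background gammas.
ch1, e1 = 120, 238.6
ch2, e2 = 740, 1460.8

gain = (e2 - e1) / (ch2 - ch1)   # keV per channel
offset = e1 - gain * ch1         # keV at channel 0

def channel_to_kev(ch):
    """Map a channel (bin) number to energy in keV."""
    return gain * ch + offset

# Any drift shows up as these reference peaks landing off their channels.
print(f"{channel_to_kev(740):.1f} keV")  # recovers 1460.8
```

With two (or more) such reference peaks, drift or distortion of the spectrum can be checked even after background subtraction.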

  • I was also wondering how a more sensitive detector might work for background subtraction, but then normalization of CPS (CPM) becomes a probable point of argument.

    At the time of this measurement the instrument was highly stable, as you can see from two different deltas taken about two hours apart. The classic dominant magnetic lines showed up at exactly the same position for about 8 hours.


    As said, we could calibrate during the measurement, as in LENR the magnetic lines with exact energies are dominant above background. E.g. the Pd 38.720 keV line often was 20x background...


    So discussing measurement issues is a waste of time. Physics is the name of the game.

  • Due to the low counts per second with this system, a background spectrum takes a while to come into focus.
    This is why I was wondering about channels and bins earlier: to get an idea of how much averaging is obtained over 10 minutes. I was also wondering how a more sensitive detector might work for background subtraction, but then normalization of CPS (CPM) becomes a probable point of argument.

    It is my understanding that the background spectrum Wyttenbach shows is in raw counts and is not averaged in any way. What is being reported here is supposed to be the number of counts in each bin after 10 minutes.


    When I asked Wyttenbach a while ago about the absence of the discrete lines that he claims are supposed to be visible in the background, he said that to see them you would have to sample for much longer. To my mind, this does not fit with what is shown in his Fig 2. In that background (replicated again below), very little Poisson noise is evident even though the bin counts are low. A longer sampling period would not bring out lines that are somehow hidden by noise, because there is already almost no noise.
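The noise argument can be made quantitative: for Poisson counting, a bin with mean N counts fluctuates by about sqrt(N), so at ~10 counts per bin one expects roughly 30% bin-to-bin scatter. A quick simulation (bin count and rate are illustrative, not taken from the actual data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a featureless background: 1024 bins, mean 10 counts per bin,
# roughly the level of a low-rate 10-minute acquisition.
mean_counts = 10.0
spectrum = rng.poisson(mean_counts, size=1024)

# For Poisson statistics the per-bin standard deviation is ~sqrt(mean),
# i.e. about 30% relative scatter at 10 counts per bin.
rel_scatter = spectrum.std() / spectrum.mean()
print(f"relative bin-to-bin scatter: {rel_scatter:.2f}")  # roughly 0.3
```

A genuinely low-count spectrum should therefore look visibly ragged; a smooth curve at these count levels suggests some processing has been applied.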


  • It is my understanding that the background spectrum Wyttenbach shows is in raw counts and is not averaged in any way. What is being reported here is supposed to be the number of counts in each bin after 10 minutes.


    When I asked Wyttenbach a while ago about the absence of the discrete lines that he claims are supposed to be visible in the background, he said that to see them you would have to sample for much longer. To my mind, this does not fit with what is shown in his Fig 2. In that background (replicated again below), very little Poisson noise is evident even though the bin counts are low. A longer sampling period would not bring out lines that are somehow hidden by noise, because there is already almost no noise.


    Oh, well then of course the common background characteristic lines/humps should appear.

    They should show up nearly right away, and should fairly quickly resolve into peaks, while the rest of the scatter energies will slowly fill in the remainder of the spectrum, sort of randomly.


    A background prepared for subtraction would have to be averaged or otherwise normalized; otherwise a 10-minute (for example) background cumulative count would be several times larger or smaller than the experiment count (unless it covers the same period, in which case it should nearly cancel all of the counts of the experiment spectrum, as one might generally hope).
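As a sketch of the normalization point (the function name and numbers are hypothetical), scaling the background to the sample's live time before subtracting handles unequal acquisition periods:

```python
import numpy as np

def subtract_background(sample_counts, sample_secs, bkg_counts, bkg_secs):
    """Scale the background to the sample's live time, then subtract.

    Both inputs are raw per-bin count arrays; durations are in seconds.
    Returns net counts per bin at the sample's live time.
    """
    sample = np.asarray(sample_counts, dtype=float)
    bkg = np.asarray(bkg_counts, dtype=float)
    return sample - bkg * (sample_secs / bkg_secs)

# A 10-minute sample minus a 30-minute background, scaled to match:
net = subtract_background([12, 9, 40, 11], 600, [30, 33, 27, 36], 1800)
print(net)  # [ 2. -2. 31. -1.]
```

Without the time scaling, the longer background run would simply swamp the sample counts, exactly as described above.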

  • A background prepared for subtraction would have to be averaged or otherwise normalized; otherwise a 10-minute (for example) background cumulative count would be several times larger or smaller than the experiment count (unless it covers the same period, in which case it should nearly cancel all of the counts of the experiment spectrum, as one might generally hope).

    If you want to detect a weaker signal then you should avoid false positives. Thus background averaging is a bad idea. You have to join the maxima of many backgrounds. Of course that way you will miss some signals. In a last iteration step you can still compare which peaks are less frequent across all backgrounds, and maybe you will find 2-3 more lines...
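A minimal sketch of the "join the maxima" idea as I read it (all numbers are toy values): take the per-bin maximum over several background runs as a conservative envelope, and keep only peaks that exceed it.

```python
import numpy as np

# Several 10-minute background runs, per-bin counts (toy numbers).
backgrounds = np.array([
    [11, 9, 14, 10, 8],
    [10, 12, 13, 9, 11],
    [9, 10, 16, 11, 10],
])

# Per-bin maximum over all runs: a conservative envelope that suppresses
# false positives at the cost of missing weak signals.
envelope = backgrounds.max(axis=0)

# A candidate line is kept only where the sample exceeds the envelope.
sample = np.array([11, 11, 15, 30, 10])
candidate_bins = np.where(sample > envelope)[0]
print(candidate_bins)  # only bin 3 survives
```

The trade-off is as stated: averaging gives a tighter estimate of the mean background but lets Poisson upward fluctuations through; the maximum envelope does the opposite.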

  • Oh, well then of course the common background characteristic lines/humps should appear.

    They should show up nearly right away, and should fairly quickly resolve into peaks, while the rest of the scatter energies will slowly fill in the remainder of the spectrum, sort of randomly.

    Absolutely correct in my view. This is exactly what I would expect. It isn't what Wyttenbach seems to describe, however.


    A background prepared for subtraction would have to be averaged or otherwise normalized; otherwise a 10-minute (for example) background cumulative count would be several times larger or smaller than the experiment count (unless it covers the same period, in which case it should nearly cancel all of the counts of the experiment spectrum, as one might generally hope).

    Wyttenbach and George say that all samples they analyzed were 10 minutes in duration.

  • Wyttenbach and George say that all samples they analyzed were 10 minutes in duration.

    Then in that case any two cumulative count-per-bin backgrounds should very nearly cancel each other out when one is inverted, most of the time. The ideal sampling period would be the shortest reasonable one for which any two background spectra come as close as possible to cancelling each other out completely. Maybe it's 10 minutes; maybe 30 is better with this arrangement. That would have to be experimented with a bit. Too long is a time waster; too short adds uncertainty.
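This cancellation expectation is easy to simulate: two independent equal-duration backgrounds with the same true rate subtract to a residual centered on zero, with per-bin scatter of about sqrt(2N). A sketch with illustrative numbers:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two independent 10-minute backgrounds with the same true rate.
mean_counts = 10.0
bkg_a = rng.poisson(mean_counts, size=1024)
bkg_b = rng.poisson(mean_counts, size=1024)

# Equal periods: subtracting one from the other should nearly cancel.
residual = bkg_a.astype(float) - bkg_b

# The residual mean is ~0, with per-bin scatter ~sqrt(2 * mean).
print(f"mean residual: {residual.mean():+.2f}")
print(f"residual scatter: {residual.std():.2f}")  # about sqrt(20) ~ 4.5
```

So "nearly erase" is right on average, but each bin still carries the combined Poisson noise of both runs; longer periods shrink the *relative* residual, which is the trade-off described above.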

  • It is my understanding that the background spectrum Wyttenbach shows is in raw counts and is not averaged in any way. What is being reported here is supposed to be the number of counts in each bin after 10 minutes.


    When I asked Wyttenbach a while ago about the absence of the discrete lines that he claims are supposed to be visible in the background, he said that to see them you would have to sample for much longer. To my mind, this does not fit with what is shown in his Fig 2. In that background (replicated again below), very little Poisson noise is evident even though the bin counts are low. A longer sampling period would not bring out lines that are somehow hidden by noise, because there is already almost no noise.


    If this is the number of counts per bin, and the y-axis is counts, then what accounts for the partial (non-whole-number) counts?

  • If this is the number of counts per bin, and the y-axis is counts, then what accounts for the partial (non-whole-number) counts?

    You have to inspect the Theremino code if you want to understand all the drawing details. As usual they try to smooth the curve. Some averaging is done by counting in the previous bucket and the following bucket.
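One plausible reading of this (I have not checked the Theremino source; the function below is a hypothetical sketch, not its actual algorithm) is a three-bin moving average, which would indeed turn integer raw counts into fractional display values:

```python
import numpy as np

def smooth_3bin(counts):
    """Average each bin with its two neighbours (edges repeat the end bin).

    A sketch of neighbour-bucket smoothing: integer raw counts become
    fractional display values after this step.
    """
    c = np.asarray(counts, dtype=float)
    padded = np.pad(c, 1, mode="edge")
    return (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

print(smooth_3bin([3, 5, 9, 6, 2]))  # non-integer values appear
```

Whatever the exact kernel, any such smoothing would explain the partial counts asked about above.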


    Theremino has some problems with scaling, as you can apply a log factor to the axis. In reality the delta spectrum should show much higher deltas for some peaks. As said, spectra are nice to watch to get a first impression, but for our purpose only the bucket file (the histogram) is of value.

    But how can you graphically represent a 20:1 delta in a complex spectrum?


    The only folks on this planet that have a similar data salad are CERN folks and extraterrestrial signal detection folks. It's nothing for beginners and wiki-scientists.

  • You have to inspect the Theremino code if you want to have/understand all drawing details. As usual they try to smooth the curve. Some averaging is done by counting in the previous bucket and the follow up bucket.

    No averaging is done unless you ask for it using the Theremino menu*. It doesn't really look to me as though you have used the IIR filter that Theremino offers, but I suppose it could be true. In any case, all this could be resolved if you made a new plot using what you call the bucket file or histogram. Why not do this for the background you show in Figure 2? Or you could post the actual file; it is trivial to plot it out.


    *The same is true for log scaling on the axes ... it isn't done unless you ask for it.

  • It looks like I would expect it to.
    For only about 8000 counts total it looks surprisingly like the right overall shape.

    (I’m not looking closely at anything specific.)

    Do you mean the overall shape of the background? Yes, it is what I would expect too -- except with very little noise and no discrete lines.


    I also note (as I think you too might have noticed) that the overall integrated number of counts shown is about right for roughly 10 cps over 10 minutes.
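The integrated-count check above is simple arithmetic:

```python
# Rough consistency check of the integrated counts in the figure:
cps = 10           # approximate count rate discussed above
duration_s = 600   # 10 minutes
print(cps * duration_s)  # 6000 -- same order as the ~8000 counts observed
```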

  • In the atom-ecology paper it is stated:

    "Samarium shows a very interesting behavior, that possibly could be exploited for a stable energy production device. As the starting isotopes are reproduced after 4He spallation one can assume that under ideal conditions only the 2 2H → 4He reaction runs.

    4He spallation also explains why rare earths are rare, because starting with cerium the build-up is slowed down."

    Could this tendency of spallation cancelling out build-up reactions via 2H also apply to W, and heavier actinides such as U and Th?


  • As said: we had a classic British clay-brick wall with high thorium/uranium content. This produces some discrete lines...

    Of course we also have 3-hour backgrounds. We do research, not wiki-science...


    If you measure in a clean room with 1 m of concrete around - made of selected material such as limestone, with boron, barium, some Gd and lead added - you finally end up with only internal radiation from your body, cosmic radiation from muons, and the K40 peak, which you stop with the lead shield.


    Next time we will use it together with a 6x more sensitive instrument. But in Switzerland concrete sometimes also contains thorium from granite rocks... Also, a lead shield contains some active isotopes with beta activity, which you have to shield with a copper foil.


    Thus it's all about careful calibration & measurement.

  • Could this tendency of spallation cancelling out build-up reactions via 2H also apply to W, and heavier actinides such as U and Th?

    Lead is the last stable element predicted from SO(4) physics, as all available alpha orbits are occupied. In fact all elements after cerium are less stable with added D*-D* than with a separate 4He from D*-D*. Thus adding D* to a high-Z element only works if the alpha orbit has the adequate band gap.

    I did calculate alpha waves for some lanthanides. It's pretty congruent with the available gamma energy. But I currently run many projects in parallel, with no help so far for theory, as most fools do conventional physics with no success forever...

  • As said: we had a classic British clay-brick wall with high thorium/uranium content. This produces some discrete lines...

    Of course we also have 3-hour backgrounds. We do research, not wiki-science...


    You are not describing your research well. You do not show the data you actually analyze, instead you show blurry figures that you claim are different, in a way you refuse to specify, from anything used to generate your results.


    If you don't want to retract this manuscript, then improve it by making a good case for yourself instead of a poor one. Optimally, you should replace Figures 1 and 2 with ones that directly reflect what you call the "bucket" or "histogram" files. These are the datasets you actually used when finding the spectral lines that you claim match your theories, so these are the ones you should exhibit. It is trivially easy to do this. And it is important, because you need to reassure your readers that your analysis has not resulted in simply assigning a multitude of separate peaks to random noise. Given your low bin counts and your explanations so far, it certainly seems as if this is likely.
