THHuxleynew Verified User
  • Member since Jan 18th 2017

Posts by THHuxleynew

The y-axis is the log of 100 times "enthalpy generation / joule enthalpy input." That does not seem to correlate with anything in Table 1. And why is it "joule" enthalpy input and not "watt"? At peak, it is 100 × 3 = 300 times . . . what? And what does "enthalpy generation" mean, anyway? Is that excess heat? (~10 W just after 1.6 million seconds.) Or is it input power plus excess heat? (~17 W just after 1.6 million seconds, I think.) Why is it multiplied by 100? I am confused!

It is pretty confusing.


(1) It must be the ratio of excess heat / heat in (or just possibly total heat out / heat in) averaged over some small period - because if it were cumulative heat out / heat in over the whole run you could not get a graph of that shape with sharp changes. So basically it is excess power out / power in, or possibly total power out / power in.


(2) The 100X is weird, but as above it makes sense if the ratio is expressed as a percentage.


(3) The log is very weird - but the values in the graph do make sense on this reading.


Thus log10(%age excess power out / power in) would give:


Pout = Pin => 0% excess => -infinity (we do not see this)


Whereas log10(%age total power out / power in) gives:


Pout = Pin => 100% => 2 (after the log). Most of the graph is a bit below that.

Pout = 10*Pin => 1000% => 3 (after the log). Seems reasonable for a large heat burst, as is claimed.


Using this interpretation, the system is mostly endothermic, absorbing heat, especially at the start (which makes sense). However, there is a large power-generation burst of up to (instantaneously) about 10X the input.


I realise a few parts of this interpretation are guesses. But the alternatives are, I think, all eliminated as incompatible with the given graph.


    So: y = log10(100*Pout/Pin)

    Pout - instantaneous power out (total = Pin + Pexcess)

Pin - instantaneous power in (from the heater, hence "joule enthalpy").
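A minimal sketch (my own, assuming the reading above is right) showing why those two landmark values pin down the interpretation:

```python
import math

def y_axis(p_out, p_in):
    # Hypothesised y-axis: log10 of (total power out / power in), expressed
    # as a percentage -- i.e. log10(100 * Pout / Pin).
    return math.log10(100 * p_out / p_in)

print(y_axis(17.0, 17.0))  # 2.0 -- break-even sits at y = 2, where most of the graph lies
print(y_axis(10.0, 1.0))   # 3.0 -- a 10x heat burst reaches y = 3, matching the claimed peak
```

Break-even at y = 2 and a 10x burst at y = 3 are the only fits I can see for the given graph.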


The other possible fit would be to suppose that both the 100X and the log annotation on the Y axis are mistakes, in which case this could reasonably just be Pout / Pin - with the reaction clearly exothermic over some time, but the heat burst much less significant.

    This is an interesting system, with interesting results.


Thinking of it as "potential LENR" falls (see discussion on some other thread) on the pseudo-science side of the science / pseudo-science dividing line. Or, colloquially, "minds so open that they fall out".


    (1) No NAEs - the one half-plausible and generally agreed LENR mechanism

    (2) no obvious anomalies


    So why LENR?


THH

Not saying it was not a success, because it was. Better techniques, better equipment, better measurements, with the same, or slightly better, results than in recent years. But in my capacity, I look for something that may challenge the scientific community and the public to reconsider their long-held consensus that LENR is pseudoscience.


    Ben Barrow from the US Army Lab, kind of summed up my overall impression. He said something like "yes, I am still getting tantalizing results of transmutations like last year. So have others for many years...yet we have been ignored by the mainstream. We need to get more conclusive proof". He is working on that BTW.


    Haven't seen the CleanHME video yet though, and some others, so still have an open mind. But curious if others see things differently?

LENR is not pseudo-science as long as those involved are honest and don't conflate "tantalising results" with "proof of new physics". Which AFAIK is the case for most (all?) of the new-gen scientists. For commercial reasons you don't expect quite the same clarity from companies - where cheery optimism is necessary to get funding.


    Thus:

    Results that fit (in an interesting way) within existing physics - tantalising or not - science. For example the lattice-enhanced reaction rates.

Results (e.g. transmutation) which come from interpretations outside existing physics - science only if noted as tantalising and unsubstantiated until the proof is available.


The pseudoscience bit comes from rolling up all the tantalising and just plain not understood stuff into a grand "Useful LENR exists" meme, which is then used to interpret all future experiments in an "unexpected results are likely to come from new physics" way.


    LENR is sufficiently flexible as a set of potential phenomena / theoretical interpretations that it encompasses all this stuff, both the science and the pseudo-science.


    Anyway I hope anything interesting from ICCF-25 will be highlighted here.


    THH

The conclusion is that the sharing schemes, even when shared across the entire continental United States, do not solve the problem.


    It’s a very well thought out and logically presented argument. If you choose not to watch it you are showing your cognitive dissonance.


    Curbina nobody said wind and solar don’t work. The point of this analysis is that there is no path to net zero with wind and solar without bankrupting the global economy.

It is true that grid stability adds a lot of cost to renewable energy once the non-renewable stuff is no longer enough to provide it (I mean - it can be done now with a whole load of different storage techniques, but you need a lot of it and it is expensive).


BUT - batteries and esoteric storage techs have been getting better, and progress will continue. We have many fallbacks - such as keeping old FF generation available for the once-in-10-years scenario where the weather goes against you for a long time. And we can have some slice (how much depends) of nuclear.


    So:

• "Renewables are cheaper than anything else; we don't need FF or nuclear for net zero" - wrong now, and maybe storage will always be very expensive, in which case we are going to end up needing nuclear (or FF with carbon capture & storage).
• "Renewables are not a good way to get to net zero with some added base load and/or storage" - absolutely wrong.
• The argument about what is the cheapest mix - as we know from PVs - will change over time as technology advances. We have got time - no need to decommission all FF plants yet. We just (as with PVs) need to put in enough early money to incentivise R&D so we have decent solutions in time.
• 20 years ago people would have said using PV was not cost-effective without subsidy. Now it is. We can see how the relevant technologies have been getting cheaper and better.

The point is that there are many different ways to obtain decent supply which is effectively net zero and mostly renewable. Once-in-10-years weather-related use of FF is not a problem because it is such a small amount in total - and, at high cost, it can even be made net zero if joined to DAC schemes.


It would cost a lot more now than a dirt-cheap dirty coal plant (not allowed on human-health grounds in developed countries). But we don't know whether it will cost more in 30 years' time than advanced coal plants that don't poison cities - even with no requirement to capture carbon. So "bankrupt the global economy" is overreach.


Also, you need to consider intelligent cost-based demand-side management, which will not work for everything but is an effective tool in dealing with some of the supply-side variability - allowing every EV battery to be an essentially free bit of medium-term grid storage.

    I think it is going to be worth the wait.


    External Content youtu.be

    Sam - tell us you are still saying that just to wind us up? Or do you really mean it?

As I understand it, hot fusion researchers view COP >= 2 as success, measured at the experiment boundary - on the grounds that with 100% efficient lasers etc. and power recovery that would mean useful power excess. It is not contentious, and well documented, that fusion happens at some level in these experiments, so COP > 1 is not news.


Whereas LENR experimenters tend to think that COP > 1 proves that LENR exists, and is interesting.


    There is then the question of integrity of measurements. And of practicality.


    For all of the hot fusion methods there are significant losses on input side and also (in practice) on output side. In theory near-perfect electricity generation from alpha particle K.E. is possible for reactions that generate only alphas. In practice this is not easy at all. Thermalising all output and using a generator seems still the only practical method - and delivers 40% efficiency. It is worth noting that for LENR systems there will be a similar issue (or worse if heat is at lower temperature).


On the input side, lasers currently used for research are very inefficient. In principle high-power lasers can be made quite efficient - maybe 50%?


    So under optimistic assumptions the output/input ratio needed is 10X (COP = 11) for 50% of the output power to be available externally. That is a reasonable metric since with lower availability the cost of the generating and cooling equipment goes up, and therefore capital cost, per MW generated.
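The arithmetic behind that 10X figure can be sketched as follows (my own illustrative sketch, using the 50% laser efficiency and 40% thermal-to-electric conversion assumed above):

```python
def external_fraction(gain, laser_eff=0.5, thermal_eff=0.4):
    # Per unit of laser light energy delivered to the target:
    wall_plug = 1.0 / laser_eff        # wall-plug energy needed to make that light
    electricity = gain * thermal_eff   # electricity generated from the fusion output
    # Fraction of generated electricity left over after recirculating
    # the wall-plug power back into the lasers.
    return (electricity - wall_plug) / electricity

print(external_fraction(10))    # 0.5 -- a 10x gain leaves half the output available externally
print(external_fraction(3.25))  # negative -- still a net consumer of wall-plug power
```

So a gain of 10 at the experiment boundary is roughly where 50% of generated electricity becomes available externally, under these optimistic assumptions.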


    The current results are COP=3.25


As for the integrity of these NIF measurements of power in and out, I do not know. I have not seen them accurately written up.


    THH

    It's very fair when I say I have not seen the data set which is what I am waiting for. Hard to tell the difference between an energy producing reactor and a 500W hair-dryer using just my eyeballs.

Umm... I was not arguing positively or negatively for the integrity of the current experiment. Even more than you - I have no info on it!


    Let me just say the evidence so far looks like: "we've got a new hair drier and when I feel the output it seems hotter than I'd expect compared with my old one".

    I know from experience that once you get systems running at high-temperature it is almost impossible to judge how much energy they are producing - look at the endless arguments over Lugano heat measurements for example.

It is only impossible if you have a particular, not-likely-to-be-repeated set of circumstances all at the same time:

    1. Like Levi (following Rossi) you make an elementary measurement mistake in a report and do not engage when it is pointed out
    2. You by coincidence measure the one surface (Al2O3) that given that particular mistake amplifies COP by an arbitrarily large factor
    3. You decide (for whatever reason) not to use the temperature measurements from a TC embedded in the system that would give direct temperature measurements and expose the mistake.


    Any one of those things is extraordinary. Together they make a mess no-one but Rossi could engineer.


Not fair to use that as the yardstick for experimental integrity. Those "endless arguments" were resolved: that exactly what the measurements meant was not resolved is hardly surprising given such an indirect and flawed measurement procedure. It is like saying, after a fraudster has stolen 90% of a company's money, that there are endless arguments about the theft when the debate is about whether it is 85% or 95%, and no-one will ever know for sure.


    THH

    I know you and George will ask plenty of questions during your visit, Alan Smith , but I just want to remind you of this post from further up the thread. It would be interesting to hear why they think a cell can have a 5:1 CoP in one lab, but a 1.8:1 CoP in another.

From my experience: COP is a calculated result from an experiment. In the case of my house's heat pump it is pretty indisputable - I'd either pay (even more) for the electricity or get cold. In the case of all of BLP's experiments the COP comes from careful measurements, comparisons, assumptions, etc., because the claimed excess energy is difficult to measure. Interestingly, each of BLP's devices has been more difficult to measure than the previous one.


To go from 5 to 1.8 you just need different assumptions. And I am willing to bet that with a few fewer assumptions COP lies in a range that includes 1.


If BLP had what they claim they would be very rich, and very famous. My skepticism: any one of their systems over 25 years, if as claimed, would be extraordinary and wonderful. Instead of measuring the first one more carefully, they move on, and again, and again - each device completely different, with different and more challenging issues in measuring energy in and out.

    Nice find Ahlfors . When that Nature article was first published, I don't think any of us would have predicted most, if not all, of the old Google Team would continue on doing "research that builds upon foundational work summarized in the 2019 Nature article", as they put it. They basically claimed they saw nothing, so what was there to build on? Little did we know that that was only the beginning of better times.


    Will never forget back then how the field became demoralized and started pointing fingers at themselves, and/or blaming TG for the failure. Most thought it would set us back years, and dry up what little funding there was. Instead, we now have probably more public and private money flowing into the science than at any time in LENR history.

    Some of us, at the time, felt that it was a positive report and that the LCF stuff had potential for interesting future research...


    Which, hey, perhaps just shows that skeptics can occasionally be right :)

    Did you bother to read Lombriser's paper? The mathematical treatment leaves the predicted measurements unchanged. The theory still fits the observed red shifts etc without requiring cosmic expansion. I think you didn't open the links, just support the current dogma religiously.

    Thanks for encouraging me to be less lazy.


    It is a nice paper - but both more useful and less radical than saying that the big bang does not exist.


    Both these theoretical and observational challenges have prompted for the development and search of new physics [11, 12]. In contrast, a less radical approach to venturing beyond the Standard Model of Particle Physics and ΛCDM is the simple mathematical reformulation of our theoretical frameworks underlying them. This can offer new perspectives with different physical interpretations and possibly even provide solutions for these theoretical and observational problems.


    Or in more detail:


    The field equations can now be recast into a different geometry $\tilde{g}_{\mu\nu} = A_{\;\;\;\mu\nu}^{\alpha\beta} \hat{g}_{\alpha\beta} + B_{\mu\nu}$ as

    Equation (4)

    This is merely a mathematical manipulation, simply a substitution of the variable one is solving the differential equation (3) for. Just as with any other differential equation, one may always perform such a change of variable, in this case the metric. The physics remains unchanged. Note, however, that freely falling particles in $\hat{T}_{\mu\nu}(\tilde{g})$ no longer follow geodesics for $\tilde{g}_{\alpha\beta}$, which manifests as an additional interaction and breaks a degeneracy for visible matter species between the two frames set by the two different metrics. In contrast, an interacting dark sector is simply receiving additional interactions. Alternatively to equation (4), one could also define modified geometric and matter sectors such as $\tilde{M}_{\mu\nu} \equiv \hat{G}_{\mu\nu}(\tilde{g})$ and $\tilde{T}^\mathrm{eff}_{\mu\nu} \equiv \hat{T}_{\mu\nu}(\tilde{g})$ in $\tilde{M}_{\mu\nu}(\tilde{g}) = \kappa^2 \tilde{T}^\mathrm{eff}_{\mu\nu}(\tilde{g})$ or other terms that are equation (3) in disguise. A consequence of these reframed field equations is that one may cast the physics of a system into a spacetime geometry of choice. The implications of that for our physical interpretation and understanding of the observed Universe will be the main subject of this article.


Thus it is not challenging the idea of a big bang (or bounce). Rather, it is saying that mathematically the same cosmology can be described in two ways:

    1. (as is usually popularised) an expansion of space with particles under gravitation following geodesics

2. (via a conformal transformation which leaves the equations the same) a static space in which particles under gravitation follow non-geodesics
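A standard textbook illustration of this kind of rewriting (my example, not taken from Lombriser's paper): the spatially flat FRW metric, rewritten in conformal time, is just a conformal rescaling of static Minkowski space:

```latex
ds^2 = dt^2 - a(t)^2\, d\mathbf{x}^2
     = a(\eta)^2 \left( d\eta^2 - d\mathbf{x}^2 \right),
\qquad d\eta = \frac{dt}{a(t)}
```

The bracketed factor is the static Minkowski line element; the expansion has been moved wholesale into the conformal factor $a(\eta)^2$, which in the static frame shows up as an extra interaction on matter rather than as stretching space.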


    The negative is that it breaks the beautiful simplicity of gravity, and therefore requires additional assumptions and constants. The merit is that having recast everything there is scope for more natural explanations of inflation (for sure) and (possibly) dark matter and energy.


    So I am all for it. It seems like a step backwards (abandoning the simple "particles move along spacetime geodesics") that might allow us to go forwards in new directions with more confidence.


    What it does not do is say that the universe is different, or make predictions for new observations, merely that it can be described by different and maybe better maths.


The best analogy (not however exact) is the difference between different QM interpretations. These are not, and cannot be, differentiated through observations and are therefore physically irrelevant. The analogy is not exact because, having reformulated gravity and spacetime in this way, the set of "extra stuff that is natural" on top of gravity changes. We have all this "extra stuff" from observations, so the reformulation might have real merit.


    THH

We have red shift correlating with other measures of cosmic distance. We have, as has been pointed out, the CBR. An expanding universe looks like a good, simple explanation for that.


That is not to say we understand exactly how it works. It is not at the moment entirely clear, with too many arbitrary (= we do not know why) elements: e.g. inflation, dark energy, dark matter.


But those are just details we do not yet understand clearly - and they do not affect the overall take-home of an expanding universe with an initial big bang or (we can't be sure) big bounce.


It is folly to reject the understanding we do have - just because that understanding is not complete.


    THH

You may want to take note that Nikkei Business Publications is a mainstream company. This is not a Clean Planet publication; it is mainstream media coverage of the Clean Planet development.

    :)


Business media is not in the business of working out whether technology companies' claims will work. That is not their capability. The coverage here is about future possibility as presented by the company. More generally, as you know, non-science media coverage of anything scientific is very inaccurate and sensationalist. Basing scientific credibility on such reports is unwise.


    But we don't need to argue - we can just wait. I have been doing that since the original very strong claims - expecting to get better scientific evidence for them and not getting it. I will say I told you so in another two years when there is still no better evidence they have anything working.

    The more Clean Planet talk about a grand future - without showing that their technology works - the more red flags.


Were I them I'd want to get results before making grand plans. They have had lots of money with which to improve their no-good calorimetry by doing proper testing.


To believe them, you need to think they are hiding Nobel-Prize-worthy progress (some much better internal tests) while talking up future prospects without basis for them.

    A revealing, informative answer. Thanks!

    Basically: what W says is entirely fair. He is not doing science, but gambling on a new invention which would certainly be revolutionary, which he and others here believe will work.


    It is worth noting that this is high stakes and also highly unusual, for the good reason that technology based on new science needs a good understanding of the science, and new science (usually) needs the criticism and multiple viewpoints that come from publishing.


    In the past there were lone inventors who revolutionised industry, and lone scientists who made advances. Pretty difficult for that to happen now, and even more difficult for technological success to follow scientific success without scientific validation. So the number of inventors claiming new scientific revolutions this century has been much larger than the number of inventors whose inventions actually are based on revolutionary new science.


    You can see why this is. For anything revolutionary that depends on new science (e.g. a practical inertialess drive aka antigravity) scientific proof of concept enormously increases the profitability for the inventor by allowing necessary very large investment at a lower equity stake. Whatever the excuses, it is therefore usually the case that those who do not get this cannot get it.

    Alan Smith , THHuxleynew already shared his view about this as I asked him to specify what chemical reaction between brass and H could generate the energy, his answer was:

    So, now we know H Just “Moving around” is what causes the energy that is detected as electricity from the LEC, so no need to further elucidate anything.


    Curbina I also linked a paper specifically about hydrogen occlusion in brass alloy, the post-exposure analysis they used showed no hydriding of the brass. So no chemistry.


    I think THH should read Hasok Chang's paper- I gave him an open goal.

    "no chemistry"


I am not, as you know, a chemist. My point is that for the LEC you need only a very small amount of energy from (some unknown) chemical reaction. You would have to be very brave, or an LENR supporter, to assume there could be no such unaccounted-for reaction in a system with H and anything.


My question for you would be: in an LEC plate, what does the surface look like? I mean, if it is clean brass, then we need consider only reactions between H and brass - which, if you say so, do not exist. If it is anything else - then we have reactions between that stuff and H. You do not need much of a reaction - and H will get into most things (brass an exception? not sure...).

As you know I have commercial/technical interests in the hydrogen business. The problem of what is called 'gas crossover' in large electrolytic systems is known and has been studied extensively because it costs companies money. So I have been digging around in the published research. From this it appears that the predominant mechanism is gas crossover of oxygen and the resulting catalytic formation of hydrogen peroxide at the anode side of the cell. Which is endothermic.

    They don't worry so much about recombination at the cathode, which is considered to be insignificant, and as one paper mentions it actually produces current which in turn splits more water, something which reduces the effect of recombination upon system efficiency.

Which just makes exactly what happens in these systems even less clear.


Whereas LENR supporters can use "it seems likely that..." to support anomalous results which are not anomalous to them, because they indicate LENR...


    Other people see anomalies... as anomalies. So you go and check everything, till you find the mistake. And only after multiple checks do you announce it as something unexplained and unexplainable.


This difference in attitude explains a lot of the arguments between me and others here.

    Jed - I am getting quite tired of you simply repeating things: without engaging with the argument.


    I will accept evaporation as negligible.


In that case (you remember I did the calculations - right?) the setup must have a way for vapour to be cooled down to something close to room temperature and the condensed liquid returned to the reaction vessel. That is fine - but how can we be sure that does not affect calibration constants due to conduction or convection via this returning liquid, which obviously travels from a lower-temperature zone into the reaction vessel and therefore across a calorimeter thermal boundary? We cannot assume the return liquid stream size is constant between calibration, control, and active runs.


I think I have said this same thing 10 times now - so don't be surprised if your next non-reply, or a reply attending to only 50% of what I have said, does not get an answer from me...


Anyway I will be positive and hope for a complete reply understanding the issues and not dismissing them.