Posts by THHuxleynew

    Can you provide any details about what needs to change in the standard model and quantum physics?


    I'll do that. Quantum physics at the moment has no connection with GR. That needs to change. There is much work (by many different people) showing how spacetime (and spacetime curvature, as in GR) can emerge naturally from quantum entanglement in a physically realistic way:


    https://arxiv.org/ftp/arxiv/papers/1807/1807.06433.pdf (a more speculative one)

    https://arxiv.org/pdf/1901.04554.pdf (a better recent one)


    But it does not quite work yet - or rather, "much work needs to be done to fill in gaps and make it solid".


    Even so, this approach has now developed sufficiently that it would be very surprising if some better "theory of everything" understanding did not emerge from it. Whatever your views about the Standard Model, if you looked at this work you would have to agree with that.


    It is not unreasonable to think that when we understand the connection more deeply we will also have an underlying mechanism from which all of the Standard Model structure can be derived, and possibly a bit more.


    Note though that from what we already know - that QM is essential, and that the universe was once much hotter - we expect symmetry breaking to introduce arbitrary constants into the world that now appear to be immutable physical constants.


    So looking for answers to "why is this fundamental constant this value", other than on an anthropic selection basis, will often be fruitless.


    A better understanding of the fundamentals will make it clearer to everyone when that is so.

    I think Rossi is only mentioning software now because the truth is that he was utilizing additional electrodes that he was inserting into regions of the plasma ball at different electrical potentials. This is something he doesn't want people to know about. Of course, I don't think it is the optimal way to extract electrical energy from the system because the plasma ball will eventually consume any material. Instead, the best idea is to harness the ion acoustic oscillations (electrical current flow) produced by the macro-EVO. This will probably produce a dirtier form of power which would need to be cleaned up, but you won't have to use sacrificial electrodes.


    Have you ever taken a Rorschach test?

    Research Scientist Presents Critical Insights Into Wuhan Coronavirus


    info on the gene structure of the virus

    http://stateofthenation.co/?p=6737


    A word of caution. It is true that much is unknown about 2019-nCoV. In this situation alt-news guesses thrive, and research scientists can be found to advocate almost any hypothesis persuasively. News outlets will then report the stories that seem interesting or that fit their prejudices. So while Lyons-Weller has some expertise, we don't know that this is authoritative, and his ratings of the various hypotheses seem like his own view rather than what is likely true.

    Jed: You are the only one who thinks that is what I am saying. I suggest you read this message carefully, and think about measuring an object that is sitting on a platform, by measuring from the floor to the top of the object. If you know the exact height of the platform, and you subtract it out, why does it matter how high the platform is? Answer: it doesn't matter.


    To examine this analogy further: you can easily measure the object's height from the top of the platform. True. That is not possible in the LENR case.


    Otherwise, we are measuring two pillars where only one pillar is available at a time. One is 1 m high. The other is 0.1 mm higher. You need a ruler accurate to significantly better than 1 part in 10,000 to detect this.


    Actually, for length, this is no problem. We have such equipment.

    Jed: Nope. The more excess power you get, the easier it is to measure. Excess power, that is. Input power is subtracted, so it does not matter.


    Jed, you have made the same - clearly wrong - mistake throughout this topic.


    I have not corrected you because we have hashed this out many times, but occasionally, as here, your black-and-white thinking leads you so far astray that I need to point it out.


    If you have 1 W input power and 1 W excess, it is normally very easy to measure; I think we would both agree on that. You can find scenarios, e.g. when the equipment generating the power is very massive and large, where it is still challenging. But normally it is easy.


    If you have 10,000 W input power and 1 W excess, it is normally very difficult to measure. That is because, although you can subtract the input power and calibrate, any error in the input or output measurements (it is usually the output that is most difficult) gets amplified by the ratio between the (subtracted) 10 kW and the (signal) 1 W.


    Specifically, if your power measurement has 0.01% noise or error, that 1 W signal will be lost.
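
    To make that concrete, here is a minimal sketch (illustrative numbers only, using the figures above) of how a fixed fractional error in the power measurement swamps a small excess as input power grows:

    ```python
    # Sketch: a fixed fractional error in the output power measurement
    # swamps a 1 W excess as input power grows. Numbers are illustrative.

    FRACTIONAL_ERROR = 1e-4   # the 0.01% noise/error mentioned above
    EXCESS_W = 1.0            # the 1 W excess we are trying to detect

    for input_w in (1.0, 100.0, 10_000.0):
        output_w = input_w + EXCESS_W          # true output power
        error_w = FRACTIONAL_ERROR * output_w  # absolute measurement error
        print(f"input {input_w:>8.0f} W: error is {error_w:.4f} W, "
              f"signal/error = {EXCESS_W / error_w:,.0f}")

    # At 1 W input the error is ~0.0002 W, thousands of times smaller than
    # the signal; at 10 kW it is ~1 W, the same size as the signal.
    ```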


    Now, the cases we actually consider are usually in between these extremes. In good experiments, well recorded and calibrated, with clear attention to ensuring that systems are the same between calibration and recording of results, you can argue that the positive results obtained are well beyond possible errors. You can properly argue that subtraction allows results << input to be reliably deduced. My problem with the LENR examples quoted here is that such results do not get reproduced. What can be reproduced ends up different from the apparently solid positive result. Whereas results that are not solid (like F&P's famous open-cell experiments with bubbles) get fully reproduced by people who make the same assumptions F&P did and do not question or further instrument them.


    Anyway, no doubt you disagree with that, as do many here, and that is fine.


    What is not fine, however, is when you make statements that are (a) wrong and (b) genuinely misleading, as the one above is.


    It does matter what the input power is when trying to measure excess. How much it matters depends on the accuracy of the measurements and the degree to which differences between calibration and active systems can be reduced. Both these factors vary enormously over the range of experiments considered as possible evidence of LENR.


    Therefore your generalisation here will lead to bad judgements about the success of LENR experiments. It should be avoided by everyone interested in this.


    Regards, THH

    Death rate

    So far this flu season, about 0.05% of people who caught the flu have died from the virus in the U.S., according to CDC data.


    The death rate for 2019-nCoV is still unclear, but it appears to be higher than that of the flu. Throughout the outbreak, the death rate for 2019-nCoV has been about 2%. Still, officials note that in the beginning of an outbreak, the initial cases that are identified "skew to the severe," which may make the mortality rate seem higher than it is, Alex Azar, secretary of the U.S. Department of Health and Human Services (HHS), said during a news briefing on Jan. 28. The mortality rate may drop as more mild cases are identified, Azar said.


    The two unknowns for 2019-nCoV are:


    (1) mortality rate

    (2) human-human transmissibility.


    (1) affects how bad it will be if it becomes a pandemic. (2) affects whether it will become a pandemic or merely cause continuing outbreaks.


    (1) we do not have accurate figures for. The ratio of deaths/hospitalised is too low when the numbers hospitalised are climbing so fast, and in any case these figures from China, in this chaos, are less reliable than we'd like. However, the amount of community infection with mild symptoms is also unknown; that would make figures for deaths/infected too high.
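
    A toy calculation (all numbers hypothetical, just to illustrate the two opposing biases) may help:

    ```python
    # Hypothetical numbers illustrating the two biases in early estimates.

    deaths = 100
    hospitalised = 5_000       # confirmed, mostly severe cases
    mild_uncounted = 20_000    # community infections never tested

    # Deaths lag infection, so deaths/hospitalised reads too LOW while the
    # hospitalised count is climbing fast.
    naive_rate = deaths / hospitalised                    # 2.0%

    # Uncounted mild cases make deaths/infected read too HIGH if computed
    # from confirmed cases only.
    with_mild = deaths / (hospitalised + mild_uncounted)  # 0.4%

    print(f"naive: {naive_rate:.1%}, counting mild cases: {with_mild:.1%}")
    ```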


    The best estimate seems to be around 2%, which would make a pandemic very serious indeed, but not a catastrophe. As far as I can see, though, the best estimates are at the moment very uncertain.


    In addition it is not clear, since this virus seems new, whether it will mutate in ways that alter the death rate or transmissibility. That is cause for real concern.


    THH

    You said, "input power subjects the experiment to a poor signal to noise ratio based on noise in the output measurements." That would only be true if input power could affect the output measurements. It cannot, because it is all subtracted out. If it were not 100% measurable, it could not be subtracted out. That would be noise, by definition. "Noise" means you don't know how much there is; it varies randomly; and it is not controlled. In short, only noise can subject the experiment to a poor signal to noise ratio. A signal that can be measured to one part in 10,000 cannot do that.


    The issue here, Jed, is not what you say, and your argument above does not address it. Maybe the OP thought it was, in which case you are correct in contradicting their argument, but not on the overall conclusion.


    Input power matters because it translates into output power. Although input power can (usually) be measured very accurately, that is not so true of output power. Small fractional errors in output power measurement become problematic when input power >> signal.


    In calibrated systems the assumption is that the system conditions (insofar as they affect measured temperatures) are identical between calibration and active setups. If this assumption is even slightly wrong, we get a (fractional) error in the output measurement.


    So: input power can affect output measurements, in principle, and in practice for some systems.
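
    As a hedged illustration (hypothetical numbers): even a small systematic difference between calibration and active configurations shows up as apparent excess when the input power is large:

    ```python
    # Hypothetical: a 0.1% difference in effective heat transfer between
    # cal and active setups, applied to a 10 kW input, reads as 10 W excess.

    input_w = 10_000.0
    true_excess_w = 0.0         # assume no real excess
    calibration_shift = 0.001   # 0.1% cal-vs-active difference

    # The calibration maps measured temperatures to output power assuming
    # the cal configuration; if the active run loses 0.1% less heat, the
    # inferred output is overstated by that fraction of the full input.
    apparent_output_w = (input_w + true_excess_w) * (1 + calibration_shift)
    apparent_excess_w = apparent_output_w - input_w
    print(f"apparent excess: {apparent_excess_w:.1f} W")  # 10.0 W
    ```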


    Don't trust USPTO to do science.


    While of course right about AR, the Ni stability argument here is wrong.


    Proton capture by Ni is strongly exothermic in spite of Ni stability, because the binding energy per nucleon of a free p or d is (very) low. Adding one to Ni releases energy.
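
    For example (a sketch of the Q-value arithmetic for one channel, using approximate mass-excess values recalled from standard tables - check before relying on them):

    ```latex
    % Approximate Q-value for 58Ni(p,gamma)59Cu, mass excesses in MeV:
    % Delta(58Ni) ~ -60.23, Delta(1H) ~ +7.29, Delta(59Cu) ~ -56.36
    \[
    Q = \Delta(^{58}\mathrm{Ni}) + \Delta(^{1}\mathrm{H}) - \Delta(^{59}\mathrm{Cu})
      \approx (-60.23) + 7.29 - (-56.36) \approx 3.4\ \mathrm{MeV}
    \]
    ```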

    Just to comment on ITER.


    I know environmentally-minded young physicists who are now going into hot fusion because they see it as a way forward for our technological civilisation that can beat climate change.


    Now, personally, I'm not so sure. I see more benefit from battery + PV technology, both of which are still being pushed forward - particularly batteries where we are nowhere near the fundamental limits.


    But the hopeful "small hot fusion" startup projects are chasing a not-impossible dream, and I follow them with interest. And their success, if any of them ever achieve it, will come from all the work done on ITER.


    It is easy to rubbish people working on very difficult problems. You can blame politicians for putting money into science only when it has some tenuous link to weapons (distorted fission research) or where a project is so big it has political momentum (ITER). I would not blame the hot fusion scientists.


    THH


    While that is true to first order, it will not be exactly true. How inexact it is would depend on reactor topology.


    The key thing is that the heater will be hotter than the reactor, and its temperature will vary with internal conditions. If there is any thermal transmission from the heater through the reactor body to metal outside, this varying temperature alters the external temperature equilibrium, losses, etc.


    Even if not, the reactor surface may not be an isotherm (in fact it has been posted here that in many cases the reactor case temperature is non-uniform). In that case the interior gas can alter the temperature distribution on the outside of the reactor.


    In an air-flow calorimeter with low losses and no other issues (e.g. accurate air temperature measurement in and out, without artifacts related to the temperature of other objects), none of this matters (as you state).


    Where there are other issues, it could matter.


    THH

    I'd like to warn against false negative comments about Rossi.


    He is a master of PR, and while his habit of posting under socks on his own blog is proven, and his paying click-farms to promote his ridiculous ResearchGate publications is likely, no one should underestimate his drive or his ability (at doing PR).


    For example, patent applications - always a key element in a successful PR campaign - need bear no relationship to anything that works, and I expect Rossi will continue to churn these out.


    Remember also that scientists are human, and some will be taken in by a good story without looking deeper. It takes only 0.01% of the world's scientists to be that gullible for Rossi to have a very impressive following.


    THH

    Adiabatic temperature rise = 0.243 C

    which is a large proportion of the 0.35 C actually measured.


    Oh, wow. I've not been reading here carefully. Is it really true that the results here are based on such a low temperature difference (out - in)? Surely not! It would be unsafe in lots of ways, and unnecessary, because in any air calorimeter the flow can be reduced to increase the out - in temperature difference.


    Anyway I guess I am taking your comment out of context?


    Very low temperature changes are unsafe because second-order factors affecting the heat content of the air (like pressure - as in your adiabatic contribution - and moisture) become more relevant. Though I can't see how moisture could differ between input and output.
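
    A minimal sketch (hypothetical sensor error) of why a small out - in difference is fragile - the computed power scales with the temperature rise, so a fixed sensor error becomes a large fraction of the result:

    ```python
    # Air-flow calorimetry: P = m_dot * c_p * dT. A fixed absolute sensor
    # error is a much larger fraction of the result when dT is small.

    SENSOR_ERR_C = 0.05   # hypothetical combined in/out sensor error (C)

    for d_t_c in (0.35, 3.5, 10.0):
        frac_err = SENSOR_ERR_C / d_t_c
        print(f"dT = {d_t_c:>5.2f} C -> sensor error is {frac_err:.1%} "
              f"of the measured rise")

    # At dT = 0.35 C a 0.05 C error is ~14% of the answer; reducing the
    # air flow raises dT for the same power and shrinks that fraction.
    ```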


    Jed,


    This summarises the (only) difference between us on this site.


    You make assumptions. I don't. And you simplify. It may be 100% obvious that those things need to be checked (is it, to everyone, given that many replicators are not professionals in calorimetry?) - but the very fact that you imply they are dead easy to check means that some may agree with you, do a superficial "making assumptions" check, and think all is good.


    In fact, because thorough testing of all these things is not simple, I think it likely that with early results they are not checked: reasonable assumptions will be made but not yet fully validated. That is the smart thing to do. And, equally, I'd expect those replicators all to agree with me, not you: to know they have made assumptions, and to do that checking carefully, reporting that they have done it, before confirming initial promising results.


    Maybe I'm wrong? We will see.


    THH

    What difference does it make if there is a temperature error due to turbulence, as long as the temperature measurement is consistent and monotonically increasing with actual temperature? It doesn't change the calibration, i.e. if a 500 watt inert control reactor makes the temperature rise 70C, and the active run shows an increase of 100C, that difference of 30C is positive and can be calibrated by running the control reactor at 600W, 700W, and 800W.


    As long as the heat transfer is more or less the same between the control and active reactors, it is calibrated. What could we be missing here, THH? I know you're a skeptic and I want to hear it. I don't have this reactor in front of me to prove it for myself, but the report on its face seems like it would be proof if accurately reported. I'd like to see more details, as in a peer-reviewed paper (JCMNS is fine) where the reviewers point out missing elements and the authors fix the paper to improve its quality. But it seems pretty conclusive if the detail is provided. The details will come, because Mizuno committed to open-sourcing the test and we are doing it. Someone will write the conclusive paper.


    Happy New Year All!


    To answer this: if the average outlet temperature is different from the sensor temperature due to non-mixed air, then many changes in conditions - e.g. the small ones that will always exist between cal and active runs - could change this and result in a relatively large change in measured temperature. Of course, it is possible to be pretty sure the air is mixed.


    We know from the extensive analysis of Mizuno's system that there are a relatively small number of issues that need to be checked here, and I'd expect anyone now doing this stuff and getting positive results to check them one by one:


    • unmixed air out
    • thermal bridging direct to sensor from hot case
    • change in heat loss due to different airflow active vs cal
    • change in heat loss due to thermal bridging from hot heater inside to outside
    • errors due to room temp drift (I think easy to address, and maybe it has been here)


    Perhaps I've missed some but those seem the main issues to me.
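
    For completeness, a minimal sketch (hypothetical calibration points beyond the quoted 500 W / 70 C one; linear interpolation assumed) of the calibration-curve approach described in the quoted question:

    ```python
    import numpy as np

    # Inert control reactor calibration: input power (W) -> temperature
    # rise (C). The 500 W / 70 C point is from the quoted post; the rest
    # are hypothetical.
    cal_power_w = np.array([500.0, 600.0, 700.0, 800.0])
    cal_rise_c = np.array([70.0, 82.0, 93.0, 103.0])

    # Active run: same 500 W electrical input, but a 100 C rise.
    active_rise_c, active_input_w = 100.0, 500.0

    # Find the input power that would produce the observed rise, then
    # subtract the actual input. Valid only if heat transfer really is
    # the same between control and active runs - the point at issue.
    equivalent_w = np.interp(active_rise_c, cal_rise_c, cal_power_w)
    excess_w = equivalent_w - active_input_w
    print(f"equivalent input {equivalent_w:.0f} W, excess {excess_w:.0f} W")
    ```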


    THH

    As I said, nit-picking. There is no evident way in which such a large difference in temperature with the same electrical energy input can be explained by anything conventional.


    The issue is what the heat loss is. I agree differences between cal and active cannot be larger than the total cal losses: but we do not have much info at the moment about what these are.


    One caveat: if the in/out temperatures as measured are not a fair average of the air stream, that would be another error, and one that could be reactor-temperature dependent (altering turbulence etc.). Do we have info about that?


    Here we go.


    There might be thus-and-such a problem.


    Specifically, a difference between active and cal runs that results in 30% less heat output from the calorimeter on the cal run than on the active run.


    All that is needed for this is for the reactor, or possibly some elements inside the reactor with a thermal bridge to the outside, to be much hotter during cal conditions than during active ones. As Jed points out, an equilibrium is reached in which the heat out still equals the heat in (roughly). However, this can be with a different reactor temperature and therefore a different heat loss. The reactor temperature depends on the cooling of the air; the internal heater temperature depends on configuration and internal gases (relevant if the heater thermally bridges to outside the reactor).
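
    A rough sketch of that mechanism (all numbers hypothetical): if losses scale with case temperature, the same input power leaves the calorimeter with different measured outputs in cal and active runs:

    ```python
    # Hypothetical: same 500 W input, but a hotter case during the cal run
    # gives larger losses and a smaller measured output - which later
    # reads as "excess" in the active run.

    K_LOSS_W_PER_C = 0.5   # loss per degree of (case - room), hypothetical
    ROOM_C = 20.0
    INPUT_W = 500.0

    def measured_output_w(case_temp_c):
        loss_w = K_LOSS_W_PER_C * (case_temp_c - ROOM_C)
        return INPUT_W - loss_w   # heat actually reaching the air stream

    cal_out_w = measured_output_w(300.0)     # hotter case during cal
    active_out_w = measured_output_w(250.0)  # cooler case when active

    # Calibration equates cal_out_w with 500 W in, so the active run
    # appears to deliver extra power:
    print(f"apparent excess: {active_out_w - cal_out_w:.0f} W")  # 25 W
    ```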


    How would I check? Increase the insulation everywhere by a factor of 2 and see whether the apparent excess heat reduces.


    The way these results scale makes them look like some such issue to me, but it is easy enough to check, and to provide convincing support if they are real.


    THH

    "Our research group is conducting calorimetry of a metal hydride chemisorption reaction at high-temperature reaction conditions. The mass flow calorimetry setup used for this experiment measures heat output by determining the temperature gradient of water flowing around a heated (100W input) reaction chamber. The amount of energy released by the reaction can be calculated with the heat capacity of water and the flow rate. To increase accuracy of the measurement, heat losses not going to the flowing water must be minimized. Using the non-isothermal flow feature of the Heat Transfer Module, we modeled the heat-loss of our reaction vessel for insight on how to best thermally insulate our reaction chamber."


    [2019 COMSOL Conference in Boston]


    They get a 150C temperature change over 4 inches of the feeder tube. The (easier) methods that they have not mentioned are (1) use a smaller-diameter tube and (2) use several meters of tube, all insulated. The tube's thermal conductance scales as A*Kcond/L, so A and L are easy ways to change this. Kcond is roughly 16 W/mK for stainless steel (about 50 W/mK for plain carbon steel) versus 7 W/mK for manganese, so the material change helps, but SS tubing is surely much cheaper than manganese tubing, since manganese is very difficult to work. Maybe they mean Mangalloy - a high-manganese (roughly 10-13% Mn) steel - which again is difficult to work because it is brittle (low thermal conductivity in metals tends to go with brittleness).
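
    A quick sketch of that scaling (hypothetical tube dimensions; the k values are approximate handbook figures):

    ```python
    import math

    # Axial conduction along the tube wall: Q = k * A * dT / L, where A is
    # the annular wall cross-section. Dimensions are hypothetical.

    def tube_conduction_w(k_w_mk, od_m, wall_m, length_m, d_t_c):
        r_outer = od_m / 2
        area_m2 = math.pi * (r_outer**2 - (r_outer - wall_m)**2)
        return k_w_mk * area_m2 * d_t_c / length_m

    D_T_C = 150.0   # temperature drop along the tube, from the post

    base_w = tube_conduction_w(16.0, od_m=0.012, wall_m=0.002,
                               length_m=0.1, d_t_c=D_T_C)  # ~4 in of SS
    thin_w = tube_conduction_w(16.0, od_m=0.006, wall_m=0.001,
                               length_m=1.0, d_t_c=D_T_C)  # thinner, longer

    print(f"base: {base_w:.2f} W, smaller/longer tube: {thin_w:.3f} W")
    # Halving diameter and wall cuts A by ~4x; 10x the length cuts it
    # another 10x - far more leverage than changing alloy (16 -> 7 W/mK).
    ```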


    But it is good to see them wanting to do this. They have 15 W of loss from their 100 W in. They could easily reduce that to 5 W of loss, with (they would hope) much more significant results. Or, if the results are not much more significant, an understanding that they were something else.