THHuxley Verified User
  • Member since Oct 22nd 2014

Posts by THHuxley

    Quote

    Hundreds of them stood up to scrutiny in the literal sense, "a close and searching look." That is to say, you could see the cells were boiling. After a while there was no more liquid between the anode and cathode, so there was no electrical current. Blank cells driven to high power immediately stop boiling when this happens, yet these cells not only continued to boil, they remained hot to the touch for hours.


    @Jed. It is obvious that a few blog posts cannot settle this disagreement easily. However, I do notice when someone makes comments that are internally inaccurate. In your case, your point only stands if there is no chemical source of heat present after death in the active cells but not in the control. Yet that is exactly what Kirk claims, and he has proposed a mechanism for it.


    I realise you don't view his claims as likely. But you cannot use this argument, as stated, as additional ammunition against them, because if they are true the argument falls.


    On a more general point, I view this type of anecdotal evidence with suspicion. If it were definitive it could now be replicated, with careful instrumentation, and once chemical causes had been eliminated it would be the clear evidence Abd et al want.

    <a href="https://www.lenr-forum.com/forum/index.php/User/603-THHuxley/">@THHuxley</a>


    Take care in your accusations. You seem far too invested in this. Are you going to answer my question?


    <a href="https://www.lenr-forum.com/forum/index.php/Thread/4745-Rossi-vs-Darden-developments-Part-2/?postID=45650#post45650">Rossi vs. Darden developments - Part 2</a>


    You yourself are far too invested in sub-Gluck conspiracy theories.


    I did answer your question. My post was maybe too quickly phrased. What I meant is evidence that, based on documents now open, will come out in court - and I sort of pushed the two things together.


    I have no non-public info about this matter: but to reach my views, given what is public, you don't need private info - just common sense!

    Quote

    I'm not disputing that numbers must be evaluated within a context, with error bars. My complaint is the manner in which IH has misled the public. By their statements, they have led others to believe that they were unable to measure excess heat at any time. They had a motive to take that hard stance: because it served to discredit Rossi very early on. Now as the cocks come crowing, they must face the brisk response. None of this is to say that Rossi is off the hook. Because he isn't.


    I'm puzzled by this. Are you saying that if IH do a whole load of experiments on a setup they know has sigma = 0.15 on COP, and get COP readings all over the place, sometimes 1.3, this is "measurable excess heat"?


    It is not. At least not to a competent engineer. It is a measured COP=1.3 on a setup with large error bars, so that it does not indicate excess heat.
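    A minimal simulation of that situation (my sketch: a normally distributed COP error with sigma = 0.15 and a true COP of exactly 1, i.e. no excess heat at all):

        import numpy as np

        rng = np.random.default_rng(0)
        sigma = 0.15                                # known measurement error on COP
        readings = rng.normal(1.0, sigma, size=50)  # 50 experiments on a null system
        print(readings.round(2))                    # values scattered around 1.0
        print((readings >= 1.3).sum(), "readings of 1.3+ from pure noise")

    A reading of 1.3 is a 2-sigma excursion, which the error bars predict roughly once in every 40 null runs. "COP all over the place, sometimes 1.3" is what no excess heat looks like on that setup.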


    The two statements you think are contradictory, on whose basis you repeatedly slander IH, are in no way contradictory.

    Engineers are not all highly articulate, and in Court may well not explain things as well as they could in a formal report; some, indeed, are not much good at writing clear reports.


    So "ambiguous statements" in this context are possible.


    But, of course, we don't know this was an ambiguous statement, because we do not have all the context. That context would be mention of the elephant in the room - those error bars. IHFB here has consistently avoided them, and seems to think error bars don't exist. If in Court they were avoided, then either the engineer examined was incompetent, or he was not given an opportunity to explain by a lawyer interested in something other than truth. I see the second possibility as very likely. And if there was examination from both sides I'd expect this to have come out in other evidence not quoted by Rossi. But not necessarily - because lawyers are not engineers.


    Every engineer knows that from an experiment "COP=1.3" does not exist. "COP could be 1.3" is valid, but is very possibly consistent with "no measurable excess heat".


    I'm sorry IHFB, but these are matters of fact.

    These documents are entirely predictable, but interesting in the way that dissecting a real dead rat in Biology class is interesting compared with textbooks.


    Neither IH (for reasons of reputation, because they have a winning hand, and because they think Rossi still has some money) nor Rossi (because of his character, or because he must play for time until he has other people hooked) would be inclined to settle.


    That the "customer" was fake was blindingly obvious a long time ago.


    The evidence that has emerged throughout has been very strongly in IH's favour, except on one point: Rossi and the people involved (no doubt following Rossi) treated the test as the GPT. I'm confident that IH higher-ups never treated it as such (or there would be a paper trail).


    In order to get any comfort from this Rossi has to show all of the following:
    (1) That the test was fair (we are not talking legal nitpicking, which IH wins due to the lack of evidence this was legally the GPT, but fairness). Therefore, whatever the AWOL ERV report says, the real evidence will matter. There is a vanishingly small chance he can do this, given that the claimed COP is clearly very wrong and Rossi has been shown to be highly deceptive over the test.


    (2) That the ERV report is consistent with physical evidence, and is validated by someone other than Rossi. That seems unlikely unless Penon comes back. In any case the physical evidence would appear to be 100% against it (I say "appear" because we have not had much of this in the open yet - but there is every reason to think that it exists).


    (3) That Rossi's version of "was it the GPT" is more plausible than IH's. Given what we now have, no-one will believe anything Rossi says. Also, it is pretty clear that the others involved will all say that their views come from Rossi. It is not likely IH would be keeping the "it was not a GPT" argument if they had told any of their underlings it was the GPT. My reading is that IH were always clear that from their POV this was not the GPT, but that to keep Rossi on board they needed to let him run it, knowing Rossi would claim it was the GPT.


    Why did they do this? The IH motive involves the existence of known unknowns, and has been consistently misunderstood here by the Rossi fans. At the start they still hoped Rossi's stuff would work. Lugano carried some weight, and while they were even then suspicious, they are in the business of finding this stuff if they can. It is high reward, so even a 10% chance of it being real must be followed to the end. Rossi being dishonest in many ways does not mean they can be sure he has nothing. Their in-house testing would have been frustrating - repeating the (bad thermography) Lugano results, Levi arguing the bad thermography was good, none of their engineers clearly able to outvote Levi. Continued inconsistencies around this would make them more and more suspicious, and bringing in more professional talent to look at the tests would settle the matter.


    Given the nature of these tests, a grunt engineer would indeed say "well, sometimes COP 1.3". A more professional person would immediately answer by talking about error bars and whether the error analysis was solid, or whether there were still untested assumptions on which it rests. Doubtless they have this now, and equally likely they did not have this when they started the Rossi testing.


    This picture has some uncertainty over the exact progression of IH's scientific and engineering understanding. But it is highly believable. I think it has little traction on planet Rossi because it is complex. Instead of an IH test giving a known COP, which must be either positive or negative, we have all these shades of different analysis, errors not understood until discovered later, etc. Anyone with any practical experience should realise this is how things are. It speaks to the inexperience of those advancing anti-IH arguments that they do not see this.


    IH can perhaps in retrospect be blamed for not having more rigorous testing at the start. But think about it: if the Lugano results were real, they did not need rigorous testing. Stable and repeatable excess heat of double the input heat can be measured in many ways and is commercially viable as a heat pump replacement. It would not have needed high-powered testing expertise. They would have been very surprised and worried by the fact that 6 Lugano testers made a large calculation mistake, and that Levi continued to assert this was not a mistake. It was bad luck for them. Good luck for Rossi. Though Rossi seems to make his own luck, in ways that may be magnificent but are not to my taste.

    Quote from IHFB

    What is very clear to me is that people and organizations are willing to bend the truth when it comes to a world-changing technology if it serves their purposes.


    Which is a non sequitur here, since there is no evidence Rossi has world-changing technology, and much evidence (of omission) that he does not.

    Quote

    I can't say that we never had a result that was -- let's see if I can say this right -- we probably had results greater than one, 1.3 might be an answer. I think that reliably, repeatedly, replicating those results has not happened. So at some point in time there could have been a result of 1.3 that we thought was good.


    That could properly be a true report of a sequence of results that are null. It is commonly done, and commonly not understood. A result of 1.3 means 1.3 +/- all errors and artifacts. If those are not quantified, you don't know whether this is null. Even if they are quantified (as in Lugano) there may be some unrecognised error. So it is entirely reasonable - in fact certain - that in IH testing they got some marginal positive results like this, which mean nothing. You can however see that in a Court an engineer will not find it easy to convey all this stuff about unquantified error bars.
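    As a sketch of what "1.3 +/- all errors and artifacts" means in practice (the error sources and magnitudes below are invented purely for illustration):

        import math

        # Hypothetical error budget for a COP measurement (invented values).
        errors = {"flow meter": 0.05, "thermocouple": 0.08, "input power": 0.10}
        sigma_cop = math.sqrt(sum(e ** 2 for e in errors.values()))  # add in quadrature

        cop = 1.3
        print(f"COP = {cop} +/- {sigma_cop:.2f}")                    # ~1.3 +/- 0.14
        print(f"{(cop - 1.0) / sigma_cop:.1f} sigma above COP = 1")  # ~2.2 sigma

    And that marginal 2.2 sigma assumes every artifact has been quantified; a single unrecognised systematic folds straight into the 0.3.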


    Also remember they will have been following the Lugano methodology, and repeating the large error there. Even though they had suspicions about that methodology, they did not have the better analysis that would show them why 3 / 3.6 both turned into something close to 1 with no acceleration. Without that better analysis those results could be read as positive but not definite (because of the complexity of the analysis, and other things around it that did not make sense).


    I've no idea how all this will be processed by the Court. I'd hope that, with the help of expert witnesses, they will get to a true understanding, which is profoundly negative for Rossi. His "IH stole my IP and screwed me" message is contradicted in many ways by the evidence so far and will not stand up. On the evidence to emerge, IH had no motive for this, and every motive - and the ability - to pay Rossi if his stuff worked.


    Assuming the Court comes to the common-sense conclusion that Rossi's stuff does not work, as I think is very likely, it is still not crystal clear to me which way the legal arguments go, even though I think IH has the upper hand.


    EDIT (just for IHFB): "evidence to emerge" -> evidence that has been signaled in these docs and will no doubt emerge in Court.

    Quote from stefan

    Are you playing with me? or do you want people with less knowledge of math to follow the reasoning? There is no need to go further down in the details unless you are very weak in maths in which case I think we should invite someone else to testify the steps. Also did you not see how m -> m / (2 pi) can be done in a change of reference systems and explains all 2*pi.


    Eric may or may not be, by your standards, very weak in maths. That is not the point. The point is that any mathematical derivation can be laid out explicitly and in detail (as you would find in any proof), so that it can be agreed, or (if incorrect) shown wrong, by anyone able to do maths. Expecting others to fill in gaps, however easy this may be for you, is not fair, because the same argument could be made by someone who cannot fill in the gaps himself.

    Quote from Wyttenbach

    We all are waiting eagerly for your first contribution to show us a path to explain a Mills equation! If you think, that your skills are well above ours, then you should manage to do this in a few minutes. But is OK for me, if you just tell us, 'its not worth spending the time to dig into just another GUT, you think (hope?) which will vanish anyway'...


    The point here is that we are wondering whether Mills's work is a major advance in theoretical physics, ignored by the mainstream but revealing gems if we study it, or whether it is not - in which case it would seem by far most likely that the numeric coincidences in the formulae come from ad hoc adjustment to make theory agree with experiment. In that case Eric would necessarily be unable to answer your challenge, because it is unanswerable. The onus, however, is on you to do this, not him, since you claim that these results are obviously true.


    If the formulae can be derived, via maths, from some explainable definite theory we have the major advance in theoretical physics.


    However, just proclaiming that this is so, without being able to demonstrate it step by step, is not convincing. Having verbal reasons for each step ("this quantity is the diameter of the orbitsphere") without the axiomatic basis to validate how these relate to a coherent theory is similarly unconvincing. I suspect that this is what Stefan and others find: key points in the analysis contain statements which seem plausible but are essentially ad hoc, with no underlying validity.


    Maybe Mills is a genius who has made a great contribution to theoretical physics, but we can show this only by scrutiny of his work demonstrating that it is mathematically complete and sound. That is obviously a lot of work, but Eric here is asking for only part of it. Saying that you have to spend years reading 400 pages does not resolve the matter, and leaves us no closer to understanding whether Mills is the genius that he claims.

    @JedRothwell


    Re "thousand of tests prove LENR". There is some fascinating meta-argument about this, which Kirk's slant is one specific variant on. But I'll keep clear of that for now because it would be repetitive and I see no point!


    Re BE. For 5 years, since reading their write-ups, I have thought that RFI issues with those Q-pulses would produce results identical to what they claim. Of course, there are subtle differences - for example, the differing thermal and electrical time constants - but I've not seen anyone at BE or SRI investigate the matter fully and show how they control for this, or how the time constants resolve the matter. Have you? For me, that would be an obligatory section of any report claiming these results are positive for something new.


    This is work that SRI should now be doing. I just hope they are.

    @Dewey


    Quote

    about the BE HHT testing at SRI? You also have no idea about how extensive the IH testing has been but may get to find out a little more information in the not too distant future.


    I'll be fascinated to read how they have nailed the key issue of Q-pulse RFI rectification in the TC amplifiers or TCs. That, as far as I can see, would be indistinguishable from LENR without a careful examination. I have not seen such an examination done anywhere, or ever referred to as being necessary! I live in hope. SRI certainly have the resources to do this.

    Quote from Alain

    When you say "I think that is an artifact" you are talking about experimental instruments and procedures. You have to be specific or your assertion cannot be falsified. That which cannot be falsified is not science.


    There are two ways this statement is incorrect.


    (1) Doubting the robustness of an experiment does not require the doubter to know exactly what the error mechanism is. I agree that where things are simple enough, and errors appear ruled out, the experiment provides strong evidence. But even then (as has often proved the case - e.g. the FTL neutrino result) some later analysis may show an error that could not previously be found. There is therefore no requirement for a doubter to be specific. They could, for example, indicate an area of complexity in which insufficient checks had been made to ensure correctness, without actually finding an error.


    (2) If that which cannot be falsified is not science, you must, I think, agree that LENR is not science, or else indicate what experimental result would in your view falsify LENR. I have always seen the lack of falsifiable predictions as one of LENR's big weaknesses - a lack inherent in any phenomenon which can, for reasons not yet understood, deliver negative results in any specific experiment.

    @Bob


    Perhaps I misspoke. Personally I find it very interesting. It is just that I can put little weight on the results in their current state.


    Quote

    Although the total pulse power from the generator is constant the pulse power measured at the core does vary with pulse length. Still, the magnitude of the power compensation is a greater percentage of the pulse power at 100ns than at 300ns. Calculations show that at 300ns the Qreaction is quite small but is of much greater magnitude at 100ns.


    It requires more time, and a more complete write-up, to analyse this fully, but I find the above sentence worrying.


    That is what you'd expect if the pulses generate rectification in the amplifier circuitry: a roughly constant rectified offset is a larger fraction of the lower per-pulse power at 100ns than of the higher per-pulse power at 300ns. So this is evidence in favour of an RFI artifact - but of course it could have other explanations.

    Quote

    Now Industrial Heat guys can just see, that during tests A.Rossi developed a way better technology (Quark-X reactor) for their money - so that their licence did become obsolete before they could get some profit from it.


    They would be highly unlikely to say or think that, given their public and extremely careful PR about interpreting results of LENR testing, and the fact that they have stated unequivocally that Rossi's previously positive tests of equipment come out negative when repeated by them in-house.


    Quote

    I think that this endurance test will last till the end of January, maybe earlier. Reaching that without failure will trigger the organization of the demonstration and will free the money for the industrialization phase.


    Long endurance tests are Rossi's modus operandi. They delay the requirement to get stuff working for real, and keep fans hopeful. We have absolutely zero evidence (less even than with previous "products") that these Quark-X devices work. And their spec is even more unbelievable. I can't see anything in Rossi's statements here except more of the same, with less substance than before.


    As for what Rossi will do in Feb: he will not announce negative results. Otherwise, since he rides the wave of "magnificent" PR, it is very difficult to predict which of the many options will happen. I suspect Rossi himself will not know till he finds out who he can get to agree to something that sounds like an endorsement.

    I've not had time to look at this report properly. One trouble is that it is not self-contained. A PR from a preliminary report should not be held to the standards of a publishable paper, of course, but that also makes it more difficult to draw conclusions.


    My main concern with Brillouin has been EMC issues. The pulses stimulating the reaction will generate RFI that couples into all sorts of other electronic equipment and can therefore produce offsets in TC data. This effect is bound to happen to some extent, and is also bound to vary with any physical change in the apparatus (for example, replacing the contents of the reactor).


    That makes it very difficult to distinguish in principle between apparent LENR signal and the stimulus altering the values of TC measurements.
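    A minimal sketch of the scale involved (the type-K sensitivity is a data-sheet figure; the offset is an invented but plausible value):

        # How a small rectified RF offset at a thermocouple input reads as heat.
        SEEBECK_UV_PER_C = 41.0      # type K thermocouple sensitivity, ~41 uV/C
        rectified_offset_uV = 100.0  # hypothetical DC offset from RFI rectification

        apparent_dT = rectified_offset_uV / SEEBECK_UV_PER_C
        print(f"apparent temperature shift: {apparent_dT:.1f} C")  # ~2.4 C
        # A compensation controller then trims real heater power to cancel a
        # temperature rise that never happened - and the trim reads as excess heat.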


    I also take on board the comments above about input power. These spiky waveforms are difficult to measure directly, because you need very high sample rate integration of V*I, and both measurements can suffer inductive error issues. Approximations here based on scope traces are just not safe.
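    A minimal sketch of the sampling problem (all numbers invented: a 100ns triangular pulse into an assumed purely resistive 10 ohm load, so I = V/R):

        import numpy as np

        PULSE_WIDTH = 100e-9  # assumed pulse width
        LOAD_OHMS = 10.0      # assumed resistive load

        def pulse_v(t, t0, v_peak=100.0):
            # Triangular voltage pulse of width PULSE_WIDTH centred at t0.
            return v_peak * np.clip(1 - np.abs(t - t0) / (PULSE_WIDTH / 2), 0, None)

        def energy_estimate(sample_rate, phase=0.0):
            # Integrate instantaneous power V*I over a 1 us capture window.
            t = np.arange(0.0, 1e-6, 1.0 / sample_rate) + phase
            v = pulse_v(t, 0.5e-6)
            return np.trapz(v * v / LOAD_OHMS, t)

        rng = np.random.default_rng(1)
        true_e = energy_estimate(10e9)  # effectively exact at 10 GS/s
        for rate in (1e9, 50e6, 20e6):
            errs = [abs(energy_estimate(rate, rng.uniform(0, 1 / rate)) - true_e)
                    for _ in range(200)]
            print(f"{rate:.0e} samples/s: worst-case error {100 * max(errs) / true_e:.0f}%")

    With only two or three samples across the pulse, the energy estimate swings by tens of percent depending on where the samples happen to land - before any inductive pickup error is even considered.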


    So I judge this work by how carefully they have controlled or otherwise measured these two obvious error sources.


    Quote

    Since its reconstruction and calibration, I have been able to corroborate that the IPB HHT system moved to SRI continues to produce similar LENR Reaction Heat that it produced up in its Berkeley laboratory at Brillouin. Together with my prior data review, it is now clear that these very similar results are independent of the system’s location (Berkeley or Menlo Park) or operator (Brillouin’s or SRI’s personnel). This transportable and reproducible reactor system is extremely important and extremely rare. These two characteristics, coupled with the ability to start and stop the reaction at will are, to my knowledge, unique in the LENR field to date.


    It is helpful to have such a system. But if, as I believe is the case, the system included all the calorimetry and instrumentation, that in no way helps with the experimental issues above. This type of replication deals with one-off errors or operator error, but not with artifacts of the equipment. Similarly, having multiple identical systems does not deal with equipment artifacts.


    Quote

    We feel that the calorimetry was studied exhaustively and validated to an extremely high level of accuracy (see further discussion and test data review below).


    I trust SRI with the calorimetry, but not with identifying RFI issues, since there is no discussion of these or of what steps have been taken to measure or control them. Maybe there are no such issues, but no-one can know that without a check, which is not described here. If they had extensively checked RFI issues, I'd expect at least a comment on that in the discussion.


    Quote

    COP = (output power delta - heater power delta) / stimulus power


    They use compensation calorimetry, in which a heater keeps temperature constant as the stimulus power is varied. The change in heater power thus must be added to the change in measured output power to get the real output change.


    That is good, in that the compensation removes most calorimetry artifacts. It is bad in that the COP is not referenced to the total input power to the heater: any artifact in that, or in the temperature measurement, therefore has an amplified effect on this differential COP. Specifically, TC drift due to RFI will be amplified by this.
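    A minimal numeric sketch of that amplification (all values invented for illustration):

        # Differential COP per the quoted formula, with invented numbers.
        stimulus_power = 5.0   # W of Q-pulse stimulus (assumed)
        heater_power = 500.0   # W of total compensation heater power (assumed)
        output_delta = 0.0     # W change in measured output power
        heater_delta = -2.0    # W heater trim holding temperature constant

        cop = (output_delta - heater_delta) / stimulus_power
        print(cop)             # 0.4

        # Let RFI shift a TC reading so the controller trims an extra 0.5% of
        # heater power (2.5 W) for no physical reason:
        artifact = 0.005 * heater_power
        cop_bad = (output_delta - (heater_delta - artifact)) / stimulus_power
        print(cop_bad)         # 0.9 - a 0.5% heater-level error moved COP by 0.5

    Because the denominator is the small stimulus power rather than the large heater power, a fraction-of-a-percent artifact at heater level shows up as tens of percent of COP.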


    Quote

    Q pulses are 1% duty cycle and (usually but not always) asymmetric.

    This is optimal for getting rectified RFI in amplifiers. It is clear that they are optimising Q-pulse parameters for effect on the output. Unfortunately, this could be optimising RFI effects rather than LENR effects, and there is no discussion of how to distinguish the two, or even of whether RFI effects could exist. I'd like to see such a discussion.



    Quote

    m factor analysis


    The compensation calorimetry is imperfect, in the sense that the controlled power only includes half of the stimulus power - the rest is lost. This must therefore be calibrated and compensated for. There is then another source of error if anything alters the m factor - how much of the stimulus power is compensated. I have not analysed this properly, and would expect SRI to be safe in doing it, but I would point out that this is an extra layer of interpretation needed to generate the COP, and therefore an extra thing that could have artifacts. In particular, it could interact with other artifacts and make an artifact obscure. I don't know this, but without a deeper analysis I must flag the additional complexity as a possible issue.


    Overall: this is still a very long way from anything that could conceivably be commercial. You don't need these complex measurement systems if you really have COP=1.4. The headline figure here, however, is not exactly a COP=1.4 and should therefore be treated with extreme caution. Personally, I think it most likely that Brillouin are dealing with Q-pulse related artifacts - their system seems highly susceptible to (you might even say highly optimised for) the possibility, and they do not discuss how such artifacts are eliminated. But the setup is complex enough that even without that they could have artifacts that generate this apparent figure from small errors in total power measurement.


    I don't in any way think anyone is dishonest in their work here. But I do think that they are not (as I read here) seriously dealing with potential artifacts, which makes the work of little interest until they do.

    Quote

    Mills claims there would be infinite energy that would be a consequence of the Schrodinger equation; if true, that sounds like an unphysical mathematical artifact that arises from an imperfect modeling of the system at small scales.


    To see how Mills is wrong you need only integrate over the electrostatic potential. Suppose that, close to the nucleus, the electron probability density is constant (as is roughly true in the case Eric supposes). Then the total electrostatic energy is given by an integral over volume of the 1/r electric potential. The purist might wish to note that this is the absolute value of the energy, since the sign is negative:
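    Spelling the integral out (my sketch of the elided expression, assuming a uniform electron charge density \rho out to radius R around a point nucleus of charge e):

        |E| = \int_V \rho \, \frac{e}{4 \pi \varepsilon_0 r} \, dV
            = \frac{\rho e}{\varepsilon_0} \int_0^R r \, dr
            = \frac{\rho e R^2}{2 \varepsilon_0}

    The 1/r divergence of the potential is tamed by the r^2 volume element, so the integral converges.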



    You can see that this is clearly finite, even in the worst case where we assume the nucleus to be a point charge (not true). The Schroedinger equation determines the electron spatial probability density, but does not affect this simple result. Mills would appear to have some fundamental misconception about QM, or possibly a misunderstanding of calculus.


    Perhaps those better equipped than me to understand Mills's writing could explain his comments on this? I may be missing something crucial, but otherwise this is such a glaring error that it alone would make me very cautious about accepting anything similar that Mills has said.


    Quote from Wyttenbach

    In 3D (+t) space an (valence) electron probability inside the nucleus (center) is physical nonsense. This would violate even the underlaying laws of QM


    As Eric points out, this assertion is not obvious and indeed is not true. The exclusion principle in no way prevents a lepton from overlapping a nucleus in this way, and the Schroedinger equation predicts this for the case of s orbitals (which in the case of H are in fact valence, though that does not seem relevant to me).


    It is true that to obtain a non-zero overlap probability we need a physically (and experimentally) realistic model of the nucleus which has finite size. That again is predicted by QCD, observed directly in very many experiments, and not in any way problematic.

    Quote

    Did you mean that we had another theoretical correction of it 2007 and that one matches the 2010 result, do you have a reference


    No time today for a considered post. To answer this: in my post I referenced the correction; for a precise ref see the first page of the later linked reference, which covers the 2007 work. Basically, after the publication of a well-written comprehensive theoretical treatment in 2006, it was (not surprisingly) crawled over by others (who, unlike with Mills's work, can understand the derivations) and any errors got found. There was one - not a "manual adjustment" but a real error in the maths. This is not "manual tuning" but definitive calculation, with the only free parameters being the lepton mass ratios plus some small corrections based on hadron interactions. Those interactions, and the mass ratios, are otherwise experimentally determined.


    Mills would claim that gS does not depend on any of this stuff, which basically means disallowing the type of Standard Model Feynman diagram interactions that are used, with vast success, to explain PB worth of LHC and other accelerator data. How can that suddenly switch off? It makes no sense!


    You don't have to see the Standard Model as the only thing - but over a wide range of observations it has predictive power, and so you can't just arbitrarily switch it off when calculating gS because your name is Mills and you'd like everything to be a classically-derived number...

    @stephan


    Mills claims an expression for the anomaly based only on alpha, without high order corrections from QED interactions. That would be plausible except:
    (1) Do we have any record of Mills actually predicting experimental data from this formula? It looks suspiciously ad hoc to me. When did Mills introduce these terms?
    (2) The evidence for QED interactions is inescapable from other experiments. QED has made many other great predictions confirmed by experiment, using the same model, and if lepton mass ratios can influence this system (as all the other experimentally validated evidence would suggest) then there CANNOT be a closed form for these high order corrections independent of them.


    Let us look at QED. Here is a 2006 prediction based on much independent derivation of the complex maths (corrected a year later by Aoyama, at a time when these theoretical corrections were indistinguishable by experiment). It is 10X more accurate than the best available measurements of alpha and g (since the calculation, g, and alpha are all related, any two can be used to check the third). Now look 4 years later: https://arxiv.org/abs/1012.3627. QED goes on being validated to extraordinary and ever-increasing accuracy by a newer, 10X more accurate, independent experimental measurement.



    Mills's ideas look good if you ignore the rest of theoretical physics, or reckon it is wrong. The trouble is that the stuff that has to be wrong includes a lot of phenomena very well validated by experiment.


    PS - have you checked how accurately this number tracks the experimental values of alpha and g-2? I'm willing to bet that unless Mills updates it with new ad hoc terms it will not track the latest experimental data. Of course, that experimental data comes from other QED calculations...

    Quote from Wyttenbach

    I posted the magnetic energy sample some months ago. It's the most compelling proof that Mills does something correct. The anomalous magnetic moment can be calculated (first order - maxwell) in GUT-CP where as QM has to relay on experimental data.


    I'm afraid I don't quite understand this: QED has done a phenomenally good job of calculating the anomalous magnetic moment of the electron (that is gS-2, where gS is the electron spin g-factor) with exquisite precision (and without hand-waving) that matches experiment. It is one of the predictions that make physicists so confident that whatever GUT we end up with, it must be isomorphic to QM at most scales.


    I don't see anywhere in Mills's writing a derivation of this exact value of gS-2. He does, however, quote the known 1-loop approximate result (alpha/2pi) - as derived in my ref below - without justifying it. It is true that the 1-loop result is a lot simpler than its QED proof! But stating results is not physics, especially when priority of publication does not exist. I'm not sure when the QED result was first published, but it goes back at least as far as Schwinger (1948) for a QM-based radiative correction of alpha/2pi. When did Mills develop his ideas?
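    For concreteness, the Schwinger term and the measurement it approximates (standard numbers, quoted here to 5 figures):

        a_e \equiv \frac{g_S - 2}{2} = \frac{\alpha}{2\pi} \approx 0.0011614
        \quad \text{vs.} \quad
        a_e^{\mathrm{exp}} \approx 0.0011597

    The ~0.15% gap between the two is exactly what the higher-order QED loop corrections account for; quoting alpha/2pi alone reproduces neither the precision measurement nor the machinery behind it.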


    I reference a decent introduction to the gS calculation. It is complex, but both self-consistent and predictive (maths is like that - a correct bit of maths, however complex, stays correct).


    And Wikipedia gives an OK intro to the comparison of QED-derived theory and experiment. I'll answer the "QED needs experimental data to calculate gS" point after Wyttenbach and perhaps Eric have commented. I'm very willing to be corrected.


    Quote from Axil

    There are 18 versions of quantum mechanics. Which version is the correct one? en.wikipedia.org/wiki/Interpretations_of_quantum_mechanics


    @axil - I think you confuse theory with interpretation. Interpretations of QM do indeed abound - and it looks as though we are now getting close to a very exciting resolution with the latest speculative ideas based on spacetime as an emergent quantum phenomenon. But who knows? That is the fun.


    But critically, all of these interpretations give rise to the same numeric predictions, which can be validated by comparison with experiment. I realise you are mostly interested in levels of description that do not make numerical predictions, so you might not pick up on the importance of this, but it is in fact the level at which different interpretations and philosophical ideas coalesce into a definite theory.


    THH

LENR Partners