I'm talking about temperature quantization, not time or power input, and it changed at least four times, with matching breaks in the data. In total, breaks in the data occurred 11 times. You would of course get this if the data was scaled before cut and paste. Each time, the behavior of the reactor altered at these points, as can be seen from the power/temperature relationship.
I am airport hopping all day today, so I can't easily dig out the information. If I remember correctly, the power meter pulses once every 12,500 joules (or 1,250, or ?). So the time interval between pulses is essentially random, especially with the PID flipping power off and on. The power graph line is plotted by taking the number of seconds between pulses, calculating the power in that period, then averaging the Wh result over several minutes, which incorporates a near-random number of pulse calculations on either side of the pulse once the time period is normalized. So the power plotted at any one point on the graph does not precisely reflect the power used at any particular moment relative to the temperature at that point in time, and it seems to incorporate values from the "future" of the time where they are plotted. This use of future values seems to accentuate the steps.
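As I understand it, the plotting scheme works something like the sketch below. This is a hypothetical reconstruction, not Parkhomov's actual code: the 12,500 J quantum and the window length are my assumptions. Note how the centered window pulls in pulse intervals from after the plot time as well as before it:

```python
# Hypothetical sketch of the pulse-averaging described above.
# Each meter pulse marks a fixed energy quantum; power between pulses
# is E / dt, and the plotted value is a centered average that pulls in
# intervals from both before and *after* the plot time.

E_PULSE = 12500.0   # joules per meter pulse (assumed)
WINDOW = 300.0      # averaging window in seconds (assumed: "several minutes")

def interval_powers(pulse_times):
    """Instantaneous power for each inter-pulse interval: (t_mid, watts)."""
    out = []
    for t0, t1 in zip(pulse_times, pulse_times[1:]):
        out.append(((t0 + t1) / 2.0, E_PULSE / (t1 - t0)))
    return out

def plotted_power(pulse_times, t):
    """Centered average over WINDOW -- note it uses 'future' pulses too."""
    pts = [p for (tm, p) in interval_powers(pulse_times) if abs(tm - t) <= WINDOW / 2]
    return sum(pts) / len(pts) if pts else 0.0

# Example: a steady 500 W heater produces a pulse every 25 s
pulses = [25.0 * k for k in range(100)]
print(plotted_power(pulses, 600.0))  # ~500 W
```

With a PID chopping the power on and off, the inter-pulse intervals become irregular, and the centered window can smear a future power change backward into earlier plotted points, which would accentuate step-like features.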
I was able to almost eliminate the power steps using different averaging techniques, but they were always there to some degree.
I don't recall if this affects the temperature steps.
Ah, yes, I was aware of the timing issue and understand; this is not related to that. I took Parkhomov's data and performed some simple analysis in a new spreadsheet. It seems to highlight some serious issues with the temperature data.
Although it was some good sleuthing, I found no happiness from working it out.
It made me sad. It was so unnecessary. Still, Parkhomov may have been covering something up. Not the temperature data itself, which was probably roughly the temperature shown, but the fact that some of it was missing. What happened?
This is the story that I pieced together. Parkhomov was running his computer on battery. Okay, that makes sense. But, wait! Why was he doing this?
Because when he plugged the computer in, he had so much noise on the thermocouple input lines that he could not use them to read temperature. His device is a nice little transformer, with a possibly variable core.
Instead of shielding, he floated the notebook, reducing noise. However, there might then be noise depending on stray capacitance in the environment, i.e., he touches the notebook or the like.
The data gap revealed how sloppy his work was, and he did not want to admit that.
It makes me sad because he left science behind, and started to believe in his own ideas, and became motivated to prove himself right. I actually didn't mind that the work was sloppy (or we could call it "quick and dirty" as engineering often is), as long as he didn't cover it up!
There was a suggestion of heater coil noise being introduced into the thermocouple, due to the mullite resistance dropping at high temperatures. I am not aware that anyone has shown that this was indeed the cause, or exactly how much noise a thermocouple would see. The effect was demonstrated, however.
Thanks for posting that Alan. I didn't want to have to dig that up myself.
Let me just wrap up the paste story by saying that I would have been much more satisfied with finding a calculation problem, an explainable artifact, or, best of all, finding that it was totally unexplained by any mundane cause but contained a decent hint of some sort of new effect.
Anyways, we have wandered way off into the past. I hope that there will be some more information regarding this latest experiment. The results seem to be in line with a whole bunch of recent experiments, which gives hope that whatever is doing it can be nailed down, whether it is a weird artifact or an interesting new phenomenon.
Just been reading your neat little paper on AC 'leakage' at high temperatures. I have found evidence of this too, but have also noticed that it is very dependent on the orientation of the TC's - and in my case a bit counterintuitive. I found that with the Model T reactor a TC at 90 deg. to the heater coils is more susceptible to picking up spurious voltages than one which is parallel to them. Not at all what you would expect. And this btw is using pulsed DC.
I found that with the Model T reactor a TC at 90 deg. to the heater coils is more susceptible to picking up spurious voltages than one which is parallel to them.
Hello Alan: That sounds interesting! I would run some experiments at various angles and compile statistics afterwards.
It may be helpful to contemplate whether the signal seen on the TC is inductively-coupled, purely resistive leakage, or a combination of both.
Your finding suggests the last option, and this can be further investigated by comparing the signal on a TC with separated leads to that on a TC with the leads twisted on the portion leading away from (or along side) the heater coil. You might also measure the signal on a TC arranged close to the reactor but not touching it, thus breaking the conduction path.
All good advice, and my helper and I did that and a bit more - including extra shielding, checking for ground loops and so on. We are both (formerly) radio hams -which helps. In the end the answer was to orient the TC's so that we didn't see any spurious voltages. 'Simples' ! Suffice it to say it made the problem go away entirely.
Some of the more esoteric papers on electric field/magnetic vector potentials suggest (to me, at any rate) that once you go past the Debye temperature of some materials, things change in unexpected ways. Perhaps that is what we saw. Quien sabe?
I think that if the coupling was purely inductive, some phase shift would be visible. I saw none in my testing.
Ah, but there is a difference - although our TC's are buried in an Alumina foam block there is a Quartz tube between coil and TC - that probably explains why we saw slightly different effects to your experience. Apologies, only just occurred to me. It's probably six months ago now, and I can't remember what the scope showed - I think that when we lost the problem we just felt grateful and carried on.
Looks like a good experiment. There is still some hope for Ni-H LENR!
Why didn't he ask about the powder details? Grain size/type/manufacturer etc etc? Isn't that the main spice?
Unlike many, I am somewhat encouraged by this latest experimental set up and results, though there are some (apparent) anomalies which I would hope further experiments might address.
First, the apparent measurement of 100 MJ of energy released puts the energy far in excess of anything that could come from a chemical reaction (about 30,000×). It does seem that, with the calorimeter calibrated to within 3%, one can be fairly confident that this energy was released (notwithstanding comments about possible reactions of the water jacket walls with the fluid, which seem highly unlikely with liquid water below 100 °C, and given the amount of material that would have to react).
Coefficient of Power
The “coefficient of power” being in the range of 1.2 says little about the viability of the reaction for energy production, and more about the relative values of thermal conductivity when “activating” the nickel (raising it to 1200 °C) and when extracting power from it. It is clear that the input energy is almost entirely going directly into the output water bath, not changing the state of the Ni or LiAlH4 (otherwise, the input power would be HIGHER than the output power in the early phases of the experiment). Thus, with low thermal conductivity to the “output” in the loading phase, and with conductivity such that the temperature is maintained at 1200 °C in the running phase while delivering power, the experimental COP could be very high (apparently greater than 5).
Experimental error sources
No experiment is perfect. I daresay, if this weren’t a “cold fusion” experiment where some insist that “extraordinary claims require extraordinary evidence”, the representations made in the experiment regarding energy released would have been accepted at face value. The real question is: given the potential sources of experimental error, are the conclusions of the experiment still qualitatively justified? I.e., is there a release of energy far exceeding what can be explained by chemical reactions? To answer this, one needs to weigh the consequences of experimental error against the reported total energy release, which is very high: 3 × 10⁹ joules per mole of nickel (assuming all the nickel reacted), vs. the roughly 10⁵ joules per mole expected from a normal chemical reaction. The only factors affecting the energy-out and energy-in calculations are:
Energy in: Voltage and current at the driving point of the wire wound resistor
Energy out: Change in temperature and water flow rate of the calorimeter
Leakage of energy through the insulation of the calorimeter (which would tend to decrease the apparent energy generated)
To justify an error in (apparent energy out − energy in)/energy in of, say, 25%, one would have to have a combination of flow rate error and temperature measurement error adding up to 25%. Though this is possible, it would be hard to believe given appropriately chosen flow rates and hence temperature rises. If, for example, the temperature rise were 15 degrees [Parkhomov says he has a 20 degree rise at maximum reactor temperature and power], measured to an accuracy of 0.15 degree (±0.07 degree), and the flow rate were constant to 1%, the energy error would be well within acceptable ranges (essentially ±1.5%).
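A quick propagation check of those figures (my own arithmetic, not from the paper): the output power is P = ṁ·c·ΔT, so independent relative errors in the flow rate and the temperature rise combine in quadrature.

```python
# Error-propagation sketch for the calorimetry figures above.
# P = mdot * c * dT, so for independent errors the relative error of P is
# sqrt((err_dT/dT)^2 + (err_flow)^2). Values below are the ones assumed
# in the discussion, not measured quantities.
import math

dT = 15.0            # K, assumed temperature rise
dT_err = 0.15        # K, assumed total error on the rise measurement
flow_rel_err = 0.01  # 1% assumed flow-rate stability

rel_err = math.sqrt((dT_err / dT) ** 2 + flow_rel_err ** 2)
print(f"relative energy error ~ {rel_err * 100:.1f}%")  # ~1.4%
```

This lands close to the ~1.5% quoted above, i.e., far below the 25% that would be needed to explain away the excess.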
Parkhomov says he used an Oregon Scientific rain gauge to measure the flow rate, and that its accuracy is 1% (0.1 cc in 10 cc). However, I haven’t found any Oregon Scientific rain gauges with a digital interface to a computer. This may imply that once the flow was set, it was assumed to be constant (as determined by the pressure head established by the tray visible above the experiment, and either a valve or resistance in the tubing). If so, this would be a dangerous assumption, since water viscosity (and therefore flow rate) is a strong function of temperature (about 1.0 centipoise at 20 °C and about 0.65 at 40 °C), which would result in inaccurate estimates of the energy output (though one would think it would probably underestimate, rather than overestimate, the output). Either the flow has to be monitored continuously, or a positive-displacement pump should be used to force a fixed flow rate (and I didn’t see such a pump in the block diagram or photo).
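To put a rough number on that drift: if the tubing restriction is in the laminar regime, Poiseuille flow makes the volumetric rate inversely proportional to viscosity. That laminar assumption is mine; the actual restriction could behave differently.

```python
# Sketch of why an unmonitored gravity-fed flow can drift with water
# temperature. Assumption: laminar (Poiseuille) flow through the
# restriction, so Q ~ dP / viscosity at a fixed pressure head.
mu_20C = 1.0    # centipoise, water dynamic viscosity near 20 C
mu_40C = 0.65   # centipoise, near 40 C

flow_ratio = mu_20C / mu_40C
print(f"flow at 40 C would be ~{flow_ratio:.2f}x the flow at 20 C")
```

A ~50% flow change between a cool and a warm day is exactly the kind of error that continuous monitoring or a positive-displacement pump would eliminate.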
One would like to measure the type and quantity of ash to discover what the reaction is. If one assumes, for example, that each reaction produces 5 MeV of energy (8 × 10⁻¹³ joules), then the 100 MJ would require 1.25 × 10²⁰ reactions. Out of the 0.034 moles of Ni, there are 0.034 × 6.02 × 10²³ ≈ 2.0 × 10²² Ni atoms present, so about 0.6% of them (by number, not mass) would have to react. Thus we should expect about this much ash of some isotope. If this amount is higher than 10% of the naturally occurring abundance, it should be easily detectable. For solids, this should be possible in most circumstances. For gases, like He-4 or tritium, it should be difficult but possible. The volumes are about 1.5 (for molecules) and 3 (for inert gases) standard cc’s over the life of the experiment, and would require sophisticated collection apparatus and a good residual gas analyzer, which clearly aren’t built into the experimental setup.
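The back-of-envelope above is easy to reproduce; note that the 5 MeV per reaction is an assumed figure, not a measurement.

```python
# Reproducing the ash estimate above. The 5 MeV/reaction is the
# discussion's assumption; 100 MJ and 0.034 mol Ni are the reported values.
E_TOTAL = 1.0e8                  # J, ~100 MJ released
E_PER_RXN = 5e6 * 1.602e-19      # 5 MeV in joules, ~8e-13 J
N_A = 6.022e23                   # Avogadro's number
moles_ni = 0.034

reactions = E_TOTAL / E_PER_RXN  # ~1.25e20 reactions
ni_atoms = moles_ni * N_A        # ~2.0e22 atoms
fraction = reactions / ni_atoms  # fraction of Ni atoms that must react
print(f"{reactions:.2e} reactions, {fraction * 100:.2f}% of Ni atoms")
```

Even well under 1% of the nickel converting to some other isotope should leave a detectable solid-ash signature if it exceeds ~10% of the natural abundance of that isotope.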
Measuring reaction temperature
It isn’t required to measure the reaction temperature to determine the energy output, but it would be extremely useful for calculating activation energies. However, since the reaction is extremely exothermic, the heat produced is large and probably non-uniformly distributed, the morphology of the reactants is likely changing, and the conductivity of alumina is low, it is likely that the temperature is not uniform across the reaction region. For example, if just 1 watt is placed across a 1/4 cm² cross-section of alumina 1 cm long, the temperature differential is W·l / (A·k), or 1 × 1 / (0.25 × 0.035) ≈ 114 °C. There could easily be hot spots in the heat-generating section. Further, the nickel will conduct in some regions but not in others as it aggregates, leading to variability of the perceived temperature at a small spot over time. This could lead the control circuit to apply power because of an apparent fluctuation of temperature at that location. In this way, the apparent temperature of the core could appear stable (as regulated by the computer and measured by the controlling thermocouple), whereas the power output from the calorimeter could vary (normally one would expect the measured calorimeter power to track the reaction temperature). An improvement to the experiment could include multiple thermocouples, or an external cladding of the inner ceramic that is highly thermally conductive, with a thin-walled ceramic that minimizes the temperature drop from the heat-generating nickel to the added conductive cladding.
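The one-dimensional conduction estimate above can be checked directly; the thermal conductivity value is the one used in the discussion (alumina's conductivity varies with temperature and porosity, so treat it as an assumption).

```python
# 1-D steady-state conduction: dT = P * l / (A * k), as in the text above.
P = 1.0     # W, heat flowing through the section
l = 1.0     # cm, path length
A = 0.25    # cm^2, cross-sectional area
k = 0.035   # W/(cm*K), alumina conductivity assumed in the discussion

dT = P * l / (A * k)
print(f"temperature drop ~ {dT:.0f} C")  # ~114 C
```

So even a single watt routed through a small alumina cross-section can hide a >100 °C gradient between the reacting nickel and the controlling thermocouple.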
One issue that others have mentioned is the apparent increase in output power while the apparent reaction temperature is constant. One possible explanation for that has been given just above. Possibly related anomalies appear at 28.04 and 07.05, when the apparent heat supplied decreases, then increases in a step function. This could be the result of the changing efficiency of the heater as the heater coils displace, altering the fraction of the heat going directly to the calorimeter (rather than to raising the temperature of reactants away from the thermocouple).
The reaction has created much more energy than can come from chemical reactions, a very important result. The term COP is not a particularly useful measure of the viability of the Ni:H system as a power source, since COP is determined as much or more by the experiment configuration as by whether the reaction produces a lot or a little energy. Probably more useful in this case is the energy output per mole, which is apparently greater than 100 MJ / 0.034 moles, or about 3 × 10⁹ joules per mole. Controlling highly exothermic reactions is a challenge. Doing it by controlling reaction temperature probably isn’t a workable solution, given the temperature rise occurring in a material of modest thermal conductivity. At some point hot spots may form, affecting the crystallinity of the nickel. Possibly this could limit the reaction rate.
Did you provide the flow meter for this experiment, Jed?
No, I had nothing to do with it.
It is a pretty good flowmeter, though. I like that type. Bob Higgins posted a photo of this model in Vortex, with this description:
When one cup fills to 10 g of water, it flops over and presents the other cup. Each flop causes a magnet to pass a reed switch, which produces a pulse. Parkhomov said he measured a noise of about ±0.1 g for each flop. The ±0.1 g may not have been repeatability or noise; for example, the left cup could be 9.9 g and the right cup 10.1 g, depending on the leveling of the system.
My comment: Ed Storms and Mike McKubre both used this style of mass-flow meter. This has some advantages over turbine types, calorimetric types (that heat the water) and others described here:
The direct mass flow ones are less likely to clog up, and they are accurate over a broad range of flow rates.
The Q&A from Higgins does not have much detail, but I like what I see so far. The calorimetry is much better than it was in Parkhomov's earlier experiments.
I've used some flowmeters based on the Coriolis effect, specifically from Micro-Motion. They are expensive (about $5k each), but I have had good luck buying used ones from eBay for under $200 and sending them to factory-authorized calibration shops. The advantage is that they work with two-phase flow as long as the quality is reasonable: liquid with gas bubbles is OK, as is gas with some droplets. They don't handle slug flow very well. That capability was needed in my application, measuring the flow of liquid oxygen just a few degrees below saturation. They are legal for trade, but the problem is that you have to trust them, as there is no analogue of filling a known cup and dumping it.