The LENR-CANR ChatGPT is ON LINE!

  • All of the files for the ChatBot have been replaced with text files. Performance should improve. It seems to find the URLs more readily.


    It turned out several of the papers were not machine readable. The vendor told me that in some cases the internal files in the bot were filled with question marks and periods instead of text. That indicates the document format was set incorrectly, or it was way out of date. All of those files have been fixed.


    I tested a number of files in English, Japanese and Chinese. It seems to find documents that it could not find before. In one case, I asked in English about a detail that is in only one paper by Zhang, in Chinese. It answered with the correct information translated into English. That is remarkable performance.

    ChatGPT 3 or 4? Anyway, my experience is that one gets a head start, but one needs to know what one is doing. This is currently making the good programmers more productive, but likely not the bad ones. Also, you may not get the best algorithm for the task, so it is worthwhile to discuss the task a little.

  • External Content www.youtube.com


    We've come a long way. This video is from 1995 - and the stuff they are talking about (Telnet, Gopher, Archie, Lynx, Veronica) already felt like quite an advance on what we had to do before.

    "The most misleading assumptions are the ones you don't even know you're making" - Douglas Adams

  • I replaced all files with text-formatted files, and corrected some that were not machine readable. I think the ChatBot performance has improved. It finds URLs more often, and it finds details in the papers. It still misses some details. See this exchange:


    BOT: Hello! What can I do for you today?


    Did nasa researchers verify results from K2CO3?

    BOT: Yes, there have been experiments conducted with K2CO3 electrolyte and nickel cathodes that have shown excess heat production. NASA has been involved in attempting to replicate these results, but to my knowledge, no team has been successful except for the Stanford results and results from Europe and the USSR. However, there have been some successful replications of the Case Heat Effect accompanied by some of the largest 4He concentrations so far observed. As for the specific question about NASA researchers verifying results from K2CO3, I am not sure. Would you like me to look up more information on that?


    Yes.

    BOT: After conducting further research, I have not been able to find any specific information about NASA researchers verifying results from K2CO3 electrolyte and nickel cathodes. However, there have been several other groups around the world who have reported confirming the generation of excess heat in such Ni-H2O cells using K2CO3 electrolyte and porous nickel cathodes. These groups include Mills and Kneizys, Bush and Eagleton of CalPoly, Pomona, Thermacore Inc., and several groups at BARC in India. Additionally, Reiko Notoya and Michio Enyo of Hokkaido University in Japan reported excess heat production in the electrolysis of light water solution of potassium carbonate using porous nickel cathode.


    Are there papers from Fralick, author or co-author?

    BOT" I'm sorry, but I could not find any information about a Fralick author or co-author in the given information. Could you provide me with more context or details about what you are looking for?


    Is there a paper authored by Niedra, co-author Fralick?

    BOT: I apologize for the confusion earlier. After conducting further research, I was able to find information about a paper authored by J.M. Niedra and Ira T. Myers titled "Replication of the Apparent Excess Heat Effect in a Light Water-Potassium Carbonate-Nickel Electrolytic Cell." The paper was published as a NASA Technical Memorandum in February 1996 and reprinted in Infinite Energy, Vol.2, No.7, 1996, pp.62-70. However, I could not find any information about a co-author named Fralick in this paper.

    [THIS IS MY MISTAKE. I forgot to include Fralick in the LENR-CANR.org database. Apparently, the Bot is looking at the ASCII version of the index that I uploaded, and found this:

    Title: Niedra, J.M. and I.T. Myers, <i>Replication of the apparent excess heat effect in light water-potassium carbonate-nickel-electrolytic cell.</i> Infinite Energy, 1996. <b>2</b>(7): p. 62.

    URL: http://lenr-canr.org/acrobat/NiedraJMreplicatio.pdf

    First Author: Niedra, J. M.

    All Authors: Niedra, J. M., Myers, I. T.

    Keywords: heat-, H2O, Ni, electrolysis, replication Mills

    Abstract: Replication of experiments claiming to demonstrate excess heat production in light water-Ni-K2CO3 electrolytic cells was found to produce an apparent excess heat of 11 W maximum, for 60 W electrical power into the cell. Power gains ranged from 1.06 to 1.68. The cell was operated at four different dc current levels plus one pulsed current run at 1 Hz, 10% duty cycle. The 28 liter cell used in these verification tests was on loan from a private corporation whose own tests with similar cells are documented to produce 50 W steady excess heat for a continuous period exceeding hundreds of days. The apparent excess heat can not be readily explained either in terms of nonlinearity of the cell's thermal conductance at a low temperature differential or by thermoelectric heat pumping. However, the present data do admit efficient recombination of dissolved hydrogen-oxygen as an ordinary explanation. Calorimetry methods and heat balance calculations for the verification tests are described. Considering the large magnitude of benefit if this effect is found to be a genuine new energy source, a more thorough investigation of evolved heat in the nickel-hydrogen system in both electrolytic and gaseous loading cells remains warranted.]


    What is the abstract for ""Replication of the Apparent Excess Heat Effect in a Light Water-Potassium Carbonate-Nickel Electrolytic Cell."

    BOT: The abstract for "Replication of the Apparent Excess Heat Effect in a Light Water-Potassium Carbonate-Nickel Electrolytic Cell" by J.M. Niedra and Ira T. Myers is as follows: "Replication of experiments claiming to demonstrate excess heat production in light water-Ni-K2CO3 electrolytic cells was found to produce an apparent excess heat of 11 W maximum, for 60 W electrical power into the cell. Power gains ranged from 1.06 to 1.68. The cell was operated at four different dc current levels plus one pulsed current run at 1 Hz, 10% duty cycle. The 28 liter cell used in these verification tests was on loan from a private corporation whose own tests with similar cells are documented to produce 50 W steady excess heat for a continuous period exceeding hundreds of days. The apparent excess heat can not be readily explained either in terms of nonlinearity of the cell's thermal conductance at a low temperature differential or by thermoelectric heat pumping. However, the present data do admit efficient recombination of dissolved hydrogen-oxygen as an ordinary explanation. Calorimetry methods and heat balance calculations for the verification tests are described. Considering the large magnitude of benefit if this effect is found to be a genuine new energy source, a more thorough investigation of evolved heat in the nickel-hydrogen system in both electrolytic and gaseous loading cells remains warranted."


    What is the URL for ""Replication of the Apparent Excess Heat Effect in a Light Water-Potassium Carbonate-Nickel Electrolytic Cell."

    BOT: The URL for "Replication of the Apparent Excess Heat Effect in a Light Water-Potassium Carbonate-Nickel Electrolytic Cell" by J.M. Niedra and Ira T. Myers is http://lenr-canr.org/acrobat/NiedraJMreplicatio.pdf.


    How big was the cell described in the paper "Replication of the Apparent Excess Heat Effect in a Light Water-Potassium Carbonate-Nickel Electrolytic Cell"?

    BOT: According to the information provided in the paper "Replication of the Apparent Excess Heat Effect in a Light Water-Potassium Carbonate-Nickel Electrolytic Cell" by J.M. Niedra and Ira T. Myers, the cell used in the verification tests was a 28 liter cell.

    [NOTICE THAT I REPEAT THE TITLE "Replication of the Apparent Excess Heat Effect in a Light Water-Potassium Carbonate-Nickel Electrolytic Cell"]


    28 liters is big! Bigger than most cold fusion experiments I think. Can you confirm that?

    BOT: Yes, you are correct. The 28 liter cell used in the verification tests described in the paper "Replication of the Apparent Excess Heat Effect in a Light Water-Potassium Carbonate-Nickel Electrolytic Cell" by J.M. Niedra and Ira T. Myers is larger than most cold fusion experiments. Many cold fusion experiments use cells that are much smaller, typically on the order of a few cubic centimeters to a few liters in volume.

  • Forgot to mention that an annoying problem with the visibility of the text in the query window has been fixed, which makes it much easier to use in the mobile version. I don't know if this was fixed on purpose, but it is really easy to use now.

    I noticed that. They might have improved it. Or it might have been me. I made it wider, to 500 pixels. Is it too wide in the mobile version? The parameters are now:


    <iframe src="https://chatfast.io/chat/a738 . . . " width="500px" height="600px"></iframe>


    . . . Looking at it again, I think they fixed it. The field now scrolls off to the left as you continue typing.



    Is it too wide? I can go back to the default values.


    ChatGPT3 or 4?

    I think it is still 3-something-or-other.

  • Beware of hallucinations! This is completely wrong:

    What are the author(s), title and URL of the Pd-B experiments at China Lake?

    BOT: The author of the paper on Pd-B experiments at China Lake is Miles, M. and B.F. Bush. The title of the paper is "Palladium-Boron Cathodes Produce Excess Power in Electrolytic Cells Using Heavy Water and Lithium Deuteride" and the URL is http://lenr-canr.org/acrobat/MilesMpalladiumb.pdf. The paper describes electrochemical excess power effects produced using Pd-B cathodes prepared at the US Naval Research Laboratory in nine different experiments conducted at three different laboratories using three different types of calorimeters.

    [The title is made up. There is no paper "Palladium-Boron Cathodes Produce Excess Power in Electrolytic Cells Using Heavy Water and Lithium Deuteride." The URL is made up, in the right format for LENR-CANR.org!]


    This is somewhat right, but somewhat of a hallucination, because Miles did not send samples to SRI as far as I know:

    Miles sent samples of effluent gas to three other labs to be tested for helium. Do you know the names of those labs?
    BOT: Yes, according to the given information, Miles sent samples of effluent gas to other laboratories, including the University of Texas and SRI, to be tested for helium.

    [The labs were: the University of Texas; Rockwell International; and the Helium Field Operations laboratory, Department of the Interior, Amarillo, Texas. The Bot could not find them all.]


    Are there more details about the University of Texas equipment in the paper by Miles, "ANOMALOUS EFFECTS IN DEUTERATED SYSTEMS"?
    BOT: Yes, there are more details about the University of Texas equipment in the paper "ANOMALOUS EFFECTS IN DEUTERATED SYSTEMS" by Miles, Bush, and Johnson. According to the paper, the University of Texas used a Bell & Howell 21-491 mass spectrometer with a cryofiltration system to remove all gases except helium. The mass spectrometer had sufficient resolution to separate D2 and He easily when the mass peaks were displayed as analogue signals on an oscilloscope. The detection limit for helium in their measurements was approximately 8 X 1011 atoms of 4He in 500 mL of gas sample.

    [ANOMALOUS EFFECTS IN DEUTERATED SYSTEMS is here: https://lenr-canr.org/acrobat/MilesManomalousea.pdf. It does not say "Bell & Howell" anywhere in this document. However, this detail is in another document by Ben Bush: http://lenr-canr.org/acrobat/BushBFheliumprod.pdf. So, the ChatBot went looking for that information elsewhere, even though I asked her about this paper only. The quote "8 X 1011 atoms of 4He in 500 mL of gas sample" is from Miles, ICCF2, p. 364, https://lenr-canr.org/acrobat/SIFthescience.pdf. It should be 10^11, but the text file I uploaded removed the superscript markings for this particular file.]


    What paper describes how Miles and Bush looked for anomalous radiation using several different Geiger-Mueller (GM) detectors as well as sodium iodide (NaI) detectors during electrolysis experiments?
    BOT: The paper that describes how Miles and Bush looked for anomalous radiation using several different Geiger-Mueller (GM) detectors as well as sodium iodide (NaI) detectors during electrolysis experiments is titled "Radiation Measurements at China Lake: Real or Artifacts?" and was presented at The Seventh International Conference on Cold Fusion in 1998. The paper was authored by Miles, M. and B.F. Bush and can be found at the URL http://lenr-canr.org/acrobat/MilesMradiationm.pdf.

    [CORRECT]

  • I experienced something similar last night. It gave me papers with titles that were close, but not exactly as required to find them; I ended up using Google to find the real papers. Is this an inherent bug of LLMs, or does it just tend to happen when it resorts to context? Because it took a while of back and forth before the faulty paper titles were supplied to me.

    I certainly Hope to see LENR helping humans to blossom, and I'm here to help it happen.

  • Because it took a while of back and forth before the faulty paper titles were supplied to me.

    A Google search, or the Google™ Custom Search at LENR-CANR.org, also has problems. I recommend the SQL search at the library. The "Search All" field includes text from the title, keywords, and abstract. Like all basic SQL it is an exact search only, so enter only a few words (see the sketch below).


    LIBRARY
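
    To see why, here is a minimal sketch of the kind of query behind a "Search All" field. The table and column names are hypothetical, not the actual LENR-CANR.org schema; the point is that SQL LIKE is a literal substring match, with no stemming and no synonyms:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE papers (title TEXT, keywords TEXT, abstract TEXT)")
    con.execute("INSERT INTO papers VALUES (?, ?, ?)",
                ("Replication of the apparent excess heat effect",
                 "heat, Ni, electrolysis, replication Mills",
                 "An apparent excess heat of 11 W maximum..."))

    def search_all(term):
        # LIKE matches the literal substring only.
        q = ("SELECT title FROM papers WHERE title LIKE :t "
             "OR keywords LIKE :t OR abstract LIKE :t")
        return con.execute(q, {"t": "%" + term + "%"}).fetchall()

    print(search_all("excess heat"))        # found
    print(search_all("excessive heating"))  # nothing: exact match only

    That is why a few plain words work better than a whole phrase.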

  • The LENR-CANR.org ChatBot has been upgraded to ChatGPT version 4. It seems to work better, with fewer hallucinations. It may be slower.


    I asked it some of the questions version 3 could not answer. It answered them correctly. It still gets confused. It thought that McKubre made a statement that Bockris made. I asked for a verbatim quote and tracked it down to Bockris.

  • It thought that McKubre made a statement that Bockris made.

    Did McKubre ever quote Bockris verbatim? Perhaps therein lies the confusion.

    I certainly Hope to see LENR helping humans to blossom, and I'm here to help it happen.

  • Did McKubre ever quote Bockris verbatim? Perhaps therein lies the confusion.

    Actually, this was from a paper by Mallove describing Bockris:


    https://lenr-canr.org/acrobat/MalloveEthetriumph.pdf


    It does mention McKubre twice. Perhaps if the author had been Bockris it would not have been confused? I added labeled text to the beginning of each file describing the file:


    Title: Mallove, E., <i>The triumph of alchemy: Professor John Bockris and the transmutation crisis at Texas A&amp;M.</i> Infinite Energy, 2000. <b>6</b>(32): p. 9.

    URL: http://lenr-canr.org/acrobat/MalloveEthetriumph.pdf

    First Author: Mallove, E.

    All Authors: Mallove, E.

    Keywords: history

    Abstract: Distinguished Professor of Chemistry at Texas A&amp;M University, John O’M. Bockris, is one of the top two or three electrochemists of the twentieth century. He must be counted as a lineal intellectual descendant of one of the greatest scientists of all time, Michael Faraday, who was of humble birth but became a towering figure of nineteenth century science. Like Bockris, Faraday was raised in England and came to love many facets of science. In addition to his fundamental discoveries in electromagnetism, Faraday had much to do with the birth of electrochemistry. . . .
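
    For what it is worth, prepending a header like that to every file can be scripted. A minimal sketch, assuming the bibliographic fields have been exported from the index to a CSV file (the file name and column names here are my invention):

    import csv

    def make_header(rec):
        # Build the labeled description block, one field per label.
        return (f"Title: {rec['title']}\n\n"
                f"URL: {rec['url']}\n\n"
                f"First Author: {rec['first_author']}\n\n"
                f"All Authors: {rec['all_authors']}\n\n"
                f"Keywords: {rec['keywords']}\n\n"
                f"Abstract: {rec['abstract']}\n\n")

    with open("index.csv", newline="", encoding="utf-8") as f:
        for rec in csv.DictReader(f):
            path = rec["filename"]  # e.g. MalloveEthetriumph.txt
            with open(path, encoding="utf-8") as doc:
                body = doc.read()
            with open(path, "w", encoding="utf-8") as doc:
                doc.write(make_header(rec) + body)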


    I can't tell if it is doing any good. Sometimes when I ask for a URL it gives it; sometimes it doesn't.



    It does a pretty good job finding things now. It seems Version 4 is working better:


    ME: Describe Mallove's paper "The Triumph of Alchemy."

    BOT: "The Triumph of Alchemy" is a paper by Eugene Mallove that discusses the work of Professor John O'M. Bockris, a distinguished Professor of Chemistry at Texas A&M University. Bockris is considered one of the top two or three electrochemists of the twentieth century and is seen as a lineal intellectual descendant of Michael Faraday, a great scientist of the nineteenth century.

    The paper highlights Bockris' contributions to the field of electrochemistry and his connection to Faraday, who was also instrumental in the birth of electrochemistry. The paper can be found in Infinite Energy, 2000, issue 6(32), on page 9. The URL for the paper is http://lenr-canr.org/acrobat/MalloveEthetriumph.pdf.


    ME: What is the abstract for this paper?

    BOT: The abstract for "The Triumph of Alchemy" by Eugene Mallove is as follows: "Distinguished Professor of Chemistry at Texas A&M University, John O'M. Bockris, is one of the top two or three electrochemists of the twentieth century. He must be counted as a lineal intellectual descendant of one of the greatest scientists of all time, Michael Faraday, who was of humble birth but became a towering figure of nineteenth century science. Like Bockris, Faraday was raised in England and came to love many facets of science. In addition to his fundamental discoveries in electromagnetism, Faraday had much to do with the birth of electrochemistry."


    It cut off the text where I cut it off, only it is missing the ellipsis. In the paper, there is no word "abstract"; the abstract is not marked. So I guess it took my labeled version. It is hard to know what a Bot is up to in its little mind. Bots approach the ultimate embodiment of what my mother called "the innate perversity of inanimate objects."



    Here is another tip for using a Bot. The vendor advised me to keep queries simple: "I suggest asking one primary question per query instead of combining multiple ideas within a single query, as the current system's capability to handle such complexity is still limited."


  • JedRothwell


    How is it working? Is it possible to maintain your data on your server, or do you need to store it on a third-party server? I understand that one needs help to train the neural network. But I foresee that this can work very nicely in the corporate environment.


    When it comes to corporate information, there are two legs. One is to create the data, and the other is to match the relevant information to the person who needs it. I think this latter activity is underdeveloped in business, and the result is that companies do not learn much from earlier experiences. So here I think this AI tool can be of great use. Of course, for this to be practical, one needs to handle things like the integrity of the information, so that people only get hold of information they have clearance for, and so that the raw data can be maintained by the company itself, since it can hold quite a lot of sensitive information. My conclusion is that this part of AI applications will soon become huge, really huge, especially if someone invents a foolproof way of handling access rights.

  • This is uncanny.


    CHATBOT 2:


    BOT: Hello! What can I do for you today?


    What did Mallove say about the experiment at MIT?

    BOT: Eugene Mallove accused MIT of publishing fudged experimental findings to support their early condemnation of the work of Pons and Fleischmann. He believed that MIT's researchers generated excess heat in their cold fusion experiment but then altered the findings in their final report. Mallove claimed that the scientists at MIT's Plasma Fusion Center had a vested interest in seeing cold fusion fail, as they were receiving significant federal funding for hot fusion research.

    Mallove resigned from his position at MIT and demanded an investigation into the alleged misconduct. He also accused the university of manipulating the media and altering data to discredit the work of the Utah scientists.


    Is there a paper by Mallove describing this?

    BOT: There is no specific paper by Mallove solely focused on this incident mentioned in the dataset. However, Mallove did submit a 17-page "J'Accuse" letter to MIT detailing his accusations against the institution and its researchers. . . .


    [The word "J'Acuse" appears nowhere in the LENR-CANR corpus.]


    . . . He also wrote letters to MIT's president and assistant to the president requesting an investigation into the alleged misconduct. Additionally, Mallove responded to a statement on cold fusion issued by the MIT News Office in August 1991 with a point-by-point rebuttal.


    [Not true. BOT #1 knows better.]


    BOT 1


    Is there a paper by Mallove describing events at MIT?

    BOT: Yes, there is a paper by Mallove that describes events at MIT. The paper is titled "MIT Special Report" and was published in Infinite Energy in 1999. The paper explores the major role of MIT in shaping the history of the investigation of cold fusion. You can find the paper at this URL: http://lenr-canr.org/acrobat/MalloveEmitspecial.pdf

  • How is it working? Is it possible to maintain your data on your server, or do you need to store it on a third-party server?

    This copy of ChatGPT is supplied to me by a company, chatfast.io, for a lot of money per month. They supplied 3 ChatBots, with customized capacity, because I have much more data than most of their customers. Two of the bots are available at LENR-CANR.org (https://lenr-canr.org/wordpress/?page_id=2988). I upload data to the bots' databases. The data can be in three formats: Plain text, Files, or Website pages. I have uploaded 1,167 files to the Library bot database. 79 MB of data. That is everything at https://www.lenr-canr.org/acrobat/. I tried uploading in Acrobat and text format. The vendor advised me that text files work best, so I converted everything to text, with two spaces between each paragraph.
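
    The conversion can be scripted. A minimal sketch, assuming the pypdf library (the actual conversion may have used a different tool, and the folder name is my assumption); it also flags the "question marks and periods" problem mentioned earlier in this thread:

    from pathlib import Path
    from pypdf import PdfReader  # pip install pypdf

    def pdf_to_text(src: Path, dst: Path) -> None:
        text = "\n".join((page.extract_text() or "")
                         for page in PdfReader(src).pages)
        # A file that extracts as mostly "?" and "." was not machine readable.
        if text and sum(text.count(c) for c in "?.") > 0.5 * len(text):
            print(f"{src.name}: not machine readable")
            return
        # Rejoin the extracted lines as paragraphs separated by a blank line.
        paragraphs = [p.strip() for p in text.splitlines() if p.strip()]
        dst.write_text("\n\n".join(paragraphs), encoding="utf-8")

    for pdf in Path("acrobat").glob("*.pdf"):
        pdf_to_text(pdf, pdf.with_suffix(".txt"))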


    I understand that one needs help to train the neural network.

    I don't do anything. After the files are uploaded, the software automatically updates the ChatBot database. It tokenizes the data. You can see how that works here, and in the sketch below:


    OpenAI API
    An API for accessing new AI models developed by OpenAI
    platform.openai.com
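
    For a hands-on look, OpenAI publishes its tokenizer as the tiktoken library. A small sketch (the encoding name is the one used by recent OpenAI models):

    import tiktoken  # pip install tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("excess heat in a K2CO3 electrolytic cell")
    print(tokens)                             # a list of integers
    print([enc.decode([t]) for t in tokens])  # the word pieces they stand for
    print(enc.decode(tokens))                 # round-trips to the original text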

    But I foresee that this can work very nicely in the corporate environment.

    That is what the software at ChatFast is designed for.

    When it comes to corporate information, there are two legs. One is to create the data, and the other is to match the relevant information to the person who needs it.

    You can add Q&A questions and answers to the ChatBot database. When someone asks a question similar to the one you enter into the database, the ChatBot gives your answer instead of putting together an answer from the database. I added a few Q&A answers to the Index bot, but that feature does not seem to be working.

  • BOT: There is no specific paper by Mallove solely focused on this incident mentioned in the dataset. However, Mallove did submit a 17-page "J'Accuse" letter to MIT detailing his accusations against the institution and its researchers. . . .


    [The word "J'Acuse" appears nowhere in the LENR-CANR corpus.]

    This may give you a feeling the Bot is becoming sentient. I told this bot to exclude all information outside of LENR-CANR.org. Yet here it has come up with a word that is nowhere in the documents I uploaded. Not only is it not found at LENR-CANR.org -- and it is not even English! -- but it is the very bon mot, as they say in France. If it is not borrowing data from outside LENR-CANR.org, where did this word come from? I think I know.


    The ChatBot does not borrow data, but of course it borrows millions of words and concepts, in many different languages. It could not construct sentences based only on the data at LENR-CANR.org. It needs billions of times more data. Here is a "gift" copy (no paywall) of a New York Times article showing what happens when you train a Bot using only a small number of documents, such as every book written by Jane Austen. It never reaches the ability to generate meaningful sentences:


    Let Us Show You How GPT Works — Using Jane Austen
    An interactive explanation of how language models learn to mimic language, from Shakespeare to Star Trek.
    www.nytimes.com


    It can only produce gibberish which sounds like Ms. Austen (a toy demonstration follows these quotes):


    "You must decide for yourself," said Elizabeth, rather repeatedly; "that is very agreeable displeasure, they will ever be a lively young woman as it will be more disagreeable."


    "My dear Fanny, who is a match of your present satisfaction, and I am at liberty and dinner, for everybody can be happy to you again; and now when I think I used to be capable of other people, by being hastily used to be forgotten in something of the little first possibility of my usual taste which such a party as this word.



    Okay, so it has a large vocabulary, including "J'Accuse." It used the word "J'Accuse" as a synonym for some word or phrase it found in a document at LENR-CANR.org. I do not know which document; it can be hard to discover that. The Bots often use synonyms, or paraphrasing. It is my understanding that Bots do this because the data is tokenized, and the bot may not store the original words. Bots store alphanumeric tokens instead of words, so the original words may not be available in the Bot database.


    I asked the public version of ChatGPT about this. She is a leading expert in AI technology. Here are her responses --


    ME: What is tokenized data in an ai system?

    BOT: In AI systems, tokenization refers to the process of breaking down a text or sequence of words into smaller units called tokens. These tokens can be individual words, phrases, or even characters.


    Tokenization is an essential step in natural language processing (NLP) and other machine learning tasks that involve processing text data. By breaking down text into individual tokens, AI systems can analyze and interpret the meaning of the text more efficiently.


    Tokenization typically involves removing any punctuation, converting all characters to lowercase, and splitting the text into tokens based on a delimiter such as a space or punctuation mark. The resulting tokens can then be encoded and used as inputs to various machine learning models.


    Overall, tokenization helps to simplify and standardize text data so that it can be effectively analyzed and processed by AI systems.
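
    [The naive procedure she describes -- lowercase, drop punctuation, split into words -- is a few lines of Python. Real LLM tokenizers such as BPE are subtler; see the tiktoken sketch earlier in this thread:]

    import re

    def tokenize(text):
        # Lowercase, strip punctuation, split on word boundaries.
        return re.findall(r"[a-z0-9']+", text.lower())

    print(tokenize("Replication of the Apparent Excess Heat Effect!"))
    # ['replication', 'of', 'the', 'apparent', 'excess', 'heat', 'effect']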



    ME: A token is a numerical value, right? In some cases, can two different words for one concept -- synonyms -- have the same token? Or a token with a close numeric value?

    BOT: Tokens are not necessarily numerical values. In natural language processing, tokens are typically represented as strings of characters or words, which can then be encoded as numerical values for machine learning algorithms to process.


    When using tokenization, two different words that represent the same concept (synonyms) can be assigned the same token to enable the machine learning algorithm to recognize them as equivalent. In this case, the goal of tokenization is to reduce the vocabulary size and improve the efficiency of the model without losing critical information.


    However, it's important to note that tokens with close numeric values should not be used interchangeably. Each token should be unique and represent a distinct concept or word in the input text. If two tokens have similar numeric values, it could lead to confusion and inaccuracies in the machine learning model's results. Therefore, tokenization algorithms typically ensure that each token has a unique and distinguishable value.


    ME: I have noted that when an AI describes the content of a document, it sometimes uses synonyms instead of the original words. In a few cases, I think it did this even when I asked for a verbatim quote. Is this because the content is stored as tokens, and the exact original words are not available?

    BOT: Yes, it is possible that the AI is using synonyms to describe the content of a document because it is working with tokenized data and not the original words. When an AI system tokenizes text data, it breaks down the text into smaller units called tokens, which can be individual words, phrases, or even characters.

    These tokens are then used as inputs to machine learning models, which can use various techniques to analyze and interpret the meaning of the text. Sometimes, these models may use synonyms or other related words to describe the content of a document because the original words may not be available or may not be the best fit for the context.


    Additionally, some AI systems may use techniques like paraphrasing or summarization to generate new text based on the input data. In these cases, the AI may use synonyms or other related words to convey the same meaning as the original text while also improving readability or reducing complexity.

    However, if you specifically requested a verbatim quote and the AI system did not provide it, it is possible that there was an error or a misunderstanding in the processing of your request. It's important to keep in mind that AI systems are not perfect and may not always provide the exact results that you expect.


    ME: How should I phrase a request to see the original data? Should I use the word "verbatim"? Or some other word?

    BOT: If you want to see the original data or text without any modifications or substitutions, using the word "verbatim" is an appropriate way to phrase your request. Verbatim means word for word, exactly as it was said or written.


    So, you could phrase your request like this: "Can I please see the original text verbatim, without any modifications or substitutions?" This clearly communicates that you want to see the exact words as they were written, without any changes or paraphrasing.


    It's important to be specific and clear in your request to avoid any confusion or misunderstandings. Using a term like "verbatim" can help ensure that your request is understood and that you get the information you need in the form you want it.


    [Notice that she wants me to say "please"!]

  • The internet has been littered with little documents composed of tokens for at least a decade. They look like random lists of words but with some sort of sentence-like syntax. Most often they can be found way down in the bottom of Google searches.

  • The phrase J'accuse is well known to historians. It was the opening phrase of a letter - which became known by that term (or token) - written by Émile Zola and published in a major French newspaper after the trial of the Jewish army officer Dreyfus in the 1890s (the Dreyfus Affair).


    From memory Dreyfus was wrongly accused of treason.
