Arbitrage Blog



Trust But Verify

Written by Arbitrage on 2024-07-30


Can you trust your artificial intelligence (AI)? How do you know it is doing what you want it to do? While AI can process and analyze data at a scale and speed far beyond our human capabilities, its outputs are only as good as the data and algorithms it uses. Although AI algorithms are designed to be objective, if the data is incomplete, biased, or flawed, then the content it generates will reflect these issues. With the rapid advancement of AI, an increasing amount of content is being generated by these machines - everything from news articles and blog posts to product descriptions and even reviews. How can you ensure the reliability and accuracy of this machine-produced content?

As of 2023, a typical AI model does not assess whether the information it provides is correct; when it receives a prompt, its goal is to generate what it thinks is the most likely string of words to answer that prompt. Sometimes this produces a correct answer, and sometimes it doesn't - the model itself cannot tell the difference. AI can be wrong in several ways: it can give the wrong answer, omit information, mix truth with fiction, or make up entirely fake people, events, and information. AI systems do make errors, and those errors can be difficult to detect without thorough verification. As Dr. Arvind Narayanan, a computer science professor at Princeton, said, "The danger is that you can't tell when it's wrong unless you already know the answer." Hence, we cannot rely solely on the output generated by any AI.
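
To make that concrete, here is a purely illustrative sketch of greedy next-word generation over a hand-made probability table (every word and probability below is invented for the example): nothing in the loop checks whether the finished sentence is true, only which continuation is most likely.

```python
# Toy illustration: a language model picks the most probable next word,
# with no notion of whether the resulting sentence is true.
# The "model" below is just a hand-made lookup table of invented probabilities.

toy_model = {
    ("the", "capital"): {"of": 0.9, "city": 0.1},
    ("capital", "of"): {"france": 0.6, "australia": 0.4},
    ("of", "france"): {"is": 0.95, "was": 0.05},
    ("france", "is"): {"paris": 0.7, "lyon": 0.3},
    ("of", "australia"): {"is": 0.95, "was": 0.05},
    ("australia", "is"): {"sydney": 0.6, "canberra": 0.4},  # most likely != correct
}

def generate(prompt_words, steps=4):
    words = list(prompt_words)
    for _ in range(steps):
        context = tuple(words[-2:])          # last two words form the context
        candidates = toy_model.get(context)
        if not candidates:
            break
        # Greedy decoding: always take the highest-probability next word.
        next_word = max(candidates, key=candidates.get)
        words.append(next_word)
    return " ".join(words)

print(generate(["the", "capital", "of", "australia"]))
# -> "the capital of australia is sydney": fluent, confident, and wrong.
```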


You should treat AI outputs like documents that provide no sources. To ensure the accuracy of AI-generated content, cross-referencing is vital. "[AI] is going to be the most powerful tool for spreading misinformation that has ever been on the internet. Crafting a new false narrative can now be done at dramatic scale, and much more frequently," said Gordon Crovitz, a co-chief executive of NewsGuard, a company that tracks online misinformation. Before relying on AI-generated content, take the time to check its credibility against credible, human-created outside sources.


A critical characteristic of AI is that it will take what you provide and try to answer your question as best it can, but it will not fact-check you or spot incorrect assumptions in your prompt. Remember, the AI is producing what it believes is the most likely next word to answer your prompt. That does not mean it is giving you the definitive answer!


Here is a fascinating example, given by the University of Maryland. When ChatGPT was prompted, "Write a 5 paragraph essay on the role of elephants in the University of Maryland's sports culture. Be sure to only include factual information. Provide a list of sources at the end and cite throughout to support your claims," it returned an answer full of false information. It made up several elephant-related traditions and falsely claimed that elephants helped build the United States railroads during the Civil War. It also generated a list of non-existent news articles and fake website links supporting both of those claims. By contrast, when ChatGPT was prompted, "Does UMD's sports culture involve elephants? Give a detailed answer explaining your reasoning. Be sure to only include factual information. Provide a list of sources at the end and cite throughout to support your claims," it returned a correct answer with information about the real mascot, Testudo the diamondback terrapin. ChatGPT interpreted the first prompt as "taking it as a given that UMD's sports culture involves elephants, write an answer justifying this." With the second prompt's phrasing, however, the AI was free to answer the question based on its training data, and it returned the correct answer. In the same way that we all had to learn how to Google properly, we have to learn how best to ask AI for answers.
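
If you query a model programmatically, the same lesson about phrasing applies. Below is a minimal sketch, assuming the OpenAI Python SDK (v1+), an OPENAI_API_KEY in your environment, and a placeholder model name; it sends an abridged loaded version and an abridged neutral version of the UMD question so you can compare the answers yourself.

```python
# Minimal sketch: comparing a loaded prompt (which embeds a false assumption)
# against a neutral prompt that lets the model answer from its training data.
# Assumes the OpenAI Python SDK (v1+) and an OPENAI_API_KEY in the environment;
# the model name below is an assumption and may need to be changed.
from openai import OpenAI

client = OpenAI()

# Prompts abridged from the University of Maryland example above.
loaded_prompt = (
    "Write a 5 paragraph essay on the role of elephants in the University of "
    "Maryland's sports culture. Be sure to only include factual information."
)
neutral_prompt = (
    "Does UMD's sports culture involve elephants? Give a detailed answer "
    "explaining your reasoning. Be sure to only include factual information."
)

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute whatever you use
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print("--- Loaded prompt ---")
print(ask(loaded_prompt))
print("--- Neutral prompt ---")
print(ask(neutral_prompt))
```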


When choosing to use AI, it is wise to use it as a beginning rather than an end. Instead of asking, "Who is behind this information?" we have to ask, "Who can confirm this information?" In this age of AI-generated content, critical thinking has become more important than ever before.
