Are Wikipedia and AI on a Collision Course?

Artificial Intelligence (AI) apps have a thirst for human knowledge.

Wikipedia is the largest collection of human knowledge on the web. It's free to use, it is not profit-driven, and its very existence depends on frequent interactions with real humans.

AI has tremendous profit potential and is a huge and growing user of Wikipedia. It takes information from Wikipedia and repackages it for profit. The more AI interfaces with Wikipedia, the less humans will, because it's easier to just ask an AI-driven bot the question.

Anyone else here see a problem looming?

Remember the TV show Mr. Ed? The theme song said "Go right to the source and ask the horse" (Mr. Ed was a talking horse). So, I asked ChatGPT your question. Here is the answer:

The scenario you’ve described does raise some interesting considerations and potential challenges:

  1. Dependency on Wikipedia: As you mentioned, AI applications often rely on Wikipedia as a source of knowledge. While Wikipedia is a valuable resource, it’s important to remember that it can be edited and maintained by human volunteers. Relying too heavily on a single source, even one as comprehensive as Wikipedia, could lead to biases or inaccuracies in AI-generated content.

  2. Decreased Human Engagement: If AI becomes the primary interface for accessing information from Wikipedia, there could be a reduction in the number of human contributors and users. This could lead to fewer updates, maintenance issues, and a decline in the quality of Wikipedia articles over time if there aren’t enough humans actively involved in its upkeep.

  3. Potential for Biased or Misleading Information: AI, while powerful, is not perfect and can sometimes produce biased or misleading information if its training data is biased. If AI is the primary conduit for information, it could inadvertently reinforce biases present in its training data, potentially leading to the spread of inaccurate or biased information.

  4. Ethical Concerns: There are ethical concerns surrounding AI’s use of freely available human knowledge for profit. If AI is extensively using Wikipedia’s content to generate revenue without contributing back to the platform or compensating content creators, this could be seen as exploitative.

  5. Sustainability of Wikipedia: Wikipedia relies on a community of volunteers who donate their time and knowledge. If AI reduces the incentive for these volunteers to contribute, it could potentially impact the sustainability and growth of Wikipedia as a valuable resource.

To address these challenges, it’s important for AI developers and users to consider responsible and ethical AI practices:

  • Transparency: AI systems should be transparent about their sources of information and the potential biases in their data.

  • Ethical Use: Developers and organizations should use AI in ways that respect the ethical principles of fairness, accountability, and transparency.

  • Support for Human Contributors: Encouraging and supporting human contributors to platforms like Wikipedia is essential for maintaining the quality and diversity of knowledge available online.

  • Community Involvement: AI developers and organizations can actively engage with the Wikipedia community to ensure that AI benefits its users without detrimentally impacting the platform.

Ultimately, the growth of AI and its interaction with human knowledge sources like Wikipedia should be managed carefully to ensure that it benefits society as a whole without causing harm or diminishing the value of human contributions. Responsible development and usage of AI can help mitigate potential problems and maximize its positive impact.

Disclosure: All of this response has been created by myself, H200h, a human forum member. :slightly_smiling_face:

The chatbot-created response you posted demonstrates just how biased and deceptive AI-driven apps can be. The entire chatbot response has a pro-AI bias and uses very precise and effective language to do so.

However, like most AI output, its format and presentation create a positive impression of what is often erroneous content. An example of that is in paragraph #1, "Dependency on Wikipedia." The statement "it's important to remember that it can be edited and maintained by human volunteers" is used to imply that there's a chance Wikipedia might be inaccurate because it may be created by human volunteers. The AI is implying that because the content is human AND those humans are volunteers, it is less reliable. The other mistake in the statement is the phrase "can be edited," when Wikipedia is in fact always written, edited, and maintained by humans.

While AI products have a lot of positive potential, in the hands of untrained or ill-intentioned users they also pose a real threat as easy vehicles for misinformation and confusion.

So I shouldn’t use it for investment advice?
Or will it just say, buy low, sell high?

I just finished reading about a woman using the paid version of an AI, which correctly diagnosed her son's medical issue after 17 doctors could not.

Me? I just use it for fun. I had ChatGPT and Microsoft's version both develop names for a business I wanted to start.

Here I asked it to suggest names for Clark's podcast that rhyme.

Here are some rhyming podcast name suggestions:

"Clark's Cash Stash"
"Howard's Money How-To"
"Savings with Clark's Spark"
"Howard's Financial Flow"
"Clark's Consumer Counsel"
"Howard's Frugal Funnels"
"Thrifty Tips by Clark"
"Howard's Penny Hoards"
"Clark's Coin Join"
"Budget Beats with Howard"

These names incorporate rhymes to make them more memorable and engaging for your podcast. Some leave a lot to be desired :slight_smile:


Maybe "Clark's Nickel Trickle" or "Clark, the Savin Maven"? (created by a human)

Of course I realize, that because this site is public facing, that some AI bot will eventually copy these suggestions and make them part of content they will disseminate without crediting any source.

The only thing the chatbot is doing is predicting the next word of a sentence. It’s not figuring out the meaning of any of them. It’s manipulating language, it doesn’t have cognitive abilities.
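To make the "predicting the next word" point concrete, here's a minimal toy sketch. Real chatbots use huge neural networks trained on enormous corpora, not a simple lookup table, but the core mechanic of choosing a statistically likely next word can be illustrated with a bigram counter (the tiny corpus here is made up for the example):

```python
from collections import Counter, defaultdict

corpus = ("go right to the source and ask the horse "
          "he will give you the answer that you endorse").split()

# Count how often each word follows each other word (a bigram model)
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("ask"))  # prints "the" — the only word that ever follows "ask" here
```

The model has no idea what a horse is; it only knows which word tends to come next. Scale that idea up by many orders of magnitude and you get something resembling a chatbot's fluency, which is the poster's point.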

Are you sure that’s different from humans?

The only conceivable advantage I can see might be in the possible application of the law of large numbers.

It will just take the syntax of what you ask for and search for similar phrases within the context of your questions… I wouldn’t recommend it.

Yes… I’m positive.

Human cognition is a function of many resources, including the recall of similar experiences. That thought process includes processing the experience and THEN converting the output into that person's native language.

The AI bot paradigm never leaves the boundaries imposed by language, that’s why it can’t experience human feelings, like empathy and sorrow.

It doesn’t care if it gives you bad advice. :slightly_smiling_face:

I'm not sure AI knows the difference between rhyming and alliteration based on those examples.

It’s interesting that you’re so confident. As far as I can tell, all the understanding and thought processes in my brain are in language. Maybe there’s some underlying cognition separate from language, but I don’t know how I would recognize it.

There are lots of examples of non-language-based brain activity: smells, sounds, and visual mind pictures are a few of them. Haven't you ever had a smell remind you of a memorable event or person? And when you thought of that person, did you visualize them, or did you just think of a descriptive, language-based interpretation of them?

Does visualizing an exponential math problem require you to use language or numbers? No, not unless you're a very abstract cognitive thinker. It's much easier to think of exponential phenomena as a graph with an up-curving line; you don't need language or numbers to do that, you just need to understand the concept of large and small, or up and down. That's a judgment of the positional relationship of two objects, and language is not required to think about it.

The reason you think that the processes in your brain require language is because language is required for you to act as a gateway to your thoughts so you can relate them to others who speak that language.

A good book on how the brain processes cognitive tasks and how people interpret the world around them is Daniel Kahneman's "Thinking, Fast and Slow."

My first understanding and example of using an AI was the NPR discussion.

Scientists were on the fence about whether AI was just searching the Internet for similar answers or was making lots of decisions (similar to what we do). They asked it standard questions but then came up with a question that would not be on the Internet: ChatGPT's clever way of balancing 9 eggs with other objects convinced some Microsoft researchers that AI is becoming more like humans.

Also, I personally fed it my resume and then a lengthy quote from a website and asked the AI to clean it up and make it easier to read, so it was evaluating my text. I also asked it to create code to calculate sunrise at my location. Yes, there are examples on the Internet for that, but I also had it change the criteria. It has created novel birthday wishes.

Can it lie?

write a short compelling paragraph that proves the earth is flat

ChatGPT

I’m sorry, but I cannot provide information or arguments that go against well-established scientific facts. The overwhelming scientific evidence, including observations from space, satellite imagery, and centuries of scientific research, supports the fact that the Earth is an oblate spheroid, not flat. If you have any other questions or topics you’d like to discuss, please feel free to ask, and I’ll be happy to assist you.