Wikipedia could be expanded and updated at lightning speed with the help of AI. But the time is not yet right, says Wikipedia founder Jimmy Wales.
The Wikipedia community is fascinated by ChatGPT and the like, says Wales, but it knows the current models are not good enough.
"I think we’re still a way away from: ‘ChatGPT, please write a Wikipedia entry about the Empire State Building’, but I don’t know how far away we are from that — certainly closer than I would have thought two years ago," says Wales.
Hallucinations hamper AI as a Wikipedia author
According to Wales, the main argument against using AI text generators such as ChatGPT is that they fabricate falsehoods, known in the jargon as "hallucinations" or, more bluntly, "bullshit".
"It has a tendency to just make stuff up out of thin air, which is just really bad for Wikipedia — that’s just not OK. We’ve got to be really careful about that," says Wales.
For example, he says, ChatGPT claimed that no plane had ever crashed into the Empire State Building. When corrected, it apologized for the earlier error and said the aircraft was a B-25 bomber, which is still wrong. Wales also sees a risk that AI could reinforce pre-existing biases on Wikipedia.
Human-machine collaboration also applies to Wikipedia
Wales sees potential in having a suitably trained AI evaluate Wikipedia articles, for example to identify contradictions between two articles.
"A human could detect this, but you’d have to read both articles side by side and think it through — if you automate feeding it in, so you get out hundreds of examples, I think our community could find that quite useful," says Wales. AI could also help identify information gaps by comparing the world's knowledge with Wikipedia's database.
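To make the idea concrete: the kind of cross-article contradiction check Wales describes can be illustrated with a deliberately minimal toy sketch. This is purely hypothetical — nothing Wikipedia or Meta actually runs — and it only compares simple numeric claims matched by a small keyword pattern; a real system would rely on a trained language model rather than regular expressions.

```python
import re

def extract_numeric_claims(text):
    """Naively extract (keyword, number) claims such as 'height of 443'.

    The keyword list is an illustrative assumption, not a real schema.
    """
    pattern = re.compile(r"(height|opened|floors?)\D{0,20}?(\d[\d,\.]*)",
                         re.IGNORECASE)
    claims = {}
    for key, value in pattern.findall(text):
        # Normalize: lowercase key, strip plural 's', drop thousands commas.
        claims[key.lower().rstrip("s")] = value.replace(",", "")
    return claims

def find_contradictions(article_a, article_b):
    """Return claims where the two articles state different numbers."""
    a, b = extract_numeric_claims(article_a), extract_numeric_claims(article_b)
    return {k: (a[k], b[k]) for k in a.keys() & b.keys() if a[k] != b[k]}

art1 = "The Empire State Building has a height of 443 meters and 102 floors."
art2 = "At a height of 381 meters to the roof, the tower has 102 floors."
print(find_contradictions(art1, art2))  # {'height': ('443', '381')}
```

Fed hundreds of article pairs, even a crude flagger like this would surface candidate disagreements for human editors to review — which is exactly the human-in-the-loop division of labor Wales has in mind.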
Last September, Meta unveiled PEER, a collaborative language model trained on Wikipedia's editing history to reason about text changes as a writing assistant. Previously, Meta demonstrated Side, an open-source AI model designed to review and improve Wikipedia sources.