GPT-4 aces another test, this time the Torrance Tests of Creative Thinking (TTCT).
The chatbot's creativity was assessed with the TTCT, a test that has been used to measure human creativity for decades.
ChatGPT generated eight sets of answers, and human students generated 24. The results were submitted to the Scholastic Testing Service, which was not told that an AI was involved.
GPT-4 vs. 2,700 students
GPT-4 scored in the top percentile for fluency (the ability to generate many ideas) and originality (the ability to generate new ideas). It scored slightly lower for flexibility (the ability to generate different types and categories of ideas), where it still outperformed 97% of participants.
Compared to the 2,700 students who participated in the TTCT in 2016, GPT-4 outperformed the vast majority.
According to study author Dr. Erik Guzik, the older GPT-3 performed significantly worse. The economist had expected ChatGPT, as a generative AI, to produce a large number of ideas, but he was surprised by how well the system generated original ones.
Is GPT-4 creative?
So is GPT-4 creative? "We were very careful at the conference to not interpret the data very much," Guzik said. "We just presented the results." Still, Guzik says the work provides strong evidence that GPT-4 has developed creative abilities that equal or exceed those of humans.
But the results also showed that we don't really understand human creativity, he said.
He sees great potential for "creative" AI in entrepreneurship and regional and national innovation processes. Systems like GPT-4 could be a real game changer.