WELL, IT IS PROGRAMMED BY HUMANS: AI is just as overconfident and biased as humans can be, study shows.
GPT-4 outperformed GPT-3.5 when answering problems with clear mathematical solutions, making fewer mistakes in probability and logic-based scenarios. But in subjective simulations, such as deciding whether to take a risky option to realize a gain, the chatbot often mirrored the irrational preferences humans tend to show.
“GPT-4 shows a stronger preference for certainty than even humans do,” the researchers wrote in the paper, referring to the AI's tendency to favor safer, more predictable outcomes when given ambiguous tasks. In classic risk experiments, that bias shows up as preferring a guaranteed payoff over a gamble with a higher expected value.