IT’S ONLY A MODEL: Our weird behavior during the pandemic is screwing with AI models.

It took less than a week at the end of February for the top 10 Amazon search terms in multiple countries to fill up with products related to covid-19. You can track the spread of the pandemic by what we shopped for: the searches peaked first in Italy, followed by Spain, France, Canada, and the US, with the UK and Germany lagging slightly behind. “It’s an incredible transition in the space of five days,” says Rael Cline, Nozzle’s CEO. The ripple effects have been felt across retail supply chains.

But they have also affected artificial intelligence, causing hiccups for the algorithms that run behind the scenes in inventory management, fraud detection, marketing, and more. Machine-learning models trained on normal human behavior are now finding that normal has changed, and some are breaking as a result.
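To make that concrete with a hypothetical example: one common symptom of a model “breaking” is that its inputs no longer look like the data it was trained on. The Python sketch below flags that kind of shift by comparing a recent window of a feature (here, invented daily order counts) against a pre-pandemic baseline using a two-sample Kolmogorov–Smirnov test. The names, numbers, and threshold are illustrative assumptions, not a description of any system mentioned in this story.

```python
# Illustrative only: a minimal drift check, assuming a single numeric feature
# (daily order counts) and a stored pre-pandemic baseline.
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(baseline: np.ndarray, recent: np.ndarray, p_threshold: float = 0.01) -> bool:
    """Return True if recent values look statistically unlike the baseline."""
    statistic, p_value = ks_2samp(baseline, recent)
    return p_value < p_threshold

# Hypothetical data: what the model learned as "normal" vs. a panic-buying spike.
rng = np.random.default_rng(0)
pre_pandemic = rng.normal(loc=100, scale=10, size=365)
pandemic_weeks = rng.normal(loc=400, scale=80, size=30)

if drift_alert(pre_pandemic, pandemic_weeks):
    print("Input distribution has shifted: retrain, recalibrate, or fall back to manual review.")
```

In practice a check like this is just a tripwire; it tells you that “normal has changed,” not what to do about it.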

How bad the situation is depends on whom you talk to. According to Pactera Edge, a global AI consultancy, “automation is in a tailspin.” Others say they are keeping a cautious eye on automated systems that are just about holding up, stepping in with a manual correction when needed.
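That “cautious eye” approach can be sketched as a simple human-in-the-loop gate: let the automated system act only when it is confident, and route everything else to a person for a manual correction. The code below is a hypothetical illustration; the class names, actions, and threshold are invented and do not describe any company’s pipeline.

```python
# A hypothetical human-in-the-loop gate: high-confidence decisions execute
# automatically, everything else is held for manual review.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Decision:
    item: str
    action: str        # e.g. "reorder", "flag_fraud", "pause_ad"
    confidence: float  # the model's confidence in its own recommendation

@dataclass
class Pipeline:
    review_queue: List[Decision] = field(default_factory=list)
    auto_threshold: float = 0.95  # tightened during extraordinary periods

    def route(self, decision: Decision) -> str:
        if decision.confidence >= self.auto_threshold:
            return f"auto: {decision.action} for {decision.item}"
        self.review_queue.append(decision)
        return f"held for human review: {decision.item}"

pipeline = Pipeline()
print(pipeline.route(Decision("toilet paper", "reorder", 0.60)))  # goes to a person
print(pipeline.route(Decision("printer ink", "reorder", 0.99)))   # executes automatically
```

Raising the threshold during a crisis trades speed for safety: more decisions wait on a human, but fewer bad automated calls slip through.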

What’s clear is that the pandemic has revealed how intertwined our lives are with AI, exposing a delicate codependence in which changes to our behavior change how AI works, and changes to how AI works change our behavior. This is also a reminder that human involvement in automated systems remains key. “You can never sit and forget when you’re in such extraordinary circumstances,” says Cline.

We don’t know how to model human ingenuity, and humans become even more ingenious during a crisis. So if you were hoping to AI your way out of a crisis, I have some bad news for you.