Artificial Intelligence is smart (as a 7-year-old)
I often tell this story as evidence of both what Artificial Intelligence is capable of and where its limitations lie.
Let’s start in the year 2003, when Palantir was founded by Peter Thiel (one of the earliest investors in Facebook) and others. Palantir was created to provide technological solutions to modern organizations using data and analytics. The US government was its earliest customer. It wanted help with problems like detecting patterns of child trafficking, drug trade, and cross-border terrorism on social media platforms, and with preventing financial fraud.
This story is about an attempt to prevent credit card fraud. It is said that the smart people at Palantir built sophisticated AI models that could look at a presented transaction and predict whether it came from a fraudster using someone else’s credit card.
A control room with a wall full of TV panels was set up to show real-time credit card fraud rates across the concerned territories. They switched on the model and waited with bated breath…
And voila, the rates started to go down as they watched. Their model was able to identify fraudulent transactions and prevent them in real time.
But the next day brought a different story. Fraud rates started to go up. Their super smart model was not good anymore. Something had changed. Was it a bug?
They fine-tuned the model again. And voila, it started detecting fraud again. But within a few hours, fraud started rising once more.
When they analyzed the data, the findings were nothing short of incredible. As it turns out, on the other side of the fence, the bad guys were also using smart AI models. They soon figured out the rules of the system that was blocking their transactions and modified their transactions accordingly.
In other words, the jailors made a higher fence and the thieves learned to jump higher.
AI by itself was not proving useful. That is when Palantir changed its approach radically — it combined its AI models with human beings. Now a model would flag a transaction as fraudulent, and a human being would verify it using other markers.
The model has since been running successfully.
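The human-in-the-loop workflow described above can be sketched roughly as follows. This is a minimal illustration, not Palantir’s actual system: the scoring rules, the threshold, and the review step are all invented for the example.

```python
# Hypothetical sketch of a human-in-the-loop fraud check.
# Scores, thresholds, and review logic are illustrative assumptions.

def model_score(txn):
    # Stand-in for an ML model: crude hand-written rules that
    # score large or cross-border transactions as riskier.
    score = 0.0
    if txn["amount"] > 1000:
        score += 0.5
    if txn["country"] != txn["home_country"]:
        score += 0.4
    return score

def human_review(txn):
    # Placeholder for a human analyst checking other markers
    # (calling the cardholder, inspecting purchase history, etc.).
    return txn.get("confirmed_fraud_by_analyst", False)

def decide(txn, threshold=0.7):
    """Approve, or escalate to a human and possibly block."""
    if model_score(txn) < threshold:
        return "approve"  # model sees nothing suspicious
    # Model flagged it: a human verifies before blocking.
    return "block" if human_review(txn) else "approve"

txn = {"amount": 2500, "country": "XX", "home_country": "US",
       "confirmed_fraud_by_analyst": True}
print(decide(txn))  # -> block
```

The point of the design is that the model never blocks on its own; it only routes suspicious transactions to a person, which is much harder for an adversarial model on the other side to game.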