AI is not fool-proof, but why?

With the rise of ChatGPT and other spectacular AI models, the world is constantly amazed. These models excel at what they are built to do: create new things from minimal user input.

Machine learning systems are great at what they are trained to do, and that is worth thinking about. Take a scenario: you move into your new house and install a smart doorbell that alerts you to suspicious activity on your porch. Now imagine a nocturnal animal decides to make its new home on that porch. This triggers your security system, alerting you to activity at odd hours, usually at night. It happens a few times, and then eventually the system stops reporting that activity altogether. Why do you think that is?

Well, if the AI model in your security system learns that activity during the night hours is the "new normal", it stops flagging it as a threat. Now suppose, God forbid, a burglar tries to enter your house at the usual time the animal comes out. The system has grown used to activity at that hour and doesn't alert you, even though this is a major security threat.

This is called "model drift". Deploying a well-performing model is a great success for a company, but if the system fails at crucial moments like the one above, the effects can be disastrous for both the customer and the company. The data a model keeps learning from is not very hard to corrupt, and that is a serious issue.
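To make the doorbell example concrete, here is a toy sketch, with a made-up activity scale, learning rate, and threshold, of how a detector that keeps adapting to whatever it observes can quietly stop flagging the very events it was meant to catch. This is not any vendor's actual system, just an illustration of the drift described above.

```python
class AdaptiveActivityDetector:
    """Flags activity far above its running baseline, then folds every
    observation back into that baseline -- animal visits included."""

    def __init__(self, baseline=0.1, learning_rate=0.2, threshold=3.0):
        self.baseline = baseline          # expected nighttime activity level (arbitrary units)
        self.learning_rate = learning_rate
        self.threshold = threshold        # how many times the baseline counts as "suspicious"

    def observe(self, activity_level):
        is_alert = activity_level > self.threshold * self.baseline
        # The detector adapts to whatever it sees, so repeated animal visits
        # slowly raise the baseline toward "normal".
        self.baseline += self.learning_rate * (activity_level - self.baseline)
        return is_alert


detector = AdaptiveActivityDetector()

# Night after night, the animal shows up (activity level around 1.0).
for night in range(10):
    alerted = detector.observe(1.0)
    print(f"night {night}: animal -> alert={alerted}, baseline={detector.baseline:.2f}")

# Later, a burglar causes similar activity at the same hour...
print("burglar ->", detector.observe(1.2))  # likely False: the threat is silently missed
```

The first couple of nights still trigger alerts, but once the baseline has drifted up to match the animal's visits, an intruder producing similar activity slips through unnoticed.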

I am sure large companies have already thought this through and found methods to prevent such accidents. At its core, the idea is to retrain your model at regular intervals so it does not go stale and start producing false negatives, exactly the kind of silent miss described above.
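One simple way to decide when that retraining is due is to monitor the model's behaviour over time. The sketch below, assuming we log whether each nightly event triggered an alert, compares a recent window against an older reference window and flags drift when the alert rate collapses. The window contents, the drop threshold, and retrain_model() are illustrative placeholders, not a specific product's API.

```python
def alert_rate(alert_log):
    """Fraction of logged events that produced an alert (1 = alerted, 0 = silent)."""
    return sum(alert_log) / len(alert_log) if alert_log else 0.0

def needs_retraining(reference_alerts, recent_alerts, max_drop=0.5):
    """Flag drift when the recent alert rate falls well below the reference rate."""
    ref_rate = alert_rate(reference_alerts)
    recent_rate = alert_rate(recent_alerts)
    return ref_rate > 0 and recent_rate < max_drop * ref_rate

def retrain_model():
    # Placeholder: in practice, refresh the model on freshly labelled data.
    print("Scheduling retraining on newly labelled data...")

# Alerts fired during the first weeks after installation...
reference_window = [1, 1, 0, 1, 1, 1, 0, 1]
# ...versus the most recent weeks, after the animal moved in.
recent_window = [1, 0, 0, 0, 0, 0, 0, 0]

if needs_retraining(reference_window, recent_window):
    retrain_model()
```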

As Andrew Ng, one of the big pioneers in AI, says, it is no longer a matter of Big Data (quantity) but of the quality of the data an AI model is trained on. The cleaner and more reliable the data, the better the model performs in the real world.

Maintaining an AI model after deployment can greatly improve its performance and save a company's reputation. Serving and deploying a model is the easy part compared to keeping it from drifting into unreliability. This is something worth thinking about, and it is what prompted me to share it with you.
