Artificial intelligence algorithms are rapidly becoming part of our daily lives. Machine learning already underpins, or soon will underpin, many systems that demand strong security: facial recognition, banking, military targeting, robotics, and autonomous vehicles are just a few examples. This raises an important question: how secure are these algorithms against malicious attacks? In an article recently published in Nature Machine Intelligence, my colleagues at the University of Melbourne and I discuss a potential answer to the vulnerability of machine learning models. According to our hypothesis, new algorithms with a high degree of resistance
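The kind of vulnerability at issue can be illustrated with a toy sketch of an evasion ("adversarial example") attack. Everything below, including the linear model and its weights, is invented purely for illustration and is not the method from the article:

```python
import numpy as np

def sign(x):
    return np.where(x >= 0, 1.0, -1.0)

# Toy logistic-regression-style classifier: score = w.x + b,
# predicting class 1 when the score is positive. (Hypothetical weights.)
w = np.array([2.0, -1.0, 0.5])
b = 0.1

x = np.array([0.4, -0.2, 0.3])     # clean input, classified as class 1
score_clean = w @ x + b            # 1.25 > 0

# Fast-gradient-sign-style perturbation: nudge each feature a small
# amount against the gradient of the score, flipping the decision
# while staying within a small L-infinity budget epsilon.
epsilon = 0.5
x_adv = x - epsilon * sign(w)      # adversarially perturbed input
score_adv = w @ x_adv + b          # -0.5 < 0: prediction flipped

print(score_clean > 0, score_adv > 0)  # → True False
```

Even this tiny linear model can be flipped by a bounded perturbation, which is why robustness against such attacks matters for the security-critical systems listed above.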