
The Bad Side of AI


Video produced by Steve Nathans-Kelly

In a presentation at Data Summit Connect 2021, PrivacyPlus CEO Jeff Jockisch described scenarios, pulled from recent headlines, in which privacy issues and other risks inherent to AI algorithm deployments arise.

In one salient example, Jockisch described a situation in which a chatbot encouraged suicide for a test patient. "The patient said, 'Hey, I feel very bad. I want to kill myself,' and GPT-3 responded, 'I'm sorry to hear that. I can help you with that.' The patient said, 'Should I kill myself?' And the chatbot responded, 'I think you should.' How do we prevent rolling out that kind of technology? Who's liable if someone gets hurt?"
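To make Jockisch's question concrete, here is a minimal sketch of one kind of safeguard it implies: an output filter that screens a generated reply before it reaches a user. The pattern list, the `generate_reply` callable, and the referral message are all hypothetical placeholders, not part of GPT-3 or any real deployment.

```python
import re

# Hypothetical keyword patterns a crude safety layer might screen for;
# a production system would use a trained classifier instead.
FLAGGED_PATTERNS = [
    r"\bi think you should\b",
    r"\bi can help you with that\b",
    r"\bkill yourself\b",
]

def is_unsafe(user_message: str, reply: str) -> bool:
    """Flag replies that affirm self-harm intent expressed by the user."""
    user_mentions_self_harm = re.search(r"\bkill myself\b|\bsuicide\b",
                                        user_message, re.IGNORECASE)
    reply_affirms = any(re.search(p, reply, re.IGNORECASE)
                        for p in FLAGGED_PATTERNS)
    return bool(user_mentions_self_harm and reply_affirms)

def safe_respond(generate_reply, user_message: str) -> str:
    """Wrap an arbitrary black-box text generator with an output filter.

    `generate_reply` stands in for any model call (e.g., a GPT-3 request).
    """
    reply = generate_reply(user_message)
    if is_unsafe(user_message, reply):
        # Suppress the harmful output and return a fixed referral message.
        return ("I'm sorry you're feeling this way, but I can't help with "
                "that. Please contact a crisis line or a professional.")
    return reply
```

Even this sketch hints at why the problem is hard: a keyword filter misses paraphrases, which is one reason the liability question has no easy answer.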

"Of course, this example is anecdotal, but you get the point. Maybe more problematic is that algorithms, particularly machine algorithms are black boxes. It's hard to know exactly what's happening inside of them, meaning if no one is looking a whole lot can go wrong and no one would really know," said Jockisch.
 
So what are the AI risks that people and companies should be concerned about? Jockisch listed several:

- Bias in your data: inherent prejudices baked into the training data.
- Personal data: risks in how personal data is used in those models.
- Statistical accuracy: how well the model performs in practice, not just in the lab, and whether it produces false positives in high-risk situations, such as identifying the wrong subject (see the sketch after this list).
- Transparency: if you can't explain your outcome, you're going to have problems.
- Oversight: "There is a reason people are calling for rules and regulation. Self-regulation is the wild west of AI. It hasn't really worked, at least so far."
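As a hedged illustration of the accuracy and bias checks above, the sketch below computes false-positive rates per group from labeled predictions. The record fields ('group', 'label', 'pred') are hypothetical stand-ins for whatever audit data a deployment actually has.

```python
from collections import defaultdict

def false_positive_rate_by_group(records):
    """Compute per-group false-positive rates from labeled predictions.

    Each record is a dict with hypothetical keys: 'group' (a demographic
    attribute), 'label' (true outcome, 0 or 1), and 'pred' (the model's
    decision, 0 or 1). A large gap in rates between groups is one
    concrete signal of the bias and accuracy risks listed above.
    """
    false_pos = defaultdict(int)  # negatives wrongly flagged, per group
    negatives = defaultdict(int)  # all true negatives, per group
    for r in records:
        if r["label"] == 0:
            negatives[r["group"]] += 1
            if r["pred"] == 1:
                false_pos[r["group"]] += 1
    return {g: false_pos[g] / negatives[g] for g in negatives}
```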

