With Regulation Coming, It’s Time to Take a Closer Look at How You’re Implementing AI


What do Facebook, a COVID-19 prediction model, and Bronx Honda all have in common? All three were specifically referenced in a recent announcement by the Federal Trade Commission aimed at encouraging “truth, fairness and equity” in the use of AI. The FTC cited negative outcomes from AI models and warned those using AI to “hold yourself accountable – or be ready for the FTC to do it for you.” Equally newsworthy, that same week the European Union proposed comprehensive AI regulation. The immediate takeaway is that new laws governing AI are likely coming, and that now is the perfect moment to examine how we build, market, and implement AI to ensure it’s ethical and compliant.

I spoke in detail with CCNG President David Hadobas and IV.AI CEO Vince Lynch about how senior executives and call center stakeholders should be thinking about this. The central takeaway is not to treat AI as a simple off-the-shelf point solution, but rather as a new leader on your team, one who requires training, performance monitoring, and continuous improvement.

The overarching theme in all of these regulatory discussions is that negative outcomes most often arise from AI models trained on data that carries human bias. For example, the COVID-19 prediction model was designed to help allocate critical resources like ICU beds and ventilators, but it was built on data that reflected existing racial bias in the American healthcare system. Because AI models are only as good as the data that trains them, the model learned that bias and actually amplified the racial disparities in access to those resources. In our discussion, Vince, David, and I also spoke about an AI bot built by Microsoft that was designed to learn from public social media conversation. The unintended consequence was that the bot learned to be horribly racist and offensive after a slew of trolls began tweeting insults at it.

If this news gives business leaders pause when it comes to implementing AI, I think that’s a good thing. However, it’s not a call to back away from AI and watch from the sidelines as regulations play out. I would argue it’s an opportunity to lean into how your organization addresses bias, human or otherwise, and how that bias can affect your customer experience. Think about how you can address and mitigate such biases, then take those learnings to your internal AI teams and external partners. Challenge them to ensure they’re addressing those biases when designing AI systems and training their models. Ensure they have processes to assess and retrain systems at regular intervals, and ensure those processes are transparent.

At IV.AI we feel strongly that AI represents an opportunity too great to pass up when it comes to maximizing your human capital, and that now is the perfect time to think critically about how you can use this technology to modernize your contact center. This is the one biased data point we don’t see as problematic.

Owen McGrath is Head of US Sales at IV.AI, tasked with growing the company’s enterprise customer base and further building out the team.