Artificial intelligence and cybersecurity risks: Take steps to address AI vulnerabilities
Artificial intelligence (AI) is a powerful asset in business, allowing machines to learn from data and make decisions at a speed and scale people can’t match. But AI systems can pose cybersecurity challenges of their own, and a compromise can cause operational, financial, health and safety, and reputational damage.
BDO Lixar, BDO Canada’s national technology consulting arm, helps organizations recognize and manage such risks. Partners Rocco Galletto, head of cybersecurity, and Daryl Senick, head of financial services, who is responsible for data and AI, discuss the potential vulnerabilities of AI and what can be done to keep these systems safe and secure.
How much is AI used in business systems, in what sectors and why?
DS: AI is no longer an emerging technology; it has truly become mainstream, whether for cost and inventory optimization or for the analysis of consumer sentiment and behaviour. It appears in contexts ranging from manufacturing, for “shop-floor-to-top-floor” automation, to financial services, where it rates risk in credit and insurance products. Organizations that don’t leverage AI could lose their competitive edge.
What cybersecurity threats does this pose?
RG: Threats exist throughout the AI life cycle. Data is collected in both structured and unstructured forms and stored for analysis, and it can include sensitive information about individuals.
DS: The integrity of the data itself is also critical. These systems are built to learn, adapt, adjust and, in some cases, make decisions from the data they’re fed, so any tampering with or manipulation of that data can influence outcomes. This can lead to poor business decisions or even bring harm to individuals.
How do the vulnerabilities of traditional information technology (IT) systems compare with those in AI?
RG: IT systems are typically targeted through doors that are left open or bugs in code that allow adversaries to infiltrate the network. AI systems can be hit in much the same way, but there are additional attack vectors. Data is at the core of what makes the system function – or malfunction. In an “input attack,” an adversary feeds the AI system deliberately crafted bad data to fool it into making a mistake. And in a “poisoning attack,” an adversary corrupts the data the system learns from, stopping it from operating correctly and undermining its ability to make accurate predictions.
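To make the poisoning case concrete, here is a minimal, hypothetical sketch in Python, assuming NumPy and scikit-learn are available; the risk-score feature, labels and thresholds are invented for illustration. An attacker floods the training set with fabricated records that pair high risk scores with “low risk” labels, and the retrained model waves an obviously risky applicant through:

```python
# Minimal sketch of a data "poisoning attack" on a toy risk classifier.
# Illustrative only: the feature, labels and thresholds are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Legitimate training data: one normalized risk score in [0, 1].
# Scores above 0.5 are labelled high risk (1), the rest low risk (0).
X = rng.uniform(0.0, 1.0, size=(200, 1))
y = (X[:, 0] > 0.5).astype(int)
clean_model = LogisticRegression().fit(X, y)

# Poisoning: the attacker injects fabricated records pairing high
# risk scores with "low risk" labels, drowning out the real signal.
X_poison = rng.uniform(0.5, 1.0, size=(600, 1))
y_poison = np.zeros(600, dtype=int)
poisoned_model = LogisticRegression().fit(
    np.vstack([X, X_poison]), np.concatenate([y, y_poison])
)

applicant = np.array([[0.9]])  # an obviously high-risk score
print(clean_model.predict(applicant))     # [1] -- flagged as high risk
print(poisoned_model.predict(applicant))  # [0] -- the poisoning hides the risk
```

The same mechanics apply to the underwriting scenario Daryl describes next: a model trained on manipulated data quietly misprices risk on every decision it makes.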
What can some of the consequences be?
DS: For instance, where underwriting processes are automated, say for loans or insurance products, if you introduce bias through bad data, you could give high-risk applicants low-rate loans or low-rate insurance, and that’s going to create financial loss. Or a retailer could be exposed to reputational risk if decisions about advertising or products are based on inappropriate inputs.
How can this be avoided?
RG: Taking a broader view, organizations must follow a strict set of guidelines to protect the data they collect and the systems that process it. From a cybersecurity standpoint, we need to make sure these systems remain available, that data integrity is preserved and that confidentiality is maintained.
What kind of services does BDO provide to help?
DS: Our services include data advisory, data engineering, data visualization and data science. We help our clients plan their entire data journey, including strategy, roadmap, implementation and operations. We cover the full cycle of data management and data governance, providing insights and analytics and supporting an overall data-driven culture.
RG: Along the journey that Daryl describes, our security team is joined at the hip with developers, data scientists and data engineers to help our clients remain secure. All regulatory issues and cyber risks are considered, assessed and managed.
How can we ensure that AI systems remain safe and secure in the future?
RG: Ethics, bias, trust and security must be considered right from the design and planning phases of any AI project, as with all new technology initiatives. Then, throughout the product life cycle, from implementation to ongoing operations, it’s critical to monitor for any anomalies the system may encounter.
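As one concrete illustration of that kind of monitoring, here is a hypothetical Python sketch, assuming NumPy and SciPy are available, that compares live input scores against a training-time baseline with a two-sample Kolmogorov–Smirnov test and raises an alert when the distributions diverge. The test choice, data and alert threshold are assumptions for illustration, not a prescribed method:

```python
# Minimal sketch of drift/anomaly monitoring on a model's inputs.
# Illustrative only: the data and alert threshold are invented.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

baseline = rng.normal(loc=0.0, scale=1.0, size=5000)  # scores seen at training time
live = rng.normal(loc=0.6, scale=1.0, size=1000)      # shifted live traffic

stat, p_value = ks_2samp(baseline, live)
ALERT_P = 0.01  # hypothetical alerting threshold
if p_value < ALERT_P:
    print(f"ALERT: input distribution has drifted (KS={stat:.3f}, p={p_value:.1e})")
else:
    print("Inputs look consistent with the training baseline.")
```

A check like this won’t say why the inputs changed – that could be ordinary drift or deliberate tampering – but it flags the anomaly early enough for people to investigate.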
DS: The footprint of vulnerability – and the sophistication of attacks – grows continually over time. It’s important to keep evolving and to understand the risks associated with AI, because AI is here to stay, and we want to make sure we can move forward effectively while managing the risks.