Canada’s security agencies urged to detail AI use

by Kkritika Suri

A federal advisory body is urging Canada’s security agencies to provide detailed public accounts of their current and planned uses of artificial intelligence (AI) systems and software applications.

In its latest report, the National Security Transparency Advisory Group also recommends that the government consider amending pending legislation to ensure proper oversight of AI use by federal agencies. These proposals are among the newest efforts from the group, which was established in 2019 to enhance accountability and public understanding of national security policies, programs, and activities.

The government views the group as a key part of its six-point commitment to greater transparency in national security matters.

In response to the report, federal intelligence and security agencies emphasized the importance of transparency, though they noted that the sensitive nature of their work limits what they can publicly disclose.

Currently, security agencies utilize AI for various tasks, including document translation and malware threat detection. The report anticipates that reliance on AI will grow, with the technology being increasingly used to analyze large volumes of text and images, recognize patterns, and interpret trends and behaviors.

As AI becomes more integrated into national security operations, the report argues that the public needs to be better informed about the objectives and activities of border, police, and intelligence services. It emphasizes the need for "appropriate mechanisms" to enhance systemic and proactive transparency within the government while enabling external oversight and review.

The report also highlights the importance of "openness and engagement" as the government works with the private sector on national security goals, cautioning that "secrecy breeds suspicion."

One significant challenge in explaining AI to the public is the "opacity of algorithms and machine learning models"—often referred to as the "black box"—which could mean that even national security agencies may lose understanding of their AI systems over time.

Ottawa has issued guidelines on federal AI use, including a requirement for an algorithmic impact assessment before developing systems that assist or replace human judgment. Additionally, the government has introduced the Artificial Intelligence and Data Act, currently before Parliament, to ensure the responsible design, development, and deployment of AI systems.

However, the proposed act and a new AI commissioner would not have authority over government institutions such as security agencies. This has led the advisory group to recommend that the government consider extending the law’s coverage to include these agencies.

The Communications Security Establishment (CSE), Canada’s cyber-spy agency, has long been a leader in using data science to process and analyze vast amounts of information. The agency argues that leveraging AI does not remove humans from the decision-making process but instead enhances their ability to make informed decisions.

In its most recent annual report, the CSE described its use of high-performance supercomputers to train AI and machine learning models, including a custom translation tool capable of translating content from over 100 languages. Introduced in late 2022, this tool was made available to Canada’s key foreign intelligence partners the following year.

The CSE’s Cyber Centre has also used machine learning to detect phishing campaigns targeting the government and to identify suspicious activity on federal networks.

Responding to the advisory group’s report, the CSE noted its efforts to contribute to public understanding of AI but acknowledged that its national security mandate imposes unique limitations on what it can disclose regarding its AI use.

“To ensure our use of AI remains ethical, we are developing comprehensive approaches to govern, manage, and monitor AI and will continue to draw on best practices and dialogue to ensure our guidance reflects current thinking,” the CSE stated.

The Canadian Security Intelligence Service (CSIS), which investigates threats such as extremist activities, espionage, and foreign interference, welcomed the transparency group’s report. CSIS stated that it is formalizing plans and governance related to AI use, with transparency being a central consideration. However, it added that there are "important limitations" on what can be publicly discussed to protect operational integrity, including matters related to AI.

In 2021, then-Federal Privacy Commissioner Daniel Therrien found that the RCMP violated the law by using facial-recognition software to collect personal information. Therrien identified significant and systemic failures by the RCMP to ensure compliance with the Privacy Act before acquiring data from U.S.-based Clearview AI, whose technology scrapes vast numbers of images from across the internet to help identify individuals.

Amid concerns over Clearview AI, the RCMP established the Technology Onboarding Program to assess the compliance of collection methods with privacy laws. The transparency advisory group’s report urges the RCMP to provide the public
