Canada’s New Guidelines for Responsible AI Use: Developers Take Responsibility


THUNDER BAY – TECHNOLOGY – Canada has taken a significant step in the regulation of artificial intelligence (AI) by introducing guidelines that place the responsibility for potential AI-related risks squarely on developers’ shoulders. These guidelines were revealed by Innovation Minister François-Philippe Champagne and aim to promote responsible development and use of generative AI systems.

Generative AI is a rapidly evolving branch of AI, exemplified by technologies like OpenAI’s ChatGPT, which uses data inputs to create content such as text, images, and sounds. While this technology offers numerous advantages, it also poses several risks, including the creation of misleading content, potential breaches of privacy, and the risk of biases contaminating the datasets that these systems rely on.

Minister Champagne introduced these guidelines after consultations and discussions with experts in the field. The intention is to take immediate steps to establish trust in AI products while Canada works on developing AI-related laws.

The guidelines assign critical responsibilities to developers in the AI sector. Developers are now mandated to:

  • Take accountability for managing risks associated with their AI systems.
  • Conduct safety assessments before deploying AI tools.
  • Address any potential discriminatory aspects of their AI systems.

In addition to these responsibilities, the code calls for transparency with the public so that people can engage with AI technology on an informed basis. It also emphasizes the need for human oversight of AI systems, in addition to automated monitoring, and highlights robust cybersecurity measures to safeguard AI tools from cyberattacks.

Signatories to this code commit not only to responsible AI development but also to supporting the ongoing development of a robust AI ecosystem in Canada. This includes sharing information and best practices with other members of the AI community, collaborating with researchers working to advance responsible AI, and working with various stakeholders, including governments, to promote public awareness and education on AI.

It is important to note that Ottawa’s code of conduct is voluntary and not legally binding. However, the companies and organizations endorsing it, including BlackBerry and OpenText, have committed to cooperating with the government.

OpenAI, the maker of ChatGPT, is based in the United States and is not covered by Canada’s guidelines. It has, however, already agreed to adhere to a similar voluntary code established by the United States in July. This underscores the global nature of the AI landscape and the importance of ethical guidelines in this rapidly evolving field.
