
Part Three – The Digital Revolution (Part Fifteen)

✍️Economic Unit


Using our classifications in AI adoption
The four user types can be used to frame interventions that help businesses achieve these goals. For example, the “over-confident” group may need guidance on the ethical and accurate use of AI systems, while the “cautious” and “unaware” groups may need support with reporting, transparency processes, and accuracy validation. All groups should have access to training that gives employees the skills required to use AI appropriately.
Given the rapid pace of technological advancement in this field, these classifications are fluid: businesses that are “unaware” may quickly acquire the skills they need, while businesses that are currently “effective implementers” may fall behind as the technology evolves.

Helping businesses become “effective implementers” of AI
A key objective for both the private sector and government should be to enable businesses to become “effective implementers.” Businesses must be able to identify use cases that deliver commercial value, such as increased productivity; acquire the skills needed to apply appropriate AI solutions to their business needs; and understand and mitigate the risks that could hinder safe adoption.

AI regulation: creating a framework for trust
There is active debate about the need for regulation in the AI domain. In general, effective regulation is beneficial for innovative sectors if it is fact-based and aims to encourage—not restrict—innovation. Establishing a level playing field, clear rules, and fair competition overseen by a strong, neutral regulator is often essential for developing competitive economies.

The European Union concluded its political negotiations on the “AI Act” in December 2023. The Act includes safeguards on general-purpose AI; prohibits social scoring and the use of AI to manipulate or exploit user vulnerabilities; gives consumers a new right to file complaints and receive meaningful explanations; and introduces fines ranging from €7.5 million or 1.5% of global turnover up to €35 million or 7% of global turnover, depending on the infringement and the size of the company.

In the United States, President Biden issued an Executive Order on AI in October 2023 outlining a set of actions covering AI safety and security, consumer protection, worker support, and the promotion of innovation and competition (including helping small businesses commercialize AI advancements). A core focus of the White House approach is developing standards, tools, and tests to ensure AI systems are safe, secure, and trustworthy.

In Canada, the government is working to establish a “Responsible AI Framework” through the Artificial Intelligence and Data Act (AIDA), which would require high-impact AI systems to meet safety and human-rights requirements; create a new AI and Data Commissioner to coordinate policy and enforce rules as the technology evolves; and ban reckless or harmful uses of AI. As in the U.S., a voluntary code of conduct applies in the interim, and AIDA could come into force in 2025 if approved by lawmakers.

The United Kingdom has taken a different approach. Instead of introducing AI-specific legislation, it has asked regulators to publish plans setting out how they will respond to AI-related risks and opportunities, and it is providing funding to expand their AI capabilities. The government has also committed to consulting on an “AI risk register across the economy” and will review and consult on its future legal framework.
Other initiatives include the “AI Opportunities Forum,” which aims to encourage AI adoption across the private sector, and the “AI Safety Institute,” which focuses on advanced AI safety in the public interest.
