Why the G7 should embrace ‘federated learning’

Artificial intelligence (AI) is transforming the world, from diagnosing diseases in hospitals to catching fraud in banking systems. But it’s also raising urgent questions.

As G7 leaders prepare to meet in Alberta, one issue looms large: how can we build powerful AI systems without sacrificing privacy?

The G7 summit is a chance to set the tone for how democratic nations manage emerging technologies. While regulations are advancing, they won’t succeed without strong technical solutions.

In our view, what’s known as federated learning — or FL — is one of the most promising yet overlooked tools, and deserves to be at the centre of the conversation.

Read more: 6 ways AI can partner with us in creative inquiry, inspired by media theorist Marshall McLuhan

As researchers in AI, cybersecurity and public health, we’ve seen the data dilemma firsthand. AI thrives on data, much of it deeply personal — medical histories, financial transactions, critical infrastructure logs. The more centralized the data, the greater the risk of leaks, misuse or cyberattacks.

The United Kingdom’s National Health Service paused a promising AI initiative over fears about data handling. In Canada, concerns have surfaced about storing personal information — including immigration and health records — in foreign cloud services. Trust in AI systems is fragile. Once it’s broken, innovation grinds to a halt.

French President Emmanuel Macron delivers a speech during the Artificial Intelligence Action Summit in Paris in February 2025.
THE CANADIAN PRESS/Sean Kilpatrick

Why is centralized AI a growing liability?

The dominant approach to training AI is to bring all data into one centralized place. On paper, that’s efficient. In practice, it creates security nightmares.

Centralized systems are attractive targets for hackers. They’re difficult to regulate, especially when data flows across national or sectoral boundaries. And they concentrate too much power in the hands of a few data-holders or tech giants.

Federated learning takes the opposite approach: instead of bringing the data to the algorithm, it brings the algorithm to the data. Each local institution — whether it’s a hospital, government agency or bank — trains an AI model on its own data. Only model updates — not raw data — are shared with a central system. It’s like students doing homework at home and submitting only their final answers, not their notebooks.

This approach dramatically lowers the risk of data breaches while preserving the ability to learn from large-scale trends.
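
To make the idea concrete, here is a minimal sketch of federated averaging in Python. Everything in it is a hypothetical stand-in: three synthetic "institutions" each fit a simple linear model on data that never leaves them, and a central server only averages their weight updates.

```python
# A minimal, self-contained sketch of federated averaging (FedAvg) using NumPy.
# The institutions, model and data are hypothetical stand-ins for illustration:
# each participant trains locally, and only its learned weights (the model
# update) are ever shared with the server.
import numpy as np

rng = np.random.default_rng(42)

def local_training(global_weights, X, y, lr=0.1, epochs=20):
    """One institution's local gradient-descent pass on its own private data."""
    w = global_weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w  # only these weights are shared, never X or y

# Private datasets that never move (synthetic, for illustration only).
true_w = np.array([2.0, -1.0, 0.5])
institutions = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=200)
    institutions.append((X, y))

# The central server holds only the shared model weights.
global_w = np.zeros(3)
for _ in range(5):  # five rounds of federated training
    updates = [local_training(global_w, X, y) for X, y in institutions]
    global_w = np.mean(updates, axis=0)  # aggregate updates, not raw data

print("learned weights:", np.round(global_w, 2))  # approaches true_w
```

In a real deployment the model and the aggregation would be far more sophisticated, and typically handled by a framework such as Flower or TensorFlow Federated, but the division of labour is the same: training happens where the data lives.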

Where is it already working?

FL could be a game-changer. When paired with techniques such as differential privacy, secure multiparty computation or homomorphic encryption, it can reduce the risk of data leaks even further.
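
As a rough illustration of how one of those techniques layers on top of FL, the hypothetical snippet below clips a client's update and adds Gaussian noise before it is shared, in the spirit of differential privacy. The clipping threshold and noise scale are placeholder values, not a formally calibrated privacy guarantee.

```python
# A simplified, hypothetical illustration of protecting a model update before
# it is shared: bound its L2 norm, then add Gaussian noise. Real differentially
# private training calibrates these parameters to a formal privacy budget.
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_scale=0.5, rng=None):
    """Clip the update's norm and add noise; only the result leaves the site."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))  # bound influence
    return clipped + rng.normal(scale=noise_scale, size=update.shape)

local_update = np.array([0.8, -2.3, 0.1])   # computed on private data
shared = privatize_update(local_update)      # what is actually transmitted
print(shared)
```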

In Canada, researchers have already used FL to train cancer detection models across provinces — without ever moving sensitive health records.

Artificial intelligence has been used to train cancer detection models.
(Shutterstock)

Projects like those involving the Canadian Primary Care Sentinel Surveillance Network have demonstrated how FL can be used to predict chronic diseases such as diabetes, while keeping all patient data securely within provincial boundaries.

Banks are using it to detect fraud without sharing customer identities. Cybersecurity agencies are exploring how to co-ordinate across jurisdictions without exposing their logs.

Read more: Health-care AI: The potential and pitfalls of diagnosis by app

Why the G7 needs to act now

Governments around the world are racing to regulate AI. Canada’s proposed Artificial Intelligence and Data Act, the European Union’s AI Act, and the Executive Order on Safe, Secure, and Trustworthy AI in the United States are all major steps forward. But without a secure way to collaborate on data-intensive problems — like pandemics, climate change or cyber threats — these efforts may fall short.

FL allows different jurisdictions to work together on shared challenges without compromising local control or sovereignty. It turns policy into practice by enabling technical collaboration without the usual legal and privacy complications.

And just as importantly, adopting FL sends a political signal: that democracies can lead not just in innovation, but in ethics and governance.

Hosting the G7 summit in Alberta isn’t just symbolic. The province is home to a thriving AI ecosystem, institutions like the Alberta Machine Intelligence Institute, and industries — from agriculture to energy — that generate vast amounts of valuable data.

Picture a cross-sector task force: farmers using local data to monitor soil health, energy companies analyzing emissions patterns, public agencies modelling wildfire risks — all working together, all protecting their data. That’s not a futuristic fantasy — it’s a pilot program waiting to happen.


A devastated neighbourhood in Jasper, Alta., on Aug. 19, 2024. Wildfire caused evacuations and widespread damage in Jasper National Park and the Jasper townsite.
THE CANADIAN PRESS/Amber Bracken

A foundation for trust?

AI is only as trustworthy as the systems behind it. And too many of today’s systems are based on outdated ideas about centralization and control.

FL offers a new foundation — one where privacy, transparency and innovation can move together. We don’t need to wait for a crisis to act. The tools already exist. What’s missing is the political will to elevate them from promising prototypes to standard practice.

If the G7 is serious about building a safer, fairer AI future, it should make FL a central piece of its plan — not a footnote.

The post “Why the G7 should embrace ‘federated learning’” by Abbas Yazdinejad, Postdoctoral Research Fellow, Artificial Intelligence, University of Toronto was published on 06/12/2025 by theconversation.com