As generative artificial intelligence (AI) tools like ChatGPT, Google Gemini and others reshape the digital landscape, much of the conversation in Canada has focused on commercial innovation.
But what if AI were developed as a non-profit public service rather than a commercial one? Canada's long history with public media – particularly CBC/Radio-Canada – offers a useful model for thinking about how AI might serve the public, and it speaks to growing calls for a public interest approach to AI policy.
Commercial AI is largely built on the assumption that user-generated content published online is free for the taking to train commercial models. Focusing too much on the technical success of generative AI ignores that its innovations depend on access to global cultural knowledge – the result of treating the internet as a "knowledge commons."
AI would not have been possible without public data, and much of that data was taken without anything being returned to the public. Canada, in fact, has a historical connection to AI innovation.
Early work on automated translation drew on a tape reel of Canadian parliamentary records that was sent anonymously to IBM in the 1980s. The multilingual material helped train early translation algorithms. What if Canada intentionally trained the future of AI in the same way?
CanGPT: a Canadian public AI
More and more countries are experimenting with national or publicly managed AI models. Switzerland, Sweden and the Netherlands are all building AI systems with the aim of creating public AI services. The Canadian federal public service has run some experiments of its own with an alternative to ChatGPT, CanChat, but it is only an internal tool.
Many arts organizations in Montréal have begun discussing creating their own commons-based AI infrastructure and tools, but they lack the infrastructure and resources to advance their mission. A national initiative could help.
There are precedents for this approach. When radio and television first emerged, many countries founded public broadcasters – like the BBC (British Broadcasting Corporation) in the United Kingdom and the CBC in Canada – to ensure that new communications technologies served democratic needs.
The same approach could work for AI. Instead of letting corporations shape the future of AI, the Canadian Parliament could encourage the development of its own AI model and expand the mandate of an organization like the CBC to provide greater access to AI. Such a public model could draw on public domain materials, government datasets and publicly licensed cultural resources.
CBC/Radio-Canada also has an enormous, multilingual archive of audio, video and text dating back decades. This corpus could become a foundational dataset for a Canadian public AI if treated as a public good.
A national model could be released as an open-source system, available either as a web service or as a locally running application. Beyond providing public access, CanGPT could anchor a broader national AI strategy based on public values rather than commercial incentives.
Setting democratic limits for AI
The development of CanGPT would spark a necessary debate about what AI should and should not do. Generative AI is already linked to deepfake pornography and other forms of technology-enabled violence.
Today, the guardrails for these harms are set privately by technology corporations. Some platforms require minimal moderation; others, like OpenAI, ban politicians and lobbyists from using ChatGPT for official campaign business. These decisions have profound political implications, shaping content moderation and social media governance.
Content moderation and acceptable use policies could be addressed through normative principles embedded in CanGPT. A publicly managed AI model could allow Canadians to debate and define these boundaries through democratic institutions rather than tech corporations.
Why a public AI model is vital
Public AI is a different approach from the federal government's infrastructure-heavy approach to AI. Despite growing fears that we are in an AI bubble, the federal government has invested billions in a large, expensive sovereign AI compute strategy.
The policy could prove ineffective if most of the money ultimately goes to American corporations, and it could undermine Canada's capacity to build AI in the public interest.
Canada's AI agenda also has a major environmental impact. A public good framework could encourage the opposite: economical, energy-efficient models that run on smaller, local machines and prioritize targeted tasks, rather than massive models with billions of parameters like ChatGPT. A smaller public model would contribute a smaller ecological footprint.
This approach would stand in direct contrast to the federal government's efforts to build large-scale AI, reflected in the massive investments in data centres outlined in recent federal budgets. Canada has made large bets on big AI projects. If the bubble bursts, however, smaller AI initiatives could offer a less risky future.
Imagining a public future for AI
Creating CanGPT would not be easy. Questions remain about funding, keeping the model up to date and maintaining competitiveness with commercial AI.
But it would open a nationwide discussion about the social purpose of AI, regulatory standards and the role of public institutions in digital infrastructure. CanGPT is admittedly an odd idea, but it could be exactly what is missing from Canada's approach to public media and digital sovereignty.
At the very least, imagining a public AI model opens up the possibility of fulfilling the promises of AI in ways other than yet another subscription sold to us by Big Tech.

