
Australia has its first framework for AI use in schools – but we must proceed with caution

Federal and state governments have just published a national framework for generative AI in schools. This paves the way for generative AI – algorithms that can create new content – to be routinely used in classrooms across the country.

This provides much-needed guidance, a full year after the launch of ChatGPT. Over the past 12 months there have been mixed reactions to the technology in schools, from banning it outright to attempting to integrate it into lessons.

So what is included in the framework, and what's missing?

What is the framework?

The framework was agreed by state and federal education ministers in October and released publicly last week.

It aims to help schools use generative AI “in a safe and effective way”. It notes the technology has “great potential to support teaching and learning and reduce administrative burdens in Australian schools”. At the same time, it also warns of risks, including:

the potential for errors and algorithmic bias in generative AI content; misuse of personal or confidential information; and the use of generative AI for inappropriate purposes, such as discriminating against individuals or groups, or undermining the integrity of student assessments.

Federal Education Minister Jason Clare also emphasized: “Schools shouldn’t use generative AI products that sell student data.”

What is in the framework?

The framework itself is just two pages long and includes six overarching principles and 25 “key statements.” The six principles are:

  • teaching and learning, including schools explaining to students how these tools work, including their potential limitations and biases

  • human and social wellbeing, including using tools in a way that avoids reinforcing biases

  • transparency, including disclosing when tools are used and their impact

  • fairness, including access for people from diverse and disadvantaged backgrounds

  • accountability, including schools testing tools before using them, and

  • privacy, security and safety, including using “robust” cybersecurity measures.

The framework will be reviewed every 12 months.

Caution is advised

The framework does important work by recognizing the possibilities of this technology while highlighting the importance of wellbeing, privacy and security.

However, some of these concepts are far less straightforward than the framework suggests. As experts in generative AI in education, we have moved from optimism to a much more cautious stance toward this technology over the last 12 months. As UNESCO recently warned:

It is amazing how quickly generative AI technologies are being integrated into education systems without controls, rules or regulations.

The framework places an extraordinary burden on schools and teachers to complete demanding tasks for which they may not be qualified, or for which they may not have the time or money.

For example, the framework requires “explainability” – but even the developers of AI models struggle to fully explain how they work.

The framework also calls on schools to conduct risk assessments of algorithms, design appropriate learning experiences, revise assessments, consult with communities, learn and apply intellectual property and copyright law, and generally become experts in the use of generative AI.

It's not clear how this could possibly be achieved within workloads we know are already stretched. This is especially the case when the nature and ethics of generative AI are complex and contested. We also know the technology is not foolproof – it makes mistakes.

Here are five areas we believe need to be included in any future version of this framework.

1. A more honest attitude towards generative AI

We need to be clear that generative AI is biased. This is because it reflects the bias of its training materials, including content published on the internet.

These limited datasets are largely created by people who are white, male, and based in the United States or the West.

For example, a current version of ChatGPT does not speak or use Aboriginal Australian languages. There may be valid reasons for this, such as not using cultural knowledge without permission. But this shows how white its “voice” is, and the problems that come with requiring students to use or rely on it.

2. More evidence

The use of technology does not automatically improve teaching and learning.

To date, there is little research showing the benefits of using generative AI in education. In fact, a recent UNESCO report confirms there is little evidence that digital technology has improved learning in classrooms over many years.

But we do have research showing the harms of algorithms. For example, AI-driven feedback can limit the kinds of writing students do and privilege white voices.

Schools need support in developing processes and procedures to monitor and evaluate the use of generative AI by staff and students.

3. Recognize the risks around bots

There are many years of research demonstrating the risks of chatbots and their ability to impede human creativity and critical thinking. This happens because people tend to automatically trust bots and their output.

The framework should aim to clarify which (low-stakes) tasks are suitable for generative AI and which are not, for both students and teachers. For example, high-stakes marking should be done by humans.

4. Transparency

So far, the framework seems to focus on students and their activities.

Any use of generative AI in schools must be disclosed. This includes teachers using generative AI to prepare teaching materials and plan lessons.

5. Recognition of teachers' expertise

The value of the global education technology (“edtech”) market has been estimated at roughly $300 billion (A$450 billion) as of 2022. Some companies argue edtech can be used to monitor student progress and take on roles traditionally held by teachers.

Australia's national education policy must ensure the role of teachers is not downgraded as the use of AI becomes more common. Teachers are experts in more than just subject areas. They are experts in teaching their disciplines and in addressing the needs of their students and communities.
