This summer, 350 participants came to MIT to tackle a question that so far has had few answers: How can education continue to create opportunities for all when digital literacy is no longer enough, in a world where students are now expected to be fluent in AI?
The AI and Education Summit was hosted by the MIT RAISE Initiative (Responsible AI for Social Empowerment and Education) in Cambridge, Massachusetts, with speakers from the App Inventor Foundation, the Mayor's Office of the City of Boston, the Hong Kong Jockey Club Charities Trust, and others. Highlights included an on-site "Hack the Climate" hackathon, in which teams of novice and experienced MIT App Inventor users had a single day to develop an app to combat climate change.
In their opening remarks, RAISE principal investigators Eric Klopfer, Hal Abelson, and Cynthia Breazeal emphasized what current goals for AI competency look like. "Education is not just about learning facts," Klopfer said. "Education is a whole development process. And we need to think about how we support teachers to be more effective. Teachers need to be part of the AI conversation." Abelson emphasized the empowerment aspect of computer-based interventions, namely their immediate impact: "what's different from the decades of people teaching via computers is what kids can do now." And Breazeal, director of the RAISE initiative, addressed AI-based learning, including the need to use technologies like robot companions in the classroom as a complement to what students and teachers can do together, not as a substitute for either. As Breazeal emphasized in her presentation: "We really want people to understand in a proper way how AI works and how to design it responsibly. We want to make sure that people have an informed voice when it comes to how AI should be integrated into society. And we want to give everyone around the world the opportunity to use and leverage AI to solve the important problems facing their communities."
MIT AI + Education Summit 2024: Welcome speeches by MIT RAISE leaders Abelson, Breazeal and Klopfer
Video: MIT Open Learning
The summit also welcomed the invited winners of the Global AI Hackathon. Prizes were awarded for apps in two areas: climate and sustainability, and health and wellness. The winning projects addressed topics such as translation of sign language into audio, moving-object recognition for the visually impaired, empathy exercises through interactions with AI characters, and personal health checks based on tongue images. Participants also took part in hands-on demos of MIT App Inventor, a "playground" featuring the Personal Robots Group's social robots, and a training session for educators on responsible AI.
By bringing together people of different ages, professional backgrounds, and regions, the organizers were able to present a unique mix of ideas for attendees to take home. Conference papers included real-world case studies on implementing AI in school environments, such as extracurricular clubs, considerations of student data security, and large-scale experiments in the United Arab Emirates and India. Plenary speakers addressed financing AI in education, the role of state government in supporting its introduction, and, in the summit's keynote speech by Francesca Lazzeri, principal director of AI and machine learning at Microsoft, the opportunities and challenges of using generative AI in education. Lazzeri talked about developing toolkits that introduce safeguards for principles such as fairness, security, and transparency. "I firmly believe that learning generative AI is not only for computer science students," said Lazzeri. "It's about all of us."
Groundbreaking AI education at MIT
Of crucial importance for early AI education was the Hong Kong Jockey Club Charities Trust, a long-standing collaborator that helped MIT launch computational thinking and project-based learning years before AI was even a widespread educational concern. A summit panel discussed the history of its CoolThink project, which introduced such learning methods in 32 Hong Kong schools in grades 4 to 6 in an initial pilot and then achieved the ambitious goal of bringing these methods to over 200 Hong Kong schools. Speaking on the panel, CoolThink director Daniel Lai said that the Trust, MIT, the Education University of Hong Kong, and the City University of Hong Kong did not want to burden teachers and students with another curriculum outside of school. Instead, they wanted to "integrate this curriculum into our education system so that every child has an equal opportunity to acquire these skills and knowledge."
MIT has been involved as a collaborator since CoolThink's launch in 2016. Professor and App Inventor founder Hal Abelson helped Lai get the project off the ground. Several summit participants and former MIT research staff played a leading role in the project's development. Education technologist Josh Sheldon led the MIT team's work on the CoolThink curriculum and teacher professional development. Karen Lang, then education and business development manager at App Inventor, was the lead curriculum developer for CoolThink's initial phase, writing the lessons and accompanying tutorials and worksheets for the three levels of the curriculum, with editorial support from the Hong Kong education team. And Mike Tissenbaum, now a professor at the University of Illinois at Urbana-Champaign, led the development of the project's research design and theoretical underpinnings. Among other important tasks, they conducted the initial teacher training for the first two cohorts of Hong Kong teachers, consisting of a total of 40 hours of sessions with about 40 teachers each.
The ethical demands of today’s AI “distorting mirror”
Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing, gave the closing keynote. He described the current state of AI as a “distorting mirror” that “distorts the world around us” and presented it as another technology that confronts humans with the moral challenge of finding positive, empowering uses for it that complement our intelligence while also mitigating its risks.
“One of the areas that I'm personally most excited about,” Huttenlocher said, “is people learning from AI,” with AI finding solutions that people haven't yet thought of on their own. As so many aspects of the summit showed, AI and education need to happen in collaboration. “[AI] is not human intellect. It's not human judgment. It's something else.”