
The Future of AI with Ted Lechterman

We sat down with Ted Lechterman, UNESCO Chair in AI Ethics and Governance at the IE School of Humanities, to talk about the future of AI.

What does the future of AI look like?

It could be a world where people use AI wisely to enhance their capabilities and solve important social problems. But it could also be a world of cognitive atrophy, growing inequalities, intensifying conflicts, and serious threats to safety and security.

What is the best AI application you have seen to date?

Deciphering and preserving indigenous languages, and most of what DeepMind does to advance scientific discovery.

Introducing AI in companies: name three benefits and three potential risks

Benefits include increased productivity, cost savings, and performance improvements.

The risks include inadequate oversight, lack of quality, and abuse of power.

What excites you most about a world shaped by AI?

Progress in solving some of humanity's greatest challenges, such as disease and climate change.

How do you imagine the future of human-machine collaboration?

In the near future, I expect we'll use personalized AI assistants much as we use smartphones today, but with more functionality and deeper integration. In the medium term, we will need to answer critical questions about the deeper integration of AI into our biology and about how AI-driven systems exercise power over people in various areas of social life.

Whose work do you admire most in the world of AI?

Iason Gabriel, Seth Lazar, Shannon Vallor, Peter Railton

If you could use AI to solve any global problem in the world, what would it be and why?

Curing serious diseases would certainly be attractive, but equally attractive would be developing systems that enable the equitable distribution of vaccines and healthcare. Moreover, I hold out hope that personalized AI assistants could one day help us become more epistemically responsible, i.e. more resilient to misinformation and more reflective in our moral judgments. But given the deep ideological polarization we face globally today, the development of AI assistants could easily be co-opted by special interests to deepen ideological divisions.

What inspired you to participate as a speaker at this AI Summit, and what message would you like to convey to the audience?

I want to show why philosophical methods are essential for understanding and addressing the key challenges we face in developing, deploying, and governing AI. While there is widespread agreement that certain values, such as democracy, apply to AI, there is significant disagreement about how these values ought to be interpreted, prioritized, and implemented. A philosophical analysis helps us untangle these questions so we can make more informed decisions about the direction of AI. This way of thinking is accessible to everyone, but like any skill, it requires patience and practice to develop.

Ted Lechterman
Holder of the UNESCO Chair in AI Ethics and Governance
IE School of Humanities
