
The London Standard's AI-generated review in the style of late art critic Brian Sewell reveals a deep philosophical threat

For the first issue of its new weekly print edition, the London Standard conducted an experiment in the form of an AI-generated review of the National Gallery exhibition Van Gogh: Poets and Lovers, written in the style of the late art critic Brian Sewell.

The goal of experiments of this kind is usually to link real observations with general theories. For experiments to successfully confirm or refute big ideas, they need a transparent design and purpose. Little was disclosed about the design of the Standard's experiment, and details about the training data and algorithms used remain unclear. But what about the purpose?

Is this a careful examination of the role of the art critic? Does it aim to initiate a broader social dialogue about which human jobs are replaceable and which are not? Is it an ethical experiment in how technology might help us cope with the loss of valued lives? Or is it just another Turing test to gauge how far AI is from human intelligence?

If the purpose cannot be determined, perhaps the review should be viewed as “experimental” in another sense: as a preliminary investigation into a more philosophical question about how humans can be reduced to machines.



One possibility is that this new technology may lead to what the Israeli historian and writer Yuval Noah Harari has called “de-individualization”. Because so much about ourselves as humans – what we think, what we believe, what we love, or who we love – can be reduced to AI data points, this leaves us broken apart or fragmented in some way. On this view, training an AI system on Sewell's collected writings fragments the person he once was.

Critics have mocked the AI-written review, calling it a pale copy that fails to capture “the waspishness and arrogance of Sewell's writings”. However, this view obscures a deeper awareness of the philosophical threat this technology poses – the reduction of the human to the machine.

What philosophers say about this threat

The philosopher Hannah Arendt put forward a startling argument against such reductionism in her 1958 book The Human Condition. She warned of a world in which powerful computing machines appear to strive for independent thought and consciousness. However, she argued that the question of whether this would count as thought depends on whether we are willing to reduce our own thinking to mere calculation.

Arendt believed that we can and should resist such reduction because people have other ways of engaging with the world. In The Human Condition she distinguishes between what she calls “labor”, “work”, and “action”. If labor is natural and work is artificial, then for Arendt action lies more within the realm of unfettered human creativity.

“Action” is what people do when they use language to tell the stories of their lives. It is a form of communication: with the help of language we are able to articulate the meaning of our actions and to coordinate them with those of others who are different from us.

[Image: Hannah Arendt photographed by Barbara Niggl Radloff in 1958. CC BY-SA]

But Arendt feared that this kind of creative, human exchange through language and storytelling could be reduced to mechanical construction – to something artificial. She also emphasized that while telling a story requires the person to take a stand and act in the world, its persistence depends on there being other people to hear it and retell it, perhaps in other forms. It depends, to some extent, on trust. This relationship of trust is at risk when human actions are reduced to what a machine can achieve.

Another philosopher closer to our time who worried about the loss of trust caused by the widespread, unreflective development and adoption of AI was Daniel Dennett, who died earlier this year. At his most alarmist, Dennett argued that the most pressing problem is not that AI will take away jobs or change warfare, but that it will destroy human trust.

Even if large language models (AI systems capable of understanding and producing human language by processing large amounts of text data) will never think like humans, and even if they will never be able to tell stories of their own, there is still the very real possibility, Dennett warned, that they will take us into a world where we will not be able to distinguish truth from untruth – where we will not know whom to trust. And that is a scary thought experiment that the Standard may have (unintentionally) brought to our attention.


