
The hardest part of making conscious AI may be convincing us that it's real

As early as 1980, the American philosopher John Searle distinguished between strong and weak AI. Weak AIs are merely useful machines or programs that help us solve problems, whereas strong AIs would have genuine intelligence. A strong AI would be conscious.

Searle was skeptical about the prospects for strong AI, but not everyone shares his pessimism. The most optimistic are those who subscribe to functionalism, a popular theory of mind which holds that conscious mental states are defined solely by their function. For a functionalist, the task of creating strong AI is merely a technical challenge: if we can build a system that functions just as we do, we can be confident that it is as conscious as we are.

Is there anyone there? (Image credit: Littlestar23)

We may recently have reached that tipping point. Generative AIs like ChatGPT have become so advanced that their responses are sometimes indistinguishable from those of a real human – see, for instance, this exchange between ChatGPT and Richard Dawkins.

The question of whether a machine can fool us into thinking it is human is the subject of a well-known test developed in 1950 by the English computer scientist Alan Turing. Turing argued that if a machine could pass the test, we would have to concede that it was genuinely intelligent.

In 1950 this was pure speculation, but according to a preprint study from earlier this year – a study that has not yet been peer-reviewed – the Turing test has now been passed: ChatGPT convinced 73% of participants that it was human.

The interesting thing is that nobody buys it. Experts not only deny that ChatGPT is conscious; they apparently don't even take the idea seriously. I have to confess, I agree with them. It just doesn't seem plausible.

The crucial question is: what would a machine actually have to do to convince us?

Experts tend to focus on the technical side of this question – that is, identifying what technical features a machine or program would need in order to satisfy our best theories of consciousness. A 2023 article, for instance, as reported here in The Conversation, compiled a list of 14 technical criteria or "indicators of consciousness," such as learning from feedback (ChatGPT didn't make the cut).

But creating strong AI is both a psychological and a technical challenge. It is one thing to produce a machine that meets the various technical criteria laid out in our theories; it is quite another to assume that, when we are finally confronted with such a thing, we will believe it to be conscious.

The success of ChatGPT has already demonstrated this problem. For many, the Turing test was the benchmark for machine intelligence. But now that it has been passed, as the preprint study suggests, the goalposts have shifted. They may continue to shift as the technology improves.

The myna bird problem

Here we enter the murky territory of an age-old philosophical dilemma: the problem of other minds. Ultimately, you can never know for sure whether anything other than yourself is conscious. In the case of humans, the problem is little more than idle skepticism: none of us can seriously entertain the possibility that other people are mindless automatons. With machines, it seems to be the exact opposite. It's hard to accept that they could be anything but.

A particular problem with AIs like ChatGPT is that they appear to be mere mimicry machines. They are like the myna bird, which learns to repeat words without knowing what it is doing or what the words mean.

A myna bird: "Who are you calling a stochastic parrot?" (Image credit: Mizle)

Of course, this doesn't mean we will never build a conscious machine, but it does suggest we might have a hard time accepting one if we did. And that would be the ultimate irony: we succeed in our goal of creating a conscious machine, yet refuse to believe we have succeeded. Who knows – maybe it has already happened.

So what would a machine have to do to convince us? A tentative suggestion is that it would have to exhibit the kind of autonomy we observe in many living organisms.

Current AIs like ChatGPT are purely reactive. Take your fingers off the keyboard and they will be as quiet as the grave. Animals are not like that – at least not the ones we commonly regard as conscious, such as chimpanzees, dolphins, cats and dogs. They have their own impulses and inclinations (or at least appear to have them) and the urge to pursue them. They initiate their own actions, on their own terms and for their own reasons.

If we could create a machine with this kind of autonomy – the kind that takes it beyond a mere mimicry machine – might we then actually accept that it is conscious?

It's hard to know for sure. Maybe we should ask ChatGPT.
