
Tech company cancels AI employees’ rights after pushback

HR software company Lattice, founded by Sam Altman’s brother Jack Altman, became the first company to give digital employees official worker records, but canceled the move just three days later.

Lattice CEO Sarah Franklin announced on LinkedIn that Lattice “made history to become the first company to lead in the responsible employment of AI ‘digital workers’ by creating a digital employee record to govern them with transparency and accountability.”

Franklin said digital employees “will be securely onboarded, trained, and assigned goals, performance metrics, appropriate systems access, and even an accountable manager.”

If you think this move to anthropomorphize AI is a step too far and tone-deaf in the face of looming job losses, then you’re not alone.

There was swift backlash from the online community, including from people in the AI industry.

Sawyer Middeleer, chief of staff at AI sales platform Aomni, commented on the post saying, “This strategy and messaging misses the mark in a big way, and I say that as someone building an AI company. Treating AI agents as employees disrespects the humanity of your real employees. Worse, it implies that you view humans simply as ‘resources’ to be optimized and measured against machines.”

Ed Zitron, CEO of tech media relations company EZPR, asked Franklin, “In the event of a unionization effort at Lattice, will these AI employees be allowed to vote?”

Lattice quickly realized that the world may not be ready for “digital employees” just yet. Only three days after the announcement, the company said it had canceled the project.

Are they employees?

It’s becoming all too easy to humanize AI models and robots. They sound like us, emote, and are often better at being empathetic than we are.

But are they conscious or sentient to the point where we should consider affording them employees’ rights? Lattice may simply have been trying to get ahead of this inevitable question, albeit a little clumsily.

Claude 3 Opus surprised engineers when it revealed a measure of self-awareness during testing. Google fired one of its engineers back in 2022 after he said that its AI model displayed sentience, and researchers claim that GPT-4 has passed the Turing test.

Whether AI models actually have a measure of consciousness may not even be what decides whether digital employees are afforded rights in the future.

In April, researchers published an interesting study in the journal Neuroscience of Consciousness. They asked 300 US residents whether they believed ChatGPT was conscious and had subjective experiences like feelings and sensations. More than two-thirds of the respondents said they did, although most experts disagree.

The researchers found that the more frequently people used tools like ChatGPT, the more likely they were to attribute some level of consciousness to it.

Our initial instinctive response may be to balk at the idea of an AI ‘colleague’ being afforded employees’ rights. But this research suggests that the more we interact with AI, the more likely we are to sympathize with it and consider welcoming it to the team.
