Stanford and Google DeepMind researchers have created AI that can replicate human personalities with uncanny accuracy after just a two-hour conversation.
By interviewing 1,052 people from diverse backgrounds, they built what they call “simulation agents” – digital copies that can predict their human counterparts’ beliefs, attitudes, and behaviors with remarkable consistency.
To create the digital copies, the team uses data from an “AI interviewer” designed to engage participants in natural conversation.
The AI interviewer asks questions and generates personalized follow-up questions – a median of 82 per session – exploring everything from childhood memories to political views.
Through these two-hour discussions, each participant generated a detailed transcript averaging 6,500 words.
The above shows the study platform, which includes participant sign-up, avatar creation, and a main interface with modules for consent, avatar creation, the interview, surveys/experiments, and a self-consistency retake of the surveys/experiments. Modules become available sequentially as previous ones are completed. Source: arXiv.
For example, when a participant mentions their childhood hometown, the AI might probe deeper, asking about specific memories or experiences. By simulating a natural flow of conversation, the system captures nuanced personal information that standard surveys skim over.
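To make the mechanics concrete, here is a minimal sketch of what such an adaptive interviewer loop could look like, assuming an OpenAI-style chat-completions API. The model name, system prompt, and fixed question budget are illustrative stand-ins, not the study’s actual implementation:

```python
# Hypothetical sketch of an adaptive AI interviewer loop.
# The model name and prompt wording are assumptions, not the paper's.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a friendly interviewer building a life-history portrait of the "
    "participant. Ask one question at a time. When an answer hints at "
    "something personal (a hometown, a job change, a belief), ask a specific "
    "follow-up before moving on to the next topic."
)

def next_question(transcript: list[dict]) -> str:
    """Generate the next question (or follow-up) from the dialogue so far."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model
        messages=[{"role": "system", "content": SYSTEM_PROMPT}, *transcript],
    )
    return response.choices[0].message.content

transcript: list[dict] = []
for _ in range(82):  # the study reports a median of 82 questions per session
    question = next_question(transcript)
    transcript.append({"role": "assistant", "content": question})
    answer = input(f"{question}\n> ")  # stands in for the study's voice interface
    transcript.append({"role": "user", "content": answer})
```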
Behind the scenes, the study documents what the researchers call “expert reflection” – prompting large language models to analyze each conversation from four distinct expert viewpoints, sketched in code after the list:
- As a psychologist, it identifies specific personality traits and emotional patterns – for instance, noting how someone values independence based on their descriptions of family relationships.
- Through a behavioral economist’s lens, it extracts insights about financial decision-making and risk tolerance, like how they approach savings or career choices.
- The political scientist perspective maps ideological leanings and policy preferences across various issues.
- A demographic analysis captures socioeconomic factors and life circumstances.
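A rough sketch of how this reflection step might look, assuming the same chat-completions API as above; the persona instructions are paraphrases of the roles listed, not the paper’s actual prompts:

```python
# Hypothetical sketch of "expert reflection": the same transcript is summarized
# from four expert personas, and the notes can later be prepended to the
# simulation agent's prompt. Persona instructions are illustrative.
from openai import OpenAI

client = OpenAI()

EXPERT_PERSONAS = {
    "psychologist": "Identify personality traits and emotional patterns.",
    "behavioral economist": "Extract risk tolerance and financial decision-making styles.",
    "political scientist": "Map ideological leanings and policy preferences.",
    "demographer": "Summarize socioeconomic factors and life circumstances.",
}

def reflect(transcript_text: str) -> dict[str, str]:
    """Produce one expert summary of the interview per persona."""
    notes = {}
    for persona, task in EXPERT_PERSONAS.items():
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model
            messages=[
                {"role": "system", "content": f"You are a {persona}. {task}"},
                {"role": "user", "content": transcript_text},
            ],
        )
        notes[persona] = response.choices[0].message.content
    return notes
```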
The researchers concluded that this interview-based technique outperformed comparable methods – such as mining social media data – by a considerable margin.
The above shows the interview interface, which features an AI interviewer represented by a 2-D sprite inside a white circle that pulses with the audio level. The sprite changes to a microphone when it’s the participant’s turn. A progress bar shows a sprite traveling along a line, and options are available for subtitles and pausing.
Testing the digital copies
The researchers put their AI replicas through a battery of tests to evaluate whether they accurately reproduced various facets of their human counterparts’ personalities.
First, they used the General Social Survey – a long-running measure of social attitudes that asks questions on everything from political views to religious beliefs. Here, the AI copies matched their human counterparts’ responses 85% of the time.
On the Big Five personality test, which measures traits like openness and conscientiousness through 44 questions, the AI predictions aligned with human responses about 80% of the time. The system was particularly good at capturing traits like extraversion and neuroticism.
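For fixed-choice surveys like these, scoring a replica reduces to an agreement rate; per the platform description above, participants also retake the surveys, so accuracy can be normalized by each person’s own self-consistency. A sketch with invented response vectors:

```python
# Minimal sketch of scoring a replica on a fixed-choice survey such as the GSS.
# Response vectors are invented for illustration.
human  = ["agree", "disagree", "agree", "neutral", "agree"]    # original answers
agent  = ["agree", "disagree", "neutral", "neutral", "agree"]  # replica's predictions
retake = ["agree", "agree", "agree", "neutral", "agree"]       # human's later retake

def agreement(a: list[str], b: list[str]) -> float:
    """Fraction of questions on which two response vectors match."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

raw = agreement(agent, human)            # replica vs. original answers: 0.80
consistency = agreement(retake, human)   # human vs. their own retake:   0.80
normalized = raw / consistency           # accuracy relative to self-consistency
print(f"raw={raw:.2f}  self-consistency={consistency:.2f}  normalized={normalized:.2f}")
```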
Economic game testing revealed interesting limitations, however. In the “Dictator Game,” where participants decide how to split money with others, the AI struggled to predict human generosity precisely.
In the “Trust Game,” which tests willingness to cooperate with others for mutual benefit, the digital copies matched human choices only about two-thirds of the time.
This suggests that while AI can grasp our stated values, it still can’t fully capture the nuances of human social decision-making.
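For continuous decisions like a money split, a simple match rate is less informative than how well the replicas track each person’s relative generosity; a sketch of one plausible comparison, with invented allocations:

```python
# Sketch: comparing human vs. replica generosity in a dictator-game-style task.
# Allocations (fraction of the pot given away) are invented for illustration.
from statistics import correlation  # Pearson correlation; Python 3.10+

human_gives = [0.50, 0.10, 0.30, 0.00, 0.45, 0.25]
agent_gives = [0.40, 0.20, 0.30, 0.10, 0.35, 0.30]

r = correlation(human_gives, agent_gives)
print(f"Pearson r = {r:.2f}")  # higher r means the replicas track relative generosity
```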
Real-world experiments
The researchers also ran five classic social psychology experiments using their AI copies.
In one experiment testing how perceived intent affects blame, both humans and their AI copies showed similar patterns of assigning more blame when harmful actions seemed intentional.
Another experiment examined how fairness influences emotional responses, with AI copies accurately predicting human reactions to fair versus unfair treatment.
The AI replicas successfully reproduced human behavior in four out of five experiments, suggesting they can model not just responses to individual topics but broad, complex behavioral patterns.
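Here, “reproduced” means the agent population shows the same treatment effect as the human population, not that any individual answer matches. A sketch of that check for the blame experiment, with invented ratings and a simplified criterion:

```python
# Sketch: does the agent population reproduce a human treatment effect?
# Blame ratings (1-7 scale) are invented; the replication criterion here
# (effects pointing in the same direction) is a simplified stand-in.
from statistics import mean

human_intentional = [6, 7, 6, 5, 7, 6]
human_accidental  = [3, 2, 4, 3, 2, 3]
agent_intentional = [6, 6, 7, 5, 6, 6]
agent_accidental  = [4, 3, 3, 2, 3, 4]

human_effect = mean(human_intentional) - mean(human_accidental)
agent_effect = mean(agent_intentional) - mean(agent_accidental)

replicated = (human_effect > 0) == (agent_effect > 0)
print(f"human effect={human_effect:.2f}  agent effect={agent_effect:.2f}  replicated={replicated}")
```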
Easy AI clones: What are the implications?
AI clones are big business, with Meta recently announcing plans to fill Facebook and Instagram with AI profiles that can create content and engage with users.
TikTok has also jumped into the fray with its new “Symphony” suite of AI-powered creative tools, which includes digital avatars that brands and creators can use to produce localized content at scale.
With Symphony Digital Avatars, TikTok is enabling new ways for creators and brands to captivate global audiences using generative AI. The avatars can represent real people with a wide range of gestures, expressions, ages, nationalities, and languages.
Stanford and DeepMind’s research suggests such digital replicas will become far more sophisticated – and easier to build and deploy at scale.
“If you can have a bunch of small ‘yous’ running around and actually making the decisions that you would have made — that, I think, is ultimately the future,” lead researcher Joon Sung Park, a Stanford PhD student in computer science, told MIT Technology Review.
Park notes that such technology has upsides, as building accurate clones could support scientific research.
Instead of running expensive or ethically questionable experiments on real people, researchers could test how populations might respond to certain inputs. For example, simulations could help predict reactions to public health messages or model how communities adapt to major societal shifts.
Ultimately, though, the same features that make these AI replicas valuable for research also make them powerful tools for deception.
As digital copies become more convincing, distinguishing authentic human interaction from AI is becoming difficult, as we’ve already seen with deepfakes.
What if such technology were used to clone someone against their will? What are the implications of creating digital copies that are closely modeled on real people?
The research team acknowledges these risks. Their framework requires clear consent from participants and allows them to withdraw their data, treating personality replication with the same privacy safeguards as sensitive medical information. That at least offers some theoretical protection against the more malicious forms of misuse.
In any case, we’re pushing deeper into uncharted territory in human-machine interaction, and the long-term implications remain largely unknown.