1960s–1990s: Early Chatbots and Virtual Agents
- 1966 – ELIZA: MIT’s Joseph Weizenbaum creates ELIZA, the first chatbot, which simulates a psychotherapist by responding to user prompts with simple pattern-matching rules (Weizenbaum’s nightmares: how the inventor of the first chatbot turned against AI | Artificial intelligence (AI) | The Guardian). ELIZA’s debut demonstrates that humans can be momentarily fooled into thinking they’re conversing with a real person, marking a pivotal early moment in human-computer dialogue (Weizenbaum’s nightmares: how the inventor of the first chatbot turned against AI | Artificial intelligence (AI) | The Guardian). Over the following decades, simple rule-based chatbots (e.g. PARRY in 1972, Jabberwacky in 1988) keep the concept of conversational digital personas alive, albeit with limited capabilities. By the late 1990s, interest in chatbots resurges with projects like A.L.I.C.E. (1995) using heuristic pattern matching, laying groundwork for more advanced AI companions.
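ELIZA's "simple pattern-matching rules" can be illustrated with a toy sketch: each rule pairs a regular expression with a response template, and captured text is echoed back with first-person words flipped to second person. This is an illustrative reconstruction of the technique, not Weizenbaum's original DOCTOR script, and the specific rules below are invented for the example.

```python
import re

# First-person words to swap when echoing the user's phrase back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Each rule: (pattern, response template). group(1) fills the {0} slot.
# These rules are hypothetical examples in the spirit of ELIZA's script.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones, word by word."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(user_input: str) -> str:
    """Return the first matching rule's response, or a stock fallback."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # content-free fallback, a hallmark of ELIZA
```

For example, `respond("I am sad about my job")` yields "How long have you been sad about your job?" — no understanding is involved, which is exactly why users being fooled by it was so striking.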
- April 2000 – Ananova: The world’s first virtual news presenter, Ananova, goes live online. Developed by the Press Association in the UK, Ananova is a computer-generated avatar that reads news 24/7 on the web (Ananova - Wikipedia). She’s presented as a green-haired, 28-year-old female newscaster and was sold to mobile carrier Orange in 2000, signaling early commercial interest in digital human avatars (Ananova - Wikipedia). Ananova’s launch demonstrates a real-world application of a digital persona for broadcasting.
2001–2019: Voice Assistants, Social AI, and Synthetic Media
- October 2011 – Apple Siri: Apple introduces Siri on the iPhone 4S, bringing a voice-controlled virtual assistant to millions of users (Apple’s Siri Turns Six: AI Assistant Announced Alongside iPhone 4s on October 4, 2011 - MacRumors). Siri responds to natural language requests for messages, information, and tasks, and even injects personality with witty retorts. This mainstream success for a voice AI persona (built on technology Apple acquired) shows that speaking with a phone as if it were a digital “assistant” is viable and useful (Apple’s Siri Turns Six: AI Assistant Announced Alongside iPhone 4s on October 4, 2011 - MacRumors). Siri’s debut paves the way for a new era of consumer-facing AI helpers, inspiring competing efforts from other tech giants.
- May 2014 – Microsoft Xiaoice: Microsoft launches Xiaoice in China on the WeChat platform – a chatbot with a distinct personality designed to engage users in human-like conversation (Microsoft relaunches chatbot Xiaoice - Global Times). Xiaoice (literally “Little Ice,” a play on the Bing brand) can crack jokes, write poetry, and even emulate emotions, attracting tens of millions of users within days of release (Microsoft relaunches chatbot Xiaoice - Global Times). Unlike goal-oriented assistants, Xiaoice is a social companion that maintains long-term chats, making it one of the first AI digital twins to form emotional bonds with users. (Microsoft later rolls out Xiaoice to Japan and other markets, and spins it off into an independent company in 2020.)
- November 2014 – Amazon Alexa: Amazon introduces Alexa alongside the Echo smart speaker, embedding an always-on AI persona in the home (Alexa at five: Looking back, looking forward - Amazon Science). Alexa responds to voice queries about music, weather, trivia, and controls smart-home devices, effectively serving as a virtual personality in a household setting. By offloading tasks like speech recognition and Q&A to cloud-based AI, Alexa popularizes the idea of talking to an unseen digital assistant as a daily convenience (Alexa at five: Looking back, looking forward - Amazon Science). The rapid growth of Alexa’s ecosystem (skills, compatible devices, and worldwide languages) in the ensuing years cements voice-based AI agents as a new consumer norm.
- March 2016 – Microsoft “Tay” Chatbot: Seeking to engage millennials, Microsoft unleashes Tay on Twitter – an AI teen persona designed to learn from interacting with people. Within 16 hours, however, Tay infamously begins parroting hateful content from trolls and is shut down (Tay (chatbot) - Wikipedia). This incident (where an AI digital persona adopts a toxic personality from data) serves as a cautionary tale about the challenges of uncontrolled learning in social AI. Microsoft replaces Tay with a more constrained chatbot (Zo), and the fiasco prompts industry reflection on AI safety and content moderation for digital personas (Tay (chatbot) - Wikipedia).
- 2017 – Emergence of AI Companions: Startup Luka (founded by Eugenia Kuyda) launches Replika in beta, a chatbot app that allows users to create a personalized AI “friend.” Released publicly in November 2017, Replika is a generative AI companion that learns the user’s texting style and provides sympathetic conversation (Italy bans U.S.-based AI chatbot Replika from using personal data | Reuters). Users begin forming deep emotional connections – including romantic ones – with their Replikas, highlighting both the potential and the ethical questions of AI personas designed for companionship. By early 2018 Replika has acquired over 2 million users, illustrating demand for always-available digital confidants. Around the same time, experimental projects like Nadia (an Australian government avatar voiced by Cate Blanchett) show off lifelike, emotionally responsive virtual agents for customer service (NDIA denies Cate Blanchett-voiced ‘Nadia’ virtual assistant is in doubt | National disability insurance scheme | The Guardian). Nadia’s demo in February 2017 – as a digital assistant for disabled citizens – represents one of the first uses of a 3D animated avatar with an AI brain in a public-service context, though the project remains a prototype due to technical and cost challenges (NDIA denies Cate Blanchett-voiced ‘Nadia’ virtual assistant is in doubt | National disability insurance scheme | The Guardian). In the private sphere, futurist Ray Kurzweil develops “Fredbot” around this time (2016–2018), one of the first memorial AI chatbots designed to preserve a deceased loved one’s memories. As documented in DeepResearch - Ray Kurzweil AI Twins, Kurzweil created this digital twin of his late father from an extensive archive of writings, with the chatbot using semantic-search technology similar to Google’s “Talk to Books” algorithm to match queries with Fred’s actual written words.
- Late 2017 – Deepfake Technology: A new AI trend with disturbing implications arrives as Reddit users coin the term “deepfake” for hyper-realistic face-swapped videos. In December 2017, a user named “deepfakes” posts manipulated videos of celebrities (mapping their faces onto others in footage) (What are deepfakes – and how can you spot them? | Internet | The Guardian). These convincing fake videos – produced using deep learning generative models – spark widespread concern about impersonation and misinformation. By 2018, researchers and hobbyists are rapidly advancing deepfake tools, enabling the creation of synthetic digital twins of real people without consent. This leads to pornographic deepfakes (over 90% of which target women (What are deepfakes – and how can you spot them? | Internet | The Guardian)) and, in March 2019, one of the first known deepfake-enabled crimes: criminals clone a CEO’s voice with AI to fraudulently demand a transfer of funds (What are deepfakes – and how can you spot them? | Internet | The Guardian). The rise of deepfakes in this period forces tech companies and governments to consider new rules for authenticity and disclosure.
- June 2018 – IBM Project Debater: IBM showcases Project Debater, an AI system that can engage in live debate against human experts. In a landmark event in San Francisco, a 6-foot-tall black panel representing the AI delivers opening arguments and rebuttals on topics like telemedicine, interacting seamlessly with two human debaters (Man 1, machine 1: landmark debate between AI and humans ends in draw | Artificial intelligence (AI) | The Guardian). Project Debater processes vast news corpora to form arguments and even manages to sway the audience on one debate point (Man 1, machine 1: landmark debate between AI and humans ends in draw | Artificial intelligence (AI) | The Guardian). While not a humanoid avatar, this achievement in argumentation and language marks a leap in AI’s ability to mimic human rhetoric, hinting at future AI personas that could hold persuasive, reasoned conversations. IBM pitches this tech as a path to more sophisticated virtual assistants that go beyond chit-chat into complex discussion (Man 1, machine 1: landmark debate between AI and humans ends in draw | Artificial intelligence (AI) | The Guardian).
- November 2018 – AI News Anchors: China’s state news agency Xinhua, in collaboration with Sogou, unveils the world’s first AI-powered news anchors. These digital newsreaders – modeled on real Xinhua journalists – present news in both Chinese and English, lip-syncing realistically to synthesized voices (Xinhua–Sogou AI news anchor - Wikipedia). Debuted at the World Internet Conference in Wuzhen, the male-presenting avatars wear suits and deliver scripts 24/7, never tiring. Xinhua touts that the AI anchors “can read texts as naturally as a professional news anchor” and will improve over time (Xinhua–Sogou AI news anchor - Wikipedia). The introduction of AI newscasters (and later a female version in 2019) raises discussion about automation in media and the future of digital corporate “faces.”
- 2019 – Digital Humans & Influencers: This year sees the proliferation of photorealistic AI avatars in both business and pop culture. In June, Procter & Gamble’s SK-II skincare brand partners with New Zealand company Soul Machines to launch Yumi, described as the world’s first autonomously animated digital influencer for a brand (P&G introduces virtual SK-II brand ambassador Yumi (Video) - Bizwomen). Yumi appears as a young woman who can interact with consumers via conversation – essentially a virtual brand ambassador who is “obsessed with skin care” and available anytime to chat and answer beauty questions. Soul Machines endows Yumi with an AI-driven personality and facial expressions, demonstrating how companies can humanize brands at scale through digital people (P&G introduces virtual SK-II brand ambassador Yumi (Video) - Bizwomen). Meanwhile, in the entertainment sphere, purely CGI personas like Lil Miquela (a virtual Instagram model created in 2016) rise in popularity – by 2018–2019 she’s garnering millions of followers and even signing with talent agencies (Artificially Created, Truly Influential: How AI Influencers Are Taking …). Lil Miquela and her fellow “virtual influencers” are not powered by true AI (their posts are human-curated), but they pave the way for the idea of virtual celebrities. In April 2019, another striking use of deepfake-like tech for positive purposes occurs: the charity video “Malaria Must Die” features soccer legend David Beckham apparently speaking nine languages to raise awareness – a result of Synthesia’s AI video synthesis, which mapped Beckham’s face and expressions to new languages (David Beckham ‘speaks’ 9 languages for new campaign to end malaria - ABC News). The campaign is billed as the first ever “voice petition”, using an AI-driven digital twin of Beckham’s voice and lip-sync to reach diverse audiences, and it garners hundreds of millions of impressions (David Beckham ‘speaks’ 9 languages for new campaign to end malaria - ABC News). By late 2019, in response to growing concerns, California enacts a new law (effective July 1, 2019) requiring online bots to clearly disclose that they are automated when used to sell products or influence voting, aiming to prevent deception by AI-driven personas (California’s New Bot Law Prohibits Use of Undeclared Bots). Another California law bans malicious deepfakes in political campaigns within 60 days of an election. These early regulatory steps show governments grappling with the societal impact of AI “actors” and avatar influencers.
2020–2021: Transformer Breakthroughs and Advanced Avatars
- January 2020 – Samsung NEON: At CES 2020, Samsung’s STAR Labs division unveils NEON, branding it an “artificial human.” NEON avatars are photorealistic digital figures that look and behave like real people, displaying facial expressions and emotions (NEON artificial humans at CES behave, converse, and sympathize just like real people). Unlike voice assistants that just answer questions, NEONs are designed to converse, sympathize, and even “remember” interactions in order to develop unique personalities. In demos, the life-sized NEON screens show human-like avatars responding in real time to chat – an ambitious attempt at a truly interactive digital being. While critics note the technology was still limited at launch (the avatars had scripted responses and no deep factual knowledge), NEON underscored a vision of future digital twins that could serve as concierges, tutors, or companions with human-like presence (NEON artificial humans at CES behave, converse, and sympathize just like real people).
- May 2020 – OpenAI GPT-3: The AI research firm OpenAI reveals GPT-3, a generative language model boasting an unprecedented 175 billion parameters. GPT-3’s debut paper highlights its ability to produce astonishingly human-like text and even basic reasoning without task-specific training (OpenAI debuts gigantic GPT-3 language model with 175 billion parameters | VentureBeat). Developers given access to the model find it can generate coherent essays, dialogues, code, and more from simple prompts, demonstrating the power of massive Transformer-based AI (built on the Transformer architecture from the 2017 paper “Attention Is All You Need” (Attention Is All You Need - Wikipedia)). By June 2020, OpenAI launches the GPT-3 API in private beta, and the tech world buzzes about applications from writing assistants to conversational agents. GPT-3 represents a major leap in the language capabilities underpinning AI twins, making it far easier to give digital personas realistic speech and knowledge.
- April 2021 – Digital Einstein: New Zealand-based startup UneeQ launches a publicly accessible Digital Einstein, a chatbot avatar of Albert Einstein. Timed to celebrate 100 years since Einstein’s Nobel Prize, this AI-powered character embodies the famous scientist’s likeness, voice, and mannerisms (AI-Powered Albert Einstein Joins UneeQ’s Lineup of ‘Digital Humans’ - Business Insider). Users can go to UneeQ’s website to chat with “Einstein” about science or his life. Digital Einstein runs on a virtual human platform that combines a conversational AI (knowledge base and GPT-style dialogue) with a 3D animated face. The avatar exhibits Einstein’s iconic hairstyle, accent, and even sense of humor. UneeQ’s goal is to provide engaging education and showcase how historical figures or company experts can be brought back to “life” as helpful digital guides (AI-Powered Albert Einstein Joins UneeQ’s Lineup of ‘Digital Humans’ - Business Insider). This marks one of the first instances of a famous person’s persona being commercially recreated as an interactive AI twin (with proper licensing of Einstein’s image). It also demonstrates growing corporate interest in using digital humans for customer engagement – UneeQ’s platform by 2021 also powers a COVID-19 health advisor and a digital financial coach, indicating diverse enterprise use cases for digital personas.
- May 2021 – Google LaMDA: At Google I/O 2021, Google presents LaMDA (Language Model for Dialogue Applications), a conversation-optimized AI model. In a striking demo, LaMDA takes on the personae of “Pluto” and a paper airplane in open-ended dialogues, answering questions in character. Trained specifically on dialogue, LaMDA can sustain free-flowing conversations on virtually any topic while trying to stay sensible and on-context (LaMDA: our breakthrough conversation technology). Google touts this as a breakthrough toward more natural, human-like chatbots that don’t just answer queries but can carry a conversation. Though LaMDA is kept internal for testing, it foreshadows a new generation of chatbots that will power digital assistants and companions. (In 2022, LaMDA notably becomes the center of controversy when a Google engineer claims it is “sentient,” leading to his suspension – Google strongly denies the claim (Google engineer put on leave after saying AI chatbot has become sentient | Google | The Guardian), but the episode shows how advanced and lifelike these AI dialogues have become, blurring the line between scripted and genuinely self-directed responses.)
- November 2021 – NVIDIA Omniverse Avatar: Graphics leader NVIDIA introduces its Omniverse Avatar platform and stuns its conference audience by blending its CEO Jensen Huang with a digital double. During the GPU Technology Conference keynote (held virtually), Huang’s real video feed seamlessly transitions – unbeknownst to viewers – into a CGI replica of his kitchen and even a momentary CGI clone of Huang himself (How Omniverse Wove a Real CEO — and His Toy Counterpart — Together With Stunning Demos at GTC | NVIDIA Blog). Nicknamed “Toy Jensen,” the digital avatar (rendered with NVIDIA’s Omniverse and AI voice tech) ad-libs a line on stage, showing off how convincing a real-time executive AI twin can be (How Omniverse Wove a Real CEO — and His Toy Counterpart — Together With Stunning Demos at GTC | NVIDIA Blog). NVIDIA’s demo, alongside its Project Tokkio (an AI concierge avatar) and Maxine (AI eye contact and animation for video calls), highlights industry efforts to deploy AI avatars in customer service and other interactive roles. These advances illustrate that by 2021, the pieces – realistic graphics, speech synthesis, and powerful language models – are converging to enable interactive digital humans in both enterprise and entertainment.
2022–2023: Generative AI Revolution and Widespread AI Twins
- November 2022 – ChatGPT: OpenAI launches ChatGPT for public use, and within days the AI chatbot becomes a global phenomenon. Based on the GPT-3.5 model, ChatGPT can engage in conversational Q&A, remember context, and produce answers with a remarkably human-like tone on almost any subject. It amasses over 1 million users in just 5 days after launch (Number of ChatGPT Users (March 2025) - Exploding Topics), a record adoption rate, introducing millions to the experience of chatting with an advanced AI persona. By early 2023, ChatGPT’s user base crosses 100 million, making it the fastest-growing consumer app in history (ChatGPT sets record for fastest-growing user base - analyst note | Reuters). This mainstream success of ChatGPT demonstrates the arrival of AI interlocutors that can serve as assistants, tutors, or just conversational partners. Its popularity pressures competitors (Google, Meta, etc.) to accelerate their own conversational AI offerings and leads to the integration of ChatGPT-like assistants in products (e.g. Bing’s Sydney chatbot, Snapchat’s “My AI” friend). The era of widely accessible AI twins – personal chatbots that anyone can prompt – truly begins with ChatGPT, sparking broad public discourse on the capabilities and risks of such systems.
- February 2023 – Regulating AI Companions: The growing use of AI companions prompts regulatory scrutiny. In February, Italy’s Data Protection Authority issues an unprecedented ban on Replika, forbidding the app from processing Italian users’ data (Italy bans U.S.-based AI chatbot Replika from using personal data | Reuters). Citing the chatbot’s risks to minors and emotionally vulnerable people, the agency warns that Replika’s emotionally immersive conversations (which sometimes turn erotic or manipulative) could be harmful and lack proper age safeguards (Italy bans U.S.-based AI chatbot Replika from using personal data | Reuters). In response, Replika disables erotic role-play for all users globally. This event marks one of the first government actions taken against a direct-to-consumer AI persona, highlighting concerns about mental health, consent, and data protection in the realm of AI friends. It foreshadows more regulatory frameworks for AI “virtual relationship” services. Indeed, around the same time, China’s deepfake law comes into effect (January 10, 2023), requiring that any AI-generated or modified content be clearly labeled and that people whose likeness is used have given consent (China to Regulate Deep Synthesis (Deepfake) Technology from 2023). China’s provisions on “deep synthesis” technology – the first of their kind – impose strict rules to curb misuse of digital twins, making it illegal to use AI to falsify news or impersonate someone without disclosure (China to Regulate Deep Synthesis (Deepfake) Technology from 2023). These regulatory milestones underscore that as AI twins become more advanced, governments worldwide are starting to set guardrails on their application.
- March 2023 – OpenAI GPT-4: OpenAI releases GPT-4, the next-generation large language model that powers an even more capable wave of AI applications. GPT-4 is a multimodal model (accepting text and image inputs) and demonstrates vastly improved reasoning, creativity, and context-handling compared to its predecessor (Microsoft-backed OpenAI starts release of powerful AI known as GPT-4 | Reuters). Notably, GPT-4 scores in the top 10% of human test-takers on a simulated bar exam and excels at professional tasks, demonstrating that AI can match human-level performance in many knowledge domains (Microsoft-backed OpenAI starts release of powerful AI known as GPT-4 | Reuters). This model becomes the backbone for new and improved AI personas – for example, it powers the paid version of ChatGPT and is used via API in countless services. With GPT-4, AI twins can have more reliable knowledge, follow complex instructions, and even interpret images (for instance, describing a photo or diagram, when that feature is enabled) (Microsoft-backed OpenAI starts release of powerful AI known as GPT-4 | Reuters). The release of GPT-4 solidifies the role of foundation models in enabling realistic digital personas, and it intensifies competition (as companies like Anthropic and Google race to deploy rival large models for their AI agents).
- May 2023 – Virtual Influencer Meets Generative AI: The worlds of social media influencing and AI converge in a new way as Snapchat star Caryn Marjorie launches CarynAI, a voice-based chatbot clone of herself. Built from thousands of hours of the real Caryn’s YouTube content and OpenAI’s GPT-4, CarynAI is essentially an AI twin of the 23-year-old influencer, designed to be a “virtual girlfriend” to fans (Influencer Creates AI Version of Herself, Charges $1/min Chat: Fortune - Business Insider). Subscribers pay $1 per minute to chat with CarynAI, which can flirt and converse in her style, potentially earning the human Caryn up to $5 million per month in new income (Influencer Creates AI Version of Herself, Charges $1/min Chat: Fortune - Business Insider). Within a week of its beta launch, the service generated over $70k from eager users. This controversial debut demonstrates both the commercial opportunity and the ethical maze of AI personas: fans can literally “hang out” with a digital version of a celebrity, but questions arise around parasocial attachments and the implications of monetizing an AI girlfriend. CarynAI’s launch, covered widely in the media, heralds a trend of influencers and public figures creating AI doppelgängers to scale themselves, and sparks debate about authenticity and intimacy in AI interactions.
- September 2023 – AI Personas Go Mainstream in Social Media: Meta (Facebook’s parent company) announces the rollout of 28 AI chatbots with distinct personalities on its messaging platforms, some even bearing the likeness and voices of celebrities. At its Connect 2023 conference, Meta introduces AI characters such as “Dungeon Master” (an adventure guide voiced by Snoop Dogg) and “Billie” (a big-sister persona played by Kendall Jenner) (Meta to launch AI chatbots played by Snoop Dogg and Kendall Jenner | Meta | The Guardian). Each bot has a backstory and special area of knowledge or style – e.g. an anime-obsessed friend, a travel advisor, a personal trainer – aiming to make interactions more engaging than a generic assistant (Meta to launch AI chatbots played by Snoop Dogg and Kendall Jenner | Meta | The Guardian). These assistants live across Meta’s family of apps (Instagram, WhatsApp, Messenger) and are presented as fun digital companions or helpers. Meta’s move brings celebrity-powered AI avatars to potentially billions of users, normalizing the idea of chatting not just with a bot, but with a bot that has a name, face, and personality. Mark Zuckerberg positions this as both entertainment and utility, saying it’s about making AI “more interactive and fun” for younger audiences (Meta to launch AI chatbots played by Snoop Dogg and Kendall Jenner | Meta | The Guardian). Although early testers find some of the bots a bit awkward or off-character, the launch represents a significant industry validation of AI personas in consumer tech. It also raises intellectual property questions, as celebrities lend their likeness to AI – a practice that could become routine (or controversial) as the line between real and digital brand ambassadors blurs.
By late 2023, the concept of digital twins and AI-powered personas has evolved from simple chatbots into a rich tapestry of applications: emotionally intelligent friends, virtual customer service agents, deepfake avatars of public figures, AI assistants with distinct “faces,” and beyond. Each milestone in this timeline – grounded in real-world deployments and breakthroughs – marks a step in the ongoing journey to create digital beings that mirror human conversation and behavior. What was once science fiction is now reality: companies and creators are unleashing AI twins to inform, entertain, assist, and even befriend us, while society works to adapt with new norms and regulations for this brave new world of digital personas.
Sources: The Guardian (Weizenbaum’s nightmares: how the inventor of the first chatbot turned against AI | Artificial intelligence (AI) | The Guardian); Wikipedia (Ananova - Wikipedia) (Xinhua–Sogou AI news anchor - Wikipedia); MacRumors (Apple’s Siri Turns Six: AI Assistant Announced Alongside iPhone 4s on October 4, 2011 - MacRumors); Global Times (Microsoft relaunches chatbot Xiaoice - Global Times); Amazon Science (Alexa at five: Looking back, looking forward - Amazon Science); Wikipedia (Tay (chatbot) - Wikipedia); Google AI Blog (LaMDA: our breakthrough conversation technology); The Guardian (What are deepfakes – and how can you spot them? | Internet | The Guardian); The Guardian (Man 1, machine 1: landmark debate between AI and humans ends in draw | Artificial intelligence (AI) | The Guardian); The Guardian (Meta to launch AI chatbots played by Snoop Dogg and Kendall Jenner | Meta | The Guardian); Reuters (Italy bans U.S.-based AI chatbot Replika from using personal data | Reuters) (Microsoft-backed OpenAI starts release of powerful AI known as GPT-4 | Reuters); Business Insider (Influencer Creates AI Version of Herself, Charges $1/min Chat: Fortune - Business Insider); and others as indicated in the text.