AI-powered language learning app Speak is on a roll.
Since launching in South Korea in 2019, Speak has grown to over 10 million users, CEO and co-founder Connor Zwick told TechCrunch. The user base has doubled every year for the past five years, and Speak now has customers in more than 40 countries.
Investors, eager to see Speak expand further, are now committing more money to the startup.
The company this week closed a $20 million Series B extension led by Buckley Ventures with participation from the OpenAI Startup Fund, Khosla Ventures, Y Combinator co-founder Paul Graham and LinkedIn executive chairman Jeff Weiner. With the capital injection, Speak has raised a total of $84 million, doubling the startup's valuation to half a billion dollars.
Speak was founded in 2014 by Zwick and Andrew Hsu, who met through the Thiel Fellowship. The app is designed to teach languages by having users learn speaking patterns and practice repetition in tailored lessons, rather than memorize vocabulary and grammar. In this respect, it's not dissimilar to Duolingo, particularly Duolingo's newer generative AI features. But true to its name, Speak emphasizes verbalization above all else.
“Our core philosophy is focused on getting users to speak out loud as much as possible,” Zwick said. “Achieving language proficiency helps people make connections, bridge cultures and create economic opportunities. It remains the most crucial part of language learning for people, but historically it’s been the least supported by technology.”
Speak began with English and has since launched Spanish courses built on a speech recognition model trained on internal data. French is next, but Zwick hasn't said exactly when the startup will launch courses for that language.
Speak makes money by charging $20 per month or $99 per year for access to all of the app's features, including exam materials and one-time courses.
With a workforce of 75 people across offices in San Francisco, Seoul, Tokyo and Ljubljana (the capital of Slovenia), Speak's short- and long-term roadmap includes developing new models that provide better real-time feedback on tone of voice and pronunciation, Zwick said.