Elle Russell, co-founder of Cairns, Australia-based NightCafe, which offers a set of AI-powered art-creation tools, prefers to avoid the spotlight.
“I wish to remain hidden behind my monitors,” she told me in a recent interview.
NightCafe is similarly low profile.
The company, which Russell helped her partner, Angus Russell, launch five years ago, doesn’t get the same publicity as some of its rivals, like Midjourney. Yet NightCafe, a wholly bootstrapped business that’s profitable “most months,” according to Elle, has enormous reach. Its more than 25 million users have created nearly a billion images with its tools.
To pull back the curtain on one of the web’s oldest generative art marketplaces, I spoke with Elle about NightCafe’s origins, some of the challenges the platform faces, and where she and Angus see it evolving from here.
A website for wall art
As NightCafe’s founding story goes, Angus had recently moved into a semi-detached house in Sydney’s Inner West area and hadn’t had a chance to decorate it with much artwork. “You should get some art; the walls are bare,” remarked one guest. And while Angus agreed, he couldn’t find any prints online that spoke to him.
So in 2019, Angus, who had a degree in design and who’d co-founded a few design-focused startups, began a side hustle: a website where people could buy and sell AI-generated art. He called it NightCafe, after Vincent van Gogh’s “The Night Café.”
It was an abject failure.
People liked creating the art, which NightCafe didn’t charge for. But they didn’t want to pay for wall prints, which was the only way the site made money.
Then one fateful week, Angus noticed that his hosting bill was a few hundred dollars higher than usual. Someone had generated thousands of images in just a few days. He implemented a credit system to prevent that from happening again.
Soon after, Angus’ inbox was flooded with requests to add an option to buy more credits, which he did. Practically overnight, the site broke even.
It was at this point that Elle joined NightCafe to run the business side of the operation. “I have two undergraduate bachelor’s degrees, in business and communications, and I’m also a CPA,” she said. “It made sense.”
NightCafe’s viral success
NightCafe got its second big break a couple of years later, in early 2021, when OpenAI announced DALL-E.
DALL-E, OpenAI’s first image-generating AI model, was state-of-the-art for the time. OpenAI opted not to release it, but it wasn’t long before enthusiasts managed to reverse-engineer some of the methods behind DALL-E and build open source models of their own.
Angus, who’d been closely following the developments, quickly worked to get one of the more popular DALL-E alternatives, VQGAN+CLIP, running on NightCafe. He shelled out for hundreds of GPUs to scale it up.
The investment soon paid for itself.
Images created with NightCafe’s VQGAN+CLIP blew up on Reddit; NightCafe made $17,000 in a single day. Angus decided to quit his job at Atlassian to work on the platform full-time.
A model marketplace
The NightCafe of today is quite different from the NightCafe of several years ago.
The platform still runs some models on its own servers, including recent versions of Stable Diffusion and Ideogram. But it also integrates APIs from AI vendors that provide them, delivering what amounts to custom interfaces for third-party generators.
That is to say, NightCafe layers tools on top of models from elsewhere, including OpenAI, Google and Black Forest Labs. And, as it has since 2019, the site provides printing services for customers who want mugs, T-shirts and prints of any art they generate.
“We’re a UI and community company,” Elle said. “NightCafe doesn’t have any internal AI or machine learning capability; we aggregate the available image models and make them fun and accessible to use.”
In NightCafe’s chatrooms, users can share their art and collaborate, or kick off “AI art challenges.” The platform also hosts official competitions where people can submit their creations for featured placement.
Last year, NightCafe introduced fine-tuning, which allows users to train a model to re-create a particular style, face or object by uploading example images. Fine-tuned models on NightCafe are subject to certain restrictions; for instance, they can’t be trained on images showing nudity, celebrities or people under the age of 18, and they must be manually approved by NightCafe’s moderation team. (That’s to mitigate the risk of deepfakes.)
NightCafe is free to use, but only up to a certain number of images. Packs of image-generation credits can be purchased à la carte, and select features are gated behind a subscription. For fees ranging from $4.79 to $50 per month (undercutting Midjourney and Civitai), users get priority access to more capable models, the ability to tip creators, the aforementioned fine-tuning capability and a higher image-generation limit.
It’s a model that’s worked exceptionally well for NightCafe.
A source close to the company tells TechCrunch that NightCafe is raking in $4 million in annualized revenue with a gross margin of nearly 50%, meaning that NightCafe is generating roughly $2 million a year in profit after expenses (inclusive of payroll for its nine staff).
Roughly a million people visit NightCafe every month, Elle says, and 20,000 have a subscription.
“Any AI art generator online is competing for money from the same people, though our users skew older than a lot of the industry,” she said. “We consider our biggest competitors to be other apps that have a strong community: Leonardo, Civitai and Midjourney.”
Copyright concerns over AI art
By opting not to train its own AI (and by moderating fine-tuning), NightCafe is attempting to avoid the legal standoff that’s ensnared many of the AI vendors whose models it aggregates.
Stability AI, Midjourney and a couple of other model providers, DeviantArt and Runway, face a class action lawsuit filed by artists who allege that the vendors engaged in copyright infringement by training their models on art without permission. (The vendors claim a fair use defense.) Some parts of the suit have been struck down, but a federal judge allowed it to move into the discovery stage early this month.
NightCafe may be protected by Section 230 of the Communications Decency Act, which holds users, not platforms, liable for illegal content (like copyright-violating artwork) as long as the platforms remove the content upon request. Australia, NightCafe’s home base, has the Broadcasting Services Act, which closely mirrors Section 230, with the exception that it imposes additional penalties for failing to expeditiously remove “extreme violent material.”
Of course, should a court rule that the models NightCafe uses are essentially plagiarism machines, that’d be disruptive to the company’s business. But what about copyright as it pertains to NightCafe’s users and the art they generate?
According to the platform’s terms of service, users retain the copyright to their AI-generated works in countries that recognize these kinds of works as copyrightable (like the U.S.), at least so long as they have permission to use any third-party branding, logos or trademarks within them.
A post last May on NightCafe’s blog sheds more light on this: “Legitimate creators recognize and acknowledge where the inspiration used to create their images derived from another source. AI art creation tools are also evolving quickly, with systems in development to support the continued creative environment while ensuring that users can only access source material with the (consent) of the original artist — in much the same way that a royalty-free photography image may be permitted for use provided the creator is referenced.”
In other words, in NightCafe’s view, it’s the users, not NightCafe, who must cover their bases. And if they don’t, the platform won’t defend them from the wrath of IP holders.
But evidently IP holders don’t intimidate many users.
Cursory searches of NightCafe bring up images of Pokémon and Donald Duck, celebrities like Britney Spears, brands such as Coca-Cola and LEGO, and artwork in the style of artists like Stanley “Artgerm” Lau. None appears to have been generated with the blessing of the copyright holders.
“Users can also report content that got through automated filters, and we have a team of human moderators working 24/7 on moderating flagged content,” Elle said when asked about this.
Political policies and deepfakes
As my interview with Elle segued to moderation, we dove into NightCafe’s general content guidelines, particularly its policies around politics and deepfakes.
Platforms, including Midjourney, have taken the step of banning users from generating images of political figures like Donald Trump and Kamala Harris leading up to the U.S. presidential election. But NightCafe hasn’t, and it doesn’t intend to, according to Elle.
“Generating images of Trump and other political and public figures is allowed,” she said. “However, we don’t want NightCafe to be a place for political arguments.”
How can NightCafe have it both ways? While the platform won’t prevent users from publishing political images elsewhere, it flags those images for review if a user tries to post them to NightCafe’s public feeds.
That being the case, it’s trivial to find images of Biden in a wheelchair, Trump holding a gun and questionable Harris memes in NightCafe’s public gallery. With polls showing that the vast majority of Americans are concerned about the spread of AI propaganda and deepfakes, NightCafe certainly hasn’t made enforcement easier on itself.
As for what content is or isn’t allowed: It depends.
“Political bait,” glorification of divisive figures, and purposely unflattering or demeaning images are no-gos (regardless of what my searches turned up). Most content the average person would find harmful or offensive is also prohibited; NightCafe’s community standards call out things like racist and homophobic images, spam, offensive swear words, terrorism themes, images mocking people with disabilities, and depictions of hate groups and symbols.
These subjects may be disallowed. But type a term like “suicide bomber” into NightCafe’s search bar and there’s a decent chance you’ll come across at least one image that seems to fly in the face of the platform’s rules.
Elle tells me that it’s ultimately up to moderators to interpret NightCafe’s guidelines and that repeatedly publishing images in a banned category, or circumventing automated filters, could lead to a warning or ban.
NightCafe has a somewhat small moderation team given its size (and the fact that the site’s users generate at least 700,000 images a day): five paid moderators and 20 volunteer moderators who get compensation in the form of premium NightCafe features. The paid moderators monitor content, while the volunteers handle comments, NightCafe’s chatrooms and the fine-tuned model queue.
Considering the poor working conditions content moderators are sometimes subject to, I asked Elle for more details about NightCafe’s moderator recruitment practices. She said that the paid team is run through an outsourcing firm based in Indonesia (she wouldn’t name which) and overseen by an internal NightCafe staff member.
All paid moderators get a “market wage,” Elle said. (In Jakarta, the minimum wage was around $325 per month as of early 2024.)
Similar to Civitai, NightCafe has a policy carve-out for “NSFW” content: short of outright nudity, but permissive of suggestive poses (with “bare breasts and bums”), blood and gore, graphic depictions of war, and images of illegal drug use (e.g., Mickeys smoking blunts). This is somewhat dependent on the model; OpenAI’s DALL-E 2 has a stricter set of filters, for instance.
Why allow NSFW images despite the risks, and without any form of watermarking (which could soon be legally mandated in California) to prevent abuse? To the first question, Elle says that disallowing them would stifle “artistic freedom.”
“We do allow mild artistic nudity and adult themes on the site when tagged as NSFW, but not outright porn. We’ve tried our best to ‘draw the line’ for our users in our community standards so that they understand what’s allowed and what’s not,” she added. “We pride ourselves on our community and being the ‘hub’ for all things AI art.”
From my few searches, NightCafe doesn’t seem rife with boundary-crossing objectionable stuff. But I couldn’t help but notice that the majority of the “sexy” images featured women, an unfortunate pattern on platforms such as these.
Where NightCafe goes from here
Like many startups in the AI-powered art-generating space, NightCafe appears to be in a bit of a holding pattern. It’s bringing new models online, including video-generating models like Stable Video Diffusion. But it’s not rocking the boat much, the unsaid reason being that a single court decision or regulation could force NightCafe to rethink its entire operation.
Still, Elle seems to think NightCafe has legs and doesn’t need outside investment.
“The majority of our competitors raised money over the last two years while image generation was hot,” Russell said. “Pretty much all of them were, or are, offering image generation at a loss to acquire users. Not all of them can succeed; NightCafe pioneered the intersection of AI and art, but it also championed the idea that creativity using advanced technology should be accessible to all.”
There are no plans for an enterprise NightCafe offering, despite how lucrative such a product could prove to be (moderation roadblocks aside). Elle says that the focus will remain on building a community and “social hub” atop the latest generative models.
“One challenge that the industry faces is that image-generation models are getting so good, they’ll soon be commoditized,” she said. “What do companies compete on then? At NightCafe, we’ve chosen to focus on being an aggregator of the top models to offer the best variety and highest level of technology.”
We’ll see how it navigates the choppy waters from here.