Apple CEO Tim Cook speaks at Apple’s Worldwide Developers Conference (WWDC) at the San Jose Convention Center in San Jose, California, on Monday, June 4, 2018. Josh Edelson | AFP | Getty Images

Large language models like ChatGPT can produce entire blocks of text that read as if a human wrote them, and companies including Microsoft, Snap, and Shopify are racing to integrate ChatGPT into their apps. But the trend could stall if Apple decides to restrict ChatGPT-based apps from its App Store, which is the only way to install software on an iPhone.

Blix, an email app maker that has regularly clashed with Apple over its App Store rules, says it ran into that hurdle this week.

Co-founder Ben Volach told the Wall Street Journal that Apple rejected an update to its BlueMail app because the update integrated ChatGPT to help write emails but didn’t include content filtering for the chatbot’s output. Volach has also claimed on Twitter that Apple is “blocking” an AI update.

Apple said that without content filtering, the BlueMail chatbot could produce words that aren’t appropriate for children, and that the email app would have to raise its recommended age rating to 17 and up, according to the report.

Apple is investigating, and the developers can appeal the decision, a spokesperson told CNBC.

Regardless, the BlueMail episode isn’t a sign of an impending Apple crackdown on AI apps.

In fact, ChatGPT-powered features are already live in Snapchat and the Microsoft Bing app, both of which are distributed through the App Store. Other AI apps, such as Lensa, have also flourished there.

There is no formal AI or chatbot policy in Apple’s App Store Guidelines, the document that outlines what Apple permits on the App Store. Employees in a department called App Review install and briefly use every app and update before Apple approves it.

Apple could add AI-specific guidelines in the future. With cryptocurrency, for example, Apple added a section to the guidelines in a 2018 update that explicitly allows wallet apps and bans on-device mining, and it introduced new rules about NFTs last year. The company often releases updates to its guidelines in June and October.

But the BlueMail episode does show that Apple’s App Store is strict about content generated at massive scale, whether by users (in the case of social media apps, for example) or, more recently, by AI.

If an app can display content that infringes intellectual property or messages that amount to cyberbullying, for example, then it must have a way to filter that material and a way for users to report it, Apple says.
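In practice, that requirement means an app shipping AI-written text needs its own screening step before anything reaches the screen. The snippet below is a minimal sketch of what such a check could look like in an iOS app, assuming the developer runs drafts through OpenAI’s public moderation endpoint; the type and function names are illustrative assumptions, not code from Blix, Apple, or OpenAI.

    import Foundation

    // Minimal sketch: screen model-generated text with OpenAI's moderation
    // endpoint before displaying it. ModerationResponse and isSafeToDisplay
    // are illustrative names, not a real app's code.
    struct ModerationResponse: Decodable {
        struct Result: Decodable { let flagged: Bool }
        let results: [Result]
    }

    func isSafeToDisplay(_ text: String, apiKey: String) async throws -> Bool {
        var request = URLRequest(url: URL(string: "https://api.openai.com/v1/moderations")!)
        request.httpMethod = "POST"
        request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        request.httpBody = try JSONEncoder().encode(["input": text])

        let (data, _) = try await URLSession.shared.data(for: request)
        let response = try JSONDecoder().decode(ModerationResponse.self, from: data)

        // Display the draft only if the moderation model flagged nothing.
        return response.results.allSatisfy { !$0.flagged }
    }

An app could call a helper like this on every AI-generated draft and hide or regenerate any text that fails the check, which is the kind of filtering Apple reportedly asked BlueMail to add.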

The content moderation rule was likely at the heart of a skirmish with Elon Musk’s Twitter late last year, and it was the reason Apple booted Parler from the App Store in 2021. Apple let Parler back on the App Store once the app added content moderation.

Before the Bing app brought it to the iPhone, Bing’s ChatGPT-based AI produced some creepy conversations, including threats against users and pleas for help.

But Bing does have content moderation and filtering tools built in. Microsoft’s AI lets users downvote harmful responses and includes a “safety system” with content filtering and abuse detection. Microsoft has also updated the Bing chatbot in recent weeks to tamp down those creepy conversations; it now often refuses to engage with topics that could cause it to go off the rails.
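The other half of Apple’s requirement, a way for users to report problematic output, can be as simple as a client-side hook that posts the flagged message to the developer’s own backend. The endpoint and types in this sketch are hypothetical placeholders invented for illustration, not Microsoft’s or any real vendor’s API.

    import Foundation

    // Hypothetical sketch of an in-app "report this response" hook. The
    // ContentReport type and the backend endpoint are invented placeholders.
    struct ContentReport: Encodable {
        let messageID: String
        let reason: String   // e.g. "harmful", "harassment", "inaccurate"
    }

    func reportGeneratedResponse(_ report: ContentReport, to endpoint: URL) async throws {
        var request = URLRequest(url: endpoint)
        request.httpMethod = "POST"
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        request.httpBody = try JSONEncoder().encode(report)

        // The developer's backend records the report so flagged output can be
        // reviewed by a human or an automated abuse-detection system.
        _ = try await URLSession.shared.data(for: request)
    }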