On November 13, 2025, Apple updated the App Store Review Guidelines to explicitly require disclosure and user permission before sharing personal data with third‑party AI services. The change lands in 5.1.2(i) and spells out what many teams have hand‑waved: if your app ships personal data to external AI providers, you must tell users who you’re sending it to and why, then get explicit consent. If you build AI features for iOS, this update changed your release checklist overnight.
Let’s cut to what matters: what changed, who’s affected, and the fastest path to compliance without wrecking your onboarding funnel. I’ll share a battle‑tested workflow we’ve used across consumer and B2B apps, plus copy you can adapt and a few edge cases that trip teams up in review.
What changed in the App Store Review Guidelines?
Apple’s edit is short but consequential. The App Store Review Guidelines now call out third‑party AI in the privacy and data sharing section. Practically, this means:
- If your app transmits personal data to an external AI service (for example: transcripts to speech recognition, messages to an LLM, images for moderation), you must clearly disclose where that data goes and obtain explicit permission.
- Your privacy policy must match the reality of your data flows—who gets the data, for what purpose, how long, and how users can revoke.
- App Review can ask for proof: screenshots of the prompt, navigation to the in‑app policy, and evidence that the feature gates on consent.
Note the emphasis on explicit permission. This is stronger than “implied” consent in a buried policy. Users need a visible, contextual choice at the moment data could leave the device.
Does “third‑party AI” apply to my app?
If any of these sound familiar, you’re in scope:
- Your chat or help feature calls an external LLM with user content or identifiers.
- You send voice notes to cloud speech‑to‑text, or generate voice with cloud TTS based on user profile data.
- You run cloud image/video moderation on user uploads tied to an account.
- You extract PII from forms using an AI service to auto‑categorize or enrich records.
- You re‑rank search results with a hosted reranker using queries bound to a user.
Common misconception: “We forward data to our own API—so we’re fine.” Not automatically. If your server then sends personal data to an external AI provider, you’re still sharing with a third party. Apple expects your disclosure to reflect actual downstream processing.
How to comply without wrecking UX
Here’s the playbook we deploy with product teams. It keeps the experience clean while covering your bases with App Review and privacy counsel.
1) Map the flows with DDP (Data, Destinations, Purpose)
Spend one hour with engineering, product, and legal to fill this three‑column map:
- Data: What specific fields or content leave the device? Include payloads, metadata, and identifiers. Don’t forget logs and error traces.
- Destinations: Every external system that receives personal data—model provider, telemetry backends that store prompts/transcripts, content delivery services for AI outputs.
- Purpose: Why are you sharing it? Transcription, classification, personalization, safety—plain English.
DDP becomes your living source of truth for copy, UI, privacy policy, and review notes.
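If you want the DDP map to live next to the code, here is a minimal sketch of what one entry might look like in Swift. The struct and the example row are illustrative, not a required format; drive the real values from your own audit.

```swift
/// One row of the DDP (Data, Destinations, Purpose) map.
/// Field names and the example entry are illustrative placeholders.
struct DDPEntry: Codable {
    let feature: String          // e.g. "Voice Transcription"
    let data: [String]           // fields or content that leave the device
    let destinations: [String]   // every external system that receives them
    let purpose: String          // plain-English reason for sharing
    let retention: String        // how long each destination keeps the data
}

let ddpMap: [DDPEntry] = [
    DDPEntry(
        feature: "Voice Transcription",
        data: ["audio recording", "user ID", "device locale"],
        destinations: ["Acme Speech (cloud STT)", "error reporting (metadata only)"],
        purpose: "Generate an editable transcript of the user's voice note",
        retention: "Provider: up to 24h; our servers: until the user deletes"
    )
]
```

Exporting this as JSON and checking it into the repo keeps copy, the privacy policy, and your review notes pointed at the same source of truth.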
2) Design a contextual, two‑step consent
Best practice is a lightweight pre‑explain screen followed by a system‑style permission sheet. The pre‑explain screen earns trust and reduces friction on the actual decision.
- Pre‑explain: “To transcribe your voice note, we’ll send audio to Acme Speech. We store transcripts in your account so you can edit or delete them anytime.”
- Choice: Allow, Not now, and Learn more (links to your in‑app privacy view); the underlying consent toggle defaults to off.
- Timing: Trigger only when the user taps the AI feature, not at first launch.
- Granularity: Separate toggles for distinct AI uses (e.g., transcription vs. recommendations).
Keep the feature usable without consent where sensible—offer a manual mode or on‑device fallback. The goal is not to box users into “agree or leave.”
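Here’s a minimal SwiftUI sketch of that pre‑explain step, assuming a small consent store you own. `ConsentStore` and its feature keys are illustrative names, not Apple APIs, and the copy is the sample from later in this post.

```swift
import SwiftUI

/// Minimal consent store; persist to UserDefaults or your backend in practice.
/// ConsentStore and Feature are illustrative names, not Apple API.
final class ConsentStore: ObservableObject {
    enum Feature: String { case voiceTranscription }
    @Published private(set) var granted: Set<Feature> = []

    func set(_ feature: Feature, granted isGranted: Bool) {
        if isGranted { granted.insert(feature) } else { granted.remove(feature) }
    }
    func isGranted(_ feature: Feature) -> Bool { granted.contains(feature) }
}

/// Pre-explain sheet, shown only when the user taps the AI feature.
struct TranscriptionConsentSheet: View {
    @ObservedObject var consent: ConsentStore
    @Environment(\.dismiss) private var dismiss

    var body: some View {
        VStack(alignment: .leading, spacing: 16) {
            Text("Use AI to transcribe your voice notes")
                .font(.headline)
            Text("With your permission, we'll send your audio to Acme Speech to generate a transcript. We store the transcript in your account so you can edit or delete it anytime.")
            Button("Allow") {
                consent.set(.voiceTranscription, granted: true)
                dismiss()
            }
            .buttonStyle(.borderedProminent)
            Button("Not now") { dismiss() }
            Button("Learn more") {
                // Route to your in-app privacy view here.
            }
        }
        .padding()
    }
}
```

Present it as a sheet from the AI feature’s entry point, never at first launch, so the choice stays contextual.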
3) Update policy and in‑app disclosure together
Your privacy policy and in‑app privacy view should list the AI providers, data categories, retention, and controls using the same terms users saw in the pre‑explain screen. Keep copies in‑app and link them in Settings. If your product positioning intersects search and acquisition risks, see our guidance on adapting to changing SERPs in Google AI Mode and simpler SERPs for how to write user‑first, trustworthy copy.
4) Gate execution on consent and log signals
The consent state must be enforced in code. Add guards in the call path so no personal data leaves the device when toggles are off. Emit analytics events for consent_shown, consent_granted, and consent_revoked, and block any event that would leak payloads without consent.
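A minimal sketch of that boundary, assuming your own consent and analytics facades; all names here are illustrative, and the upload closure stands in for whatever client actually calls your STT provider.

```swift
import Foundation

enum ConsentError: Error { case notGranted }

/// Illustrative abstractions; adapt to your own consent and analytics layers.
protocol ConsentChecking {
    func isGranted(_ feature: String) -> Bool
}

protocol AnalyticsClient {
    func track(_ event: String, properties: [String: String])
}

struct TranscriptionService {
    let consent: ConsentChecking
    let analytics: AnalyticsClient
    let upload: (Data) async throws -> String   // the actual call to the STT provider

    /// No audio leaves the device unless the toggle is on.
    func transcribe(_ audio: Data) async throws -> String {
        guard consent.isGranted("voiceTranscription") else {
            // Log the decision, never the payload.
            analytics.track("ai_call_blocked", properties: ["feature": "voiceTranscription"])
            throw ConsentError.notGranted
        }
        analytics.track("ai_call_sent", properties: ["feature": "voiceTranscription"])
        return try await upload(audio)
    }
}
```

The consent_shown, consent_granted, and consent_revoked events belong in the consent UI itself; the service layer only needs to prove that nothing ships without the flag.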
5) Implement revocation and data deletion
Users need to change their mind. Provide a settings toggle, an in‑app data export, and a deletion flow. If your AI provider enables deletion by identifier, wire it up. If not, document retention and rotate logs aggressively.
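If your provider does support deletion by identifier, the client-side wiring can stay small. A sketch follows; the endpoint, path, and status code are placeholders, not a real provider API, so check your vendor’s documentation.

```swift
import Foundation

/// Illustrative revocation flow; the deletion endpoint below is a placeholder.
struct DataRevocation {
    let disableSharing: () -> Void   // flips the local consent toggle off
    let session: URLSession = .shared

    func revokeVoiceTranscription(userID: String) async throws {
        // 1) Stop future sharing immediately, before any network work.
        disableSharing()

        // 2) Ask the provider to delete what it holds for this user, if it
        //    supports deletion by identifier; otherwise document retention.
        var request = URLRequest(
            url: URL(string: "https://api.example-provider.com/v1/user-data/\(userID)")!)
        request.httpMethod = "DELETE"
        let (_, response) = try await session.data(for: request)

        // 3) Keep the receipt so support and DSAR workflows can verify deletion.
        guard (response as? HTTPURLResponse)?.statusCode == 204 else {
            throw URLError(.badServerResponse)
        }
    }
}
```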
A practical checklist you can ship this week
- Inventory: Build the DDP map for each AI feature. Confirm which data are personal (directly or indirectly re‑identifiable).
- Decide providers: Freeze the list of third‑party AI services and model endpoints your app hits in production.
- Write copy: Draft a one‑screen pre‑explain for each feature using provider names and purposes.
- Build UI: Add the consent screen and a persistent toggle in Settings > Privacy.
- Wire guards: Block outbound AI calls when consent is off. Add unit tests (see the test sketch after this checklist).
- Update policy: Sync your privacy policy and in‑app view with DDP; include retention and deletion.
- Prove it: Capture screenshots and a 20‑second screen recording showing the consent flow for App Review notes.
- Docs for support: Prep macros for “How do I turn this off?” and “Delete my data.”
- QA with real payloads: Verify no analytics or crash logs contain personal prompts/audio when consent is off.
- Plan fallback: Offer an on‑device or manual mode so the feature isn’t dead without consent.
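For the “wire guards” item, one test like this (XCTest, reusing the illustrative TranscriptionService boundary sketched earlier) is enough to fail the build if audio could reach a provider without consent.

```swift
import XCTest
import Foundation

/// Reuses the illustrative TranscriptionService, ConsentChecking, and
/// AnalyticsClient types sketched earlier; adapt to your real boundary.
final class ConsentGuardTests: XCTestCase {
    private final class DeniedConsent: ConsentChecking {
        func isGranted(_ feature: String) -> Bool { false }
    }
    private final class SpyAnalytics: AnalyticsClient {
        var events: [String] = []
        func track(_ event: String, properties: [String: String]) { events.append(event) }
    }
    private final class UploadSpy {
        var called = false
        func upload(_ data: Data) async throws -> String { called = true; return "" }
    }

    func testNoUploadWithoutConsent() async {
        let uploads = UploadSpy()
        let analytics = SpyAnalytics()
        let service = TranscriptionService(
            consent: DeniedConsent(),
            analytics: analytics,
            upload: uploads.upload
        )

        _ = try? await service.transcribe(Data("fake audio".utf8))

        XCTAssertFalse(uploads.called, "Audio must never reach the provider without consent")
        XCTAssertTrue(analytics.events.contains("ai_call_blocked"))
    }
}
```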
Edge cases and gotchas
Server‑side AI still counts. If your backend calls an LLM with user text, that’s third‑party AI sharing if the model is external to your company. Disclose it.
“Anonymous” isn’t a magic word. Hashing identifiers or dropping names doesn’t guarantee de‑identification. If prompts or transcripts can reasonably tie back to a person (directly or via account context), treat it as personal data.
Logs and observability. Many teams forget that request bodies land in reverse proxies, APM traces, and error reporting. Scrub or redact those paths—or route AI traffic through a separate pipeline configured to drop bodies by default.
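One way to keep prompts and transcripts out of client-side logs is to centralize redaction before anything is written. A sketch below; the key list is illustrative and should be driven by your DDP map.

```swift
import Foundation

/// Illustrative redaction helper for anything that logs request bodies.
/// The sensitive keys are examples; derive the real list from your DDP map.
enum LogRedactor {
    static let sensitiveKeys: Set<String> = ["prompt", "transcript", "audio", "email"]

    static func redact(_ body: Data) -> Data {
        guard var json = try? JSONSerialization.jsonObject(with: body) as? [String: Any] else {
            // Unknown shape: the safest default is to drop the body entirely.
            return Data("<redacted>".utf8)
        }
        for key in sensitiveKeys where json[key] != nil {
            json[key] = "<redacted>"
        }
        return (try? JSONSerialization.data(withJSONObject: json)) ?? Data("<redacted>".utf8)
    }
}
```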
Kids and sensitive categories. If you target children or process health/financial data, expect stricter scrutiny. Gate features by age, minimize collection, and avoid sending sensitive fields unless absolutely necessary.
Model swapping. If you change providers post‑release (say, you A/B different STT engines), the disclosure must remain accurate. Consider a generic pre‑explain that lists providers and links to a live in‑app page with the current roster.
Sample copy you can adapt
Screen title: “Use AI to transcribe your voice notes”
Body: “With your permission, we’ll send your audio to Acme Speech to generate a transcript. We store the transcript in your account so you can edit or delete it anytime.”
Buttons: [Allow] [Not now] [Learn more]
In‑app privacy view snippet: “When you enable Voice Transcription, the app sends your audio to Acme Speech (USA) for processing. Acme retains audio for up to 24 hours for reliability checks; transcripts are stored by us until you delete them in Settings. Turn this off anytime in Settings > Privacy > Voice Transcription.”
How review might enforce it
Expect reviewers to test the AI feature path. If there’s no pre‑explain and consent step, you risk a 5.1.2 rejection. Bug‑fix reviews can move faster, but privacy and legal issues can still block a build. Include a short note in your submission: where to find the privacy view, how to reproduce the consent prompt, and which toggles control data sharing.
If your team needs a dry‑run, our product and compliance sprints are designed for this—see services for mobile teams, and how we scope engagements on what we do. If you’re comparing privacy changes across channels (web vs. app), our take on identity and adtech shifts in third‑party cookies aren’t dying—now what? helps you align the messaging.
What this means for your 2026 AI roadmap
Strategically, you have two levers: reduce the amount of personal data sent to clouds, and increase transparency when you must. On‑device AI will keep expanding, but most production workloads will remain hybrid—some on device, some in the cloud. Design for that hybrid reality with feature‑level toggles, per‑provider isolation, and observability that proves compliance.
Also consider procurement posture. If you ever need to swap AI vendors, a clean abstraction layer plus transparent disclosures mean you can change providers without re‑training users. Document providers in a live in‑app list to avoid resubmissions for trivial swaps.
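That abstraction can be as thin as a protocol plus a roster that backs the live in‑app provider list. A sketch with illustrative names:

```swift
import Foundation

/// Thin provider abstraction so vendors can be swapped without touching feature code.
/// Names are illustrative; the roster backs the live in-app provider list.
protocol SpeechToTextProvider {
    var displayName: String { get }   // exactly what the in-app disclosure shows
    func transcribe(_ audio: Data) async throws -> String
}

struct ProviderRoster {
    /// The providers currently in production, surfaced in the in-app privacy view.
    let activeProviders: [any SpeechToTextProvider]
}
```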
People also ask
Do I need to name the AI provider in the app?
Yes—name the provider in your pre‑explain and in‑app privacy view. Users deserve to know who processes their data, and reviewers look for specificity.
If I hash user IDs first, do I still need consent?
Typically yes. If the content or context can reasonably be tied back to a person, treat it as personal data and request permission.
Does this affect analytics SDKs?
Analytics are covered by your broader privacy obligations and ATT where applicable. The new wrinkle is AI‑specific sharing of personal data. If your analytics pipeline starts using AI to process user content, that becomes in scope.
What about on‑device models?
If processing stays on device and no personal data is shared externally, the new third‑party AI disclosure isn’t triggered. Still, explain what the feature does and provide a toggle.
Let’s get practical—implementation notes for engineers
Add a ConsentManager that exposes feature flags (e.g., consent.voiceTranscription). Wrap AI calls at the boundary and require the flag. Use dependency injection so your feature code can swap in a NoopAIClient when consent is off. In tests, assert that outbound clients are never resolved without consent. For observability, redact request bodies in network logs and APM, and send an X-User-Consent: true|false header for server auditing.
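Here’s one way that boundary might look in Swift; every type name below is our own convention, not an Apple or vendor API, and the endpoint is a placeholder.

```swift
import Foundation

/// Consent flags persisted locally; replace UserDefaults with your own storage.
final class ConsentManager {
    private let defaults = UserDefaults.standard
    func isGranted(_ key: String) -> Bool { defaults.bool(forKey: "consent.\(key)") }
    func set(_ key: String, granted: Bool) { defaults.set(granted, forKey: "consent.\(key)") }
}

protocol AIClient {
    func complete(_ prompt: String) async throws -> String
}

/// Injected whenever consent is off, so feature code never special-cases it.
struct NoopAIClient: AIClient {
    func complete(_ prompt: String) async throws -> String { "" }
}

struct RemoteAIClient: AIClient {
    let endpoint: URL
    let consentGranted: Bool

    func complete(_ prompt: String) async throws -> String {
        var request = URLRequest(url: endpoint)
        request.httpMethod = "POST"
        // Audit header for server-side checks; value mirrors the local toggle.
        request.setValue(consentGranted ? "true" : "false",
                         forHTTPHeaderField: "X-User-Consent")
        request.httpBody = try JSONEncoder().encode(["prompt": prompt])
        let (data, _) = try await URLSession.shared.data(for: request)
        return String(decoding: data, as: UTF8.self)
    }
}

/// Composition root: resolve the client from consent state, so the live client
/// is simply unreachable when the toggle is off.
func makeAIClient(consent: ConsentManager, endpoint: URL) -> any AIClient {
    if consent.isGranted("voiceTranscription") {
        return RemoteAIClient(endpoint: endpoint, consentGranted: true)
    }
    return NoopAIClient()
}
```

Because the composition root decides which client to hand out, feature code never needs to know whether consent exists, and the live client cannot be reached when the toggle is off.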
On the backend, terminate AI calls in a separate service with strict payload logging rules—ideally none. If you must log, use field‑level redaction. Implement deletion by user ID and keep a job to reconcile provider‑side deletion receipts with your own records.
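A small sketch of the reconciliation piece, assuming you record deletion requests in your own datastore; field names and the 72-hour SLA are illustrative.

```swift
import Foundation

/// Illustrative reconciliation record; persist in your own datastore.
struct DeletionRequest: Codable {
    let userID: String
    let provider: String
    let requestedAt: Date
    var providerReceiptID: String?   // filled in when the provider confirms
    var confirmedAt: Date?
}

/// Nightly job: flag requests the provider hasn't confirmed within the SLA.
func overdueDeletions(_ requests: [DeletionRequest],
                      sla: TimeInterval = 72 * 3600,
                      now: Date = .init()) -> [DeletionRequest] {
    requests.filter { $0.confirmedAt == nil && now.timeIntervalSince($0.requestedAt) > sla }
}
```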
What to do next
- Developers: ship the consent UI, wire guards, and add tests that fail builds if an AI call fires without consent.
- Product: decide where consent prompts appear, craft copy, and define fallbacks for non‑consenting users.
- Legal/Privacy: update policy text, align data retention with provider contracts, and set DSAR response playbooks.
- Founders/Leads: make provider naming a product standard and review it each quarter. Treat privacy as part of experience quality.
If you want a fast, done‑with‑you pass, reach out via contact us. We’ve recently helped teams harden cloud AI features, optimize infra costs, and ship without drama—see our portfolio highlights for the kind of outcomes we chase.
