Apple Intelligence Update — Foundation Models, Multimodal Features & New Leadership
Apple pushes ahead with its biggest Apple Intelligence upgrade yet: new multimodal capabilities, on-device foundation models, Live Translation, improved visual intelligence, and a major shift in AI leadership.
Overview — What’s New in Apple Intelligence?
Apple has expanded Apple Intelligence in a significant December 2025 update, introducing stronger multimodal processing, new Foundation Models for developers, improved Live Translation, enhanced visual intelligence, and organizational changes aimed at accelerating AI innovation.
The updates reflect Apple's long-term strategy: deliver powerful AI while keeping privacy and on-device processing at the center.
New Foundation Models for Developers
Apple has released its Foundation Models framework, which lets third-party developers build Apple Intelligence capabilities directly into their apps, including summarization, rewriting, visual interpretation, and extended context understanding (a brief code sketch follows the list below).
- Optimized for on-device execution across iPhone, iPad, and Mac.
- Designed with strict privacy and offline-first capabilities.
- Supports multimodal tasks including OCR, table extraction, and screen understanding.
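As an illustration, here is a minimal sketch of an on-device summarization call built on the Foundation Models framework. The session and availability APIs follow Apple's published developer examples, but treat the exact names and signatures as indicative rather than definitive.

```swift
import FoundationModels

// Minimal sketch: summarize a block of text with the on-device model.
// API names follow Apple's FoundationModels examples; exact signatures
// may differ across SDK releases.
func summarize(_ noteText: String) async throws -> String {
    // The system model can be unavailable (unsupported device, Apple
    // Intelligence turned off, model not yet downloaded), so check first.
    guard case .available = SystemLanguageModel.default.availability else {
        return noteText // graceful fallback: return the original text
    }

    // A session holds conversational context; instructions steer behavior.
    let session = LanguageModelSession(
        instructions: "Summarize the user's note in two sentences."
    )

    // The prompt is processed entirely on device.
    let response = try await session.respond(to: noteText)
    return response.content
}
```

A session keeps its own context, so it can be created per request or kept alive for multi-turn interactions.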
Multimodal Upgrades
Apple significantly expanded the multimodal layer inside Apple Intelligence, enabling the system to understand audio, images, and on-screen content more deeply.
Live Translation
Live Translation now supports more languages and is integrated across the Phone and FaceTime apps, providing real-time two-way translation powered by on-device models.
Visual Intelligence
Apple Intelligence can now analyze on-screen content, interpret documents, extract tables from images, summarize videos, and generate actionable suggestions.
Siri 2.0 — The Gradual Rollout
Siri continues its transition into a more capable multimodal assistant. While many improvements are live, Apple confirmed that several advanced voice and context features will roll out throughout 2025 and 2026.
Siri is expected to take full advantage of these foundation models, gaining improved reasoning, better conversational context, and deeper integration across apps.
Leadership Change: Amar Subramanya Becomes VP of AI
Apple has appointed Amar Subramanya as its new Vice President of AI, succeeding John Giannandrea. The leadership change signals Apple's intent to accelerate Apple Intelligence development.
Industry experts expect the move to bring faster feature shipping cycles and deeper integration of AI across Apple’s ecosystem.
Developer Impact — What You Can Do Now
- Use the Foundation Models APIs to embed summarization, rewriting, OCR, and visual analysis (see the structured-output sketch after this list).
- Test multimodal prompts involving audio + images + text for app-specific workflows.
- Adapt apps for Live Translation and more natural interactions powered by Apple's on-device model runtime.
- Follow Apple’s strict privacy model when handling user data.
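For app-specific workflows, the sketch below shows guided generation, where the model fills in a typed Swift value instead of returning free-form text. The MeetingSummary type is hypothetical, and the @Generable/@Guide macros and the respond(to:generating:) call follow Apple's framework previews, so verify them against the current SDK.

```swift
import FoundationModels

// Hypothetical output type for a meeting-notes feature. @Generable constrains
// generation to this structure; @Guide describes each field for the model.
// Macro and parameter names follow Apple's previews and may change.
@Generable
struct MeetingSummary {
    @Guide(description: "One-sentence overview of the meeting")
    var overview: String

    @Guide(description: "Action items phrased as short imperatives")
    var actionItems: [String]
}

func summarizeMeeting(_ transcript: String) async throws -> MeetingSummary {
    let session = LanguageModelSession(
        instructions: "Extract a structured summary from meeting transcripts."
    )

    // Guided generation: the response is decoded directly into MeetingSummary.
    let response = try await session.respond(
        to: transcript,
        generating: MeetingSummary.self
    )
    return response.content
}
```

Constraining output to a typed value like this avoids hand-written parsing of model text and fits naturally into SwiftUI views or persistence code.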
Limitations & Things to Watch
- Some multimodal features remain region-limited.
- On-device constraints mean tasks that need very large models still fall back to Apple's servers (Private Cloud Compute).
- Rollout timing for Siri enhancements varies across devices.
- Developers must follow Apple's privacy guidelines to avoid rejection in App Review.
Final Thoughts
Apple Intelligence is evolving into a powerful, privacy-first intelligence layer across the entire Apple ecosystem. With expanded multimodal capabilities, a developer-facing Foundation Models framework, smarter translation and vision tools, and new AI leadership, Apple is clearly positioning its AI as a defining pillar of upcoming product generations.