Apple unveiled Apple Intelligence, its long-awaited approach to generative AI, at WWDC.
The system is designed to handle a wide range of tasks while preserving context, privacy, and speed.
Large language models (LLMs) power task execution, custom image generation (including images of your friends), and summarization of notifications and messages. You can ask Siri to play the podcast you last shared or to move a file. Personal context is central: Apple Intelligence understands details like your regular commute and who you are talking to.
Apple insisted, however, that it will not compromise on privacy. The A17 Pro and M-series chips can handle many tasks on-device, keeping data local. For heavier requests, Private Cloud Compute taps larger server-based LLMs, sending only the data needed for the task; that data is never stored, and you retain control over who can access it.
Independent researchers can even inspect how Apple's servers handle data to verify that nothing improper is occurring.
Siri, meanwhile, gains new capabilities from Apple Intelligence. It accepts typed commands, recovers gracefully when you stumble over your words, and keeps track of conversational context (understanding references like “there”). Even if you are unsure of a feature’s precise name, it can still walk you through your device.
Siri can even perform in-app actions such as sharing, editing, or jumping to a specific photo. Developers can expose their own actions to Siri by adopting App Intents in their apps, as sketched below.
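As a rough illustration, here is a minimal sketch of how an app might expose a “jump to a photo” action through Apple’s App Intents framework. The intent itself, the `photoName` parameter, and the commented-out navigation call are hypothetical stand-ins; only the `AppIntent` protocol and its requirements come from the framework.

```swift
import AppIntents

// Hypothetical intent exposing a "jump to a specific photo" action to Siri.
struct OpenPhotoIntent: AppIntent {
    // The title Siri and Shortcuts display for this action.
    static var title: LocalizedStringResource = "Open Photo"
    static var description = IntentDescription("Jumps to a specific photo in the app.")

    // Bring the app to the foreground when the intent runs.
    static var openAppWhenRun: Bool = true

    // Siri resolves this parameter from the user's spoken or typed request.
    @Parameter(title: "Photo Name")
    var photoName: String

    @MainActor
    func perform() async throws -> some IntentResult {
        // Hypothetical navigation call; a real app would route to its own photo view here.
        // PhotoNavigator.shared.open(named: photoName)
        return .result()
    }
}
```

Once an intent like this ships with an app, Siri and Shortcuts can discover the action and invoke it by name, which is the mechanism the in-app actions described above build on.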