Apple Intelligence privacy, explained

Privacy as the priority supported by three pillars

David Bernal Raspall

  • October 16, 2024
  • Updated: October 25, 2024 at 10:41 AM
Apple is well known for its strong commitment to user privacy, and with the launch of Apple Intelligence at the end of this month, the company extends that commitment to artificial intelligence. AI is not known for being particularly respectful of our privacy, but Apple has designed Apple Intelligence with privacy as a priority. This is reflected in a three-pillar strategy to ensure that our data always stays safe.

Local processing: the foundation of privacy

One of Apple Intelligence’s fundamental pillars is its focus on “on-device” processing, that is, directly on our device. Apple has advocated for this practice for years, aiming to minimize the transfer of data to external servers. The advantages of this approach are clear: first, on-device processing is faster as it does not rely on an external connection, and second, our data remains on our device at all times.

Apple Intelligence is based on this “on-device first” architecture to offer its AI features, which means that, in most cases, the data will not be sent to the cloud. This ensures a high level of privacy and distinguishes Apple Intelligence from other similar solutions from the very beginning.

Private Cloud Compute: security in the cloud

There are situations where some AI tasks cannot be efficiently performed with on-device models, and it is in these cases where Private Cloud Compute comes into play. Apple has designed this cloud architecture with a clear focus: to provide a level of security as high as on-device processing.

Apple’s Private Cloud Compute is based on five key principles:

  1. Stateless computation: data is never stored and is used only for the specific request for which it was sent.
  2. Enforceable guarantees: the architecture is designed so that its privacy and security promises are technically enforceable, not just policy.
  3. No privileged access: not even Apple has a bypass mechanism to access data processed in the cloud.
  4. Non-targetability: an attacker cannot single out any individual user's data.
  5. Verifiable transparency: Apple allows external researchers to verify the security, operation, and transparency of the system.

These principles provide a solid security foundation, which means that even when data leaves the device to be processed in the cloud, privacy remains a priority. That priority has led Apple to let anyone audit the operation of these servers and verify that they work exactly as promised.

Integration with ChatGPT and other external services, with guarantees too

In a few weeks, Apple Intelligence will integrate ChatGPT into Siri and other similar tools. Starting with iOS 18.2, we will be able to take advantage of ChatGPT’s capabilities to answer questions. Apple has made it clear that this integration will operate under a pre-authorization model.

That is, we must give our explicit consent before our data is sent to OpenAI’s servers, which operate under their own privacy policy. This consent approach will also apply if Apple integrates other external AI services, such as Google Gemini.

Without our consent, no data will leave the secure environment of Apple Intelligence.
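
The pre-authorization model described above can be sketched as a simple consent gate. This is a purely illustrative sketch: every type and function name here is a hypothetical assumption for the sake of the example, not Apple's actual API. It only shows the pattern the article describes, where requests stay in the secure environment unless the user has explicitly opted in to an external service.

```swift
// Hypothetical sketch of a pre-authorization gate for external AI services.
// None of these names are real Apple APIs; this only illustrates the pattern:
// no data leaves the secure environment without explicit user consent.

enum Destination {
    case onDevice            // processed locally on the device
    case privateCloud        // Apple's Private Cloud Compute
    case external(String)    // a third-party service, e.g. "ChatGPT"
}

struct ConsentGate {
    // Records which external services the user has explicitly approved.
    private var approved: Set<String> = []

    mutating func grant(_ service: String) {
        approved.insert(service)
    }

    // A request may only reach an external service with prior consent.
    func allows(_ destination: Destination) -> Bool {
        switch destination {
        case .onDevice, .privateCloud:
            return true                        // stays in the secure environment
        case .external(let service):
            return approved.contains(service)  // requires explicit opt-in
        }
    }
}

var gate = ConsentGate()
print(gate.allows(.external("ChatGPT"))) // false: no consent given yet
gate.grant("ChatGPT")
print(gate.allows(.external("ChatGPT"))) // true: user explicitly opted in
```

The key design point is the default: on-device and Private Cloud Compute paths need no approval, while any external destination is denied until the user grants it.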

Despite the complexity of AI technologies, Apple has made it clear that its goal is to ensure that our data is protected, whether on the device or in the cloud. It is an approach we do not see in many other companies and services, and one that sets Apple Intelligence apart from our very first request.
