SafetyCore: Is Google paving the way for the end of encryption?
Without any prior notice, Google has introduced a new feature called Android System SafetyCore on all Android devices through a silent Google Play update, without any user consent.
Google describes SafetyCore as a "system service that provides safety features for Android devices," but in reality, it has nothing to do with the security of the Android system.
Instead, SafetyCore is a machine learning engine that powers the new "anti-spam" feature in Google Messages, the messaging app pre-installed on most commercial Android devices.
Thanks to SafetyCore, Google Messages can detect potential scams, malware, and even nudity, blocking such content or, in the case of explicit images, blurring them until the user explicitly chooses to view the uncensored version.
Content is analyzed locally, meaning the machine learning engine runs entirely on the device without sharing data with third parties. In theory, messages in Google Messages benefit from end-to-end encryption, provided that users have Rich Communication Services (RCS) enabled and are messaging other Android users with sufficiently up-to-date systems.
In these cases, Google cannot access the contents of the messages. Therefore, local scanning performed by SafetyCore does not add any risk of remote surveillance. However, there are still scenarios where messages are not encrypted, making them potentially accessible to third parties.
Either way, the introduction of SafetyCore raises some significant concerns.
Lack of consent and transparency
The first issue concerns how Google deployed SafetyCore.
As is often the case with Google Play updates, the feature was installed on users’ smartphones without any consent and without notification. Today, most people have this app on their devices without even knowing it.
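To check whether a given device has it, look under Settings > Apps (with system apps shown) for "Android System SafetyCore", or query the package manager over ADB. A minimal sketch, assuming the commonly reported package name com.google.android.safetycore (verify it on your own device):

```
# List installed packages and filter for SafetyCore.
# Assumes the commonly reported package name com.google.android.safetycore;
# the exact name may differ, so confirm it on your device.
adb shell pm list packages | grep -i safetycore
```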
Google has demonstrated that it can modify the core functions of billions of devices without asking for permission. This time, it has introduced a content-scanning engine that users cannot control and about which very little information is available.
From what we can tell, SafetyCore is one step away from becoming dangerous spyware. For now, the risk is mitigated only by Google's claim that the service does not communicate scan results to anyone.
However, it's crucial to emphasize that SafetyCore was not only installed without user consent or warning, but also shipped without even the most basic information needed to understand how it works. The machine learning model is closed source, so its internal code cannot be independently verified.
Moreover, there is no guarantee that such a scanning service won’t evolve in the future, introducing data-sharing mechanisms with third parties without properly informing users.

The anti-encryption political shift
The second issue relates to the broader political shift in Western governments, which have been increasingly hostile toward encryption in communications and financial transactions.
The attack on encryption follows the same script every time: terrorism, organized crime, and child exploitation are used to justify legislative efforts such as the Online Safety Bill, the EARN IT Act, or the European Chatcontrol regulation.
A particularly relevant document on this topic comes from the European Council in 2020, stating that "in the absence of complementary measures, the spread of end-to-end encryption will make it increasingly difficult to identify potential crimes such as those related to child exploitation, as currently available detection tools do not work on end-to-end encrypted communications."
In 2020, advanced AI tools capable of running on smartphones and consumer-grade devices were still in their infancy. Five years later, the landscape has changed dramatically. Most new smartphones now feature Neural Processing Units (NPUs) that work alongside traditional CPUs, significantly enhancing AI-related computational power.
SafetyCore, therefore, emerges within a political, regulatory, and technological context that is leading us toward a new form of mass surveillance: on-device machine learning that scans content before it’s encrypted—or after it’s decrypted.
Legislative proposals such as those mentioned above explicitly mandate that tech companies providing communication services and social networks implement automated scanning systems to detect illegal material. As the European Council noted back in 2020, end-to-end encryption prevents scanning messages in transit. That's why the tech landscape has shifted toward solutions like Google's.
And Google is not the first major corporation to attempt this.
In 2021, Apple tried to introduce NeuralHash, a local scanning system that analyzed photos directly on the device before they were uploaded to iCloud, sidestepping encryption by inspecting content before it ever left the phone. The backlash was so severe that Apple was forced to backtrack.
NeuralHash was never officially launched, but the algorithm still exists and has been included in iOS 14.3 and later, ready to be activated at any moment.
A dangerous convergence
Both systems appear to be quietly normalizing anti-encryption policies, gradually accustoming consumers to these functions within their operating systems—so that when lawmakers demand more intrusive surveillance measures, the infrastructure will already be in place.
Today, Google claims that SafetyCore does not communicate data to Google or third parties. But for how long? This is precisely the kind of technology that lawmakers are considering for implementing mass surveillance systems like the UK’s Online Safety Bill and the EU’s Chatcontrol. Systems that will automatically scan devices and report ‘illegal’ content to authorities.
If this convergence between politics and technology materializes, end-to-end encryption will become little more than an illusion. For most people, digital privacy will be eroded from within—leaving only a hollow shell of supposed security.
A possible solution
For now, the best course of action is to uninstall the SafetyCore app, knowing that it may be automatically reinstalled with the next Google Play update—and that, one day, it may become mandatory, evolving into a full-fledged surveillance system.
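As a rough sketch of how removal can be done today, the app can be uninstalled from Settings > Apps > Android System SafetyCore, or removed for the current user over ADB; again, the package name below is the commonly reported one and is an assumption worth verifying first.

```
# Uninstall SafetyCore for the current user (no root required).
# Package name assumed from public reports; confirm it before running.
adb shell pm uninstall --user 0 com.google.android.safetycore

# If a later Google Play update reinstalls it, repeat the command above,
# or disable the app instead of removing it:
adb shell pm disable-user --user 0 com.google.android.safetycore
```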
For those willing to take things a step further, I recommend switching to an alternative Android OS such as GrapheneOS, CalyxOS, or LineageOS—and avoiding Apple altogether.
These operating systems offer significantly greater security and control than the stock Android builds preinstalled on most smartphones. Among them, GrapheneOS is widely regarded as the most hardened option in independent comparisons.
Individual sovereignty means controlling your own data and devices. Allowing Google or Apple to install scanning engines without consent is a step toward unchecked surveillance—without any real safeguards.
Don’t miss out!
Don't miss any updates on the global fight against privacy and encryption. Read the Libertas Obscura section of Cyber Hermetica to stay ahead and understand how regulatory and technological changes impact your privacy.
If you liked this, please share it with your friends and help me build the Cyber Hermetica community!