Secure Messaging: What to Consider and How to Choose

Digital security and secrecy have long been a concern for users who want to stay safe online, especially in communication systems. Today there are many applications, each focusing on different functions, so knowing what to consider for secure communication, and how to choose between the options, matters more than ever.

What we need

There are generally four security aspects of messaging that a user may consider when choosing between providers.

Message privacy

Message privacy is an easy-to-understand requirement: no one except the recipients and I can see the message’s content during transmission. Not the service provider, not the authority, not anyone else. This is usually implemented with end-to-end encryption (E2EE). This is straightforward for private messages, but more difficult for group messages.

Between two devices, E2EE is easy: each device generates a key pair locally as an identity and uses it to authenticate key agreement protocols. During message sending, each message is encrypted with an ephemeral key that both provides forward secrecy (a compromised key does not leak previous messages) and prevents key reuse. Without the private key, the data cannot be decrypted. When an account is used on multiple devices, there are various ways to handle it: older services often sync the key itself, while newer services recognise multiple devices belonging to the same account and maintain separate key sets, encrypting messages for each device separately.
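The forward secrecy property above can be illustrated with a toy symmetric-key ratchet, the idea behind the message-key chains in protocols like Signal's Double Ratchet. This is a simplified sketch with illustrative labels, not any real wire format:

```python
import hmac
import hashlib

def advance_chain(chain_key: bytes) -> tuple[bytes, bytes]:
    """Derive a one-time message key and the next chain key.
    The old chain key is discarded after use, so a later
    compromise cannot recompute earlier message keys."""
    message_key = hmac.new(chain_key, b"message", hashlib.sha256).digest()
    next_chain_key = hmac.new(chain_key, b"chain", hashlib.sha256).digest()
    return message_key, next_chain_key

# The initial chain key would come from an authenticated key agreement;
# here it is a stand-in value.
chain = hashlib.sha256(b"shared secret from key agreement").digest()

keys = []
for _ in range(3):
    mk, chain = advance_chain(mk_chain := chain)[0], advance_chain(mk_chain)[1]
    keys.append(mk)

# Every message gets a fresh key, and because the hash cannot be
# inverted, stealing `chain` now reveals nothing about keys[0..2].
```

Note that each step is one-way: the ratchet only moves forward, which is exactly what "a compromised key does not leak previous messages" means in practice.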

When designing an E2EE scheme for a group, more factors must be considered. The message needs to be decryptable by everyone in the group, and we want to avoid encrypting it individually for each member, so some key material must be shared. When someone leaves the group, their key must be invalidated immediately so they cannot decrypt new group messages. When someone joins, there needs to be a way to share the secret with them so they can compute the key and participate in the conversation. A widely recognised method is the Signal protocol used by WhatsApp, which strikes a nice balance between efficiency and security. Later, Messaging Layer Security (MLS) was proposed and standardised, and is now the IETF standard (RFC 9420) for this type of problem.
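The shared-key-plus-rekey idea can be sketched in a few lines. This is a deliberate oversimplification (real designs like Signal's sender keys or the MLS ratchet tree are far more efficient and careful), and all names here are illustrative:

```python
import os

class Group:
    """Toy model of a group with one shared symmetric key that is
    rotated whenever membership changes."""

    def __init__(self, members):
        self.members = set(members)
        self._rekey()

    def _rekey(self):
        # In a real system the new key is delivered to each remaining
        # member over their pairwise E2EE channel; here we only model
        # the rotation itself.
        self.group_key = os.urandom(32)

    def remove(self, member):
        self.members.discard(member)
        self._rekey()  # the removed member's old key opens no new messages

    def add(self, member):
        self.members.add(member)
        self._rekey()  # also keeps past messages unreadable to joiners

g = Group({"alice", "bob", "carol"})
old_key = g.group_key
g.remove("carol")
assert g.group_key != old_key  # carol's copy of the key is now useless
```

The cost of this naive scheme is one full key distribution per membership change; tree-based designs like MLS exist precisely to make that rekey logarithmic rather than linear in group size.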

Many applications now claim to have E2EE implemented. However, unless the client is open source, there is no way to be absolutely certain that they do not collect information about the private key, or even gather the private key itself. This largely depends on service reputation, risk factors, and many other things. For example, a company may be pressured by the country in which it operates to collect and disclose user data. There has been speculation that some companies collect private messages, and as of 16/01/2026, the Chinese government has issued a new regulation stating that private messages will be regulated and monitored for security reasons. Given the current direction of EU digital surveillance plans, it’s unclear whether other countries will follow, but the potential risk is too great to ignore.

Cryptographic deniability

For sensitive use cases, cryptographic deniability might come in useful. It’s the ability to claim that a message wasn’t sent by the account itself, but could have been forged. Not all applications need this, and sometimes people want the opposite: an audit trail and immutable proof that someone definitely sent the message. This only comes into play when the message could become incriminating evidence, and the sender prefers to avoid the risk by introducing reasonable doubt.

This runs slightly against the design goal of message authenticity. For example, an email may be (and usually is nowadays) signed to prove that the sender is indeed the person he claims to be. This assures the recipient that the message is genuine, especially in systems where the sender’s information is relatively easy to forge.

Many systems handle this by unlinking the key from the identity, so it can only be proved that the message was encrypted with a specific key, but the key may come from an attacker rather than the account owner. The OTR protocol and its XMPP plugin implementations do this. The Signal protocol itself is deniable by design, but only at the protocol level: the transcript cannot be externally verified, because the symmetric message authentication tags could equally have been forged by the verifier. However, the UI/UX and backup mechanisms make this hard to achieve in real life; a screenshot is a different type of evidence than the application transcript.
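Why symmetric authentication gives deniability is easy to demonstrate: sender and recipient share the same MAC key, so a valid tag only proves that someone who knows the key produced the message, and the recipient could have forged it just as easily. A minimal sketch (not the OTR or Signal wire format; the key value is a stand-in):

```python
import hmac
import hashlib

# Both parties derive the same MAC key from their key agreement.
shared_mac_key = hashlib.sha256(b"derived from key agreement").digest()

def tag(message: bytes) -> bytes:
    return hmac.new(shared_mac_key, message, hashlib.sha256).digest()

# Alice sends an authenticated message, and Bob verifies it.
msg = b"meet at noon"
t = tag(msg)
assert hmac.compare_digest(t, tag(msg))

# But Bob can produce an equally valid tag for a message Alice never
# sent, so a transcript proves nothing to a third party.
forged = b"I confess to everything"
forged_tag = tag(forged)
assert hmac.compare_digest(forged_tag, tag(forged))
```

Contrast this with a digital signature, which only the holder of the private key can produce: that is exactly the "audit trail" property deniable designs give up.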

However, cryptographic deniability usually complicates other features of a messaging system, such as moderation and attribution, and, in many jurisdictions, it is politically unwelcome. The EU has the Chat Control proposal, and the UK has the Investigatory Powers Act and the Online Safety Act. These do not ban technologies such as E2EE and deniable design, but may introduce compelled changes or compliance measures that make such systems harder or impractical to implement while remaining compliant.

Metadata security

Aside from the message’s content, its metadata is also an important factor in a secure messaging system. The metadata we discuss here is a broad concept that includes, for example, the time the message is sent, the sender’s device identifier/IP address, the sender’s account, and even information associated with the account, such as a phone number or email address.

Most applications collect at least some metadata, and large service providers usually require some form of identity verification, such as a phone number, to prevent abuse of the system. The protection of this information becomes a real concern. By itself, the metadata can only indicate that someone sent a message from somewhere; combined with leaked message content, it may pose a substantial risk. For example, even with the best deniability design, if a message is linked to an account that has someone’s phone number, and was sent from his residential IP address, the practical deniability is greatly compromised. Some companies offer minimal-information registration, such as Proton, which lets you get an email address without providing any personal information. But unless you’re careful, metadata may leak from various places: login IP addresses, usage patterns, user agents, payment methods, etc.

Device security

Device security is largely orthogonal to the protocol design and implementation of a messaging app, so it’s often overlooked. However, an application is only as secure as its container: a compromised phone password or a keylogger can expose everything you send. In most digital systems, human-related errors are the hardest to eliminate, and many security specialists consider the human element the weakest link.

There are many topics to discuss regarding device security, such as encrypted backups, long passwords, biometric weaknesses, malware/backdoor scanning and more. Those are not the focus of this article; they are generic good practice for anything involving digital devices. Instead, we will discuss what the application itself can do to improve.

Two important features are data encryption at rest and application passwords. If properly set up, they can help ensure that if someone else gains access to the device, they cannot easily extract the app’s data or just open the app themselves to see it. Combined with correctly configured device-wide encryption, such as BitLocker or an Android encrypted data partition, they form an important defence against digital forensic attempts.
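For data encryption at rest, the application password is typically not used directly: a key is derived from it with a deliberately slow, memory-hard KDF so that offline guessing against extracted app data is expensive. A sketch using the standard library's `hashlib.scrypt` (the cost parameters and password are illustrative; a real app would then use this key with an authenticated cipher such as AES-GCM):

```python
import hashlib
import os

def derive_key(password: str, salt: bytes) -> bytes:
    """Derive a 32-byte at-rest encryption key from an app password.
    scrypt is memory-hard, which slows down brute force on GPUs."""
    return hashlib.scrypt(
        password.encode(),
        salt=salt,
        n=2**14, r=8, p=1,  # cost parameters: tune for the target device
        dklen=32,
    )

salt = os.urandom(16)  # not secret; stored alongside the ciphertext
key = derive_key("correct horse battery staple", salt)

# The same password and salt always yield the same key...
assert derive_key("correct horse battery staple", salt) == key
# ...while a wrong password yields a useless one.
assert derive_key("wrong password", salt) != key
```

The salt ensures that identical passwords on different devices produce different keys, which defeats precomputed-table attacks against a stolen database of app data.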

Another nice-to-have feature is the ability to defend against forced reveal. If the device owner is forced to unlock his device and account for some reason, the ability to quickly and discreetly erase everything or show a separate profile can reduce the risk of data compromise. This is almost never implemented in the messaging app itself, because it focuses on different aspects. Some devices offer this function, such as Android OS, which allows unlocking a separate profile with a different password, or Ripple from Guardian Project, which allows wiping some application data with a button press. Some handle this at the software level, such as using VeraCrypt containers to conceal the application and its data.

How to choose

The natural question that would arise after the discussion is “What do we need?” There is no single application that handles everything perfectly — and this is often impossible. The correct way to choose is to consider the threat model and decide which features are critical and which are just nice-to-have backups.

Considering possible compromising factors

There are generally three sources of compromise to consider:

  • An internet-based attack where the attacker tries to compromise the network, service provider, your account password, or something similar.
  • Unauthorised access to the device, where someone gains physical access, forced or discreetly, and may then access the data or application on it.
  • State or governmental pressure, in which the legal or regulatory system compels a search or disclosure of private data.

Each corresponds to different features discussed earlier. An attacker who gains access to a server database can be effectively stopped from seeing message content by E2EE, while if your friend grabs your computer when you’re away, you’d want an application password. In rare cases, a journalist, for example, may need to deal with all three of the above; in that case, the user may want to consider protections beyond just choosing a different messaging app.

Good practice

This article does not aim for a thorough comparison between applications, because:

  • Many articles already do that, and
  • Apps change and evolve all the time in response to technological and regulatory changes, and
  • Many apps market a feature, while skillfully leaving out some small details that could be important when considering security, and closed-source systems are hard to audit independently, and
  • I haven’t used them all myself

Instead, I will describe my process for selecting an application (and which other methods of protection to combine it with), and what I would consider.

Strong encryption both in transit and at rest is definitely a core feature I want. It helps mitigate common online attacks and narrows other attack vectors. Additionally, the more usable it is, the better — you don’t want to use an app to talk to someone, only to find that he can’t use it in his region.

On top of a usable and (generally) secure in-transit application, the less information I provide to it, the better. For example, I may use a disposable email address, a VPN to hide my IP address, and a device identifier fuzzer (although this usually requires a custom ROM, rooting, or special profile management). This removes the need to trust that the service provider will not log this information, and turns it into ‘I’m not providing the information in the first place’.

Lastly, I would combine it with device-level security. Disabling biometric unlock, or using a truly secure biometric system rather than just a cheap fingerprint sensor, is often the first step. Then application locks, data or device encryption, and a volatile OS can all come into play, depending on how complicated I want the setup to be and how much threat I foresee.

Personal preference

This is definitely not the “most secure” way of messaging, but it fits my perceived threats well, and is practical enough for me to do.

For the messaging app, I chose Session. It’s an open-source project which you can self-host, or join the hosted Session network. It requires no information to sign up; your device just generates a private key. It’s E2EE-capable and uses LokiNet to connect to the Session servers, protecting the user’s IP address. Session is relatively censorship-resistant because of its routing design, and is reported to work in many places where other apps fail, like China. It has recently added voice and video calls, although active calls are not routed through LokiNet. All of this combined makes it an anonymous, E2EE, secure-enough messaging app for my purposes. The downsides of Session are that the connection is typically slower, and if your local network blocks Tor-like behaviour (like eduroam), you may need another VPN to get it working first. It also has no concept of username and password; all account recovery relies on the recovery phrase of the private key. If that is lost, the account is gone forever, much like a crypto wallet.

For a mobile OS, I would use GrapheneOS on a Google Pixel. Not only is it a ROM developed specifically for security, with many of the aforementioned features built in, it also installs its own signing key directly on Pixel phones, thanks to Google’s policy of allowing third-party signing keys. This means the bootloader can be safely re-locked, and everything should work (in theory) as if it were a stock ROM, from the perspective of hardware security. It supports multiple profiles, each with its own lock, and allows a password that quickly wipes the phone by deleting the data partition key. Additionally, it is very strict in its own security policy, ships no default Google apps or other apps that may compromise privacy, and gives users granular control over which permissions they are willing to grant. Unfortunately, it’s not great for daily use: no Google framework means many apps will not work. Even though it can install Google Play in a sandboxed mode, some apps still do not behave correctly, like some banking apps, some work apps, or small things like camera apps that rely on hardware features GrapheneOS does not expose. Additionally, the latest Pixels have Google-specific features like built-in AI functions; using GrapheneOS means giving those up entirely and not fully utilising the hardware. Thus, I would only recommend it on a backup phone dedicated to this purpose.

For a desktop OS, Tails Linux is usually the default choice for privacy protection. However, Session does not work on it, mainly because Session needs UDP and other networking functions that cannot work over Tor. I have no good solution to offer, but installing Session into a VeraCrypt container is a solid first step. It will not defend against advanced forensic attempts such as memory analysis, nor does it make it much harder for malware to reach the running process, but if other good practices are followed, the risk is minimised in normal usage.

