Greetings!
A friend of mine wants to be more secure and private in light of recent events in the USA.
They originally told me they were going to use Telegram, to which I explained that Telegram is considered compromised and Signal is far more secure to use.
But they want more detailed explanations than what I provided verbally. Please help me explain things better to them! ✨
I am going to forward this thread to them, so they can see all your responses! And if you can, please cite!
Thank you! ✨
I can’t speak about Telegram, but Signal is absolutely not secure to use. It’s a US-based service (that must comply with NSLs) and requires phone numbers (meaning your real identity in the US).
Matrix, XMPP, or SimpleX are all decentralized, and don’t require US hosting.
This gets shared a lot as a major concern for all services requiring a phone number. It is true that, by definition, a phone number is linked to a person’s identity, but in Signal’s case no other information can be derived from it. When the US government requests data for a phone number from Signal, as it occasionally does, the only information Signal provides is whether that number has a Signal account, when it was registered, and when it last connected. How is that truly problematic?

For other services that require a phone number, there is much more to hand over, which is where it becomes truly problematic: social graph, text messages, media, locations, devices, etc. None of that is accessible to Signal. So literally the only thing Signal can say is whether the person has an account; that’s about it. What’s the big deal? Clearly the US government already has your phone number, since it needs it to make the request to Signal, but it gains no other information.
Your data is routed through Signal’s servers to establish connections. Signal absolutely can provide social graphs, message frequency, message times, and message sizes. There’s also nothing stopping them from pushing a snooping build to one user when that user is targeted by the NSA. The specific user would need to check every update against verified hashes. And if they’re on iOS, that’s not even an option, since the official iOS build hash already doesn’t match the repo.
Do you have anything to back this up?
They have to. They can’t route your messages otherwise.
They have to know who the message needs to go to, granted. But they don’t have to know who the message comes from, which is why the sealed sender technique works. The recipient verifies the message via the keys that were exchanged if they have communicated with that correspondent before; otherwise it arrives as a new message request.
So I don’t see how they can build social graphs if they don’t know who the sender of each message is; they can only plot recipients, which is not enough.
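Roughly, the shape of it is something like this; a minimal sketch using libsodium sealed boxes via PyNaCl, not Signal’s actual wire format, with illustrative names throughout:

```python
# Minimal sketch of the sealed-sender idea using libsodium "sealed boxes" (PyNaCl).
# This is NOT Signal's wire format; it only shows how the sender's identity can
# travel *inside* the encrypted payload, so the server sees only the recipient.
import json
from nacl.public import PrivateKey, SealedBox
from nacl.signing import SigningKey, VerifyKey

recipient_key = PrivateKey.generate()      # recipient's long-term decryption key
sender_identity = SigningKey.generate()    # sender's identity/signing key

# --- sender side ---------------------------------------------------------
inner = json.dumps({
    "from": sender_identity.verify_key.encode().hex(),  # hidden from the server
    "body": "hello",
}).encode()
signed = sender_identity.sign(inner)                      # recipient can authenticate this
sealed = SealedBox(recipient_key.public_key).encrypt(signed)

# Everything the server needs in order to route: a destination plus an opaque blob.
envelope = {"to": "recipient-mailbox-id", "payload": sealed}

# --- recipient side ------------------------------------------------------
raw = SealedBox(recipient_key).decrypt(envelope["payload"])
claimed = json.loads(raw[64:])                            # Ed25519 signature = first 64 bytes
VerifyKey(bytes.fromhex(claimed["from"])).verify(raw)     # raises if the payload was forged
print("sender identity, visible only after decryption:", claimed["from"][:16], "...")
```

In Signal itself the identity inside the payload is, as I understand it, a sender certificate checked by the recipient rather than a bare key, but the routing point is the same: the server only needs the “to” field.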
Anyone who’s worked with centralized databases can tell you that even if they did add something like that, with message timestamps it’d be trivial to find the real sender of a message. You have no proof that they even use that, because the server is centralized and closed source. Again, if their response is “just trust us,” then it’s not secure.
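To illustrate the kind of correlation I mean, here’s a toy sketch against a purely hypothetical log; to be clear, it assumes such logging exists, which is exactly what can’t be verified on a closed, centralized server:

```python
# Toy illustration of the correlation concern: IF a server kept per-connection
# timestamps (hypothetical -- no such log is claimed to exist), sealed sender
# alone wouldn't stop pairing uploads with deliveries.
from collections import Counter

# Hypothetical log entries: (seconds, account) for uploads and deliveries.
uploads    = [(100.00, "alice"), (250.10, "carol"), (400.00, "alice")]
deliveries = [(100.05, "bob"),   (250.17, "dave"),  (400.04, "bob")]

pairs = Counter()
for t_up, sender in uploads:
    for t_dn, recipient in deliveries:
        if 0 <= t_dn - t_up <= 0.5:          # delivery shortly after an upload
            pairs[(sender, recipient)] += 1

# Repeated co-occurrence is what turns timing metadata into a social graph.
print(pairs.most_common())   # [(('alice', 'bob'), 2), (('carol', 'dave'), 1)]
```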
From what I understand, sealed sender is implemented on the client side. And that’s what’s in the github repo.
How does that work? I wasn’t able to find this. Can you find documentation or code that explains how the client can obscure where it came from?
As you say yourself (cryptographic nerd here):
Such a shame there are no free servers. Is the server software not open source, only the Signal app itself?
The server is supposedly open source, but they angered the open source community a few years back by going a whole year without posting any code updates. Either way, that’s not reliable, because Signal isn’t self-hostable, so you have no idea what code the server is actually running. Never rely on someone saying “just trust us.”
I have read that it is self-hostable (though I haven’t dug into it), but since it’s not a federating service, it’s no better than the other alternatives out there.
I’ve also read that the keys are stored locally but also somehow stored in the cloud (??), which makes it all completely worthless if true.
That said, the three-letter agencies can probably get into any Android/Apple phone if they want to; I’m not forgetting the oh-so-convenient “bug” Heartbleed…
Which keys? Are they always stored or are they only stored under certain conditions? Are they encrypted as well? End to end encrypted?
It doesn’t, because what you described above could be fine or could have huge security ramifications. As it is, my guess is that you’re talking about how Signal supports secure value recovery. In that case:
The main criticism of this is that you can’t opt out of it without opting out of the Registration Lock, that it necessarily uses the same PIN or passphrase, and that, particularly because it isn’t clear that your PIN/passphrase is used for encryption, users are less likely to use more secure pass phrases here.
But even without the extra steps that we can’t 100% confirm, like the use of the Secure Enclave on servers and so on, this is e2ee, able to be opted out by the user, not able to be used to recover past messages, and not able to be used to decrypt future messages.
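As a rough sketch of that flow, assuming stdlib scrypt as a stand-in for the Argon2 stretching described in the blog post (the storage/attestation side isn’t modeled at all):

```python
# Rough sketch of "wrap a master key with a PIN-derived key" recovery.
# Assumptions: stdlib scrypt stands in for Argon2, AES-GCM for the wrapping,
# and the service/enclave side (guess limits, attestation) is not modeled.
import os, hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def stretch(pin: str, salt: bytes) -> bytes:
    # Key stretching: deliberately slow, so guessing PINs is expensive.
    return hashlib.scrypt(pin.encode(), salt=salt, n=2**14, r=8, p=1, dklen=32)

# Registration: the client generates a random master key and only ever
# uploads the wrapped (encrypted) form. Message history is never part of this.
master_key = os.urandom(32)
salt, nonce = os.urandom(16), os.urandom(12)
wrapped = AESGCM(stretch("correct horse battery staple", salt)).encrypt(nonce, master_key, None)

# Recovery on a new device: same PIN, same stretching, unwrap the master key.
recovered = AESGCM(stretch("correct horse battery staple", salt)).decrypt(nonce, wrapped, None)
assert recovered == master_key

# A wrong PIN fails authentication instead of yielding a key.
try:
    AESGCM(stretch("1234", salt)).decrypt(nonce, wrapped, None)
except Exception:
    print("wrong PIN: nothing recovered")
```

The parts we can’t independently confirm are on the service side (enclaves, guess limits), not in this client-side math.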
Nice try FBI.
Well, if my pin is four numbers, that’ll make it so hard to crack. /s
If you can’t show hard evidence that everything is offline locally, no keys stored in the cloud, then it’s just not secure.
BTW, “keys” when talking about encryption are the keys used to encrypt and decrypt. It wouldn’t be very interesting to encrypt them, because then you’d have another set of keys to deal with.
Wouldn’t “NSA” or “CIA” be more appropriate here?
If you’re using a 4 number PIN then that’s on you. The blog post I shared covers that explicitly: “However, there’s a limit to how slow things can get without affecting legitimate client performance, and some user-chosen passwords may be so weak that no feasible amount of “key-stretching” will prevent brute force attacks” and later, “However, it would allow an attacker with access to the service to run an “offline” brute force attack. Users with a BIP39 passphrase (as above) would be safe against such a brute force, but even with an expensive KDF like Argon2, users who prefer a more memorable passphrase might not be, depending on the amount of money the attacker wants to spend on the attack.”
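Back-of-the-envelope, with an assumed (not benchmarked) cost per guess, that difference looks like this:

```python
# Back-of-the-envelope numbers for the quoted point. The cost per guess is an
# assumption for illustration, not a benchmark of any real KDF deployment.
seconds_per_guess = 1.0            # pretend an expensive KDF costs ~1 s per attempt

pin_space = 10 ** 4                # every possible 4-digit PIN
bip39_space = 2048 ** 12           # 12 words drawn from the 2048-word BIP39 list

print(f"4-digit PIN:   {pin_space * seconds_per_guess / 3600:.1f} hours to exhaust")
print(f"12-word BIP39: {bip39_space * seconds_per_guess / (3600 * 24 * 365.25):.2e} years to exhaust")
```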
If you can’t share a reputable source backing up that claim, along with a definition of what “secure” means, then your claim that “it’s just not secure” isn’t worth the bits taken to store the text in your comment.
You haven’t even specified your threat model.
Are you being earnest here? First, even if we were just talking about encryption, the question of what’s being encrypted is relevant. Secondly, we weren’t just talking about encryption. Here’s your complete comment, for reference:
Just so you know, “keys” are used for a number of purposes in Signal (and for software applications in general) and not all of those purposes involve encryption. Many keys are used for verification/authentication.
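For example, a signing key and a key-agreement key do very different jobs; here’s a quick sketch using the Python cryptography package, nothing Signal-specific:

```python
# Quick illustration: some keys sign/verify, others establish encryption.
# Uses the `cryptography` package; nothing here is Signal-specific.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# A signing key proves who produced a message; it encrypts nothing.
identity = Ed25519PrivateKey.generate()
sig = identity.sign(b"some authenticated data")
identity.public_key().verify(sig, b"some authenticated data")   # raises if tampered with

# A key-agreement key derives a shared secret, which is what then encrypts traffic.
alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()
assert alice.exchange(bob.public_key()) == bob.exchange(alice.public_key())
```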
Assuming you were being earnest: I recommend that you take some courses on encryption and cybersecurity, because you have some clear misconceptions. Specifically, I recommend starting with Cryptography I (by Stanford, hosted on Coursera; see also Stanford’s page for the course, which links to the free textbook). Its follow-up, Crypto II, isn’t available on Coursera, but I believe this 8-hour-long YouTube video contains several of its lectures. Alternatively, Berkeley’s Zero Knowledge Proofs course would be a good follow-up, and basically everything (except the quizzes) appears to be freely available online.
The link I shared with you has 6 keys (stretched_key, auth_key, c1, c2, master_key, and application_key) in a single code block. By encrypting the master key (used to derive application keys, such as the one that encrypts social graph information) with a user-derived, stretched key, Signal can offer an optional feature: the ability to recover that encrypted information if the user’s device is lost, stolen, wiped, etc., though of course message history is out of scope.
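Reconstructed from that post’s pseudocode (so the exact labels and KDF parameters here are my assumptions, with stdlib scrypt standing in for Argon2), the chain looks roughly like this:

```python
# Sketch of that six-key chain, reconstructed from the linked post's pseudocode.
# The labels and KDF parameters here are assumptions; stdlib scrypt stands in
# for Argon2, and the recovery service itself is not modeled.
import os, hmac, hashlib

def hkey(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

pin_or_passphrase = b"correct horse battery staple"
stretched_key = hashlib.scrypt(pin_or_passphrase, salt=b"per-user salt here",
                               n=2**14, r=8, p=1, dklen=32)

auth_key = hkey(stretched_key, b"Auth Key")              # proves knowledge of the PIN to the service
c1       = hkey(stretched_key, b"Master Key Encryption") # client-derived half of the master-key input
c2       = os.urandom(32)                                # random half, backed up with the service

master_key      = hkey(c1, c2)
application_key = hkey(master_key, b"Social Graph Encryption")  # encrypts the backed-up data

# Without both the PIN (for c1) and the stored c2, master_key cannot be rebuilt,
# which is the "e2ee, optional, no message history" property argued for above.
```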
Full disk encryption also uses multiple keys in a similar way. Take LUKS, for example. Your drive is encrypted with a master key. You recover the master key by decrypting one of the key slots with a key derived from its corresponding passphrase. (Source: section 4.3 in the LUKS1 On-Disk Format Specification; I don’t believe this basic behavior changed in LUKS2.)
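A toy version of that idea, not the on-disk format, assuming AES-GCM wrapping just for illustration:

```python
# Toy version of the LUKS idea (not the on-disk format): one random master key
# encrypts the data, and each "keyslot" is that master key wrapped under a
# different passphrase-derived key. AES-GCM here is just for illustration.
import os, hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def kdf(passphrase: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000, dklen=32)

master_key = os.urandom(32)   # the key the bulk data is really encrypted with

keyslots = {}
for slot, phrase in enumerate(["laptop passphrase", "recovery passphrase"]):
    salt, nonce = os.urandom(16), os.urandom(12)
    keyslots[slot] = (salt, nonce, AESGCM(kdf(phrase, salt)).encrypt(nonce, master_key, None))

# Either passphrase unwraps the same master key; changing a passphrase only
# rewrites its slot, never the bulk data.
salt, nonce, blob = keyslots[1]
assert AESGCM(kdf("recovery passphrase", salt)).decrypt(nonce, blob, None) == master_key
```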
And it’s I who should take a course in encryption and cybersecurity.
ROFL
Good to see you have your study material at hand though, and yes cryptography is complicated but you’ll get the hang of it eventually.
Yes. I was trying to be nice, but you’re clearly completely ignorant and misinformed when it comes to information security. Given that you self-described as a “cryptography nerd,” it’s honestly embarrassing.
But since you’ve doubled down on being rude, just because I pointed out that you don’t know what you’re talking about, it’s unlikely you’ll ever learn enough about the topic to have a productive conversation, anyway.
Have fun protecting your ignorance.