One of the most basic tenets of cybersecurity is that you must “consider your threat model” when trying to keep your data and your communications safe, and then take appropriate steps to protect yourself.
This means you need to consider who you are, what you are talking about, and who may want to know that information (potential adversaries) for any given account, conversation, etc. The precautions you want to take to protect yourself if you are a random person messaging your partner about what you want to eat for dinner may be different from those you’d want to take if, hypothetically, you are the Secretary of Defense of the United States or a National Security Advisor talking to top administration officials about your plans for bombing an apartment building in Yemen.
Things you might consider when doing any sort of communication, if you are thinking about your threat model, would be “What messaging app should I use?”, “Is it end-to-end encrypted?”, “What device should I use to send the message?”, “Do I have two-factor authentication on?”, “What type of two-factor authentication is it (app-based? SMS-based? Hardware-based?)”, and, crucially, “How widely do I want to share this information?” End-to-end encryption means that a message is encrypted on the sender’s device before being sent and decrypted only at the “endpoint,” meaning the recipient’s device, so that no one in between, including the company running the service, should be able to read it.
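If the phrase “encrypted on the device, decrypted at the endpoint” is abstract, here is a toy sketch of the idea in Python. To be very clear: this is not real cryptography and not how Signal works (Signal uses the vetted Double Ratchet protocol); a simple XOR cipher is used here only to illustrate that whoever carries the message in the middle sees ciphertext, while only the endpoints holding the key can read it.

```python
# Toy illustration of the end-to-end encryption idea. NOT real cryptography:
# real messengers use vetted protocols (e.g., Signal's Double Ratchet). This
# XOR one-time-pad sketch only shows "encrypt on the sender's device,
# decrypt on the recipient's device, the middle sees gibberish."
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Sender's device: XOR each byte of the message with the shared key.
    return bytes(p ^ k for p, k in zip(plaintext, key))

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    # Recipient's device: XOR is its own inverse, so the same key recovers
    # the original message.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

message = b"What do you want for dinner?"
key = secrets.token_bytes(len(message))  # secret known only to the two endpoints

in_transit = encrypt(key, message)       # what the server or network sees
assert in_transit != message             # unreadable without the key
assert decrypt(key, in_transit) == message  # readable only at the endpoint
```

Note what the sketch cannot fix: if the sender hands the decrypted message (or a seat in the group chat) directly to the wrong person, the encryption never comes into play.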
This is all, of course, a very long way of saying that there is no messaging app that can protect you if you are wildly careless, or more generally an idiot. There is no threat modeling that can account for you sending information directly to someone who you do not want to have it, which is exactly what Pete Hegseth, national security advisor Michael Waltz, vice president JD Vance, director of national intelligence Tulsi Gabbard, and a host of other top administration officials did when texting about their plans to bomb a suspected terrorist’s girlfriend’s apartment building in Yemen. When doing threat modeling from here on out, it is unfortunately important to consider the question “Am I a moron?”
As Joseph has laid out here, there are design changes that Signal could make that would make it less likely for someone to accidentally message the wrong person or accidentally add them to the wrong group chat. At the moment, it can be difficult to verify who someone is after you’ve added them to your contacts, because Signal doesn’t force you to select a profile picture or set nicknames for contacts, and you can’t always see a person’s username or phone number after you’ve begun chatting with them on Signal.
THAT SAID, top officials in the executive branch should not be using Signal to communicate about military actions at all because the threat model for this sort of communication is so extraordinary and unique (and bound by retention laws) that they should be communicating on existing government channels designed for this exact purpose and which don’t have disappearing message functionality. And even if Signal’s UI could be slightly better or less confusing, if you are sharing bombing plans then you should probably take extra steps to make sure “We are currently clean on OPSEC” is actually true.
Since the first Atlantic story broke, people in my life have asked me if Signal is secure. Of the commercially available, widely used messaging apps, Signal has extremely good security. But using Signal on whatever device the officials happened to be using makes those devices a target, and sophisticated nation-state actors capable of hacking iPhones and other modern smartphones are definitely in Pete Hegseth’s and Michael Waltz’s threat model. The truth of the matter is that no phone, no app, no encryption can protect you from yourself if you send the information you’re trying to hide directly to someone you don’t want to have it.