Of all the tech companies that have benefitted from the massive shift to telecommuting that the global pandemic has forced, Zoom stands at the top. The company’s multi-platform videoconferencing software was well known before, being a frequently used, market-leading choice mentioned in the same breath as Adobe Connect, Cisco Webex, GoToMeeting, Microsoft Teams, and Skype. But now Zoom has become a verb among businesses, schools, and people making social connections.
That’s partly because of the scope of Zoom’s free tier, which allows up to 100 streaming video participants, and the way it has focused its service on scheduling or creating “meetings” that people can join with just a URL, either downloading a simple app on any platform or using an in-browser alternative. In “Videoconferencing Options in the Age of Pandemic” (2 April 2020), I examined all the major free options for videoconferencing, and Zoom stood out both among no-cost options and in a section in which I summarized paid services.
As the stock market has plummeted, Zoom’s share price has doubled, though that’s likely driven more by enthusiasm for the service than by the ultimate size of the paid videoconferencing market. Zoom said on 1 April 2020 that daily meeting participants increased from 10 million to 200 million between December 2019 and March 2020. The fact that Zoom was able to handle a 20-fold increase in usage in that time is impressive.
But any accounting of Zoom’s success must also acknowledge a host of problems. Any time we discuss Zoom, consider recommending its use, or think about its future, we have to look at a series of bad programming, security, marketing, and privacy decisions the company has made.
Let me put it bluntly: Zoom is sloppy. Evidence of this began to accumulate with a screw-up discovered in mid-2019 that exposed macOS users to a significant privacy risk: your video camera could have been activated by visiting a page that loaded a malicious link. The problematic disclosures have accelerated since January 2020 with a series of errors in judgment and programming flaws. Zoom may have a top-notch technical solution and user experience, but the company deserves to take its knocks for sloppy and negligent programming.
Zoom has also made poor privacy decisions, some of which have already been remediated, by positioning itself more like a marketing firm than a company that provides personal, academic, and business services over which we conduct private, confidential, or secret conversations.
Almost as bad, from my perspective, is that Zoom seemed unwilling to admit any failing, avoided apologizing, and didn’t provide a roadmap on how it will do better. Instead, it tended to fix problems while remaining on the defensive.
Zoom may face legal action over those statements and other matters. A class-action lawsuit based on California’s relatively new data protection law was filed on 30 March 2020 over the leak of data to Facebook described below. On the same day, New York State’s attorney general sent Zoom a letter asking for details on how it’s managing security risks given its history.
TidBITS contacted Zoom for its insights about how it has handled security and privacy issues, but the company didn’t reply. As I finished this article and in the few days that followed, however, Zoom publicly responded to disclosures of several new security problems. The first response, unlike most previous ones, was a blog post with an apology and a full explanation. A subsequent post laid out the company’s plans for how it will improve its software and its culture around security and privacy. It’s a glimmer of hope for the future. A third responded to a privacy group’s investigation into the company’s weak choices in encryption algorithms and in routing some meeting traffic through China for non-Chinese participants. The rapid response and general frankness were in stark contrast to earlier behavior.
In this article, I walk through the many software, security, and privacy issues Zoom has encountered and its response to each.
You may prefer to not use Zoom after reading this article. For my part, I continue to rely on it, sometimes daily. However, many people—perhaps tens of millions—have to use Zoom for school and work. Given that not using it isn’t an option for them, I want to offer advice on configuring it as safely as possible.
A Tiny macOS Web Server and Automatic Reinstallation
In mid-2019, security researcher Jonathan Leitschuh posted a lengthy report on Medium about security flaws and undesirable behavior by Zoom software for macOS:
- The Zoom client app installed a tiny Web server without disclosing this to users.
- This Web server bypassed a security improvement in Safari designed to require users to click Allow each time a URL with an application-based link was loaded or the user’s Web browser redirected to such a URL. Instead of prompting, the redirection was captured by Zoom’s Web server, which launched the Zoom app.
- Zoom’s tiny Web server lacked basic security features, so an attacker could direct you to a URL or load a URL within a Web page and, either way, trigger the Web server to join a Zoom meeting without any prompt. That could allow the attacker to hear your audio and see your video. This attack worked only if you had changed the default settings to start audio or video automatically upon joining a meeting, something many users did.
- If you removed the Zoom client app, Zoom’s Web server remained in place and continued to run. If you subsequently clicked a link to join a Zoom meeting, the Web server would quietly and automatically reinstall and launch the Zoom client.
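If you’re curious whether anything is still answering on the port the hidden Web server used, the check can be sketched in a few lines. (The port number, 19421, is the one cited in the public disclosure; treat this as an illustrative probe, not an official diagnostic.)

```python
import socket

def local_server_listening(port, host="127.0.0.1", timeout=0.5):
    """Return True if something accepts TCP connections on the given local port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused or timed out: nothing is listening there.
        return False

# The hidden Zoom web server reportedly listened on localhost port 19421:
if local_server_listening(19421):
    print("Something is listening on port 19421; worth investigating.")
else:
    print("Nothing is listening on port 19421.")
```

On a Mac cleaned up by Zoom’s removal or Apple’s malware update, the probe should find nothing listening.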
Leitschuh followed responsible security disclosure principles and alerted Zoom. It took him a few attempts to get the company to respond and more to get it to engage. He started an industry-standard 90-day countdown to his public disclosure well into that process. Zoom eventually agreed to make a few changes but disagreed with him about the severity of the vulnerability.
Leitschuh made his public disclosure on 8 July 2019. (See “Zoom and RingCentral Exploits Allow Remote Webcam Access,” 9 July 2019.) A media and consumer firestorm followed. Zoom tacked and decided to remove the Web server entirely from the install process. The following day, Apple took the unprecedented step of adding the Web server to its malicious software list distributed quietly and automatically to macOS, which uninstalled the Zoom Web server even if a user hadn’t installed an update.
Zoom’s response was inadequate. The company published a blog post and updated it several times as the public-relations debacle unfolded, but it never used any apologetic terms, and it included what the researcher claims was an inaccurate description of his interest in a bug bounty program, which companies run to offer fees for private submission of security flaws.
While Wired reported Leitschuh’s remark that Zoom CEO Eric Yuan had said he was sorry (“He came in and chatted with us and apologized and made a full about-face.”), Wired offered no confirmation of that in the story from Zoom, and the company has never made a public statement of regret.
Action needed by you: None. Not only did Zoom remove the unwanted Web server, Apple’s security update deleted it from Mac users who didn’t update their Zoom app.
A Failure To Anticipate Repeated Attempts To Find Open Meetings
Part of designing an Internet-connected service for security and safety is to think maliciously: how would someone try to attack or break into your system? All too often, programmers and system administrators don’t get into that mindset or are discouraged from implementing a solution—even though there are piles of blog posts, white papers, conference talks, and books about security best practices.
Zoom fell into two of the oldest traps in the book with design and implementation flaws that could—and did—lead to unwanted parties joining open public meetings. Every Zoom meeting has a meeting ID that’s 9 to 11 digits in length. (You can optionally set a password, a practice that Zoom is increasingly encouraging. By default, meetings created by some kinds of accounts default to having a password; see more tips at the end of this article.)
Zoom’s first flaw was to provide an insufficient addressable space. In other words, a 9-to-11 digit number is too small relative to the number of meetings conducted. While using 9 digits allows for 900 million possibilities and using 11 digits gives you 90 billion possibilities, Zoom may be generating tens of millions of meeting IDs every day. (Zoom doesn’t allow a leading zero, so all IDs start with 1 to 9, reducing the total for each length by 10%.) That offers the opportunity for “collisions,” in which someone could test out potential meeting IDs against those actually in use. (Zoom also creates a fixed 10-digit personal meeting ID for each account that can only be changed to another 10-digit ID at paid tiers.)
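To see why the size of the space matters, here’s a rough sketch of the math. The active-meeting count is a hypothetical figure chosen for illustration, not Zoom’s real number.

```python
# If IDs are drawn from a space of N possibilities and M of them are live
# at once, each random guess hits a real meeting with probability M/N.

def hit_probability(id_space, active_meetings):
    return active_meetings / id_space

nine_digit_space = 9 * 10**8      # 9-digit IDs, no leading zero
eleven_digit_space = 9 * 10**10   # 11-digit IDs, no leading zero
assumed_active = 10_000_000       # hypothetical: 10 million simultaneous meetings

print(f"9-digit space:  {hit_probability(nine_digit_space, assumed_active):.2%} per guess")
print(f"11-digit space: {hit_probability(eleven_digit_space, assumed_active):.4%} per guess")
```

Under those assumptions, better than 1 in 100 random 9-digit guesses lands on a live meeting, which is in the same ballpark as what the researchers described below actually observed.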
Security researchers at Check Point Research decided to probe this scenario for weaknesses. They wrote a script that generated random numbers in the range Zoom employs and tested them against Zoom’s site. They found that 4% of the randomly generated numbers matched actual meeting IDs!
Such a test shouldn’t have been possible, however, and that’s the second flaw. Zoom didn’t have a throttle on the server it uses to convert a meeting ID request into a Zoom meeting URL—and the server responded with an error for invalid IDs. Researchers could send thousands of URLs to Zoom’s server and quickly determine which were legitimate. If a malicious entity had done this, they could have then attempted to connect to valid meetings. (Throttles are not a new concept. Nearly 20 years ago, based on best-practice advice then, I built a throttle for my own Web sites that prevents large numbers of requests over short periods—I’m still using it today.)
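My own throttle isn’t public code, but the core idea is simple enough to sketch: track recent requests per client and reject anyone who exceeds a limit within a time window. This is a minimal illustration of the technique, not Zoom’s (or my) actual implementation.

```python
import time
from collections import defaultdict, deque

class Throttle:
    """Minimal sliding-window throttle: allow at most `limit` requests
    per `window` seconds from each client (keyed, say, by IP address)."""

    def __init__(self, limit=20, window=60.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # client -> timestamps of recent requests

    def allow(self, client, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[client]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the limit: reject, delay, or require a CAPTCHA
        q.append(now)
        return True
```

A server guarded this way would answer the first handful of meeting-ID probes normally, then start refusing, making a brute-force scan of thousands of IDs from one address impractical.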
Check Point used industry-standard disclosure principles as well and released its report on 28 January 2020; it had initially provided details to Zoom on 22 July 2019, just weeks after Zoom’s mistakes with the macOS app situation. Zoom was apparently more receptive to Check Point.
Zoom’s response was to block excessive requests to its meeting ID conversion URL and always return a Zoom meeting link instead of reporting whether the ID is valid or not. Only when you go through the overhead of connecting via a Zoom app or an in-browser connection can you determine if the meeting ID is legitimate.
However, on 2 April 2020, Trent Lo of SecKC, a group of folks who meet up for security talk in Kansas City, Missouri, sent details to Brian Krebs of Krebs on Security that the number-space flaw could still be exploited using “war dialing” methods reminiscent of dial-up modem days.
This required, in part, using a different IP address for every connection, subverting Zoom’s throttling approach. The tool the group developed had a whopping 14% success rate for finding public meetings. Krebs noted that the method used by the SecKC group also revealed “the date and time of the meeting; the name of the meeting organizer; and any information supplied by the meeting organizer about the topic of the meeting.”
The ultimate solution that Zoom will have to implement eventually is one I recall reading about in the mid-1990s, when it was already old hat: create a much larger addressable space. That’s typically done by mixing numbers and letters to increase the ID length substantially—to 20 characters, say—and reduce the odds of guessing a public meeting number to practically nil.
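For a sense of scale, here’s a sketch of what such an ID generator might look like. The 20-character length and lowercase-plus-digits alphabet are illustrative choices on my part, not anything Zoom has announced.

```python
import math
import secrets
import string

ALPHABET = string.ascii_lowercase + string.digits  # 36 characters

def make_meeting_id(length=20):
    """Generate a random alphanumeric meeting ID using a CSPRNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# 36^20 possibilities, roughly 10^31, vs. 9 * 10^10 for an 11-digit number.
bits = 20 * math.log2(36)
print(f"A 20-character alphanumeric ID carries about {bits:.0f} bits of entropy.")
print("Example:", make_meeting_id())
```

At over 100 bits of entropy, even a scanner testing billions of candidates per second would effectively never stumble on a live meeting.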
In the meantime, as of 4 April 2020, Zoom has changed the default behavior to require passwords for free accounts, upgraded education accounts, and its single-license (one paying host) account. The password settings cannot be disabled for these tiers and users, and most previously scheduled meetings have had passwords added to them.
This change effectively blocks “war dialing,” as even finding a valid meeting ID at random won’t allow a connection without the associated password.
Action needed by you: None. Remain vigilant when sharing Zoom URLs that embed the password, however, or when sharing a meeting ID and its associated password, as discussed in advice at the end of this article.
Leaking Behavior in iOS via Facebook’s Developer Kit
As Motherboard reported:

The Zoom app notifies Facebook when the user opens the app, [and provides] details on the user’s device such as the model, the time zone and city they are connecting from, which phone carrier they are using, and a unique advertiser identifier created by the user’s device which companies can use to target a user with advertisements.
Given that the leak affected only the iOS version of Zoom and not any of the company’s other supported client apps, this seemed like a careless implementation of Facebook support rather than an intentional violation of user privacy. Intentional or not, however, it was a violation of user privacy, and in some US states and some countries, such violations may be subject to a fine or other sanctions.
Zoom initially didn’t respond to Motherboard, which provided details days ahead of publishing its story. Then, on 27 March 2020, Zoom told Motherboard that sending analytic data to Facebook was an error, claiming that it was Facebook’s fault (“we were recently made aware that the Facebook SDK was collecting unnecessary device data”). Zoom updated its iOS app to remove the Facebook SDK entirely, instead forcing Zoom users who want to log in using their Facebook credentials to go through the browser-based dance used by other apps and Web sites.
Unlike previous instances, Zoom expressed regret via a statement this time, saying: “We sincerely apologize for this oversight, and remain firmly committed to the protection of our users’ data.”
Action needed by you: None. Zoom has updated its software to remove the Facebook connection.
With increased attention on Zoom, privacy and consumer advocates are focusing on the terms of service and promises the company makes about keeping its users’ audio and video sessions, text chats, and personal information private. In late March, the venerable magazine Consumer Reports, Internet thinker and Cluetrain Manifesto author Doc Searls, and others engaged the company in a full-court press about its stated policies.
Action needed by you: None. Zoom’s changes are effectively retroactive because the company claims it never used any of the data that its policy said it could.
A Zoom Host Knew When Your Attention Slipped
This wasn’t a security flaw, and it wasn’t primarily a privacy problem, but it’s worth noting in passing. Zoom optionally let the host of a meeting know if you had shifted your focus away from the Zoom app to another piece of software for more than 30 seconds.
As Motherboard explained, “attention tracking” put a small icon next to a participant’s name in the participant list to indicate that they had moved out of the app. The idea was to let the host know if people had stopped paying attention, but it’s rare to have a videoconference in which people don’t need to switch to other apps for legitimate reasons. The tracking may not have worked correctly with all of Zoom’s in-browser conference apps, either.
In some cases, participants in a meeting operate under rules that allow for some kinds of limited, appropriate monitoring, such as employees in a business meeting or students participating in a teacher-led class session. In others, it might be unwanted or inappropriate.
Regardless, in response to the negative feedback, Zoom removed this feature on 1 April 2020.
Action needed by you: None.
Misuse of macOS Preflight Installation Scripting
On 29 March 2020, a Hacker News forum member going by “mrpippy” posted his realization that Zoom’s macOS installer bypasses the familiar multi-step process of a standard installer. In a typical installation, you may be asked onto which disk or for which users to install an app, then asked to approve an end-user license agreement (EULA), and finally have to click Install.
Instead, he noticed, the installation happened quite early on, during what’s called the “preflight” process. That’s the stage at which an installer may check to see where and how it should install and require user prompts to proceed. (Apple breaks the installer system into multiple steps and allows developers to run scripts at each of those steps.)
In relation to this approach in a round-up of recent disclosures about Zoom’s security and privacy, Daring Fireball’s John Gruber wrote:
Again, that’s clearly not an oversight or honest mistake. Everyone knows what ‘preflight’ means. It’s a complete disregard for doing things properly and honestly on Zoom’s part. There’s no way to check what files will be installed and where before their installer has gone ahead and installed them.
It’s easy to argue that this kind of installer behavior is more akin to malware, even if you intended to install the software. It’s another instance of Zoom avoiding disclosure and bypassing user intent in the interest of ensuring its software is rapidly installed.
On 31 March 2020, Zoom’s CEO responded directly on Twitter to a technical researcher who had popularized the finding there, stating, “Your point is well taken and we will continue to improve.” On 2 April 2020, Zoom released an updated version that follows normal installation practices. The researcher told The Verge, “I must say that I am impressed.”
Action needed by you: None. The next version of Zoom you install will use Apple’s installer system in the appropriate fashion.
Confusion over Zoom’s Explanation of End-to-End Encryption
End-to-end encryption, sometimes abbreviated E2E, is one of the most powerful ways to protect your privacy. Prior to the spread of E2E, most encryption used a client/server approach with digital certificates, such as with HTTPS for a secure Web link.
Instead, E2E encryption secures data at each endpoint, which could be a device or a piece of software. The endpoints possess encryption keys that are typically generated locally. PGP, invented in 1991, allowed encrypted peer-to-peer communication using public-key cryptography. Your PGP software generated a key pair: one key was public and could be shared; the other key was private, and you had to keep it secret. Messages encrypted with the public key could be decrypted only with the private key. Public messages could also be signed with the private key so that a recipient could verify the message hadn’t been modified since it was created.
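The mechanics can be illustrated with a toy version of RSA, one of the algorithms behind early public-key software like PGP. The primes here are absurdly small and the scheme is “textbook” RSA, so this is purely a teaching sketch, never something to use for real secrecy.

```python
# Toy RSA: shows that what the public key locks, only the private key
# unlocks, and that "signing" is the same operation with the keys swapped.

def make_toy_keypair():
    p, q = 61, 53                  # tiny primes; real keys use 2048+ bit numbers
    n = p * q                      # 3233, the shared modulus
    e = 17                         # public exponent
    d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (modular inverse)
    return (e, n), (d, n)          # (public key, private key)

def crypt(value, key):
    exp, n = key
    return pow(value, exp, n)

public, private = make_toy_keypair()

message = 42                                  # a "message" small enough for the toy modulus
ciphertext = crypt(message, public)           # anyone can encrypt with the public key
assert crypt(ciphertext, private) == message  # only the private key decrypts

signature = crypt(message, private)           # signing: transform with the private key
assert crypt(signature, public) == message    # anyone can verify with the public key
```

Real systems wrap this core in padding schemes, hashing, and key sizes that make the math safe; the asymmetry between the two keys is the part that carries over.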
Public-key cryptography eventually became the basis of secure person-to-person and group-based communications. With PGP, you had to start by figuring out how to trust another person’s public key. By creating a centralized public-key infrastructure, a company manages that problem by distributing its own root of trust, a bit of global validation, as part of its software.
Skype pioneered this approach, having its client apps generate encryption keys that were stored only in the client. The company itself never needed to know (nor, in the design of the system, could know) what the keys were. Apple’s iMessage works similarly.
With this kind of E2E encryption, the company that maintains the public-key infrastructure (Microsoft or Apple, in this case) doesn’t know any of the encryption keys, but because it controls the root of trust, it could modify the system in such a way as to intercept or decrypt data.
You can take E2E encryption a big step further by generating encryption keys at endpoints such that the company running the central system can never access them. Apple relies on endpoint-controlled keys with iCloud Keychain, certain aspects of facial recognition details in Photos, and HomeKit Secure Video. The Signal communication app relies on the same approach for its messaging system.
In either E2E approach, without the endpoint encryption keys, an attacker could intercept all the data transmitted and never be able to decipher it. Nor could the company hand over data to any government authority. And the system is highly resistant, maybe impenetrable, because the keys are unrecoverable.
Zoom has marketed itself as offering end-to-end encryption. But on 31 March 2020, The Intercept reported that Zoom appeared to employ a simpler form of transport-layer security in which connections from meeting endpoints are encrypted to Zoom’s central servers, where the data is decrypted (but not stored) before being re-encrypted and transmitted to other participants via text, audio, or video.
A spokesperson from Zoom seemed to confirm this behavior by telling The Intercept in a statement, “Currently, it is not possible to enable E2E encryption for Zoom video meetings,” as well as, “When we use the phrase ‘End to End’ in our other literature, it is in reference to the connection being encrypted from Zoom end point to Zoom end point.”
However, on 1 April 2020, Zoom published an apologetic blog post in which Chief Product Officer Oded Gal explained that Zoom operates something closer to the first kind of E2E system (centralized company management of keys) than the second (endpoint-only possession of keys).
Zoom apparently does provide end-to-end encryption between participants using Zoom native and Web apps. The data passes across Zoom’s servers without decryption and re-encryption. However, in order to connect sessions to other kinds of services, Zoom operates “connectors” that will decrypt data in certain circumstances. For instance, if you enable the company’s cloud-based recording option, sessions have to be briefly decrypted within Zoom’s cloud system. Similarly, for someone to call into a Zoom meeting from a regular phone or stream a Zoom meeting through a conduit, the session has to be decrypted.
This means that Zoom—like Microsoft with Skype or Apple with iMessage and FaceTime—retains the potential ability to intercept and view session data or disclose it to parties outside a meeting and other than the host. Gal wrote in the blog post:
Zoom has never built a mechanism to decrypt live meetings for lawful intercept purposes, nor do we have means to insert our employees or others into meetings without being reflected in the participant list.
This winds up being a bit of having one’s cake and eating it, too, because Zoom wants to offer both E2E and client/server connections. If you never use any of Zoom’s connectors and trust the CEO’s statement, you get the benefits of E2E. But if you enable a connector, Zoom has to decrypt some part of the stream—audio for a phone call, audio and video for a cloud-based recording—which renders the approach much more vulnerable and less secure. For instance, an attacker could conceivably force cloud recording of sessions quietly and redirect the data stream with no need to break into the actual meeting. That’s simply infeasible with Skype, iMessage, or FaceTime without Microsoft or Apple rewriting its software. (None of these firms offers sufficient independent code auditing, however, so it’s impossible to ensure that my statement is absolutely true.)
On 3 April 2020, the nonprofit privacy and security research organization Citizen Lab released a report examining Zoom’s E2E technology and other implications (discussed below). Citizen Lab says Zoom uses a single shared key among all meeting participants, that the key is generated using a weak algorithm susceptible to cracking, and that keys are generated not by endpoints but by company-run servers.
Gal noted in the Zoom blog post that the company already offers a corporate-focused option that keeps all encryption within a company’s local control, and it plans to offer more such choices later in the year, though likely just to paid accounts of a minimum size. However, without more detail about key algorithm and generation, that isn’t entirely reassuring.
The Intercept suggests Zoom may have violated Federal Trade Commission regulations. While the FTC doesn’t enforce how a company manages data, it does have legal oversight over trade practices and can sue to enforce changes and levy penalties. If Zoom’s advertising and description of its encryption constituted unfair or deceptive trade practices, the FTC could opt to intervene.
Ashkan Soltani, the former FTC chief technologist and an avid investigator of security and privacy practices, told The Intercept:
If Zoom claimed they have end-to-end encryption, but didn’t actually invest the resources to implement it, and Google Hangouts didn’t make that claim and you chose Zoom, not only are you being harmed as consumer, but in fact, Hangouts is being harmed because Zoom is making claims about its product that are not true.
If we take Zoom’s explanation as accurate, calling its method E2E is not a distortion, though it now comes with several caveats that were not previously understood.
What’s potentially as useful to note here is that Zoom’s blog post begins with the statement, “we want to start by apologizing for the confusion we have caused by incorrectly suggesting that Zoom meetings were capable of using end-to-end encryption.” Plus, its author notes near the end, “We are committed to doing the right thing by users when it comes to both security and privacy, and understand the enormity of this moment.” That’s an evolution for Zoom.
On 3 April 2020, in a blog post, CEO Eric Yuan responded directly to the Citizen Lab report and promised improvements on encryption: “We recognize that we can do better with our encryption design.”
Action needed by you: None. But consider whether Zoom’s E2E encryption implementation matches the level of security you are looking for.
Assuming Everyone with the Same Domain Knew Each Other
On 1 April 2020, Motherboard reported that Zoom shares contact information among everyone whose email address has the same domain name, excluding major consumer hosting and email services like Gmail, Hotmail, and Yahoo. The company effectively assumes that all users of that domain work for the same company without validating in any fashion that the assumption is true. (In Slack, a vaguely similar feature allows a private workspace owner to allow anyone using an email in a given domain to be invited to the workspace and automatically added. But that’s for private groups, managed by the workspace owner, and is opt-in.)
Zoom does maintain an extensive “blocklist” of domains to exclude, but it’s still just a list. For any user at any domain not on that list, the Contacts tab in the Zoom client exposes the email address, full name, profile picture, and current status of all other registered users of that domain. It also allows anyone at the domain to place incoming one-to-one audio and video calls to that person.
Zoom’s response to Motherboard was to note that it had added ISP domains Motherboard inquired about, such as a few popular Dutch providers, but otherwise has left its policy the same.
Action needed by you: Zoom says people and organizations can use its Submit a Request page to request that a domain be added to its blocklist.
Improper Vetting of Link Conversion on Windows
Zoom automatically converts anything it thinks is a link into a hyperlink in a chat session. In Windows, that included file paths that, when clicked, opened remote SMB file-sharing sessions! If a user clicked such a link, their Windows system would send the user’s login name and password hash; the hash is encrypted, but with an outdated approach that attackers can often crack. A remote attacker could then access the machine.
A link could also be formatted to point to a DOS program under Windows. If clicked, Windows asks a user to confirm that they want to run the program.
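The underlying mistake is easy to reproduce: a linkifier whose pattern treats Windows UNC paths (the `\\server\share` form) as clickable links. This sketch is hypothetical, not Zoom’s code, but it contrasts the naive pattern with a safer alternative.

```python
import re

# Naive pattern: anything starting with \\ (a Windows UNC path such as
# \\attacker.example\share) is matched as a "link" alongside real URLs.
NAIVE_LINK = re.compile(r"(https?://\S+|\\\\\S+)")

# Safer pattern: only linkify http(s) URLs.
SAFE_LINK = re.compile(r"https?://\S+")

chat_line = r"See https://example.com and \\evil.example\share\file"

print(NAIVE_LINK.findall(chat_line))  # both the URL and the UNC path match
print(SAFE_LINK.findall(chat_line))   # only the URL matches
```

With the naive pattern, the UNC path becomes a clickable element, and clicking it is what triggered Windows to open the SMB session and leak credentials.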
Zoom fixed this vulnerability on 1 April 2020.
Action needed by you: None.
Harvesting of Participant Information via a LinkedIn Conduit
On 2 April 2020, the New York Times reported that participants in a Zoom meeting could have substantial personal information made available to any other participant who had signed up for LinkedIn Sales Navigator, a tool designed to help marketers find new prospects. Without any disclosure, every Zoom participant’s name and email address (if available) was matched against LinkedIn’s database and, if they had a profile, connected to it.
This could be true even if the user didn’t provide their correct name when connecting to a particular meeting, as Zoom relied on the user’s account profile if logged in. Any participant who subscribed to the LinkedIn feature could merely hover over a participant’s name to see their LinkedIn profile card.
Zoom permanently removed the feature on 1 April 2020 after the New York Times contacted the company to ask about it.
Action needed by you: None.
Use of Vulnerable Mac Frameworks Leads to Zero-Day Local Exploits
On 30 March 2020, noted Mac and iOS security researcher Patrick Wardle posted a lengthy entry on his Objective-See blog about two “zero-day” bugs that leave Zoom’s Mac users vulnerable to exploits by someone who can gain access to their computer (which seems less likely in these stay-at-home times). These exploits can’t be invoked remotely unless a malicious party embeds the exploit in software they convince someone to download and install, such as a Trojan horse or malware disguised as something useful.
Wardle apparently didn’t provide advance disclosure to Zoom, hence the “zero-day” term, which means the vulnerability remains exploitable at the time it is revealed. Since these bugs are local-only problems that can be exploited only during a Zoom app installation or update, the likelihood of an attacker taking advantage of them is low.
One bug could let an attacker replace a script that’s part of Zoom with software of their choosing that would be installed with the highest privileges. The other could let a ne’er-do-well access a Mac’s microphone and camera without the knowledge or permission of the user.
Zoom fixed these issues and released a new version of the client on 1 April 2020.
Action needed by you: Install the latest version of Zoom for macOS by choosing Check for Updates from the zoom.us menu.
Zoom Chat Transcripts Export a Host’s Private Messages
When you host a meeting with a paid account, you can opt to save a recording of the meeting. That can include a text transcript of public chat messages sent among participants, and all participants gain access to that transcript as well as the video recording.
But it will also include all private messages between the host and other participants. A professor posted this discovery on Twitter, and Forbes confirmed it with Zoom. Private messages among other participants aren’t included.
This is a bad design choice, because private messages should, by their nature, remain private. Zoom explains in its documentation for saving in-meeting chats:
You can automatically or manually save in-meeting chat to your computer or the Zoom Cloud. If you save the chat locally to your computer, it will save any chats that you can see: those sent directly to you and those sent to everyone in the meeting or webinar. If you save the chat to the cloud, it will only save chats that were sent to everyone and messages sent while you were cloud recording.
It’s all too easy to end up with something embarrassing in a chat that could be saved or accidentally shared later.
Action needed by you: If you’re a host, either don’t engage in a private chat that could be problematic or have the discussion in a separate secure app, such as Messages.
Chinese Ownership or Involvement
In the previously noted Citizen Lab report that examines poor encryption choices, the organization also described how it tracked down both Chinese companies associated with Zoom and the generation of encryption keys by servers located in China even when all participants were outside that country.
Chinese ownership or involvement by itself isn’t necessarily problematic in business in general. Where encryption and communications are involved, however, the concern is straightforward. As the report explains, the Chinese government reserves the right under local law to compel companies to provide authorities access to otherwise encrypted sessions: “Zoom may be legally obligated to disclose these keys to authorities in China.”
For sessions originating in or taking place entirely in China, Zoom may be legally required by Chinese authorities to carry all traffic across Chinese servers.
The Citizen Lab report concludes with a laundry list of people and organizations that it says it discourages from using Zoom where “use cases…require strong privacy and confidentiality.” The list includes activists, healthcare providers, businesses fearing industrial espionage, and governments “worried about espionage.”
These recommendations may seem overblown, but China has an extensive and well-documented history of obtaining private information from individuals, businesses, and governments.
On 3 April 2020, following the Citizen Lab report, Zoom’s CEO wrote in a blog post that including Chinese servers for meetings that didn’t involve participants in China was an error brought on by the company’s efforts to scale up capacity massively.
Action needed by you: Avoid Zoom if you’re in a sensitive category until Zoom’s server changes are verified and more information becomes available about its use of companies in China.
It’s Easy To Find Recordings of Zoom Meetings by Searching the Web
On 3 April 2020, the Washington Post reported that it was trivial to find video recordings made in Zoom by searching on the common file-naming pattern that Zoom applies automatically. The Post writes:
Videos viewed by The Post included one-on-one therapy sessions; a training orientation for workers doing telehealth calls, which included people’s names and phone numbers; small-business meetings, which included private company financial statements; and elementary-school classes, in which children’s faces, voices and personal details were exposed.
This scenario combines the power of Web search engines, technical choices Amazon made with its cloud-storage system, the misunderstanding by users that obscurity equals online privacy, and a failure by Zoom to consider unintended consequences:
- Web search engines: Google and other Web search engines make it simple to find anything whose name or contents match a pattern. That’s why you can also easily find hundreds of thousands of unprotected company and home security cameras through straightforward Google searches: many cameras have identically named administration pages.
- Amazon choices: Amazon’s Simple Storage Service (S3) allows public “buckets,” its term for uniquely named storage repositories. Public buckets can be used for software distribution and making datasets available to researchers. People often accidentally configure a bucket as public, or intentionally do so without realizing that the contents could wind up indexed by a search engine if they are referenced through a link by anyone, anywhere on the public Internet. Many Zoom videos were found in public Amazon buckets.
- Privacy through obscurity: Users see an obscurely named file, or think that no one would pay attention to their account, and conclude that there’s no harm in posting. Many people also simply don’t understand the privacy implications of posting videos online. The Post found many Zoom recordings on YouTube and Vimeo, even though they contained private or privileged information or scenes, or included minors in classes. (Schools must typically obtain explicit permission to post photos or videos of children in any form.)
- Unintended consequences: For ease of use, Zoom chose to name each saved video recording with a standard pattern. That’s typical behavior for photo and video capture devices, which often name in a sequential pattern. But for files that might wind up stored on publicly accessible sites, it’s a bad idea.
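To make the S3 piece of this concrete, here’s a minimal sketch of how a bucket’s access control list can be checked for a public-read grant. The ACL structure below mirrors the shape returned by AWS tools such as boto3’s get_bucket_acl, but the specific grants shown are hypothetical examples, not a real bucket’s configuration:

```python
# Sketch: detect a public-read grant in an S3-style bucket ACL.
# The dict layout mirrors boto3's get_bucket_acl() response; the
# grants below are made-up examples for illustration.

PUBLIC_GROUP_URIS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def is_publicly_readable(acl: dict) -> bool:
    """Return True if any grant gives READ (or FULL_CONTROL) to a public group."""
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        is_public_group = (grantee.get("Type") == "Group"
                           and grantee.get("URI") in PUBLIC_GROUP_URIS)
        if is_public_group and grant.get("Permission") in ("READ", "FULL_CONTROL"):
            return True
    return False

# Example ACL like one behind an accidentally exposed bucket:
acl = {
    "Grants": [
        {"Grantee": {"Type": "CanonicalUser", "ID": "owner"},
         "Permission": "FULL_CONTROL"},
        {"Grantee": {"Type": "Group",
                     "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
         "Permission": "READ"},
    ]
}
print(is_publicly_readable(acl))  # True
```

A bucket that passes this check is readable by anyone on the Internet, which is exactly what lets a search engine index the recordings stored in it.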
Zoom could mitigate this problem by revising its naming algorithm to avoid patterns that could be found in an online search, or it could simply require that whoever is doing the recording enter their own name at the start or finish.
Actions needed by you: You may need to do several things:
- Confirm that you haven’t uploaded any private video recordings to places that are publicly accessible.
- Contact hosts of sessions in which you’ve participated to alert them and have them check their upload locations.
- In the future, after recording a meeting, rename the file immediately to avoid the potential of a searchable link.
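The renaming step above is easy to automate. The sketch below scans a folder for recordings whose names follow a guessable default pattern and gives each one an unguessable replacement; the “zoom_*.mp4” pattern is illustrative, not Zoom’s exact naming scheme:

```python
# Sketch: rename recordings whose filenames follow a guessable default
# pattern. The "zoom_<number>.mp4" pattern here is an illustrative
# stand-in for whatever scheme your recording tool uses.
import re
import secrets
from pathlib import Path

DEFAULT_PATTERN = re.compile(r"^zoom_\d+\.mp4$", re.IGNORECASE)

def unguessable_name(original: Path) -> Path:
    """Keep the extension but replace the name with a random token."""
    token = secrets.token_hex(8)  # 16 random hex characters
    return original.with_name(f"meeting-{token}{original.suffix}")

def rename_default_recordings(folder: Path) -> list[tuple[Path, Path]]:
    """Rename every matching file in folder; return (old, new) pairs."""
    renamed = []
    for path in list(folder.iterdir()):  # snapshot before renaming
        if path.is_file() and DEFAULT_PATTERN.match(path.name):
            target = unguessable_name(path)
            path.rename(target)
            renamed.append((path, target))
    return renamed
```

A file renamed this way can still leak if uploaded to a public location, but it at least can’t be discovered by searching for the default naming pattern.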
Basic Choices Help Reduce Zoom Trolling
Publicly available Zoom sessions aren’t a security risk as such, so Zoom isn’t precisely to be blamed for the massive rise in “zoombombing”—a fresh word for the pandemic era. Zoom provides a lot of power under the hood, even on its free tier, and with great power comes…a lot of nonsense by people whose goal in life seems to be to revel in others’ grief.
As of 4 April 2020, Zoom has implemented several changes that are intended to reduce the potential of unwanted or harassing intruders:
- Passwords are now turned on for all meetings for free-tier accounts and for single-license (one host) paid accounts, including upgraded education accounts.
- Passwords are now mandatory for any meeting created by those accounts and cannot be disabled.
- No matter how a participant joins or what kind of meeting it is, a password will be required. That takes care of “ad hoc” instant meetings that aren’t scheduled, scheduled meetings, and participants who join by a dial-in phone number.
- Most kinds of meetings already scheduled or created by 4 April 2020 will have a password applied retroactively, and you may need to redistribute the URL or distribute the password.
These changes mean that someone guessing Zoom meeting IDs or finding a meeting link without a password will be unable to join.
Meeting URLs can embed the password. If you want to share a URL without its password, copy only the portion before ?pwd=. (You can also disable password embedding in Personal > Settings under “Embed password in meeting link for one-click join.”)
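Trimming the password out of a link by hand is error-prone, so here’s a minimal sketch of doing it with Python’s standard URL tools; the meeting URL is a made-up example, not a real meeting:

```python
# Sketch: strip the embedded ?pwd= parameter from a meeting link so
# the URL and password can be shared through separate channels.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def strip_password(url: str) -> str:
    """Return the URL with any pwd query parameter removed."""
    parts = urlsplit(url)
    # Keep every query parameter except pwd.
    query = [(k, v) for k, v in parse_qsl(parts.query) if k.lower() != "pwd"]
    return urlunsplit(parts._replace(query=urlencode(query)))

link = "https://zoom.us/j/1234567890?pwd=abcDEFghi123"  # made-up example
print(strip_password(link))  # https://zoom.us/j/1234567890
```

Filtering the parsed query parameters, rather than chopping the string at ?pwd=, preserves any other parameters the link might carry.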
I discourage sharing the full URL or the password on any forum, social network, or Web page that random garbage people could be scanning manually or via automatic scraping tools. However, if it’s necessary for the kind of meeting you run, you can take a handful of steps to reduce the chance that an intruder will join your Zoom meeting using that link. First, when publicizing your meeting:
- Discourage invited participants from sharing the URL with others who aren’t part of your group, organization, or movement.
- Tell people that the URL is coming and then post it quite close to the event start time, such as 30 minutes before.
- Don’t enable in-browser clients. Zoom has somewhat reduced-functionality clients that work entirely in a Web browser without requiring an app download. However, reports indicate these are more easily subverted by malicious people to allow them to re-join meetings after being kicked out. The option to allow people to use Web apps is off by default. Check that you’ve kept it that way in Personal > Settings, where it’s listed as “Show a ‘Join from your browser’ link.”
Then, in the setup for a public Zoom meeting, configure it this way:
- Select Generate Automatically for the Meeting ID. You don’t want to share your Personal Meeting ID for a public meeting.
- Set Participant Video to Off. People can enable their video once they’re in the meeting.
- Uncheck “Enable join before host” to prevent anyone from starting the meeting before you.
- Check “Mute participants on entry.” Again, people can turn their microphones on once they’re in.
- Ensure Enable Waiting Room is turned on. This option lets you screen users before they can enter the meeting and chat, speak, or share a screen or their video. You can admit everyone in a waiting room at once, so you can scan through a list of people and, if they’re all acceptable, click a single button. On 4 April 2020, Zoom made this option the default for all accounts and meetings, but you can disable it per meeting or turn off the default setting in your account. For small meetings, or a meeting whose link may have been distributed widely, this makes sense; for larger meetings, it’s a far more difficult option to manage.
Zoom has a blog post with extensive additional advice for setup and managing a meeting to deter crashers.
Stay Tuned to This Channel
As detailed as this article is, I fear that this list of problems and choices will be far from the last we hear about Zoom’s security and privacy troubles. In fact, while writing and editing this article over the last 48 hours, we had to add six additional exploits, design-choice errors, and privacy concerns.
Zoom has accumulated what’s known as “technical debt”: its developers made a lot of expedient decisions in the past that are now difficult and costly to fix. The longer it takes Zoom to address the core problems, the harder and more costly future fixes will be, as additional code is built upon that weak foundation.
It’s like when a city defers infrastructure maintenance year after year. You can only paint a bridge for so long to pretend it’s fresh and up to date. Eventually, its roadbed will start to crumble, and girders will rust through, and all that deferred maintenance will come home to roost in a lengthy rebuilding project. (I live in Seattle; I know this quite well.)
In the first draft of this article, before Zoom’s announcements late on 1 April 2020, I wrote:
Zoom is in an incredible position to fix the past. It should launch a crash program, if it hasn’t already, to beat on its own products and find the next five or 500 weaknesses and prioritize fixing them before security researchers and—more importantly—malicious parties get there first.
Did I predict the future, or just anticipate its necessity? I wondered as much when I read Zoom CEO Eric Yuan’s words:
Over the next 90 days, we are committed to dedicating the resources needed to better identify, address, and fix issues proactively. We are also committed to being transparent throughout this process. We want to do what it takes to maintain your trust.
The company is freezing all its features to focus on trust, safety, and privacy. It’s consulting third parties, users, and corporate information security officers. Zoom is one of the most important companies in the world right now, whether it deserves to be or not. It needs to step up to that responsibility, and we can hope Yuan’s words mean that Zoom has now accepted its role.
I’m writing a book for Take Control Books about Zoom and would welcome your tips and input in the comments. I would also encourage you to download a free copy of Take Control of Working from Home Temporarily, a book I wrote to help people with the sudden adjustment in their working lives. It contains a number of videoconferencing tips, among many others provided by Take Control authors, TidBITS editors and contributors, and others who donated their experiences and insights.