In recent months, the audio-based social media app Clubhouse has become Silicon Valley’s latest disruptive favorite. The format feels familiar: part Twitter, part Facebook Live, part talking on the phone. But as Clubhouse continues to expand, its security and privacy flaws have come under increased scrutiny and have left the company struggling to correct issues and manage expectations.
Clubhouse, still in beta and available only on iOS, offers its users “rooms” that are essentially group audio chats. Rooms can also be configured as public addresses or discussion panels, where some users are “speakers” and the rest are the audience. The platform reportedly has more than 10 million users and is valued at $1 billion. Since last year it has been an exclusive haven for celebrities and the Silicon Valley elite, even hosting an appearance by Elon Musk earlier this month. But the company has struggled with both concrete security issues and broader questions about how much privacy its users should expect.
“With newer and smaller social media platforms, we need to be on guard with our data, especially when one is experiencing huge growth and testing many of its controls,” says security researcher Robert Potter. “Things you could have gotten away with with just 100,000 people on the platform – you increase those numbers tenfold and the level of exposure increases, the threat increases, the number of people probing your platform increases.”
Recent security concerns about Clubhouse range from specific vulnerabilities to questions about the app’s underlying infrastructure. Just over a week ago, researchers at the Stanford Internet Observatory put the platform in the spotlight when they found that the app was transmitting users’ Clubhouse IDs and chat room ID numbers in plaintext, meaning a third party could potentially have tracked your actions in the app. The researchers further noted that part of Clubhouse’s infrastructure is run by a Shanghai-based company, and that the app’s data appeared to travel through China at least some of the time, potentially exposing users to targeted or even widespread Chinese government surveillance. Then on Sunday, Bloomberg confirmed that a third-party website was scraping and compiling audio from Clubhouse discussions. Early Monday came further revelations that Clubhouse discussions were being relayed to an unaffiliated Android app, allowing users of that operating system to listen in real time.
Potter, one of the researchers who investigated the various Clubhouse data-grabbing projects, explains that these apps and websites did not appear malicious; they just wanted to make Clubhouse content available to more people. But the developers could only do so because Clubhouse had no anti-scraping mechanisms in place to stop them. Clubhouse didn’t limit the number of rooms a single account could stream from at once, for example, so anyone could use the application programming interface to stream every public channel at the same time.
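To illustrate the kind of server-side guard that was missing, here is a minimal sketch of a per-account limit on concurrent room streams. Everything here, from the class name to the limit of two rooms, is an illustrative assumption, not Clubhouse’s actual API or policy:

```python
# Hypothetical sketch: cap how many rooms a single account may stream
# from concurrently. A real service would enforce this server-side,
# alongside rate limits on API calls themselves.

class ConcurrentStreamLimiter:
    def __init__(self, max_streams_per_account: int = 2):
        self.max_streams = max_streams_per_account
        # account_id -> set of room_ids the account is currently streaming
        self.active: dict[str, set[str]] = {}

    def try_join(self, account_id: str, room_id: str) -> bool:
        rooms = self.active.setdefault(account_id, set())
        if room_id in rooms:
            return True          # already streaming this room
        if len(rooms) >= self.max_streams:
            return False         # reject: too many concurrent streams
        rooms.add(room_id)
        return True

    def leave(self, account_id: str, room_id: str) -> None:
        self.active.get(account_id, set()).discard(room_id)
```

With a guard like this, a single account trying to open hundreds of rooms at once to mirror them elsewhere would be refused after the second room.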
More mature social networks like Facebook have more developed mechanisms to lock down their data, both to prevent violations of user privacy and to defend the data they hold as an asset. But even they can have potential exposures from creative scraping techniques.
Clubhouse has also come under scrutiny for its aggressive collection of users’ contact lists. The app strongly encourages all users to share their address book data so Clubhouse can help them connect with people they know who are already on the platform. It also requires you to share your contact list to invite other people, as Clubhouse is still invite-only, lending it a sense of exclusivity and privacy. Numerous users have pointed out, though, that when you go to invite others, the app also makes suggestions based on how many Clubhouse users already have a given number from your contacts in their own address books. In other words, if you and your local friends use the same florist, doctor, or drug dealer, that person may well appear on your list of suggested invitees.
Clubhouse did not respond to a request from WIRED for comment by press time about its recent security stumbles. In a statement to the Stanford Internet Observatory researchers, Clubhouse detailed specific changes it planned to make to strengthen its security, including cutting off pings to servers in China and strengthening its encryption. The company also said it would work with a third-party data security firm to see the changes through. In response to the unauthorized website that was rebroadcasting Clubhouse discussions, the company told the media it had permanently banned the user behind it and would add additional “safeguards” to prevent the situation from recurring.
Although Clubhouse appears to be taking the researchers’ comments seriously, the company has not been specific about all the security enhancements it has implemented or plans to add. Also, since the app does not appear to offer end-to-end encryption to its users, the researchers say there is still a feeling that Clubhouse has not given adequate thought to its security posture. And that’s even before you deal with some of the fundamental privacy questions the app raises.
When you start a new Clubhouse room, you can choose among three settings: an “open” room is accessible to any user on the platform, a “social” room admits only the people you follow, and a “closed” room restricts access to invited guests. Each comes with its own implicit level of privacy, which Clubhouse could make more explicit.
“I think for public rooms, Clubhouse should give users the expectation that public means public to all users, as anyone can join in and record, take notes, etc.” says David Thiel, chief technology officer for the Stanford Internet Observatory. “For private rooms, they can convey that, as with any communication mechanism, an authorized member can record content and identities, so be sure to set expectations and trust the participants.”
Like any prominent social network, Clubhouse has also struggled to deal with abuse on the platform. Its terms of service have prohibited hate speech, racism, and harassment since November, and the platform offers some moderation features, such as the ability to block users or flag a room as potentially abusive. But one of Clubhouse’s biggest draws is also a problem for fighting abuse: people can use the platform without their contributions being automatically saved as posts. This may embolden some users to make abusive or derogatory remarks, thinking they will not be recorded and will face no consequences.
Stanford’s Thiel says Clubhouse currently stores recordings of discussions temporarily so it can review them in case of abuse claims. If the company were to implement end-to-end encryption for better security, however, it would have an even harder time policing abuse, because it could not make those recordings so easily. All social media platforms face some version of this tension, but security experts agree that, where relevant, the benefits of adding end-to-end encryption are worth the added challenge of developing more nuanced and creative anti-abuse solutions.
Even end-to-end encryption would not eliminate the additional possibility that any Clubhouse user could externally record the conversation they are in. That is not something Clubhouse can easily solve. But it can at least set expectations accordingly, no matter how friendly and off-the-record a conversation feels. “Clubhouse should be clear about what it will contribute to your privacy,” says Potter, “so you can decide what you’re going to talk about accordingly.”
This story originally appeared on wired.com.