Tor is free software and an open network that helps you defend against traffic analysis, a form of network surveillance that threatens personal freedom and privacy, confidential business activities and relationships, and state security.
Welcome to the forty-sixth issue in 2014 of Tor Weekly News, the weekly newsletter that covers what’s happening in the Tor community.
Tor Browser 4.5-alpha-1 is out
Mike Perry announced the first alpha release in the Tor Browser 4.5 series. This version goes some way to restoring one of the features most missed by users following the removal of the now-defunct Vidalia interface from Tor Browser — the ability to quickly visualize the Tor circuit that the current page is using. Clicking on the green Torbutton icon in the Tor Browser window now brings up a small diagram showing the IP addresses of all relays in a circuit, and the states in which they are located; this may help users evaluate the suitability of the circuits their Tor has selected, and also to quickly identify a malicious exit relay if they notice unusual behavior in downloaded pages and files.
Another key user-facing innovation in this release is the “security slider”. Users can now choose from four security settings in Torbutton’s “Preferences” window — “low (default)”, “medium-low”, “medium-high”, and “high” — that allow them to customize their Tor Browser based on their own security and usability needs, while still working to prevent “partitioning” attacks, which try to identify users based on their unusual browser configuration.
For other important additions in this series, please see the full changelog in Mike’s post. If you want to try out this alpha version, you can find it on the Tor Browser project page or in the distribution directory; please report any bugs you find!
Tor Browser on 32-bit Macs approaches end-of-life
Now that Apple has discontinued support for the last remaining 32-bit Mac systems, Mike Perry announced that the Tor Browser team will soon stop distributing 32-bit builds of its software. This week’s 4.5-alpha-1, like all future releases in the 4.5 series, is only available in a 64-bit build, and all support for 32-bit systems will end once 4.5 supersedes 4.0.
“32-bit Mac users likely have a month or two to decide what to do”, wrote Mike. “If your actual Mac hardware is 64-bit capable, you can upgrade to either the 64-bit edition of OSX 10.6 (which we will continue to support for a bit longer), or use the app store to upgrade to 10.9 or 10.10. If your hardware is not 64-bit capable and won’t run these newer Mac operating systems, you should still be able to use Tails, which contains the Tor Browser.”
As a side effect of this transition, Tor Browser 4.0’s experimental in-browser secure updater will not handle the upgrade to the 64-bit build correctly for any Mac user; the old version must instead be replaced manually with the new one.
Roger Dingledine and Sambuddho Chakravarty responded on the Tor blog to inaccurate reports of a new attack against Tor, based on a recent study co-authored by Sambuddho. “It’s great to see more research on traffic correlation attacks, especially on attacks that don’t need to see the whole flow on each side. But it’s also important to realize that traffic correlation attacks are not a new area”, wrote Roger.
The Tails team set out the December release schedule for version 1.2.1 of the anonymous live operating system.
Giovanni Pellerano announced version 3.1.30 of Tor2web, which now supports web access to Tor hidden services over TLS. Access to the Facebook hidden service, the most high-profile instance of an HTTPS-enabled .onion site, is blocked in this version, as Tor2web offers no benefit in cases where there exists an identical service on the regular or “naked” web, and may actually present additional risk of compromise.
Griffin Boyce requested feedback on a “very rough” version of Stormy, the simple hidden service setup wizard. “I’d love to get feedback on places where it breaks and where it could use a major structural change […] the current setup is entirely for development and should not be used as-is.”
Virgil Griffith started a discussion on the suitability of the name “hidden services” as opposed to other possible terms like “onion service” or “onion site”. Among the many responses, Roger Dingledine suggested that an alternative name like “onion service” “makes people have to learn what it is rather than guessing (and often guessing wrong)”, while Nathan Freitas pointed out that as “typical users don’t talk about web services, they talk about web sites or pages”, “onion site” might be a term worth adopting.
Tom Ritter put forward a number of improvements to the integration of HTTPS certificates and hidden services, following “a spirited debate on IRC”.
The Wikimedia Foundation is the latest high-profile organization to set up a non-exit Tor relay. “It’s just a small contribution to the network. Really — anyone can do it.”
This issue of Tor Weekly News has been assembled by Harmony and Lunar.
Want to continue reading TWN? Please help us create this newsletter. We still need more volunteers to watch the Tor community and report important news. Please see the project page, write down your name and subscribe to the team mailing list if you want to get involved!
This release features a circuit status reporting UI (visible on the green Tor onion button menu), as well as isolation for circuit use. All content elements for a website will use a single circuit, and different websites should use different circuits, even when viewed at the same time. The Security Slider is also present in this release, and can be configured from the green Tor onion's Preferences menu, under the Privacy and Security settings tab. It also features HTTPS certificate pinning for selected sites (including our updater), which was backported from Firefox 32.
This release also features a rewrite of the obfs3 pluggable transport, and the introduction of the new obfs4 transport. Please test these transports and report any issues!
Note to Mac users: As part of our planned end-of-life for 32 bit Mac support, the Mac edition of this release is 64 bit only, which also means that the updater will not work for Mac users on the alpha release channel for this release. Once you have transitioned to this 64 bit release, the updater should function correctly.
Here is the complete changelog since 4.0.1:
- All Platforms
  - Bug 3455: Patch Firefox SOCKS and proxy filters to allow user+pass isolation
  - Bug 11955: Backport HTTPS Certificate Pinning patches from Firefox 32
  - Bug 13684: Backport Mozilla bug #1066190 (pinning issue fixed in Firefox 33)
  - Bug 13019: Make JS engine use English locale if a pref is set by Torbutton
  - Bug 13301: Prevent extensions incompatibility error after upgrades
  - Bug 13460: Fix MSVC compilation issue
  - Bug 13504: Remove stale bridges from default bridge set
  - Bug 13742: Fix domain isolation for content cache and disk-enabled browsing mode
  - Update Tor to 0.2.6.1-alpha
  - Update NoScript to 184.108.40.206
  - Bug 13586: Make meek use TLS session tickets (to look like stock Firefox).
  - Bug 12903: Include obfs4proxy pluggable transport
  - Update Torbutton to 220.127.116.11
    - Bug 9387: Provide a "Security Slider" for vulnerability surface reduction
    - Bug 13019: Synchronize locale spoofing pref with our Firefox patch
    - Bug 3455: Use SOCKS user+pass to isolate all requests from the same url domain
    - Bug 8641: Create browser UI to indicate current tab's Tor circuit IPs
    - Bug 13651: Prevent circuit-status related UI hang.
    - Bug 13666: Various circuit status UI fixes
    - Bug 13742+13751: Remove cache isolation code in favor of direct C++ patch
    - Bug 13746: Properly update third party isolation pref if disabled from UI
- Windows
  - Bug 13443: Re-enable DirectShow; fix crash with mingw patch.
  - Bug 13558: Fix crash on Windows XP during download folder changing
  - Bug 13091: Make app name "Tor Browser" instead of "Tor"
  - Bug 13594: Fix update failure for Windows XP users
- Mac OS
  - Bug 10138: Switch to 64bit builds for MacOS
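The user+pass isolation work (bug 3455 above) builds on a Tor feature: with IsolateSOCKSAuth behavior, streams that present different SOCKS5 credentials are placed on separate circuits, so a client that keys the credentials by the first-party (URL-bar) domain gets one circuit per site. The sketch below illustrates the keying idea only; it is a hypothetical scheme, not Torbutton's actual credential format.

```python
from urllib.parse import urlsplit

def socks_isolation_credentials(first_party_url):
    """Derive a SOCKS5 username/password pair from the first-party domain.

    With Tor's IsolateSOCKSAuth behavior, streams presenting different
    credentials land on different circuits, so keying credentials by the
    URL-bar domain gives each site its own circuit.
    (Hypothetical scheme, for illustration only.)
    """
    host = urlsplit(first_party_url).hostname
    return (host or "--unknown--", "0")

# Two resources on the same page share credentials (and thus a circuit);
# a different site gets different credentials (and thus another circuit).
a = socks_isolation_credentials("https://example.com/page")
b = socks_isolation_credentials("https://example.com/other")
c = socks_isolation_credentials("https://example.org/")
print(a == b, a == c)  # True False
```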
We are planning to discontinue support for 32 bit Macs for Tor Browser.
We're doing this for two main reasons. First, Apple itself no longer supports 32 bit Macs. The only remaining 32 bit Mac users are on OSX 10.6, which Apple ended support for in February of this year. Second, 64 bit software has improved security properties by way of improved address space layout randomization (ASLR), which makes exploitation more difficult.
This transition will happen in phases: First, the upcoming 4.5-alpha series will only be available as 64 bit builds. We will continue releasing 32 bit versions of the 4.0 series until the 4.5 series is declared stable. When the 4.5 release stabilizes, all support for 32 bit Macs will end. The 4.5 release series will be stable "when it's ready", but we know we need to perform at least two more releases in this series to properly exercise the new security features in the updater. This means that 32 bit Mac users likely have a month or two to decide what to do.
Current 32 bit Mac users have a few options. If your actual Mac hardware is 64 bit capable, you can upgrade to either the 64 bit edition of OSX 10.6 (which we will continue to support for a bit longer), or use the app store to upgrade to 10.9 or 10.10.
If your hardware is not 64 bit capable and won't run these newer Mac operating systems, you should still be able to use Tails, which contains the Tor Browser. You can run Tails in a virtual machine such as VirtualBox, or you can manually install it to a USB disk from the Mac OS Terminal. If you install Tails to a USB stick, you will need to reboot your Mac in order to use it.
All Mac Users: Manual Update Required
The transition from 32 bit Mac bundles to 64 bit Mac bundles means that our (currently experimental) in-browser updater will not transition any Mac users to the 64bit version automatically, even if your Mac is already running a 64 bit version of Mac OS. You must perform this update manually, as you had to do prior to the 4.0 series. This means downloading the DMG and dragging the new 64 bit Tor Browser over your old Tor Browser, and replacing the application.
Once you are running the 64 bit version, updates should function correctly again.
People are starting to ask us about a recent tech report from Sambuddho's group about how an attacker with access to many routers around the Internet could gather the netflow logs from these routers and match up Tor flows. It's great to see more research on traffic correlation attacks, especially on attacks that don't need to see the whole flow on each side. But it's also important to realize that traffic correlation attacks are not a new area.
This blog post aims to give you some background to get you up to speed on the topic.
First, you should read the first few paragraphs of the One cell is enough to break Tor's anonymity analysis:
First, remember the basics of how Tor provides anonymity. Tor clients route their traffic over several (usually three) relays, with the goal that no single relay gets to learn both where the user is (call her Alice) and what site she's reaching (call it Bob).
The Tor design doesn't try to protect against an attacker who can see or measure both traffic going into the Tor network and also traffic coming out of the Tor network. That's because if you can see both flows, some simple statistics let you decide whether they match up.
Because we aim to let people browse the web, we can't afford the extra overhead and hours of additional delay that are used in high-latency mix networks like Mixmaster or Mixminion to slow this attack. That's why Tor's security is all about trying to decrease the chances that an adversary will end up in the right positions to see the traffic flows.
The way we generally explain it is that Tor tries to protect against traffic analysis, where an attacker tries to learn whom to investigate, but Tor can't protect against traffic confirmation (also known as end-to-end correlation), where an attacker tries to confirm a hypothesis by monitoring the right locations in the network and then doing the math.
And the math is really effective. There are simple packet counting attacks (Passive Attack Analysis for Connection-Based Anonymity Systems) and moving window averages (Timing Attacks in Low-Latency Mix-Based Systems), but the more recent stuff is downright scary, like Steven Murdoch's PET 2007 paper about achieving high confidence in a correlation attack despite seeing only 1 in 2000 packets on each side (Sampled Traffic Analysis by Internet-Exchange-Level Adversaries).
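To make the window-based idea concrete, here is a toy sketch (not the method of any of the papers cited above): bucket the packet timestamps observed on each side into fixed-size time windows, then compute an ordinary correlation coefficient between the two count vectors.

```python
from math import sqrt

def window_counts(timestamps, window=1.0, duration=30.0):
    """Bucket packet timestamps (in seconds) into fixed-size time windows."""
    counts = [0] * int(duration / window)
    for t in timestamps:
        if 0 <= t < duration:
            counts[int(t / window)] += 1
    return counts

def correlation(xs, ys):
    """Plain Pearson correlation between two equal-length count vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

# A bursty flow observed entering the network...
entry = [0.1, 0.2, 0.3, 5.0, 5.1, 5.2, 5.3, 12.0, 12.1, 25.0, 25.1, 25.2]
# ...the same flow leaving ~0.4s later, and an unrelated constant-rate flow.
exit_match = [t + 0.4 for t in entry]
exit_other = [i + 0.5 for i in range(30)]

a = window_counts(entry)
print(correlation(a, window_counts(exit_match)))  # close to 1.0: same bursts
print(correlation(a, window_counts(exit_other)))  # 0.0: no shared structure
```

Real attacks use far more robust statistics than this, but the core observation, that matching burst patterns on both sides stand out, really is that simple.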
Second, there's some further discussion about the efficacy of traffic correlation attacks at scale in the Improving Tor's anonymity by changing guard parameters analysis:
Tariq's paper makes two simplifying assumptions when calling an attack successful [...] 2) He assumes that the end-to-end correlation attack (matching up the incoming flow to the outgoing flow) is instantaneous and perfect. [...] The second one ("how successful is the correlation attack at scale?" or maybe better, "how do the false positives in the correlation attack compare to the false negatives?") remains an open research question.
Researchers generally agree that given a handful of traffic flows, it's easy to match them up. But what about the millions of traffic flows we have now? What levels of false positives (algorithm says "match!" when it's wrong) are acceptable to this attacker? Are there some simple, not too burdensome, tricks we can do to drive up the false positives rates, even if we all agree that those tricks wouldn't work in the "just looking at a handful of flows" case?
More precisely, it's possible that correlation attacks don't scale well because as the number of Tor clients grows, the chance that the exit stream actually came from a different Tor client (not the one you're watching) grows. So the confidence in your match needs to grow along with that or your false positive rate will explode. The people who say that correlation attacks don't scale use phrases like "say your correlation attack is 99.9% accurate" when arguing it. The folks who think it does scale use phrases like "I can easily make my correlation attack arbitrarily accurate." My hope is that the reality is somewhere in between — correlation attacks in the current Tor network can probably be made plenty accurate, but perhaps with some simple design changes we can improve the situation.
The discussion of false positives is key to this new paper too: Sambuddho's paper mentions a false positive rate of 6%. That sounds like it means if you see a traffic flow at one side of the Tor network, and you have a set of 100000 flows on the other side and you're trying to find the match, then 6000 of those flows will look like a match. It's easy to see how at scale, this "base rate fallacy" problem could make the attack effectively useless.
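The arithmetic behind that paragraph is easy to check. Assume, hypothetically, that the attack detects the true match 95% of the time, and take the quoted 6% false positive rate at face value:

```python
candidates = 100_000           # flows on the other side of the network
true_positive_rate = 0.95      # hypothetical detection rate, for illustration
false_positive_rate = 0.06     # the 6% figure quoted above

# Expected number of wrong flows flagged as a "match" for one real flow:
false_alarms = (candidates - 1) * false_positive_rate
print(round(false_alarms))     # 6000

# Probability that a flagged match is actually the right flow:
precision = true_positive_rate / (true_positive_rate + false_alarms)
print(round(precision, 5))     # 0.00016
```

So even a seemingly modest 6% false positive rate leaves the attacker sifting through thousands of candidates per target flow, which is the base rate fallacy at work.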
And that high false positive rate is not at all surprising, since he is trying to capture only a summary of the flows at each side and then do the correlation using only those summaries. It would be neat (in a theoretical sense) to learn that it works, but it seems to me that there's a lot of work left here in showing that it would work in practice. It also seems likely that his definition of false positive rate and my use of it above don't line up completely: it would be great if somebody here could work on reconciling them.
For a possibly related case where a series of academic research papers misunderstood the base rate fallacy and came to bad conclusions, see Mike's critique of website fingerprinting attacks plus the follow-up paper from CCS this year confirming that he's right.
I should also emphasize that whether this attack can be performed at all has to do with how much of the Internet the adversary is able to measure or control. This diversity question is a large and important one, with lots of attention already. See more discussion here.
In summary, it's great to see more research on traffic confirmation attacks, but a) traffic confirmation attacks are not a new area, so don't freak out without actually reading the papers, and b) this particular one, while kind of neat, doesn't supersede all the previous papers.
(I should put in an addendum here for the people who are wondering if everything they read on the Internet in a given week is surely all tied together: we don't have any reason to think that this attack, or one like it, is related to the recent arrests of a few dozen people around the world. So far, all indications are that those arrests are best explained by bad opsec for a few of them, and then those few pointed to the others when they were questioned.)
[Edit: be sure to read Sambuddho's comment below, too. -RD]
Welcome to the forty-fifth issue in 2014 of Tor Weekly News, the weekly newsletter that covers what’s happening in the Tor community.
Mozilla announces Polaris Privacy Initiative
Mozilla, makers of the Firefox browser upon which Tor Browser is based, announced a series of projects to “accelerate pragmatic and user-focused advances in privacy technology for the Web, giving users more control, awareness and protection in their Web experiences”. The Tor Project is one of Mozilla’s two partners in this Polaris Privacy Initiative, and the collaboration will involve looking at the Firefox codebase to see if its relationship to Tor Browser and the Tor development process can be made more efficient, giving Tor engineers more time to focus on other important issues. Mozilla also stated their intention to run several high-capacity Tor middle relays, contributing to a faster and more stable Tor network.
As Andrew Lewman wrote on the Tor blog, “the Tor Browser is one of the best ways to protect privacy on the web and this partnership is a huge step in advancing people’s right to freedom of expression online”. Watch for more announcements as work on these two fronts continues.
Tor and Operation Onymous
An international coalition of law enforcement authorities announced the seizure of over 400 Tor hidden services allegedly engaging in illegal activity. Once the desired headlines had been written, something approaching the facts began to emerge, with the claimed number of seized services dropping sharply to 27; more troublingly, several high-capacity Tor relays with no apparent connection to the hidden services were also seized. However, in contrast to the last major takedown of hidden services, which involved one shared hidden service hosting platform, there was no obvious single feature linking all of the seized sites, leading to concern in the Tor community that an exploit against the Tor network may have been responsible for their discovery.
It could be that these services were deanonymized individually over a period of months using a variety of means, then all seized at once for maximum effect: as Andrew Lewman and others wrote in a response posted to the Tor blog, these methods could include operational security mistakes by service operators, exploitation of flaws in poorly-written website code, or attacks on the Bitcoin cryptocurrency that is widely used on hidden service marketplaces. On the other hand, if an attack on the Tor network itself is at play, it may be a variant of the class of attack known as “traffic confirmation”, like the one observed earlier this year. “Unfortunately,” as the blog post notes, ”the authorities did not specify how they managed to locate the hidden services”; even if they had, recent disclosures concerning “parallel construction” in law enforcement mean that the public would not necessarily be able to trust their explanation.
“Hidden services need some love” has become a familiar refrain in recent months, and even though the story behind these seizures may remain unknown, they have reinvigorated some long-running threads on improvements to the security of this important technology. George Kadianakis coded a patch that allows hidden service operators to “specify a set of nodes that will be pinned as middle nodes in hidden service rendezvous circuits”, while the theory behind this continues to be discussed, as does the hidden service authorization feature and how widely it is used in practice.
“The attention hidden services have received is minimal compared to their social value and compared to the size and determination of their adversaries.” If you are a hidden service operator concerned by these seizures, or you want to help ensure the possibility of free and uncensorable publishing online, see the group blog post for more details, and feel free to join in with the discussions on the tor-dev mailing list.
More monthly status reports for October 2014
Roger Dingledine sent out the report for SponsorF.
Arturo Filastò reported on OONI’s ongoing study of Tor bridge reachability in different countries, and the recent hackfest on the same topic.
Karsten Loesing offered an update on developments in the world of Onionoo, including new mirrors and search improvements.
Help desk round up
The help desk has been asked how to run Tor Browser on a Chromebook. ChromeOS does not allow programs other than Google Chrome to be executed, which rules out other browsers such as Tor Browser. The workaround is to install a Debian or Ubuntu environment within ChromeOS using crouton. Once crouton is ready, Tor Browser for Linux can be downloaded and installed in the Debian or Ubuntu environment. Crouton users should seek support from the crouton team and not from the Tor help desk.
This issue of Tor Weekly News has been assembled by Harmony, Matt Pagan, Karsten Loesing, and Lunar.
Mozilla announced that the Tor Project and the Center for Democracy & Technology will be part of their new privacy initiative called Polaris, a collaboration to bring even more privacy features into Mozilla’s products. We are honored to be working alongside Mozilla as well as the Center for Democracy & Technology to give Firefox users more options to protect their privacy.
Mozilla is an industry leader in developing features to support the user’s desire for increased privacy online and shares the Tor Project's mission of helping people protect their security online. At the core of Mozilla's values is the belief that individuals’ privacy cannot be treated as optional. We share this belief. Millions of people around the world rely on the protection of the Tor software and network to safeguard their anonymity. We appreciate companies like Mozilla that see the importance of safeguarding privacy. The Tor volunteer network has grown to the point that large companies can usefully contribute without hurting network diversity. The Tor network will get even better with Mozilla's help, and we hope that their participation will encourage even more organizations to join us.
The initial projects with Mozilla will focus on two areas:
The Tor Browser is built on the Firefox platform and we are excited to have the resources of Mozilla’s engineers to help us merge the many Firefox privacy fixes into the Mozilla codebase. The increased attention from Mozilla will give us time to focus on finding and fixing new issues rather than maintaining our fork.
Tor's network size constrains the number of users that can use Tor concurrently. In the short term, Mozilla will help address this by hosting high-capacity Tor middle relays to make Tor’s network more responsive and allow Tor to serve more users.
We believe that the Tor Browser is one of the best ways to protect privacy on the web and this partnership is a huge step in advancing people’s right to freedom of expression online.
Has a Tor bridge already been blocked in a given country? Being able to answer that question would allow Tor to provide more efficient circumvention methods to those who need them. OONI, the Open Observatory of Network Interference, is now actively collecting data on bridge reachability. We are also interested in gaining a better understanding of how quickly censors react to blocking new bridges distributed via Tor Browser, and how effective they are at inhibiting usage of particular pluggable transports.
The countries we are focusing on in this survey are China, Iran, Russia and Ukraine. We call these our test vantage points.
From every test vantage point we perform two types of measurements:
- A bridge reachability measurement that attempts to build a Tor circuit using the bridge in question.
- A TCP connect measurement that simply does a TCP connect to the bridge IP and port.
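The second measurement can be sketched in a few lines. This is an illustrative stand-in, not ooniprobe's actual implementation:

```python
import socket

def tcp_connect(host, port, timeout=5.0):
    """Report whether a TCP handshake to the given IP and port completes."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return {"host": host, "port": port, "status": "success"}
    except OSError as exc:
        return {"host": host, "port": port, "status": "failure",
                "reason": type(exc).__name__}
```

The bridge reachability measurement is the stronger of the two, since a censor can allow the TCP handshake to complete but break the Tor handshake that follows; comparing the two results is what reveals that kind of interference.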
To establish a baseline and eliminate cases in which a bridge is marked as blocked when it is in fact simply offline, we also measure from a control vantage point located in the Netherlands.
So far we have collected about a month's worth of data, and as always it is publicly available for download by anybody interested in looking at it.
To advance this study, at the end of October we held an OONI hackfest in Berlin. Helped by the ubiquitous sticky notes, we came up with a plan for those days of work and for continuing the project.
The first visualisation we produced shows the reachability of bridges, categorised by country and pluggable transport, over time. This simple visualisation already conveys a lot of information, and has also proven to be a useful tool in debugging issues with ooniprobe and the tools we use.
Please note that because the tests are new and experimental, you might find inaccuracies or bugs, so you should not rely on them for serious research just yet.
We also developed a data pipeline that places all of the collected OONI reports into a database. This makes it much easier to search/aggregate and visualise the data of the reports.
To read more about this project check out the ooni-dev mailing list thread on this topic.
This project is still in its very early stages of development, but we would love to hear feedback on it or your cool visualisation ideas, as well as any questions regarding Tor bridge reachability (or Internet censorship more generally) that you would like us to answer!
Recently it was announced that a coalition of government agencies took control of many Tor hidden services. We were as surprised as most of you. Unfortunately, we have very little information about how this was accomplished, but we do have some thoughts which we want to share.
Over the last few days, we received and read reports saying that several Tor relays were seized by government officials. We do not know why the systems were seized, nor do we know anything about the methods of investigation which were used. Specifically, there are reports that three systems of Torservers.net disappeared and there is another report by an independent relay operator. If anyone has more details, please get in contact with us. If your relay was seized, please also tell us its identity so that we can request that the directory authorities reject it from the network.
But, more to the point, the recent publications call the targeted hidden service seizures "Operation Onymous" and say it was coordinated by Europol and other government entities. Early reports say 17 people were arrested and 400 hidden services were seized. Later reports have clarified that it was hundreds of URLs hosted on roughly 27 web sites offering hidden services. We have not been contacted directly or indirectly by Europol or any other agency involved.
Tor is most interested in understanding how these services were located, and whether this indicates a security weakness in Tor hidden services that could be exploited by criminals or by secret police repressing dissidents. We are also interested in learning why the authorities seized Tor relays even though their operation was targeting hidden services. Were these two events related?
How did they locate the hidden services?
So we are left asking "How did they locate the hidden services?". We don't know. In liberal democracies, we should expect that when the time comes to prosecute some of the seventeen people who have been arrested, the police would have to explain to the judge how the suspects came to be suspects, and that as a side benefit of the operation of justice, Tor could learn if there are security flaws in hidden services or other critical internet-facing services. We know through recent leaks that the US DEA and others have constructed a system of organized and sanctioned perjury which they refer to as "parallel construction."
Unfortunately, the authorities did not specify how they managed to locate the hidden services. Here are some plausible scenarios:
The first and most obvious explanation is that the operators of these hidden services failed to use adequate operational security. For example, there are reports of one of the websites being infiltrated by undercover agents and the affidavit states various operational security errors.
Another explanation is exploitation of common web bugs like SQL injections or RFIs (remote file inclusions). Many of those websites were likely quickly-coded e-shops with a big attack surface. Exploitable bugs in web applications are a common problem.
Apparently, there are ways to link transactions and deanonymize Bitcoin clients even if they use Tor. Maybe the seized hidden services were running Bitcoin clients themselves and were victims of similar attacks.
Attacks on the Tor network
The number of takedowns and the fact that Tor relays were seized could also mean that the Tor network was attacked to reveal the location of those hidden services. We received some interesting information from an operator of a now-seized hidden service which may indicate this, as well. Over the past few years, researchers have discovered various attacks on the Tor network. We've implemented some defenses against these attacks, but these defenses do not solve all known issues and there may even be attacks unknown to us.
For example, some months ago, someone was launching non-targeted deanonymization attacks on the live Tor network. People suspect that those attacks were carried out by CERT researchers. While the bug was fixed and the fix quickly deployed in the network, it's possible that as part of their attack, they managed to deanonymize some of those hidden services.
Another possible Tor attack vector could be the Guard Discovery attack. This attack doesn't reveal the identity of the hidden service, but allows an attacker to discover the guard node of a specific hidden service. The guard node is the only node in the whole network that knows the actual IP address of the hidden service. Hence, if the attacker then manages to compromise the guard node or somehow obtain access to it, she can launch a traffic confirmation attack to learn the identity of the hidden service. We've been discussing various solutions to the guard discovery attack for the past several months, but it's not an easy problem to fix properly. Help and feedback on the proposed designs is appreciated.
*Similarly, there exists an attack where the hidden service selects the attacker's relay as its guard node. This may happen randomly, or it could occur if the hidden service selects another relay as its guard and the attacker renders that node unusable by a denial-of-service attack or similar. The hidden service will then be forced to select a new guard, and eventually it will select the attacker.
Furthermore, denial of service attacks on relays or clients in the Tor network can often be leveraged into full de-anonymization attacks. These techniques go back many years, in research such as "From a Trickle to a Flood", "Denial of Service or Denial of Security?", "Why I'm not an Entropist", and even the more recent Bitcoin attacks above. In the Hidden Service protocol there are more vectors for DoS attacks, such as the set of HSDirs and the Introduction Points of a Hidden Service.
Finally, remote code execution exploits against Tor software are also always a possibility, but we have zero evidence that such exploits exist. Although the Tor source code gets continuously reviewed by our security-minded developers and community members, we would like more focused auditing by experienced bug hunters. Public-interest initiatives like Project Zero could help out a lot here. Funding to launch a bug bounty program of our own could also bring real benefit to our codebase. If you can help, please get in touch.
Advice to concerned hidden service operators
As you can see, we still don't know what happened, and it's hard to give concrete suggestions blindly.
If you are a concerned hidden service operator, we suggest you read the cited resources to get a better understanding of the security that hidden services can offer and of the limitations of the current system. When it comes to anonymity, it's clear that the tighter your threat model is, the more informed you need to be about the technologies you use.
If your hidden service lacks sufficient processor, memory, or network resources, DoS-based de-anonymization attacks may be easy to leverage against your service. Be sure to review the Tor performance tuning guide to optimize your relay or client.
*Another possible suggestion we can provide is manually selecting the guard node of a hidden service. By configuring the EntryNodes option in Tor's configuration file you can select a relay in the Tor network you trust. Keep in mind, however, that a determined attacker will still be able to determine this relay is your guard and all other attacks still apply.
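In torrc, that suggestion looks like the following sketch (the fingerprint is a placeholder; substitute a relay you actually trust):

```
# Pin this Tor instance's entry guard to a single trusted relay.
EntryNodes $AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
# Fail closed rather than fall back to other relays if it is unavailable.
StrictNodes 1
```

Note that with StrictNodes 1, Tor will refuse to build circuits if that relay is down, trading availability for control over the guard choice.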
The task of hiding the location of low-latency web services is a very hard problem and we still don't know how to do it correctly. It seems that there are various issues that none of the current anonymous publishing designs have really solved.
In a way, it's even surprising that hidden services have survived so far. The attention they have received is minimal compared to their social value and compared to the size and determination of their adversaries.
It would be great if there were more people reviewing our designs and code. For example, we would really appreciate feedback on the upcoming hidden service revamp or help with the research on guard discovery attacks (see links above).
Also, it's important to note that Tor currently doesn't have funding for improving the security of hidden services. If you are interested in funding hidden services research and development, please get in touch with us. We hope to find time to organize a crowdfunding campaign to acquire independent and focused hidden service funding.
Thanks to Griffin, Matt, Adam, Roger, David, George, Karen, and Jake for contributions to this post.
* Added information about guard node DoS and EntryNodes option - 2014/11/09 18:16 UTC