The Tor Project
Dedham, MA

Tor is free software and an open network that helps you defend against traffic analysis, a form of network surveillance that threatens personal freedom and privacy, confidential business activities and relationships, and state security.

The Tor Project is a 501(c)3 organization.

Latest News

Aug 20, 2014

Welcome to the thirty-third issue of Tor Weekly News in 2014, the weekly newsletter that covers what is happening in the community around Tor, Aphex Twin’s favorite anonymity network.

Tor Browser 3.6.4 and 4.0-alpha-1 are out

Erinn Clark took to the Tor Blog to announce two new releases by the Tor Browser team. The stable version (3.6.4) contains fixes for several new OpenSSL bugs; since Tor should only be vulnerable to one of them, and “as this issue is only a DoS”, it is not considered a critical security update. This release also brings Tor Browser users the fixes that log warnings about the RELAY_EARLY traffic confirmation attack explained last month. Please be sure to upgrade as soon as possible.

Alongside this stable release, the first alpha version of Tor Browser 4.0 is now available. Among the most exciting new features of this series is the inclusion of the meek pluggable transport. In contrast to the bridge-based transports already available in Tor Browser, meek relies on a principle of “too big to block”, as its creator David Fifield explained: “instead of going through a bridge with a secret address, you go through a known domain (www.google.com for example) that the censor will be reluctant to block. You don’t need to look up any bridge addresses before you get started”. meek currently supports two “front domains”, Google and Amazon Web Services; it may therefore be especially useful for users behind extremely restrictive national or local firewalls. David posted a fuller explanation of meek, and how to configure it, in a separate blog post.

This alpha release also “paves the way to [the] upcoming autoupdater by reorganizing the directory structure of the browser”, as Erinn wrote. This means that users upgrading from any previous Tor Browser series must not extract the new version over their existing Tor Browser folder, as the result will not work.

You can consult the full list of changes and bugfixes for both versions in Erinn’s post, and download the new releases themselves from the Tor website.

The Tor network no longer supports designating relays by name

Since the very first versions of Tor, relay operators have been able to specify “nicknames” for their relays. Such nicknames were initially meant to be unique across the network, and operators of directory authorities would manually “bind” a relay identity key after verifying the nickname. The process became formalized with the “Named” flag introduced in the 0.1.1 series, and later automated with the 0.2.0 series. If a relay held a unique nickname for long enough, the authority would recognize the binding, and subsequently reserve the name for half a year.

Nicknames are useful because it appears humans are not very good at thinking using long strings of random bits. Initially, they made it possible to understand what was happening in the network more easily, and to designate a specific relay in an abbreviated way. Having two relays in the network with the same nickname is not really problematic when one is looking at nodes, or a list in Globe, as relays can always be differentiated by their IP addresses or identity keys.

But complications arise when nicknames are used to specify one relay to the exclusion of another. If the wrong relay gets selected, it can become a security risk. Even though real efforts have been made to improve the situation, properly enforcing uniqueness has always been problematic, and a burden for the few directory authorities that handle naming.

Back in April, the “Heartbleed” bug forced many relays to switch to a new identity key, thus losing their “Named” flag. Because this meant that anyone designating relays by their nickname would now have a hard time continuing to do so, Sebastian Hahn decided to use the opportunity to get rid of the idea entirely.

This week, Sebastian wrote: “Code review down to 0.2.3.x has shown that the naming-related code hasn’t changed much at all, and no issues were found which would mean a Named-flag free consensus would cause any problems. gabelmoo and tor26 have stopped acting as Naming Directory Authorities, and — pending any issues — will stay that way.”

This means that although you can still give your relay a nickname in its configuration file, designating relays by nickname for any other purpose (such as telling Tor to avoid using certain nodes) has now stopped working. “If you — in your Tor configuration file — refer to any relay by name and not by identity hash, please change that immediately. Future versions of Tor will not support using names in the configuration at all”, warns Sebastian.
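To illustrate the kind of change being asked for: a torrc line that designates a relay by nickname should now use the relay's identity fingerprint instead, along the following lines (the fingerprint shown is a made-up placeholder; look up the real one in your relay's logs or in Globe):

  # Before: designating a relay by nickname (no longer supported)
  ExcludeNodes SomeRelayNickname

  # After: designate the relay by its identity fingerprint
  ExcludeNodes $0123456789ABCDEF0123456789ABCDEF01234567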

Miscellaneous news

meejah announced the release of version 0.11.0 of txtorcon, a Twisted-based Python controller library for Tor. This release brings several API improvements; see meejah’s message for full release notes and instructions on how to download it.

Mike Perry posted an overview of a recent report put together by iSEC Partners and commissioned by the Open Technology Fund to explore “current and future hardening options for the Tor Browser”. Among other things, Mike’s post addresses the report’s immediate hardening recommendations, latest thoughts on the proposed Tor Browser “security slider”, and longer-term security development measures, as well as ways in which the development of Google Chrome could inform Tor Browser’s own security engineering.

Nick Mathewson asked for comments on Trunnel, “a little tool to automatically generate binary encoding and parsing code based on C-like structure descriptions” intended to prevent “Heartbleed”-style vulnerabilities from creeping into Tor’s binary-parsing code in C. “My open questions are: Is this a good idea? Is it a good idea to use this in Tor? Are there any tricky bugs left in the generated code? What am I forgetting to think of?”, wrote Nick.
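To give a flavor of the idea, a structure description for such a tool might look roughly like the sketch below (hypothetical syntax, not necessarily Trunnel's actual grammar); the generator would then emit C functions that encode and parse the structure with the bounds checks written once, by the tool, rather than by hand:

  struct example_cell {
    u8 version;
    u16 payload_len;
    u8 payload[payload_len];
  }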

George Kadianakis followed up his journey to the core of what Tor does when trying to connect to entry guards in the absence of a network connection with another post running through some possible improvements to Tor’s behavior in these situations: “I’m looking forward to some feedback on the proposed algorithms as well as improvements and suggestions”.

Arturo Filastò requested feedback on some proposed changes to the format of the “test deck” used by ooni-probe, the main project of the Open Observatory of Network Interference. “A test deck is basically a way of telling it ‘Run this list of OONI tests with these inputs and by the way be sure you also set these options properly when doing so’…This new format is supposed to overcome some of the limitations of the old design and we hope that a major redesign will not be needed in the near future”, wrote Arturo.

Tor’s importance to users who are at risk, for a variety of reasons, makes it an attractive target for creators of malware, who distribute fake or modified versions of Tor software for malicious purposes. Following a recent report of a fake Tor Browser in circulation, Julien Voisin carried out an investigation of the compromised software, and posted a detailed analysis of the results. To ensure you are protected against this sort of attack, make sure you verify any Tor software you download before running it!
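Verification is done with GnuPG against the Tor Browser signing key published on the Tor website. Once that key is imported, the check looks roughly like this (file names are illustrative):

  gpg --verify tor-browser-linux64-3.6.4_en-US.tar.xz.asc tor-browser-linux64-3.6.4_en-US.tar.xz

A “Good signature” line in the output means the download matches what the developers signed; anything else means you should not run it.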

Arlo Breault submitted a status report for July.

As the annual Google Summer of Code season draws to a close, Tor’s GSoC students are submitting their final reports. Israel Leiva reported on the revamp of GetTor, Marc Juarez on the framework for website fingerprinting countermeasures, Juha Nurmi on ahmia.fi, Noah Rahman on Stegotorus enhancement, Amogh Pradeep on Orbot+Orfox, Daniel Martí on consensus diffs, Mikhail Belous on the multicore tor daemon work, Zack Mullaly on the secure ruleset updater for HTTPS Everywhere, and Quinn Jarrell on Fog, the pluggable transport combiner.

Tor help desk roundup

The help desk has been asked if it is possible to set up an anonymous blog using Tor. The Hyde project, developed by Karsten Loesing, documents the step-by-step process of using Tor, Jekyll, and Nginx to host an anonymous blog as a hidden service.
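The core of such a setup is a web server that listens only on localhost, with tor exposing it as a hidden service. A minimal sketch, assuming Nginx serves the generated Jekyll site on port 8080 (paths and ports are illustrative):

  # torrc
  HiddenServiceDir /var/lib/tor/blog/
  HiddenServicePort 80 127.0.0.1:8080

  # nginx: bind to localhost only, so the site is reachable solely
  # through the hidden service
  server {
      listen 127.0.0.1:8080;
      root /var/www/blog/_site;
  }

After restarting tor, the hostname file inside HiddenServiceDir contains the blog's .onion address.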

News from Tor StackExchange

The Tor StackExchange site is looking for another friendly and helpful moderator. Moderators need to take care of flagged items (spam, me-too comments, etc.), and are liaisons between the community and StackExchange’s community team. So, if you’re interested, have a look at the theory of moderation and post an answer to the question at the Tor StackExchange Meta site.


This issue of Tor Weekly News has been assembled by Lunar, harmony, David Fifield, qbi, Matt Pagan, Sebastian Hahn, Ximin Luo, and dope457.

Want to continue reading TWN? Please help us create this newsletter. We still need more volunteers to watch the Tor community and report important news. Please see the project page, write down your name and subscribe to the team mailing list if you want to get involved!

Aug 18, 2014

In May, the Open Technology Fund commissioned iSEC Partners to study current and future hardening options for the Tor Browser. The Open Technology Fund is the primary funder of Tor Browser development, and it commissions security analysis and review for all of the projects that it funds as a standard practice. We worked with iSEC to define the scope of the engagement to focus on the following six main areas:

  1. Review of the current state of hardening in Tor Browser
  2. Investigate additional hardening options and instrumentation
  3. Perform historical vulnerability analysis on Firefox, in order to make informed vulnerability surface reduction recommendations
  4. Investigate image, audio, and video codecs and their respective libraries' vulnerability histories
  5. Review our current about:config settings, both for vulnerability surface reduction and security
  6. Review alternate/obscure protocol and application handlers


The complete report is available in the iSEC publications github repo. All tickets related to the report can be found using the tbb-isec-report keyword. General Tor Browser security tickets can be found using the tbb-security keyword.

Major Findings and Recommendations

The report had the following high-level findings and recommendations.

  • Address Space Layout Randomization is disabled on Windows and Mac

  • Due to our use of cross-compilation and non-standard toolchains in our reproducible build system, several hardening features have ended up disabled. We knew about the Windows issues prior to this report, and should have a fix for them soon. However, the Mac OS issues are news to us, and appear to require that we build 64-bit versions of the Tor Browser for full support. The parent ticket for all basic hardening issues in Tor Browser is bug #10065.

  • Participate in Pwn2Own

  • iSEC recommended that we find a sponsor to fund a Pwn2Own reward for bugs specific to Tor Browser in a semi-hardened configuration. We are very interested in this idea and would love to talk with anyone willing to sponsor us in this competition, but we're not yet certain that our hardening options will have stabilized with enough lead time for the 2015 contest next March.

  • Test and recommend the Microsoft Enhanced Mitigation Experience Toolkit on Windows

  • The Microsoft Enhanced Mitigation Experience Toolkit is an optional toolkit that Windows users can run to further harden Tor Browser against exploitation. We've created bug #12820 for this analysis.

  • Replace the Firefox memory allocator (jemalloc) with ctmalloc/PartitionAlloc

  • PartitionAlloc is a memory allocator designed by Google specifically to mitigate common heap-based vulnerabilities by hardening free lists, creating partitioned allocation regions, and using guard pages to protect metadata and partitions. Its basic hardening features can be picked up by using it as a simple malloc replacement library (as ctmalloc; see the sketch after this list). Bug #10281 tracks this work.

  • Make use of advanced PartitionAlloc features and other instrumentation to reduce the risk from use-after-free vulnerabilities

  • The iSEC vulnerability review found that the overwhelming majority of vulnerabilities to date in Firefox were use-after-free bugs, followed closely by general heap corruption. In order to mitigate these vulnerabilities, we would need to make use of the heap partitioning features of PartitionAlloc to actually ensure that allocations are partitioned (for example, by using the existing tags from Firefox's about:memory). We will also investigate enabling assertions in limited areas of the codebase, such as the refcounting system, the JIT and the JavaScript engine.
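As a rough illustration of the “simple malloc replacement library” approach mentioned above, a drop-in allocator built as a shared library can be loaded underneath an existing binary via LD_PRELOAD on Linux; the library path below is a made-up placeholder, and the actual integration is what the tickets above track:

  # Hypothetical: preload an alternative allocator so the browser's
  # malloc/free calls resolve to the hardened implementation
  LD_PRELOAD=/usr/local/lib/libctmalloc.so ./start-tor-browser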

Vulnerability Surface Reduction (Security Slider)

A large portion of the report was also focused on analyzing historical Firefox vulnerability data and other sources of large vulnerability surface for a planned "Security Slider" UI in Tor Browser.

The Security Slider was first suggested by Roger Dingledine as a way to make it easy for users to trade off between functionality and security, gradually disabling features ranked by both vulnerability count and web prevalence/usability impact.

The report makes several recommendations along these lines, but a brief distillation can be found on the ticket for the slider.

At a high level, we plan for four levels in this slider. "Low" security will be the current Tor Browser settings, with the addition of JIT support. "Medium-Low" will disable most of the JIT, and make HTML5 media click-to-play via NoScript. "Medium-High" will disable the rest of the JIT, disable JavaScript on non-HTTPS URL bar origins, and disable SVG. "High" will fully disable JavaScript, block remote fonts via NoScript, and disable all media codecs except for WebM (which will remain click-to-play).
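In practice, most of these steps map onto browser preferences plus NoScript settings. The following user.js sketch shows the general shape of the JIT- and JavaScript-related changes; the preference names are an assumption based on Firefox at the time, not a confirmed list from the report:

  user_pref("javascript.options.ion", false);         // "Medium-Low": disable the Ion JIT
  user_pref("javascript.options.baselinejit", false); // "Medium-High": disable the baseline JIT too
  user_pref("javascript.enabled", false);             // "High": disable JavaScript entirely

Media, font, and SVG restrictions would be handled by further preferences and by NoScript.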

The Long Term

A web browser is a very large and complicated piece of software, and while we believe that the privacy properties of Tor Browser are better than those of every other web browser currently available, it is very important to us that we raise the bar to successful code execution and exploitation of Tor Browser as well.

We are very eager to see the deployment of sandboxing support in Firefox, which should go a long way to improving the security of Tor Browser as well. To improve security for their users, Mozilla has recently shifted 10 engineers into the Electrolysis project, which provides the groundwork for producing a multiprocess sandbox architecture for the desktop Firefox. This will allow them to provide a Google Chrome style security sandbox for website content, to reduce the risk from software vulnerabilities, and generally impede exploitability.

Until that time, we will also be investigating providing hardened builds of Tor Browser using the AddressSanitizer and Virtual Table Verification features of newer GCC releases. While this will not eliminate all vectors of memory corruption-based exploitation (in particular, the hardening properties of AddressSanitizer are not as good as those provided by SoftBounds+CETS for example, but that compiler is not yet production-ready), it should raise the bar to exploitation. We are hopeful that these builds in combination with PartitionAlloc and the Security Slider will satisfy the needs of our users who require high security and who are willing to trade performance and usability in order to get it.
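For reference, the GCC features mentioned here correspond to compiler options along the following lines; this is a sketch of the general approach, not the project's actual build configuration:

  # AddressSanitizer: instrument memory accesses to catch out-of-bounds
  # reads/writes and use-after-free at run time
  CFLAGS="-fsanitize=address -fno-omit-frame-pointer"
  # Virtual Table Verification (GCC 4.9+): verify vtable pointers before
  # virtual calls in C++ code
  CXXFLAGS="-fvtable-verify=std"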

We also hope to include optional application-wide sandboxes for Tor Browser as part of the official distribution.

Why not Google Chrome?

It is no secret that in many ways, both we and Mozilla are playing catch-up to reach the level of code execution security provided by Google Chrome, and in fact closely following the Google Chrome security team was one of the recommendations of the iSEC report.

In particular, Google Chrome benefits from a multiprocess sandboxing architecture, as well as several further hardening options and innovations (such as PartitionAlloc).

Unfortunately, our budget for the browser project is still very constrained compared to the amount of work that is required to provide the privacy properties we feel are important, and Firefox remains a far more cost-effective platform for us for several reasons. In particular, Firefox's flexible extension system, fully scriptable UI, solid proxy support, and its long Extended Support Release cycle all allow us to accomplish far more with fewer resources than we could with any other web browser.

Further, Google Chrome is far less amenable to supporting basic web privacy and Tor-critical features (such as solid proxy support) than Mozilla Firefox. Initial efforts to work with the Google Chrome team saw some success in terms of adding APIs that are crucial to addons such as HTTPS-Everywhere, but we ran into several roadblocks when it came to Tor-specific features and changes. In particular, several bugs required for basic proxy-safe Tor support for Google Chrome's Incognito Mode ended up blocked for various reasons.

The worst offender on this front is the use of the Microsoft Windows CryptoAPI for certificate validation, without any alternative. This bug means that certificate revocation checking and intermediate certificate retrieval happen outside of the browser's proxy settings, and are subject to alteration by the OEM and/or the enterprise administrator. Worse, beyond the Tor proxy issues, the use of this OS certificate validation API means that the OEM and enterprise also have a simple entry point for installing their own root certificates to enable transparent HTTPS man-in-the-middle, with full browser validation and no user consent or awareness. All of this is on top of the need for defenses against third-party tracking and fingerprinting to prevent the linking of Tor activity to non-Tor usage, defenses that would also be useful for the wider non-Tor userbase.

While we'd love for this situation to change, and are open to working with Google to improve things, at present it means that our only option for Chrome is to maintain an even more invasive fork than our current Firefox patch set, with much less likelihood of a future merge than with Firefox. As a ballpark estimate, maintaining such a fork would require somewhere between 3 and 5 times the engineering staff and infrastructure we currently have at our disposal, in addition to the ramp-up time to port our current feature set over.

Unless either our funding situation or Google's attitude towards the features we require changes, Mozilla Firefox will remain the best platform for us to demonstrate that it is in fact possible to provide true privacy by design for the web for those who want it. It is very distressing that this means playing catch-up and forcing our users to make usability tradeoffs in exchange for improved browser security, but we will continue to do what we can to improve that situation, both with Mozilla and with our own independent efforts.

Aug 15, 2014

The recently released 4.0-alpha-1 version of Tor Browser includes meek, a new pluggable transport for censorship circumvention. meek tunnels your Tor traffic through HTTPS, and uses a technique called “domain fronting” to hide the fact that you are communicating with a Tor bridge—to the censor it looks like you are talking to some other web site. For more details, see the overview and the Child’s Garden of Pluggable Transports.

You only need meek if your Internet connection is censored so that you can’t use ordinary Tor. Even then, you should try other pluggable transports first, because they have less overhead. My recommended order for trying transports is:

  1. obfs3
  2. fte
  3. scramblesuit
  4. meek
  5. flashproxy

Use meek if other transports don’t work for you, or if you want to help development by testing it. I have been using meek for my day-to-day browsing for a few months now.

All pluggable transports have some overhead. You might find that meek feels slower than ordinary Tor. We’re working on some tickets that will make it faster in the future: #12428, #12778, #12857.

At this point, there are two different backends supported. meek-amazon makes it look like you are talking to an Amazon Web Services server (when you are actually talking to a Tor bridge), and meek-google makes it look like you are talking to the Google search page (when you are actually talking to a Tor bridge). It is likely that both will work for you. If one of them doesn’t work, try the other.
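For readers who configure tor directly rather than through the Tor Browser interface, the two backends correspond to torrc bridge lines of roughly the following shape. This is a sketch: the client path, bridge address, and reflector URL are illustrative placeholders, and the real lines ship with Tor Browser:

  UseBridges 1
  ClientTransportPlugin meek exec /path/to/meek-client
  # meek-google: domain-fronted through www.google.com; the bridge address
  # here is essentially ignored, since the url= and front= parameters tell
  # meek-client where to connect
  Bridge meek 0.0.2.0:1 url=https://meek-reflector.example.com/ front=www.google.com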

These instructions and screenshots are for the 4.0-alpha-1 release. If they change in future releases, they will be updated at https://trac.torproject.org/projects/tor/wiki/doc/meek#Quickstart.

How to use meek

First, download a meek-capable version of Tor Browser for your platform and language.

Verify the signature and run the bundle according to the instructions for Windows, OS X, or GNU/Linux.

On the first screen, where it says Which of the following best describes your situation?, click the Configure button.

[Screenshot: Tor Network Settings “Which of the following best describes your situation?” screen with the “Configure” button highlighted.]

On the screen that says Does this computer need to use a proxy to access the Internet?, say No unless you know you need to use a proxy. meek supports using an upstream proxy, but most users don’t need it.

[Screenshot: Tor Network Settings “Does this computer need to use a proxy to access the Internet?” screen with “No” selected.]

On the screen that says Does this computer's Internet connection go through a firewall that only allows connections to certain ports?, say No. As an HTTPS transport, meek only uses web ports, which are allowed by most firewalls.

[Screenshot: Tor Network Settings “Does this computer's Internet connection go through a firewall that only allows connections to certain ports?” screen with “No” selected.]

On the screen that says Does your Internet Service Provider (ISP) block or otherwise censor connections to the Tor Network?, say Yes. Saying Yes will lead you to the screen for configuring pluggable transports.

[Screenshot: Tor Network Settings “Does your Internet Service Provider (ISP) block or otherwise censor connections to the Tor Network?” screen with “Yes” selected.]

On the pluggable transport screen, select Connect with provided bridges and choose either meek-amazon or meek-google from the list. Probably both of them will work for you, so choose whichever feels faster. If one of them doesn’t work, try the other. Then click the Connect button.

[Screenshot: Tor Network Settings “You may use the provided set of bridges or you may obtain and enter a custom set of bridges.” screen with “meek-google” selected.]

If it doesn’t work, you can write to the tor-dev mailing list, or to me personally at dcf@torproject.org, or file a new ticket.

Aug 13, 2014

Welcome to the thirty-second issue of Tor Weekly News in 2014, the weekly newsletter that covers what is happening in the Tor community.

Torsocks 2.0 is now considered stable

Torsocks is a wrapper program that will force an application’s network connections to go through the Tor network. David Goulet released version 2.0.0, blessing the new codebase as stable after more than a year of effort.

David’s original email highlighted several reasons for a complete rewrite of torsocks. Among the issues were maintainability, error handling, thread safety, and the lack of a proper compatibility layer for multiple architectures. The new implementation addresses all these issues while staying about the same size as the previous version (4,000 lines of C according to sloccount), and test coverage has been vastly extended.

Torsocks comes in handy when a piece of software does not natively support the use of a SOCKS proxy. In most cases, the new version may be safer, as torsocks will prevent DNS requests and non-torified connections from happening.
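Typical usage is to prefix the command you want to torify; for example (host names are illustrative):

  # Route an SSH session through Tor
  torsocks ssh user@example.com

  # Fetch a page over Tor with a client that has no SOCKS support configured
  torsocks wget https://check.torproject.org/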

Integrators and power users should watch their step while migrating to the new version. The configuration file format has changed, and some applications might behave differently as more system calls are now restricted.

Next generation Hidden Services and Introduction Points

When Tor clients need to connect to a Hidden Service, the first step is to create a circuit to one of its “Introduction Points”. There, the Tor client serving the Hidden Service is waiting, through another circuit, to agree on a “Rendezvous Point”; the communication is then pursued through circuits connecting to this freshly selected Tor node.

This general design is not subject to any changes in the revision of hidden services currently being worked on. But there are still some questions left unanswered regarding the best way to select Introduction Points. George Kadianakis summarized them as: “How many IPs should an HS have? Which relays can be IPs? What’s the lifetime of an IP?”

For each of these questions, George collected possible answers and assessed whether or not they could respond to several attacks identified in the past. Anyone interested should help with the research needed and join the discussion.

In the meantime, Michael Rogers is also trying to find ways to improve hidden service performance in mobile contexts. One way to do so would be to “keep the set of introduction points as stable as possible”. However, a naive approach to doing so would ease the job of attackers trying to locate a hidden service. The idea would be to always use the same guard and middle node for a given introduction point, but this might also open the doors to new attacks. Michael suggests experimenting with the recently published Java research framework to gain a better understanding of the implications.

More status reports for July 2014

The wave of regular monthly reports from Tor project members for the month of July continued, with submissions from Andrew Lewman, Colin C., and Damian Johnson.

Roger Dingledine sent out the report for SponsorF. Arturo Filastò described what the OONI team was up to. The Tails team covered their activity for June and July.

Miscellaneous news

Two Tor Browser releases are at QA stage: 4.0-alpha-1 including meek and a new directory layout, and 3.6.4 for security fixes.

The recent serious attack against Tor hidden services was also a Sybil attack: a large number of malicious nodes joined the network at once. This led to a renewal of interest in detecting Sybil attacks against the Tor network more quickly. Karsten Loesing published some code computing similarity metrics, and David Fifield has explored visualizations of the consensus that made the recent attack visible.

Gareth Owen sent out an update about the Java Tor Research Framework. This prompted a discussion with George Kadianakis and Tim about the best way to perform fuzz testing on Tor. Have a look if you want to comment on Tim’s approaches.

Thanks to Daniel Thill for running a mirror of the Tor Project website!

ban mentioned a new service collecting donations for the Tor network. OnionTip, set up by Donncha O’Cearbhaill, will collect bitcoins and redistribute them to relay operators who put a bitcoin address in their contact information. As the redistribution is currently done according to the consensus weight, Sebastian Hahn warned that this might encourage people to “cheat the consensus weight” because that now means “more money from oniontip”.

Juha Nurmi sent another update on the ahmia.fi GSoC project.

News from Tor StackExchange

arvee wants to redirect some TCP connections through Tor on OS X; Redsocks should help to route packets for port 443 over Tor. mirimir explained that given the user's pf configuration, the setting “SocksPort 8888” was probably missing.

meee asked a question and offered a bounty for an answer: the circuit handshake entry in Tor’s log file contains some numbers, and meee wants to know what their meaning is: “Circuit handshake stats since last time: 1833867/1833868 TAP, 159257/159257 NTor.”

Easy development tasks to get involved with

The bridge distributor BridgeDB usually gives out bridges by responding to user requests via HTTPS and email. A while ago, BridgeDB also gave out bridges to a very small number of people who would then redistribute bridges using their social network. We would like to resume sending bridges to these people, but only if BridgeDB can be made to send them via GnuPG-encrypted emails. If you’d like to dive into the BridgeDB code and add support for GnuPG-encrypted emails, please take a look at the ticket and give it a try.


This issue of Tor Weekly News has been assembled by Lunar, qbi, Karsten Loesing, harmony, and Philipp Winter.

Want to continue reading TWN? Please help us create this newsletter. We still need more volunteers to watch the Tor community and report important news. Please see the project page, write down your name and subscribe to the team mailing list if you want to get involved!

Aug 12, 2014

The fourth pointfix release of the 3.6 series is available from the Tor Browser Project page and also from our distribution directory.

This release features an update to OpenSSL to address the latest round of OpenSSL security issues. Tor Browser should only be vulnerable to one of these issues: the null pointer dereference. As this issue is only a DoS, we are not considering this a critical security update, but users are advised to upgrade anyway. This release also features an update to Tor to alert users of the RELAY_EARLY attack via a log message, and a fix for a hang that was happening to some users at startup/Tor network bootstrap.

Here is the complete changelog for 3.6.4:

  • Tor Browser 3.6.4 -- All Platforms
    • Update Tor to 0.2.4.23
    • Update Tor launcher to 0.2.5.6
    • Update OpenSSL to 1.0.1i
    • Backported Tor Patches:
      • Bug 11654: Properly apply the fix for malformed bug11156 log message
      • Bug 11200: Fix a hang during bootstrap introduced in the initial
        bug11200 patch.
    • Update NoScript to 2.6.8.36
      • Bug 9516: Send Tor Launcher log messages to Browser Console
    • Update Torbutton to 1.6.11.1
      • Bug 11472: Adjust about:tor font and logo positioning to avoid overlap
      • Bug 12680: Fix Torbutton about url.

In addition, we are also releasing the first alpha of the 4.0 series, available for download on the extended downloads page.

This alpha paves the way to our upcoming autoupdater by reorganizing the directory structure of the browser. This means that in-place upgrades from Tor Browser 3.6 (by extracting/copying over the old directory) will not work.

This release also features Tor 0.2.5.6, and some new defaults for NoScript to make the script permissions for a given url bar domain automatically cascade to all third parties by default (though this may be changed in the NoScript configuration).

  • Tor Browser 4.0-alpha-1 -- All Platforms
    • Ticket 10935: Include the Meek Pluggable Transport (version 0.10)
      • Two modes of Meek are provided: Meek over Google and Meek over Amazon
    • Update Firefox to 24.7.0esr
    • Update Tor to 0.2.5.6-alpha
    • Update OpenSSL to 1.0.1i
    • Update NoScript to 2.6.8.36
      • Script permissions now apply based on URL bar
    • Update HTTPS Everywhere to 5.0development.0
    • Update Torbutton to 1.6.12.0
      • Bug 12221: Remove obsolete Javascript components from the toggle era
      • Bug 10819: Bind new third party isolation pref to Torbutton security UI
      • Bug 9268: Fix some window resizing corner cases with DPI and taskbar size.
      • Bug 12680: Change Torbutton URL in about dialog.
      • Bug 11472: Adjust about:tor font and logo positioning to avoid overlap
      • Bug 9531: Workaround to avoid rare hangs during New Identity
    • Update Tor Launcher to 0.2.6.2
      • Bug 11199: Improve behavior if tor exits
      • Bug 12451: Add option to hide TBB's logo
      • Bug 11193: Change "Tor Browser Bundle" to "Tor Browser"
      • Bug 11471: Ensure text fits the initial configuration dialog
      • Bug 9516: Send Tor Launcher log messages to Browser Console
    • Bug 11641: Reorganize bundle directory structure to mimic Firefox
    • Bug 10819: Create a preference to enable/disable third party isolation
    • Backported Tor Patches:
      • Bug 11200: Fix a hang during bootstrap introduced in the initial
        bug11200 patch.
  • Tor Browser 4.0-alpha-1 -- Linux Changes
    • Bug 10178: Make it easier to set an alternate Tor control port and password
    • Bug 11102: Set Window Class to "Tor Browser" to aid in Desktop navigation
    • Bug 12249: Don't create PT debug files anymore

The list of frequently encountered known issues is also available in our bug tracker.

Aug 06, 2014

Welcome to the thirty-first issue of Tor Weekly News in 2014, the weekly newsletter that covers what is happening in the Tor community.

Tor and the RELAY_EARLY traffic confirmation attack

Roger Dingledine ended several months of concern and speculation in the Tor community with a security advisory posted to the tor-announce mailing list and the Tor blog.

In it, he gave details of a five-month-long active attack on operators and users of Tor hidden services that involved a variant of the so-called “Sybil attack”: the attacker signed up “around 115 fast non-exit relays” (now removed from the Tor network), and configured them to inject a traffic header signal consisting of RELAY_EARLY cells to “tag” any hidden service descriptor requests received by malicious relays — a tag which could then be picked up by other bad nodes acting as entry guards, in the process identifying clients which requested information about a particular hidden service.

The attack is suspected to be linked to a now-cancelled talk that was due to be delivered at the BlackHat security conference. There have been several fruitful and positive research projects involving theoretical attacks on Tor’s security, but this was not among them. Not only were there problems with the process of responsible disclosure, but, as Roger wrote, “the attacker encoded the name of the hidden service in the injected signal (as opposed to, say, sending a random number and keeping a local list mapping random number to hidden service name)”, thereby “[putting] users at risk indefinitely into the future”.

On the other hand, it is important to note that “while this particular variant of the traffic confirmation attack allows high-confidence and efficient correlation, the general class of passive (statistical) traffic confirmation attacks remains unsolved and would likely have worked just fine here”. In other words, the tagging mechanism used in this case is the innovation; the other element of the attack is a known weakness of low-latency anonymity systems, and defending against it is a much harder problem.

“Users who operated or accessed hidden services from early February through July 4 should assume they were affected” and act accordingly; in the case of hidden service operators, this may mean changing the location of the service. Accompanying the advisory were two new releases for both the stable and alpha tor branches (0.2.4.23 and 0.2.5.6-alpha); both include a fix for the signal-injection issue that causes tor to drop circuits and give a warning if RELAY_EARLY cells are detected going in the wrong direction (towards the client), and both prepare the ground for clients to move to single entry guards (rather than sets of three) in the near future. Relay operators should be sure to upgrade; a point-release of the Tor Browser will offer the same fixes to ordinary users. Nusenu suggested that relay operators regularly check their logs for the new warning, “even if the attack origin is not directly attributable from a relay’s point of view”. Be sure to read the full security advisory for a fuller explanation of the attack and its implications.

Why is bad-relays a closed mailing list?

Damian Johnson and Philipp Winter have been working on improving the process of reporting bad relays. The process starts by having users report odd behaviors to the bad-relays mailing list.

Only a few trusted volunteers receive and review these reports. Nusenu started a discussion on tor-talk advocating for more transparency. Nusenu argues that an open list would “likely get more confirm/can’t confirm feedback for a given badexit candidate”, and that it would allow worried users to act faster than operators of directory authorities.

Despite being “usually on the side of transparency”, Roger Dingledine described being “stuck” on the issue, “because the arms race is so lopsidedly against us”.

Roger explains: “we can scan for whether exit relays handle certain websites poorly, but if the list that we scan for is public, then exit relays can mess with other websites and know they’ll get away with it. We can scan for incorrect behavior on various ports, but if the list of ports and the set of behavior we do is public, then again relays are free to mess with things we don’t look for.”

A better future and more transparency probably lie in adaptive test systems run by multiple volunteer groups. Until they come into existence, as a small improvement, Philipp Winter wrote that it was probably safe to publish why relays were disabled, with a “short sentence along the lines of ‘running HTTPS MitM’ or ‘running sslstrip’”.

Monthly status reports for July 2014

Time for monthly reports from Tor project members. The July 2014 round was opened by Georg Koppen, followed by Philipp Winter, Sherief Alaa, Lunar, Nick Mathewson, Pearl Crescent, George Kadianakis, Matt Pagan, Isis Lovecruft, Griffin Boyce, Arthur Edelstein, and Karsten Loesing.

Lunar reported on behalf of the help desk and Mike Perry for the Tor Browser team.

Miscellaneous news

Anthony G. Basile announced a new release of tor-ramdisk, an i686 or x86_64 uClibc-based micro Linux distribution whose only purpose is to host a Tor server. Version 20140801 updates Tor to version 0.2.4.23, and the kernel to 3.15.7 with Gentoo’s hardened patches.

meejah has announced a new command-line application. carml is a versatile set of tools to “query and control a running Tor”. It can do things like “list and remove streams and circuits; monitor stream, circuit and address-map events; watch for any Tor event and print it (or many) out; monitor bandwidth; run any Tor control-protocol command; pipe through common Unix tools like grep, less, cut, etcetera; download TBB through Tor, with pinned certs and signature checking; and even spit out and run xplanet configs (with router/circuit markers)!” The application is written in Python and uses the txtorcon library. meejah describes it as early-alpha and warns that it might contain “serious, anonymity-destroying bugs”. Watch out!

Only two weeks left for the Google Summer of Code students, and the last round of reports but one: Juha Nurmi on the ahmia.fi project, Marc Juarez on website fingerprinting defenses, Amogh Pradeep on Orbot and Orfox improvements, Zack Mullaly on the HTTPS Everywhere secure ruleset update mechanism, Israel Leiva on the GetTor revamp, Quinn Jarrell on the pluggable transport combiner, Daniel Martí on incremental updates to consensus documents, Noah Rahman on Stegotorus enhancements, and Sreenatha Bhatlapenumarthi on the Tor Weather rewrite.

The Tails team is looking for testers to solve a possible incompatibility in one of the recommended installation procedures. If you have a running Tails system, a spare USB stick and some time, please help. Don’t miss the recommended command-line options!

The Citizen Lab Summer Institute took place at the University of Toronto from July 28 to 31. The event brought together policy and technology researchers who focus on Internet censorship and measurement. A lot of great work was presented including but not limited to a proposal to measure the chilling effect, ongoing work to deploy Telex, and several projects to measure censorship in different countries. Some Tor-related work was also presented: Researchers are working on understanding how the Tor network is used for political purposes. Another project makes use of TCP/IP side channels to measure the reachability of Tor relays from within China.

The Electronic Frontier Foundation wrote two blog posts to show why Tor is important for universities and how universities can help the Tor network. The first part explains why Tor matters, gives several examples of universities already contributing to the Tor network, and outlines a few reasons for hosting new Tor nodes. The second part gives actual tips on where to start, and how to do it best.

Tor help desk roundup

Users occasionally ask if there is any way to set Tor Browser as the default browser on their system. Currently this is not possible, although it may be possible in a future Tor Browser release. In the meantime, Tails provides another way to prevent accidentally opening hyperlinks in a non-Tor browser.

Easy development tasks to get involved with

Tor Launcher, the Tor controller shipped with Tor Browser, is written in JavaScript. Starting with Firefox 14, the “nsILocalFile” interface has been deprecated and replaced with the “nsIFile” interface. What we should do is replace all instances of “nsILocalFile” with “nsIFile” and see if anything else needs fixing to make Tor Launcher still work as expected. If you know a little bit about Firefox extensions and want to give this a try, clone the repository, make the necessary changes, run “make package”, and tell us whether something broke in interesting ways.
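The substitution itself is mostly mechanical; here is a sketch of the kind of change involved (illustrative code, not an actual excerpt from Tor Launcher):

  // Before: the deprecated interface
  var file = Components.classes["@mozilla.org/file/local;1"]
                       .createInstance(Components.interfaces.nsILocalFile);

  // After: nsIFile now provides the same functionality
  var file = Components.classes["@mozilla.org/file/local;1"]
                       .createInstance(Components.interfaces.nsIFile);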


This issue of Tor Weekly News has been assembled by Lunar, harmony, Matt Pagan, Philipp Winter, David Fifield, Karsten Loesing, and Roger Dingledine.

Want to continue reading TWN? Please help us create this newsletter. We still need more volunteers to watch the Tor community and report important news. Please see the project page, write down your name and subscribe to the team mailing list if you want to get involved!

Jul 30, 2014

This advisory was posted on the tor-announce mailing list.

SUMMARY:

On July 4 2014 we found a group of relays that we assume were trying to deanonymize users. They appear to have been targeting people who operate or access Tor hidden services. The attack involved modifying Tor protocol headers to do traffic confirmation attacks.

The attacking relays joined the network on January 30 2014, and we removed them from the network on July 4. While we don't know when they started doing the attack, users who operated or accessed hidden services from early February through July 4 should assume they were affected.

Unfortunately, it's still unclear what "affected" includes. We know the attack looked for users who fetched hidden service descriptors, but the attackers likely were not able to see any application-level traffic (e.g. what pages were loaded or even whether users visited the hidden service they looked up). The attack probably also tried to learn who published hidden service descriptors, which would allow the attackers to learn the location of that hidden service. In theory the attack could also be used to link users to their destinations on normal Tor circuits too, but we found no evidence that the attackers operated any exit relays, making this attack less likely. And finally, we don't know how much data the attackers kept, and due to the way the attack was deployed (more details below), their protocol header modifications might have aided other attackers in deanonymizing users too.

Relays should upgrade to a recent Tor release (0.2.4.23 or 0.2.5.6-alpha), to close the particular protocol vulnerability the attackers used — but remember that preventing traffic confirmation in general remains an open research problem. Clients that upgrade (once new Tor Browser releases are ready) will take another step towards limiting the number of entry guards that are in a position to see their traffic, thus reducing the damage from future attacks like this one. Hidden service operators should consider changing the location of their hidden service.

THE TECHNICAL DETAILS:

We believe they used a combination of two classes of attacks: a traffic confirmation attack and a Sybil attack.

A traffic confirmation attack is possible when the attacker controls or observes the relays on both ends of a Tor circuit and then compares traffic timing, volume, or other characteristics to conclude that the two relays are indeed on the same circuit. If the first relay in the circuit (called the "entry guard") knows the IP address of the user, and the last relay in the circuit knows the resource or destination she is accessing, then together they can deanonymize her. You can read more about traffic confirmation attacks, including pointers to many research papers, at this blog post from 2009:
https://blog.torproject.org/blog/one-cell-enough

The particular confirmation attack they used was an active attack where the relay on one end injects a signal into the Tor protocol headers, and then the relay on the other end reads the signal. These attacking relays were stable enough to get the HSDir ("suitable for hidden service directory") and Guard ("suitable for being an entry guard") consensus flags. Then they injected the signal whenever they were used as a hidden service directory, and looked for an injected signal whenever they were used as an entry guard.

The way they injected the signal was by sending sequences of "relay" vs "relay early" commands down the circuit, to encode the message they want to send. For background, Tor has two types of cells: link cells, which are intended for the adjacent relay in the circuit, and relay cells, which are passed to the other end of the circuit. In 2008 we added a new kind of relay cell, called a "relay early" cell, which is used to prevent people from building very long paths in the Tor network. (Very long paths can be used to induce congestion and aid in breaking anonymity). But the fix for infinite-length paths introduced a problem with accessing hidden services, and one of the side effects of our fix for bug 1038 was that while we limit the number of outbound (away from the client) "relay early" cells on a circuit, we don't limit the number of inbound (towards the client) relay early cells.

So in summary, when Tor clients contacted an attacking relay in its role as a Hidden Service Directory to publish or retrieve a hidden service descriptor (steps 2 and 3 on the hidden service protocol diagrams), that relay would send the hidden service name (encoded as a pattern of relay and relay-early cells) back down the circuit. Other attacking relays, when they get chosen for the first hop of a circuit, would look for inbound relay-early cells (since nobody else sends them) and would thus learn which clients requested information about a hidden service.

There are three important points about this attack:

A) The attacker encoded the name of the hidden service in the injected signal (as opposed to, say, sending a random number and keeping a local list mapping random number to hidden service name). The encoded signal is encrypted as it is sent over the TLS channel between relays. However, this signal would be easy to read and interpret by anybody who runs a relay and receives the encoded traffic. And we might also worry about a global adversary (e.g. a large intelligence agency) that records Internet traffic at the entry guards and then tries to break Tor's link encryption. The way this attack was performed weakens Tor's anonymity against these other potential attackers too — either while it was happening or after the fact if they have traffic logs. So if the attack was a research project (i.e. not intentionally malicious), it was deployed in an irresponsible way because it puts users at risk indefinitely into the future.

(This concern is in addition to the general issue that it's probably unwise from a legal perspective for researchers to attack real users by modifying their traffic on one end and wiretapping it on the other. Tools like Shadow are great for testing Tor research ideas out in the lab.)

B) This protocol header signal injection attack is actually pretty neat from a research perspective, in that it's a bit different from previous tagging attacks which targeted the application-level payload. Previous tagging attacks modified the payload at the entry guard, and then looked for a modified payload at the exit relay (which can see the decrypted payload). Those attacks don't work in the other direction (from the exit relay back towards the client), because the payload is still encrypted at the entry guard. But because this new approach modifies ("tags") the cell headers rather than the payload, every relay in the path can see the tag.

C) We should remind readers that while this particular variant of the traffic confirmation attack allows high-confidence and efficient correlation, the general class of passive (statistical) traffic confirmation attacks remains unsolved and would likely have worked just fine here. So the good news is traffic confirmation attacks aren't new or surprising, but the bad news is that they still work. See https://blog.torproject.org/blog/one-cell-enough for more discussion.

Then the second class of attack they used, in conjunction with their traffic confirmation attack, was a standard Sybil attack — they signed up around 115 fast non-exit relays, all running on 50.7.0.0/16 or 204.45.0.0/16. Together these relays summed to about 6.4% of the Guard capacity in the network. Then, in part because of our current guard rotation parameters, these relays became entry guards for a significant chunk of users over their five months of operation.

We actually noticed these relays when they joined the network, since the DocTor scanner reported them. We considered the set of new relays at the time, and made a decision that it wasn't that large a fraction of the network. It's clear there's room for improvement in terms of how to let the Tor network grow while also ensuring we maintain social connections with the operators of all large groups of relays. (In general having a widely diverse set of relay locations and relay operators, yet not allowing any bad relays in, seems like a hard problem; on the other hand our detection scripts did notice them in this case, so there's hope for a better solution here.)

In response, we've taken the following short-term steps:

1) Removed the attacking relays from the network.

2) Put out a software update for relays to prevent "relay early" cells from being used this way.

3) Put out a software update that will (once enough clients have upgraded) let us tell clients to move to using one entry guard rather than three, to reduce exposure to relays over time.

4) Clients can tell whether they've received a relay or relay-early cell. For expert users, the new Tor version warns you in your logs if a relay on your path injects any relay-early cells: look for the phrase "Received an inbound RELAY_EARLY cell" (a quick way to check is sketched below).
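For example, you can search an existing log file for that warning with something like the following (the log path is illustrative and depends on your configuration):

  grep "Received an inbound RELAY_EARLY cell" /var/log/tor/notices.log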

The following longer-term research areas remain:

5) Further growing the Tor network and diversity of relay operators, which will reduce the impact from an adversary of a given size.

6) Exploring better mechanisms, e.g. social connections, to limit the impact from a malicious set of relays. We've also formed a group to pay more attention to suspicious relays in the network:
https://blog.torproject.org/blog/how-report-bad-relays

7) Further reducing exposure to guards over time, perhaps by extending the guard rotation lifetime:
https://blog.torproject.org/blog/lifecycle-of-a-new-relay
https://blog.torproject.org/blog/improving-tors-anonymity-changing-guard...

8) Better understanding statistical traffic correlation attacks and whether padding or other approaches can mitigate them.

9) Improving the hidden service design, including making it harder for relays serving as hidden service directory points to learn what hidden service address they're handling:
https://blog.torproject.org/blog/hidden-services-need-some-love

OPEN QUESTIONS:

Q1) Was this the Black Hat 2014 talk that got canceled recently?
Q2) Did we find all the malicious relays?
Q3) Did the malicious relays inject the signal at any points besides the HSDir position?
Q4) What data did the attackers keep, and are they going to destroy it? How have they protected the data (if any) while storing it?

Great questions. We spent several months trying to extract information from the researchers who were going to give the Black Hat talk, and eventually we did get some hints from them about how "relay early" cells could be used for traffic confirmation attacks, which is how we started looking for the attacks in the wild. They haven't answered our emails lately, so we don't know for sure, but it seems likely that the answer to Q1 is "yes". In fact, we hope they *were* the ones doing the attacks, since otherwise it means somebody else was. We don't yet know the answers to Q2, Q3, or Q4.

Jul 30, 2014

Welcome to the thirtieth issue of Tor Weekly News in 2014, the weekly newsletter that covers what is happening in the Tor community.

Tor Browser 3.6.3 is out

A new pointfix release for the 3.6 series of the Tor Browser is out. Most components have been updated and a couple of small issues fixed. Details are available in the release announcement.

The release includes important security updates from Firefox. Be sure to upgrade! Users of the experimental meek bundles have not been forgotten.

New Tor stable and alpha releases

Two new releases of Tor are out. The new 0.2.5.6-alpha release “brings us a big step closer to slowing down the risk from guard rotation, and fixes a variety of other issues to get us closer to a release candidate”.

Once directory authorities have upgraded, they will “assign the Guard flag to the fastest 25% of the network”. Some experiments showed that “for the current network, this results in about 1100 guards, down from 2500.”

The complementary change to moving the number of entry guards down to one is the introduction of two new consensus parameters. NumEntryGuards and NumDirectoryGuards will respectively set the number of entry guards and directory guards that clients will use. The default for NumEntryGuards is currently three, but this will allow a reversible switch to one in the near future.
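These values are set network-wide through the consensus, but the corresponding torrc option exists as well; purely as an illustration, a client could in principle opt in to single-guard behaviour early with:

  NumEntryGuards 1

The default remains three until the consensus parameter changes it.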

Several important fixes have been backported to the stable branch in the 0.2.4.23 release. Source packages are available at the regular location. Binary packages have already landed in Debian (unstable, experimental) and the rest should follow shortly.

Security issue in Tails 1.1 and earlier

Several vulnerabilities have been discovered in I2P which is shipped in Tails 1.1 and earlier. I2P is an anonymous overlay network with many similarities to Tor. There was quite some confusion around the disclosure process of this vulnerability. Readers are encouraged to read what the Tails team has written about it.

Starting I2P in Tails normally requires a click on the relevant menu entry. Once started, the security issues can lead to the deanonymization of a Tails user who visits a malicious web page. As a matter of precaution, the Tails team recommends removing the “i2p” package each time Tails is started.
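The Tails advisory contains the authoritative instructions; the removal itself boils down to uninstalling the package from a root terminal after each boot, roughly as follows (a sketch, assuming the standard Debian tooling available in Tails):

  apt-get purge i2p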

I2P has fixed the issue in version 0.9.14. It is likely to be included in the next Tails release, but the team is also discussing implementing more in-depth protections that would be required in order to keep I2P in Tails.

Reporting bad relays

“Bad” relays are malicious, misconfigured, or otherwise broken Tor relays. As anyone is free to volunteer bandwidth and processing power to spin up a new relay, users can encounter such bad relays once in a while. Getting them out of everyone’s circuits is thus important.

Damian Johnson and Philipp Winter have been working on improving and documenting the process of reporting bad relays. “While we do regularly scan the network for bad relays, we are also dependent on the wider community to help us spot relays which don’t act as they should” wrote Philipp.

When observing unusual behaviors, one way to learn about the current exit relay before reporting it is to use the Check service. This method can be inaccurate and tends to be a little bit cumbersome. The good news is that Arthur Edelstein is busy integrating more feedback on Tor circuits being used directly into the Tor Browser.

Miscellaneous news

The Tor Project, Inc. has completed its standard financial audit for the year 2013. IRS Form 990, Massachusetts Form PC, and the Financial Statements are now available for anyone to review. Andrew Lewman explained: “we publish all of our related tax documents because we believe in transparency. All US non-profit organizations are required by law to make their tax filings available to the public on request by US citizens. We want to make them available for all.”

CJ announced the release of orWall (previously named Torrific), a new Android application that “will force applications selected through Orbot while preventing unchecked applications to have network access”.

The Thali project aims to use hidden services to host web content. As part of the effort, they have written a cross-platform Java library. “The code handles running the binary, configuring it, managing it, starting a hidden service, etc.” wrote Yaron Goland.

Gareth Owen released a Java-based Tor research framework. The goal is to enable researchers to try things out without having to deal with the full tor source. “At present, it is a fully functional client with a number of examples for hidden services and SOCKS. You can build arbitrary circuits, build streams, send junk cells, etc.” wrote Gareth.

Version 0.2.3 of BridgeDB has been deployed. Among other changes, owners of riseup.net email accounts can now request bridges through email.

The first candidate for Orbot 14.0.5 has been released. “This update includes improved management of the background processes, the ability to easily change the local SOCKS port (to avoid conflicts on some Samsung Galaxy and Note devices), and the fancy new notification dialog, showing your current exit IPs and country” wrote Nathan Freitas.

While working on guard nodes, George Kadianakis realized that “the data structures and methods of the guard nodes code are not very robust”. Nick Mathewson and George have been busy trying to come up with better abstractions. More brains working on the problem would be welcome!

Mike Perry posted “a summary of the primitives that Marc Juarez aims to implement for his Google Summer of Code project on prototyping defenses for Website Traffic Fingerprinting and follow-on research”. Be sure to have a look if you want to help prevent website fingerprint attacks.

A new draft proposal “for making all relays also be directory servers (by default)” has been submitted by Matthew Finkel. Among the motivations, Matthew wrote: “In a network where every router is a directory server, the profiling and partitioning attack vector is reduced to the guard (for clients who use them), which is already in a privileged position for this. In addition, with the increased set size, relay descriptors and documents are more readily available and it diversifies the providers.” This change might make the transition to a single guard safer. Feedback welcome!

Noah Rahman reported on the progress of the Stegotorus Google Summer of Code project.

Tor help desk roundup

A number of Iranian Tor users have reported that Tor no longer works out of the box in Iran, and the Tor Metrics portal shows a corresponding drop in the number of directly-connecting users there. Collin Anderson investigated the situation and reported that the Telecommunication Company of Iran had begun blocking the Tor network by blacklisting connections to Tor’s directory authorities. Tor users can circumvent this block by getting bridges from BridgeDB and entering the bridge addresses they receive into their Tor Browser.


This issue of Tor Weekly News has been assembled by Lunar, Matt Pagan, harmony, and Philipp Winter.

Want to continue reading TWN? Please help us create this newsletter. We still need more volunteers to watch the Tor community and report important news. Please see the project page, write down your name and subscribe to the team mailing list if you want to get involved!