05-07-2012, 12:51 PM
Nymble: Blocking Misbehaving Users in Anonymizing Networks
Blocking Misbehaving Users in Anonymizing Networks.pdf (Size: 722.28 KB / Downloads: 34)
Introduction
Anonymizing networks such as Crowds [RR98] and Tor [DMS04] route traffic through independent
nodes in separate administrative domains to hide a client’s IP address. Unfortunately, some users
have misused such networks — under the cover of anonymity, users have repeatedly defaced popular
websites such as Wikipedia. Since website administrators cannot blacklist individual malicious
users’ IP addresses, they blacklist the entire anonymizing network. Such measures eliminate malicious
activity through anonymizing networks at the cost of denying anonymous access to behaving
users. In other words, a few “bad apples” can spoil the fun for all. (This has happened repeatedly
with Tor.1)
There are several solutions to this problem, each providing some degree of accountability. In
pseudonymous credential systems [Cha90, Dam88, HS06, LRSW99], users are required to log into
websites using pseudonyms, which can be added to a blacklist if a user misbehaves. Unfortunately,
this approach results in pseudonymity for all users, and weakens the anonymity provided by the
underlying anonymizing network.
Anonymous credential systems such as Camenisch and Lysyanskaya’s systems [CL01, CL04] use
group signatures for anonymous authentication. Basic group signatures [ACJT00, BSZ05, CvH91]
allow servers to revoke a misbehaving user’s anonymity by complaining to a group manager. In
these schemes, servers must query the group manager for every authentication, and this lack of
scalability makes them unsuitable for our goals. Traceable signatures [KTY04, vABHO06] allow the
group manager to release a trapdoor that allows all signatures generated by a particular user to be
traced; such an approach does not provide the backward unlinkability [NF05] that we desire, where
a user’s accesses before the complaint remain anonymous. Backward unlinkability allows for what
we call subjective blacklisting, where servers can blacklist users for whatever reason since the privacy
of the blacklisted user is not at risk. In contrast, approaches without backward unlinkability need
to pay careful attention to when and why a user must have all their connections linked, and users
must also worry about whether their (mis)behaviors will be judged fairly.
Subjective blacklisting is also better suited to servers such as Wikipedia, where misbehaviors,
such as questionable edits to a webpage, are hard to define in mathematical terms. In some
systems, misbehavior can indeed be defined precisely. For instance, double-spending of an “e-coin”
is considered a misbehavior in anonymous e-cash systems [Bra93, Cha82], following which the
offending user is deanonymized. Unfortunately, such systems work for only narrow definitions of
misbehavior — it is difficult to map more complex notions of misbehavior onto “double spending”
or related approaches [TFS04].
With dynamic accumulators [CL02, Ngu05, TX03], a revocation operation results in a new
accumulator and public parameters for the group, and all other existing users’ credentials must
be updated, making such schemes difficult to manage in practical settings. Verifier-local revocation
(VLR) [BS01, AST02, BS04] fixes this shortcoming by requiring the server (“verifier”) to perform
only local updates during revocation. Unfortunately, VLR requires heavy computation at the
server that is linear in the size of the blacklist. For example, for a blacklist with 1,000 entries, each
authentication would take tens of seconds,2 a prohibitive cost in practice. In contrast, our scheme
takes the server about one millisecond per authentication, which is several thousand times faster
1The Abuse FAQ for Tor Server Operators lists several such examples at http://tor.eff.org/faq-abuse.html.en.
2In the construction due to Boneh and Shacham [BS04], computation at the server involves 3 + 2|BL| pairing
operations, each of which takes tens of ms, according to a benchmark from: http://crypto.stanford.edu/pbc.
than VLR.
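The cost comparison in footnote 2 can be checked with a back-of-envelope calculation. The sketch below assumes ~10 ms per pairing (an illustrative figure; the footnote says only "tens of ms") and plugs it into the 3 + 2|BL| pairing count from [BS04]:

```python
# Back-of-envelope cost of one VLR authentication, per footnote 2:
# the server performs 3 + 2|BL| pairing operations.
PAIRING_MS = 10        # assumed per-pairing cost (illustrative)
BLACKLIST_SIZE = 1000  # |BL|, number of blacklist entries

vlr_ms = (3 + 2 * BLACKLIST_SIZE) * PAIRING_MS
print(f"VLR:    {vlr_ms / 1000:.1f} s per authentication")  # ~20 s for |BL| = 1000
print("Nymble: ~1 ms per authentication (symmetric crypto only)")
```

With these assumed figures, a 1,000-entry blacklist already pushes a single VLR authentication to roughly 20 seconds, consistent with the "tens of seconds" claim in the text.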
Lastly, any practical deployment of a credential scheme must address the Sybil attack [Dou02],
where a malicious user can potentially assume multiple identities.
Our solution
We present a secure system called Nymble, which provides all the following properties: anonymous
authentication, backward unlinkability, subjective blacklisting, fast authentication speeds, ratelimited
anonymous connections, revocation auditability (where users can verify whether they have
been blacklisted), and also addresses the Sybil attack to make its deployment practical.3
In Nymble, users acquire an ordered collection of nymbles, a special type of pseudonym, to
connect to websites. Without additional information, these nymbles are computationally hard to
link,4 and hence using the stream of nymbles simulates anonymous access to services. Websites,
however, can blacklist users by obtaining a seed for a particular nymble, allowing them to link
future nymbles from the same user — those used before the complaint remain unlinkable. Servers
can therefore blacklist anonymous users without knowledge of their IP addresses while allowing
behaving users to connect anonymously. Our system ensures that users are aware of their blacklist
status before they present a nymble, and disconnect immediately if they are blacklisted. Although
our work applies to anonymizing networks in general, we consider Tor for purposes of exposition.
In fact, any number of anonymizing networks can rely on the same Nymble system, blacklisting
anonymous users regardless of their anonymizing network(s) of choice.
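The seed-and-nymble mechanism described above can be sketched with a simple hash chain. This is a simplified model for intuition only: the names `f`, `g`, and the way the initial seed is derived are assumptions here, and the actual construction uses keyed functions managed by the Nymble system.

```python
import hashlib

def f(seed: bytes) -> bytes:
    """Evolve the seed one time period forward (one-way)."""
    return hashlib.sha256(b"evolve" + seed).digest()

def g(seed: bytes) -> bytes:
    """Derive the nymble presented to the server from the current seed."""
    return hashlib.sha256(b"nymble" + seed).digest()

# A user's nymbles for six time periods of one linkability window.
seed0 = hashlib.sha256(b"user-secret").digest()  # placeholder for the real issuing step
seeds = [seed0]
for _ in range(5):
    seeds.append(f(seeds[-1]))
nymbles = [g(s) for s in seeds]

# After a complaint at period 3, the server learns seeds[3] and can re-derive
# all future nymbles, linking them to the misbehaving user...
s = seeds[3]
relinked = []
for _ in range(3):
    relinked.append(g(s))
    s = f(s)
assert relinked == nymbles[3:6]
# ...but the one-wayness of f keeps nymbles[0..2] (pre-complaint) unlinkable.
```

The design choice to evolve seeds one-way is exactly what yields backward unlinkability: releasing a seed exposes only the forward portion of the chain.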
Contributions of this paper
Our research makes the following contributions:
• Blacklisting anonymous users. We provide a means by which servers can blacklist users
of an anonymizing network while maintaining their privacy.
• Practical performance. A system such as ours will see widespread adoption only if its
performance is acceptable at the server. Our protocol makes use of inexpensive symmetric
cryptographic operations to significantly outperform the alternatives.
• Open-source implementation. With the goal of contributing a workable system, we have
built an open-source implementation of Nymble, which is publicly available.5 We provide
performance statistics to show that our system is indeed practical.
A preliminary work-in-progress version of this paper (suggesting the use of trusted hardware)
was presented at the Second Workshop on Advances in Trusted Computing [TKS06]. In our paper
presented at the Privacy Enhancing Technologies Symposium [JKTS07], we eliminated the trusted
hardware from our design, and outlined the initial version of our system. This paper represents
a significantly revised system, including major enhancements and a thorough evaluation of our
protocol, and an open-source release of our system for general use. Some of the authors of this paper
3We do not claim to solve the Sybil attack, which is a challenging problem, but we do provide a solution that
represents a tradeoff between blacklistability and practicality.
4Two nymbles are linked if one can infer that they belong to the same user with probability better than random
guessing.
[Figure 1: The Nymble system architecture showing the various modes of interaction. Users interact
with the PM directly, and with the NM and servers through the anonymizing network. All
interactions not involving the user take place directly.]
have published two anonymous authentication schemes, BLAC [TAKS07] and PEREA [TAKS08],
which eliminate the need for a trusted third party for revoking users. While BLAC and PEREA
provide better privacy by eliminating the TTP, Nymble provides authentication rates that are
several orders of magnitude faster than BLAC and PEREA (see Section 6). Nymble thus represents
a practical solution for blocking misbehaving users of anonymizing networks.
An Overview of Nymble
We now present a high-level overview of the Nymble system, and defer the entire protocol description
and security analysis to subsequent sections.
Resource-based blocking
Our system provides servers with a means to block misbehaving users of an anonymizing network.
Blocking a particular user, however, is a formidable task since that user could possibly acquire
several identities (called the Sybil attack [Dou02]). The Nymble system binds nymbles to resources
that are sufficiently difficult to obtain in great numbers. For example, we have used IP addresses
as the resource in our implementation, but our scheme generalizes to other resources such as email
addresses, identity certificates, trusted hardware, and so on.
We note that if two users can prove access to the same resource (e.g., if an IP address is
reassigned), they will obtain the same stream of nymbles. We will address the practical issues
associated with resource-based blocking in Section 9, along with other examples of resources that
would be useful in limiting the Sybil attack.
At this point we emphasize that creating Sybil-free credentials is an independent research topic
in itself and we do not claim to solve the Sybil attack in this paper. This problem is faced by any
credential system [Dou02, LSM06], and we suggest some promising approaches based on resource-based
blocking since we aim to create a real-world deployment.
The Pseudonym Manager
The user must first contact the Pseudonym Manager (PM) and demonstrate control over a resource;
for IP-address blocking, the user is required to connect to the PM directly (i.e., not through a known
anonymizing network), as shown in Figure 1. We assume the PM has knowledge about Tor routers,
for example, and can ensure that users are communicating with it directly.6 Pseudonyms are
deterministically chosen based on the controlled resource, ensuring that the same pseudonym is
always issued for the same resource.
Note that the user does not disclose what server he or she intends to connect to, and therefore
the user’s connections are anonymous to the PM. The PM’s duties are limited to mapping IP
addresses (or other resources) to pseudonyms. As we will explain, the user contacts the PM only
once per linkability window (e.g., once a day).
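The deterministic resource-to-pseudonym mapping can be illustrated with a keyed hash. This is a sketch under assumptions: the HMAC construction and the `PM_KEY` name are illustrative choices, not the paper's specified functions.

```python
import hmac
import hashlib

PM_KEY = b"pm-secret-key"  # hypothetical secret held only by the PM

def pseudonym(pm_key: bytes, resource: str) -> str:
    """Deterministically map a resource (e.g., an IP address) to a pseudonym,
    so the PM always issues the same pseudonym for the same resource."""
    return hmac.new(pm_key, resource.encode(), hashlib.sha256).hexdigest()

# The same IP address always maps to the same pseudonym...
assert pseudonym(PM_KEY, "198.51.100.7") == pseudonym(PM_KEY, "198.51.100.7")
# ...while distinct resources yield unlinkable pseudonyms.
assert pseudonym(PM_KEY, "198.51.100.7") != pseudonym(PM_KEY, "203.0.113.9")
```

Determinism is what limits Sybil attacks here: a user cannot obtain fresh pseudonyms within a linkability window without acquiring fresh resources, yet the PM learns nothing about which servers the user visits.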