11-04-2013, 04:07 PM
Self-Policing Mobile Ad Hoc Networks by Reputation Systems
ABSTRACT
Node misbehavior, whether caused by selfish or
malicious intent or by faulty nodes, can significantly
degrade the performance of mobile ad hoc networks.
To cope with misbehavior in such self-organized
networks, nodes need to be able to automatically
adapt their strategy to changing levels of cooperation.
Existing approaches such as economic
incentives or secure routing by cryptography
alleviate some of the problems, but not all. We
describe the use of a self-policing mechanism
based on reputation to enable mobile ad hoc
networks to keep functioning despite the presence
of misbehaving nodes. A reputation system
running in every node detects misbehavior
locally through direct observation and second-hand
information. Once a misbehaving node is detected,
it is automatically isolated from the network.
We classify the features of such reputation systems
and describe possible implementations of
each of them. We explain in particular how it is
possible to use second-hand information while
mitigating contamination by spurious ratings.
MISBEHAVIOR IN
MOBILE AD HOC NETWORKS
In mobile ad hoc networks, nodes are both
routers and terminals. For lack of routing infrastructure,
they have to cooperate to communicate.
Cooperation at the network layer means
routing (i.e., finding a path for a packet) and
forwarding (i.e., relaying packets for others).
Misbehavior means deviation from regular
routing and forwarding. It arises for several reasons:
unintentionally, when a node is faulty, or
intentionally, when the misbehaving node seeks an
advantage or simply commits vandalism, such as a
malicious node mounting an attack or a selfish
node saving power.
In game-theoretic terms, cooperation in mobile
ad hoc networks poses a dilemma. To save battery,
bandwidth, and processing power, nodes
should not forward packets for others. If this
dominant strategy is adopted, however, the outcome
is a nonfunctional network when multihop
routes are needed, so all nodes are worse off.
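The structure of this dilemma can be made concrete with a small payoff sketch. The numbers below are hypothetical, chosen only to reproduce the situation the text describes: dropping dominates forwarding for each individual node, yet mutual dropping leaves everyone worse off than mutual cooperation.

```python
# Illustrative payoff matrix for the packet-forwarding dilemma.
# All payoff values are assumptions for illustration; only their
# ordering matters: "drop" dominates "forward", yet mutual
# dropping is worse for both than mutual forwarding.

PAYOFFS = {
    # (my_action, other_action): my_payoff
    ("forward", "forward"): 3,   # network works; both pay forwarding cost
    ("forward", "drop"):    0,   # I pay the cost; my packets are dropped
    ("drop",    "forward"): 5,   # I save energy; my packets get through
    ("drop",    "drop"):    1,   # nonfunctional multihop network
}

def best_response(other_action):
    """Return the action maximizing my payoff against a fixed opponent."""
    return max(("forward", "drop"), key=lambda a: PAYOFFS[(a, other_action)])

# "drop" is a best response to either opponent action (dominant strategy)...
assert best_response("forward") == "drop"
assert best_response("drop") == "drop"
# ...yet the dominant-strategy outcome is worse than mutual cooperation.
assert PAYOFFS[("drop", "drop")] < PAYOFFS[("forward", "forward")]
```

This is the classic prisoner's-dilemma structure: without an external mechanism such as a reputation system, rational nodes converge on the mutually harmful outcome.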
DETECTION AND
REPUTATION SYSTEMS
The goal of a detection and reputation system is
to enable nodes to adapt to changes in the network
environment caused by misbehaving nodes.
This is achieved by the following functions.
MONITORING
The goal of monitoring is to gather first-hand
information about the behavior of nodes in the
network. Monitoring systems detect misbehavior
that can be distinguished from regular behavior
by observation.
Not forwarding is just one of the possible
types of misbehavior in mobile ad hoc networks.
Several others, mostly concerned with routing
rather than forwarding, have been suggested
(e.g., black hole routing, gray hole routing, wormhole
routing). We classify misbehavior types as
packet dropping, modification, fabrication, or
timing misbehavior; many of these can be detected
by direct observation, as we have shown in a
testbed implementation [4].
To detect misbehavior, nodes take into
account the packets they receive (e.g., a received
acknowledgment from the destination means
that all the nodes on the route cooperated in
forwarding); they can also use enhanced passive
acknowledgments (PACKs) by overhearing the
transmissions of the next hop on the route, since
they are within range when using omnidirectional
antennas. For instance, if they do not overhear
a retransmission to the following node
within a timeout of, say, 100 ms, or the overheard
transmission shows that the packet header
has been illegitimately modified, they conclude
misbehavior. To distinguish misbehavior from
physical failures of the next hop, the timeout
allows time for retransmission attempts in case
the next hop's transmission fails. If link failures persist over
a longer time, the node can expect a route error
(RERR). To account for connectivity problems
at the monitoring node itself, it disregards PACK
timeouts in the case of link-layer error messages
received from its own interface.
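The PACK mechanism described above can be sketched in code. The 100 ms timeout comes from the text; the event interface (`sent`, `overheard`, `link_layer_error`) and the bookkeeping are assumptions made for illustration, not the testbed implementation of [4].

```python
import time

PACK_TIMEOUT = 0.1  # 100 ms, as suggested in the text

class PackMonitor:
    """Minimal passive-acknowledgment (PACK) monitor sketch.

    After forwarding a packet to a next hop, the node expects to
    overhear that hop's retransmission within PACK_TIMEOUT. A missing
    or illegitimately modified retransmission counts as misbehavior,
    unless the local link layer reported an error (a connectivity
    problem at the monitoring node itself).
    """

    def __init__(self):
        self.pending = {}       # packet_id -> (next_hop, deadline)
        self.suspicious = []    # list of (next_hop, reason) events

    def sent(self, packet_id, next_hop, now=None):
        """Record that a packet was forwarded; start the PACK timer."""
        now = time.monotonic() if now is None else now
        self.pending[packet_id] = (next_hop, now + PACK_TIMEOUT)

    def overheard(self, packet_id, header_modified):
        """An overheard retransmission clears the timer; a modified
        header is itself evidence of misbehavior."""
        entry = self.pending.pop(packet_id, None)
        if entry and header_modified:
            self.suspicious.append((entry[0], "modification"))

    def link_layer_error(self, packet_id):
        """Our own interface failed: disregard this PACK timeout."""
        self.pending.pop(packet_id, None)

    def check_timeouts(self, now=None):
        """Expired timers mean the next hop never retransmitted."""
        now = time.monotonic() if now is None else now
        for pid, (hop, deadline) in list(self.pending.items()):
            if now >= deadline:
                del self.pending[pid]
                self.suspicious.append((hop, "dropped"))
```

A node would feed this monitor from its MAC layer: every forwarded packet starts a timer, every overheard next-hop transmission cancels one, and periodic timeout sweeps turn silent next hops into misbehavior events.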
FEATURES AND FUNCTIONS OF
REPUTATION SYSTEMS
The main goal of a reputation system for mobile
ad hoc networks is to make sense of gathered
information about the behavior of others. We
classify the features of a reputation system as
follows:
• Representation of information and classification.
These determine how monitored
events are stored and translated into reputation
ratings, and how ratings are classified
for response.
• Use of second-hand information. Reputation
systems can either rely exclusively on
their own observations or also consider
information obtained by others. Second-hand
information can, however, be spurious,
which raises the questions of how to
incorporate it in a safe way and whether to
propagate it.
• Trust. Trust influences the decision of
whether to use second-hand information. The
design choices concern how to build trust
(out-of-band trust vs. trust built on experience),
how to represent trust, and how to
manage the influence of trust on responses.
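The first feature, representation and classification, can be illustrated with a minimal sketch. The counter-based representation and the tolerance threshold are assumptions chosen for simplicity; they stand in for whatever richer representation (e.g., Bayesian estimates) a concrete system uses.

```python
class ReputationRecord:
    """One node's rating of another, represented as event counts.

    Illustrative sketch: monitored events are accumulated as
    cooperation/misbehavior counts, the rating is the observed
    misbehavior frequency, and classification compares it against
    an assumed tolerance threshold.
    """
    MISBEHAVIOR_TOLERANCE = 0.25  # assumed classification threshold

    def __init__(self):
        self.cooperations = 0
        self.misbehaviors = 0

    def record(self, misbehaved):
        """Translate one monitored event into the representation."""
        if misbehaved:
            self.misbehaviors += 1
        else:
            self.cooperations += 1

    def rating(self):
        """Misbehavior frequency in [0, 1]; 0 when nothing observed."""
        total = self.cooperations + self.misbehaviors
        return 0.0 if total == 0 else self.misbehaviors / total

    def classify(self):
        """Map the rating to a response class."""
        if self.rating() > self.MISBEHAVIOR_TOLERANCE:
            return "misbehaving"
        return "normal"
```

A node keeps one such record per neighbor; the response mechanism (e.g., isolation) then acts on the class, not on raw events.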
USE OF
SECOND-HAND INFORMATION
In the scenario, since A is not within range of C, it
cannot directly observe C's behavior and thus
cannot detect C's misbehavior. This is solved by
allowing the use of second-hand information. In
CONFIDANT, in addition to keeping track of
direct local observation, nodes publish, as shown
in Fig. 1b, their first-hand information from time
to time by local broadcasts to exchange information
with other nodes. The published information
from others is called second-hand
information. It is not propagated further. Nodes
rely mostly on local information, but they can
also take into account the local information of
other nodes to gradually get a global view of the
network. A thus receives information from its
neighbors, here E, F, G, and B, about other
nodes, including C. Again, since A has no first-hand
information about C in our scenario, it can
only find out about C's misbehavior through second-hand
information. There is, however, a problem
since second-hand information can be spurious
(e.g., false accusations). There is a trade-off
between the detection speed gained by second-hand
information (detection before encounter)
and the classification vulnerability introduced.
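One common way to mitigate spurious ratings is a deviation test: a second-hand report is merged, with reduced weight, only if it does not disagree too strongly with the node's own rating. The sketch below illustrates this idea; the threshold and weight values are assumptions for illustration, not the paper's exact parameters.

```python
DEVIATION_THRESHOLD = 0.3  # assumed: maximum accepted disagreement
SECOND_HAND_WEIGHT = 0.2   # assumed: second-hand info weighs less than own

def merge_second_hand(own_rating, reported_rating):
    """Deviation-test sketch for incorporating second-hand information.

    Ratings are misbehavior frequencies in [0, 1]. A report is merged
    (with reduced weight) only if it does not deviate too far from the
    node's own rating; otherwise it is rejected as possibly spurious,
    e.g. a false accusation.

    Returns (new_rating, accepted).
    """
    if abs(reported_rating - own_rating) > DEVIATION_THRESHOLD:
        return own_rating, False  # reject possible false accusation
    merged = ((1 - SECOND_HAND_WEIGHT) * own_rating
              + SECOND_HAND_WEIGHT * reported_rating)
    return merged, True
```

This captures the trade-off noted in the text: compatible reports speed up detection of nodes not yet encountered, while strongly deviating reports, which might be false accusations, cannot contaminate the local rating.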
CONCLUSIONS
We have shown in this article how reputation
systems for self-policing and adaptation to network
cooperation can be built, and how they
mitigate the deleterious effects of misbehavior in
self-organized networks. Monitoring generates
reputation ratings, which in turn allow
nodes to make informed decisions about how to
respond to the behavior of other nodes. We
have also described how second-hand information
can be used to improve the response while
avoiding the dangers of rumor spreading. Our
survey suggests that a reputation system is effective
as long as the number of misbehaving nodes
is not too large; it would be interesting to understand
this point theoretically.