Trickle-CACM 2008

The Emergence of a Networking Primitive in Wireless Sensor Networks

Levis, Brewer, Culler, Gay, Madden, Patel, Polastre, Shenker, Szewczyk, Woo

sensor networking protocols gossip data collection dissemination trickle

@article{levis:cacm2008,
  title="The Emergence of a Networking Primitive in Wireless Sensor Networks",
  author="Philip Levis and Eric Brewer and David Culler and David Gay and Sam
          Madden and Neil Patel and Joe Polastre and Scott Shenker
          and Robert Szewczyk and Alec Woo",
  journal="Communications of the ACM",
  volume="51",
  number="7",
  month="July",
  year="2008",
  pages="99--105"
}

Sensor networking, forcibly liberated from Internet assumptions, has developed a number of networking approaches and assumptions of its own

  • Primary constraint: Low power consumption
    • Radio comms dominate power considerations
  • Primary constraint: Limited memory
    • Memory is both costly and, perhaps more importantly, too much of a power drain
    • It doesn't seem like these factors are going to change anytime soon for these classes of devices

Deployments have highly varying densities, from very sparse to ultra-dense

  • "the variety of network topologies and densities across which sensor network protocols must operate calls for a polite, density-aware, local retransmission scheme"
  • This paper introduces Trickle, which aims to achieve that by providing "eventual consistency"

Most sensor systems rely on two distinct types of network protocols:

  • Collection, for getting data out of the network
    • Readings, debug info, etc
    • Many point-to-point schemes construct overlays over multiple parallel collection topologies
  • Dissemination, for pushing data into the network
    • Code images, commands, etc
  • Both work (in opposite directions) to eventually build consistent state across the network

Flooding is the obvious dissemination approach, but it is unreliable and has high network/power costs (naive flooding is sketched after this list)

  • Adaptive flooding tries to curtail costs by guessing density, but that is tricky
    • Relationship to MPR approaches?
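
For contrast with Trickle's suppression, a minimal C sketch of naive flooding: each node rebroadcasts every message exactly once, tracked by (origin, sequence) keys in a small duplicate cache. All names and sizes here are illustrative assumptions, not any real protocol's code. Note the lack of density awareness: in a dense cell every node still transmits once per message, which is exactly the overhead the quote above complains about.

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define SEEN_WINDOW 32   /* duplicate-cache size; arbitrary for the sketch */

struct flood_state {
    uint32_t seen[SEEN_WINDOW];   /* recent keys: origin << 16 | seq */
    int head;                     /* next slot to overwrite */
};

void flood_init(struct flood_state *s) {
    memset(s->seen, 0xFF, sizeof s->seen);   /* 0xFFFFFFFF marks an empty slot */
    s->head = 0;
}

/* Returns true if the message is new and should be rebroadcast once. */
bool flood_on_receive(struct flood_state *s, uint16_t origin, uint16_t seq) {
    uint32_t key = ((uint32_t)origin << 16) | seq;
    for (int i = 0; i < SEEN_WINDOW; i++)
        if (s->seen[i] == key)
            return false;                    /* duplicate: drop silently */
    s->seen[s->head] = key;
    s->head = (s->head + 1) % SEEN_WINDOW;
    return true;                             /* first sighting: rebroadcast */
}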

Eventual consistency does not provide full reliability, only that nodes will eventually wind up in the same state

  • Intervening messages may be missed, but this does not matter for many applications (see the version-number sketch below)
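
A tiny sketch of what "consistent" can mean here, assuming (as most dissemination protocols do) that each data item carries a monotonically increasing version number; names are illustrative. A node that slept through versions 4 through 6 converges straight to 7, and the skipped intermediate versions never matter.

#include <stdbool.h>
#include <stdint.h>

typedef struct { uint32_t version; /* plus the payload in a real system */ } item_t;

/* Equal versions are "consistent"; any mismatch is "inconsistent"
   and should trigger an exchange of the newer data. */
bool is_consistent(const item_t *mine, uint32_t heard_version) {
    return mine->version == heard_version;
}

/* Adopt strictly newer data outright, no matter how many
   intermediate versions were missed. */
void on_newer(item_t *mine, uint32_t heard_version) {
    if (heard_version > mine->version)
        mine->version = heard_version;
}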

Almost all collection protocols use unreliable datagram delivery to a collection point along a minimum-cost routing tree

    • Originally hop count, then ETX, then link-layer signals, then a unified combination of all layers
    • Fixed-size state may be ensured by only maintaining links to a small set of candidate parents (see the parent-selection sketch after this list)
  • Maintaining awareness of the topology is a large issue: setting the advertising rate correctly is difficult
    • Too fast in a lull and you're wasting energy; too slow in a busy period and you may not react fast enough to changes, potentially even causing routing loops
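
A rough C sketch of the parent-selection idea described above: keep a fixed-size candidate table and pick the neighbor minimizing total ETX (advertised path cost plus link cost). Field names, table size, and the sample numbers are all illustrative assumptions, not any particular protocol's code.

#include <stdio.h>

#define MAX_PARENTS 4    /* fixed-size state: only a few candidates kept */

struct candidate {
    int id;
    float link_etx;      /* expected transmissions across the link */
    float path_etx;      /* neighbor's advertised cost to the root */
};

/* Total expected transmissions to the root via this candidate. */
static float cost_via(const struct candidate *c) {
    return c->path_etx + c->link_etx;
}

/* Pick the candidate minimizing total ETX to the collection point. */
int choose_parent(const struct candidate tbl[], int n) {
    int best = 0;
    for (int i = 1; i < n; i++)
        if (cost_via(&tbl[i]) < cost_via(&tbl[best]))
            best = i;
    return tbl[best].id;
}

int main(void) {
    struct candidate tbl[MAX_PARENTS] = {
        { .id = 7,  .link_etx = 1.2f, .path_etx = 2.0f },
        { .id = 12, .link_etx = 1.0f, .path_etx = 3.5f },
        { .id = 3,  .link_etx = 2.5f, .path_etx = 0.0f },  /* root neighbor, lossy link */
    };
    printf("parent = %d\n", choose_parent(tbl, 3));   /* prints: parent = 3 */
    return 0;
}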

Basic Trickle algorithm (a C sketch follows the list):

  • Pick a timer value t uniformly at random in [T/2, T]
  • Set counter c to 0
  • When consistent data is received, increment c
  • When inconsistent data is received, set T to T_l, reset c, and re-pick t
  • When t fires, transmit the data only if c < K (otherwise suppress)
  • When the interval T expires, double T (capping at T_h), reset c, and re-pick t
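
A minimal C sketch of that state machine, assuming integer time units and arbitrary illustrative constants for T_l, T_h, and K; actual timer scheduling and radio I/O are left to the caller, who invokes the event handlers below.

#include <stdio.h>
#include <stdlib.h>

#define T_L 1     /* smallest interval T_l, arbitrary time units */
#define T_H 64    /* largest interval T_h */
#define K   2     /* redundancy constant: suppress after K consistent messages */

struct trickle {
    int T;        /* current interval length */
    int t;        /* transmission point within the interval */
    int c;        /* consistent messages heard this interval */
};

/* Pick t uniformly in the second half of the interval, [T/2, T). */
static void pick_t(struct trickle *tr) {
    tr->t = tr->T / 2 + rand() % (tr->T - tr->T / 2);
}

static void start_interval(struct trickle *tr, int T) {
    tr->T = T;
    tr->c = 0;
    pick_t(tr);
}

void on_consistent(struct trickle *tr)   { tr->c++; }
void on_inconsistent(struct trickle *tr) { start_interval(tr, T_L); }

/* Timer t fired: transmit only if fewer than K consistent messages heard. */
void on_t_fired(struct trickle *tr) {
    printf("T=%2d c=%d -> %s\n", tr->T, tr->c,
           tr->c < K ? "transmit" : "suppressed");
}

/* Interval expired: double T up to T_H and start a fresh interval. */
void on_T_expired(struct trickle *tr) {
    start_interval(tr, tr->T < T_H ? tr->T * 2 : T_H);
}

int main(void) {
    struct trickle tr;
    start_interval(&tr, T_L);
    for (int i = 0; i < 8; i++) {   /* quiet network: T backs off 1 -> 64 */
        on_t_fired(&tr);
        on_T_expired(&tr);
    }
    on_inconsistent(&tr);           /* inconsistency heard: snap back to T_l */
    on_t_fired(&tr);
    return 0;
}

A real implementation also needs a definition of consistency for its data; the version-number sketch earlier gives one common choice.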

Discussion:

  • Selecting t in the second half of the interval, between T/2 and T, enforces a listen-only period at the start of each interval
  • Backing off T enables the network to reduce traffic when things are not changing, but then to rapidly ramp back up (drop to T_l) when necessary
  • MNP adjusts Trickle so that nodes with more neighbors are more likely to broadcast; low-degree nodes cannot suppress high-degree nodes
  • Trickle maintenance cost grows linearly with the number of data items
    • DIP addresses this with hash trees and randomized searches, at some cost in discovery latency (a toy sketch of the hash-range idea follows this list)
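
A toy sketch of the hash-tree idea the notes attribute to DIP (not DIP's actual wire protocol): hash whole ranges of version numbers and recurse into a subrange only when its hash disagrees, so agreement on N items costs O(log N) comparisons rather than N. Hash choice and layout are assumptions here, and a real protocol must also cope with hash collisions.

#include <stdio.h>
#include <stdint.h>

#define N_ITEMS 8

/* FNV-1a over a range of version numbers; any hash serves the sketch. */
static uint32_t hash_range(const uint32_t *v, int lo, int hi) {
    uint32_t h = 2166136261u;
    for (int i = lo; i < hi; i++) {
        h ^= v[i];
        h *= 16777619u;
    }
    return h;
}

/* Narrow down the differing item with one hash comparison per level,
   rather than advertising every (item, version) pair individually. */
static void find_diff(const uint32_t *a, const uint32_t *b, int lo, int hi) {
    if (hash_range(a, lo, hi) == hash_range(b, lo, hi))
        return;                       /* whole subrange already consistent */
    if (hi - lo == 1) {
        printf("item %d differs (%u vs %u)\n", lo, a[lo], b[lo]);
        return;
    }
    int mid = (lo + hi) / 2;
    find_diff(a, b, lo, mid);
    find_diff(a, b, mid, hi);
}

int main(void) {
    uint32_t mine[N_ITEMS]   = {1, 1, 2, 1, 1, 3, 1, 1};
    uint32_t theirs[N_ITEMS] = {1, 1, 2, 1, 1, 4, 1, 1};  /* item 5 is newer */
    find_diff(mine, theirs, 0, N_ITEMS);
    return 0;
}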