
Levis-NSDI 2004

Trickle: A Self-Regulating Algorithm for Code Propagation and Maintenance in Wireless Sensor Networks

Levis, Patel, Shenker, Culler

trickle propagation consistency dissemination gossip dtn sensor network

@inproceedings{levis:nsdi-2004,
  author={Philip Levis and Neil Patel and
           Scott Shenker and David Culler},
  title={{Trickle}: A Self-Regulating Algorithm for Code Propagation
         and Maintenance in Wireless Sensor Networks},
  booktitle={USENIX/ACM Symposium on Networked Systems
             Design and Implementation ({NSDI})},
  year={2004},
  pages={15--28}
}

Polite gossip: Nodes periodically advertise a summary of their contents, but stay quiet if neighbors have recently broadcast an identical summary

  • When a node has an update, it broadcasts it

Aggregate network transmission count increases as the logarithm of cell density

Trying to trade off two competing goals:

  • Updates need to be propagated quickly and efficiently
  • Steady state should not require much maintenance work
  • Use of low-power radios complicates the issue with high loss rates and disconnections

Acknowledgements used to provide reliability in wireless flooding can easily overwhelm the cost of the original broadcast

Core assumption that code is very small?

Sensor radios: Frequently on the order of tens of Kbps

  • Sending a single bit can consume as much energy as executing a thousand instructions
  • Mica hardware: Forty packets per second; 36 byte payload each
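As a sanity check on the "tens of Kbps" figure, the Mica numbers above work out as follows (my own back-of-envelope arithmetic and helper name, not from the paper):

```python
# Payload throughput implied by the Mica numbers above:
# 40 packets/s with 36-byte payloads.
def effective_kbps(packets_per_sec, payload_bytes):
    """Payload throughput in kilobits per second."""
    return packets_per_sec * payload_bytes * 8 / 1000.0

# 40 pkt/s * 36 B/pkt * 8 b/B = 11520 b/s, i.e. ~11.5 kbps
```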

Assumes updates are not frequent, and that payload can be easily described with metadata

Maintenance cannot be free in the presence of loss

Every so often, a node transmits its information if it has not heard the same thing from a few others

Detection occurs because either an updated node hears of a node with old data, or a non-updated node hears of an update

  • Break time into intervals and at a random point considers broadcasting data
    • If several other nodes have already gossiped the same metadata in this interval, it does not
  • When a node hears a neighbor is out of date, it broadcasts its data, updating everyone within range
  • When a node hears that it is out of date, it broadcasts its out-of-date metadata, which causes the previous rule to trigger

Basic algorithm:

  • Counter c
  • Threshold k (small)
  • Time t in [r/2,r]
    • Choosing in [0,r] causes problems when clocks are unsynchronized
      • Nodes may transmit, then close their interval before others do
    • Listening for r and then selecting to transmit doesn't work in the case where all nodes are synchronized---they'll all transmit at the end of the interval
    • Randomly choosing t spreads the load randomly, varying it in each interval
  • When it hears metadata identical to its own, a node increments c
  • At time t, the node broadcasts a summary of its data if c < k
  • When the interval ends at time r, c is reset to 0 and t is reset to a new value in [r/2,r]
  • If a node with data d_x hears of data d_{x-y}, it broadcasts a summary
  • If it hears of data d_{x+y}, it broadcasts a summary
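The bullets above can be sketched as one node's state machine. A minimal Python sketch, assuming the variable names c, k, r, t from the notes; the class and method names are my own, not the paper's implementation:

```python
import random

# Minimal sketch of one Trickle node's interval logic. Data versions are
# modeled as integers; a higher version means newer data.
class TrickleNode:
    def __init__(self, k, r, version=0):
        self.k = k                          # redundancy threshold (small)
        self.r = r                          # interval length
        self.version = version              # version x of the data held
        self.c = 0                          # identical summaries heard this interval
        self.t = random.uniform(r / 2, r)   # pick t in [r/2, r], not [0, r]

    def hear_summary(self, other_version):
        """React to a neighbor's metadata broadcast."""
        if other_version == self.version:
            self.c += 1                     # identical metadata: count it
            return None
        if other_version < self.version:
            return "broadcast data"         # neighbor is out of date: update it
        return "broadcast summary"          # we are out of date: advertise old metadata

    def at_time_t(self):
        """Fires once per interval at the randomly chosen time t."""
        # Politeness: transmit only if fewer than k neighbors already did
        return "broadcast summary" if self.c < self.k else None

    def end_of_interval(self):
        """Interval completes: reset the counter, pick a fresh t."""
        self.c = 0
        self.t = random.uniform(self.r / 2, self.r)
```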

By dynamically adjusting r, Trickle can propagate quickly and then maintain slowly

  • Large r has low energy overhead but slower discovery rate
  • Small r has more energy overhead but faster discovery rate
  • r has lower bound r_l and upper bound r_h
  • When the interval expires, r doubles, up to r_h
  • When a node hears of an update, it resets r to r_l; ditto when it receives new code it did not hear an update advertisement for but received directly
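The interval adjustment in the bullets above, as a sketch under the same caveat: the parameter names r_l and r_h come from the notes, everything else is assumed.

```python
# Sketch of Trickle's dynamic interval adjustment: r doubles each time
# an interval passes quietly, and snaps back to r_l on news.
class TrickleInterval:
    def __init__(self, r_l, r_h):
        self.r_l, self.r_h = r_l, r_h   # lower/upper bounds on r
        self.r = r_l

    def interval_expired(self):
        """Quiet interval completed: double r, capped at r_h."""
        self.r = min(2 * self.r, self.r_h)

    def heard_news(self):
        """Heard an update advertisement or received new code: reset to r_l."""
        self.r = self.r_l
```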

Define hopcount as the minimum-cost path where each link has cost 1/(1 - loss)
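That link cost is just the expected number of transmissions until one succeeds (geometric with success probability 1 - loss); a one-line check with my own helper name:

```python
# Expected transmissions across a link with independent loss rate `loss`.
def link_cost(loss):
    return 1.0 / (1.0 - loss)

# A lossless link costs 1 transmission; a 50%-loss link costs 2
```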

Studies varying r_h show that it has little effect on propagation time

  • The network becomes very active when an update is received
  • But can go very quiet
  • Though nodes still need to listen, which is possibly a problem since the radio cannot sleep to save batteries...

Paper doesn't look at how code is actually propagated

  • Simply transmits three times
  • This acts as a local flood; in this sense Trickle is a flood control
  • Opposite of programs like SPIN which flood metadata but more closely control data

The important difference versus broadcast floods is that Trickle is maintaining consistency

  • It hits all nodes in the network, but also updates nodes which show up some time after the original transmission

Previous epidemic/gossip protocols use unicast communication

  • Trickle leverages the broadcast medium of the wireless sensor network

SPIN assumes metadata is inexpensive compared to data; this is not obviously true for sensor net code

TinyDB's query pulling (tuples carry associated query IDs; when a node hears of a query it hasn't heard of, it requests it) may cause nodes to miss rare events

Of note:

  • SPIN
  • Demers
  • PlanetP
  • SRM