
Corona: A High Performance Publish-Subscribe System for the World Wide Web. Ramasubramanian, Peterson, Sirer.

Corona is a system for distributing the polling required in a sensing loop. Its primary motivation is reducing the amount of Internet traffic spent checking web sites and news feeds for updates, while at the same time decreasing the latency at which updates are discovered. Further, the system aims to achieve this without modifying existing infrastructure (web servers) or user client software (web browsers, news readers, chat programs).
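The core idea can be modeled with a toy calculation (my paraphrase of the motivation, not Corona's actual algorithm): if k cooperating nodes poll one source at interval T with evenly staggered offsets, the source still sees each node only once per T, but some node polls every T/k seconds, so worst-case detection latency drops from roughly T to roughly T/k.

```python
# Toy model of cooperative polling (illustrative only, not Corona's
# protocol): k nodes poll one source at interval T, evenly staggered.
def detection_latency(update_time, k, T):
    # Poll times over two periods, from all k staggered nodes.
    polls = sorted(i * T / k + n * T for i in range(k) for n in range(2))
    return min(t for t in polls if t >= update_time) - update_time

T, k = 60.0, 6
worst = max(detection_latency(u / 10, k, T) for u in range(int(T) * 10))
print(f"worst-case latency with {k} nodes: {worst:.1f}s (vs ~{T:.0f}s alone)")
```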
There are some issues with the proposed system, though it's not immediately clear how to resolve them. For example, updates without source-provided timestamps are forwarded to the primary owner, which then assigns an ordering. That ordering could easily be assigned incorrectly, and the extra load could overwhelm that node. However, for the data (web sites) discussed, such sources are probably relatively rare.
The centralized user updating is probably mostly an implementation issue, though one that should be addressed. There seems to be no hurdle against creating a more distributed system, other than utilizing existing and widely installed software.
Also, interestingly, the paper focuses on analyzing the reduction in demand on the sources. No mention is made of the effect on the network as a whole, e.g. bandwidth consumption. It's not clear whether there is a net reduction in traffic across the network once all of the update propagations and other system messages are factored in; there could easily be, and the answer may also depend a great deal on how close users are to their owners.
Another small point is that there's room here, and in newsfeed reading in general, to apply richer models to learning polling frequencies. Corona simply maintains an average of observed update intervals. It would be interesting and beneficial to learn a richer polling ruleset, e.g. "Don't poll over the weekend because nothing happens, but poll every minute on weekdays."
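As a sketch of what such a richer ruleset might look like (the class and the weekday/hour bucketing are hypothetical, not anything Corona does), a poller could track observed update rates per time bucket instead of one running average:

```python
from collections import defaultdict
from datetime import datetime

class BucketedPollModel:
    """Hypothetical sketch: learn per-(weekday, hour) update rates
    rather than a single average update interval."""

    def __init__(self):
        self.updates = defaultdict(int)  # (weekday, hour) -> updates seen
        self.polls = defaultdict(int)    # (weekday, hour) -> polls made

    def record(self, when: datetime, updated: bool):
        key = (when.weekday(), when.hour)
        self.polls[key] += 1
        if updated:
            self.updates[key] += 1

    def update_rate(self, when: datetime) -> float:
        key = (when.weekday(), when.hour)
        if self.polls[key] == 0:
            return 0.5  # no data yet: assume moderate activity
        return self.updates[key] / self.polls[key]

    def poll_interval(self, when: datetime, base=60.0) -> float:
        # Poll frequently in active buckets, rarely in quiet ones.
        return base / max(self.update_rate(when), 0.01)
```

With enough observations, weekday-morning buckets would earn short intervals while weekend buckets drift toward the minimum polling rate.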

Mesh-Based Content Routing Using XML. Snoeren, Conley, Gifford. ACM SOSP 2001.

This paper presents work on XML-based, reliable publish/subscribe systems. Notably, the focus is less on high data rates and large numbers of nodes, and more on reliability. The major content in the paper is a scheme for utilizing redundant links and correctly ordering and merging received data. It is an interesting use of application layer routing and overlay networking to provide services not otherwise supported by the network (namely, reliability), but seems oriented toward systems with stable overlay and underlying network configurations, small numbers of nodes, significant networking infrastructure, and procedural/deployment coordination.
From an application's perspective, one of the major features of the system is data streams encoded in XML and the use of XPath/XQuery. Some time is spent discussing and demonstrating the use of simple compression schemes to reduce the large bandwidth costs this incurs. This is a common argument: the costs of using XML in a network setting can be significantly reduced by techniques such as compression, and benefits such as improved application integration easily outweigh those costs. An often overlooked point is that this assumes network latency dominates processing time. There are settings where this is not the case. Some are relatively uncommon, e.g. satellite uplinks and sensor networks. Others are not; most PDAs and smartphones might be challenged to keep up with link capacities on Wi-Fi, Bluetooth, and other media. Even on high-powered desktop and server machines, large numbers of nodes (e.g. on Internet-scale overlays) could easily swamp available processing power.
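The compression side of the argument is easy to demonstrate with a toy example (generic `zlib`, not the paper's actual scheme; the record format is made up): repetitive XML markup compresses extremely well, but note that the CPU cost of compressing and parsing is paid on every message, which is exactly the overlooked cost above.

```python
import zlib

# Toy illustration: a burst of similar XML-encoded messages.
record = b"<quote><symbol>ACME</symbol><price>12.34</price></quote>"
stream = record * 1000

# Verbose, repetitive markup compresses well, shrinking the bandwidth
# overhead -- at the price of per-message processing time.
compressed = zlib.compress(stream)
ratio = len(compressed) / len(stream)
print(f"{len(stream)} -> {len(compressed)} bytes ({ratio:.1%})")
```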
Another smaller point is that the reordering of packets along internal links should be justified further. Some sort of retransmission scheme seems reasonable, but it seems feasible, and potentially worthwhile from a throughput perspective, to forward packets as they are processed rather than processing and forwarding in order. This is particularly true as other nodes may have already filled in those packets for all of a node's children, making retransmissions unnecessary.
Some of the basic assumptions are also more or less suspect depending on the setting, in particular the assumption of link independence. Wireless networks and most home and office settings do not satisfy that assumption. This, combined with the mobility points below, orients the system toward supporting servers with significant infrastructure (i.e. multiple independent Internet connections through separate ISPs) in applications with high reliability requirements.
Coordination of the system also seems likely to require substantial background infrastructure & policies. The need to coordinate Application Number identifiers (ANs) across all roots may be difficult in some applications and networks, and increasingly so as there are more of them. Merging streams also seems problematic in practice, although the paper spends some time on it. Assigning usable identifiers and allowing nodes to flexibly work off split or merged streams does not seem to be well supported.
Construction and adaptation of the mesh is a definite weak point, however. Most problematically, the given scheme seems likely to swamp root and other nodes with requests and network traffic, and is not prepared to operate effectively in a changing network. There is no scheme for adapting to changing latencies, and latency is only marginally accounted for to begin with. Changes in connectivity or node availability could also be handled more efficiently, e.g. by attempting to determine the cause of a loop rather than completely disconnecting the rejoining node. In addition, the given algorithm does not seem to guarantee given levels of resilience, as it does not check the resilience of potential parents or the independence of potential links.

A Gossip Protocol to Support Service Discovery with Heterogeneous Ontologies in MANETs. Nedos, Singh, Cunningham, Clarke. Trinity College Dublin Tech Report TCD-CS-2006-34.

  • Concerned more with sharing and aligning ontologies than necessarily with matching for discovery.
  • Ontologies are spread by gossiping---each node opportunistically transmits a random portion of its knowledge, which is then aligned by receiving nodes and eventually forwarded on
  • This is important for MANET settings, where many different but related ontologies may be in use
  • Also addresses the (im)persistence of references and availability of resources---links for retrieval, CPU for alignment
  • Services described as input/output concept sets; some special annotation required, not very clear
  • Not totally clear here how services are actually found; data seems to be in "ontologies" and locally searched
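The gossip mechanism in the second bullet can be sketched as follows (the node model, fanout, and concept names are my own illustration, not the paper's protocol; real alignment is far more than a set merge):

```python
import random

def gossip_round(nodes, fanout=3):
    """One round: each node sends a random sample of the ontology
    concepts it knows to a random peer, which merges them in."""
    for node_id, knowledge in nodes.items():
        if not knowledge:
            continue  # nothing to share yet
        peer = random.choice([n for n in nodes if n != node_id])
        sample = random.sample(sorted(knowledge), min(fanout, len(knowledge)))
        nodes[peer].update(sample)  # receiver "aligns" the new concepts

# Three nodes with disjoint starting ontologies.
nodes = {0: {"Printer", "ColorPrinter"}, 1: {"Scanner"}, 2: set()}
for _ in range(20):
    gossip_round(nodes)
```

After enough rounds every node opportunistically accumulates the whole shared vocabulary, which is the eventual-dissemination property the paper relies on.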

Highly Distributed XQuery with DXQ. Fernandez, Jim, Onose, Morton, Simeon. SIGMOD 2007.

  • Ad hoc programs for distributed data processing are expensive to maintain, hard to deploy, and require hand optimization of the distributed data processing operations
  • Distributed XQuery processing can alleviate much of that
    • Especially as more data is captured, exchanged, and manipulated in XML, this is very natural
    • Enable easier development of reliable, extensible, efficient distributed data-intensive applications
  • DXQ provides for XQuery dissemination, remote invocation of XQuery programs, and optimization and placement of distributed XQueries
    • Examples demonstrated include DNS and Narada

Edutella: A P2P Networking Infrastructure Based on RDF. Nejdl, et al. WWW 2002.

  • Available at http://www2002.org/CDROM/refereed/597/
  • A lack of formal, general metadata specifications and search capabilities is fragmenting P2P infrastructure and use, instead of enabling universal P2P infrastructure to be developed and deployed.
  • Makes heavy use of JXTA, a set of XML based P2P protocols from Sun
  • P2P infrastructure enables organizations to share costs
  • Keyword search in filenames or keys is not sufficient to manage more exacting query needs.
  • Not clear how much better the metadata search is compared to the filename search; it's still proximity matches over natural strings for things like books by title.
  • Supporting mapping between schemas is important for large P2P networks containing subnetworks that may have their own versions or extensions.
  • Several query levels are defined, characterizing what kinds of queries a node may answer.
    • Conjunctive formulas
    • Conjunction and disjunction
    • Conjunction, disjunction, and negation of literals; this is essentially datalog
    • Transitive closure and linear recursive query definitions
    • Arbitrary recursive query definitions
    • Aggregation (COUNT, AVG, MIN, MAX); this level is more a characteristic that may be added to any of the other levels
  • No discussion on query propagation
  • Local query answering provided by rather standard transformation to instance matching in datalog or DBs
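The lowest query level and the instance-matching approach in the last bullet can be roughly illustrated like this (toy RDF-like triple store; all identifiers and the matcher itself are my own illustration, not Edutella's implementation):

```python
def match(patterns, triples, binding=None):
    """Answer a conjunctive query: yield variable bindings under which
    every triple pattern matches some stored triple. Variables are
    strings starting with '?'."""
    binding = binding or {}
    if not patterns:
        yield binding
        return
    head, *rest = patterns
    for triple in triples:
        b = dict(binding)
        ok = True
        for p, v in zip(head, triple):
            if p.startswith("?"):
                if b.setdefault(p, v) != v:
                    ok = False  # variable already bound to something else
                    break
            elif p != v:
                ok = False  # constant mismatch
                break
        if ok:
            yield from match(rest, triples, b)

triples = [
    ("doc1", "dc:creator", "Nejdl"),
    ("doc1", "dc:subject", "P2P"),
    ("doc2", "dc:creator", "Nejdl"),
]
# "Documents by Nejdl about P2P" -- a conjunction of two patterns.
query = [("?d", "dc:creator", "Nejdl"), ("?d", "dc:subject", "P2P")]
results = [b["?d"] for b in match(query, triples)]
```

The higher levels (disjunction, negation, recursion, aggregation) each extend this evaluator, which is why the paper can characterize nodes by which level they answer.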

Toward Distributed Service Discovery in Pervasive Computing Environments. Chakraborty, Joshi, Yesha, Finin. IEEE Transactions on Mobile Computing, February 2006.

  • Two important components: Discovery architecture, service matching mechanism
  • Broadcast wastes processing time and network resources along paths that may not even lead to a service; nodes don't have enough memory to cache all descriptions
  • Metrics: Average response time, average response hops, discovery efficiency, and average network load
  • References to work on cache replacement policies
  • Routes responses back along request forwarding routes, uses AODV when those routes have gone stale
  • Doesn't seem to define a concrete message model
    • Do requests get routed to just one provider? How is that managed?
    • What about explicit multicast support?
  • Service grouping and selective forwarding based on given ontology

A Scalable Distributed Information Management System. Yalagandula, Dahlin. SIGCOMM 2004.

This paper presents a system for disseminating information in large scale networks. It is focused on distributing "network weather" data, but is generic enough to be useful for a wide range of applications. Central to the approach is rendering explicit the routing trees constructed by the routing procedures of distributed hash table algorithms, and using those trees to aggregate, summarize, and disseminate data.
One moderately detailed point not discussed in the paper is how aggregation functions are disseminated. Given the implementation notes included in the paper, it's likely that the aggregation functions are a set of pre-distributed Java functions which the aggrfunc parameter of the install() function indexes in some fashion. Support for run-time description, dissemination, and execution of aggregation functions is likely to have a huge effect on the applications that may be supported by the system. Here, unlike in many systems, different data is treated very differently, in a fashion defined by the aggregation function. It's not geared toward generic data dissemination. Introducing new applications and data into a running system therefore requires dynamic distribution of some, potentially fairly arbitrary, executable function.
More generally, the aggregation functions mean that applications are limited in how they may use data. For example, a system with a type installed that aggregates the average of a particular data type would not be able to easily switch and determine the min, max, and median. A new aggregation function and tuple structure would have to be created, disseminated around the network, and then run long enough to collect useful data. This is not a system for doing relatively free-form network investigation (e.g. to analyze an attack on the fly), or for running a wide range of different applications at different times---it seems geared toward long running, mature services, not user-initiated and interactive applications.
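The limitation above can be made concrete with a small sketch (the tree shape and function signatures are illustrative, not SDIMS's API): the installed aggregation function fixes, at install time, what can later be read out of the tree, so a (count, sum) aggregator supports averages but can never yield a min without installing a new function and re-aggregating.

```python
def aggregate(tree, leaf_values, aggrfunc, root="root"):
    """Fold leaf values up an aggregation tree: each internal node
    combines its children's partial results with aggrfunc."""
    def visit(node):
        children = tree.get(node, [])
        if not children:
            return leaf_values[node]
        return aggrfunc([visit(c) for c in children])
    return visit(root)

tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1"]}
leaves = {"a1": 10, "a2": 30, "b1": 50}

# A (count, sum) aggregator supports averages at any level...
avg_state = aggregate(tree, {k: (1, v) for k, v in leaves.items()},
                      lambda xs: (sum(c for c, _ in xs),
                                  sum(s for _, s in xs)))

# ...but answering min requires a different installed function and a
# fresh pass over the whole tree:
min_state = aggregate(tree, leaves, min)
```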
On a related note, under this system there is no way to collect data about other nodes, except through the aggregates. For example, unless a data type were pre-arranged to support it, there is no way to ask for data from or about a specific node or nodes.
The aggregation scheme also introduces some fragility into the system. A noteworthy feature of Chord, for example, was its ability to function, albeit inefficiently, simply by knowing local nodes in the ID space. The DHT trees (finger tables) are not necessary for correct behavior. Here they are much more necessary; without them no data will get around the network.
The system also has many knobs to support application-directed tuning; proper use of them is probably required for effective use. This is largely a good thing, and a key feature of the system. However, as mentioned briefly in the paper, it would be useful to have a layer abstracting and possibly learning much of this, such that applications are less involved in the specifics of shipping data around. For example, the ability to force a reaggregation after a reconfiguration is useful, but requires that application developers worry about the state of the network/overlay, how data gets passed around, and how to make a decision about the reaggregation. Such features also sound likely to be abused by application developers, similar to the abuse of UDP to avoid (TCP's) congestion control on the Internet.
Finally, the system does not seem to lend itself toward some common applications. In addition to network monitoring, this work also forms the basis for distributed file system and data replication schemes. However, this approach seems to require that many nodes hold much information. In a straight DHT-based file store, no intermediary node between a query and the key-owning node need know anything about that file. In this scheme, it seems that nodes in the tree will know at least about the key's existence, are involved in computing some aggregation function to fetch the relevant data (e.g. concatenating lists of files held by child nodes), or are possibly storing the relevant data, depending on which update()/probe() depth strategy is in use. Further, the proposed schemes for finding the nearest source of the data seem to assume too readily that nodes local in ID space are local on the network, and are likely to generate much network traffic in any case as increasing levels of the tree are probed.
Page last modified on February 25, 2011, at 02:59 PM