Net2 meeting notes for 07/20/2007
* White bit
* Dissemination
* Mike, Razvan, Rodrigo, Om, Phil
* White bit experiments (Om)
* Send votes for Matus' application
Om - White bit: I got a really surprising result. LQI's delivery ratio was much lower than CTP's. In the past, LQI always had a higher delivery ratio than CTP (by a small margin). In LQI's log, there were many WAIT and FAIL events. I will study the log in more detail to understand why LQI's performance is so poor.
Phil - Let's post the log on the wiki.
Om - Will do.
Om - Kaisen said he will not be able to call in but plans to contribute some code by the end of July and do a few iterations of implementation and testing by September.
Phil - Kaisen has algorithms that have gone through multiple iterations of design, implementation, and evaluation.
Phil - There are two approaches taken by dissemination systems: 1. maintain a separate trickle for each item; 2. maintain one trickle and send only a subset of the metadata. The goal is a single trickle, independent of the number of items, with O(1) communication and O(1) latency. When the data is stable, send a hash of the data and detect changes in the hash. The receiver then requests the hash of the first half of the data, then of the second half, etc. (a binary search), and once the changed metadata is identified, the nodes exchange that metadata. Packet loss makes efficiency challenging - is it more efficient to do a directed search or a random search?
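The binary search Phil describes might be sketched roughly as follows (a Python sketch with hypothetical names; the real TinyOS dissemination code is nesC, and the per-node stores, key names, and `find_changed` helper here are illustrative assumptions). Each comparison of half-range hashes stands in for one request/response exchange, so a few changed keys are located in a number of exchanges logarithmic in the total key count, rather than shipping all metadata.

```python
import hashlib

def digest(store, keys):
    # Hash the (key, version) metadata for a slice of keys.
    # Stands in for the summary hash a node would advertise.
    h = hashlib.sha256()
    for k in keys:
        h.update(f"{k}={store[k]};".encode())
    return h.digest()

def find_changed(local, remote, keys=None):
    # Directed search: compare the hash of a key range, and recurse
    # only into halves whose hashes differ. Each recursive call
    # models one request/response exchange between two nodes.
    if keys is None:
        keys = sorted(local)
    if digest(local, keys) == digest(remote, keys):
        return []                      # range identical, stop here
    if len(keys) == 1:
        return keys                    # isolated one changed key
    mid = len(keys) // 2
    return (find_changed(local, remote, keys[:mid]) +
            find_changed(local, remote, keys[mid:]))

local = {f"k{i}": 1 for i in range(8)}     # key -> version number
remote = dict(local, k3=2, k6=2)           # two items updated elsewhere
print(find_changed(local, remote))         # -> ['k3', 'k6']
```

Under packet loss, a lost half-range hash stalls one branch of this search, which is the directed-vs-random trade-off raised above.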
Rodrigo - When should algorithms like this kick in?
Phil - As soon as you cannot fit all the metadata in one packet.
Razvan - Is it useful when a lot of keys are changing or few keys are changing?
Phil - In both cases. The hard case is a node that becomes disconnected: it has 100s of keys, and we need to efficiently search which ones are new and which are old. If all the neighboring nodes start searching the metadata on the new node with unicast traffic, the network will collapse.
Rodrigo - Any example use?
Phil - Drip. Disseminate 20 different parameters for system configuration.
Phil - When there is a larger amount of memory, applications might want to disseminate much more information than they do today, in which case we need a way to efficiently disseminate a large number of keys.