Net2WG/Notes/20071005 Meeting Notes
* 4B status
* Link-layer SP
* Om, USC
* Arsalan, Berkeley
* Phil, Stanford
* Razvan, JHU
* Mike, JHU
Om: I will get testbed time tonight. I will basically draw Figure 6 for the paper, looking at the impact of those 4 bits and the unidirectional estimate on collection performance. I will also look at the default white bit.
Phil: Kannan has been running some experiments for LTI. He ran CTP and CTP-4Bit and also saw significant improvement: lower cost and higher delivery ratio. It is still running on Mirage now.
Om: I gave a presentation about CTP in class and received two comments. First, the error is still confusing. Second, understanding the layers is more difficult now because the layers are intertwined.
Om: We decided to look at when and how SP is helpful. I looked at CTP, and my observation is that most link-estimation work will no longer be necessary: SP wants to do the neighbor-table management. I looked at the Boomerang API, and I think most link-estimation work goes away, except the data-estimation part.
Arsalan: My question is what happens to the concept of different protocols having different interpretations of how a link estimate is evaluated. Last week, Phil distinguished link estimation from neighbors. Is this the differentiation that we are making now?
Phil: SP takes responsibility for link estimation, so it sends periodic beacons. The link-layer abstraction is not just a raw link layer; it can also observe whether a packet is acked. Similarly, there is no reason why Active Message cannot update the link-estimation table for every packet sent. I think the CC2420 implementation that is B-MAC based absolutely sends beacons.
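A minimal sketch of what per-packet ack feedback could look like, assuming an EWMA-style estimator. The function name, the 0-255 quality scale, and the 9/10 weight are all illustrative assumptions, not taken from SP or Active Message:

```c
#include <stdbool.h>
#include <stdint.h>

#define ALPHA_NUM 9   /* EWMA weight: new = 9/10 old + 1/10 sample (assumed) */
#define ALPHA_DEN 10

/* Fold one ack/no-ack observation into a link-quality estimate.
 * quality is success probability scaled to 0..255. */
uint8_t update_link_quality(uint8_t quality, bool acked)
{
    uint16_t sample = acked ? 255u : 0u;
    return (uint8_t)((quality * ALPHA_NUM + sample) / ALPHA_DEN);
}
```

The point is only that the data-path observation (did the ack come back?) can feed the same table the beacons feed, with no extra traffic.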
Om: I don't think it makes link estimation easier; it is just that link estimation will be implemented completely inside SP.
Phil: The part that is not is neighbor-table entry management. In the case of 4-bit, those are the pin bit and the compare bit. One thing that comes out of Andreas' comment is that people have been using CC2420 so much that it is hard to get a sense of how other link layers behave. Arsalan, is it still CC2420 land at Berkeley?
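The pin and compare bits can be sketched as flags on a neighbor-table entry; every name and field here is illustrative, not from the actual 4-bit implementation:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical neighbor-table entry (field names are assumptions). */
typedef struct {
    uint16_t addr;    /* link-layer address of the neighbor            */
    uint8_t  etx;     /* current link-quality estimate                 */
    bool     pinned;  /* pin bit: the routing layer asks the estimator */
                      /* never to evict this entry                     */
    bool     compare; /* compare bit: the network layer judged a       */
                      /* candidate better than some table entry        */
} neighbor_entry_t;

/* Sketch of the eviction rule: a pinned entry is never replaced; an
 * unpinned entry may be replaced by a candidate whose compare bit is set. */
bool may_evict(const neighbor_entry_t *victim, bool candidate_compare)
{
    return !victim->pinned && candidate_compare;
}
```

This captures the sense in which the two bits are table-management signals crossing the layer boundary, rather than link estimates themselves.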
Arsalan: Definitely. The new testbed is going to be partially MicaZ, partially TelosB, plus the Epic motes.
I was looking at DSN and its requirements for a link-layer abstraction. DSN is the idea of being able to declaratively specify a sensor network. The questions are what the core can specify and what applications can be built on top of it. The actual requirements are fairly minimal: I need a neighbor table maintained beneath me, and I would like a message pool with optimized sending. It is rare for me to make a request in the sense that this packet is urgent and this one is not. Even then, it doesn't need anything as fine-grained as the urgent bit, just a way to specify the time a packet should be sent out. A link estimator would be useful; if you can make an argument that a link estimator should be pushed down, then DSN will certainly buy into that. Now, if you want an application that implements rate control, then you need a mechanism underneath that can handle it.
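The minimal requirements listed above (a maintained neighbor table, a message pool, send-by-time) could be sketched as a small C interface; all names, sizes, and the stub implementations are assumptions for illustration, not DSN's actual API:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef struct { uint16_t addr; uint8_t quality; } neighbor_t;

#define TABLE_SIZE 8
#define POOL_SIZE  4
#define MSG_BYTES  32

static neighbor_t table[TABLE_SIZE];
static size_t     table_len;
static uint8_t    pool[POOL_SIZE][MSG_BYTES];
static uint8_t    pool_used[POOL_SIZE];

/* Neighbor table maintained below the application: just read it out. */
size_t neighbors_get(neighbor_t *out, size_t max)
{
    size_t n = table_len < max ? table_len : max;
    memcpy(out, table, n * sizeof(neighbor_t));
    return n;
}

/* Message pool with a fixed number of buffers. */
void *msg_pool_alloc(void)
{
    for (size_t i = 0; i < POOL_SIZE; i++)
        if (!pool_used[i]) { pool_used[i] = 1; return pool[i]; }
    return NULL; /* pool exhausted */
}

void msg_pool_free(void *msg)
{
    for (size_t i = 0; i < POOL_SIZE; i++)
        if (pool[i] == msg) pool_used[i] = 0;
}

/* Send by deadline: the caller specifies when the packet should go
 * out, rather than marking it urgent or not. Stubbed here. */
int send_by(void *msg, uint16_t dest, uint32_t deadline_ms)
{
    (void)msg; (void)dest; (void)deadline_ms;
    return 0; /* would hand the packet to the link layer */
}
```

The deadline parameter is the key design point: it replaces a binary urgent bit with the time the packet should leave, which is all DSN is said to need.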
Phil: SP takes an interesting position: it is going to run the link as efficiently as possible. However, the issue is that the most efficient operation can lead to starvation. You could say, "I have 10 million packets to send to this other node," and SP would give you the opportunity to spool packets forever. There is no mechanism for fairness.
Arsalan: This goes back to the discussion last week; this is the purpose of a policy manager. I think a fairer argument is that an application has to be built to know how to handle failures.
Phil: I am not against the idea of a policy manager. I am against the idea of messing with the policy manager a lot. For example, Linux has a scheduler. You can change the scheduler if you want; however, for 99% of applications, you don't. I don't like the idea that specifying a policy manager is part of building an application.
Arsalan: I agree that the policy manager is not something you want to twiddle with often. But what we disagreed about last week was whether, in a particular deployment, the policy manager could be tweaked to give a protocol more resources because it is more important.
Phil: There is an implicit statement that as soon as you make the policy manager tweakable, there is no way for people to write a network protocol assuming any particular policy manager. You would like something you can tweak, but once you tweak it, all bets are off.
Om: But you can say the same thing about CTP. What are you supposed to assume? Should you assume a particular policy is implemented?
Arsalan: So if a policy manager decides that a protocol is more important and needs more bandwidth, what does that do to the other protocols? And is that fine?
Phil: This is a fairness-versus-optimization argument. People are willing to give up the optimal case to avoid the disabled case. I buy that the policy manager is a good software-engineering approach, but we should build it, see how it works, and evaluate its benefits.
Phil: There is an observation about packet reception rate related to CTP. The distribution of packet reception rates that you see depends on how long you look. It turns out that how quickly you send, let's say, 100 packets changes the distribution. Specifically, the slower you send the packets, the more intermediate links you will get, where intermediate means between 10% and 90%. So packet losses and successes are correlated in time. Therefore, if a packet is not transmitted successfully, rather than retransmitting immediately, you wait 500 ms. The results show that if you do this, the cost of CTP drops by about 10%. We also observed a similar thing on different link layers, so it is not just 15.4.
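The delayed-retransmit policy described above can be sketched as follows; the 500 ms back-off comes from the discussion, while the retry cap and function name are assumptions:

```c
#include <stdbool.h>
#include <stdint.h>

#define RETX_DELAY_MS 500u  /* back-off from the discussion */
#define MAX_RETRIES   5u    /* retry cap (assumed)          */

/* Given whether the last transmission was acked and how many retries
 * have happened, return how long to wait (ms) before the next attempt,
 * or -1 to give up. Because losses are correlated in time, a failed
 * attempt waits out the bad period instead of retrying immediately. */
int32_t next_action_delay(bool acked, uint8_t retries)
{
    if (acked)
        return 0;                  /* success: send the next packet now */
    if (retries >= MAX_RETRIES)
        return -1;                 /* give up, drop the packet          */
    return (int32_t)RETX_DELAY_MS; /* wait out the correlated loss      */
}
```

In CTP terms this is exactly the sendDone-offset change Om mentions: success keeps the pipeline moving, failure stretches the inter-attempt gap.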
Om: So, in the context of CTP, all you need is to make the sendDone offset larger.