Jeff, Kyle, Arsalan, Rodrigo, Om, Phil
R: talked about services that should be in the one-hop link layer ("one-hop layer"). Do we want to specify an alternative to SP, or enhance SP? Arsalan said he would think about these issues; he was involved with SP.
A: looked at what was right & wrong with SP. One purpose was to have different net protocols run on any MAC. 1-hop neighbor table management being done in this layer is good. Upper layer tells lower layer it wants to send a msg; lower layer determines when. Keep policy out of the link layer, push it up to the net layer. Message futures. Problem: link layer does FIFO. Want to do medium reservation with collection; there is nothing available to do this. Currently LL SP is transparent, no headers; do we want to add info to the header?
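The "message futures" idea above can be sketched as a pull-style send interface: the network layer registers an intent to send, and the link layer asks for the actual bytes only when the medium is ready, instead of the upper layer pushing packets into a FIFO. This is a minimal illustrative sketch; the names (`sp_post_future`, `sp_service_one`) and table layout are assumptions, not the real SP API.

```c
#include <stddef.h>

/* Hypothetical sketch of SP-style "message futures". */
#define MAX_FUTURES 8

typedef int (*fill_packet_fn)(void *buf, size_t len);

typedef struct {
    fill_packet_fn fill;   /* callback that produces the packet on demand */
    int pending;           /* 1 while the future is outstanding */
} msg_future_t;

static msg_future_t futures[MAX_FUTURES];

/* Network layer: announce intent to send; no packet is built yet. */
int sp_post_future(fill_packet_fn fill) {
    for (int i = 0; i < MAX_FUTURES; i++) {
        if (!futures[i].pending) {
            futures[i].fill = fill;
            futures[i].pending = 1;
            return i;
        }
    }
    return -1;  /* future table full */
}

/* Link layer: when the medium is free, resolve one future into bytes. */
int sp_service_one(void *buf, size_t len) {
    for (int i = 0; i < MAX_FUTURES; i++) {
        if (futures[i].pending) {
            futures[i].pending = 0;
            return futures[i].fill(buf, len);
        }
    }
    return -1;  /* nothing to send */
}

/* Example producer: does nothing with the buffer, reports a code. */
int demo_fill(void *buf, size_t len) { (void)buf; (void)len; return 42; }
```

Note the FIFO problem Rodrigo raises: `sp_service_one` here still scans slots in fixed order, which is exactly the policy question the group is debating.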
R: So SP has policy of FIFO, right? To avoid that, what will be necessary?
A: Need resource accounting. Problem is that would be putting policy in the link layer.
R: Simply limiting size of futures is arbitrary.
P: Rather than servicing a future till empty, send just a few. SP assumes you're sending packets quickly. Issues with Trickle: when you send a packet is important.
P: SP has timeout?
A: Return on event is next msg. Have latency timeout. Urgent packets get priority during this time.
R: Seems that some needs weren't addressed by SP. The evolution of Straw needs to tell the LL not to send anything for the next x millis; no way to tell SP to do that. Any other primitives from the LL that SP doesn't address? Maybe Kyle, with Fusion?
Om: Is there a congestion bit?
Arsalan: Yes, but its meaning has sort of disappeared. What does congestion mean? What is its definition? Does it mean we tried to send a packet but didn't receive an ACK? It's still there, for legacy purposes, but it's not clear what it means. Some network-layer people were arguing that they didn't know how to use it. How many drops indicate congestion?
Kyle: We need to think about where to put congestion control. I can't think of too much that needs to be at the link layer.
Rodrigo: Fusion uses an ECN.
Kyle: Right, downstream sets the bit. The next question is then how to communicate this information upstream.
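One way to flow a Fusion-style ECN bit back upstream is for a congested downstream node to set a bit in the headers it transmits, and for upstream senders that receive or snoop such a header to back off. The sketch below is illustrative only: the 75% queue threshold, the multiplicative/additive rate adjustment, and all names are assumptions, not Fusion's actual policy.

```c
#include <stdint.h>

#define ECN_BIT 0x01

typedef struct {
    uint8_t flags;   /* bit 0: congestion-experienced */
} hdr_t;

static unsigned send_interval_ms = 100;  /* current inter-packet gap */

/* Downstream: mark outgoing headers when our queue is nearly full. */
void mark_if_congested(hdr_t *h, unsigned queue_len, unsigned queue_cap) {
    if (queue_len * 4 >= queue_cap * 3)   /* >= 75% full (assumed threshold) */
        h->flags |= ECN_BIT;
}

/* Upstream: on seeing a marked header, multiplicatively back off;
 * otherwise additively creep back toward the base rate. */
void on_snooped_header(const hdr_t *h) {
    if (h->flags & ECN_BIT)
        send_interval_ms *= 2;            /* slow down */
    else if (send_interval_ms > 100)
        send_interval_ms -= 10;           /* cautiously recover */
}

unsigned current_interval(void) { return send_interval_ms; }
```

This also makes Arsalan's next point concrete: the `hdr_t` field only exists if SP (or a data-link queue) starts owning header bits, which it currently does not.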
Arsalan: One issue right now is that SP does not communicate any information. It does not put in any header fields. If you have a data-link level queue, should it have its own header? Should it start embedding its own information? Protocols might interact.
Rodrigo: We have input queues and output queues. Right now SP has an output queue. Each protocol has its own input queue. Can we say that this kind of congestion bit is inherently a network-layer congestion bit?
Arsalan: We're still considering that this link estimator will sit at the data-link layer, not at a network component.
Rodrigo: Yes, it's talking about the quality of one-hop links.
Arsalan: Don't we want the LE to adjust the estimate depending on whether the link quality is due to congestion, interference, etc.?
Rodrigo: This is tricky. Two kinds of acks. Acks at the link layer: I have successfully decoded this packet and can read it. Acks at the network layer: I will enqueue your packet and I will process it, or I will not take it.
Kyle: We've kind of been talking about hop-by-hop flow control. But that's only half of the story. The other part is rate-limiting sources. That will require header bits.
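For the "rate-limiting sources" half Kyle mentions, a token bucket is one standard mechanism: a source may send only while it holds tokens, which refill at a fixed rate. This is a generic sketch under that assumption, not a mechanism taken from SP, Fusion, or any protocol discussed here.

```c
#include <stdint.h>

/* Token-bucket source rate limiter (generic sketch). */
typedef struct {
    uint32_t tokens;   /* packets we may still send */
    uint32_t cap;      /* bucket capacity (burst size) */
    uint32_t rate;     /* tokens added per timer tick */
} bucket_t;

void bucket_init(bucket_t *b, uint32_t cap, uint32_t rate) {
    b->tokens = cap;
    b->cap = cap;
    b->rate = rate;
}

/* Called on each timer tick: refill, clamped to capacity. */
void bucket_tick(bucket_t *b) {
    b->tokens += b->rate;
    if (b->tokens > b->cap) b->tokens = b->cap;
}

/* Returns 1 if the source may send now, 0 if it must hold off. */
int bucket_try_send(bucket_t *b) {
    if (b->tokens == 0) return 0;
    b->tokens--;
    return 1;
}
```

The open question from the discussion is where the refill rate comes from: fixed policy, or driven downward by congestion bits flowing back from downstream.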
Rodrigo: Flush, for example, uses network-level snooping to communicate this information. But there's definitely the need to flow the information backwards.
Kyle: Many ways to do it, depending on the link layer. If you don't have snooping, you can't use it. If you don't have synchronous acks, etc.
Phil: GE does something like this, by using 15.4 but not following it. They have their own ack packets.
Rodrigo: Could the network layer put a bit in the packet, saying whether it can handle the packet?
Phil: In theory, sure. You might have to break from 15.4 and not be compliant, in that you'll have an ack payload and probably different ack timing.
R: Issues that need to be discussed; what's the best way to have a productive discussion. Each issue generates host of emails and discussions. We have a wiki. TEP?
A: I can put together a draft TEP.
P: PANv6 wrote an RFC describing the problems they are addressing. Reasonably informative, but preliminary.
O: Arsalan mentioned keeping policies out of the lower layers.
P: Berkeley sensornet arch talked about fair queuing at the datalink layer. Have student starting to look at it; if it's feasible.
R: related to idea that each protocol has accounting mechanism? How far does tinyos-2 AM virtualization go?
P: solely in terms of packets. The data-link layer has a queue; each sender has a single entry in it. Once you send a message, it checks whether any other sender has a message pending before sending yours again.
R: conflicting with msg futures.
P: original idea is that this sits on top of message futures. Protocols that just send a pkt use it; high-perf protocols use the lower-layer interface. Can bypass. An entire queue of packets is a single sendpool entry.
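The per-sender fair queue Phil describes can be sketched as one slot per sender, serviced round-robin, so a chatty protocol cannot starve the others: a sender whose slot is occupied must wait until the link layer has drained it. All names and the fixed slot count below are illustrative assumptions, not the Berkeley design.

```c
/* Per-sender fair queue: one pending message per sender, round-robin. */
#define NUM_SENDERS 4

static int slot_full[NUM_SENDERS];  /* 1 if this sender has a msg queued */
static int slot_msg[NUM_SENDERS];
static int next = 0;                /* round-robin cursor */

/* A sender may enqueue only if its own slot is free. */
int fq_enqueue(int sender, int msg) {
    if (sender < 0 || sender >= NUM_SENDERS || slot_full[sender])
        return -1;                  /* slot busy: sender must wait */
    slot_msg[sender] = msg;
    slot_full[sender] = 1;
    return 0;
}

/* Link layer: service the next occupied slot after the last one served.
 * Returns the sender id served, or -1 if all slots are empty. */
int fq_dequeue(int *msg) {
    for (int i = 0; i < NUM_SENDERS; i++) {
        int s = (next + i) % NUM_SENDERS;
        if (slot_full[s]) {
            *msg = slot_msg[s];
            slot_full[s] = 0;
            next = (s + 1) % NUM_SENDERS;
            return s;
        }
    }
    return -1;
}
```

The bypass Phil mentions would correspond to a high-performance protocol claiming one slot for an entire queue of its own packets rather than a single message.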
K: what's the extension to fair queueing?
P: still talking through it; core idea is like in Flush or Straw: when you send a pkt, you can specify a quiet time. During that time the only node that can send a packet is the recipient. A time unit for occupying the channel.
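The quiet-time idea can be sketched as a soft channel reservation: a packet carries a quiet interval, and any node that overhears it defers until the interval expires, except the named recipient. Timestamps are abstract ticks and all names are assumptions for illustration.

```c
#include <stdint.h>

static uint32_t quiet_until = 0;   /* channel reserved through this tick */
static int      quiet_owner = -1;  /* node id allowed to send meanwhile */

/* Called by any node that overhears a packet carrying a quiet interval. */
void on_overheard(uint32_t now, uint32_t quiet_ticks, int recipient) {
    if (now + quiet_ticks > quiet_until) {
        quiet_until = now + quiet_ticks;
        quiet_owner = recipient;
    }
}

/* May node `me` transmit at time `now`? */
int may_send(int me, uint32_t now) {
    if (now >= quiet_until) return 1;   /* reservation expired */
    return me == quiet_owner;           /* only the recipient may send */
}
```

This is also the primitive Rodrigo asked SP for earlier ("don't send anything for the next x millis"), generalized so the reservation names who may use the channel instead.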
R: how much can the link layer know about congestion by itself? how much can it control it?
O: if link is 100%, then suddenly 0%, can guess that it's congested.
P: wireless congestion, network congestion. basically we're trying to figure out division of responsibility between datalink and network layers. Difficult to have queue congestion without medium congestion. No route...
O: logging application trying to send too many packets?
P: if medium quiet, then unlikely you'll have queue overflows. Trying to tease apart multihop & singlehop layer. What are responsibilities of both?
K: also what do multihop & singlehop layer tell each other.
P: Jeff, you guys have had experience; thoughts? issues?
J: haven't been involved so much; can talk to Matt (been using my protocol).
R: which protocol?
J: Flows protocol.
P: TR or paper?
P: would be good to understand; just look at code, I guess
J: one reason to use Flows is that there seems to be a problem with the MAC layer doing backoff. Broadcasts vs. unicasts: if you have two motes broadcasting packets, the MAC layer doesn't back off. Problem with the current tinyos-1.x codebase.
P: When implementing the 2.x stack, the packet send rate for micaz is 600 pps, far beyond what backoff would allow. Wouldn't be surprised if that problem continued today. The new 2.x stack John wrote doesn't have those issues.
J: Personally, interested in programming interface issues (buffer management). Avoided forcing programmer to do buffer mgmt.
R: Can discuss what interface above network layer looks like. Berkeley has a framework for buffer mgmt.