| author | mo khan <mo@mokhan.ca> | 2025-09-30 09:16:12 -0600 |
|---|---|---|
| committer | mo khan <mo@mokhan.ca> | 2025-09-30 09:16:12 -0600 |
| commit | 5630d2b8c00b049686e8b67d9361858178f0a07e | |
| tree | 9a12b66acb6d79e3803d425d1f49b3d789ffab38 /assignments/2 | |
| parent | 89f7e9811397b981df2490b8c4c779086119ea42 | |
formal answers
Diffstat (limited to 'assignments/2')
| -rw-r--r-- | assignments/2/README.md | 86 |
1 file changed, 85 insertions, 1 deletion
diff --git a/assignments/2/README.md b/assignments/2/README.md
index 09af7ca..c72f500 100644
--- a/assignments/2/README.md
+++ b/assignments/2/README.md
@@ -18,6 +18,15 @@
 unreliable best‑effort service. Study related sections of the textbook
 and articles from other sources. In your own words, explain how TCP
 provides a reliable data transfer service.

+Answer:
+- Error detection: a 16-bit ones' complement checksum over header and payload detects bit errors.
+- Sequencing and in-order delivery: byte sequence numbers and reassembly buffers deliver a contiguous, ordered byte stream.
+- Positive acknowledgments: cumulative ACKs (and optional SACK) tell the sender which data arrived and where the gaps are.
+- Retransmissions on loss: timeout-based retransmission using RTT/RTO estimation; fast retransmit after 3 duplicate ACKs repairs loss quickly.
+- Sliding-window pipelining: allows multiple unacknowledged segments in flight for efficiency.
+- Flow control: the receiver-advertised window (rwnd) prevents buffer overrun.
+- Connection management: the three-way handshake synchronizes initial sequence numbers and options; a graceful close ensures all data is delivered.
+
 ## 1.2 Go-Back-N Protocol (5%)

 > (5%) While the RDT protocols are essentially stop‑and‑wait
@@ -25,6 +34,12 @@
 protocols, the GBN protocol allows the sender to send multiple packets
 without waiting for acknowledgement from the receiving parties. How
 does GBN achieve that?

+Answer:
+- Pipelining with a sending window: up to N unacknowledged packets may be outstanding (window size N).
+- Cumulative ACKs: the receiver ACKs the highest in-order packet; out-of-order packets are discarded (in basic GBN).
+- One timer for the oldest unacked packet: on timeout, the sender retransmits that packet and all later packets in the window (hence "go back N").
+- As ACKs arrive, the window slides forward and new packets are sent immediately.
+
 ## 1.3 IPv6 Transition (5%)

 > (5%) Invention and adoption of IPv6 is a big advance in computer
@@ -32,12 +47,29 @@
 networking. What problems was IPv6 intended to solve? With the large
 number of networking devices and applications using IPv4 still in use,
 how is the transition from IPv4 to IPv6 being resolved?

+Answer:
+- Problems IPv6 addresses:
+  - Address exhaustion: 128-bit addressing provides vastly more addresses than IPv4.
+  - Simpler, extensible header (with extension headers) and efficient forwarding.
+  - Improved autoconfiguration (SLAAC), multicast, and neighbor discovery; easier renumbering.
+  - Reduced reliance on NAT; IPsec support designed in.
+- Transition mechanisms (since IPv4 persists):
+  - Dual-stack hosts/routers choose IPv4 or IPv6 per destination.
+  - Tunneling (e.g., 6rd, GRE) to carry IPv6 over IPv4 (or vice versa).
+  - Translation (NAT64/DNS64, 464XLAT) to let IPv6-only clients reach IPv4 services.
+  - Incremental, piecemeal deployment driven by ISPs, enterprises, content providers, and CDNs.
+
 ## 1.4 SNMP Protocol (5%)

 > (5%) SNMP is a protocol for network management. It has seven message
 types. What are the purposes of the SNMP GetRequest and SetRequest
 messages? Why were UDP datagrams chosen to transport SNMP messages?

+Answer:
+- GetRequest: the manager reads current values of MIB objects (OIDs) from an agent; the agent replies with a GetResponse.
+- SetRequest: the manager writes values to writable MIB objects to configure the device.
+- UDP was chosen because it is simple and lightweight (no connection state), matches SNMP's request/response pattern with application-layer timeouts and retries, avoids TCP setup and head-of-line blocking for sporadic polls, and supports asynchronous notifications (Traps/Informs) easily.
+
 ## 1.5 SDN-Enabled Devices (5%)

 > (5%) In today’s market and its applications, there are many
@@ -46,12 +78,26 @@
 an SDN-enabled networking device usually has?

 Reference: Kurose & Ross, 8th ed., Ch. 5 (SDN control plane, generalized forwarding).

+Answer:
+- Open southbound programming interfaces: OpenFlow, P4Runtime, or NETCONF/YANG.
+- Flexible match-action pipeline (TCAM/ACL capacity), possibly with P4-programmable parsing and tables.
+- High-throughput, low-latency forwarding with QoS (queues, shaping, scheduling) and traffic engineering (ECMP, segment routing).
+- Rich telemetry: streaming telemetry (gNMI), sFlow/NetFlow/IPFIX, accurate counters and timestamps; in-band network telemetry support.
+- Secure, robust control-plane channel: TLS, RBAC, high availability, fast failover.
+- Virtualization and overlay support: VRFs, VXLAN/EVPN, NAT/ACL features for multi-tenancy.
+- APIs/SDKs and upgradable software for automation and programmability.
+
 ## 1.6 BGP Loop Detection (5%)

 > (5%) BGP is a routing protocol used for routing among ISPs. One
 problem that BGP faces is detecting loops in paths. What are the loops?
 Why should loops be avoided? How does BGP detect the loops in paths?

+Answer:
+- Loops: a path that revisits the same autonomous system (AS) more than once (e.g., AS1->AS2->AS3->AS1), creating a forwarding loop.
+- Why avoid them: loops waste bandwidth, increase latency, may blackhole traffic, and can destabilize routing.
+- Detection: BGP is a path-vector protocol; each route carries an AS_PATH. If a router sees its own AS in a received AS_PATH, it rejects the route. Within an AS, iBGP prevents loops by not re-advertising iBGP-learned routes to other iBGP peers, and route reflectors add the ORIGINATOR_ID and CLUSTER_LIST attributes to detect loops.
+
 # Part 2: Long Answer Questions (70%)

 ## 2.1 1's Complement Checksum (10%)
@@ -68,6 +114,12 @@ errors. Suppose you have the following 8‑bit bytes: 11011001, 01010010,
 > d) With this checksum scheme, is it possible that any 1‑bit error
 will go undetected? How about a 2‑bit error? Explain your answer.

+Answer:
+- a) Sum the 8-bit words using ones' complement addition (end-around carry). In decimal: 217 + 82 + 202 + 164 + 89 = 754. Reducing modulo 255 (wrapping the carries back in): 754 - 2*255 = 244 (binary 11110100). The checksum is the ones' complement (bitwise inverse) of 244, which is 00001011 (decimal 11). Check: 244 + 11 = 255 (all 1s).
+- b) Using the ones' complement of the sum makes the overall ones' complement sum of all words (data + checksum) equal to all 1s, which enables simple verification, incremental update, and better detection of many common error patterns than straight modulo-2^n addition.
+- c) The receiver computes the ones' complement sum over the received data and checksum. If the result is 11111111 (all 1s), it accepts the data; otherwise, it declares a checksum error.
+- d) Any single-bit error changes the sum and is always detected. Some two-bit errors (e.g., complementary flips in the same bit position of two different words) can cancel and go undetected, so not all two-bit (or multi-bit) errors are detected.
+
 ## 2.2 Dijkstra's Shortest Path Algorithm (20%)

 > (20%) The following table is used to compute the shortest path from A
@@ -78,6 +130,17 @@ which is better known as Dijkstra’s shortest path algorithm.
 > b) Consider the network shown in the following diagram.
 With the indicated link costs, use Dijkstra’s shortest path algorithm
 to compute the shortest path from x to all other network nodes. Show
 how the algorithm works by computing a table like the one above.

+Answer:
+- a) Each row (Step k) shows the state after the k-th iteration of Dijkstra's algorithm. N' is the set of nodes whose shortest-path distance from the source is known (settled). For each other node n, D(n) is the current best-known distance from the source to n, and P(n) is the predecessor on that tentative path. At each iteration, the node outside N' with minimum D(n) is added to N', and the distances to its neighbors are relaxed.
+- b) From source x (using the link costs in the figure), the algorithm proceeds as follows:
+  - Step 0: N' = {x}; D(u)=1 via x; D(y)=1 via x; D(w)=3 via x; D(v)=inf; D(z)=inf
+  - Step 1: add y; N' = {x,y}; relax y->w and y->z: D(w)=2 via y; D(z)=3 via y; D(u)=1 via x; D(v)=inf
+  - Step 2: add u; N' = {x,y,u}; relax u->v (cost 2): D(v)=3 via u; others unchanged
+  - Step 3: add w; N' = {x,y,u,w}; no better paths found
+  - Step 4: add v; N' = {x,y,u,w,v}; no better paths found
+  - Step 5: add z; done
+  Shortest-path costs from x: to u = 1 (x->u); to y = 1 (x->y); to w = 2 (x->y->w); to v = 3 (x->u->v); to z = 3 (x->y->z).
+
 ## 2.3 CIDR Routing (20%)

 > (20%) A router running classless interdomain routing (CIDR) has the
 following entries in its routing table:
@@ -93,6 +156,18 @@ which is better known as Dijkstra’s shortest path algorithm.
 > c) 192.53.40.6

 > d) 192.53.56.7

+Answer:
+- CIDR routers use longest-prefix match: among all entries that match the destination, choose the one with the longest mask; if none match, use the default route.
+- Entry coverage:
+  - 135.46.56.0/22 covers 135.46.56.0 to 135.46.59.255 -> Interface 0
+  - 135.46.60.0/22 covers 135.46.60.0 to 135.46.63.255 -> Interface 1
+  - 192.53.40.0/23 covers 192.53.40.0 to 192.53.41.255 -> Router 2
+  - Default -> Router 3
+- Per address:
+  - a) 135.46.61.10 -> matches 135.46.60.0/22 -> forward to Interface 1.
+  - b) 135.46.53.16 -> no /22 match -> forward via the default route -> Router 3.
+  - c) 192.53.40.6 -> matches 192.53.40.0/23 -> forward to Router 2.
+  - d) 192.53.56.7 -> no /23 match -> forward via the default route -> Router 3.

 ## 2.4 TCP Congestion Control (20%)

@@ -105,7 +180,7 @@
 that each TCP segment size is 1,500 bytes; the two-way propagation
 delay of this connection is 15 msec; and this TCP connection is
 always in the congestion avoidance phase (ignore slow start).

-Given: Single TCP flow over a 1 Gbps link, no buffering at the link; MSS=1500 B (12,000 bits); two-way propagation delay (RTT) ≈ 15 ms; always in congestion avoidance.
+Given: Single TCP flow over a 1 Gbps link, no buffering at the link; MSS=1500 B (12,000 bits); two-way propagation delay (RTT) = 15 ms; always in congestion avoidance.

 > a) What is the maximum window size (in segments) that this TCP
 connection can achieve?
@@ -115,6 +190,15 @@ Given: Single TCP flow over a 1 Gbps link, no buffering at the link; MSS=1500 B
 > d) Assume we want the 1 Gbps link to buffer a finite number of
 segments and always keep the link busy sending data. How would you
 choose a buffer size? Justify your answer.

+Answer:
+- a) The maximum window without queuing equals the bandwidth-delay product (BDP) in segments.
+  - BDP(bits) = 1e9 bps * 0.015 s = 15,000,000 bits.
+  - MSS(bits) = 1,500 B * 8 = 12,000 bits.
+  - Wmax = 15,000,000 / 12,000 = 1,250 segments.
+- b) In AIMD congestion avoidance (Reno-like), cwnd follows a sawtooth between Wmax/2 and Wmax. Average window Wavg = 0.75 * Wmax = 937.5 segments. Average throughput = Wavg * MSS / RTT = 937.5 * 12,000 / 0.015 = 750,000,000 bps (0.75 Gbps).
+- c) After a loss, cwnd halves to Wmax/2 = 625 segments and then grows by about 1 segment per RTT. The time to return to Wmax is 625 RTTs = 625 * 0.015 s = 9.375 s.
+- d) To keep the link busy right after a multiplicative decrease, the queue must make up the gap between the reduced cwnd and the BDP. Provision about BDP/2 of buffering, i.e. ~625 segments: then cwnd (625) + queue (625) = BDP, so the link stays fully utilized while cwnd ramps back up (with some extra headroom and AQM/ECN recommended in practice).
+
 # References

 - Kurose, J. F., & Ross, K. W. Computer Networking: A Top‑Down Approach (8th ed.).
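As a rough illustration of the Go-Back-N sender behavior described in the answer to 1.2, here is a minimal Python sketch of the sender-side bookkeeping: a window of at most N outstanding packets, cumulative ACKs, and go-back-N retransmission on timeout. The class name and its transmit stub are hypothetical; real timers and an unreliable channel are omitted.

```python
# Sketch of the Go-Back-N sender bookkeeping described in answer 1.2.
# Timers and the channel are stubbed out; this is not a working transport.
class GBNSender:
    def __init__(self, window_size: int):
        self.N = window_size
        self.base = 0          # oldest unacknowledged sequence number
        self.next_seq = 0      # next sequence number to use
        self.buffer = {}       # seq -> payload kept for possible retransmission

    def send(self, payload) -> bool:
        if self.next_seq >= self.base + self.N:
            return False       # window full; caller must wait for ACKs
        self.buffer[self.next_seq] = payload
        self._transmit(self.next_seq)
        self.next_seq += 1
        return True

    def on_ack(self, ack: int):
        # Cumulative ACK: everything up to and including `ack` is delivered.
        for seq in range(self.base, ack + 1):
            self.buffer.pop(seq, None)
        self.base = max(self.base, ack + 1)   # slide the window forward

    def on_timeout(self):
        # Go back N: retransmit every packet that is still unacknowledged.
        for seq in range(self.base, self.next_seq):
            self._transmit(seq)

    def _transmit(self, seq):
        print(f"sending packet {seq}: {self.buffer[seq]!r}")
```

A basic GBN receiver would simply ACK the highest in-order sequence number it has received and discard anything out of order, which is why a single timer on the oldest unacked packet suffices.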
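The AS_PATH check described in the answer to 1.6 amounts to a one-line membership test. A minimal sketch follows, with a hypothetical private ASN and sample paths.

```python
# Path-vector loop check from answer 1.6: a router rejects any route whose
# AS_PATH already contains its own AS number.  MY_ASN and the example paths
# below are hypothetical.
MY_ASN = 64512

def accept_route(as_path: list[int], my_asn: int = MY_ASN) -> bool:
    """eBGP loop prevention: drop the route if our ASN already appears in AS_PATH."""
    return my_asn not in as_path

if __name__ == "__main__":
    print(accept_route([64500, 64501, 64502]))   # True: no loop
    print(accept_route([64500, 64512, 64501]))   # False: our ASN is already in the path
```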
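The arithmetic in 2.1 (a) and (c) can be checked directly. Below is a small sketch of ones' complement addition with end-around carry; note that only the first two byte values appear in the quoted question text, so the remaining three (202, 164, 89) are taken from the answer above.

```python
# Ones' complement checksum over 8-bit words, as worked in answer 2.1(a).
WORDS = [0b11011001, 0b01010010, 0b11001010, 0b10100100, 0b01011001]  # 217, 82, 202, 164, 89

def ones_complement_sum(words, bits=8):
    """Add words with end-around carry (ones' complement addition)."""
    mask = (1 << bits) - 1
    total = 0
    for w in words:
        total += w
        total = (total & mask) + (total >> bits)  # fold the carry back in
    return total

def checksum(words, bits=8):
    """Checksum = bitwise inverse of the ones' complement sum."""
    return (~ones_complement_sum(words, bits)) & ((1 << bits) - 1)

if __name__ == "__main__":
    csum = checksum(WORDS)
    print(f"ones' complement sum = {ones_complement_sum(WORDS)}")  # 244
    print(f"checksum             = {csum} ({csum:08b})")           # 11 (00001011)
    # Receiver check from 2.1(c): data + checksum must sum to all 1s (255 for 8-bit words).
    assert ones_complement_sum(WORDS + [csum]) == 0xFF
```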
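The table in 2.2 (b) can be reproduced mechanically. The sketch below runs Dijkstra's algorithm from x over a hypothetical edge list chosen to be consistent with the distances reported in the answer; the assignment's actual figure is not reproduced in this diff, so treat the costs as illustrative only.

```python
import heapq

# Hypothetical link costs consistent with the distances in answer 2.2(b).
GRAPH = {
    "x": {"u": 1, "y": 1, "w": 3},
    "u": {"x": 1, "v": 2},
    "y": {"x": 1, "w": 1, "z": 2},
    "w": {"x": 3, "y": 1},
    "v": {"u": 2},
    "z": {"y": 2},
}

def dijkstra(graph, source):
    """Return (distance, predecessor) tables, printing N' after each iteration."""
    dist = {n: float("inf") for n in graph}
    prev = {n: None for n in graph}
    dist[source] = 0
    settled = set()
    heap = [(0, source)]
    while heap:
        d, n = heapq.heappop(heap)
        if n in settled:
            continue
        settled.add(n)
        print(f"N' = {sorted(settled)}  D = {dist}")
        for neighbor, cost in graph[n].items():
            if d + cost < dist[neighbor]:          # relax the edge n -> neighbor
                dist[neighbor] = d + cost
                prev[neighbor] = n
                heapq.heappush(heap, (dist[neighbor], neighbor))
    return dist, prev

if __name__ == "__main__":
    dist, prev = dijkstra(GRAPH, "x")
    print(dist)   # expected, matching the answer: u=1, y=1, w=2, v=3, z=3
```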
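Longest-prefix matching from 2.3 can be expressed with the standard library's ipaddress module. The next-hop labels (Interface 0/1, Router 2, and Router 3 as the default) follow the answer above, since the routing table itself is not quoted in the diff hunk.

```python
import ipaddress

# Routing table from answer 2.3; next-hop labels follow the answer above.
ROUTES = [
    (ipaddress.ip_network("135.46.56.0/22"), "Interface 0"),
    (ipaddress.ip_network("135.46.60.0/22"), "Interface 1"),
    (ipaddress.ip_network("192.53.40.0/23"), "Router 2"),
    (ipaddress.ip_network("0.0.0.0/0"), "Router 3 (default)"),
]

def lookup(destination: str) -> str:
    """Pick the matching entry with the longest prefix (most specific route)."""
    addr = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in ROUTES if addr in net]
    net, hop = max(matches, key=lambda m: m[0].prefixlen)
    return hop

if __name__ == "__main__":
    for dst in ["135.46.61.10", "135.46.53.16", "192.53.40.6", "192.53.56.7"]:
        print(dst, "->", lookup(dst))
```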
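The numbers in 2.4 follow from a few lines of arithmetic. Here is a sketch assuming the figures stated in the question (1 Gbps link, 1,500-byte segments, 15 ms RTT) and Reno-style AIMD, as in the answer above.

```python
# Reproduces the arithmetic in answer 2.4 under the stated assumptions.
LINK_BPS = 1e9          # link rate
MSS_BITS = 1500 * 8     # segment size in bits
RTT_S = 0.015           # round-trip time

w_max = LINK_BPS * RTT_S / MSS_BITS        # (a) bandwidth-delay product in segments
w_avg = 0.75 * w_max                       # (b) mean of the Wmax/2..Wmax sawtooth
throughput_bps = w_avg * MSS_BITS / RTT_S  # (b) average throughput
recovery_s = (w_max / 2) * RTT_S           # (c) cwnd grows ~1 segment per RTT from Wmax/2
buffer_segments = w_max / 2                # (d) rule of thumb: BDP/2 for a single flow

print(f"(a) Wmax       = {w_max:.0f} segments")            # 1250
print(f"(b) throughput = {throughput_bps / 1e6:.0f} Mbps") # 750
print(f"(c) recovery   = {recovery_s:.3f} s")              # 9.375
print(f"(d) buffer     = {buffer_segments:.0f} segments")  # 625
```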
