---
title: "COMP-347: Computer Networks"
author: "Munir Khan (ID: 3431709)"
date: "September 2025"
subtitle: "Assignment 2"
institute: "Athabasca University"
geometry: margin=1in
fontsize: 11pt
linestretch: 1.0
---
# Part 1: Short Answer Questions (30%)
## 1.1 TCP Reliable Data Transfer (5%)
> (5%) TCP provides a reliable data transfer service on top of IP’s unreliable best‑effort service. Study related sections of the textbook and articles from other sources. In your own words, explain how TCP provides a reliable data transfer service.

TCP provides reliable data transfer over IP's best-effort service using:

- Sequence numbers and cumulative ACKs: Each byte is sequenced; ACK(n) confirms receipt of all bytes up to n−1.
- Checksums: Header+payload checksum detects corruption; corrupt segments are discarded (no ACK).
- Timeout and retransmission: the RTO is derived from RTT estimates (see the sketch after this list); missing ACKs trigger retransmission when the timer expires.
- Fast retransmit/fast recovery: Triple-duplicate ACKs imply loss and trigger immediate retransmission and window adjustment.
- Flow control: Receiver-advertised window prevents overrunning receiver buffers.
- Reordering/in-order delivery: Out-of-order segments are buffered and reassembled before delivery to the application.
- Connection management: Three-way handshake establishes state; graceful close ensures all data delivered.
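
To make the RTO bullet above concrete, the sketch below implements the standard RTT-estimation rules from RFC 6298 (smoothed RTT, RTT variance, RTO = SRTT + 4·RTTVAR with a 1 s floor). It is illustrative only; the sample RTT values at the bottom are made up.

```python
class RtoEstimator:
    """Illustrative RTO estimation per RFC 6298 (not a full TCP implementation)."""

    ALPHA = 1 / 8   # gain for the smoothed RTT
    BETA = 1 / 4    # gain for the RTT variance
    MIN_RTO = 1.0   # RFC 6298 minimum RTO, in seconds

    def __init__(self):
        self.srtt = None      # smoothed round-trip time
        self.rttvar = None    # RTT variation

    def update(self, sample_rtt: float) -> float:
        """Feed one RTT sample (seconds); return the new RTO."""
        if self.srtt is None:                      # first measurement
            self.srtt = sample_rtt
            self.rttvar = sample_rtt / 2
        else:
            self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - sample_rtt)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * sample_rtt
        return max(self.MIN_RTO, self.srtt + 4 * self.rttvar)


est = RtoEstimator()
for rtt in (0.100, 0.120, 0.095, 0.300):   # hypothetical RTT samples, in seconds
    print(f"sample={rtt:.3f}s  ->  RTO={est.update(rtt):.3f}s")
```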
## 1.2 Go-Back-N Protocol (5%)
> (5%) While the RDT protocols are essentially stop‑and‑wait protocols, the GBN protocol allows the sender to send multiple packets without waiting for acknowledgement from the receiving parties. How does GBN achieve that?

GBN achieves higher throughput than stop-and-wait by pipelining: a sliding window allows up to N unacknowledged packets to be in flight at once. The sender maintains base (the oldest unACKed sequence number) and nextseqnum, runs a single timer for the oldest unACKed packet, and the receiver returns cumulative ACKs. On timeout, the sender retransmits every packet from base onward ("goes back N"). Because new packets can be sent while earlier ones await acknowledgement, the per-packet waiting of stop-and-wait is eliminated.
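
A minimal sender-side sketch of this bookkeeping (base, nextseqnum, a single timer, cumulative ACKs) follows; `udt_send`, `start_timer`, and `stop_timer` are stand-ins for the textbook's unreliable-channel and timer primitives, not a real networking API.

```python
def udt_send(pkt):        # stand-in for the unreliable channel
    print("sending", pkt)

def start_timer():        # stand-in timer primitives
    pass

def stop_timer():
    pass

class GbnSender:
    def __init__(self, window=4):
        self.window = window
        self.base = 0          # oldest unACKed sequence number
        self.nextseqnum = 0    # next sequence number to use
        self.unacked = {}      # seq -> packet, kept for possible retransmission

    def send(self, data):
        """Called by the application; returns False if the window is full."""
        if self.nextseqnum >= self.base + self.window:
            return False                       # window full: refuse (or buffer) the data
        pkt = (self.nextseqnum, data)
        self.unacked[self.nextseqnum] = pkt
        udt_send(pkt)
        if self.base == self.nextseqnum:
            start_timer()                      # timer covers the oldest unACKed packet
        self.nextseqnum += 1
        return True

    def on_ack(self, acknum):
        """Cumulative ACK: everything up to and including acknum is delivered."""
        for seq in range(self.base, acknum + 1):
            self.unacked.pop(seq, None)
        self.base = acknum + 1
        if self.base == self.nextseqnum:
            stop_timer()                       # nothing outstanding
        else:
            start_timer()                      # restart for the new oldest unACKed packet

    def on_timeout(self):
        """Go back N: retransmit every packet from base to nextseqnum-1."""
        start_timer()
        for seq in range(self.base, self.nextseqnum):
            udt_send(self.unacked[seq])

sender = GbnSender()
for i in range(6):
    sender.send(f"pkt{i}".encode())   # only the first 4 go out; the window is then full
sender.on_ack(1)                      # cumulative ACK for 0 and 1 slides the window
sender.on_timeout()                   # retransmits packets 2 and 3
```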
## 1.3 IPv6 Transition (5%)
> (5%) Invention and adoption of IPv6 is a big advance in computer networking. What problems was IPv6 intended to solve? With the large number of networking devices and applications using IPv4 still in use, how is the transition from IPv4 to IPv6 being resolved?

Motivation: a vastly larger address space (ending IPv4 address exhaustion), fewer routing-table entries via hierarchical aggregation, simplified autoconfiguration (SLAAC, DHCPv6), removal of many NAT constraints, and built-in support for IPsec.

Transition: dual stack (IPv4 and IPv6 on the same hosts and routers), tunneling (e.g., 6in4/6to4/Teredo), and translation (NAT64/DNS64) allow the two protocols to coexist while IPv6 adoption increases.
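
As a small illustration of dual stack from the host's point of view, the sketch below asks the resolver for both address families and tries IPv6 before falling back to IPv4 (a simplified version of the "Happy Eyeballs" idea). It uses only Python's standard socket module; the host name in the comment is just an example.

```python
import socket

def dual_stack_connect(host: str, port: int) -> socket.socket:
    """Try every address the resolver returns, preferring IPv6 over IPv4."""
    # AF_UNSPEC asks for both A (IPv4) and AAAA (IPv6) records.
    infos = socket.getaddrinfo(host, port, socket.AF_UNSPEC, socket.SOCK_STREAM)
    # Sort so AF_INET6 entries are attempted first.
    infos.sort(key=lambda info: info[0] != socket.AF_INET6)
    last_error = None
    for family, socktype, proto, _canon, sockaddr in infos:
        try:
            s = socket.socket(family, socktype, proto)
            s.connect(sockaddr)
            return s
        except OSError as exc:
            last_error = exc
    raise last_error or OSError("no usable address")

# Example (hypothetical host): dual_stack_connect("www.example.com", 80)
```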
## 1.4 SNMP Protocol (5%)
> (5%) SNMP is a protocol for network management. It has seven message types. What are the purposes of the SNMP GetRequest and SetRequest messages? Why were UDP datagrams chosen to transport SNMP messages?

- GetRequest: Manager reads MIB object values from an agent.
- SetRequest: Manager changes configuration parameters on an agent.
- Why UDP: Low overhead and no connection setup; scalable for polling many devices; SNMP defines its own retry/timeout; resilient during control-plane stress when maintaining TCP connections is harder.
## 1.5 SDN-Enabled Devices (5%)
> (5%) In today’s market and its applications, there are many SDN-enabled networking devices. What are the preferable features that an SDN-enabled networking device usually has?

Preferable features:

- Separation of control/data planes; secure controller channel; OpenFlow or similar southbound API.
- Programmable match-action pipelines and flow tables (see the sketch after this list); per-flow counters and telemetry.
- Network virtualization (slicing/overlays), QoS, and service chaining support.
- Northbound APIs/integration for centralized policy and automation.
- Security via centralized policy (ACLs, microsegmentation).
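
To make the match-action/flow-table idea concrete, here is a toy flow table in Python. It is a conceptual sketch of prioritized matches, actions, and per-entry counters, not the OpenFlow wire protocol or any vendor API.

```python
from dataclasses import dataclass

@dataclass
class FlowEntry:
    priority: int
    match: dict      # e.g. {"ip_dst": "10.0.0.5", "tcp_dst": 80}; missing keys act as wildcards
    actions: list    # e.g. ["output:2"] or ["drop"]
    packets: int = 0 # counter updated on every hit

class FlowTable:
    def __init__(self):
        self.entries: list[FlowEntry] = []

    def add(self, entry: FlowEntry):
        self.entries.append(entry)
        self.entries.sort(key=lambda e: -e.priority)   # highest priority first

    def lookup(self, pkt: dict) -> list:
        """Return the actions of the highest-priority matching entry."""
        for entry in self.entries:
            if all(pkt.get(k) == v for k, v in entry.match.items()):
                entry.packets += 1
                return entry.actions
        return ["send-to-controller"]                  # table miss: punt to the controller

table = FlowTable()
table.add(FlowEntry(priority=10, match={"ip_dst": "10.0.0.5", "tcp_dst": 80}, actions=["output:2"]))
table.add(FlowEntry(priority=1, match={}, actions=["drop"]))                      # low-priority catch-all
print(table.lookup({"ip_dst": "10.0.0.5", "tcp_dst": 80, "ip_src": "10.0.0.9"}))  # -> ['output:2']
```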
## 1.6 BGP Loop Detection (5%)
> (5%) BGP is a routing protocol used for routing among ISPs. One problem that BGP faces is detecting loops in paths. What are the loops? Why should loops be avoided? How does BGP detect the loops in paths?

- Loops: Paths that revisit the same AS/router (e.g., A->B->C->B->...).
- Avoidance: Loops waste bandwidth, add delay, and cause instability.
- Detection: The AS_PATH attribute lists the ASes a route advertisement has traversed; if a received route’s AS_PATH already contains the local AS, the route is discarded (see the check sketched below). Policies/filters and maximum AS-path-length limits provide additional safeguards.
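
The loop check itself amounts to scanning the received AS_PATH for the router's own AS number. A minimal illustration, using hypothetical (private-use) AS numbers:

```python
LOCAL_AS = 64512   # hypothetical local AS number (private-use range)

def accept_route(prefix: str, as_path: list[int]) -> bool:
    """BGP loop detection: reject any route whose AS_PATH already contains our AS."""
    if LOCAL_AS in as_path:
        print(f"reject {prefix}: AS_PATH {as_path} contains local AS {LOCAL_AS} (loop)")
        return False
    print(f"accept {prefix}: AS_PATH {as_path}")
    return True

accept_route("203.0.113.0/24", [65001, 65002, 65003])    # accepted
accept_route("198.51.100.0/24", [65001, 64512, 65003])   # rejected: would loop back through us
```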
---
# Part 2: Long Answer Questions (70%)
## 2.1 1's Complement Checksum (10%)
> (10%) UDP and TCP use 1’s complement for their checksums to detect errors. Suppose you have the following 8‑bit bytes: 11011001, 01010010, 11001010, 10100100 and 01011001.
> a) What is the 1’s complement of the sum of these 8‑bit bytes? Show all the details of your work.
> b) Why do UDP and TCP take the 1’s complement of the sum as their checksum, instead of the just sum of these bytes?
> c) With the 1’s complement scheme, how does the receiver detect errors?
> d) With this checksum scheme, is it possible that any 1‑bit error will go undetected? How about a 2‑bit error? Explain your answer.

Given 8-bit bytes: 11011001, 01010010, 11001010, 10100100, 01011001.

a) 1’s complement of the sum (8-bit 1’s-complement arithmetic with end-around carry):

- Interpret the bytes (hex/decimal): D9 (217), 52 (82), CA (202), A4 (164), 59 (89)
- Running sum with end-around carry:
  - D9 + 52 = 0x12B → keep 0x2B, add carry 1 → 0x2C (44)
  - 0x2C + CA = 0xF6 (246), no carry
  - 0xF6 + A4 = 0x19A → keep 0x9A, add carry 1 → 0x9B (155)
  - 0x9B + 59 = 0xF4 (244), no carry
- Final 1’s-complement sum = 0xF4 = 11110100
- Checksum = bitwise complement = 0x0B = 00001011

Answer: 00001011

b) Why the 1’s complement rather than the raw sum? If the complement is carried in the header, the receiver simply adds every word of the segment, checksum included, with end-around carry; a correct segment always produces the all-ones pattern, so verification reduces to a single comparison that is easy to perform in hardware or software. Carrying the raw sum would force the receiver to recompute the sum separately and compare two values, which is less uniform to implement.

c) Receiver detection: Add all bytes, including the checksum, using 1’s-complement arithmetic. A correct segment sums to all 1s (for 8 bits, 0xFF); equivalently, the 1’s complement of the total is 0x00. Any other result indicates an error.

d) Error coverage: Every single-bit error changes the sum and is therefore detected. Some 2-bit (and other multi-bit) errors can go undetected: for example, if the same bit position flips from 0 to 1 in one byte and from 1 to 0 in another byte, the sum, and hence the checksum, is unchanged.
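
As a quick sanity check on parts (a) and (c), the script below redoes the 8-bit 1's-complement sum with end-around carry; it should print the checksum 0x0B and an all-ones verification total.

```python
def ones_complement_sum(words, bits=8):
    """Add words using 1's-complement arithmetic with end-around carry."""
    mask = (1 << bits) - 1
    total = 0
    for w in words:
        total += w
        total = (total & mask) + (total >> bits)   # fold the carry back in
    return total

data = [0b11011001, 0b01010010, 0b11001010, 0b10100100, 0b01011001]

s = ones_complement_sum(data)
checksum = ~s & 0xFF                                    # bitwise complement, kept to 8 bits
print(f"sum      = {s:08b} (0x{s:02X})")                # expected 11110100 (0xF4)
print(f"checksum = {checksum:08b} (0x{checksum:02X})")  # expected 00001011 (0x0B)

# Receiver-side check: summing the data plus the checksum must give all ones (0xFF).
print(f"verify   = {ones_complement_sum(data + [checksum]):08b}")  # expected 11111111
```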
## 2.2 Dijkstra's Shortest Path Algorithm (20%)
> (20%) The following table is used to compute the shortest path from A to all other nodes in a network, according to the link‑state algorithm, which is better known as Dijkstra’s shortest path algorithm.
> a) Interpret the table above in your words: what it is showing and what are each row and each column showing?
> b) Consider the network shown in the following diagram. With the indicated link costs, use Dijkstra’s shortest path algorithm to compute the shortest path from x to all other network nodes. Show how the algorithm works by computing a table like the one above.

a) Interpretation of the sample table: Each row is one iteration of the algorithm. N’ lists the nodes whose shortest-path distances from the source are already finalized. Each D(n),P(n) column shows the current best known distance to node n and its predecessor on the tentative shortest path. At each iteration, the node with the smallest tentative D(·) not yet in N’ is added to N’, and the edges out of that node are relaxed to update its neighbors’ D(·)/P(·).

b) Apply Dijkstra from source x (see the figure). The table below shows the iterations and tentative distances; the final costs and paths are listed after the table.

| Step | N’ | D(s),P(s) | D(t),P(t) | D(u),P(u) | D(v),P(v) | D(w),P(w) | D(y),P(y) | D(z),P(z) |
|:----:|:----------------|:----------|:----------|:----------|:----------|:----------|:----------|:----------|
| 0 | {x} | 5,x | $\infty$ | $\infty$ | $\infty$ | 3,x | 5,x | $\infty$ |
| 1 | {x,w} | 5,x | $\infty$ | 4,w | $\infty$ | 3,x | 5,x | $\infty$ |
| 2 | {x,w,u} | 5,x | $\infty$ | 4,w | 5,u | 3,x | 5,x | $\infty$ |
| 3 | {x,w,u,s} | 5,x | 6,s | 4,w | 5,u | 3,x | 5,x | $\infty$ |
| 4 | {x,w,u,s,v} | 5,x | 6,s | 4,w | 5,u | 3,x | 5,x | $\infty$ |
| 5 | {x,w,u,s,v,y} | 5,x | 6,s | 4,w | 5,u | 3,x | 5,x | $\infty$ |
| 6 | {x,w,u,s,v,y,t} | 5,x | 6,s | 4,w | 5,u | 3,x | 5,x | 9,t |
| 7 | {x,w,u,s,v,y,t,z} | 5,x | 6,s | 4,w | 5,u | 3,x | 5,x | 9,t |

Final shortest-path costs from x: w=3; u=4 (x->w->u); s=5 (x->s); y=5 (x->y); v=5 (x->w->u->v); t=6 (x->s->t); z=9 (x->s->t->z).
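
For reference, the sketch below runs Dijkstra's algorithm over a graph whose link costs are inferred from the table above; this edge list is an assumption (the assignment figure may contain further links), but it reproduces the distances claimed in the final row.

```python
import heapq

def dijkstra(graph, source):
    """Return (dist, prev) maps for the shortest paths from source."""
    dist = {node: float("inf") for node in graph}
    prev = {node: None for node in graph}
    dist[source] = 0
    pq = [(0, source)]                      # min-heap of (tentative distance, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                        # stale queue entry
        for v, cost in graph[u].items():    # relax every edge out of u
            if d + cost < dist[v]:
                dist[v] = d + cost
                prev[v] = u
                heapq.heappush(pq, (dist[v], v))
    return dist, prev

# Link costs inferred from the table above (assumed; substitute the figure's full topology).
graph = {
    "x": {"w": 3, "s": 5, "y": 5},
    "w": {"x": 3, "u": 1},
    "u": {"w": 1, "v": 1},
    "v": {"u": 1},
    "s": {"x": 5, "t": 1},
    "t": {"s": 1, "z": 3},
    "y": {"x": 5},
    "z": {"t": 3},
}

dist, prev = dijkstra(graph, "x")
print(dist)   # expected: {'x': 0, 'w': 3, 'u': 4, 'v': 5, 's': 5, 't': 6, 'y': 5, 'z': 9}
```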
## 2.3 CIDR Routing (20%)
> (20%) A router running classless interdomain routing (CIDR) has the following entries in its routing table:
> Address/mask Next hop
> 135.46.56.0/22 Interface 0
> 135.46.60.0/22 Interface 1
> 192.53.40.0/23 Router 2
> Default Router 3
>
> How does a CIDR router route the packets it receives? For each of the following IP addresses, explain what the router will do if a packet with that address arrives.
> a) 135.46.61.10
> b) 135.46.53.16
> c) 192.53.40.6
> d) 192.53.56.7

A CIDR router uses longest-prefix match: for each packet’s destination address it selects the routing-table entry with the longest prefix that matches; if no entry matches, it uses the Default route. (The four lookups are rechecked in the sketch after the list below.)

Decisions:

- 135.46.61.10 → matches 135.46.60.0/22 (covers 135.46.60.0–135.46.63.255) → Interface 1
- 135.46.53.16 → neither /22 matches (together they cover 135.46.56.0–135.46.63.255) → Default → Router 3
- 192.53.40.6 → matches 192.53.40.0/23 (covers 40.0–41.255) → Router 2
- 192.53.56.7 → no match → Default → Router 3
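
To show the longest-prefix-match rule in action, the sketch below replays the four lookups with Python's standard ipaddress module.

```python
import ipaddress

# Routing table from the question: (prefix, next hop), plus the default route.
TABLE = [
    (ipaddress.ip_network("135.46.56.0/22"), "Interface 0"),
    (ipaddress.ip_network("135.46.60.0/22"), "Interface 1"),
    (ipaddress.ip_network("192.53.40.0/23"), "Router 2"),
]
DEFAULT = "Router 3"

def route(dst: str) -> str:
    """Longest-prefix match: pick the matching entry with the largest prefix length."""
    addr = ipaddress.ip_address(dst)
    matches = [(net.prefixlen, hop) for net, hop in TABLE if addr in net]
    return max(matches)[1] if matches else DEFAULT

for dst in ("135.46.61.10", "135.46.53.16", "192.53.40.6", "192.53.56.7"):
    print(f"{dst:>13} -> {route(dst)}")
# Expected: Interface 1, Router 3, Router 2, Router 3
```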
## 2.4 TCP Congestion Control (20%)
> (20%) Consider that only a single TCP connection uses a 1 Gbps link, which does not buffer any data. Suppose that this link is the only congested link between the sending and receiving hosts. Assume that the TCP sender has a huge file to send to the receiver and the receiver’s receive buffer is much larger than the congestion window. Further assume that each TCP segment size is 1,500 bytes; the two-way propagation delay of this connection is 15 msec; and this TCP connection is always in the congestion avoidance phase (ignore slow start).
> a) What is the maximum window size (in segments) that this TCP connection can achieve?
> b) What is the average window size (in segments) and average throughput (in bps) of this TCP connection?
> c) How long would it take for this TCP connection to reach its maximum window again after recovering from a packet loss?
> d) Assume we want the 1 Gbps link to buffer a finite number of segments and always keep the link busy sending data. How would you choose a buffer size? Justify your answer.

Given: a single TCP flow over a 1 Gbps link with no buffering at the link; MSS = 1,500 B (12,000 bits); two-way propagation delay (RTT) ≈ 15 ms; the connection is always in congestion avoidance.

a) Maximum window (segments):

- BDP = C × RTT = 10^9 bps × 0.015 s = 1.5 × 10^7 bits (15 Mbit)
- W_max = BDP / MSS = 1.5 × 10^7 / 12,000 = 1,250 segments

b) Average window and throughput:

- AIMD sawtooth average ≈ 0.75 × W_max ≈ 937.5 segments
- Average throughput ≈ (937.5 × 12,000 bits) / 0.015 s ≈ 7.5 × 10^8 bps = 750 Mbps

c) Time to return to W_max after a loss:

- After halving to W_max/2, cwnd grows by about 1 MSS per RTT, so it needs W_max/2 RTTs
- 625 RTTs × 0.015 s ≈ 9.375 s

d) Buffer sizing to keep the link busy:

- For a single flow, provision on the order of the BDP to absorb the sawtooth and keep the link fully utilized: ≈ 15 Mbit ≈ 1.875 MB ≈ 1,250 MSS-sized segments.
- With many flows, smaller buffers (≈ BDP/√N) can suffice, but for N = 1 the full BDP is the safe choice.
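
The arithmetic in (a) through (d) can be rechecked with the short script below; it is pure arithmetic on the values given in the problem, not a simulation.

```python
LINK_RATE = 1e9          # link capacity, bits per second
MSS_BITS = 1500 * 8      # segment size in bits
RTT = 0.015              # round-trip time, seconds

# a) Maximum window: the bandwidth-delay product expressed in segments.
bdp_bits = LINK_RATE * RTT
w_max = bdp_bits / MSS_BITS
print(f"a) W_max = {w_max:.0f} segments")                       # 1250

# b) AIMD sawtooth average window and the resulting throughput.
w_avg = 0.75 * w_max
throughput = w_avg * MSS_BITS / RTT
print(f"b) W_avg = {w_avg:.1f} segments, throughput = {throughput/1e6:.0f} Mbps")  # 937.5, 750

# c) Recovery time: cwnd grows ~1 MSS per RTT from W_max/2 back to W_max.
recovery = (w_max / 2) * RTT
print(f"c) recovery time = {recovery:.3f} s")                   # 9.375

# d) Buffer on the order of one BDP for a single flow.
print(f"d) buffer = {bdp_bits/8/1e6:.3f} MB = {w_max:.0f} segments")  # 1.875 MB, 1250
```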