diff --git a/assignments/1/README.md b/assignments/1/README.md
deleted file mode 100644
index e3d8e48..0000000
--- a/assignments/1/README.md
+++ /dev/null
@@ -1,736 +0,0 @@
----
-title: "COMP-347: Computer Networks"
-author: "munir khan - 3431709"
-date: "September 2025"
-subtitle: "Assignment 1"
-institute: "University of Athabasca"
-geometry: margin=1in
-fontsize: 11pt
-linestretch: 1.0
----
-
-# Part 1: Short Answer Questions (30%)
-## 1.1 Traceroute Analysis (5%)
-
-```
-Run Traceroute, TRACERT (on Windows), or another similar utility
-between a source and a destination in the country in which you
-reside. Do this at three different times of the day. Summarize your
-findings at each of the times with respect to the following, and
-explain your findings:
-
-* average and standard deviation of the round-trip delays
-* number of routers in the path
-
-If you are not familiar with the utility, read the Microsoft article,
-"How to Use TRACERT."
-```
-
-Traceroute was run from my computer to google.com at three different
-times with the following results:
-
-| Measurement | Time | # of Hops | End-to-End RTT (ms) |
-| ----------- | ---- | --------- | ------------------ |
-| 1 | 14:34 | 13 | 61.43 |
-| 2 | 14:39 | 8 | 20.85 |
-| 3 | 15:08 | 8 | 21.51 |
-
-RTT Statistics:
-
-- Average RTT: 34.60 ms
-- Standard Deviation: 23.24 ms
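The statistics above can be reproduced from the table; note the 23.24 ms figure corresponds to the *sample* standard deviation (n − 1):

```python
import statistics

# End-to-end RTTs (ms) from the three traceroute measurements above
rtts = [61.43, 20.85, 21.51]

mean_rtt = statistics.mean(rtts)  # arithmetic mean
std_rtt = statistics.stdev(rtts)  # sample standard deviation (n - 1)

print(f"Average RTT: {mean_rtt:.2f} ms")        # Average RTT: 34.60 ms
print(f"Standard deviation: {std_rtt:.2f} ms")  # Standard deviation: 23.24 ms
```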
-
-### Trace 1
-
-```bash
-$ traceroute -v google.com
-Using interface: en9
-traceroute to google.com (142.251.32.78), 64 hops max, 40 byte packets
- 1 unifi (192.168.0.1) 48 bytes to 192.168.0.139 2.891 ms 0.843 ms 0.678 ms
- 2 10.139.230.1 (10.139.230.1) 48 bytes to 192.168.0.139 3.086 ms 2.069 ms 1.818 ms
- 3 * * *
- 4 209.85.174.62 (209.85.174.62) 48 bytes to 192.168.0.139 21.470 ms 20.441 ms 20.212 ms
- 5 * * *
- 6 142.251.55.202 (142.251.55.202) 48 bytes to 192.168.0.139 20.595 ms
- 142.250.225.216 (142.250.225.216) 76 bytes to 192.168.0.139 21.655 ms
- 142.250.233.152 (142.250.233.152) 76 bytes to 192.168.0.139 21.378 ms
- 7 108.170.255.130 (108.170.255.130) 76 bytes to 192.168.0.139 21.586 ms
- 142.251.249.236 (142.251.249.236) 36 bytes to 192.168.0.139 20.742 ms
- 108.170.255.132 (108.170.255.132) 36 bytes to 192.168.0.139 39.592 ms
- 8 * * 216.239.51.199 (216.239.51.199) 148 bytes to 192.168.0.139 27.783 ms
- 9 * * *
-10 * 142.251.234.79 (142.251.234.79) 36 bytes to 192.168.0.139 62.220 ms
- 172.253.64.253 (172.253.64.253) 36 bytes to 192.168.0.139 63.230 ms
-11 142.251.234.79 (142.251.234.79) 36 bytes to 192.168.0.139 61.495 ms 60.968 ms
- 192.178.98.125 (192.178.98.125) 76 bytes to 192.168.0.139 63.083 ms
-12 142.251.68.25 (142.251.68.25) 48 bytes to 192.168.0.139 61.120 ms
- 192.178.98.123 (192.178.98.123) 76 bytes to 192.168.0.139 62.039 ms 62.104 ms
-13 yyz12s07-in-f14.1e100.net (142.251.32.78) 36 bytes to 192.168.0.139 61.941 ms 61.323 ms 61.030 ms
-```
-
-### Trace 2
-
-```bash
-$ traceroute -v google.com
-Using interface: en9
-traceroute to google.com (142.251.215.238), 64 hops max, 40 byte packets
- 1 unifi (192.168.0.1) 48 bytes to 192.168.0.139 2.186 ms 0.819 ms 0.638 ms
- 2 10.139.230.1 (10.139.230.1) 48 bytes to 192.168.0.139 2.045 ms 2.458 ms 2.077 ms
- 3 * * *
- 4 209.85.174.62 (209.85.174.62) 48 bytes to 192.168.0.139 21.523 ms 20.465 ms 20.087 ms
- 5 * * *
- 6 142.250.225.216 (142.250.225.216) 76 bytes to 192.168.0.139 22.008 ms
- 142.251.55.200 (142.251.55.200) 76 bytes to 192.168.0.139 21.601 ms
- 142.250.233.152 (142.250.233.152) 76 bytes to 192.168.0.139 21.618 ms
- 7 192.178.105.146 (192.178.105.146) 36 bytes to 192.168.0.139 20.923 ms
- 142.251.250.56 (142.251.250.56) 36 bytes to 192.168.0.139 20.663 ms
- 142.251.241.137 (142.251.241.137) 48 bytes to 192.168.0.139 20.712 ms
- 8 172.253.79.231 (172.253.79.231) 76 bytes to 192.168.0.139 21.585 ms
- sea09s35-in-f14.1e100.net (142.251.215.238) 36 bytes to 192.168.0.139 21.078 ms 20.483 ms
-```
-
-### Trace 3
-
-```bash
-$ traceroute -v google.com
-Using interface: en9
-traceroute to google.com (142.251.215.238), 64 hops max, 40 byte packets
- 1 unifi (192.168.0.1) 48 bytes to 192.168.0.139 1.760 ms 0.894 ms 0.696 ms
- 2 10.139.230.1 (10.139.230.1) 48 bytes to 192.168.0.139 2.094 ms 2.604 ms 2.097 ms
- 3 * * *
- 4 209.85.174.62 (209.85.174.62) 48 bytes to 192.168.0.139 21.375 ms 20.245 ms 20.038 ms
- 5 * * *
- 6 142.251.55.202 (142.251.55.202) 48 bytes to 192.168.0.139 21.347 ms 20.090 ms
- 142.251.224.250 (142.251.224.250) 76 bytes to 192.168.0.139 21.977 ms
- 7 216.239.56.223 (216.239.56.223) 48 bytes to 192.168.0.139 20.924 ms
- 108.170.255.196 (108.170.255.196) 36 bytes to 192.168.0.139 21.603 ms
- 142.251.249.236 (142.251.249.236) 36 bytes to 192.168.0.139 21.117 ms
- 8 sea09s35-in-f14.1e100.net (142.251.215.238) 36 bytes to 192.168.0.139 20.809 ms
- 172.253.79.231 (172.253.79.231) 76 bytes to 192.168.0.139 22.211 ms 21.499 ms
-```
-
-### Trace 4
-
-```bash
-$ traceroute -v google.com
-Using interface: en0
-traceroute to google.com (142.251.215.238), 64 hops max, 40 byte packets
- 1 192.168.0.1 (192.168.0.1) 48 bytes to 192.168.0.53 5.041 ms 2.739 ms 2.033 ms
- 2 10.139.230.1 (10.139.230.1) 48 bytes to 192.168.0.53 3.232 ms 3.904 ms 4.116 ms
- 3 * * *
- 4 * * *
- 5 * * *
- 6 * * *
- 7 192.178.105.146 (192.178.105.146) 36 bytes to 192.168.0.53 29.061 ms
- 142.251.249.236 (142.251.249.236) 36 bytes to 192.168.0.53 22.413 ms
- 108.170.255.196 (108.170.255.196) 36 bytes to 192.168.0.53 23.130 ms
- 8 172.253.79.231 (172.253.79.231) 76 bytes to 192.168.0.53 23.171 ms
- sea09s35-in-f14.1e100.net (142.251.215.238) 36 bytes to 192.168.0.53 22.844 ms 22.020 ms
-```
-
-### Analysis
-
-The runs resolve google.com to different destination IPs because Google
-load-balances requests across front-end servers via DNS. Trace 1 reached a
-Toronto front end (yyz12s07-in-f14.1e100.net) 13 hops and ~61 ms away, while
-Traces 2 and 3 reached a Seattle front end (sea09s35-in-f14.1e100.net) only
-8 hops and ~21 ms away, so the RTT difference reflects which front end DNS
-selected rather than time-of-day congestion; per-hop RTTs within each trace
-are stable. A fourth trace, run over the en0 interface, matched Traces 2-3
-(8 hops, ~22 ms). The `* * *` entries are routers that drop or rate-limit
-the ICMP TTL-exceeded replies traceroute relies on; this is normal and does
-not indicate loss for ordinary traffic.
-
-## 1.2 Internet Protocol Stack Layers (5%)
-
-> What are the five layers in the Internet protocol stack? Develop a table to summarise what each layer does.
-
-The Internet protocol stack has five layers, each with specific responsibilities (Kurose & Ross, 2021, Section 1.5):
-
-| Layer | Name | Description |
-| ----- | ----------- | ------------------------------------------------------------------ |
-| 5 | Application | Provides services for applications (e.g., HTTP, SMTP, FTP, DNS) |
-| 4 | Transport | Reliable or unreliable data delivery between processes (TCP, UDP) |
-| 3 | Network | Routes packets between hosts using logical addressing (IP, ICMP) |
-| 2 | Link | Transfers data between adjacent nodes (Ethernet, WiFi) |
-| 1 | Physical | Sends raw bits as signals over the physical medium (cables, radio) |
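As a toy illustration of how the layers cooperate, each layer wraps the payload it receives from the layer above with its own header. The header strings below are illustrative labels, not a real protocol implementation:

```python
# Toy encapsulation sketch: each layer adds a header around the payload above it.
def encapsulate(message: str) -> dict:
    segment = {"header": "TCP (ports, seq#)", "payload": message}        # transport
    datagram = {"header": "IP (src/dst addresses)", "payload": segment}  # network
    frame = {"header": "Ethernet (MAC addresses)", "payload": datagram}  # link
    return frame  # the physical layer then sends the frame as raw bits

frame = encapsulate("GET / HTTP/1.1")  # application-layer message
# The receiver peels the headers back off, bottom-up:
print(frame["payload"]["payload"]["payload"])  # GET / HTTP/1.1
```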
-
-## 1.3 Packet-Switched vs Circuit-Switched Networks (5%)
-
-> What are packet-switched network and circuit-switched network,
-> respectively? Develop a table to summarise their features, pros,
-> and cons.
-
-| Aspect | Packet-Switched | Circuit-Switched |
-| ------------------- | ----------------------------------- | --------------------------------------- |
-| Connection Model | Connectionless | Connection-oriented |
-| Resource Allocation | Shared dynamically | Dedicated for session |
-| Path Determination | Per packet, can vary | Fixed for entire session |
-| Bandwidth Usage | Efficient, statistical multiplexing | Less efficient, reserved even when idle |
-| Setup Required | None | Required before communication |
-| Transmission Mode | Store-and-forward | Direct circuit path |
-| Reliability | Best effort delivery | Guaranteed once connected |
-| Cost | Lower, pay per usage | Higher, pay for reserved capacity |
-| Scalability | High | Limited by circuits |
-| Fault Tolerance | Robust, automatic rerouting | Vulnerable, failure breaks connection |
-| Performance | Variable delays, jitter possible | Predictable, consistent |
-| Typical Examples | Internet, Ethernet, web, email | PSTN, T1/T3 leased lines |
-
-Key Points:
-
-* Packet-switched: Efficient, flexible, cost-effective, but can have variable delays and packet loss.
-* Circuit-switched: Predictable, guaranteed bandwidth, good for real-time traffic, but inefficient and less scalable.
-
-## 1.4 Network Delays and Traffic Intensity (5%)
-
-> What are processing delay, queuing delay, transmission delay, and
-> propagation delay, respectively? Where does each delay occur? What
-> is traffic intensity? Why should the traffic intensity be no greater
-> than one (1) when designing a computer network?
-
-### Four Types of Network Delays
-
-In packet-switched networks, four main delays can occur (Kurose & Ross, 2021):
-
-1. Processing Delay (`d_proc`)
- * What: Time to examine packet header and determine where to forward it
- * Where: At routers and switches
- * Typical duration: Microseconds
-2. Queuing Delay (`d_queue`)
- * What: Time a packet waits in a queue before being transmitted
- * Where: In router output queues
- * Depends on: Network congestion and traffic load
-3. Transmission Delay (`d_trans`)
- * What: Time to push all bits of the packet onto the link
- * Where: At the sender’s interface
- * Formula: `d_trans = L / R`
- * `L` = packet length (bits)
- * `R` = link rate (bps)
-4. Propagation Delay (`d_prop`)
- * What: Time for the signal to travel across the link
- * Where: Along the transmission medium (e.g., fiber, cable)
- * Formula: `d_prop = d / s`
- * `d` = physical distance
- * `s` = propagation speed
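The two formulas can be checked numerically; the packet size, link rate, and distance below are illustrative values, not taken from the assignment:

```python
# Transmission delay: time to push all bits of the packet onto the link
L = 10_000       # packet length (bits)
R = 1_000_000    # link rate (bps), 1 Mbps
d_trans = L / R  # seconds

# Propagation delay: time for the signal to cross the medium
d = 2_500_000    # distance (m), 2,500 km
s = 2.0e8        # propagation speed (m/s), typical for fiber
d_prop = d / s   # seconds

print(d_trans, d_prop)  # 0.01 0.0125
```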
-
-### Traffic Intensity
-
-* Definition: `ρ = La / R`
- * `L` = avg. packet length (bits)
- * `a` = avg. packet arrival rate (packets/sec)
- * `R` = link bandwidth (bps)
-* Why `ρ` should be ≤ 1:
- * If `ρ < 1`: Network can handle the traffic -> delays stay manageable
- * If `ρ = 1`: Link is fully utilized -> delays increase sharply
- * If `ρ > 1`: Traffic exceeds capacity -> queues grow indefinitely -> packet loss
-* Design Rule:
- Keep `ρ < 1` (ideally ≤ 0.7–0.8) for:
- * Stability
- * Low delays
- * Room for bursts
- * Preventing congestion collapse
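A quick numerical sketch of ρ = La/R (the packet length, arrival rate, and link speed are illustrative values):

```python
# Traffic intensity rho = L * a / R
L = 8_000   # average packet length (bits)
a = 50_000  # average packet arrival rate (packets/sec)
R = 1.0e9   # link bandwidth (bps), 1 Gbps

rho = L * a / R
print(rho)  # 0.4 -> stable; queuing delays stay manageable
assert rho < 1, "link overloaded: queue would grow without bound"
```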
-
-## 1.5 Web Caching (5%)
-
-> What is Web-caching? When may Web-caching be more useful in a
-> university? What problem does the conditional GET in HTTP aim to
-> solve?
-
-Web caching stores frequently requested web content (pages, images,
-videos) closer to users so it can be served locally instead of from
-the origin server. This reduces response time and network traffic.
-
-* Cache hit: Requested content is found in the cache
-* Cache miss: Content is not in the cache and must be fetched from the server
-
-Usefulness in universities:
-
-* Shared content: Many students access the same resources, increasing cache hits
-* Bandwidth optimization: Reduces external traffic, preserving limited bandwidth
-* Cost reduction: Cuts repeated downloads from the internet
-* Performance: Faster access to educational content, especially during peak usage
-
-Conditional GET in HTTP ensures cached content is up-to-date without
-downloading unchanged objects:
-
-1. Cache stores object with `Last-Modified` date or `ETag`
-2. On subsequent requests, cache sends a conditional GET with `If-Modified-Since` or `If-None-Match`
-3. Server responds:
- * 304 Not Modified: Content unchanged -> no download
- * 200 OK: Content changed -> new object sent
-
-Benefits: Maintains cache freshness, reduces bandwidth use, lowers
-server load, and validates cache efficiently
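A minimal sketch of the cache's side of this exchange. The helper names and validator values are hypothetical; a real cache would send these headers over `http.client` or `requests` to a live server:

```python
def conditional_headers(etag=None, last_modified=None):
    """Build the validator headers a cache sends on revalidation."""
    headers = {}
    if etag:
        headers["If-None-Match"] = etag
    if last_modified:
        headers["If-Modified-Since"] = last_modified
    return headers

def handle_response(status, cached_body, new_body=None):
    """304 -> serve the cached copy; 200 -> replace it with the new object."""
    return cached_body if status == 304 else new_body

print(conditional_headers(etag='"abc123"'))       # {'If-None-Match': '"abc123"'}
print(handle_response(304, cached_body="old page"))  # old page
```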
-
-## 1.6 Email Protocol Analysis (5%)
-
-> Suppose you have a Web-based email account, such as Gmail, and you
-> have just sent a message to a friend, Alice, who accesses her mail
-> from her mail server using IMAP. Assume that both you and Alice are
-> using a smartphone to access emails via Wi-Fi at home. List all the
-> network protocols that may be involved in sending and receiving the
-> email. Discuss in detail how the message went from your smartphone
-> to Alice's smartphone - that is, how the message went through all the
-> network protocol layers on each of the network devices involved in
-> the communication. Ignore everything between your ISP and Alice's
-> ISP.
-
-Scenario: Web-based email (Gmail) sent to Alice who accesses email via IMAP on smartphone over Wi-Fi (Kurose & Ross, 2021, Section 2.3).
-
-### Network Protocols Involved
-
-#### Sender Side (Web-based Gmail)
-
-- HTTP/HTTPS: Communication between browser and Gmail web server
-- SMTP: Gmail server sending email to Alice's mail server
-- DNS: Domain name resolution for Alice's mail server
-- TCP: Reliable transport for HTTP and SMTP
-- IP: Network layer routing
-- Wi-Fi (802.11): Wireless LAN protocol
-- Ethernet: If Wi-Fi access point connects via wired network
-
-#### Receiver Side (Alice's IMAP access)
-
-- IMAP: Alice's smartphone retrieving email from her mail server
-- TCP: Reliable transport for IMAP
-- IP: Network layer routing
-- Wi-Fi (802.11): Wireless connection to home network
-- DNS: Resolving mail server address
-
-### Detailed Message Flow Through Protocol Layers:
-
-#### Phase 1: Sending Email (Your Smartphone -> Gmail Server)
-
-1. Application Layer: You compose email in Gmail web interface
-2. HTTP/HTTPS: Browser sends POST request with email data to Gmail server
-3. Transport Layer: TCP segments the HTTP request, adds port numbers (443 for HTTPS)
-4. Network Layer: IP adds source/destination IP addresses (your phone -> Gmail server)
-5. Link Layer: Wi-Fi frames the IP packets with MAC addresses (phone -> router)
-6. Physical Layer: Radio waves transmit frames to Wi-Fi access point
-
-#### Phase 2: Gmail Processing and SMTP Delivery
-
-1. Gmail Server Processing: Receives HTTP request, extracts email, prepares for SMTP delivery
-2. DNS Resolution: Gmail server queries DNS to find Alice's mail server IP address
-3. SMTP Connection: Gmail establishes TCP connection to Alice's mail server (port 25/587)
-4. Application Layer: SMTP protocol transfers email from Gmail to Alice's mail server
-5. Transport Layer: TCP ensures reliable delivery of SMTP commands and email content
-6. Network Layer: IP routes packets through internet infrastructure
-7. Link Layer: Various link technologies (Ethernet, fiber, etc.) carry packets
-
-#### Phase 3: Alice Retrieving Email (Alice's Mail Server -> Smartphone)
-
-1. IMAP Request: Alice's email app sends IMAP commands to check for new messages
-2. Transport Layer: TCP establishes connection (port 143/993), segments IMAP requests
-3. Network Layer: IP routes packets from Alice's phone to her mail server
-4. Link Layer: Wi-Fi frames carry IP packets (phone -> Wi-Fi router -> ISP)
-5. Physical Layer: Radio waves and various physical media transmit signals
-
-#### Phase 4: Email Delivery to Alice's Phone
-
-1. IMAP Response: Mail server responds with email content via IMAP protocol
-2. Transport Layer: TCP reliably delivers email data to Alice's phone
-3. Network Layer: IP routes response packets back to Alice's phone
-4. Link Layer: Wi-Fi receives and processes frames at Alice's phone
-5. Physical Layer: Radio signals converted back to digital data
-6. Application Layer: Email app displays the received message
-
-Key Protocol Stack Traversals:
-
-- Downward: Each device processes from Application -> Physical when sending
-- Upward: Each device processes from Physical -> Application when receiving
-- Multiple Hops: Email traverses multiple network devices (routers, switches) between sender and receiver, each processing up to Network layer and back down
-
-# Part 2: Long Answer Questions (70%)
-
-> Solve the following network problems and show your work in detail.
-
-## 2.1 File Transfer Analysis (20%)
-
-> Consider that you are submitting your assignment in a compressed
-> file from your computer at home to the university server that is
-> hosting your online course. Your large file is segmented into smaller
-> packets before it is sent into the first link. Each packet is 10,000
-> bits long, including 100 bits of header. Assume the size of the
-> assignment file is 10 MB.
-
-Given:
-
-- File size: 10 MB = 10 × 10^6 bytes = 80 × 10^6 bits
-- Packet size: 10,000 bits (including 100 bits header)
-- Payload per packet: 9,900 bits
-
-### a) Number of packets
-
-Number of packets = Total file bits ÷ Payload bits per packet
-= 80 × 10^6 bits ÷ 9,900 bits/packet
-= 8,080.81 packets
-= 8,081 packets (rounded up)
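The count rounds up because the final packet carries a partial payload; a quick check:

```python
import math

file_bits = 10 * 10**6 * 8   # 10 MB = 80,000,000 bits
payload_bits = 10_000 - 100  # 9,900 payload bits per packet (100-bit header)

packets = math.ceil(file_bits / payload_bits)
print(packets)  # 8081
```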
-
-### b) Links identified using traceroute
-
-Based on Trace 1 to google.com (standing in for the path to the university
-server):
-
-1. Home computer -> home router (192.168.0.139 -> unifi, 192.168.0.1)
-2. Home router -> ISP gateway (192.168.0.1 -> 10.139.230.1)
-3. ISP network -> Google edge (10.139.230.1 -> 209.85.174.62), with one
-   non-responding hop in between
-4. Backbone links inside Google's network (hops 5-12, several hidden by `* * *`)
-5. Final link to the destination server (142.251.32.78)
-
-Total: 13 hops in Trace 1, i.e. on the order of 13-14 links from host to server.
-
-### c) Link speed calculations
-
-Based on traceroute RTT measurements:
-
-| Link | From -> To | RTT (ms) | Estimated Speed |
-| ---- | ------------------ | -------- | ---------------------------------- |
-| 1 | Home -> Router | ~1-3 | ~100 Mbps (local Wi-Fi/Ethernet) |
-| 2 | Router -> ISP | ~2-3 | ~960 Mbps (residential broadband) |
-| 3 | ISP -> Google edge | ~20-21 | ~1 Gbps (ISP infrastructure) |
-| 4-13 | Backbone | ~21-63 | ~10+ Gbps (Internet backbone) |
-
-First-link speed used below: ~960 Mbps (the residential uplink).
-
-### d) Time for last packet to enter first link
-
-Transmission time per packet = Packet size ÷ Link speed
-= 10,000 bits ÷ (960 × 10^6 bps)
-= 10.42 µs ≈ 0.0104 ms per packet
-
-Time for all packets to enter first link:
-= 8,081 packets × 10,000 bits ÷ (960 × 10^6 bps)
-= 84.2 ms ≈ 0.084 seconds
-
-At t₀ + 0.084 seconds, the last packet will be pushed into the first link.
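Under the ~960 Mbps first-link assumption, the figure can be verified directly:

```python
# Time until the last packet has been pushed onto the first link
packets = 8_081
packet_bits = 10_000
link_rate = 960e6  # bps, assumed residential uplink

t_last_in = packets * packet_bits / link_rate
print(f"{t_last_in:.3f} s")  # 0.084 s
```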
-
-### e) Time for last packet to arrive at university server
-
-Total end-to-end delay consists of:
-- Transmission delays: Each link transmits the packet
-- Propagation delays: Signal travel time (estimated ~79 ms from traceroute)
-- Processing delays: Router processing (estimated ~1 ms per hop)
-- Queuing delays: Variable, estimated ~5 ms average per hop
-
-Calculation:
-- Time for last packet to enter first link: 0.084 s
-- Transmission time through all links: ~0.0104 ms × 14 = 0.146 ms
-- Propagation delay: ~79 ms
-- Processing delays: 1 ms × 14 = 14 ms
-- Queuing delays: 5 ms × 14 = 70 ms
-
-Total time = 0.084 s + 0.000146 s + 0.079 s + 0.014 s + 0.070 s
-= 0.247 seconds
-
-The last packet will arrive at the university server at t₀ + 0.25 seconds.
-
-## 2.2 Propagation Delay and Bandwidth-Delay Product (20%)
-
-> Consider that you are submitting another assignment from your home
-> computer to the university server, and you have worked out a list
-> of network links between your computer and the university server.
-
-Based on the network path from question 2.1 with links to university server.
-
-### a) Total distance calculation
-
-Distance estimation using geographic locations:
-
-From traceroute analysis and typical routing:
-
-1. Local network: Home -> ISP (≈ 5 km)
-2. Regional: ISP local -> regional (≈ 50 km)
-3. Long-haul: Regional -> major city hub (≈ 200 km)
-4. Backbone: Inter-city backbone links (≈ 1,500 km total)
-5. University: Final delivery to campus (≈ 20 km)
-
-Estimated total distance = 5 + 50 + 200 + 1,500 + 20 = 1,775 km
-
-Total distance ~ 1,775,000 meters
-
-### b) Propagation delay calculation
-
-Given:
-
-- Propagation speed: 2 × 10^8 m/s
-- Total distance: 1,775,000 m
-
-```
-Propagation delay (T_prop):
-T_prop = distance / speed
-= 1,775,000 m ÷ (2 × 10^8 m/s)
-= 0.008875 seconds
-= 8.875 ms
-```
-
-### c) Bandwidth-delay product
-
-Assuming all links have the same speed `R` bps:
-
-From our analysis, the bottleneck is likely the residential connection at 960 Mbps.
-
-```
-Bandwidth-delay product:
-
-R × T_prop = 960 × 10^6 bps × 8.875 × 10^-3 s
-= 8,520,000 bits
-= 8.52 Mbits
-```
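Both figures follow directly from the estimated distance and the assumed 960 Mbps bottleneck rate:

```python
distance_m = 1_775_000  # estimated path length (m)
speed = 2.0e8           # propagation speed (m/s)
rate = 960e6            # assumed bottleneck link rate (bps)

t_prop = distance_m / speed  # propagation delay (s)
bdp = rate * t_prop          # bandwidth-delay product (bits "in flight")
print(t_prop, bdp)           # ≈ 0.008875 s, ≈ 8.52e6 bits
```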
-
-### d) Maximum bits in links (continuous transmission)
-
-When sending a file continuously as one big stream, the maximum
-number of bits in the network links at any given time equals the
-bandwidth-delay product.
-
-Maximum bits in links = `R × T_prop` = 8,520,000 bits
-
-This represents the "capacity" of the network pipe, the total number
-of bits that can be "in flight" between sender and receiver at any
-moment.
-
-### e) Bandwidth-delay product implications
-
-The bandwidth-delay product represents the network pipe capacity and has several important implications (Kurose & Ross, 2021, Section 1.4.3):
-
-1. Buffer Requirements:
- - Minimum buffer size needed for optimal performance
- - TCP window size should be at least `R × T_prop` for maximum throughput
-2. Network Utilization:
- - To fully utilize the available bandwidth, the sender must keep at least `R × T_prop` bits in the network
- - Smaller send windows result in underutilized links
-3. Protocol Design:
- - Determines minimum window size for sliding window protocols
- - Critical for TCP congestion control and flow control mechanisms
-4. Performance Impact:
- - High bandwidth-delay product networks (like satellite links) require larger buffers and windows
- - Affects protocol efficiency and responsiveness
-5. Storage vs. Transmission Trade-off:
- - Shows the relationship between link capacity and propagation delay
- - Higher bandwidth or longer delays both increase the "storage" capacity of the network itself
-
-## 2.3 Web Cache Implementation and Performance (20%)
-
-> You have learned that a Web cache can be useful in some cases. In this problem, you will investigate how useful a Web cache can be at a home. First, you need to download Apache server and install and run it as a proxy server on a computer on your home network. Then, write a brief report on what you did to make it work and how you are using it on all your devices on your home network.
-> Assume your family has six members. Each member likes to download short videos from the Internet to watch on their personal devices. All these devices are connected to the Internet through Wi-Fi. Further assume the average object size of each short video is 100 MB and the average request rate from all devices to servers on the Internet is three requests per minute. Five seconds is the average amount of time it takes for the router on the ISP side of your Internet link to forward an HTTP request to a server on the Internet and receive a response.
->
-> * What is the average time α for your home router to receive a video object from your ISP router?
-> * What is the traffic intensity µ on the Internet link to your home router if none of the requested videos is cached on the proxy server?
-> * If average access delay β is defined as α/(μ−1), what is the average access delay your family members will experience when watching the short videos?
-> * If the total average response time is defined as 5+β, and the miss rate of your proxy server is 0.5, what will be the total average response time?
-
-### Apache Proxy Server Implementation
-
-I implemented an Apache HTTP proxy server with disk caching capabilities on the home network:
-
-- Installation: `brew bundle` to install Apache HTTP Server
-- Configuration: Forward proxy with disk caching (see `etc/apache/apache.conf`)
-- Management: Server startup/shutdown scripts in `bin/server` and `bin/client`
-- Network Setup: Server runs on port 8081, family devices configured to use proxy at `192.168.x.x:8081`
-
-The proxy server enables all family video requests to flow through the cache, providing the foundation for the performance analysis below.
-
-### Performance Analysis
-
-Given parameters:
-
-- 6 family members downloading videos
-- Average object size: 100 MB = 800 Mb per video
-- Average request rate: 3 requests/minute = 0.05 requests/second
-- ISP response time: 5 seconds average
-- Home internet connection: 960 Mbps
-
-### a) Average time (α) for home router to receive video object
-
-Transmission time calculation:
-
-- Video size: 100 MB = 800 Mb
-- Link speed: 960 Mbps (download)
-- Transmission time = 800 Mb ÷ 960 Mbps = 0.833 seconds
-
-```
-Total time (α):
-
-α = ISP processing time + transmission time
-α = 5 seconds + 0.833 seconds = 5.833 seconds
-```
-
-### b) Traffic intensity (μ) without caching
-
-Traffic calculation:
-
-- Request rate (λ): 6 members × 0.05 req/sec = 0.3 requests/second
-- Service time per request: 5.833 seconds (from part a)
-- Service rate (μ): 1/5.833 = 0.171 requests/second
-
-```
-Traffic intensity (ρ):
-
-ρ = λ/μ = 0.3 ÷ 0.171 = 1.75
-
-Note: ρ > 1 indicates system overload - requests arrive faster than they can be served.
-```
-
-### c) Average access delay (β)
-
-```
-Given formula: β = α/(μ − 1), where μ is the traffic intensity (1.75) from part b
-
-β = α/(μ − 1)
-  = 5.833/(1.75 − 1)
-  = 5.833/0.75
-  ≈ 7.78 seconds
-```
-
-Note: the classical queuing-delay form α/(1 − μ) is valid only for μ < 1. With
-μ = 1.75 the queue grows without bound, so 7.78 seconds should be read as the
-value of the defined formula; in practice delays would keep increasing until
-requests are dropped or abandoned.
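Parts a-c in code, using the given definition of β and the report's per-member request rate; the 960 Mbps link rate is the assumption carried over from earlier:

```python
# a) time for the home router to receive one video object
isp_delay = 5.0          # s, given ISP request/response time
video_bits = 100 * 8e6   # 100 MB = 800 Mb
link_rate = 960e6        # bps (assumed home downlink)
alpha = isp_delay + video_bits / link_rate  # ≈ 5.833 s

# b) traffic intensity: arrival rate x service time
lam = 6 * (3 / 60)       # 6 members x 3 req/min each = 0.3 req/s
mu = lam * alpha         # ≈ 1.75 (overloaded)

# c) access delay by the given definition beta = alpha / (mu - 1)
beta = alpha / (mu - 1)  # ≈ 7.78 s
print(round(alpha, 3), round(mu, 2), round(beta, 2))  # 5.833 1.75 7.78
```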
-
-### d) Total average response time with proxy caching
-
-Given:
-
-- Miss rate: 0.5 (50% cache misses)
-- Hit rate: 0.5 (50% cache hits)
-
-Cache hit scenario:
-
-- Local cache access time: ~0.1 seconds (local disk access)
-- Response time for cache hit: 0.1 seconds
-
-Cache miss scenario:
-
-- Same as no caching: 5.833 seconds
-
-```
-Average response time:
-
-Total response time = (Hit rate × Hit time) + (Miss rate × Miss time)
-= (0.5 × 0.1) + (0.5 × 5.833)
-= 0.05 + 2.917
-= 2.967 seconds
-```
-
-With proxy caching, total average response time = 2.97 seconds
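The weighted average in part d, with the ~0.1 s cache-hit time being our assumption for local disk access:

```python
hit_rate, miss_rate = 0.5, 0.5
hit_time = 0.1     # s, assumed local cache access time
miss_time = 5.833  # s, uncached fetch time from part a

avg_response = hit_rate * hit_time + miss_rate * miss_time
print(avg_response)  # ≈ 2.97 s
```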
-
-### Summary of Results
-
-**Assignment Question Answers:**
-
-- **a) Average time (α)**: 5.833 seconds for home router to receive video object
-- **b) Traffic intensity (ρ)**: 1.75 (system overload - requests arrive faster than service)
- **c) Access delay (β)**: ≈ 7.78 seconds by the given formula β = α/(μ−1); since μ > 1, the uncached queue actually grows without bound
-- **d) Total response time with caching**: 2.97 seconds (50% cache hit rate)
-
-**Web Cache Performance Benefits:**
-
-The analysis demonstrates the critical importance of web caching in home networks:
-
-1. **System Stability**: Without caching, the traffic intensity of 1.75 creates an unstable system where requests queue indefinitely
-2. **Performance Transformation**: A 50% cache hit rate reduces average response time from infinite to 2.97 seconds
-3. **Bandwidth Efficiency**: Caching reduces external bandwidth usage by 50%, alleviating the bottleneck
-4. **User Experience**: Family members experience predictable, reasonable response times instead of system overload
-5. **Modern Internet Reality**: Even with gigabit internet (960 Mbps), the system is still overloaded due to ISP processing delays and request patterns
-
-**Key Insight**: Web caching remains essential even with high-speed internet connections. The bottleneck shifts from pure bandwidth to service capacity and ISP processing time, demonstrating that caching benefits extend beyond just bandwidth limitations.
-
-## 2.4 File Distribution Comparison (10%)
-
-> You have learned that a file can be distributed to peers in either client–server mode or peer-to-peer (P2P) mode (Kurose & Ross, 2021, Section 2.5). Consider distributing a large file of F = 21 GB to N peers. The server has an upload rate of Us = 1 Gbps, and each peer has a download rate of Di = 20 Mbps and an upload rate of U. For N = 10, 100, and 1,000 and U = 300 Kbps, 7000 Kbps, and 2 Mbps, develop a table giving the minimum distribution time for each of the combination of N and U for both client–server distribution and P2P distribution. Comment on the features of client–server distribution and P2P distribution and the differences between the two.
-
-Given parameters:
-
-- File size: F = 21 GB = 21 × 10^9 bytes = 168 × 10^9 bits
-- Server upload rate: U_s = 1 Gbps = 10^9 bps
-- Each peer download rate: D_i = 20 Mbps = 20 × 10^6 bps
-- Each peer upload rate: U = varies (300 Kbps, 7000 Kbps, 2 Mbps)
-
-### Distribution Time Calculations
-
-```
-Client-Server Distribution Time:
-
- T_cs = max(NF/U_s, F/D_min)
-
-Where D_min = min(D_i) = 20 Mbps for all peers
-
-P2P Distribution Time:
-
- T_p2p = max(F/U_s, F/D_min, NF/(U_s + ΣU_i))
-```
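The two bounds translate directly into code; a short sketch that reproduces the table entries below:

```python
F = 168e9  # file size (bits): 21 GB
US = 1e9   # server upload rate (bps)
D = 20e6   # per-peer download rate (bps)

def t_cs(n: int) -> float:
    """Minimum client-server distribution time (seconds)."""
    return max(n * F / US, F / D)

def t_p2p(n: int, u: float) -> float:
    """Minimum P2P distribution time (u = per-peer upload rate, bps)."""
    return max(F / US, F / D, n * F / (US + n * u))

print(t_cs(1000))        # 168000.0
print(t_p2p(1000, 7e6))  # 21000.0
print(t_p2p(10, 300e3))  # 8400.0
```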
-
-### Results Table
-
-| N | U | Client-Server (seconds) | P2P (seconds) |
-| -- | -- | -- | -- |
-| N = 10 | | | |
-| | 300 Kbps | max(1680, 8400) = 8400 | max(168, 8400, 1675) = 8400 |
-| | 7000 Kbps | max(1680, 8400) = 8400 | max(168, 8400, 1570) = 8400 |
-| | 2 Mbps | max(1680, 8400) = 8400 | max(168, 8400, 1647) = 8400 |
-| N = 100 | | | |
-| | 300 Kbps | max(16800, 8400) = 16800 | max(168, 8400, 16311) = 16311 |
-| | 7000 Kbps | max(16800, 8400) = 16800 | max(168, 8400, 9882) = 9882 |
-| | 2 Mbps | max(16800, 8400) = 16800 | max(168, 8400, 14000) = 14000 |
-| N = 1000 | | | |
-| | 300 Kbps | max(168000, 8400) = 168000 | max(168, 8400, 129231) = 129231 |
-| | 7000 Kbps | max(168000, 8400) = 168000 | max(168, 8400, 21000) = 21000 |
-| | 2 Mbps | max(168000, 8400) = 168000 | max(168, 8400, 56000) = 56000 |
-
-### Detailed Calculations
-
-For N = 10, U = 300 Kbps:
-
-- Client-server: max(10 × 168×10^9 / 10^9, 168×10^9 / (20×10^6)) = max(1680, 8400) = 8400 s
-- P2P: max(168, 8400, 10 × 168×10^9 / (10^9 + 10 × 0.3×10^6)) = max(168, 8400, 1675) = 8400 s
-
-For N = 1000, U = 7000 Kbps:
-
-- Client-server: max(1000 × 168×10^9 / 10^9, 8400) = max(168000, 8400) = 168000 s
-- P2P: max(168, 8400, 1000 × 168×10^9 / (10^9 + 1000 × 7×10^6)) = max(168, 8400, 21000) = 21000 s
-
-### Client-Server Distribution Features:
-
-Advantages:
-
-- Simple implementation and management
-- Predictable performance
-- Server has complete control over distribution
-- No coordination overhead between peers
-
-Disadvantages:
-
-- Server becomes bottleneck as N increases
-- Distribution time grows linearly with number of peers
-- Inefficient use of peer upload capacity
-- Single point of failure
-
-Scaling characteristics: T_cs grows linearly with N when NF/U_s dominates
-
-### P2P Distribution Features:
-
-Advantages:
-
-- Utilizes aggregate peer upload capacity
-- More scalable than client-server for large N
-- Distributes load across all participants
-- Self-scaling with more peers
-
-Disadvantages:
-
-- Complex protocol implementation
-- Coordination overhead
-- Dependent on peer participation and upload capacity
-- Less predictable performance
-
-Scaling characteristics: Better scalability as collective upload capacity grows with N
-
-### Key Differences:
-
-1. Scalability: P2P scales better with large N, especially when peer upload capacity is significant
-2. Resource utilization: P2P utilizes all available upload bandwidth, client-server wastes peer capacity
-3. Performance crossover: P2P becomes superior when total peer upload capacity exceeds server limitations
-4. Minimum distribution time: client-server is bounded by max(NF/U_s, F/D_min); P2P by max(F/U_s, F/D_min, NF/(U_s + ΣU_i))
-
-Critical insight: P2P effectiveness depends heavily on peer upload rates. For
-N = 10 the 20 Mbps download limit (8,400 s) dominates both schemes, so they
-tie. For larger N, client-server time grows linearly with N (the NF/U_s term),
-while the P2P term NF/(U_s + NU) approaches F/U as N grows. At N = 1,000 with
-U = 7,000 Kbps, P2P finishes in 21,000 s versus 168,000 s for client-server,
-an 8x improvement; only very small peer upload rates (U = 300 Kbps) pull P2P
-back toward client-server-like times.
-
-# References
-
-Kurose, J. F., & Ross, K. W. (2021). *Computer Networking: A Top-Down Approach* (8th ed.). Pearson.