author    mo khan <mo@mokhan.ca>  2025-09-15 16:20:00 -0600
committer mo khan <mo@mokhan.ca>  2025-09-15 16:20:00 -0600
commit    f8b44950a932822c6a2fec63d43011c4faf6c14e
tree      1dbc02e74386a21c981caa6539a2cc0219389768 /assignments/1
parent    58722a514608d5c35d6feef8c85150e4c67c2ec7
refactor: format headings to generate the proper table of contents
Diffstat (limited to 'assignments/1')
-rw-r--r--  assignments/1/README.md  267
1 file changed, 138 insertions, 129 deletions
diff --git a/assignments/1/README.md b/assignments/1/README.md index d9d76fa..30b2202 100644 --- a/assignments/1/README.md +++ b/assignments/1/README.md @@ -9,8 +9,8 @@ fontsize: 11pt linestretch: 1.0 --- -## Part 1: Short Answer Questions (30%) -### 1.1 Traceroute Analysis (5%) +# Part 1: Short Answer Questions (30%) +## 1.1 Traceroute Analysis (5%) ``` Run Traceroute, TRACERT (on Windows), or another similar utility @@ -35,7 +35,7 @@ times with the following results: | 2 | 14:39 | 8 | 15.87 | 9.13 | | 3 | 15:08 | 8 | 15.36 | 9.12 | -Trace 1: +### Trace 1 ```bash $ traceroute -v google.com @@ -81,7 +81,7 @@ traceroute to google.com (142.251.32.78), 64 hops max, 40 byte packets | --- | --- | --- | --- | ----- | | | | | | 927.0721000000001 | -Trace 2: +### Trace 2 ```bash $ traceroute -v google.com @@ -115,7 +115,7 @@ traceroute to google.com (142.251.215.238), 64 hops max, 40 byte packets | --- | --- | --- | --- | ----- | | | | | | 262.969| -Trace 3: +### Trace 3 ```bash $ traceroute -v google.com @@ -148,29 +148,24 @@ traceroute to google.com (142.251.215.238), 64 hops max, 40 byte packets | --- | --- | --- | --- | ----- | | | | | | 263.380| -Analysis +### Analysis The traceroute measurements reveal several important network characteristics: -Round-Trip Time Analysis: - -- Measurement 1 (14:34): Highest average RTT (34.34 ms) with largest standard deviation (25.61 ms), indicating significant network variability -- Measurements 2 & 3 (14:39, 15:08): Similar performance with averages around 15.4 ms and standard deviations around 9.1 ms, showing more stable conditions - -Router Path Analysis: - -- Measurement 1: 13 hops - took a longer path through Google's network infrastructure -- Measurements 2 & 3: 8 hops - more direct routing to the same destination IP (142.251.215.238) -- The hop count reduction suggests load balancing and route optimization occurred between measurements - -Network Behavior Patterns: - -- Initial measurement showed higher latency and variability, possibly due to: - - Cold cache effects - - Different routing through Google's CDN - - Network congestion at that time -- Subsequent measurements benefited from optimized routing and reduced congestion -- The consistent performance in measurements 2 and 3 indicates stable network conditions during that timeframe +* Round-Trip Time Analysis + - Measurement 1 (14:34): Highest average RTT (34.34 ms) with largest standard deviation (25.61 ms), indicating significant network variability + - Measurements 2 & 3 (14:39, 15:08): Similar performance with averages around 15.4 ms and standard deviations around 9.1 ms, showing more stable conditions +* Router Path Analysis + - Measurement 1: 13 hops - took a longer path through Google's network infrastructure + - Measurements 2 & 3: 8 hops - more direct routing to the same destination IP (142.251.215.238) + - The hop count reduction suggests load balancing and route optimization occurred between measurements +* Network Behavior Patterns + - Initial measurement showed higher latency and variability, possibly due to: + - Cold cache effects + - Different routing through Google's CDN + - Network congestion at that time + - Subsequent measurements benefited from optimized routing and reduced congestion + - The consistent performance in measurements 2 and 3 indicates stable network conditions during that timeframe Key Findings: @@ -179,7 +174,7 @@ Key Findings: 3. Latency Consistency: Later measurements showed lower variability, indicating more predictable performance 4. 
Load Balancing: Different destination IPs (142.251.32.78 vs 142.251.215.238) demonstrate Google's distributed infrastructure -### 1.2 Internet Protocol Stack Layers (5%) +## 1.2 Internet Protocol Stack Layers (5%) > What are the five layers in the Internet protocol stack? Develop a table to summarise what each layer does. @@ -193,7 +188,7 @@ The Internet protocol stack consists of five layers, each with specific responsi | 2 | Link | Handles data transfer between adjacent network nodes (Ethernet, WiFi) | | 1 | Physical | Transmits raw bits over physical medium (cables, radio waves) | -#### Layer Details: +### Layer Details - Application Layer: Interfaces directly with software applications, providing services like web browsing (HTTP), email (SMTP), file transfer (FTP), and domain name resolution (DNS). - Transport Layer: Ensures reliable communication between applications on different hosts. TCP provides connection-oriented, reliable delivery while UDP offers connectionless, faster but unreliable delivery. @@ -201,7 +196,7 @@ The Internet protocol stack consists of five layers, each with specific responsi - Link Layer: Manages data transmission between directly connected nodes on the same network segment. Handles error detection and correction at the local level. - Physical Layer: Converts digital data into electrical, optical, or radio signals for transmission over physical media like copper wires, fiber optics, or wireless channels. -### 1.3 Packet-Switched vs Circuit-Switched Networks (5%) +## 1.3 Packet-Switched vs Circuit-Switched Networks (5%) > What are packet-switched network and circuit-switched network, > respectively? Develop a table to summarise their features, pros, @@ -236,31 +231,31 @@ is complete. | Fault Tolerance | Robust, automatic rerouting | Vulnerable, circuit failure breaks connection | | Performance | Variable delays, jitter possible | Predictable, consistent performance | -#### Advantages and Disadvantages: +### Advantages and Disadvantages -Packet-Switched Networks: +#### Packet-Switched Networks - Pros: Efficient bandwidth usage, robust against link failures, cost-effective, supports multiple simultaneous connections, flexible routing - Cons: Variable delays, potential packet loss, no guaranteed service quality, requires complex routing protocols, possible congestion -Circuit-Switched Networks: +#### Circuit-Switched Networks - Pros: Guaranteed bandwidth, predictable performance, simple once established, suitable for real-time applications, no packet overhead - Cons: Inefficient resource utilization, blocking when circuits unavailable, requires setup/teardown time, expensive for bursty traffic, poor fault tolerance -#### Examples: +### Examples: - Packet-switched: Internet (IP), Ethernet LANs, modern data networks, email systems, web browsing - Circuit-switched: Traditional telephone networks (PSTN), dedicated leased lines, T1/T3 connections -### 1.4 Network Delays and Traffic Intensity (5%) +## 1.4 Network Delays and Traffic Intensity (5%) > What are processing delay, queuing delay, transmission delay, and > propagation delay, respectively? Where does each delay occur? What > is traffic intensity? Why should the traffic intensity be no greater > than one (1) when designing a computer network? -#### Four Types of Network Delays: +### Four Types of Network Delays 1. 
Processing Delay (`d_proc`): - Definition: Time required for a router to examine the packet header and determine where to direct the packet @@ -279,7 +274,7 @@ Circuit-Switched Networks: - Where it occurs: Along the physical transmission medium (cables, fiber, wireless) - Formula: `d_prop = d/s` (where d = physical distance, s = propagation speed) -#### Traffic Intensity: +### Traffic Intensity Definition: Traffic intensity `(ρ) = La/R`, where: @@ -304,11 +299,11 @@ Design Principle: Networks must be designed with ρ < 1 to ensure: A safety margin (typically `ρ ≤ 0.7-0.8`) is often used to account for traffic variability and ensure good performance. -### 1.5 Web Caching (5%) +## 1.5 Web Caching (5%) > What is Web-caching? When may Web-caching be more useful in a university? What problem does the conditional GET in HTTP aim to solve? -#### What is Web Caching? +### What is Web Caching? Web caching is a technique where frequently requested web content (HTML pages, images, videos, etc.) is stored temporarily in locations closer to users than the origin server. When a user requests cached content, it can be served from the cache instead of fetching it from the distant origin server, reducing response time and network traffic. @@ -318,32 +313,31 @@ Components: - Cache hit: When requested content is found in the cache - Cache miss: When requested content is not in cache and must be fetched from origin server -#### Web Caching Usefulness in Universities: +### Web Caching Usefulness in Universities Web caching is particularly useful in university environments due to: 1. Shared Content Patterns: -- Multiple students often access the same educational resources, research papers, and popular websites -- High likelihood of cache hits for commonly accessed materials - + - Multiple students often access the same educational resources, research papers, and popular websites + - High likelihood of cache hits for commonly accessed materials 2. Bandwidth Optimization: -- Universities typically have limited internet bandwidth shared among thousands of users -- Caching reduces external traffic, preserving bandwidth for unique requests - + - Universities typically have limited internet bandwidth shared among thousands of users + - Caching reduces external traffic, preserving bandwidth for unique requests 3. Cost Reduction: -- Reduces bandwidth costs by serving content locally instead of repeatedly downloading from internet -- Improves network efficiency during peak usage periods (class times, assignment deadlines) - + - Reduces bandwidth costs by serving content locally instead of repeatedly downloading from internet + - Improves network efficiency during peak usage periods (class times, assignment deadlines) 4. Performance Improvement: -- Faster response times for students accessing cached educational content -- Reduced load on university's internet connection during high-traffic periods + - Faster response times for students accessing cached educational content + - Reduced load on university's internet connection during high-traffic periods -#### Conditional GET in HTTP: +### Conditional GET in HTTP Problem Addressed: + The conditional GET mechanism solves the problem of cache consistency - ensuring that cached content is up-to-date without unnecessarily downloading unchanged content. How it works: + 1. Initial Request: When a web object is first cached, the cache stores the object along with its `Last-Modified` date or `ETag` (entity tag) 2. 
Subsequent Requests: When the same object is requested again, the cache sends a conditional GET request to the origin server with: - `If-Modified-Since: <last-modified-date>` header, or @@ -353,19 +347,29 @@ How it works: - 200 OK: If content has changed, server sends the updated object with new Last-Modified/ETag Benefits: + - Maintains cache freshness without wasting bandwidth on unchanged content - Reduces server load and network traffic - Provides efficient mechanism for cache validation -### 1.6 Email Protocol Analysis (5%) +## 1.6 Email Protocol Analysis (5%) -> (5%) Suppose you have a Web-based email account, such as Gmail, and you have just sent a message to a friend, Alice, who accesses her mail from her mail server using IMAP. Assume that both you and Alice are using a smartphone to access emails via Wi-Fi at home. List all the network protocols that may be involved in sending and receiving the email. Discuss in detail how the message went from your smartphone to Alice’s smartphone—that is, how the message went through all the network protocol layers on each of the network devices involved in the communication. Ignore everything between your ISP and Alice's ISP. +> Suppose you have a Web-based email account, such as Gmail, and you +> have just sent a message to a friend, Alice, who accesses her mail +> from her mail server using IMAP. Assume that both you and Alice are +> using a smartphone to access emails via Wi-Fi at home. List all the +> network protocols that may be involved in sending and receiving the +> email. Discuss in detail how the message went from your smartphone +> to Alice's smartphone - that is, how the message went through all the +> network protocol layers on each of the network devices involved in +> the communication. Ignore everything between your ISP and Alice's +> ISP. Scenario: Web-based email (Gmail) sent to Alice who accesses email via IMAP on smartphone over Wi-Fi. -#### Network Protocols Involved: +### Network Protocols Involved -Sender Side (Web-based Gmail): +#### Sender Side (Web-based Gmail) - HTTP/HTTPS: Communication between browser and Gmail web server - SMTP: Gmail server sending email to Alice's mail server @@ -375,7 +379,7 @@ Sender Side (Web-based Gmail): - Wi-Fi (802.11): Wireless LAN protocol - Ethernet: If Wi-Fi access point connects via wired network -Receiver Side (Alice's IMAP access): +#### Receiver Side (Alice's IMAP access) - IMAP: Alice's smartphone retrieving email from her mail server - TCP: Reliable transport for IMAP @@ -383,9 +387,9 @@ Receiver Side (Alice's IMAP access): - Wi-Fi (802.11): Wireless connection to home network - DNS: Resolving mail server address -#### Detailed Message Flow Through Protocol Layers: +### Detailed Message Flow Through Protocol Layers: -Phase 1: Sending Email (Your Smartphone → Gmail Server) +#### Phase 1: Sending Email (Your Smartphone -> Gmail Server) 1. Application Layer: You compose email in Gmail web interface 2. HTTP/HTTPS: Browser sends POST request with email data to Gmail server @@ -394,7 +398,7 @@ Phase 1: Sending Email (Your Smartphone → Gmail Server) 5. Link Layer: Wi-Fi frames the IP packets with MAC addresses (phone → router) 6. Physical Layer: Radio waves transmit frames to Wi-Fi access point -Phase 2: Gmail Processing and SMTP Delivery +#### Phase 2: Gmail Processing and SMTP Delivery 1. Gmail Server Processing: Receives HTTP request, extracts email, prepares for SMTP delivery 2. 
DNS Resolution: Gmail server queries DNS to find Alice's mail server IP address @@ -404,7 +408,7 @@ Phase 2: Gmail Processing and SMTP Delivery 6. Network Layer: IP routes packets through internet infrastructure 7. Link Layer: Various link technologies (Ethernet, fiber, etc.) carry packets -Phase 3: Alice Retrieving Email (Alice's Mail Server → Smartphone) +#### Phase 3: Alice Retrieving Email (Alice's Mail Server → Smartphone) 1. IMAP Request: Alice's email app sends IMAP commands to check for new messages 2. Transport Layer: TCP establishes connection (port 143/993), segments IMAP requests @@ -412,7 +416,7 @@ Phase 3: Alice Retrieving Email (Alice's Mail Server → Smartphone) 4. Link Layer: Wi-Fi frames carry IP packets (phone → Wi-Fi router → ISP) 5. Physical Layer: Radio waves and various physical media transmit signals -Phase 4: Email Delivery to Alice's Phone +#### Phase 4: Email Delivery to Alice's Phone 1. IMAP Response: Mail server responds with email content via IMAP protocol 2. Transport Layer: TCP reliably delivers email data to Alice's phone @@ -427,13 +431,18 @@ Key Protocol Stack Traversals: - Upward: Each device processes from Physical → Application when receiving - Multiple Hops: Email traverses multiple network devices (routers, switches) between sender and receiver, each processing up to Network layer and back down -## Part 2: Long Answer Questions (70%) +# Part 2: Long Answer Questions (70%) > Solve the following network problems and show your work in detail. -### 2.1 File Transfer Analysis (20%) +## 2.1 File Transfer Analysis (20%) -> Consider that you are submitting your assignment in a compressed file from your computer at home to the university server that is hosting your online course. Your large file is segmented into smaller packets before it is sent into the first link. Each packet is 10,000 bits long, including 100 bits of header. Assume the size of the assignment file is 10 MB. +> Consider that you are submitting your assignment in a compressed +> file from your computer at home to the university server that is +> hosting your online course. Your large file is segmented into smaller +> packets before it is sent into the first link. Each packet is 10,000 +> bits long, including 100 bits of header. Assume the size of the +> assignment file is 10 MB. Given: @@ -441,16 +450,17 @@ Given: - Packet size: 10,000 bits (including 100 bits header) - Payload per packet: 9,900 bits -#### a) Number of packets +### a) Number of packets Number of packets = Total file bits ÷ Payload bits per packet = 80 × 10^6 bits ÷ 9,900 bits/packet = 8,080.81 packets = 8,081 packets (rounded up) -#### b) Links identified using traceroute +### b) Links identified using traceroute Based on traceroute to google.com (representing university server path): + 1. Home router → ISP gateway (192.168.1.254 → 192.168.0.1) 2. ISP local → ISP regional (192.168.0.1 → 10.139.230.1) 3. 
ISP → Internet backbone (10.139.230.1 → 209.85.174.62) @@ -459,20 +469,20 @@ Based on traceroute to google.com (representing university server path): Total identified links: 14 hops -#### c) Link speed calculations +### c) Link speed calculations Based on traceroute RTT measurements: -| Link | From → To | RTT (ms) | Estimated Speed | -|------|-----------|----------|----------------| -| 1 | Home → Router | 16 | ~100 Mbps (Local Ethernet/Wi-Fi) | -| 2 | Router → ISP | 16 | ~50 Mbps (Residential broadband) | -| 3 | ISP Local | 18 | ~1 Gbps (ISP infrastructure) | -| 4-14 | Backbone | 38-79 | ~10+ Gbps (Internet backbone) | +| Link | From → To | RTT (ms) | Estimated Speed | +| ----- | --------------- | ---------- | ---------------------------------- | +| 1 | Home → Router | 16 | ~100 Mbps (Local Ethernet/Wi-Fi) | +| 2 | Router → ISP | 16 | ~50 Mbps (Residential broadband) | +| 3 | ISP Local | 18 | ~1 Gbps (ISP infrastructure) | +| 4-14 | Backbone | 38-79 | ~10+ Gbps (Internet backbone) | First link speed estimation: ~50 Mbps (residential upload speed) -#### d) Time for last packet to enter first link +### d) Time for last packet to enter first link Transmission time per packet = Packet size ÷ Link speed = 10,000 bits ÷ (50 × 10^6 bps) @@ -484,7 +494,7 @@ Time for all packets to enter first link: At t₀ + 1.62 seconds, the last packet will be pushed into the first link. -#### e) Time for last packet to arrive at university server +### e) Time for last packet to arrive at university server Total end-to-end delay consists of: - Transmission delays: Each link transmits the packet @@ -504,17 +514,20 @@ Total time = 1.62 s + 0.0028 s + 0.079 s + 0.014 s + 0.070 s The last packet will arrive at the university server at t₀ + 1.79 seconds. -### 2.2 Propagation Delay and Bandwidth-Delay Product (20%) +## 2.2 Propagation Delay and Bandwidth-Delay Product (20%) -> Consider that you are submitting another assignment from your home computer to the university server, and you have worked out a list of network links between your computer and the university server. +> Consider that you are submitting another assignment from your home +> computer to the university server, and you have worked out a list +> of network links between your computer and the university server. Based on the network path from question 2.1 with links to university server. -#### a) Total distance calculation +### a) Total distance calculation Distance estimation using geographic locations: From traceroute analysis and typical routing: + 1. Local network: Home → ISP (≈ 5 km) 2. Regional: ISP local → regional (≈ 50 km) 3. Long-haul: Regional → major city hub (≈ 200 km) @@ -525,9 +538,10 @@ Estimated total distance = 5 + 50 + 200 + 1,500 + 20 = 1,775 km Total distance ≈ 1,775,000 meters -#### b) Propagation delay calculation +### b) Propagation delay calculation Given: + - Propagation speed: 2 × 10^8 m/s - Total distance: 1,775,000 m @@ -537,18 +551,19 @@ T_prop = distance ÷ speed = 0.008875 seconds = 8.875 ms -#### c) Bandwidth-delay product +### c) Bandwidth-delay product Assuming all links have the same speed R bps: From our analysis, the bottleneck is likely the residential connection at ~50 Mbps. 
Bandwidth-delay product: + R × T_prop = 50 × 10^6 bps × 8.875 × 10^-3 s = 443,750 bits = 443.75 Kbits -#### d) Maximum bits in links (continuous transmission) +### d) Maximum bits in links (continuous transmission) When sending a file continuously as one big stream, the maximum number of bits in the network links at any given time equals the bandwidth-delay product. @@ -556,31 +571,27 @@ Maximum bits in links = R × T_prop = 443,750 bits This represents the "capacity" of the network pipe - the total number of bits that can be "in flight" between sender and receiver at any moment. -#### e) Bandwidth-delay product implications +### e) Bandwidth-delay product implications The bandwidth-delay product represents the network pipe capacity and has several important implications: 1. Buffer Requirements: -- Minimum buffer size needed for optimal performance -- TCP window size should be at least R × T_prop for maximum throughput - + - Minimum buffer size needed for optimal performance + - TCP window size should be at least R × T_prop for maximum throughput 2. Network Utilization: -- To fully utilize the available bandwidth, the sender must keep at least R × T_prop bits in the network -- Smaller send windows result in underutilized links - + - To fully utilize the available bandwidth, the sender must keep at least R × T_prop bits in the network + - Smaller send windows result in underutilized links 3. Protocol Design: -- Determines minimum window size for sliding window protocols -- Critical for TCP congestion control and flow control mechanisms - + - Determines minimum window size for sliding window protocols + - Critical for TCP congestion control and flow control mechanisms 4. Performance Impact: -- High bandwidth-delay product networks (like satellite links) require larger buffers and windows -- Affects protocol efficiency and responsiveness - + - High bandwidth-delay product networks (like satellite links) require larger buffers and windows + - Affects protocol efficiency and responsiveness 5. Storage vs. Transmission Trade-off: -- Shows the relationship between link capacity and propagation delay -- Higher bandwidth or longer delays both increase the "storage" capacity of the network itself + - Shows the relationship between link capacity and propagation delay + - Higher bandwidth or longer delays both increase the "storage" capacity of the network itself -### 2.3 Web Cache Implementation and Performance (20%) +## 2.3 Web Cache Implementation and Performance (20%) > You have learned that a Web cache can be useful in some cases. In this problem, you will investigate how useful a Web cache can be at a home. First, you need to download Apache server and install and run it as a proxy server on a computer on your home network. Then, write a brief report on what you did to make it work and how you are using it on all your devices on your home network. > Assume your family has six members. Each member likes to download short videos from the Internet to watch on their personal devices. All these devices are connected to the Internet through Wi-Fi. Further assume the average object size of each short video is 100 MB and the average request rate from all devices to servers on the Internet is three requests per minute. Five seconds is the average amount of time it takes for the router on the ISP side of your Internet link to forward an HTTP request to a server on the Internet and receive a response. 
@@ -590,14 +601,14 @@ The bandwidth-delay product represents the network pipe capacity and has several > * If average access delay β is defined as α/(μ−1), what is the average access delay your family members will experience when watching the short videos? > * If the total average response time is defined as 5+β, and the miss rate of your proxy server is 0.5, what will be the total average response time? -#### Apache Proxy Server Implementation +### Apache Proxy Server Implementation Implementation Steps (Theoretical): 1. Download and Install Apache HTTP Server - Download from apache.org - Configure as reverse proxy using mod_proxy module - - Enable mod_cache and mod_disk_cache for caching functionality + - Enable `mod_cache` and `mod_disk_cache` for caching functionality 2. Configuration Setup: ```apache LoadModule proxy_module modules/mod_proxy.so @@ -619,7 +630,7 @@ Implementation Steps (Theoretical): - Set proxy settings in browsers, mobile devices, smart TVs - Router-level configuration for transparent proxying -#### Performance Analysis +### Performance Analysis Given parameters: @@ -629,7 +640,7 @@ Given parameters: - ISP response time: 5 seconds average - Home internet connection: ~50 Mbps (from previous analysis) -#### a) Average time (α) for home router to receive video object +### a) Average time (α) for home router to receive video object Transmission time calculation: @@ -644,7 +655,7 @@ Total time (α): α = 5 seconds + 16 seconds = 21 seconds ``` -#### b) Traffic intensity (μ) without caching +### b) Traffic intensity (μ) without caching Traffic calculation: @@ -660,7 +671,7 @@ Traffic intensity (ρ): Note: ρ > 1 indicates system overload - requests arrive faster than they can be served. ``` -#### c) Average access delay (β) +### c) Average access delay (β) ``` Given formula: β = α/(μ-1) @@ -678,7 +689,7 @@ Since the result is negative, this indicates queue buildup and system instabilit Corrected interpretation: With ρ = 6.3, average delay approaches infinity due to queue buildup. ``` -#### d) Total average response time with proxy caching +### d) Total average response time with proxy caching Given: @@ -705,7 +716,7 @@ Total response time = (Hit rate × Hit time) + (Miss rate × Miss time) With proxy caching, total average response time = 10.55 seconds -#### Benefits Analysis +### Benefits Analysis * Without cache: System unstable, infinite delays * With 50% cache hit rate: 10.55 seconds average response time @@ -719,7 +730,7 @@ Performance improvement: Key insight: Even a modest 50% cache hit rate transforms an unstable system into a functional one with reasonable response times. -### 2.4 File Distribution Comparison (10%) +## 2.4 File Distribution Comparison (10%) > You have learned that a file can be distributed to peers in either client–server mode or peer-to-peer (P2P) mode. Consider distributing a large file of F = 21 GB to N peers. The server has an upload rate of Us = 1 Gbps, and each peer has a download rate of Di = 20 Mbps and an upload rate of U. For N = 10, 100, and 1,000 and U = 300 Kbps, 7000 Kbps, and 2 Mbps, develop a table giving the minimum distribution time for each of the combination of N and U for both client–server distribution and P2P distribution. Comment on the features of client–server distribution and P2P distribution and the differences between the two. 
@@ -730,7 +741,7 @@ Given parameters: - Each peer download rate: D_i = 20 Mbps = 20 × 10^6 bps - Each peer upload rate: U = varies (300 Kbps, 7000 Kbps, 2 Mbps) -#### Distribution Time Calculations +### Distribution Time Calculations ``` Client-Server Distribution Time: @@ -744,24 +755,24 @@ P2P Distribution Time: T_p2p = max(F/U_s, F/D_min, NF/(U_s + ΣU_i)) ``` -#### Results Table - -| N | U (Kbps) | Client-Server (seconds) | P2P (seconds) | -|---|----------|------------------------|---------------| -| N = 10 | | | | -| | 300 | max(1680, 8400) = 8400 | max(168, 8400, 1512) = 8400 | -| | 7000 | max(1680, 8400) = 8400 | max(168, 8400, 151.4) = 8400 | -| | 2000 | max(1680, 8400) = 8400 | max(168, 8400, 280) = 8400 | -| N = 100 | | | | -| | 300 | max(16800, 8400) = 16800 | max(168, 8400, 5600) = 8400 | -| | 7000 | max(16800, 8400) = 16800 | max(168, 8400, 240) = 8400 | -| | 2000 | max(16800, 8400) = 16800 | max(168, 8400, 420) = 8400 | -| N = 1000 | | | | -| | 300 | max(168000, 8400) = 168000 | max(168, 8400, 56000) = 56000 | -| | 7000 | max(168000, 8400) = 168000 | max(168, 8400, 2400) = 8400 | -| | 2000 | max(168000, 8400) = 168000 | max(168, 8400, 4200) = 8400 | - -#### Detailed Calculations +### Results Table + +| N | U (Kbps) | Client-Server (seconds) | P2P (seconds) | +| -- | -- | -- | -- | +| N = 10 | | | | +| | 300 | max(1680, 8400) = 8400 | max(168, 8400, 1512) = 8400 | +| | 7000 | max(1680, 8400) = 8400 | max(168, 8400, 151.4) = 8400 | +| | 2000 | max(1680, 8400) = 8400 | max(168, 8400, 280) = 8400 | +| N = 100 | | | | +| | 300 | max(16800, 8400) = 16800 | max(168, 8400, 5600) = 8400 | +| | 7000 | max(16800, 8400) = 16800 | max(168, 8400, 240) = 8400 | +| | 2000 | max(16800, 8400) = 16800 | max(168, 8400, 420) = 8400 | +| N = 1000 | | | | +| | 300 | max(168000, 8400) = 168000 | max(168, 8400, 56000) = 56000 | +| | 7000 | max(168000, 8400) = 168000 | max(168, 8400, 2400) = 8400 | +| | 2000 | max(168000, 8400) = 168000 | max(168, 8400, 4200) = 8400 | + +### Detailed Calculations For N=10, U=300 Kbps: @@ -773,9 +784,7 @@ For N=1000, U=7000 Kbps: - Client-server: max(1000×168×10^9/10^9, 168×10^9/20×10^6) = max(168000, 8400) = 168000s - P2P: max(168, 8400, 1000×168×10^9/(10^9+1000×7×10^6)) = max(168, 8400, 2400) = 8400s -#### Analysis and Commentary - -#### Client-Server Distribution Features: +### Client-Server Distribution Features: Advantages: @@ -793,7 +802,7 @@ Disadvantages: Scaling characteristics: T_cs grows linearly with N when NF/U_s dominates -#### P2P Distribution Features: +### P2P Distribution Features: Advantages: @@ -811,7 +820,7 @@ Disadvantages: Scaling characteristics: Better scalability as collective upload capacity grows with N -#### Key Differences: +### Key Differences: 1. Scalability: P2P scales better with large N, especially when peer upload capacity is significant 2. Resource utilization: P2P utilizes all available upload bandwidth, client-server wastes peer capacity |
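
A minimal Python sketch of the two formulas quoted in Section 2.4, using the stated parameters (F = 168 × 10^9 bits, u_s = 1 Gbps, d_min = 20 Mbps, and a common peer upload rate u), can be used to double-check each entry in the results table:

```python
# Sketch: recompute the minimum distribution times from Section 2.4.
# Parameters follow that section: F = 21 GB = 168e9 bits, u_s = 1 Gbps,
# d_i = 20 Mbps, and every peer uploads at the same rate u.

F = 168e9        # file size in bits
U_S = 1e9        # server upload rate (bps)
D_MIN = 20e6     # slowest peer download rate (bps)


def t_client_server(n: int) -> float:
    """T_cs = max(N*F/u_s, F/d_min): server pushes N copies; slowest peer downloads one."""
    return max(n * F / U_S, F / D_MIN)


def t_p2p(n: int, u: float) -> float:
    """T_p2p = max(F/u_s, F/d_min, N*F/(u_s + N*u)): server copy, slowest download, aggregate upload."""
    return max(F / U_S, F / D_MIN, n * F / (U_S + n * u))


if __name__ == "__main__":
    print(f"{'N':>5}  {'u (Kbps)':>8}  {'client-server (s)':>18}  {'P2P (s)':>10}")
    for n in (10, 100, 1000):
        for u in (300e3, 7000e3, 2000e3):
            print(f"{n:>5}  {u / 1e3:>8.0f}  {t_client_server(n):>18.0f}  {t_p2p(n, u):>10.0f}")
```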
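A similar sketch covers the arithmetic in Sections 2.1 through 2.3: packet count, time for the last packet to enter the first link, propagation delay, bandwidth-delay product, the video access time α, and the cached response time. The ~50 Mbps first link, the 1,775 km path, and the 0.1 s cache-hit service time are working assumptions carried over from those sections (the hit time is implied by the 10.55 s result rather than stated directly):

```python
import math

# Sanity-check of the arithmetic in Sections 2.1-2.3, using the same
# working assumptions as those sections.

FILE_BITS = 10 * 8 * 10**6    # 10 MB assignment file, in bits
PACKET_BITS = 10_000          # packet size, including the 100-bit header
PAYLOAD_BITS = PACKET_BITS - 100

FIRST_LINK_BPS = 50e6         # estimated residential uplink (Section 2.1c)
DISTANCE_M = 1_775_000        # estimated end-to-end path length (Section 2.2a)
PROP_SPEED_MPS = 2e8          # propagation speed

packets = math.ceil(FILE_BITS / PAYLOAD_BITS)               # 2.1(a): 8,081
t_into_first_link = packets * PACKET_BITS / FIRST_LINK_BPS  # 2.1(d): ~1.62 s
d_prop = DISTANCE_M / PROP_SPEED_MPS                        # 2.2(b): ~8.875 ms
bdp = FIRST_LINK_BPS * d_prop                               # 2.2(c)/(d): ~443,750 bits

VIDEO_BITS = 100 * 8 * 10**6  # 100 MB short video (Section 2.3)
ISP_DELAY_S = 5.0             # average ISP-side request/response time
alpha = ISP_DELAY_S + VIDEO_BITS / FIRST_LINK_BPS           # 2.3(a): 21 s

MISS_RATE = 0.5
HIT_TIME_S = 0.1              # assumed cache-hit service time (implied, not given)
avg_response = (1 - MISS_RATE) * HIT_TIME_S + MISS_RATE * alpha  # 2.3(d): 10.55 s

print(f"packets:                 {packets}")
print(f"last packet enters link: {t_into_first_link:.2f} s")
print(f"propagation delay:       {d_prop * 1e3:.3f} ms")
print(f"bandwidth-delay product: {bdp:,.0f} bits")
print(f"alpha (video fetch):     {alpha:.1f} s")
print(f"avg response w/ cache:   {avg_response:.2f} s")
```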
