hosts (max.) | host address range | mask | bit pattern of last byte (H = host bit) |
---|---|---|---|
200 (254) | 192.35.149.1 - 192.35.149.254 | ff ff ff 00 | HHHH HHHH |
60 (62) | 192.35.150.1 - 192.35.150.62 | ff ff ff c0 | 00HH HHHH |
30 (30) | 192.35.150.65 - 192.35.150.94 | ff ff ff e0 | 010H HHHH |
14 (14) | 192.35.150.97 - 192.35.150.110 | ff ff ff f0 | 0110 HHHH |
One can obviously also allocate 4 subnets, but that would be wasteful.
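For illustration only (not part of the required answer), a small C sketch that derives the prefix length, mask and usable host count for each requirement; the host counts are taken from the question, everything else is our own:

    #include <stdio.h>
    #include <stdint.h>

    /* Smallest subnet (i.e., largest prefix) whose block holds at least
     * `hosts` usable addresses; two addresses per subnet are reserved
     * for the network and broadcast addresses. */
    static int prefix_for(unsigned hosts)
    {
        for (int prefix = 30; prefix >= 1; prefix--) {
            uint64_t size = (uint64_t)1 << (32 - prefix);
            if (size - 2 >= hosts)
                return prefix;
        }
        return 0;
    }

    int main(void)
    {
        unsigned needs[] = { 200, 60, 30, 14 };
        for (int i = 0; i < 4; i++) {
            int p = prefix_for(needs[i]);
            uint32_t mask = 0xffffffffu << (32 - p);
            uint64_t usable = ((uint64_t)1 << (32 - p)) - 2;
            printf("%3u hosts -> /%d, mask %08x, %llu usable addresses\n",
                   needs[i], p, (unsigned)mask, (unsigned long long)usable);
        }
        return 0;
    }

Running it reproduces the masks in the table: ff ff ff 00, ff ff ff c0, ff ff ff e0 and ff ff ff f0.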
10 pts.; 4 subnets: 4 pts.
The overhead is 12 bytes for RTP, 8 bytes for UDP, and 20 bytes for IP, for a total of 40 bytes. If IP-in-IP encapsulation is used in multicast tunnels, an additional 20 bytes are added. (Note: RTP was not discussed in class; thus, answers without RTP are correct, too.)
3 pts.
The receiver needs a timestamp to reconstruct the timing, a sequence number if it wants to detect losses, and possibly some information to indicate the encoding being used. (It may also be useful to indicate the start or end of a talkspurt or video frame.)
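As an illustration (the field widths are our own assumptions, loosely modeled on RTP, and not a standardized format), such information could be carried in a header like the following:

    #include <stdint.h>

    /* Hypothetical application-level media header carrying the fields
     * named above; the layout is an assumption for illustration only. */
    struct media_hdr {
        uint8_t  payload_type;   /* identifies the encoding in use        */
        uint8_t  marker;         /* e.g., start of a talkspurt or frame   */
        uint16_t seq;            /* sequence number for loss detection    */
        uint32_t timestamp;      /* sampling instant, for playout timing  */
    };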
3 pts.
The IP packet has to be split into three parts; it is not particularly important how many bytes are in each fragment. A possible splitting would be to have 1500 bytes in the first two fragments and 1040 bytes in the last. (The two extra IP headers require 40 bytes; we assume there are no IP options.) The identification field is used to identify fragments belonging to the same packet and has the same value in all three fragments. The more fragments bit is one in the first two fragments and zero in the last. The fragment offset values are 0, 1480 and 2960 bytes. (The values in the offset field are coded in multiples of eight bytes, thus the actual field values are 0, 185 and 370.) The relevant protocol fields are the identification field, the more fragments bit and the fragment offset.
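For illustration, a small C program (assuming, consistent with the numbers above, an original datagram of 4000 bytes including a 20-byte header without options, and a 1500-byte MTU) that reproduces the fragment sizes, more fragments bits and offset field values:

    #include <stdio.h>

    int main(void)
    {
        int total = 4000;              /* original datagram, header included   */
        int hdr = 20;                  /* IP header without options            */
        int mtu = 1500;
        int max_data = ((mtu - hdr) / 8) * 8;  /* payload per fragment, multiple of 8 */
        int data_left = total - hdr;
        int offset = 0;                /* fragment offset in bytes             */

        while (data_left > 0) {
            int data = data_left > max_data ? max_data : data_left;
            int more = data_left > data;       /* "more fragments" bit         */
            printf("fragment: %4d bytes, MF=%d, offset=%4d bytes (field value %d)\n",
                   hdr + data, more, offset, offset / 8);
            offset += data;
            data_left -= data;
        }
        return 0;
    }

The output is 1500/1500/1040 bytes, MF = 1, 1, 0, and offset field values 0, 185 and 370, as stated above.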
6 pts.
Traceroute sends UDP datagrams to the destination system, addressed to a port unlikely to be in use, incrementing the TTL value by one for each attempt. A router that decrements the TTL to zero drops the packet and sends an ICMP "time exceeded" message back to the source. If a datagram reaches the destination, that host returns "port unreachable", thus terminating the process. Traceroute makes it possible to trace all routers between the source and the destination, to determine how far packets travel in case of faults, and to measure round-trip times to points along the path.
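A minimal C sketch of the sending side (the destination address is a placeholder, the port is the one traditionally used by traceroute, and the raw ICMP socket on which a real traceroute reads the replies is omitted):

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <string.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int s = socket(PF_INET, SOCK_DGRAM, 0);
        if (s < 0) { perror("socket"); return -1; }

        struct sockaddr_in dst;
        memset(&dst, 0, sizeof(dst));
        dst.sin_family = AF_INET;
        dst.sin_port = htons(33434);                    /* port unlikely to be in use */
        inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr); /* placeholder destination    */

        for (int ttl = 1; ttl <= 30; ttl++) {
            /* each probe travels one hop further than the previous one */
            if (setsockopt(s, IPPROTO_IP, IP_TTL, &ttl, sizeof(ttl)) < 0) {
                perror("setsockopt"); return -1;
            }
            if (sendto(s, "probe", 5, 0, (struct sockaddr *)&dst, sizeof(dst)) < 0)
                perror("sendto");
            /* a real traceroute now waits on a raw ICMP socket for either
             * "time exceeded" (intermediate router) or "port unreachable"
             * (destination reached) and records the sender and round-trip time */
        }
        close(s);
        return 0;
    }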
4 pts.
Ping sends an ICMP echo request to the destination host, which is answered by an ICMP echo reply. The ping packet contains a timestamp, which is used to calculate the round-trip time between source and destination. Ping is used to determine whether a host is reachable and to measure round-trip times.
4 pts.
A host A could send a broadcast packet if it wanted to reach host B on the same Ethernet. If the destination B responded with a unicast packet back to A (it knows A's Ethernet address from the source address in the Ethernet frame), A would now know the Ethernet address of B and could send future packets directly to that address. This works only if B responds to A before A needs to send another packet. ARP requires an additional broadcast packet; a host needs to hold the data packet until an ARP answer arrives.
ARP has the advantage of relative efficiency due to caching. It is subject to security problems due to spoofing. Unlike broadcasting the first packet, it has to hold (and discard on timeout) IP packets until the ARP reply comes back.
6 pts.
IPv6 has 128-bit source and destination addresses rather than 32-bit ones. It offers a flow label field for grouping packets for similar treatment, including packet priorities. IPv6 options are not limited in size. Instead of a total length, the header contains a payload length. Fragmentation information has been moved out of the fixed header into an extension header; fragmentation is performed only end-to-end rather than by routers. The protocol field has been replaced by a field that specifies the type of the next header. IPv6 packets can be larger than 64 kBytes (jumbo payload).
6 pts.
A packet with a valid network number but an invalid host number forces the router to generate ARP requests.
6 pts.
The simulation output below shows the messages exchanged and the development of the routing tables. "4.03 (#31): C -> E: B (2)" means that at time 4.03, packet 31 causes the routing table of node C to contain an entry for destination E, with next hop B and a distance of 2. Times shown are arbitrary.
0.00: A sends #0 to B: (A,0)
0.00: A sends #1 to D: (A,0)
1.00 (#0): B -> A: A (1)
1.00: B sends #2 to A: (A,1) (B,0)
1.00: B sends #3 to C: (A,1) (B,0)
1.00: B sends #4 to E: (A,1) (B,0)
1.01 (#1): D -> A: A (1)
1.01: D sends #5 to A: (A,1) (D,0)
1.01: D sends #6 to E: (A,1) (D,0)
2.00 (#2): A -> B: B (1)
2.00: A sends #7 to B: (B,1) (A,0)
2.00: A sends #8 to D: (B,1) (A,0)
2.01 (#3): C -> A: B (2)
2.01 (#3): C -> B: B (1)
2.01: C sends #9 to B: (A,2) (B,1) (C,0)
2.01: C sends #10 to E: (A,2) (B,1) (C,0)
2.01 (#5): A -> D: D (1)
2.01: A sends #11 to B: (D,1) (A,0)
2.01: A sends #12 to D: (D,1) (A,0)
2.02 (#4): E -> A: B (2)
2.02 (#4): E -> B: B (1)
2.02: E sends #13 to B: (A,2) (B,1) (E,0)
2.02: E sends #14 to C: (A,2) (B,1) (E,0)
2.02: E sends #15 to D: (A,2) (B,1) (E,0)
2.02 (#6): E -> D: D (1)
2.02: E sends #16 to B: (D,1) (E,0)
2.02: E sends #17 to C: (D,1) (E,0)
2.02: E sends #18 to D: (D,1) (E,0)
3.01 (#8): D -> B: A (2)
3.01: D sends #19 to A: (B,2) (D,0)
3.01: D sends #20 to E: (B,2) (D,0)
3.01 (#9): B -> C: C (1)
3.01: B sends #21 to A: (C,1) (B,0)
3.01: B sends #22 to C: (C,1) (B,0)
3.01: B sends #23 to E: (C,1) (B,0)
3.01 (#11): B -> D: A (2)
3.01: B sends #24 to A: (D,2) (B,0)
3.01: B sends #25 to C: (D,2) (B,0)
3.01: B sends #26 to E: (D,2) (B,0)
3.02 (#10): E -> C: C (10)
3.02: E sends #27 to B: (C,10) (E,0)
3.02: E sends #28 to C: (C,10) (E,0)
3.02: E sends #29 to D: (C,10) (E,0)
3.02 (#13): B -> E: E (1)
3.02: B sends #30 to A: (E,1) (B,0)
3.02: B sends #31 to C: (E,1) (B,0)
3.02: B sends #32 to E: (E,1) (B,0)
3.03 (#14): C -> E: E (10)
3.03: C sends #33 to B: (E,10) (C,0)
3.03: C sends #34 to E: (E,10) (C,0)
3.03 (#17): C -> D: E (11)
3.03: C sends #35 to B: (D,11) (C,0)
3.03: C sends #36 to E: (D,11) (C,0)
3.04 (#15): D -> E: E (1)
3.04: D sends #37 to A: (E,1) (D,0)
3.04: D sends #38 to E: (E,1) (D,0)
4.01 (#21): A -> C: B (2)
4.01: A sends #39 to B: (C,2) (A,0)
4.01: A sends #40 to D: (C,2) (A,0)
4.02 (#25): C -> D: B (3)
4.02: C sends #41 to B: (D,3) (C,0)
4.02: C sends #42 to E: (D,3) (C,0)
4.02 (#30): A -> E: B (2)
4.02: A sends #43 to B: (E,2) (A,0)
4.02: A sends #44 to D: (E,2) (A,0)
4.03 (#23): E -> C: B (2)
4.03: E sends #45 to B: (C,2) (E,0)
4.03: E sends #46 to C: (C,2) (E,0)
4.03: E sends #47 to D: (C,2) (E,0)
4.03 (#31): C -> E: B (2)
4.03: C sends #48 to B: (E,2) (C,0)
4.03: C sends #49 to E: (E,2) (C,0)
4.04 (#29): D -> C: E (11)
4.04: D sends #50 to A: (C,11) (D,0)
4.04: D sends #51 to E: (C,11) (D,0)
5.02 (#40): D -> C: A (3)
5.02: D sends #52 to A: (C,3) (D,0)
5.02: D sends #53 to E: (C,3) (D,0)

15 pts.
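The step applied at every line of the form "X -> Y: Z (d)" in the trace above is the usual distance-vector update; a minimal C sketch (node indexing, the INFTY constant and the example input are our own, chosen to match packet #3 in the trace):

    #include <stdio.h>

    #define NODES 5                      /* nodes A..E, indexed 0..4 */
    #define INFTY 9999

    struct route { int next_hop; int dist; };

    /* Distance-vector relaxation: when a node receives the vector `vec`
     * from `neighbor` over a link of cost `link_cost`, every entry that
     * becomes cheaper via that neighbor is replaced.  Returns 1 if the
     * table changed (and must then be re-advertised to all neighbors). */
    static int update(struct route table[NODES], int neighbor, int link_cost,
                      const int vec[NODES])
    {
        int changed = 0;
        for (int dest = 0; dest < NODES; dest++) {
            int via = link_cost + vec[dest];
            if (via < table[dest].dist) {
                table[dest].next_hop = neighbor;
                table[dest].dist = via;
                changed = 1;
            }
        }
        return changed;
    }

    int main(void)
    {
        /* routing table of node C (index 2); only C itself is known initially */
        struct route c[NODES];
        for (int i = 0; i < NODES; i++) { c[i].next_hop = -1; c[i].dist = INFTY; }
        c[2].dist = 0; c[2].next_hop = 2;

        /* B (index 1) advertises (A,1) (B,0) over the B-C link of cost 1,
         * corresponding to packet #3 in the trace above */
        int from_b[NODES] = { 1, 0, INFTY, INFTY, INFTY };
        update(c, 1, 1, from_b);
        printf("C's entry for A: next hop %d, distance %d\n",
               c[0].next_hop, c[0].dist);  /* next hop 1 (B), distance 2 */
        return 0;
    }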
Using a window mechanism for flow control simply means that a sender can keep on sending data up to the size of the window. After that, the sender must stop sending and wait for an acknowledgment from the receiver before it may send more data.
With an RTT of 2 s and a bandwidth of 2 Mb/s, the ideal window size would be 4 Mbits (500 kBytes). A TCP sender can send at most one full window per RTT. So with a window larger than 4 Mbits, say 8 Mbits (which would still be acceptable to the hosts, which have local buffers of 8 Mbits), the source would be sending data at a rate of 4 Mb/s. This of course leads to congestion in the network and to packet losses. It also requires a more "modern" TCP that allows window sizes above 64 kBytes.
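The ideal window is just the bandwidth-delay product; as a quick check in C (values taken from the question):

    #include <stdio.h>

    int main(void)
    {
        double bandwidth = 2e6;   /* 2 Mb/s    */
        double rtt = 2.0;         /* 2 seconds */
        double window_bits = bandwidth * rtt;          /* 4,000,000 bits */
        printf("ideal window: %.0f bits = %.0f kBytes\n",
               window_bits, window_bits / 8 / 1000);   /* 500 kBytes */
        return 0;
    }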
10 pts.
10 pts.
The slow start algorithm adds another window to the TCP source, a congestion window (cwnd), that is measured in packets, i.e., in units of the maximum packet size possible for this connection. When a new connection is established or after the expiration of a retransmission timer, this window is set to the size of one packet. Each time an acknowledgment is received, cwnd is increased by the size of the acknowledged packets. The source may transmit up to the minimum of cwnd and the window size advertised in the acknowledgment packets.
Congestion avoidance: through the exponential increase of the sending rate caused by the slow start algorithm, the capacity of the network will be reached at some point and an intermediate router will start discarding packets. To avoid this, the slow start algorithm was supplemented with the congestion avoidance algorithm. With this algorithm, the congestion window is increased for each acknowledgment as follows: cwnd += 1/cwnd. Congestion avoidance is used when the congestion window is larger than ssthresh, the slow start threshold. During congestion avoidance, the congestion window increases by at most one packet per round trip, instead of doubling each round trip as in slow start.
Since the participating hosts have no means of knowing the ideal window size, these mechanisms allow them to increase their window sizes gradually up to the ideal size.
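A minimal C sketch of the sender-side window bookkeeping described above (counting cwnd in whole packets, the example ssthresh value, and ignoring retransmission details are our simplifications):

    #include <stdio.h>

    static double cwnd;      /* congestion window, in packets    */
    static double ssthresh;  /* slow start threshold, in packets */

    /* connection establishment or expiration of the retransmission timer */
    static void restart(void)
    {
        cwnd = 1.0;
    }

    /* called once per acknowledged packet */
    static void on_ack(void)
    {
        if (cwnd < ssthresh)
            cwnd += 1.0;          /* slow start: cwnd doubles per RTT           */
        else
            cwnd += 1.0 / cwnd;   /* congestion avoidance: about +1 per RTT     */
    }

    int main(void)
    {
        ssthresh = 16.0;          /* example value */
        restart();
        for (int ack = 0; ack < 100; ack++)
            on_ack();
        /* the sender may have min(cwnd, advertised window) packets in flight */
        printf("cwnd after 100 acks: %.1f packets\n", cwnd);
        return 0;
    }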
10 pts.
Time-outs and duplicate acknowledgments. Both force the receiver to wait until the retransmitted packet arrives, causing delays which may be unacceptable for real-time audio and video data.
5 pts.
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <time.h>

/* return the current time of day as a human-readable string */
char *gettime(void)
{
    time_t c = time(0);
    return ctime(&c);
}

int main(int argc, char *argv[])
{
    int s, t;
    struct sockaddr_in sin;
    char *r;
    socklen_t sinlen;

    /* create a TCP socket and bind it to the daytime port (13) */
    if ((s = socket(PF_INET, SOCK_STREAM, 0)) < 0) {
        perror("socket");
        return -1;
    }
    sin.sin_family = AF_INET;
    sin.sin_port = htons(13);
    sin.sin_addr.s_addr = INADDR_ANY;
    if (bind(s, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
        perror("bind");
        return -1;
    }
    if (listen(s, 5) < 0) {
        perror("listen");
        return -1;
    }

    /* accept one connection at a time, write the time, close the connection */
    for (;;) {
        sinlen = sizeof(sin);
        if ((t = accept(s, (struct sockaddr *)&sin, &sinlen)) < 0) {
            perror("accept");
            return -1;
        }
        r = gettime();
        if (write(t, r, strlen(r)) < 0) {
            perror("write");
            return -1;
        }
        if (close(t) < 0) {
            perror("close");
            return -1;
        }
    }
    if (close(s) < 0) {
        perror("close");
        return -1;
    }
    return 0;
}
10 pts.
Forking a process is not worthwhile here since serving each request takes very little time, likely less than creating a new process.
2 pts.
FTP, NFS, HTTP, SMTP (if you consider email to be a file).
FTP (DELE), NFS (REMOVE); also HTTP/1.1, but that was not discussed in class. The telnet and rlogin protocols do not have any notion of deleting files. (After all, one could send an email message requesting deletion of a file and claim that SMTP supports file deletion...)
Telnet, rlogin.
Telnet (commands, preceded by IAC (byte 255)); HTTP (headers and data separated by CRLF); rlogin (2 bytes of 0xff, followed by two flag bytes); NFS (separate packet types); SMTP (commands, with data terminated by a single dot).
Telnet and rlogin (urgent mode), FTP (separate control connection).
Telnet (data mark), rlogin (commands from server to client); ftp (like telnet synch) [we did not discuss this, so this is optional]
FTP (USER, PASS), HTTP (Authorization), NFS (as part of RPC). Note that the telnet and rlogin protocols do not know about authentication; they simply pass on the "Password" request from the server to the client.
8 pts.
http://www.tu-berlin.de/vorlesungen.html
2 pts.