It is now our final week in CST 311 and, in addition to studying for our final, we have a final Wireshark lab and homework assignments about network security. Our final will be a timed, online exam taken at a scheduled time. No pressure!
We’re in our penultimate week of Networking, and we are focusing on two chapters this week–one on the link layer and one on security and encryption.
I’m most interested in learning more about Network security and cryptography.
This week we learned more about the network layer, focusing on the control plane in contrast to the data plane.
We also studied two types of shortest path (least cost) algorithms — Dijkstra’s algorithm and the distance-vector algorithm.
What confused me at first about Dijkstra’s algorithm is that I thought nodes had to be traversed in the order in which they were added to N’. This is not the case. Once a node is in N’, you can traverse to it at any point, as long as it is an immediate neighbor of the node you are currently on.
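To cement that, here is a minimal sketch of Dijkstra's (link-state) algorithm in Python on a toy topology; the node names and link costs below are made up for illustration:

```python
import heapq

def dijkstra(graph, source):
    """Compute least-cost paths from source with Dijkstra's algorithm.

    graph: dict mapping node -> dict of neighbor -> link cost.
    Returns a dict of node -> least cost from source.
    """
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    visited = set()            # N' in the textbook's notation
    pq = [(0, source)]         # min-heap of (cost so far, node)
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)         # u's least cost is now final
        for v, cost in graph[u].items():
            if d + cost < dist[v]:
                dist[v] = d + cost
                heapq.heappush(pq, (d + cost, v))
    return dist

# Toy topology with hypothetical link costs
graph = {
    "u": {"v": 2, "w": 5, "x": 1},
    "v": {"u": 2, "w": 3, "x": 2},
    "w": {"u": 5, "v": 3, "x": 3, "y": 1},
    "x": {"u": 1, "v": 2, "w": 3, "y": 1},
    "y": {"w": 1, "x": 1},
}
print(dijkstra(graph, "u"))
```

Notice that the least-cost path to w goes u → x → y → w, hopping through nodes in a different order than they entered N’.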
Secondly, I was confused about the distance-vector algorithm on this week’s assignment. While I understood how the distance-vector algorithm works, I was hung up on the wording of the problem, which instructed us to “show the distance table entries at node z”. I eventually surmised that “entries at node z” meant determining only the entries for node z and its immediate neighbors.
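The distance-vector update itself is just the Bellman-Ford equation, D_z(y) = min over neighbors v of { c(z, v) + D_v(y) }. A small sketch of one update round at node z, using a hypothetical three-node topology with made-up link costs:

```python
# One distance-vector (Bellman-Ford) update at node z:
#   D_z(y) = min over neighbors v of { c(z, v) + D_v(y) }
# Hypothetical topology: z's neighbors are x and y.
costs_from_z = {"x": 2, "y": 7}      # c(z, v) for each neighbor v
neighbor_vectors = {                  # distance vectors advertised by x and y
    "x": {"x": 0, "y": 3, "z": 2},
    "y": {"x": 3, "y": 0, "z": 7},
}

def dv_update(costs, vectors):
    """Recompute z's distance vector from its neighbors' advertisements."""
    dests = set().union(*(v.keys() for v in vectors.values()))
    dz = {}
    for dest in dests:
        if dest == "z":
            dz[dest] = 0  # cost to reach ourselves
        else:
            dz[dest] = min(c + vectors[v][dest] for v, c in costs.items())
    return dz

print(dv_update(costs_from_z, neighbor_vectors))
```

Here z reaches y more cheaply through x (2 + 3 = 5) than over its direct 7-cost link, which is exactly the kind of entry the table is meant to expose.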
This week has focused on the Network layer. For one assignment, we had to determine the subnets and IP address allocations on a fictional network. Without any solid examples of how to work a problem like this out, I had to do a lot of extensive independent research to figure out how to ultimately solve this problem.
As usual, diagramming things out really helped me. You can find my diagram below:
Basically, given a network broken up into separate subnets, if you know how many interfaces each subnet should support, you can use CIDR to determine how many bits of a 32-bit IP address should be allocated to the host portion and how many to the network prefix. In the example above, we are given that subnet A should support 250 interfaces. First, we determine how many bits are needed to represent 250 unique values. If we allocate 8 bits, that gives 2^8, or 256, unique values (0 – 255). Subtracting those 8 host bits from the 32 bits of an IP address leaves 24 bits for the network prefix, written as /24 (equivalent to a subnet mask of 255.255.255.0).
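Following the counting used above (which, for simplicity, ignores the reserved network and broadcast addresses), the prefix-length arithmetic can be sketched in a few lines of Python:

```python
import math

def prefix_for_hosts(n_interfaces):
    """Return the CIDR prefix length whose host portion can number
    n_interfaces addresses (reserved addresses not accounted for)."""
    host_bits = math.ceil(math.log2(n_interfaces))  # bits for n unique values
    return 32 - host_bits

print(prefix_for_hosts(250))  # 8 host bits -> 24
```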
Since we were given a starting address, we know the address of subnet A is that address with a /24 prefix. For example, a starting address of 223.1.17.0 would give subnet 223.1.17.0/24, supporting IP addresses in the range 223.1.17.0 – 223.1.17.255.
Last, I was able to check my understanding with this CIDR subnet calculator!
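Python's standard ipaddress module can serve as a quick sanity check in place of an online calculator; the /24 block below is a hypothetical example:

```python
import ipaddress

# Hypothetical /24 block standing in for subnet A
subnet_a = ipaddress.ip_network("223.1.17.0/24")
print(subnet_a.num_addresses)        # 256 addresses in a /24
print(subnet_a.network_address)      # 223.1.17.0
print(subnet_a.broadcast_address)    # 223.1.17.255
print(subnet_a.netmask)              # 255.255.255.0
```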
We are now halfway done with our Computer Networking class, so we took our midterm. This is only the second time that our cohort has had to take a proctored exam at an exact time, with everyone logged into Zoom with our cameras on. It’s a strange experience, and I’d prefer the alternative: a harder, task-based exam with a larger window of time for completion.
Because we had the midterm, there was not a lot of new content covered. We went back to the second chapter of our textbook to review a section we had skipped over on socket programming, which I found interesting. I hope we will have an opportunity to complete an assignment that involves socket programming.
There are some differences when programming an application that uses TCP versus UDP, since TCP is connection-oriented and UDP is connectionless. However, one commonality is that both protocols are “open” and specified in RFCs. This means that independent developers can easily implement a server or a client program that will interoperate. It also means that server/client programs can rely on the well-known port numbers established for standard application protocols running over UDP or TCP.
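As a sketch of the connectionless side, here is a minimal UDP exchange over loopback, all in one process (the port is picked by the OS; on loopback the datagrams reliably arrive, which UDP does not guarantee in general):

```python
import socket

# UDP is connectionless: no accept()/connect() handshake is needed.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server_addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", server_addr)   # datagram goes straight out

data, client_addr = server.recvfrom(2048)
server.sendto(data.upper(), client_addr)  # echo back, uppercased

reply, _ = client.recvfrom(2048)
print(reply.decode())                  # HELLO
client.close()
server.close()
```

A TCP version of the same exchange would additionally need listen(), accept(), and connect() calls to establish the connection before any data could flow.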
This week we delved into the second layer of the Internet architecture burrito: the transport layer. However, it was impossible not to at least brush upon the network layer as well because of their tightly knit relationship. Transport-layer protocols (TCP and UDP) provide logical communication between application processes running on different hosts — but their domain is the host itself. The communication between hosts is the domain of the network layer.
During week 2 of our computer networking course, we focused on the application layer (the top layer of the five-layer protocol stack that comprises the Internet architecture).
In the beginning section of our reading, we were given an overview of common application layer protocols and the underlying transport layer–either TCP or UDP–associated with each. Most common application layer protocols seem to be built on top of TCP (transmission control protocol) rather than UDP (user datagram protocol).
What transport layer protocol should my app be built over?
TCP is a popular choice because it provides a reliable data transfer service, with error checking to ensure all packets of a message are received and assembled in the correct order. However, because TCP is more complex than UDP, it tends to introduce more latency. In some applications (e.g. telephony or media streaming), some data loss is tolerable when it means less latency, so such applications may be built over UDP rather than TCP. Conversely, when an application cannot tolerate data loss (e.g. text messaging or file transfer) but can tolerate some latency, that application may be built over TCP.
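At the API level, this choice comes down to the socket type requested at creation time; a minimal illustration using Python's socket module:

```python
import socket

# The transport protocol is chosen when the socket is created:
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # TCP: reliable byte stream
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP: best-effort datagrams

is_stream = tcp_sock.type == socket.SOCK_STREAM
is_dgram = udp_sock.type == socket.SOCK_DGRAM
print(is_stream, is_dgram)  # True True

tcp_sock.close()
udp_sock.close()
```

Everything else (handshakes, retransmission, ordering for TCP; fire-and-forget for UDP) follows from that one choice of SOCK_STREAM versus SOCK_DGRAM.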
Pushy protocols give and pully ones receive
We also learned that application protocols can be classified as pull or push. For instance, HTTP is a pull protocol because a client uses HTTP to retrieve data from a web server. SMTP, on the other hand, is a push protocol because email clients push out data to a receiving email server. This gives context to why we use these verbs in git.
Sorry, Mario. Your IP is in another DNS Castle
We also covered the Domain Name System (DNS). While I previously understood the function of DNS, that there are many DNS servers, and that no single one has an exhaustive list of IP addresses to map onto a given domain name, I was unaware of the extent to which DNS servers are organized hierarchically and that requests can be resolved either iteratively or recursively.
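The hierarchy can be mimicked with a toy iterative resolver; the zone data below is entirely made up for illustration (real resolution involves many record types, caching, and actual network queries):

```python
# Toy model of iterative DNS resolution. Each "server" only knows whom to
# ask next, mirroring the root -> TLD -> authoritative hierarchy.
ROOT = {"com": "tld-com-server"}
TLD_COM = {"example.com": "auth-example-server"}
AUTH_EXAMPLE = {"www.example.com": "93.184.216.34"}  # illustrative address

SERVERS = {
    "root-server": ROOT,
    "tld-com-server": TLD_COM,
    "auth-example-server": AUTH_EXAMPLE,
}

def resolve_iteratively(hostname):
    """Follow referrals from the root down, as an iterative resolver would."""
    tld = hostname.rsplit(".", 1)[-1]             # e.g. "com"
    domain = ".".join(hostname.split(".")[-2:])   # e.g. "example.com"
    tld_server = SERVERS["root-server"][tld]      # root refers us to the TLD server
    auth_server = SERVERS[tld_server][domain]     # TLD refers us to the authoritative server
    return SERVERS[auth_server][hostname]         # authoritative server holds the record

print(resolve_iteratively("www.example.com"))  # 93.184.216.34
```

In recursive resolution, the resolver the client contacts would make these referral-following queries itself and hand back only the final answer.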
One of my favorite parts of this week’s reading was about Peer-to-Peer (P2P) architecture because it made me nostalgic for when the architecture was first popularized when I was in high school, circa 1999. During this time, many P2P media-sharing applications were born–Napster, Morpheus, Kazaa, SoulSeek. This method of file sharing was revolutionary because any host could be a seed (server) and a leech (client) at the same time, and you did not have to have the entire file to serve chunks of it to others. P2P architecture scales much better than client-server architecture as the number of hosts increases. This is because every host becomes a server as soon as it successfully fetches some part of the target file.
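The scaling claim can be made concrete with the minimum-distribution-time bounds from Kurose & Ross (chapter 2): client-server needs at least max(NF/u_s, F/d_min), while P2P needs at least max(F/u_s, F/d_min, NF/(u_s + sum of peer upload rates)). A sketch with made-up rates, assuming every peer uploads at the same speed:

```python
# Lower bounds on file-distribution time (Kurose & Ross, ch. 2):
#   client-server: max(N*F/u_s, F/d_min)
#   P2P:           max(F/u_s, F/d_min, N*F/(u_s + total peer upload rate))
def dist_time_cs(N, F, us, dmin):
    return max(N * F / us, F / dmin)

def dist_time_p2p(N, F, us, dmin, u_peer):
    # assumes all N peers upload at the same rate u_peer
    return max(F / us, F / dmin, N * F / (us + N * u_peer))

F = 8e9        # 1 GB file, in bits (hypothetical)
us = 30e6      # server upload rate: 30 Mbps
dmin = 2e6     # slowest peer download rate: 2 Mbps
u_peer = 1e6   # each peer's upload rate: 1 Mbps

for N in (10, 100, 1000):
    print(N, dist_time_cs(N, F, us, dmin), dist_time_p2p(N, F, us, dmin, u_peer))
```

With these numbers, the client-server bound grows linearly in N while the P2P bound flattens out, because each new peer brings its own upload capacity to the swarm.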
After a short summer break, we have begun a new class in Computer Networking. I think this course will really demystify some of the concepts we glossed over in Internet Programming, particularly when it comes to making design decisions in creating a web API.
This week, we read the first chapter of Computer Networking: A Top-Down Approach (Kurose and Ross, 2016), which included a broad overview of the five-layer Internet architecture: