Weeks 7 & 8 CST 363: Map-Shuffle-Reduce and the FINAL

During Week 7, we learned about MapReduce (or Map-Shuffle-Reduce), which is used in processing large amounts of data over distributed systems.

During the mapping stage, relevant data is selected from tables on each worker node. Then, during the shuffle stage, this data is redistributed and sorted into temporary tables on each node. These two stages may be repeated any number of times, depending on how many joins are necessary to get the desired results. Finally, we enter the reduce stage, which combines the results sent from every worker node to the controller.
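The three stages can be sketched in plain Python. This is a toy word count, not the class's actual model, with two lists standing in for the worker nodes' data partitions:

```python
from collections import defaultdict

def map_stage(partition):
    # Map: each "worker" emits (key, value) pairs from its slice of the data
    return [(word, 1) for row in partition for word in row.split()]

def shuffle_stage(mapped_partitions):
    # Shuffle: redistribute pairs so all values for a key land together
    groups = defaultdict(list)
    for partition in mapped_partitions:
        for key, value in partition:
            groups[key].append(value)
    return groups

def reduce_stage(groups):
    # Reduce: combine each key's values into a final result
    return {key: sum(values) for key, values in groups.items()}

# Two hypothetical "worker nodes", each holding part of the data
partitions = [["the quick fox", "the dog"], ["the cat"]]
mapped = [map_stage(p) for p in partitions]
result = reduce_stage(shuffle_stage(mapped))
print(result)  # {'the': 3, 'quick': 1, 'fox': 1, 'dog': 1, 'cat': 1}
```

In a real cluster the map and reduce functions run on different machines and the shuffle happens over the network; here everything runs in one process just to show the data flow.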

While the concept of MapReduce is easy to understand, it took a while for me to wrap my head around how it should be implemented in our Python model.

However, I ultimately realized that every temporary table created in the map-shuffle stages is equivalent to a SELECT subquery in the FROM clause in MySQL. The reduce stage is equivalent to the main query that uses these subqueries and JOINs to get the desired result. Once I understood this, crafting programs that return result tables using MapReduce became incredibly easy.
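To make that equivalence concrete, here is a hedged sketch of a join done as map-shuffle-reduce. The tables and column names are made up for illustration; in MySQL this would be roughly SELECT s.name, d.dept_name FROM staff s JOIN dept d ON s.dept_id = d.id:

```python
from collections import defaultdict

# Hypothetical tables: staff(id, name, dept_id) and dept(id, dept_name)
staff = [(1, "Ana", 10), (2, "Bo", 20)]
depts = [(10, "Math"), (20, "CS")]

# Map: tag each row with its join key (like the SELECT subqueries)
mapped = [(dept_id, ("staff", name)) for _, name, dept_id in staff]
mapped += [(dept_id, ("dept", dept_name)) for dept_id, dept_name in depts]

# Shuffle: group rows by join key into "temporary tables"
groups = defaultdict(list)
for key, row in mapped:
    groups[key].append(row)

# Reduce: within each group, pair staff rows with dept rows (the main query)
joined = []
for rows in groups.values():
    names = [value for tag, value in rows if tag == "staff"]
    dept_names = [value for tag, value in rows if tag == "dept"]
    joined += [(n, d) for n in names for d in dept_names]

print(sorted(joined))  # [('Ana', 'Math'), ('Bo', 'CS')]
```

The shuffle's grouped rows play the role of the subquery result tables, and the reduce loop is the outer query that combines them.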

I look forward to learning more about utilizing cluster computing in the future.

Entering week 8, I only have the final to worry about, and I feel extremely prepared. I have read over the questions and I have no doubt that I will do well. I have learned much during this class and I look forward to integrating my knowledge of MySQL into Web Programming this summer.

Week 6 CST 363: ACID Properties

This week we focused on the importance of maintaining the ACID properties of databases.

Before understanding what these properties entail, one must understand that a database management system's (DBMS) transactions are composed of a group of tasks, and that a task is the minimum processing unit that cannot be broken down further.


A: Atomicity

Each transaction must be treated as an atomic unit. That is, either ALL of its tasks are executed and committed or NONE.

C: Consistency

The database must remain in a consistent state after any transaction.

I: Isolation

Concurrent transactions (e.g. in the case of multiple users accessing the database at once) should be executed as if they are the ONLY transaction in the system. Transactions should not affect one another.

D: Durability

The database should retain its latest committed updates even if the system fails or restarts. If a failure interrupts the work, the system should preserve commits and finish applying the committed data once it comes back online.
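Atomicity in particular is easy to demonstrate with Python's built-in sqlite3 module. This toy transfer (made-up accounts table, simulated crash between the two tasks) shows that a transaction either fully commits or fully rolls back:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 50)])
conn.commit()

def transfer(amount):
    # One transaction composed of two tasks: a debit and a credit
    conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = 1", (amount,))
    raise RuntimeError("simulated crash between the two tasks")
    conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = 2", (amount,))

try:
    transfer(30)
    conn.commit()
except RuntimeError:
    conn.rollback()  # atomicity: the half-finished debit is undone

balances = dict(conn.execute("SELECT id, balance FROM accounts"))
print(balances)  # {1: 100, 2: 50} -- as if the transaction never ran
```

Without the rollback, account 1 would have lost 30 with no matching credit, leaving the database in an inconsistent state.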

I feel pretty good about solving these past two assignments. They were challenging, but doable and enjoyable. On a lighter note, here is a picture of me drawn by one of my high school computer science students:

Week 5 CST 363 – B Trees

This week we learned about B-Trees and B+ Trees as a way of more efficiently reading data from and writing data to a hard disk drive (HDD).

In order to understand B-Trees, it is important to understand an HDD's structure and how data is read from and written to it. An HDD is divided into tracks and sectors, two vital pieces of information used in block addresses. In addition to the track and sector, we also need to know the offset (from byte 0) at which to read/write. To change sectors, the HDD's platter spins; to change tracks, the HDD's head shifts.

Once we understand an HDD's structure, we can understand how data, such as a database, is stored on it. Database records are rows within a table. The amount of space a row takes up on the HDD depends on how many bytes each column has been allocated. We can then use that information to determine how many records (rows) we can store per block. Block size is usually 512 bytes, but it depends on the HDD manufacturer. For example, if we create a table with 5 columns: eid (10 bytes), name (50 bytes), dept (10 bytes), section (8 bytes), and address (50 bytes), each record would have a size of 128 bytes. We could then store up to 4 records per 512-byte block, and we would need 25 blocks to store a table with 100 rows in it.

Considering a table of this size, in the worst-case scenario we would have to access all 25 blocks on the HDD to return the results of a query. To make this process more efficient, we use indexing. An index stores a key (a column from the table) and a pointer to the record on the HDD. This reduces the number of blocks accessed during a search because an index entry (key and pointer) is smaller than an entire record. A smaller size means fewer blocks of storage and fewer blocks to search through. If our index entries, for instance, take up only 16 bytes, we would need only 4 blocks to store the entries for 100 records. This means that, at maximum, we would need to access only 5 blocks (all 4 index blocks plus the block to which the pointer points).
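The arithmetic above can be checked in a few lines, using the same assumed sizes (512-byte blocks, 128-byte records, 16-byte index entries):

```python
import math

BLOCK_SIZE = 512                            # bytes per block (assumed; varies by manufacturer)
record_size = 10 + 50 + 10 + 8 + 50         # eid, name, dept, section, address
records_per_block = BLOCK_SIZE // record_size      # 512 // 128 = 4
data_blocks = math.ceil(100 / records_per_block)   # 25 blocks for 100 rows

index_entry = 16                            # bytes: key + pointer
entries_per_block = BLOCK_SIZE // index_entry      # 512 // 16 = 32
index_blocks = math.ceil(100 / entries_per_block)  # 4 blocks of index entries

# Worst case with the index: scan every index block, then read one data block
worst_case = index_blocks + 1                      # 5 accesses vs. 25 without an index
print(records_per_block, data_blocks, index_blocks, worst_case)  # 4 25 4 5
```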

But simple indexing is not the most efficient. Sparse indexing builds upon simple dense indexing by adding higher-level layers of indexing on top of the lower-level index table previously mentioned. These higher-level index entries point to index blocks in the lower-level table, so a single entry could point to a block of 32 entries. This is the notion of multilevel indexing.

Once we understand multilevel indexing, we can understand M-Way Trees. The m stands for the degree and represents the maximum number of children a node can have; the maximum number of keys a node can hold is m - 1. To use M-Way Trees/Search, our data should be sorted from smallest to largest. A binary search tree is an example of an m-way tree with a degree of 2. M-way search trees have child pointers associated with each of their keys; when used as database indexes, they also have a record pointer for each key. This is the node structure of a B-Tree.
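A rough in-memory sketch of that node structure might look like the following. The tree is hand-built and the "record pointers" are just made-up block labels, not real disk addresses:

```python
class BTreeNode:
    """Toy m-way tree node: keys are sorted; children[i] holds keys less
    than keys[i]; record_ptrs[i] locates keys[i]'s row on disk."""
    def __init__(self, m):
        self.m = m                 # degree: maximum number of children
        self.keys = []             # at most m - 1 keys
        self.record_ptrs = []      # one record pointer per key (B-Tree)
        self.children = []         # at most m child pointers

def search(node, key):
    # Walk down the tree, choosing the child interval the key falls into
    while node is not None:
        for i, k in enumerate(node.keys):
            if key == k:
                return node.record_ptrs[i]
            if key < k:
                break
        else:
            i = len(node.keys)
        node = node.children[i] if node.children else None
    return None

# Tiny hand-built tree of degree 3: root [20], leaves [10] and [30, 40]
root = BTreeNode(3); root.keys = [20]; root.record_ptrs = ["blk2"]
left = BTreeNode(3); left.keys = [10]; left.record_ptrs = ["blk1"]
right = BTreeNode(3); right.keys = [30, 40]; right.record_ptrs = ["blk3", "blk4"]
root.children = [left, right]
print(search(root, 30))  # blk3
```

Each step of the while loop is what would be one block read on disk, which is why keeping the tree shallow matters.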

B-Trees are M-Way trees with rules. These rules include:

  1. Each node must be at least half full (at least m/2 children, rounded up) before a new node is created.
  2. The root must have a minimum of 2 children (unless it is a leaf).
  3. All leaf nodes must be at the same level.
  4. The creation process is bottom-up: splits propagate upward toward the root.

The difference between a B-Tree and a B+ Tree is that B+ Trees do not have record pointers in every node, just the leaf nodes. Therefore, each key in an internal node is also copied into one of the leaves. The leaves are linked together like a linked list. The leaf level is a dense index; the upper node levels are sparse indexes. For an illustration of how B+ Trees work, use the B+ Tree Simulator!
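The linked leaves are what make B+ Trees good at range queries: after one descent to the first relevant leaf, a scan just follows the sibling links. A toy sketch, with the internal (sparse) levels omitted and a hypothetical three-leaf chain:

```python
class Leaf:
    def __init__(self, keys):
        self.keys = keys       # dense index: every key appears at the leaf level
        self.next = None       # link to the right sibling, like a linked list

# Hypothetical leaf chain; the internal nodes above it are omitted here
a, b, c = Leaf([10, 20]), Leaf([30, 40]), Leaf([50, 60])
a.next, b.next = b, c

def range_scan(start_leaf, lo, hi):
    # Collect keys in [lo, hi] by walking the leaf chain left to right
    results, leaf = [], start_leaf
    while leaf is not None:
        results += [k for k in leaf.keys if lo <= k <= hi]
        if leaf.keys and leaf.keys[-1] > hi:
            break              # every later key is larger; stop early
        leaf = leaf.next
    return results

print(range_scan(a, 20, 50))  # [20, 30, 40, 50]
```

A plain B-Tree has no such sibling links, so the same range query would have to bounce back up and down through internal nodes.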

Week 4 CST 363

This has been a busy week! We had a midterm, and the final iteration of our DB/WI project was due. I have learned quite a bit about the utility of data warehouses for creating simple and understandable systems that let end users query a database. I have gained insight into how to continue with my Staff Directory System at my school site, and how to structure different relational databases and extract their data into a central data warehouse to build different applications for different user purposes.

There are so many needs that my school has — teacher location, monitoring student success, preventing teacher burnout, ensuring equity — and I am beginning to see how an OLAP system can be applied to the decision-making processes that make a school successful.