Week 1 of master’s in computer science

January 7th, 2019 marks the first day of my computer science master’s program through Georgia Tech. The week leading up to the first day was somewhat stressful: I was mid-flight (returning to Seattle from London), my plane cruising over the massive blue sea, while I frantically typed away on my keyboard trying to register for a course in my allotted time window. Because of the spotty internet connection on the airplane, it took me about an hour and a half to register, and by that point the open spots had filled up so fast that I sat 100+ deep on the wait list (though I later discovered, after reading online posts, that a 100+ wait list is normal and that I would likely get into the course, which I did).

Anyways, despite all that headache, I’m officially enrolled in a course that I’ve wanted to take for a very, very long time: Introduction to Operating Systems. So far, although it’s only been one week, I love the course, for multiple reasons.

First, I love the collaborative spirit and sense of community of the program. Before getting into this master’s program, I was taking a handful of undergraduate computer science courses (e.g. computer organization, discrete mathematics, data structures) from the University of North Dakota and the University of Northern Iowa, two excellent schools that offer courses through their distance learning platforms. Although I learned a lot from those courses, I always felt like I was working in isolation, by myself, my only interaction being a few short e-mail threads with the professors. But now, with this course, there’s an online forum (i.e. Piazza) and a chatty chatroom (via Slack, paid for out of pocket by one of the TAs of the course), where students fire off questions and comments (especially random comments in the #random Slack channel). So in a sense, despite never having met these folks, there’s a sense of camaraderie, a shared goal.

Second, I’m learning a ton of material that’s not only applicable to my day-to-day job (as a software engineer) but that I’m genuinely interested in. For the first week, the professor has us reading a 30-page white paper (titled “Introduction to Threading”) written in 1989, a seminal piece of work (on threads and concurrency) that gives me perspective on, and appreciation for, my industry. In addition to reading the white paper, I’m watching lectures covering fundamental operating system concepts (e.g. processes, virtual memory) and, above all, writing C code! A ton of C code!

The project has us writing code for a multi-threaded client and a multi-threaded web server (think implementing the HTTP protocol), and it’s intended to teach us how to write safe, concurrent systems that utilize the threading facilities offered by the operating system.
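The project itself is all C and pthreads, but to give a flavor of the general shape of such a server, here’s a hypothetical thread-per-connection sketch in Python (whose threading module wraps the same OS-level facility); none of this is the actual assignment code:

    import socket
    import threading

    def handle_client(conn, addr):
        # Each connection is serviced on its own OS thread.
        with conn:
            request = conn.recv(4096)  # e.g. an HTTP GET request
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")

    def serve(host="127.0.0.1", port=8080):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
            server.bind((host, port))
            server.listen()
            while True:
                conn, addr = server.accept()
                # Hand the connection off so the accept loop stays free.
                threading.Thread(target=handle_client,
                                 args=(conn, addr), daemon=True).start()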

The beauty of dynamic programming

I just discovered dynamic programming and damn, I’m blown away by the concept. The other day, while working through a homework assignment, I compared the run times of two Python functions that I wrote, one written recursively and the other written in a dynamic programming fashion. Starting with the recursive solution, I arrived at the following:
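Something along these lines:

    def fibonacci(n):
        # Base case: fibonacci(0) = 0
        if n == 0:
            return 0
        # Base case: fibonacci(1) = 1
        if n == 1:
            return 1
        # Otherwise, recurse; each call fans out into two more calls
        return fibonacci(n - 1) + fibonacci(n - 2)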

That’s a fairly standard implementation of Fibonacci. There are two base cases: n = 0 and n = 1. When n is either of these two numbers, the function simply returns 0 or 1, respectively. For any other number, the function recursively calls itself until reaching the aforementioned base cases.

So far so good, right? For small values of n, this implementation doesn’t really present a problem. But say we want to calculate fibonacci when n equals 40. How long does this take? Alarmingly, this computation hovers around 45 seconds:
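A quick timing harness (assuming the fibonacci function above):

    import time

    start = time.time()
    print(fibonacci(40))        # 102334155
    print(time.time() - start)  # hovers around 45 seconds on my machine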

Now, what if we run the same calculation, but this time using a dynamic programming technique? How much time does that shave off?
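The dynamic programming version (calling it fibonacci_dp here to keep the two apart) builds the answer bottom-up instead:

    def fibonacci_dp(n):
        if n == 0:
            return 0
        # Keep only the last two values; each Fibonacci number
        # is computed exactly once, in order from the bottom up.
        previous, current = 0, 1
        for _ in range(2, n + 1):
            previous, current = current, previous + current
        return current

Timed with the same harness, fibonacci_dp(40) finishes in well under a millisecond.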

What?! From 45 seconds down to under a millisecond?! How is that possible?

As you can see from the code above, instead of recursively calling fibonacci, we iteratively calculate all the values, computing each Fibonacci number exactly once. In other words, this implementation runs linearly (i.e. in direct proportion to n), unlike the first, recursive implementation, which runs exponentially because each call fans out into two more calls, recomputing the same subproblems over and over.

Wrapping up discrete mathematics course

Last Friday, I took the final exam for my (distance learning) discrete mathematics course, and just now I logged into the student portal (i.e. Blackboard), surprised to find that my exam had not only been graded but that my final grade had been posted as well. I finished the course with an 88%, a B, a few points short of an A. In the past, I would’ve incessantly beaten myself up over not achieving the highest grade, denigrating myself with self-destructive thoughts: if only I tried harder … if only I studied more … if only I was smarter …

But not this time.

This time, I’m suppressing those thoughts. Instead, I’m celebrating. Celebrating that I’m fortunate enough to be able to take this mathematics course, a course where I learned about writing proofs, solving Diophantine equations, applying set theory, counting with modular arithmetic, proving assertions with mathematical induction, and converting recursive functions into closed-form functions using the characteristic equation method. Prior to the course, I had never been exposed to those concepts; looking back, I had only vaguely heard of the terms. And who knows if I’ll get to apply any of those concepts as I pursue a master’s (maybe a PhD, one day). Who knows if I’ll be lucky enough to apply that knowledge to my job as a software engineer.

But who cares?  Because really, my goal was to stretch myself, learning more about my field and craft: computer science.