UGBA 104

Business Analytics

I had heard coming into this course that it was essentially an ‘Excel’ course. For the first half, that expectation held up, which is to say the course let me down. The first half was mostly dedicated to teaching us Excel syntax and best practices via basic concepts such as linearization and optimization. Needless to say, I’ve never been a fan of curriculum that could easily be picked up on the job within the first few weeks. I had a special distaste for the exam, which was a throwback to those intro-to-coding classes that tested whether you knew the syntax of a specific language. Forgot a closing parenthesis? That’s a point off. Just a giant middle finger to anyone who really codes.

The second half of this course surprised me in a good way. I was fully expecting another six weeks of dry, brainless Excel curriculum, but I was met instead with an introductory machine learning class! Sure, some of the assignments (none of which I actually did) still required Excel, but only as a means of learning much more interesting concepts. Better yet, the exam involved ZERO Excel. God bless!

I have always wanted to take an introductory machine learning class. ML seems to be a ubiquitous tool across all of STEM these days. Additionally, being a student at UC Berkeley, I found it only wise to take advantage of one of the best CS programs in the world. Since sophomore year, I’d actually had Berkeley’s quintessential ML class (CS 189) on my radar. Tragically, I never found room for it in my schedule. As a result, I was wholeheartedly delighted to finally be learning some ML, even if it was in a business context. I realize the ML taught in this course is just the tip of the ML iceberg, but I still found it fascinating. Taking a step back, the fact that an undergraduate business curriculum requires its students to graduate with basic knowledge of machine learning just goes to show how progressive the Berkeley curriculum is. To UGBA 104, I say thank you for opening my mind to the field of ML.

[Image: Lin Reg and Log Reg]

Just a few of the ML algorithms we learned (two of them are sketched in code after the list):

  1. K-means clustering
  2. Association rules
  3. Linear regression
  4. Logistic regression
  5. K-nearest neighbors
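
To make the two regression flavors from the caption above concrete, here’s a minimal sketch of linear and logistic regression in Python. The library (scikit-learn) and the toy study-hours data are my own assumptions; the course did all of this in Excel.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Toy data (made up): hours studied vs. exam score, and pass/fail
hours = np.array([[1], [2], [3], [4], [5], [6], [7], [8]])
scores = np.array([52, 58, 61, 70, 74, 80, 86, 91])  # continuous target
passed = np.array([0, 0, 0, 1, 1, 1, 1, 1])          # binary target

# Linear regression fits a line: score ~ intercept + slope * hours
lin = LinearRegression().fit(hours, scores)
print(lin.predict([[4.5]]))        # predicted score for 4.5 hours of study

# Logistic regression pushes the same kind of linear score through a
# sigmoid to produce a probability of the positive class
log = LogisticRegression().fit(hours, passed)
print(log.predict_proba([[4.5]]))  # [P(fail), P(pass)] for 4.5 hours
```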

Food for Thought

The following is the algorithm for K-means clustering:

  1. Assign clusters randomly
  2. Find centroids of each cluster
  3. Reassign to closest centroid
  4. Recalculate centroid locations
  5. Repeat steps 3 and 4 until no data point changes clusters
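
Here is a minimal NumPy implementation of those five steps. This is my own toy sketch, not course material, and it includes a crude guard (my own hack) for the case where a cluster ends up empty.

```python
import numpy as np

def kmeans(points, k, seed=0):
    """Toy K-means following the five steps above. points: (n, d) array."""
    rng = np.random.default_rng(seed)
    # Step 1: assign clusters randomly
    labels = rng.integers(0, k, size=len(points))
    while True:
        # Steps 2 and 4: find/recalculate the centroid of each cluster
        # (an empty cluster gets re-seeded at a random point -- my own hack)
        centroids = np.array([
            points[labels == c].mean(axis=0) if np.any(labels == c)
            else points[rng.integers(len(points))]
            for c in range(k)
        ])
        # Step 3: reassign each point to its closest centroid
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        # Step 5: stop when no data point changes clusters
        if np.array_equal(new_labels, labels):
            return labels, centroids
        labels = new_labels
```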

Does this algorithm always converge to the same output classification? In other words, are the final clusters always the same regardless of the initial clustering assignment?
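
One way to poke at that question empirically, using the sketch above and some made-up 2-D points: run K-means with a few different random initial assignments and check whether the final labelings agree (up to renaming the clusters).

```python
points = np.array([[0, 0], [0, 1], [1, 0],
                   [5, 5], [5, 6], [6, 5],
                   [10, 0], [10, 1], [11, 0]], dtype=float)

# Different seeds give different random initial cluster assignments
for seed in range(5):
    labels, _ = kmeans(points, k=3, seed=seed)
    print(seed, labels)
```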
