Everyone who’s gone through a computer science undergrad has studied the classic data structures and algorithms in an academic sense. We use at least some of the data structures we learned back then on a daily basis: lists, queues, stacks, and graphs (to name a few).
A lot of programmers will tell you that they rarely use the algorithms they were taught: merge sort, breadth-first search, binary search trees, etc. I would be one of those. I believe the number of times I’ve written one of these, or a recursive algorithm in general, during a professional project is probably fewer than six across my entire ~10 year career.
That said! I’ve just completed a pretty thorough review of these computer science fundamentals, and I’ve been asking myself whether I would have used them more in the past 10 years had they stayed fresher in my mind. And would my code have been better? I can’t answer that for sure; however, I do now believe that revisiting these fundamentals from time to time is not only fun… but keeps you sharp.
The primary reason any programmer studies these topics is to be able to design algorithms to solve actual real world problems.
Solving a problem using these concepts requires the ability to:
- Understand the problem, input data and expected result
- Identify edge cases and test data that exercises those edge cases
- Identify the best data structure to represent your data
- Identify the algorithmic options that best use this structure
- Identify the efficiencies/tradeoffs of each of those options
- Implement the best one
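To make those steps concrete, here’s a minimal sketch that applies them to a toy problem of my own choosing (checking whether any pair in a list sums to a target). The problem, function name, and design choices are illustrative, not something from a specific project:

```python
def has_pair_with_sum(numbers, target):
    """Return True if any two elements of `numbers` sum to `target`.

    Step 1 (understand): input is a list of numbers plus a target;
    the expected result is a boolean.
    Step 3 (data structure): a set gives O(1) membership checks, so a
    single pass suffices.
    Steps 4/5 (options and tradeoffs): this approach is O(n) time and
    O(n) space; the main alternative, sort-then-two-pointers, trades
    that extra space for O(n log n) time.
    """
    seen = set()
    for n in numbers:
        if target - n in seen:  # the complement was seen earlier -> pair exists
            return True
        seen.add(n)
    return False


# Step 2 (edge cases): empty input, a single element, and duplicate
# values that together hit the target.
print(has_pair_with_sum([], 10))         # False: no elements at all
print(has_pair_with_sum([5], 10))        # False: a pair needs two elements
print(has_pair_with_sum([5, 5], 10))     # True: duplicates count as a pair
print(has_pair_with_sum([2, 7, 11], 9))  # True: 2 + 7
```

Note how the edge cases directly shaped the implementation: tracking *previously seen* values (rather than checking membership in the whole list) is what makes the `[5]` case return False while `[5, 5]` returns True.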
This is the first in a series of blog posts I’ll write around the above topics. Since anyone who learns these concepts does so to be able to solve actual problems, I want the focus to be on the stages of problem solving, the decisions that need to be made at each stage, and the tradeoffs and considerations that come with each decision.