
How to Think Like a Hacker, Even if You Can't Code

What do Mark Zuckerberg, Jeff Bezos, and Larry Page all have in common?
Yes, they've founded multibillion-dollar businesses with virtually no formal business
training, with combined revenues of over $80 billion. But they are also former software
engineers and hackers, an experience that undoubtedly taught them skills in critical thinking
and problem solving. Making savvy business decisions is not only about engineering, but also
sales and marketing, business development, recruiting, and all the other functions of business.
(Bezos had experience working as a quant investor at the hedge fund D.E. Shaw prior to founding
Amazon, but his duties had little to do with entrepreneurship or operations.) I argue that their
programming backgrounds were critical to their success in these other areas as well.
I haven't been an active coder since I was in my late 20s, but there's no question that my
experience learning how to code when I was 12 and teaching children and adults how to
program has changed the way I think about every aspect of my life. I developed an obsession
with research and formal process documentation in investing, entrepreneurship, education,
and even in my personal life, which all derive from that formative intellectual training.
Justin James, lead architect for Conigent, notes that great programmers understand not only
what their code does, but also how and why. Programming is effectively high-level problem
solving.
Writing code, however, is no easy feat. Compilers (which are responsible for translating the
programmer's code into machine language) don't tolerate the ambiguity and imprecision of

everyday speech. The challenge for programmers, then, is specifying requests in such a way
that the compiler, and ultimately the central processing unit, will understand them. If a
programmer can explain problems in a way that a computer can understand, those
problems are already effectively solved, because the programmer has figured out how to do
it. Problems may not always appear computational, but a skilled programmer nevertheless
finds ways to solve them computationally.
Programmers formalize, quantify, and solve problems (primarily with the aid of a computer)
using algorithms, or finite sets of well-defined instructions which, when followed exactly and
applied to a relevant problem, produce guaranteed results. With the exception of a small
number of well-documented problems in computability theory (e.g., the halting problem),
algorithms are theoretically capable of solving any computational problem. Not all algorithms
can be carried out in a reasonable amount of time, but a great many can.
It should be no surprise, then, that leaders who bring their programming experience
and algorithmic thinking to business have reshaped the playing field. After all, in the eyes of
an algorithmic thinker, solving problems based on intuition, opinions, and unsystematic
analysis of data is not an option. The algorithmic thinker will find a better solution and eat his
or her less savvy competitors' lunch.
A skeptical reader might be thinking that algorithmic problem solving is abstract and
unnecessary, that such an approach is "overthinking it." I certainly heard that response
when I applied algorithmic thinking to finding a spouse. But such skeptics would be missing the point.
Failing to use the right algorithm is far more costly than taking the time to come up with the
optimal solution.
Consider a task in which students are asked to sort a list of 100 integers with no specified
range of values. Many would undoubtedly start by finding the smallest number and putting it
at the front, then finding the next smallest number out of the remaining 99 integers, and so
on. This method is called selection sort. A clever student, however, might realize that she
could divide the list into 100 single-integer lists, then merge each adjacent pair of lists to create 50
sorted two-integer lists, and so on. By the time the student has merged up to a single 100-integer list, it would be fully sorted. This method is called merge sort.
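The two approaches can be sketched in code. This is an illustrative sketch, not from the original article; a comparison counter is added so the difference in work done is easy to see:

```python
def selection_sort(items):
    """Repeatedly move the smallest remaining value to the front."""
    items = list(items)
    comparisons = 0
    for i in range(len(items)):
        smallest = i
        for j in range(i + 1, len(items)):
            comparisons += 1
            if items[j] < items[smallest]:
                smallest = j
        items[i], items[smallest] = items[smallest], items[i]
    return items, comparisons


def merge_sort(items):
    """Split the list in half, sort each half, then merge the sorted halves."""
    if len(items) <= 1:
        return list(items), 0
    mid = len(items) // 2
    left, left_count = merge_sort(items[:mid])
    right, right_count = merge_sort(items[mid:])
    merged, comparisons = [], left_count + right_count
    i = j = 0
    while i < len(left) and j < len(right):
        comparisons += 1
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # one side is exhausted; append the remainder
    merged.extend(right[j:])
    return merged, comparisons
```

On a list of 100 values, the selection sort above always makes exactly 4,950 comparisons (99 + 98 + ... + 1), while the merge sort makes far fewer.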
The first algorithm requires both a maximum and minimum of 4,950 comparisons, while the
second requires a maximum of about 450 comparisons and a minimum of about 275. To put
the difference in practical terms, if each comparison took one minute, merge sort could
accomplish in 7.5 hours what it would take selection sort over 3.4 days to do. Good
algorithms make a big difference, and merge sort isn't even necessarily the best or most
sophisticated sorting algorithm. Algorithms may sound abstract, but they are used to solve
problems we face on a daily basis (sorting the mail, scheduling, and deciding on the most
efficient path from point A to point B, for example).
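To make that last example concrete, the point-A-to-point-B problem is just graph search. A minimal sketch using breadth-first search, with a made-up street map for illustration:

```python
from collections import deque


def shortest_path(graph, start, goal):
    """Breadth-first search: return a path with the fewest hops from
    start to goal, or None if no path exists."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None


# A hypothetical map: intersections and the roads between them.
city = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}
```

Here `shortest_path(city, "A", "E")` returns a four-stop route; real route planners weight each road by distance or travel time, but the underlying idea is the same.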
Put simply, algorithmic thinking is breaking down a problem and constructing a set of well-defined steps to solve it. In practice, this involves rigorous testing, documentation, analysis,
and procedures. The problem of solving problems is not itself well enough specified to be
solved with a formal algorithm, but I've listed the basic steps that an algorithmic thinker
takes in tackling a problem:

1. Analyze the problem and define it tightly. What are you trying to find or optimize?
What information is available to do so? Are there any constraints? Elegant code is simple
code, and simple code removes anything not conducive to meeting the user's underlying "why." E.g.,
what concrete tools should I use to recruit great technical talent? How do I find a great startup
idea? How do I stay fit while working an office job? I've been working at ffVC on developing
a suite of standardized processes for many of the typical stages that a startup goes
through.
2. Break the problem down into components (also necessary for finding the brute force
solution). What specific set of steps is required to comprehensively solve the problem?
Solving a simplified example can be a useful exercise at this point.
3. Refine your basic solution. Are there any patterns in the brute force algorithm? Are
some steps reformulations of other steps or already-solved problems?
4. Recurse these initial steps to the subproblems. Apply steps 1-3 to the
steps/subproblems identified in step 3.
5. Implement each subproblem solution. Along the way, it's important to design your
process for repeatability. In a coding context, that means using standard best practices like
defining terms (naming variables in a way that makes sense to another reader), adding
explanatory notes (commenting), and formatting properly (spacing). More on that below.
6. Test each subproblem solution. Be sure to check boundary cases. Separating code into
modules and rigorously testing each allows the programmer to localize errors so that instead
of having to search the entire program, he or she need only search one small section. This
general principle applies to pretty much any complex system.
7. Critically assess inefficiencies and iteratively improve solutions. Programming often
involves returning to one's work to make further optimizations or adjustments; that is, never
being satisfied. Some of the most famous and elegant algorithms in
computer science were the product of decades of iterative improvements by multiple
academics. The Lean Startup movement draws from this same philosophy.
8. Once all subproblem solutions have been implemented, tested, and refined, do the
same for the overall solution.
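The decompose-implement-test pattern of steps 2, 5, and 6 can be sketched on a toy problem. This is a hypothetical example of mine, not from the article: computing a median by composing two small, separately tested subproblem solutions.

```python
def sort_values(values):
    """Subproblem 1: produce a sorted copy of the input."""
    return sorted(values)


def middle(sorted_values):
    """Subproblem 2: the middle element of a sorted list, or the mean
    of the two middle elements when the length is even."""
    n = len(sorted_values)
    if n == 0:
        raise ValueError("median of an empty list is undefined")
    mid = n // 2
    if n % 2 == 1:
        return sorted_values[mid]
    return (sorted_values[mid - 1] + sorted_values[mid]) / 2


def median(values):
    """Overall solution: compose the tested subproblem solutions."""
    return middle(sort_values(values))


# Step 6: test each module separately, checking boundary cases,
# so any error is localized to one small section.
assert sort_values([]) == []
assert middle([5]) == 5        # smallest valid input
assert middle([1, 3]) == 2.0   # even-length boundary
assert median([3, 1, 2]) == 2
```

Because each piece is tested in isolation, a failure in `median` points immediately at the composition rather than at either subproblem.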
How would you modify this rough pseudo-algorithm for solving problems?
While not discrete steps, two best practices are also critical for effective algorithmic problem
solving:
1. Document process and results. An example of documentation is the use of metadata,
defined by Frank DiIorio's guide as "data about data and the processes that support the
creation of data and related output." DiIorio notes that designing metadata is "part fine art
and part black art," meaning that programmers need both an aesthetic sense and programmer
intuition. To be sure, inserting metadata is extra work in the short run, but in the long run it's
a Quadrant II activity (important but not urgent, in Stephen Covey's terms) that makes your work much more usable.
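A minimal sketch of what such metadata might look like in practice. The file and field names here are illustrative inventions, not from DiIorio's guide: a small record, written alongside an output file, describing both the data and the process that produced it.

```python
import json
from datetime import datetime, timezone

# Hypothetical metadata for an analysis output: data about the data,
# and about the process that created it.
metadata = {
    "output_file": "quarterly_revenue.csv",
    "description": "Revenue by region for the quarter, in USD thousands",
    "source": "crm_export.csv",
    "produced_by": "revenue_rollup.py",
    "produced_at": datetime.now(timezone.utc).isoformat(),
    "row_count": 4,
    "columns": {
        "region": "sales region name",
        "revenue_k_usd": "revenue in thousands of US dollars",
    },
}

# Write the metadata next to the output so the two travel together.
with open("quarterly_revenue.meta.json", "w") as f:
    json.dump(metadata, f, indent=2)
```

Months later, anyone (including you) can answer "where did this file come from and what do its columns mean?" without rerunning the analysis.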
2. Set up the process for long-term, unmanned use. A final characteristic of great
programmers is the ability to sacrifice short-term convenience for the sake of sustainable
longevity. Programming that takes easy shortcuts for the sake of expediency (e.g., non-intuitive variable naming or inefficient use of libraries) leads to unscalable, problem-ridden
code. First-class programmers create with an eye for the long-term consequences of what
they develop. The ability to control impulses and delay gratification is a relatively stable and
important trait.
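The naming shortcut in code form, as a contrived before-and-after of my own. Both functions compute the same thing, but only one will make sense to the next reader, or to you a year from now:

```python
# The expedient shortcut: fast to type, opaque to everyone else.
def f(a, b, c):
    return a * b * (1 + c)


# The sustainable version: the names carry the meaning.
def order_total(unit_price, quantity, tax_rate):
    """Total cost of an order line, including sales tax."""
    subtotal = unit_price * quantity
    return subtotal * (1 + tax_rate)
```

The two are interchangeable to the computer; the difference only shows up in maintenance cost, which is exactly why the shortcut is tempting.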
A well-known study found that children who are able to resist the temptation to eat a
marshmallow for 20 minutes in order to receive a second marshmallow turn out to be better
psychologically adjusted, more dependable, and to score higher on the SAT years later than
children who eat the marshmallow right away. Fortunately, it is possible to improve self-control simply by thinking about the world in a more global, abstract, and high-level way.
Given the efficacy and broad applicability of algorithmic thinking, I think it should have a
place in the standard academic curriculum. Here's some evidence: it's common for employers and
VCs in some of the industries with the highest recruiting standards (software, strategy
consulting, finance) to hire or back people who have degrees in disciplines like philosophy,
physics, and electrical engineering. This is surprising since those professions have virtually
nothing to do with those intellectual disciplines.
The reason employers do this is that certain disciplines like those listed provide training in
algorithmic thinking, and employers/VCs assume that if you can master those disciplines
you'll be able to master the skills to work in the specific functions for which they're
recruiting. The four highest-paying degree fields in 2012 were engineering, computer science,
physics, and mathematics. The marketplace clearly values them, even though relatively few
students with degrees in physics or math actually work in a job that requires only those skills.
As Prof. Cathy Davidson of Duke points out, our current gold standard for large-scale testing,
multiple-choice tests, was invented in 1914 on the model of the assembly line. Its creator
characterized it as a way of assessing lower-order thinking. Indeed, multiple-choice tests
reward rote learning.
Why not ask algorithmic questions instead wherever possible? An algorithm is right if it
works and wrong if it doesn't, with better algorithms having shorter run times. Algorithmic
thinking is process-oriented and exhaustive. It forces students to think meaningfully about
problems: they must deconstruct the logical structure of the problem, consider every
contingency, and come up with a solution. Because there is rarely a single right answer,
students are always creatively engaged in finding more and better solutions. Best of all,
algorithms are easy to teach. Michelle Levesque of the Mozilla Foundation has even started
testing out a few strategies to do this.
What else in your life is computable?
