Computational Thinking – Computer Science for Business Professionals – by CS50 at Harvard

[MUSIC PLAYING] DAVID J. MALAN: So odds are you
use a computer most every day, but you might not necessarily think
of yourself as a computer person. Something goes wrong, you don’t
necessarily know how to fix it. And if you actually want to solve
some problem using technology, the whole world of
technology, and computing, and algorithms and all of that
might seem quite foreign, so much so that you don't necessarily feel like you're making an informed decision. Well, that's just because the technology that we all use around us is working at this high level, whereby so many people who have come before us have invented this level, and this level, and this level, and this level. And so what we'll do here is try
to start from that ground level up. And indeed, at the base of all
computing, we really have, as we'll see, just zeros and ones. And that's probably something you know, at least generally speaking. But it turns out, once you have zeros and ones, you can very quickly go to text, to graphics, to videos, to spreadsheets, and to so much more. But again, that's the
world we now inhabit. But let’s now build
ourselves up to that point, so that you actually begin
to look around yourselves and realize, oh, I understand
now, what’s going on. And to do that, let’s consider this,
computational thinking, which really refers to thinking computationally. Thinking more methodically,
thinking more carefully, and somehow framing all problems in
the world as a sequence of steps. And those steps are quite simply,
input comes in, and output goes out. And what’s in between
those inputs and outputs we’ll eventually describe as
something called algorithms, but for now, let’s just consider
it to be this black box. We don’t know, we don’t care
what's going on inside there. All that we care about is that we get back a correct output, or a solution to a problem. But how do we go about representing
those inputs and outputs? If our problem at hand is to analyze
a whole bunch of financial data, a whole bunch of numbers
in a spreadsheet, how can we actually sum
all of those numbers together, or perform some other
arithmetic calculation on them? What if we instead have a really big document, a contract of some sort, and we want to make sure that it is properly spellchecked? Well, the input there
isn’t numbers, but is all of the words and
letters in that document. And the output, we hope,
is a document without all of those little red squigglies. We want a correctly spelled document. So those are just some of the problems
that you might experience most any day, but underneath the hood, there’s
quite a bit of complexity going on. But that complexity,
it turns out, is just the result of layering pretty simple ideas, one on top of another. But in order to get to that
point in the discussion, we need to somehow represent these
inputs, these numbers, these letters, whatever it is that we have at hand. And to do that, we’re
going to use binary. Now odds are you’ve probably heard
that computers indeed only speak or understand zeros and ones. But how can that possibly be, when they
can do so much today, whether they’re on our desktops, or laptops or even
our mobile phones in our pockets, if all you have at the end
of the day is zeros and ones? How could you possibly count to two, let
alone three, or four, or four billion, let alone represent any number of other types of media that computers today understand? Well, odds are you
recognize in this word here, this prefix, bi, bi meaning two,
and indeed that hints at the fact that you only have two letters
in your alphabet, so to speak. Two digits, zero and one. Now we humans have typically
grown up using the decimal system, dec meaning 10. And of course we have 0, 1, 2, 3, 4, 5, 6, 7, 8, 9: 10 possible digits that we can permute and arrange
to count even higher than that. But with just zeros and ones, how do
you get to two, how do you get to three? Well, you’re thinking still in base
10, so to speak, base 10 being decimal. But we want to think now in
base 2, so to speak, binary, where we have just two of these inputs. And let me propose
that if you’re like me, you probably grew up understanding
numbers as really just this pattern of symbols like this, where each
symbol was in a column of sorts, a placeholder, if you will. And this, if you recall, was generally
called the ones place, and then the tens place, and then the hundreds place. And so at the moment, what we
really just have on the screen is a pattern of symbols, a pattern
of digits, one, two, three. But why is this pattern of
symbols, one, two, three, the number you and I think
of immediately as 123? Well, that's because by way of these columns or places we implicitly, in our own minds, quickly do a bit of arithmetic. Of course, if we have one, two, three,
that’s really like 100 times 1 plus 10 times 2 plus 1 times 3, and
of course that gives us 123. So it turns out that the binary system
is actually fundamentally the same. So if you get decimal, if you’ve been
counting like that since grade school, you are good to go now with
binary, because in fact and arguably, binary is even
simpler, because you don’t even have to keep track of so many digits. In fact, if we consider
this pattern of symbols, this of course, in our
human world, would generally be immediately recognized by
us humans as the number zero. Of course, it’s a little
silly to have those leftmost zeros, those
leading zeros, so to speak, because they don’t really
add much mathematically. But that’s OK. You can put as many zeros to the
left of a number in our real world and it doesn’t really affect the value. Now instead of using one and 10 and
100, let me just use one, two, and four. Now why those values? Well, before, we had 1, 10, 100, and then a thousand, 10,000, 100,000, and so forth. Those are powers of 10: 10 to the
zero is 1, 10 to the one is 10, 10 to the two is 100, and so forth. So what pattern do you perhaps see here? These aren't powers of 10 anymore: 1, 2, 4, and 8, 16, 32. Do you see the pattern now? These are instead powers of two. Two to the zero is one, two to the one is two, two to the two is four, and so forth. So that's the only change. By using base 2, with just two digits, zero and one, we can still count using the same arithmetic system. So for instance, this of course
is still the number zero, because we have zero fours,
and zero twos, and zero ones. But if we instead change
this pattern to 0 0 1, well what value does this now represent? In the world of decimal, this
would represent the number one. And that’s true in the world of binary,
because we have 4 times 0 plus 2 times 0 plus 1 times 1, which
of course is just 1. So how do we get to two? You might be inclined to just
change this zero to a one, but that would be incorrect. That’s like carrying
the one, but then not wrapping around on the rightmost digit. Rather, if we want to
represent the number two, we need to come up with a
pattern that represents that. So let me propose instead that we
don’t just change that zero to a one, but we also change that one to zero. Because now we have 4
times 0 plus 2 times 1 plus 1 times 0, which of course
is the number we know as 2. And now you can perhaps
grok the pattern at hand. If we want to count up to
the number we know as three, let's just change that rightmost zero to a one. So that's four times 0 plus
2 times 1 plus 1 times 1. That of course is three. Four is perhaps jumping out at you now. We just change that zero to a one, and
then the other two digits to zeros. And now if we want to count up to
five, or to six, or to seven, or to– dang it, what’s supposed to come next? It would seem that in the binary
system, we can only count as high as– what, seven? And that’s a little strange, because
of course, we started at zero, we counted here up to seven, but surely
computers can count higher than that. And that's fine, we just need another digit. Much like if we wanted to count from
999, in our human decimal world, up to a thousand, which has four digits. Similarly here, we need another digit. We need an eights place,
and so we could represent the number we know as eight
with one, zero, zero, zero. But the key here is that
we needed another digit. To put it more mechanically,
we needed more hardware. We needed another physical place,
at least in this depiction, to actually store that one.
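In fact, this place-value arithmetic is easy to check for yourself. Here is a minimal sketch in Python (my own choice of language here, not anything this lesson prescribes), using the built-in format and int functions to move between decimal numbers and their binary patterns:

```python
# Count from 0 up to 8 in binary, just as we did with the ones,
# twos, fours, and (eventually) eights places above.
for n in range(9):
    print(n, "->", format(n, "04b"))  # e.g. 2 -> 0010, 8 -> 1000

# And the arithmetic in the other direction: the pattern 110
# is 1*4 + 1*2 + 0*1, i.e. the number 6.
print(int("110", 2))  # prints: 6
```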
And now we begin already to hint at a connection between this digital world of zeros and ones, and the hardware
world in which it’s typically incarnated. And by that, I mean, we
need to decide, ultimately, how and where to store
patterns like this. And you know the nice thing about
binary only having two values, is that it effectively means you have just two states. And you might think of these two
states as something being on, or something being off. And maybe we might think of something
being on as a one, and something being off as a zero. And so just with two states,
can we have these two extremes. And maybe it’s not on or off, maybe
it’s true, or false, or black, or white, or red, or blue. It doesn’t even matter what
we call the two states, the key is that we have two of them. And you know, in the physical
world, the simplest thing I can think of to turn on and off,
is perhaps something like this. Just a light bulb. This here’s a little
desk lamp, and in fact, if I go ahead and clip
this on here, and let me plug this into an
electrical outlet so that I’m getting some additional input if
you will, electricity or electrons, this lamp right now I claim is
representing the number zero. It's completely off, and I'm just going to agree with you that this shall represent zero. But you know what, if I want to
represent a one in the physical world, I'm just going to turn it on.
on, or true, we’ll call that a one. Zero, one. Of course, we’re not really
counting all that high just yet. If I want to count higher than
zero to one, to something higher, I’m going to need another light bulb. I’m going to need more hardware,
more memory, if you will, in the world of computers. So let’s actually now give
myself a second zero or one. Plug this thing in here. Give it some electricity as
well, which again, is really one of the few physical
resources we have in a computer, whether it’s coming from
a power cord or a battery. And so now, if I want to represent
the number zero, here we are. But I’m doing it really with zero zero. This now is going to be one, this now
is going to be two, and this, of course, is going to be three. And let's not stop there. Why have two desk lamps when you can have three desk lamps? If we instead make a little more room here, we can now count not just as high as three, but higher. Let me go ahead and plug in
this third and final desk lamp, and go ahead and turn this on. And of course, we instantly
go from three to– not quite instantly, because of the hands involved– to the number four. And so forth. So what's now the connection
between the digital and the sort of analog
world of light bulbs, and the physical world of computers? Well inside of a computer is a whole
bunch of tiny, tiny little bulbs, so to speak. In fact, you can think of what's inside of your computer as exactly this: so many little light bulbs that are either on or off. And thanks to those little light bulbs, your computer can represent numbers. Zero, one, two, three, four– as high a number as it wants, so
long as it has enough little lights. Now of course I’m oversimplifying. It’s not actually lights that you
have inside of your computer; rather, you have just switches. These switches are called, in the computing world, transistors, as you may have heard. Indeed, one of the things that companies
like Intel are increasingly doing, is packing more and more transistors
into their CPUs, central processing units, the brains of computers. And with those additional transistors, they can store even more values, or equivalently, count even higher. And so now, even though we
began the discussion down here with just zeros and
ones in the abstract, now we’ve made it a little
more physical, in so far as we can represent those zeros
and ones with something physical, drawing electricity to power
these kinds of light bulbs. Of course, we can now
miniaturize that and consider these light bulbs to really be, what
we’ll call in the computer world, transistors. But at the end of the
day, even though we have now a way of
representing information, the zeros and ones, even
though we have a way now of representing even bigger numbers
by using even more transistors inside of our computer, we’re still
talking only about numbers. And with my computer I want to
do more than just use something like Microsoft Excel. I want to do more than
just do math on my data. I want to actually store
things like letters, and words. Let alone, colors, and
images, and videos, and more. So we need to abstract higher. So again zeros and ones, we know
how to represent it, now let’s build on top of the zeros and
ones and start to represent something more interesting. Something more
alphabetical, if you will. And so this gives us ASCII,
the American Standard Code for Information
Interchange, or more modernly referred to as a superset of Unicode. Turns out, human some time
ago, decided on a mapping between letters and numbers. Somewhat arbitrarily,
but in a way that’s convenient when it comes to actually
programming things like this. And this is to say that humans some time ago decided, you know what, we are going to represent
the letter A using the decimal number 65. Now what does that mean? Well, this means that if your computer
is to store the letter A, or the letter B, or the letter C, it simply
has to store ultimately the number 65, or 66, or 67. What does it mean to store a number like 65, 66, or 67? Well, that just means to store some
pattern of bits, some pattern of bulbs as we did a moment ago, and turn
them on and off in such a way that you’re representing in binary,
the decimal number 65, 66, 67. So if you think about something like
Microsoft Word or Google documents, or any program in which you
type text, what you’re really doing by typing on the keyboard,
is sending some pattern of signals that’s telling the computer to store
not just the letter A, B, or C per se, but really to store the
number 65 or 66 or 67, or really, to store the
pattern of zeros and ones that ultimately represent
those same values. So again, this spirit
of layering binary now becomes ASCII, or Unicode,
something higher still. And so with this we can represent messages. For instance, if I wanted to represent a message like this, a familiar message let's say, I might store these three values: 72, 73, and 33. Now what is that? Well, it turns out if I look back at
this pattern here, where 65 is A, then in turn 72 is H, and 73 is I, well
what are we really representing here? It would seem we’re
representing the pattern H I, and then you wouldn’t
know this from that chart, but if you look at
another online reference, you'll see that 33 is an exclamation point. So now we have the word hi, or really the exclamation, HI!
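If you'd like to verify that mapping for yourself, here is a minimal sketch in Python, using the built-in ord and chr functions that convert between characters and the numbers underneath them:

```python
# Characters to numbers: the codes behind H, I, and !
for c in "HI!":
    print(c, "->", ord(c))  # H -> 72, I -> 73, ! -> 33

# And numbers back to characters.
print("".join(chr(n) for n in [72, 73, 33]))  # prints: HI!
```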
But that's in the context of a text editor, or a word processor like Word. Suppose, instead, you
were using not a text editor, but something like
Photoshop, or MS Paint, or some other graphical program. Well instead, you might have
that same pattern of numbers, or really bits at the end of
the day, but in this context, something like Photoshop,
these numbers are meant to be values between 0 and 255. Turns out, long story short, that
if you have eight zeros or ones, or, you know what, let's just start calling them by their formal name, bits, for binary digits. If you have eight bits, you can
count from 0 all the way up to 255. And the quick math there
is 2 to the 8, means you have 256 possible
permutations of zeros and ones. So therefore, if you start counting at
zero you can count as high up as 255. And so in the context of a graphics
program, you can think of 72 in so far as it’s not quite halfway between zero
and 255, it’s a medium amount of red and a medium amount of green. Not too much blue. Where zero means none of this color,
and 255 means a lot of this color. So if you had a pattern of bits,
and in turn numbers, stored inside
purposes of a graphics program, it’s really like telling the computer
give me a medium amount of red, give me a medium amount of green, and just a little bit of blue, and combine those like paint, or like frequencies of light, until you get the combination of those, which is this murky shade here of yellow.
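To make that concrete, here is a minimal sketch in Python that takes those very same three bytes, 72, 73, and 33, and interprets them both ways, as text and as a single RGB pixel (the hexadecimal web notation at the end is just one common way of writing such a color):

```python
# The same three bytes, two different interpretations.
data = bytes([72, 73, 33])

# Context one: interpret the bytes as ASCII text.
print(data.decode("ascii"))  # prints: HI!

# Context two: interpret the bytes as one pixel's red, green,
# and blue amounts, each from 0 (none) to 255 (a lot).
r, g, b = data
print(f"R={r}, G={g}, B={b}")     # R=72, G=73, B=33
print(f"#{r:02x}{g:02x}{b:02x}")  # the same color as #484921
```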
So again, using the same patterns of bits, we can represent letters and words and paragraphs; or, in another context altogether, that same pattern of bits could still represent numbers, but be interpreted not as letters and words and paragraphs, but as colors. If you treat each trio of 8 bits
as representing some amount of red, some amount of green,
some amount of blue, otherwise known as, if you’ve heard
the term, RGB, for red, green, blue. So this principle of starting
simple, and gradually making things more and more
and more complicated, is really a principle of abstraction. Because at this point
in the story, when we’re talking about words and paragraphs
and essays and documents, or in another context we’re
talking about images, or maybe even movies and more, we no longer
really need to care about, or even need to know
about, or understand, what’s underneath the
hood those zeros and ones, because we’ve abstracted
away from that lower level. And this principle of
abstraction, layering idea on idea on idea on idea such
that you no longer need to worry about how the lower
level ideas are implemented is nice, because it allows
us humans to focus only on the problems we
really care about, which in theory are those topmost problems. The ones that are immediately at hand, that we're building solutions to on the shoulders of computer scientists, and engineers, and just colleagues that
have come before us. And we don’t have to
get lost in the weeds, so to speak, of the earlier complexity. And so this is a powerful
problem solving technique, and indeed it’s a principle that
we’ll see applied ultimately in the world of programming as well. Now let’s try something. Speaking of abstractions,
let me encourage you at this point to take out a piece
of paper if you have one there. And surely, if you don’t
have one right there, surely you could pause this
video and actually go dig one up. Grab a pencil too, or pen. Yeah, Ill wait. Now you could just pause this,
I can’t wait all that long. Let’s assume at this point in the
story you’ve got that piece of paper, and a pencil or a pen, and
let’s play a little game. I of course can’t really
see what you’re doing. But I’m going to hope that you
either do or don’t do what I do, because either way it
will be instructive. And I’m going to leverage
some abstractions here, for better or for worse. I want you to go ahead, and please
don’t be one of those people like me who’s just following along
pretending like all right, I have the piece of paper, I have the
pencil, this is what I would be doing. Actually do this. This will be kind of fun for one of
us in the end, if this works out. Go ahead now, on that piece of paper,
with your pen or pencil, and go ahead and just draw a circle. OK. All right. And below that circle draw a square. All set. All right, and below that
square draw a triangle. Now odds are, you know exactly what
I meant when I said draw a circle, and perhaps you did a
little something like this. Or a little more perfect
than my circle there. And then beneath that
I said draw a square. And you just intuitively
know what a square is, so you might have
drawn a square like this. And then the third thing I
said was draw a triangle. And so you might have drawn a
triangle that looks like this. But immediately, that looks curiously like part of a jack-o'-lantern now. But curiously, there's
some ambiguity there. Right, a circle is kind of a circle,
but I didn’t specify the radius or diameter, so maybe yours is
bigger, maybe yours is smaller, maybe it's over here on your paper,
maybe it’s over there on your paper. So already, these abstractions
are useful in that you immediately knew what to draw, but you didn’t
really know how to draw it. Similarly, this square, I don’t know
why I drew it a little smaller here, but it’s indeed smaller,
and it’s barely a square. But it’s meant to be a square. But there’s a gap between it and the
circle, but I didn’t really specify. So there, too, the
abstraction is valuable in that you immediately knew how to
draw a square, but not where to draw it. Or in what size to draw it. And lastly, the triangle,
perhaps the most, the juiciest opportunity
for ambiguity, I didn’t tell you how to
orient that triangle. Maybe you, instead, did
something a little different, where your triangle
wasn’t drawn like that, with the pointy part at the bottom. Maybe you, instead, did
something like this. Or maybe it was somewhere
else on the paper altogether. So now at this point in the story, you
could have followed my instructions exactly as I described, but
we could still somehow come up with different solutions. And in fact, if I can now spoil
what the actual task at hand was, this was the picture I was describing. This picture here has
a circle, below which is a square, below which is a triangle. But that leaves out some key details. And the curious thing here, is that
even though abstraction is a useful mechanism, once you start to move away
from those implementation details, if you will, you very quickly realize, wait, I don't really know what you're telling me to do, necessarily. And the challenge is that computers,
as complicated or intimidating as they might seem to you, they’re
really not all that bright. Right. They can only do what they
are explicitly told to do. And so if you, the human,
or you eventually perhaps, the programmer, don’t
actually specify absolutely precisely what you want the computer– or the human in this
case– to do, you might not get the output, or the correctness of
a solution, that you’re hoping for. In fact, let’s try one
other, and let’s see what the other extreme might feel like. So on that same piece of
paper, maybe on the flip side or another sheet of paper, let’s
play this game just one more time. And this one’s going
to be a little harder. I’m going to try to tell
you exactly what to do. So lesson learned,
that was too abstract. Let’s now drill down on
the implementation details. OK. So let me think like a computer might, or how I think a computer might: go ahead and put your pen or pencil down on the piece of paper toward the top middle of the page. Now from that point, move, say, southwest
to halfway down the piece of paper, so really at a 45-degree downward angle. And then go ahead and move southeast, at a different 45-degree angle, toward the bottom of the page. So you've kind of sort of made
the left half of a triangle, or not really that, 2/3 of a
triangle, two sides of a triangle. But stay where you are. At this point in the story your pen
is probably below your original dot. But you’ve drawn two
lines that form an angle. From where your pen now
is, draw a line northeast, at an upward and to the right 45-degree angle, to the same height as your previous line. And then double back and go, say, northwest, at a different 45-degree angle, back to the original point. And at this point in the story, you
have really either nothing at all, or you have a diamond. And therein lies a curiosity too. I could have just said diamond,
draw a diamond, or draw a kite, which would be an equivalent shape. But that too lends itself to the same ambiguity, so I drilled down deeper. But my God, it's taken us just a minute
or more just to get to this point. And we’re not done yet. We’re not drawing a diamond or a kite. Now you have a diamond or a kite, with
a top vertex or point, a bottom one, a left and a right. Move your pen to the left one,
and draw a vertical line down. Now go back to the diamond or the
kite, and on the right vertex or point, similarly draw a vertical line down. And now, in that original diamond or kite, at the bottommost vertex or point, draw a vertical line down. And now, if you followed along
with these very precise machine instructions, you’ve got three vertical
lines that are just kind of dangling. I don’t even want to mislead you with
any hand gestures, three vertical lines that are dangling, go ahead
and draw two lines that connect the ends of those three lines. Oh my God. Like it’s so complex to do what we just
did, and I would put money on the fact that you did not draw correctly,
what I was trying to get you to draw, which is just a cube. Right, a cube is a wonderfully
simple abstraction. A shape with which you and I
are probably long familiar, and it’s so easy to say draw a cube. But as we saw before with the circle
and the square and the triangle, just saying draw a cube is ambiguous. At what angle should it be oriented? How big should it be? Where on the piece of
paper should it be? And so I was trying to
be so much more precise this time by having you put
your pen down at the top, go down to the southwest
and draw this line, then go down to the southernmost point
here, then another northeasterly line, and then a northwesterly line. And that just got us to the kite. But the takeaway here,
is that when it comes to making a computer
do what you want to do, you can’t just speak these abstractions. You actually have to implement
them, or program them, or code them at least once. In fact, some of the earliest graphical
programs in the world of computing, were kind of as low level as this. There was an old programming
language called Logo, in fact, that allowed you to
program by moving a cursor, like a turtle of sorts, up and down
and left and right on the screen. And putting either down
or up a marker of sorts, and that you could draw shapes like this
by just moving around on the screen.
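Python, as it happens, still ships with a turtle module inspired by exactly this Logo idea, so here is a minimal sketch of how explicit even a simple square is at that level (the side length of 100 is arbitrary, and running this opens a small graphics window):

```python
import turtle

pen = turtle.Turtle()

# Logo-style drawing: every side of the square is its own
# explicit move-and-turn instruction.
for _ in range(4):
    pen.forward(100)  # draw one side, 100 units long
    pen.right(90)     # turn 90 degrees clockwise

turtle.done()  # keep the window open until it's closed
```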
But to draw things like this clearly, as in our verbal example here, you have to be so darn precise. And it just gets so tedious so quickly. It certainly would take all of
the fun out of using a computer, or all the fun out of programming a computer, if you had to do this every time. But that's where there's an
ingredient here to be leveraged. One of the things that a computer
scientist, and a programmer, and engineers, more
generally, very often do, is they absolutely implement these
kinds of low level details once. They go through those very
methodical, if mundane, steps of getting something just right. And then they save the
instructions they wrote. They save the programs
they wrote, if you will, so that they can reuse them later. And the fancy words for these things, we'll eventually see, are libraries. Or functions. Or other names still. So once one human in the world has
implemented a program, if you will, with which to draw a cube, we can similarly stand on his or her shoulders and reuse that same routine. And hopefully, they were clever
enough to allow us to parameterize it. To customize it, by maybe changing
the angle and the size and the depth and so forth. So it doesn’t just have
to be that one cube. And so here we have a wonderfully
And so here we have a wonderfully powerful problem-solving technique: abstraction, which allows us to say what we mean,
and the rest of the humans in the room just immediately understand– at
least after some instruction– what it is we’re talking about. But with computers being these
very literal devices, we can only talk at those
levels of abstraction once we’ve actually built up software,
implemented solutions to get us to that point in the conversation. And this is why, at first glance,
using a phone in your pocket or a computer on your desk might
actually seem super, super complicated. There’s so many moving parts. And absolutely there are. Windows and Mac OS are
literally the result of millions of lines of programming code these days,
having been written over the years. And so of course it’s to be expected
that you might be a little daunted or overwhelmed by the
apparent complexity. But one of the goals
here for this lesson, is to really help you appreciate
that beneath all that complexity, is a simpler idea. And then an even simpler idea. And then a very simple
idea, and so forth. And so once you sort of bottom out
and understand those first principles, zeros and ones, binary, on top of which might be ASCII or Unicode, on top of which might be some other encoding still, you can resume the current
conversation and understand that what might have looked
completely complex at first glance, is really just the result
of assembling, if you will, a whole bunch of pretty
simple puzzle pieces. So at this point in the story, we now
have a way of representing information. But now let's just stipulate: we know how to represent information. At the end of the day, yeah, it's binary, and you can think about it as decimal, and maybe you're using ASCII or Unicode, or maybe you're using graphics, or whatever is going on underneath the hood. All we need to know and care
about now is that you can do it. And we don’t really have to think
too much more at that level. Now we can resume our look
at the overarching model at hand, which is problem solving. We now have a way to represent
our inputs and outputs. What then is inside that black box? Well the buzzword du jour
these days is perhaps algorithms, where even if you
don’t necessarily know what it is or how to make one, or how to use one
in the context of a computer program, algorithms seem to be increasingly
the solution to all of our problems. Well an algorithm isn’t
all that complex, fancy though the word might
sound, it’s really just step by step instructions
for solving a problem. And those steps can be in English, or
they can be in something we'll call pseudocode, sort of code-like but not really an actual language; or they can be in Java, or C, or C++, or JavaScript, or any number of other
programming languages. Algorithms are, again, just sets of
instructions for solving a problem. Well what’s one such problem
we might want to solve? Well in the real world we might have– the real old world shall we say– we might have a problem that once
upon a time looked like this. A phone book with a whole lot
of pieces of paper inside of it, on which were a whole bunch of names
and a whole bunch of telephone numbers. And the phone book was
alphabetized from A to Z typically. And maybe there were some other sections
like the yellow pages, or apparently the red pages, whatever this is here. But we’ll just assume that
these are the white pages, so to speak, with just a whole
bunch of humans' names and numbers. So suppose I want to
find one such human, a human whom we’ll call Mike Smith. How do you go about finding
Mike Smith in this phone book? Well I could, somewhat stupidly,
but arguably quite correctly, start at the beginning of the book. And I see here instructions
for calling 911. If I move on to page two, I now
see some people's names and numbers. But these people's
names all start with A. And so I continue going
through the A section. And the A section. And then eventually I
get to the B section. And the B section. And the B section. And then the Cs, and the Ds
and the Es and Fs and so forth. And eventually, tediously but correctly,
I’ll get to the Ss in the phone book. And if I see Mike Smith here, I can now
pick up the phone and call Mike Smith. Now no one out there, if you still use this technology, is going to have looked up Mike Smith in that way. You're going to fly through this
phone book far faster than I, right, you learned probably in grade
school why count by ones when you can count by twos? So two, four, six, eight, ten, twelve. Sounds faster, is faster. It’s going to get me to Mike faster,
but is it going to get me to Mike correctly? I'm going twice as fast by
doing two pages at a time. So I'm going to have to flip half as many times. But there's actually a bug, so to
speak, a potential mistake here. What is that? Well in my cleverness
to get to Mike’s name twice as fast, what if I
go ever so slightly too far, just because by bad luck, Mike
is sandwiched between two of the pages that I so cleverly was
skimming two at a time? Of course I’m looking at
this side, maybe this side, but maybe Mike is in the middle. And so it turns out with that
algorithm, that twosie approach, I'm going to have to have a little bit of a check at the end. Such that if I hit, say, the T section,
and see a name starting with T, I better wait a minute,
let me double back, I might need to go back at
most one page to make sure that I didn’t actually miss Mike Smith. So at the end of the
day, it’s not correct if you naively just do
two pages at a time, but it is correct if you
do two pages at a time, with one final reversal by
a page, just to make sure once you go past SMITH
in the phone book, that you didn’t accidentally
miss Mike just one page prior. So it’s still super fast, you
just need that one little check. But still, no one out there, is going
to look up Mike Smith one page at a time, two pages at a time. You’re going to open in the
middle of the damn phone book, look down and see, oh, not in the S
section yet, I’m in the M section. And so what you intuitively know
how to do since growing up perhaps, is that you know Mike is not
in this half of the phone book. Mike is clearly in this
half of the phone book. And so at this point you can
figuratively– but in our case literally– tear the problem in half. Throw half of the problem away, and be left with a, well, single-use phone book, and only half of one at that: half the size of the problem. So if we had 1,000 pages originally,
now maybe I’ve got only 500 pages. And so I can repeat this intuition. Jump roughly to the middle. Darn, I’m a little too far
now, I’m in the T section. Again, let’s tear half the problem
away, throw it away as well, and now be left with a 250 page problem,
which can now be 125 page problem. Which is getting easier and easier
and easier, until we repeat, repeat, repeat, repeat. Until theoretically, we’re left
with just one page in the end. Maybe Mike’s on it, maybe Mike’s
not, but if he’s in the phone book, he will be on this page. So how efficient was that. Well if that phone book had 1,000
pages, in my first algorithm it might have taken me what, like
700 plus steps to get to Mike? Or in the worst case 1,000
steps, right, it’s alphabetical. But maybe it’s not
Smith, maybe it’s someone with the last name that starts with Z.
In the worst case in that first phone book, maybe it would take me 1,000
steps maximally to find Mike Smith. Pretty slow. What about that second algorithm
where I was going two pages at a time? Well, with that algorithm, it’s
going to take me like 500 steps maximally to find Mike Smith. That’s twice as fast,
that’s pretty good. It’s not nearly as amazing as
the algorithm we settled on. The intuitive one, arguably,
where I divided and conquered, halving the problem again and again and again. Because if I start at 1,000, and I go
to 500, then 250, then 125 and so forth, rounding as needed, I actually
get to one page much faster. Put another way, how many times
can you divide 1,000 in half before you’re left with
just the number one? Well if you do the math, either
in one direction or the other, you’ll see that it’s roughly ten. In fact, if you want to pause even and
grab a calculator, or a piece of paper, or pencil, or just think through it
in your head, you can start with 1,000 and go to 500, 250, 125, and so forth. And you'll eventually
hit one, after just 10– give or take, depending
on how you round– steps. That’s pretty powerful. But not that big of a deal. Right. Ten still is like a lot of page turns. But what about an even bigger problem. The types of problems that Google,
and Microsoft, and Facebook, and Oracle, and really big
companies deal with that have lots and lots of data. Suppose, for instance, that I’m
searching through not a phone book, but a database. A big program that stores
lots and lots of data. And suppose the data that’s being
stored is still names and numbers. How much time might it take to
find someone like Mike Smith, in a database that’s got like 4 billion
names and numbers in it somehow? Well, four billion names and numbers. If we use that first algorithm, it might take as many as four billion steps to find Mike Smith in a really big database. In a really big computer
program, that’s not too smart. But if I instead use
the twosie approach, flipping through two
database records at once, maybe it’s not 4 billion operations,
maybe it’s just 2 billion. That’s good. That’s half as many operations. But what if I use this
super clever intuition that I kind of grew up with here, with
that divide-and-conquer algorithm? Well, I can start with 4 billion
database records, go to 2 billion, then 1 billion, then 500 million, 250
million, 125 million, and so forth. I’m getting to one much, much faster. In fact, I can only divide
the number 4 billion in half, roughly 32 times total. Again, depending kind of sort of
on how you round, but 32 times. 32 is so much smaller
than 2 billion steps. And 32 is certainly smaller than
the original starting point of 4 billion steps.
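If you'd like to double-check that arithmetic, here is a minimal sketch in Python that simply counts the halvings (math.log2 gives roughly the same answer directly):

```python
import math

def halvings(n):
    """Count how many times n can be cut in half before reaching 1."""
    steps = 0
    while n > 1:
        n //= 2  # tear the problem in half, rounding down
        steps += 1
    return steps

print(halvings(1_000))           # 9, i.e. roughly ten
print(halvings(4_000_000_000))   # 31, i.e. roughly 32, depending on rounding
print(math.log2(4_000_000_000))  # about 31.9
```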
So this is a really powerful problem-solving technique, to divide and conquer. And here, too, even though what
computers might seem to be doing these days is super complicated
and sophisticated– and it is in many ways– some of the ideas that those computers and the programmers who program them are leveraging are actually pretty familiar to us already. Inside of this black box might not
be something super fancy, but just a clever adaptation of
some of your grade school human intuition to the
context of a computer program. Now it's one thing to talk about algorithms, especially if we're just spitballing them verbally. But computers, of course,
need us to be more precise. And they need us to state our
thoughts more methodically. So what does this mean? Well, let me propose that we write
some code, or really pseudocode, for this same algorithm, where
we’re looking for someone like Mike Smith in a phone book. And it’s pseudocode because it’s
not going to be Java, or C++, or JavaScript, or anything else. It's going to be English-like syntax,
that’s kind of sort of like code. And before long, we’ll see
some actual code as well. But step zero, and just
to be playful here, I’m not going to start counting at one,
I’m going to start counting at zero, just because with any
number of bits, or digits, the lowest number that I could
count with is of course zero. Pick up phone book is the
first thing that I did. One, open to the middle
of the phone book is the second thing I did in
that third and final algorithm. Look at the names was the next thing I
did, looking down at that phone book. And then if Smith is
among names, and notice, this is semantically different
from those first three steps, because this expression starts with if. So this is kind of like the
proverbial fork in the road, if Smith is among the
names, let’s do this. What do we want to do? Call Mike, that’s great. Otherwise, or else if Mike
is earlier in the book, let’s go a different direction instead. Let’s instead open to the middle of
the left half of the book, all right. So that would be the left half
that I threw away earlier, and then go back to step two. Because once I have opened to the
middle of the left half of the book, I don’t have to actually
dramatically tear it. I now need to look for Mike’s
name again, as per step two. Else if Mike is later
in the book, I actually want to open to the middle of
the right half of the book, and then go back to step two as well. Now you might think that’s
everything to this program. But there’s actually a remaining step. Indeed I’ve got room left on the
screen for a remaining step or two, but there are more than
three possibilities. One of them is that Mike is
among the names, one of them is that Mike is to the left, the
other is that Mike is to the right. What’s the fourth? If I don’t consider the fourth,
and indeed if in a program I don’t implement the fourth,
my program might crash. My computer might hang. My computer might behave
in an unpredictable way, because if the programmer wasn’t
so precise as to anticipate something that might happen, who
knows what the computer might do. And indeed, that’s often
why your own computer might have a little spinning
beachball, or icon, or it might crash outright or freeze. It’s because something
unanticipated happened. So let’s be precise. There’s a fourth and final
scenario I can think of, which is perhaps on your mind too: just quit. Because in the fourth scenario,
if Mike’s not among the names, and he’s not to the left, and
he’s not to the right in the book, he must not be there. And so let’s avoid just hanging
infinitely somehow or other, by actually proactively
deciding to quit.
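Assembled all together, those steps amount to the following pseudocode, reconstructed here from the walkthrough above (the step numbers match the lines referenced in a moment, and indentation marks the steps that happen only conditionally):

```
0   Pick up phone book
1   Open to middle of phone book
2   Look at names
3   If Smith is among names
4       Call Mike
5   Else if Smith is earlier in book
6       Open to middle of left half of book
7       Go back to step 2
8   Else if Smith is later in book
9       Open to middle of right half of book
10      Go back to step 2
11  Else
12      Quit
```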
But now let's tease apart what some of these terms actually are. So in yellow are some things that
really just look like verbs, or actions. And we’re going to call those
statements, or more specifically functions, or procedures would
be a reasonable synonym too. And each of these yellow
terms is really a call to action for the computer to
just do something unconditionally. But sometimes, we want that computer
to do something conditionally, as evinced by these yellow terms now. If, and else if, and else if, and else,
kind of paints the picture of a four-way fork in the road. Where each of these
branches, or conditions, leads us to take different action. And you’ll see that I’ve indented lines
four and six and seven and nine and ten and twelve, because they are meant to happen only if lines three and five and eight and eleven actually apply. So that indentation kind of
captures the logic of this program. Lastly, or second to last, there is this. This is what we call, a little more fancily, a Boolean expression. These yellow phrases here
are kind of like questions, yes-or-no questions: true or false, one or zero, or any number of other binary terms. But these are questions we're asking, the answer to which is
going to be true or false. Smith is among the names, true or false? Smith is earlier in the
book, true or false? Smith is later in the
book, true or false? And so these Boolean expressions, named after someone with the last name Boole, from long ago, are a way of having conditions take you in a different direction based on whether something is true or false. Three such examples there. And then, lastly,
there’s these two lines. On seven and ten there’s
this expression go back to step two, which is to induce
a bit of cyclicity, right. You can sort of think about it
visually if you go from step seven, or ten for that matter,
back up to step two. This might happen again,
and again, and again, if Mike is still in the left half, the left half, the left, as you're whittling
down the phone book. And so we’re going to induce, and
induce, and induce potentially, this cycling or looping behavior. So these lines here
now in yellow represent what a computer program
might call a loop. Now these same English phrases we’ll
eventually see can be translated into Java, and C++, and JavaScript, and Python, and Ruby, and other languages still.
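Here, for instance, is one such translation, a minimal sketch in Python of that same divide-and-conquer search over an alphabetized list (the function name and the sample names are just illustrative):

```python
def search(names, target):
    """Binary search: repeatedly open to the middle and discard half."""
    low, high = 0, len(names) - 1
    while low <= high:               # keep going until nothing is left
        middle = (low + high) // 2   # open to the middle
        if names[middle] == target:  # Smith is among the names
            return middle
        elif target < names[middle]: # earlier in the book: keep left half
            high = middle - 1
        else:                        # later in the book: keep right half
            low = middle + 1
    return None                      # else: quit, not in the book

book = ["Adams", "Brown", "Davis", "Jones", "Miller", "Smith", "Wilson"]
print(search(book, "Smith"))  # prints: 5
print(search(book, "Zuck"))   # prints: None
```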
But the key takeaway for us here today is, one, the precision with which
we specified these steps, two, the fact that there’s this ordering,
what happens after the other. The fact that there is
these conditions, some only happen if something is true,
and the fact that some of them can happen again, and again, and
again, based on some kind of looping. But is this good? Well, it is correct, and that of course was one of our goals with the phone book initially: let's get it correct, and better still, let's get it correct and efficient, correct and fast. But how fast now is this? In fact, how do I put a number, or really a formula, on the performance of
the algorithm so that I can claim that I am a good programmer,
or I am a good problem solver? I’ve not only solved your problem
correctly, but really, really well. Well let me propose that we
analyze these three functions. Kind of in the abstract. We don’t need actual arithmetic
expressions here, per se. We’ll just do things
variably as follows. So here’s a nice little Cartesian plane,
with an x-axis of size of problem, and a y-axis of time to solve. And the farther out you go on size
of problem, from left to right, means bigger, bigger, bigger, bigger,
bigger, bigger, bigger bigger problem. And by bigger problem I
mean more and more pages. More and more input, whatever
the problem actually is. And then time to solve, is not
very much time, lots of time. So on the y-axis too, the
higher you go, the more time it takes to solve a
problem, or really the slower it is to solve the problem. So let’s now draw this red line here
as a depiction of the first algorithm's performance, or running time. Whereby there is a one-to-one
relationship between size and time. A one to one relationship
between number of pages, and maybe number of page
turns, or seconds, or whatever your unit of measure happens to be. So that if I have a phone book of this
size, this dot here on the red line is how many seconds, or page
turns it takes me to find it. And there’s a linear relationship
there, as implied by this variable– as we'll call it– as in algebra: n, for number of pages, for instance. It's linear insofar as, if a phone company like Verizon increases the number of pages in the phone book just by one next year, say a few more people move into town and they
turn to find someone like Mike, or anyone else in that phone
book if Verizon just adds a page. Or if they doubled the
number of pages, it might double the amount of
time it takes to find someone. There’s a linear
relationship between the two. Now with the second algorithm, where
I was flying through the phone book two pages at a time, there’s
still a linear relationship, but it’s not quite as bad so to speak. For instance, if I have this
many pages in my phone book, my first algorithm might take this many seconds, but because I'm flying through the phone book two at a time with the second, it will take me half as much time. Half as many page turns
with that second algorithm. So we’ll describe it
algebraically as n over two. Where n, again, is just
the number of pages. So there is a relationship between
those two lines and those two formulas. But that third algorithm? Super, super fancy, I'll wager. And the shape of its line
is fundamentally different. If you remember your
logarithms, you’ll see here this green curved line, otherwise
depicted here as log of n, or really log base 2 of
n, but we’ll just keep it abstracted away as just log of n. This is a fundamentally different shape. And it still goes forever
up and up and up and up. Even though it looks
like it’s flattening out, it’s not perfectly flattening out, it’s
just growing and growing very slowly. And that’s key, because what’s powerful
about that third and final algorithm, is that if Verizon, for instance, doubles the number of pages in the phone book next year, because a whole lot of people move into town, or maybe two towns merge or whatever, then how many more steps will
it take me to find someone like Mike Smith in that phone book? Well, just one additional page turn. One additional tear of the phone book. Because with each tear I can take a bite out of that problem, 50% of it. I can literally tear it in half. Meanwhile, if Verizon doubled the
number of pages in the phone book with my first algorithm, as per this
straight linear relationship, aka n, it would double the amount of time
necessary, or double the number of page turns necessary, to find
Mike Smith in that case. So again, in the case
of the first algorithm, it would double the amount of time taken. In the case of the third algorithm, it would increase by just one step. One step. So if that phone book went ridiculously
from four billion to eight billion pages, no big deal. With that third algorithm, I'd just need one additional page turn. One additional tear of the phone book. Not so in the case of the first algorithm. I would need an additional 4 billion
page turns to find who I’m looking for. OK. But a phone book is a physical problem. And searching for phone numbers in
a phone book is a physical solution. But what can computers
do electronically? Like what is the analog in
our computer to the problems we’ve been solving thus far? Well what about our own iPhones,
or Android phones, or the like, where you have contacts these days? Sort of a digital
version of a phone book. And those contacts are
typically sorted alphabetically. But even then, you might know
a lot of people, and most of us probably don’t scroll, scroll,
scroll, scroll, scroll. And if you do, you’re doing it wrong. You can instead search by keyword,
or by first name, or by last name, to actually find someone
much more efficiently. And typically their name
jumps to the top of the list. So that's an algorithm. Google, and Apple, and other companies have implemented algorithms for searching, and even sorting, your list of contacts so that they
can find people for you quickly, so that you can just send them a
text message or make that call. But let’s simplify further. Rather than even get into the sorting
of names, and alphabetical phrases, let’s keep things simple for a moment. And just assume that we want
to store things like numbers. A list of numbers for instance. So if we want to store a list
of numbers, how can we do it? Well let me propose that the screen
here, or if you will a piece of paper in front of you, really just
represents your computer’s memory. Now you may know that
inside of your computer there’s different types of memory. Something called the hard disk,
or maybe a solid-state disk, or something called RAM, or random-access memory, something called ROM, or read-only memory, something called L1 cache, L2 cache, sometimes L3 cache, and the like. But the one we're going to think about
right now is RAM, random-access memory. And this is the type of
memory that you might have a gigabyte of, four gigabytes of,
16 gigabytes of, depending on how fancy your laptop or desktop computer
is, or your phone for that matter. So inside of your computer, or
phone, is a whole bunch of memory. A whole bunch of RAM. And if you have, for
instance, 4 gigabytes of RAM, that is four billion bytes, because giga means billion, give or take, so four gigabytes means four billion bytes. Mega, meanwhile, means million, and so four megabytes would be four million bytes. So 4 gigabytes is even bigger. And if you want to count
higher still, four terabytes would be a huge amount of memory. So if you have four
gigabytes of memory, that’s four billion bytes of memory, and a
byte meanwhile, is just eight bits. So why talk in terms of bytes? Well it’s just easier to
talk in terms of eight bits, rather than individual bits. Because with one bit, you
can’t really do all that much. You can only count from zero to one. So with 8 bits at least we can count
a bit higher and express a bit more. So if we have 4 billion bytes of
memory inside of our computer, you know it stands to reason that
we could number each of those bytes. And maybe the topmost byte up here represents byte number zero, and maybe the bottommost byte over here represents byte number four billion, give or take. In fact, it's not exactly 4 billion,
it’s a little more than that. But for our purposes, let’s just
assume that if this screen here represents all of my computer’s
memory, and you know for that matter, let’s think of my memory as being
divisible into rows and columns, just to kind of keep things orderly. This is not actually what it
looks like underneath the hood. And it's definitely not as wavy as I'm making it out to be here. But you can think of it
really, as rows and columns, that you can address each
of the cells in this table. So kind of sort of like a spreadsheet
with a whole lot of room for values. So this might be byte zero one,
two, three, dot dot dot if you will. Four billion or so in the
bottom right hand corner. But where these things are laid out
doesn’t really matter for our purposes right now. So that’s how you might address memory. But how might you use memory? So suppose I want to store
a whole bunch of values inside of my computer’s memory. I don’t really care where
it is for the moment. And therefore, I don’t care
what those addresses are. But let’s suppose that I do care that
the numbers are close by to each other. They are contiguous in memory. Back to back to back to back. And we’ll see that that has
some advantages for efficiency. So suppose that I want to store
one number in my computer’s memory, and suppose that this memory
and this memory and this memory here, those are all being used
already by other programs, or other things going on in my program. And so the first available slot,
for instance, say is this one here. And the number I want to
store is the number one. And the next number that I want to
store is going to be the number 50. And the next number I want to
store is going to be the number 51. And the next one 61. And then the next one 109. And then the next one 121. And so forth. Which is to say, if I’m storing not
people’s phone numbers or names, just more simply for
the sake of discussion, all I care about for whatever
reason is storing a list of numbers. I might store them in my computer’s
memory in exactly this way. Each of these squares represents
a byte, or eight bits. We know that we can abstract
above bits, and actually represent things like decimal numbers. So I’m just stipulating
that inside of these boxes really is eight bits, or some
pattern of eight zeros and ones. But that’s too low level. We’re going to abstract away from
that and just talk in terms, for now, in decimal digits. So this thing here might
be how I store something underneath the hood in my computer. And what’s nice about this, is that
it’s super, super easy to find values. In fact, let me just highlight this
range of values, and slap a name on it. This boxed in region here
in a computer program, would very often be called an array. It’s a list of sorts, of
values, but what’s key here is that this list of
values is back to back to back to back all contiguous in memory. When that is the case, when you have
this property inside of a computer, you can do things pretty darned fast. Because there’s a mathematical
relationship among the locations of all of these values. For instance, if you are
the computer, and you find yourself looking at
the number 50, how do you find the next number in the list? Well, you just add one to your location. Not one to your value,
one to your location. How do you go from the number 51 to 61? You just add one to your location. Again, if this is byte zero, one, two, three, or, I don't know where we are here, maybe this is byte 100, 101, 102, 103, because all of these values are back to back to back to back, you can very quickly access them just by looking to the left or looking to the right, by
a fixed number of bits. This is advantageous,
because you can jump around. In fact, this is where the name
random access memory comes from. You have random access, not in the sense
that you just go anywhere randomly, but you can go anywhere you
want in the same amount of time. Even though this byte four billion looks really far away from everything else, it doesn't matter, because arithmetically the computer can
say, give me byte number four billion. And that’s a constant time
operation, it's not linear. It doesn't take more and more steps the farther away it is, because all of this is electricity and electronics, not moving parts. You can just instantly jump to any of the cells in your computer's memory like that.
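Here is a minimal sketch in Python of that arithmetic: the location of element i is just the array's starting address plus i times the size of each element (the starting address of 100 is made up purely for illustration):

```python
values = [1, 50, 51, 61, 109, 121]  # contiguous, back to back in memory

base_address = 100  # pretend the array starts at byte 100 (illustrative)
element_size = 1    # each of these values fits in a single byte

# Random access: the address of element i is pure arithmetic, so the
# jump takes the same amount of time no matter how far away i is.
i = 3
address = base_address + i * element_size
print(f"values[{i}] = {values[i]}, stored at byte {address}")  # byte 103
```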
The catch is that I, the human, can look at these values and just see, oh, a whole bunch of values, six values inside of that
array of memory there. But a computer can’t do that. In fact a computer can really
only see one value at a time. And so when I say the computer can go
get value four billion, it can do that. But it can only go get that one value. If it wants another, it has to go
get that value and another value. And indeed, inside of a computer, inside
of a CPU that a company like Intel might make, there’s
an instruction that’s often called load, which
refers to exactly this process. Go load a value, maybe one byte,
maybe four bytes, maybe eight bytes, from memory, and bring it back to me. And put it in something
called a register, typically. But that load operation is kind of like,
if you’ve ever seen those fancy soda machines where it doesn’t just
drop your soda on the ground, instead there’s that fancy little
robotic arm that for whatever reason goes left and then goes right
and then goes up and then goes down, and then finally decides
where your soda is. Then it moves it over the
dispenser and drops it out. Well that kind of robotic arm is
kind of like a physical incarnation of a computer, in that it can’t
just take a look and figure out where the Coca-Cola is, it has
to go to that particular location and actually drop it for you. But computers, when there aren’t moving
parts, can just jump there instantly. You don’t have to wait for
that robotic arm to move. And so in this case, the computer can
get one of these values instantly, but it doesn’t know what value
is there until it arrives. Right? It's kind of like, well, another analogy might be
a whole bunch of lockers in a train station or the like. Where the only way you can see what’s
inside of a locker is by renting it or by putting in a key and opening
it up and seeing what's inside. Similarly, you can think of there being tiny little doors occluding these numbers, so that
the computer has to open it up before it sees what’s there. Speaking of doors, let’s suppose
that the computer's memory actually has some doors in front of it. Depicted here are just
these simple rectangles. And behind each of these
doors are some random numbers,
I do know and care about. In fact, I want to find
myself the number 50, but I don’t know where it is. So I’m going to take a pretty naive,
but correct approach, of just starting from the left. Just as we did with a phone book,
starting from the first page. And I’m going to go ahead and knock
on, and then open the first door. And I see the number 15, that
of course is not the number 50. So I’m going to proceed further. To go and knock on and touch
the second door, and I see 23. Still not the number I’m looking
for, so I knock and touch again. 16. Knock and touch again. Knock and touch again. Still not my number, but I have at least found the meaning of life, 42. And let's try one more door here, and there is the number 50 behind this door.
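Here's a minimal sketch of that door-by-door strategy in Python. Note that only a few of the hidden values (15, 23, 16, 42, and 50) are actually revealed in the walkthrough above; the rest of the list is made up just to fill out the seven doors.

```python
# A minimal sketch of linear search: open each door left to right
# until the target appears.

doors = [15, 23, 16, 8, 42, 4, 50]  # 8 and 4 are hypothetical stand-ins

def linear_search(values, target):
    for i, value in enumerate(values):
        if value == target:
            return i     # found it after i + 1 "knocks"
    return None          # the target isn't behind any door

print(linear_search(doors, 50))  # 6: found, but only at the very last door
```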
So here too is a very linear algorithm, but these numbers are kind of all over the place, right? It starts at 15, gets a little bigger, then gets smaller, then smaller still, then bigger, then bigger again. So it doesn't seem that these numbers are sorted. And in fact, the analog in my phone might be that my contacts are just randomly ordered. Maybe ordered in the
order in which I inputted them, which probably isn’t ideal
when I want to find someone. I don’t really care in what
order they were put in, I care that I find them quickly. And so maybe I should do
a little work up front, or maybe Apple or Google should
do a little work upfront, and actually maintain my
contacts in sorted order. Which indeed they do. Or in this case here, why don’t
I at least maintain my numbers in sorted order. So let me go ahead and reset all
of the doors, and start again. This time with an assumption. This time with a feature, if you
will, that the numbers are now sorted behind these doors. So they go from smallest to biggest, and
there might be some gaps in the middle, but in so far as they’re
now sorted, I can leverage some of that old school
phone book intuition and divide and conquer this problem. Let me go ahead into the middle
of the problem now, not the left, and let me go ahead and knock
on and open the middle door. And there’s 16. Now 16 is kind of a
small number, but I now know that none of the
numbers to the left are going to be the one I’m looking for. Because I’m still looking for 50. So I can sort of throw
that half of the problem away, or really just
turn a blind eye to it. And now go to the right
half of the problem, identify the middle door here, knock on
and open this one, there again is 42. But now, again, I can apply
some of that intuition and know that 50 is
bigger than 42, so it must be that 50 is right here at the last door. And in this case, have I gotten away with opening fewer doors? The savings aren't dramatic, admittedly, but that's only because we could fit some seven doors or so on the screen here. But if the number of doors were four billion, or even larger, surely with this divide and conquer approach I could find my number super fast.
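Here's a minimal sketch of that divide-and-conquer search in Python. The sorted values are again mostly illustrative: the walkthrough only reveals 16 in the middle, then 42, then 50 at the end.

```python
# A minimal sketch of binary search over sorted doors: check the middle,
# throw half of the problem away, and repeat.

doors = [4, 8, 15, 16, 23, 42, 50]  # sorted; 4, 8, 15, 23 are hypothetical

def binary_search(values, target):
    low, high = 0, len(values) - 1
    while low <= high:
        middle = (low + high) // 2   # round down if there's no exact middle
        if values[middle] == target:
            return middle
        elif values[middle] < target:
            low = middle + 1         # the target can only be to the right
        else:
            high = middle - 1        # the target can only be to the left
    return None

print(binary_search(doors, 50))  # 6: found after opening just three doors
```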
But there's kind of a trade-off here, right? Like yes, I can now find
numbers faster, or equivalently I can find people presumably
in my address book in my phone faster, if they are maintained in
sorted order, but what price did I pay? In fact, a theme in computer science,
and in programming specifically, is this theme of trade offs. Yes, you can speed up searches, but
you’re going to have to pay a price. What is that price in this context? Well it took someone some amount of
work to sort those contacts, right. Before class I had actually come up
with the sorted order of those doors, and then put the right numbers
behind the right doors. There was a non-zero amount of work. Not a huge amount of
work, but it’s non-zero. So really, by speeding
things up during search, I have to first incur an up front,
one time cost, of sorting things before I even begin to search. Or equivalently, someone at
Google, or someone at Microsoft, or someone at Facebook, or any of
these big companies with big data, if they want to use
searching efficiently, they need to pay a price upfront
of sorting the data first. And indeed, that’s what Android
and iOS and other devices too, must be doing underneath the hood. Someone at some point in time,
probably long ago at this point, implemented a sorting routine. But how quickly can we sort numbers? How quickly can we actually achieve
that result of sorted values so that we can then do
search more efficiently? So for this, let’s try
something a little visual, a little old school, in fact. We’re here in this beautiful theater. And in fact we have a whole bunch
of music stands from the musicians. Let’s go ahead and use
a few music stands, each of which will represent a
location in my computer’s memory, and then I’m just going to
have some numbers that I want to put inside of those locations. And let’s see how expensive it is
for me to actually sort those values. So I have here in my hand the
numbers one through eight, and I have here eight memory locations
as represented here by music stands. And let me go ahead and just put these
numbers in pretty much random order. I’ve got my little cheat
sheet there up on the screen, so that we reset each
time to the same location. And I’m just kind of putting
these in some random order. Where four’s going to be
over here, and the numbers are going to sometimes
get bigger, sometimes get smaller from left to right. The key, though, is that
they’re not actually sorted. And so I start now with
these values as such. All right. So suppose I wanted to find
now, maybe the number 50. Well 50 is not among these numbers,
so we better think a little smaller. Suppose I want to find
myself the number seven. Well again, as in the case
of a computer, or the doors, I can't find seven just by kind of cheating like a human and saying, oh, there it is over there. I have to get to that point more methodically, and find the number seven linearly, using linear search, so to speak. I have to check: is this it? Nope. Is this it? Nope. Is this it? Nope. Is this it? Nope. Is this it? Nope. Is this it? Nope. Is this it? Yes. And now I found it, coincidentally,
some seven steps later. But seven could have been
anywhere, but in the worst case, it’s all the way over here. Now could I use binary search on this? Let’s actually slap a label on that
algorithm we’ve leveraged with a phone book. Dividing and conquering. If you do it half and half and half
and half, looking for some value, it’s actually more technically
called binary search. Bi meaning two, and so it’s
two because you’re splitting the problem in half each time. Could I use binary search here? Well there’s one minor
problem, which is that I don’t have an odd number
of music stands, which means the middle element is
like between eight and one. But that’s fine. We can deal with that arithmetically
by just rounding up or down, depending on how we want
to implement it ultimately. But even then, that’s
not the biggest problem. If I look at eight here, and
I’m looking for seven, well based on my previous divide
and conquer strategy, I should be looking to the left
because seven is smaller than eight. Of course I’m not going to
find it to the left of eight, because it’s not there, it’s over there. And that’s because the
numbers aren’t sorted. So as in the case with my
hypothetical address book, if I want to be able to
search this efficiently, or search a database efficiently, or search most anything efficiently,
or search most anything efficiently, I need to know something about the data. And ideally that it’s sorted. So how do I get to the
point of sorting this data? Well, you know what, I could
take a fairly localized approach. Like four and two, I’m
looking at them right here. And technically, I’m looking
at them one at a time. I see four, I see two, and
I know they’re out of order. Well, as the computer, I could do this. I could, using my load and
store instruction, essentially, or a move instruction, do that. And now I’ve improved
the sorting of this list. Now four and six, those look OK. Six and eight, those look OK. One and eight, Mm-mm, not OK. So let me just make a quick fix. Let me just swap one and eight. It’s not perfect yet, but it’s better. Eight and three, let
me make a quick fix. Well, that’s better. Eight and seven, a quick
fix, that’s better. Eight and five, quick
fix, that’s better. So this is great. Eight is all the way to
the right as intended, and one is not yet all
the way to the left. So I haven’t solved the
problem, but I have improved it. I’ve gotten a step closer to the
correct solution of sorted numbers, because some of the values
moved at least one step closer to their intended location. But of course, I’ve
got to do this again. So two and four, they're good. Four and six, they're good. Oh, six and one, now that's out of order. And so I can transpose these two. So this algorithm has a name, whereby
I am transposing pairwise elements that are out of order, such that the biggest
numbers are bubbling their way up to the top. Eight already made its way there, seven just made its way there. And so if I keep doing
this, these bigger values are going to continue
to bubble their way up. Two and four, those are fine. Four and one, ooh, notice, four is
starting to bubble to the right. Four and three, it’s
continuing to bubble. Four and six is OK. Six and five, mm-mm. Six bubbled its way up, and
now I got a good one there. I really fixed a lot of those values. I just need to do this once more. One and two, fixed that. Two and three, three and four, four and
five, six and seven, seven and eight. Let me do another spot
check just to make sure. One and two, three and four,
five, six, seven, eight. I'm good. But I did a lot of walking, and
I did a lot of transpositions. And it turns out this
algorithm is correct, does have a formal name
called bubble sort. But it’s not very efficient. In fact, if you actually do the math
out, this takes as many as like n times n steps in the worst case. So if you have n elements,
that’s eight in this case. N times n, that’s 64. It’s not actually 64, but it starts
to approach 64 asymptotically, so to speak. So if we did a formal analysis, long story short, it would start to feel like something quadratic, so to speak. Not a fast algorithm.
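Here's a minimal sketch of that algorithm, bubble sort, in Python, run on the same starting arrangement as the music stands. The early exit when a pass makes no swaps is a small, common refinement, not something demonstrated on stage.

```python
# A minimal sketch of bubble sort: repeatedly compare neighbors and swap
# any pair that's out of order, so the biggest values bubble rightward.

def bubble_sort(values):
    n = len(values)
    for _ in range(n - 1):              # up to n - 1 passes over the list
        swapped = False
        for i in range(n - 1):          # compare each pair of neighbors
            if values[i] > values[i + 1]:
                values[i], values[i + 1] = values[i + 1], values[i]
                swapped = True
        if not swapped:                 # a pass with no swaps means sorted
            break
    return values

print(bubble_sort([4, 2, 6, 8, 1, 3, 7, 5]))  # the music stands' order
```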
But there are other algorithms, so let's actually try out another one. Let me go ahead and restore the
original, seemingly random order here. And indeed it is random
in so far as I’m just putting it in some non-sorted order
that I thought through initially. One’s going to go here, three is
going to go here, seven and five. So now we’re back to the
original unsorted order. You know what, I can do another
algorithm altogether here. Let me not look at them pairwise,
let me just go even simpler. Let me just find the smallest
number, and deal with it. All right, four is pretty small; indeed, it's the smallest I've seen thus far, so I'm going to hang on to this. Oh wait a minute, two is smaller. Let me hang on to that. Six, not smaller. Eight, not smaller. One is smaller, let me go
ahead and hang on to this. Three, not smaller. Seven, five. OK, so here's the smallest element. I found it; I've selected
the smallest element. You know what, this belongs
all the way to the left. I’m going to put it there. I’m going to do that. Of course, this is cheating. Like this music stand, even
though it’s kind of wide, should only be storing one value. There’s only enough room for one
byte for instance of information. So where do I put four? Like four can’t stay on
the same music stand. You know I can’t cheat and do that. So I could move eight over,
move six over, move two over, that’s a lot of work. Or I could just, fine, four was already
out of order in the first place. Let's just put four over here. That hasn't made the problem
any fundamentally worse, because they started in random order. I randomly put four there. The key is, one is in
really good shape now. Let me repeat this process of
selecting the smallest element. Let me look for the next smallest. Two is pretty small. Small, smallest, yep. OK, I found it. Two is the smallest. Let's put him, coincidentally,
right where he goes. So that was kind of a waste of time,
but it was verification that the algorithm is correct. Let's keep looking. Six is pretty small now. Eight is not; oh, four is smaller. Let's grab four. Oh, three is even better. Let's grab that. Seven, five. OK, three needs to be put in its place. I don't want to touch one and
two, because those are good. Six, sorry you don’t belong there. Let’s go ahead and evict
you and put six over there. So if I repeat this process of just
continually iterating, or looping, if you will, through the list,
looking for the smallest element, and evicting whatever is in the place where it belongs, notice that I've sorted
it seemingly faster. But I actually got a little lucky there. It turns out that this algorithm, which
also has a name called selection sort, is also quadratic in nature. If you’ve got n elements, it’s going
to take roughly n times n total steps in order to achieve that. Not in every case, but in the worst case, so to speak.
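Here's a minimal sketch of selection sort in Python, again on the music stands' starting order. The swap mirrors the "eviction" on stage: the smallest remaining value trades places with whatever currently sits where it belongs.

```python
# A minimal sketch of selection sort: repeatedly select the smallest
# remaining value and swap it into the leftmost unsorted position.

def selection_sort(values):
    n = len(values)
    for i in range(n - 1):
        smallest = i
        for j in range(i + 1, n):        # scan the unsorted rest of the list
            if values[j] < values[smallest]:
                smallest = j             # remember the smallest seen so far
        # Evict whatever lives at position i and put the smallest there.
        values[i], values[smallest] = values[smallest], values[i]
    return values

print(selection_sort([4, 2, 6, 8, 1, 3, 7, 5]))
```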
So those are just two ways to sort elements. And there are dozens, if not more,
other ways to actually sort elements, with each one getting maybe a
little better, maybe a little worse, depending on the inputs. So it really depends. But these are all examples
then of algorithms. And algorithms have performance
associated with them. Algorithms have a running
time associated with them. And one of the things that
theoretical computer scientists do, is not only design
algorithms like these, or really even way more
sophisticated algorithms than these, but also analyze them. So that we can make arguments,
mathematical arguments, sometimes stated as a proof:
this algorithm is correct. No matter what, this will
sort your elements correctly, if implemented correctly
in a computer, and this is how much time it will take in
general in order to achieve that. But the takeaway here, is that
it was a lot of work, right. I had the luxury with
that phone book, long ago, of just opening it up to the
middle and looking for Mike Smith, and da da da da da, found Mike
Smith pretty darn efficiently, because Verizon, or whoever
made the phone book, actually did all of the
upfront work for me. So here’s a question then. Suppose that you have a
whole bunch of random values, and you want to search for something
specific among those values. Mike Smith, number 50, whatever
it is you’re looking for, should you sort those values
first before searching? Should you sort the values first? Well, I would wager you
should sort them if you’re going to be searching over
them pretty darn often. Right. Because it might be
painful, might be boring, might be tedious to actually sort
those values either manually, or algorithmically with a
computer, and a computer program. But it’ll probably pay off over time. So there’s this notion of amortization
of the cost over time, whereby yeah, it might take you a decent amount of steps. Quadratic, seconds, minutes,
whatever you want to measure it in. But, it’s going to pay
off in the long run. Of course, if you add data,
you’re going to need to keep that new data sorted as well. But again, it’s this trade off. However, consider a scenario where
you just have a whole bunch of data, it’s in random order
for whatever reason, and you just want to find Mike Smith. Or you just want to find the number 50. And after that, you do
not care about the data. You’re never going to
look up someone else. You’re never going to
look up another value. Maybe then, in that case,
you should just blindly go through it brute force, so to speak. Left to right, in some sense,
or linearly, as opposed to trying to do anything clever
like we did with the phone book. Because it’s faster to just brute
force your way through the solution than to do some upfront sophistication just to improve the subsequent result. So again, it's a trade-off.
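As a rough back-of-the-envelope sketch of that decision, assuming the quadratic sorts above, worst-case linear search, and the four billion values from earlier (these numbers are illustrative step counts, not measurements):

```python
# A rough cost comparison: one brute-force search versus paying to sort
# first so that every later search is a cheap binary search.

n = 4_000_000_000            # four billion values, as in the example

linear_search_cost = n       # one worst-case brute-force search
sort_cost = n * n            # one-time, up-front quadratic sort
binary_search_cost = 32      # log2 of four billion is about 32 steps

def total_cost(searches, sort_first):
    if sort_first:
        return sort_cost + searches * binary_search_cost
    return searches * linear_search_cost

print(total_cost(1, sort_first=False))  # searching once: brute force wins
print(total_cost(1, sort_first=True))   # sorting first costs vastly more
# Amortized over many, many searches, though, the up-front sort pays off.
```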
It's a cost-benefit analysis where you need to decide, ultimately, which resource, time, space, money,
people, is most important to you. So in all of these
examples thus far have we assumed that the data is contiguous. Back to back to back with other values. And indeed, that’s exactly what
the music stands represented: these fixed locations in memory. Bytes in your computer's RAM, so
to speak, that could have values. But you had to physically move those
values from one location to another if you wanted to sort
the values therein. You couldn't just kind of
insert a new music stand, and push the others aside,
because that’s simply not allowed. That’s not the metaphor in question. Those music stands were
fixed just as the locations in memory of your computer are as well. And as it turns out, just as we drew
a picture before wherein we had some sequence of back to back values in
memory, turns out that in programming, actually using languages that aren’t
just pseudo code, but actual code, Java and C++ and others. Still, there is very often something
called an array, or a list. A data structure that gives you
the appearance of having values back to back to back to back. And in some languages,
these values are indeed next to each other in terms of their
bits inside of your computer’s memory. But there’s a problem
with a world like this. Because if you put in a whole bunch
of values up front, as I did earlier, and you want to actually
insert some new value, where do you actually put that value? For instance, earlier we had one and
50 and 51 and 61 and 109 and 121. Suppose I want to put the number
55 in this sequence of numbers. And better still, I want to
keep the whole list sorted. Well dammit. There’s really not room for 55 in there. I know where it should go numerically,
it should go right here, of course, between the 51 and the 61. But there’s no room. Or I could put it arbitrarily over
here, but then it's not sorted. I could put it... well, I can't put it there, because the dot dot dot, remember from earlier, suggested that something else was there. Something else from
your program or computer was already using that location. Now I could put it maybe below,
but that’s kind of weird. The whole point here
is this contiguousness. And indeed, with the
music stands a moment ago, it was advantageous for me to be able to take one step and be at another music stand. One more step, and be at yet another music stand.
stage looking for my values, because they were indeed
back to back to back and that made things very efficient
and predictably close to one another. So what are my options? Like surely a computer can do this. It’s not the case that computers are
so dumb that once you write your values you can’t solve any new problems. So what are my options? I could put the 55 here
at the end, but then I’m going to have to reshuffle
all of the numbers. Or maybe I ask the computer
for more memory altogether, and you know what, I re
do this, and I say let me put one here, 50 here, 51 here,
55 here, then 61, 109, and 121. In other words, why don’t I
just use different memory? Different bytes in my computer’s
memory, but that feels kind of lame, because now I need twice as much
memory, at least temporarily. So I can kind of copy the
old values into the new, but leave room for that new value. That seems a little time consuming, and certainly tedious. And indeed, it would be for a computer as well.
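Here's a minimal sketch of that copy-into-new-memory approach in Python, using the numbers from the running example. The point is the linear amount of copying: every old value has to move once.

```python
# A minimal sketch of inserting into a full, sorted array by asking for
# a bigger array and copying everything over, leaving room for the
# new value: as many steps as there are values in the original array.

def insert_sorted(old, value):
    new = [None] * (len(old) + 1)   # brand-new, slightly bigger array
    i = 0
    while i < len(old) and old[i] < value:
        new[i] = old[i]             # copy everything smaller than value
        i += 1
    new[i] = value                  # slot the new value into place
    for j in range(i, len(old)):
        new[j + 1] = old[j]         # copy the rest, shifted right by one
    return new

print(insert_sorted([1, 50, 51, 61, 109, 121], 55))
# [1, 50, 51, 55, 61, 109, 121]
```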
But maybe there's another solution altogether. And here too, we're experiencing
yet another trade off. The contiguousness of my memory
has thus far been an advantage. Because again, each of my music
stands is just one step away. In the case of actual RAM, each
number is just one byte away, and that lends itself to again, random
access, ergo random access memory. But it’s kind of painting
me into a corner, because I’m finding that
now I can no longer really add new values without
incurring significant cost. And by significant cost I mean having
to duplicate the entire array of memory into a new location, leaving one
additional space for that new value, which is going to take as many steps as
there are values in the original array. So you know what, maybe I take
an even older school approach, and maybe I do something proactive. A little defensive. I’ll put one there, and 50 here,
and 51 here, and 61 and then 109, and then 121 and so forth. And wow, this was clever
of me, because now I left myself little gaps in my values. So that I still get the predictability
that every value is now two steps away, but now I can fit 55 in here. Well, clever as you might initially think that is, now we've created kind of a problem. Because most values are two
bytes away from each other. But now there’s this weirdness
where 55 is only one byte away from its neighbors. So that’s really just
complicating things now. I can no longer predictably
take just one step at a time. Sometimes I take two,
sometimes I take one, and that just feels like it’s devolving
into a very confusing bug prone situation already. And in fact, if you ever saw, or maybe
grew up with the BASIC programming language, back in the day, you didn't number your lines of code zero, one, two, three, four. If you numbered them at all, you instead did something like line 10, line 20, line 30, line 40. Which wonderfully left you room for like nine more additions later on, without having to renumber everything. Of course now, our tools just number lines automatically for us. And they're not even strictly
necessary, those line numbers. So we’ve tried this, and it’s just,
it's not really a good solution. Better would be, I'll claim, some dynamism. What if instead I consider my computer's memory still to be this grid of locations? But let me do this. Let me propose that the first number I'm going to put in is, maybe, here, and
just to keep things prettier I’m not going to draw the whole grid again. But let’s assume that one
byte I’m using is right there, and I put the number one in there. And then eventually, I decide
to add that second number. And I go ahead and add the number 50. And the number 50 just so happens to
end up, oh I don’t know, over here. And it’s farther away in memory. Maybe enough time has
passed that the bunch of memory in between those two values
is being used for some other purpose. And that’s OK. But the 50 and the one
are no longer next to each other, physically or pictorially, or really related at all, unless I somehow interconnect them. So for now, let me go
ahead and draw an arrow. And let me kind of weave
together my values. Much like popcorn on a thread,
or any number of other metaphors where you’re linking things together. Like with a chain, for instance,
you can make a list like this. As for my other numbers, suppose that more time passes. I insert a third value, like 51, and it just
happens to be over there, whatever. It’s not contiguous,
which is unfortunate. But that’s OK, so long as I somehow
link these together like this. And then maybe that next number here,
61, gets lucky and it's pretty close. So it's over here. But we still need an arrow to this one here. And then maybe we have
121 and 109, each of which is in some different locations. So we might need this arrow here. And this arrow there. So it’s kind of all over the place. And if you ever play
the game Chutes and Ladders growing up, it's kind of like that now: you kind of have to follow
the chutes to get from one number to another. But that’s OK. In fact, it’s actually quite
beautiful, my handwriting aside. It’s not quite as efficient though. Because it turns out that if you’re
at one of the numbers in this list, and you want to jump to another, you
can get to the next number pretty fast. If I’m at one, I can get
to 50 pretty quickly. I just follow that arrow somehow. I don’t know how it works in
memory, and we don’t really need to know how it works. We can kind of abstract that away for
the sake of discussion at the moment. But I can get to 50 based on this
picture alone pretty darn directly. But I can’t really get to
like 109 pretty quickly. If I’m starting at one,
I want to get to 109. Previously, I could just take one, two,
three, four steps away, four bytes away and I could just take a step that’s
like four times as big as usual and I’m instantly there. But these arrows seem to suggest
that the memory could be anywhere. And so it’s almost like following a map. If I’m at one, all
right let me go find 50. Once I found 50, I can
pick up another map, and now I can follow another arrow
to 51, which is maybe over here. Now I pick up another map. Oh here’s 61. OK now this leads me to, Ah here is 109. It wasn’t as simple as
just one big step that’s four times as big as was possible
in the case of contiguous memory. Now, I kind of have to
weave my way through. That actually kind of hurt. Weave my way through my computer’s
memory to get where I’m going. So it’s a trade off, right. Because now I have perfect dynamism. No matter where there is
available memory in my computer, over there, over here, over
here, over here, I can use it. And I can just kind of stitch
together this data structure, the structure for my data. I have eliminated the gotcha that
my data structure is a fixed size. So now if I want to add another
value, suppose I want to add 55. OK, suppose that 55 happens to
have some available space here. You know what I can do here, I
can just get rid of this arrow, and then I can go ahead and
stitch this in like this. No big deal. I don’t have to touch any
of the other elements. But think back to the array, the rectangular example where everybody was back to back to back. There I had to make room for things, or duplicate all of it. Here there's far less work: just allocate space for your 55, and then go ahead and update two of these arrows, or pointers, or references, as they might be called in different languages. So that's pretty powerful.
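Here's a minimal sketch of that linked list in Python, building the chain from the example and then stitching in the 55 with just two arrow updates; the class and variable names are of course just illustrative.

```python
# A minimal sketch of a singly linked list: each node stores a value
# plus an "arrow" (a reference) to the next node, so nodes can live
# anywhere in memory.

class Node:
    def __init__(self, value):
        self.value = value
        self.next = None    # the arrow; None marks the end of the list

# Build 1 -> 50 -> 51 -> 61 from the example.
head = Node(1)
head.next = Node(50)
head.next.next = Node(51)
head.next.next.next = Node(61)

# Inserting 55 between 51 and 61 is just two arrow updates;
# no other node has to move.
node51 = head.next.next
node55 = Node(55)
node55.next = node51.next   # 55 points onward to 61
node51.next = node55        # 51 now points to 55

# Walk the chain to verify: prints 1, 50, 51, 55, 61.
node = head
while node is not None:
    print(node.value)
    node = node.next
```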
But again, I've lost the random access, which means I can't do things like binary search anymore, which was predicated on simple arithmetic: divide by two, divide by two, divide by two. Maybe round, but divide by two each time. And these arrows, which I'm maybe abstracting too generously here, are going to
cost me some memory too. Now I’ve drawn them prettily
as arrows on the screen, but those actually themselves
are some kind of value. And it turns out that those arrows
themselves take up space, take up bits. So I’ve kind of doubled,
let’s say, the amount of space necessary to store this data. Because I need to somehow
store in memory those arrows. Underneath the hood, those
arrows are effectively stored as numbers themselves. Which is to say, they take
up at least a byte or more. In fact, on most systems, they
take up four or eight bytes still. So it depends on what’s
important to you. Do you want to be able to search
the data super efficiently and actually have random access,
and do something like binary search and get that divide and conquer upside? Or do you want to have dynamism,
and be able to grow and shrink the data structure super fast, but pay
a price in terms of memory as well, and use more of those bits? It really quite depends. Now these aren't the only data
structures at our disposal. We can use not just arrays,
not just linked lists, as we’ll call that last
one, whereby we have links tying these list items together. But we have maybe tree structures too. In fact, think back to
like your family tree that you might have made in grade school. You might have drawn a
little something that might have grandma or grandpa
at the top of the tree, and then their children
might be down here, and then these children might
be down here, and so forth. Well, if we generalize away
from a family tree, it turns out that this tree
data structure is another kind of incarnation of that previous idea. A linked list doesn’t have to just be
linked from left to right, so to speak, even if it’s kind of
swooping all over the screen. It doesn’t have to have
a start and an end. It can be a tree structure,
whereby your arrows actually, much like a branch and a program,
can go in different directions. So each of these edges in this
graph actually are directional. They take you from one place to another. You can imagine using a data structure
like this to store values as well. For instance, suppose that I wanted to
store numbers in this data structure. Instead of using squares,
I'm just doing circles now, just because it's conventional. It doesn't mean anything special. These are still just bytes underneath
the hood, or ultimately bits. Now suppose I want to store the number
two here, and one here, and three here, and five here, and
six here, and seven here. This is all very deliberate. Because notice the pattern
that I’ve kind of adopted. Four is at the top, in the
so-called root of the tree. What do you notice about all of
the values to the left of the four? All of its left descendants so to
speak, to borrow that family tree nomenclature. One, and two, and three. Curiously, all three of those
are smaller than the number four. What about all of its right
descendants, down this branch? Six, and five, and seven. Those are all bigger than four. That's kind of neat. How about here? Two, this is kind of a
mini tree, if you will. This is a new root, if you
ignore everything above it. Two has two children, left and right. Notice that the one is less
than the two, and the three is more than the two. So this is not a coincidence
that I drew it like this. Similarly, if you think of this as a
tree, just those three nodes below it: six is bigger than five, and six is less than seven. So there's a pattern.
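Here's a minimal sketch of that very tree in Python, with a search that follows the smaller-to-the-left, bigger-to-the-right rule; the class name is just for illustration.

```python
# A minimal sketch of the binary search tree pictured: 4 at the root,
# smaller values down the left branch, bigger values down the right.

class TreeNode:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left     # arrow to smaller descendants
        self.right = right   # arrow to bigger descendants

root = TreeNode(4,
                left=TreeNode(2, TreeNode(1), TreeNode(3)),
                right=TreeNode(6, TreeNode(5), TreeNode(7)))

def search(node, target):
    """Walk downward, snipping off half the remaining nodes each step."""
    while node is not None:
        if target == node.value:
            return True
        elif target < node.value:
            node = node.left     # smaller values can only be this way
        else:
            node = node.right    # bigger values can only be that way
    return False

print(search(root, 7))   # True: 4 -> 6 -> 7, just three nodes visited
```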
So we can actually reclaim some of the capabilities of divide and conquer by laying
or references if you will. But not just doing it linearly,
or swiveling all over the screen, but from one start node to an end node. Where node is a fancy word for
these squares or these circles. What if we instead allow the user
to go in two different directions when searching for data? And suppose we leverage that
same phone book principle where the smaller values are this way,
and the bigger values are that way. What does that afford us? Well, this binary search tree, if
you will, to give it a fancy name, actually gives us back a
logarithmic running time. Much like dividing and
conquering a phone book. Because indeed, if I start at the
four and I’m looking for seven, and I immediately realize four is here,
which means seven must be to the right if it’s present at all, I can pretty
much chop off the half of the problem that I know seven is not in. So it’s like throwing away the
left half of the phone book. I can ignore almost half
of the nodes in the tree by just snipping off that left branch. Can't be that easy; got to be a trade-off. What is that trade-off here? Well, from the looks
of this picture, pretty though it is, relatively speaking,
it looks like now each of my nodes, each of my values has not just one
arrow associated with it potentially, but as many as two. And that means we’re spending
twice as much space as before. So whereas in our array, we
use just one byte per value, or four bytes per value, or some
fixed number of bytes per value, in my linked list
example I had to double that because I had to also store– using bytes in some form– those arrows. Those pointers, or
references as they’re called. But now in this darn tree, I
need three times as much space, because each of my nodes might
have a left child so to speak, or a right child. So again it’s a trade off. And the prevailing trade off here, thus
far, seems to be this: if you want to spend less time, you're
going to have to spend more space. And if you want to save space, you might
have to tolerate a little more time. But can we do better? Indeed, can we combine some of the best
features of multiple data structures in order to achieve
something better still? Well, sort of. It turns out that there
exists a very popular data structure called a hash table. And a hash table is a data
structure that a computer scientist will use when he or she really wants
to store a dynamic amount of data. It might grow or might shrink. But they also want to search
that data pretty efficiently, and ideally be able to find data
instantly, in constant time, so to speak. So you’ll often see a hash
table implemented, really with an array something like this,
and I’ve drawn it vertically just to make the picture prettier. But again, these are
just artist's depictions. Bad artist's depictions of
what’s going on in memory. It’s still just an array
of back to back values. But the values now in
the array are not going to be the numbers I care about,
or the words, or whatever. It’s actually going to be arrows, which
again we’ve stipulated take space. And each of these
arrows is going to point to what we described as a linked list. So it turns out that a
hash table will often look a little something like this. A combination of an
array, and a linked list where those linked lists might be
not present at all, might be short, might be long, totally depends. But we decide where to put our values
based on some property of the value. So for instance, if this array
weren’t this big, but maybe had 26 different locations in it. And suppose that we’re no longer
storing numbers, we're storing names. Well, every name in the English alphabet starts with an A, or a B, or a C, or, dot dot dot, a Z. One
of 26 possibilities. So if my hash table has
an array of size 26, then maybe, if I’m storing
names that start with A, I could just put them in this
linked list here, the first one. And if they start with B,
they'll go in the second one. And if they start with C, the third; D, the fourth; Z, the very last.
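Here's a minimal sketch of such a 26-bucket hash table in Python; the sample names are hypothetical, and Python lists stand in for the linked lists in the picture.

```python
# A minimal sketch of a hash table: 26 buckets, one per letter, where
# the hash function maps a name to a bucket by its first letter.

table = [[] for _ in range(26)]   # one (initially empty) chain per letter

def hash_name(name):
    """Map 'Alice' to bucket 0, 'Bob' to 1, ..., 'Zoe' to 25."""
    return ord(name[0].upper()) - ord("A")

def insert(name):
    table[hash_name(name)].append(name)

def search(name):
    # Only this one chain is searched linearly, not all 26 of them.
    return name in table[hash_name(name)]

insert("Alice")   # hypothetical names, purely for illustration
insert("Aaron")
insert("Zoe")
print(search("Zoe"))    # True: we jump straight to bucket 25
print(search("Mike"))   # False: bucket 12 is empty
```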
And what you'll find in this case is partial optimization. It's not instantaneous to find
someone with a given name. If there’s a lot of people
whose names start with A, you might still have to look through all of the A names linearly through that list. But at least you only have to look through one twenty-sixth of the possible values
in your data structure, assuming a uniform distribution. Of course, that’s not fair, there’s
probably fewer names that start with Z, fewer names that start
with Q, maybe a lot that start with A or D
or other such letters. So your chains might
be of variable length, and maybe that’s not the best design. So maybe we should think a
little harder about this. So this principle of bucketizing things,
and putting values in their place, and using that as a
stepping stone to getting to a complete solution– in
this case of sorted order– is really useful not just in these
increasingly complex examples, but even in the real world. So for instance, if you’re
a fan of playing cards, you might have a deck of cards at home. And those cards of course, have
different values or numbers on them, and also different suits. Clubs and spades and
diamonds and hearts. And you know, sometimes you might
want to either shuffle the deck, or really un-shuffle it
and actually sort it. And if you did want, sort of obsessively like me, to sort your deck
of cards, well you could just go through
it one card at a time. And then try to find or select
or maybe bubble up the values, like our algorithms before. But most of us probably
use a bit of intuition. So if I see a spade here,
I might put it in one pile, and a heart, I’m going to
put that in a different pile. A diamond, different pile still. A spade will go up here. A club now goes over here. And as I continue this pattern,
I’m kind of dividing the problem, albeit in a different way. I now have, instead
of a 52 card problem, I’m eventually going to
have four 13-card problems. But how am I getting to that point? Well, I’m bucketizing things. Not physical buckets, but there’s
like four piles here of cards. But what really am I doing? I’m really hashing these values. I am looking at each card as input. I am passing it through a sort of mental
function, a mental black box if you will, that outputs a value: either bucket one, or two, or three, or four, or more specifically spade, or diamond, or heart, or club. That's kind of my hash function.
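Here's a minimal sketch of that mental hash function in Python, assuming a simple (value, suit) representation of cards invented just for this example:

```python
# A minimal sketch of bucketizing a deck by suit: the "hash function" is
# just "look at the suit," turning one 52-card problem into four
# 13-card problems.

import random

SUITS = ["clubs", "spades", "diamonds", "hearts"]
deck = [(value, suit) for suit in SUITS for value in range(1, 14)]
random.shuffle(deck)                   # start with a shuffled deck

piles = {suit: [] for suit in SUITS}   # four empty piles, one per suit

def hash_card(card):
    """The mental black box: input a card, output the pile it belongs in."""
    value, suit = card
    return suit

for card in deck:
    piles[hash_card(card)].append(card)

print(len(piles["hearts"]))  # 13: each pile is a smaller sorting problem
```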
So if you've ever done something like that, where you're trying to clean up a mess,
or you’re trying to sort some data, or organize your cards. Once you start bucketizing in
this way, you are hashing values. You’re taking as input
some value, maybe a card, and producing as output some identifier. Similarly, with our
example of names, if I’ve got a whole bunch of names, each of
which might start with A through Z, each time you hand me
a name, I’m looking at the very first letter
in that person’s name, and I’m deciding A bucket, Z bucket,
D bucket, C bucket or so forth. C should be over there. And deciding, based on the
input, what the output should be. So again, here too, this
sort of intuitive principle can very quickly be applied to
something much more sophisticated. Again, compare and contrast
the sort of sophistication of the before and the after. This, odds are, is pretty darn intuitive,
if not something very familiar to you. This might still look a bit like Greek. Certainly pretty arcane versus
some simple playing cards. But to the overarching point, that while
computing and technology all around us might seem especially
sophisticated, and complicated, and well beyond one’s
understanding, that’s just because it might be
presently beyond your familiarity with some of those building blocks. Indeed, we started so low
level with just zeros and ones, and then we got to letters,
and then maybe words and paragraphs, and graphics
and videos, and eventually so many more forms of media. And at the end of the day,
it’s useful to understand what’s going on underneath the
hood, but it’s also very empowering. Because you realize that
even once we're at the point, as we are literally right now, of talking at this degree of complexity and sophistication about how you might store data inside of your computer's memory, and very efficiently
get at it, all of this was the result of these stepping stones. These abstractions. And along the way, in building
up these abstractions, and in solving problems
more effectively by standing on the shoulders of
problems solved past, similarly we have to
make some trade offs. And so at the end of the day, this
is what computational thinking is all about. This is what computer science
and engineering and programming is all about. It’s solving problems using
solutions to problems past, and along the way making very
conscious, calculated, and hopefully correct decisions, as to
what resources and what objectives are most important to you.
