Hooray! It's opposite day. Linked lists go the opposite way today.
Write a function for reversing a
linked list.
A linked list organizes items sequentially, with each item
storing a pointer to the next one.
Picture a linked list like a chain of paperclips linked together. It's quick to add another paperclip to the top or bottom. It's even quick to insert one in the middle—just disconnect the chain at the middle link, add the new paperclip, then reconnect the other half.
An item in a linked list is called a node. The first node is called the head. The last node is called the tail.
Confusingly, sometimes people use the word tail to refer to "the whole rest of the list after the head."
Unlike an array, consecutive items in a linked list are not
necessarily next to each other in memory.
Most languages (including Python 2.7) don't provide a linked list implementation. Assuming we've already implemented our own, the "In Python 2.7" snippet in the quick reference further down the page shows how we'd construct a small example list.
In a basic linked list, each item stores a single pointer to the
next element.
In a doubly linked list, items have pointers to
the next and the previous nodes.
Doubly linked lists allow us to traverse our
list backwards. In a singly linked list, if you
just had a pointer to a node in the middle of a list,
there would be no way to know what nodes came before it.
Not a problem in a doubly linked list.
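As an illustration (this class isn't part of the original problem, just a sketch mirroring the LinkedListNode class used elsewhere on this page), a doubly linked node simply carries one extra pointer:

class DoublyLinkedListNode(object):
    def __init__(self, value):
        self.value = value
        self.next = None
        self.prev = None

a = DoublyLinkedListNode(5)
b = DoublyLinkedListNode(1)

# link the two nodes in both directions
a.next = b
b.prev = a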
Most computers have caching systems that make reading from sequential addresses in memory faster than reading from scattered addresses.
Array items are always located right next to each other in computer memory, but linked list nodes can be scattered all over.
So iterating through a linked list is usually quite a bit slower than iterating through the items in an array, even though they're both theoretically O(n) time.
An in-place function
modifies data structures or objects outside of its
own
stack frame
The call stack is what a program uses to keep
track of function calls. The call stack is
made up of stack frames—one for
each function call.
For instance, say we called a function that rolled two dice and printed the sum. (The code for this example and the call stack snapshots it produces are listed under "Overview" further down the page.)
First, our program calls roll_two_and_sum(). It goes
on the call stack:
That function calls roll_die(), which gets pushed
on to the top of the call stack:
Inside of roll_die(), we call random.randint().
Here's what our call stack looks like then:
When random.randint() finishes, we return back to
roll_die() by removing
("popping") random.randint()'s stack frame.
Same thing when roll_die() returns:
We're not done yet! roll_two_and_sum()
calls roll_die() again:
Which calls random.randint() again:
random.randint() returns, then roll_die() returns,
putting us back in roll_two_and_sum():
Which calls print():
What actually goes in a function's
stack frame?
A stack frame usually stores: local variables, arguments passed into the function, information about the caller's stack frame, and the return address (where to pick up once this function returns).
Some of the specifics vary between processor architectures. For
instance, AMD64 (64-bit x86) processors pass some arguments in
registers and some on the call stack. And, ARM processors (common
in phones) store the return address in a special register instead
of putting it on the call stack.
Each function call creates its own stack
frame, taking up space on the call stack. That's important
because it can impact the space complexity of an algorithm.
Especially when we use recursion.
For example, if we wanted to multiply all the numbers between 1 and n, we could use a recursive approach (the code for both the recursive and iterative versions of product_1_to_n appears under "The Space Cost of Stack Frames" further down the page).
What would the call stack look like
when n = 10?
First, product_1_to_n() gets called
with n = 10:
This calls product_1_to_n() with
n = 9.
Which calls product_1_to_n()
with n = 8.
And so on until we get to n = 1.
Look at the size of all those stack frames! The entire call stack
takes up O(n) space. That's right—we
have an O(n) space cost even though
our function itself doesn't create any data
structures!
What if we'd used an iterative approach instead of a recursive one?
This version takes a constant amount of space. At the beginning of the loop,
the call stack looks like this:
As we iterate through the loop, the local variables change, but we
stay in the same stack frame because we don't call any other
functions.
In general, even though the compiler or interpreter will take
care of managing the call stack for you, it's important to consider the
depth of the call stack when analyzing the space complexity of an
algorithm.
Be especially careful with recursive functions!
They can end up building huge call stacks.
What happens if we run out of space? It's a stack
overflow! In Python 2.7, you'll get
a RuntimeError with the message "maximum recursion depth exceeded."
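For instance (an illustration, not from the original write-up), CPython's default recursion limit is about 1,000 frames, so a call this deep blows the stack:

product_1_to_n(100000)
# RuntimeError: maximum recursion depth exceeded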
If the very last thing
a function does is call
another function, then its stack frame
might not be needed any more. The function
could free up its stack frame before doing its final
call, saving space.
This is called tail call optimization
(TCO). If a recursive function is optimized with TCO, then it
may not end up with a big call stack.
In general, most languages don't provide TCO. Scheme
is one of the few languages that guarantee tail call
optimization. Some Ruby, C, and JavaScript
implementations may do it. Python and Java decidedly
don't.
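For illustration (not from the original write-up), here's roughly what a tail-call-friendly version of product_1_to_n would look like. The recursive call is the very last thing the function does, so a language with TCO could reuse the stack frame; CPython won't, so in Python this still costs O(n) space.

def product_1_to_n_tail(n, running_product=1):
    # We assume n >= 1. The running_product accumulator carries the
    # result so far, so nothing is left to do after the recursive call.
    if n <= 1:
        return running_product
    return product_1_to_n_tail(n - 1, running_product * n)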
In-place algorithms are sometimes called
destructive, since the original input is
"destroyed" (or modified) during
the function call.
Careful: "In-place" does not mean "without
creating any additional variables!" Rather, it means
"without creating a new copy of the input." In general, an
in-place function will only create
additional variables that are O(1) space.
An out-of-place function
doesn't make any changes that are visible to
other functions. Usually,
those functions copy any data structures or objects
before manipulating and changing them.
In many languages, primitive values (integers,
floating point numbers, or characters) are copied when passed as
arguments, while more complex data structures
(lists, heaps, or hash tables) are passed by reference.
Python is close to this: every argument is passed as a reference to an
object, but primitives like integers and strings are immutable, so they
behave as if they were copied, while mutable structures like lists can
be changed by the function they're passed to.
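A quick demonstration of that behavior (the function names here are hypothetical, just for illustration):

def add_one(number):
    # rebinding the local name doesn't touch the caller's variable
    number += 1

def append_one(numbers):
    # mutating the list object is visible to the caller
    numbers.append(1)

x = 5
add_one(x)
print x   # still 5

my_list = [5]
append_one(my_list)
print my_list   # [5, 1]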
Here are two functions that do the same
operation on a list, except one is
in-place and the other is out-of-place: square_list_in_place and
square_list_out_of_place, shown further down the page.
Working in-place is a good way to save time and
space. An in-place algorithm avoids the cost of
initializing or copying data structures, and it usually has
an O(1) space cost.
But be careful: an in-place algorithm can cause side effects.
Your input is "destroyed" or "altered," which can affect
code outside of your function, as the original_list example further down the page shows.
Generally, out-of-place algorithms are considered safer
because they avoid side effects. You should only use an
in-place algorithm if you're space constrained or
you're positive you don't need the original input
anymore, even for debugging.
Quick reference
Worst-case complexities:
space: O(n)
prepend: O(1)
append: O(1)
lookup: O(n)
insert: O(n)
delete: O(n)
Strengths: fast prepends and appends (O(1)), and cheap inserts and deletes once you already have a pointer to the right spot in the list.
Weaknesses: O(n) lookups, since you have to walk the list from the head, and poor cache performance because nodes can be scattered around in memory.
Uses: implementing stacks and queues, which only need fast operations on the ends.
In Python 2.7
a = LinkedListNode(5)
b = LinkedListNode(1)
c = LinkedListNode(9)
a.next = b
b.next = c
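As a quick illustration of why prepend is O(1) (a hypothetical helper, not part of the original page):

def prepend(head, value):
    # Make a new node and point it at the current head.
    # No traversal needed, so this takes constant time
    # no matter how long the list is.
    new_node = LinkedListNode(value)
    new_node.next = head
    return new_node  # the new node is the new head of the list

head = prepend(a, 7)  # 7 -> 5 -> 1 -> 9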
Overview: the code and call stack snapshots for the dice-rolling example described above
import random

def roll_die():
return random.randint(1, 6)
def roll_two_and_sum():
total = 0
total += roll_die()
total += roll_die()
print total
roll_two_and_sum()
Call stack snapshots, with the top of the stack listed first:

1. roll_two_and_sum()
2. roll_die() / roll_two_and_sum()
3. random.randint() / roll_die() / roll_two_and_sum()
4. roll_die() / roll_two_and_sum()
5. roll_two_and_sum()
6. roll_die() / roll_two_and_sum()
7. random.randint() / roll_die() / roll_two_and_sum()
8. roll_two_and_sum()
9. print() / roll_two_and_sum()
The Space Cost of Stack Frames
def product_1_to_n(n):
return 1 if n <= 1 else n * product_1_to_n(n - 1)
Call stack snapshots, with the top of the stack listed first:

1. product_1_to_n() n = 10
2. product_1_to_n() n = 9 / product_1_to_n() n = 10
3. product_1_to_n() n = 8 / product_1_to_n() n = 9 / product_1_to_n() n = 10
4. Once n reaches 1, all ten frames are on the stack:
   product_1_to_n() n = 1
   product_1_to_n() n = 2
   product_1_to_n() n = 3
   product_1_to_n() n = 4
   product_1_to_n() n = 5
   product_1_to_n() n = 6
   product_1_to_n() n = 7
   product_1_to_n() n = 8
   product_1_to_n() n = 9
   product_1_to_n() n = 10
def product_1_to_n(n):
# We assume n >= 1
result = 1
for num in range(1, n + 1):
result *= num
return result
Call stack snapshots as the loop runs (always a single frame):
product_1_to_n() n = 10, result = 1, num = 1
product_1_to_n() n = 10, result = 2, num = 2
product_1_to_n() n = 10, result = 6, num = 3
product_1_to_n() n = 10, result = 24, num = 4
def square_list_in_place(int_list):
for index, element in enumerate(int_list):
int_list[index] *= element
# NOTE: no need to return anything - we modified
# int_list in place
def square_list_out_of_place(int_list):
# We allocate a new list with the length of the input list
squared_list = [None] * len(int_list)
for index, element in enumerate(int_list):
squared_list[index] = element ** 2
return squared_list
original_list = [2, 3, 4, 5]
square_list_in_place(original_list)
print "original list: %s" % original_list
# Prints: original list: [4, 9, 16, 25], confusingly!
Your function will have one input: the head of the list.
Your function should return the new head of the list.
Here's a sample linked list node class:
class LinkedListNode(object):
def __init__(self, value):
self.value = value
self.next = None
We can do this in O(1) space. So don't make a new list; use the existing list nodes!
We can do this in O(n) time.
Careful—even the right approach will fail if done in the wrong order.
Try drawing a picture of a small linked list and running your function by hand. Does it actually work?
The most obvious edge cases are: an empty list (the head is None) and a list with only one element.
Does your function correctly handle those cases?
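For instance, you could build a tiny list by hand (using the LinkedListNode class above) and check what your function should do to it:

a = LinkedListNode(1)
b = LinkedListNode(2)
c = LinkedListNode(3)
a.next = b
b.next = c

# The list now reads 1 -> 2 -> 3.
# After reversing, it should read 3 -> 2 -> 1:
#   the returned head should be c,
#   c.next should be b, b.next should be a, and a.next should be None.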
O(n) time and O(1) space. We pass over the list only once, and maintain a constant number of variables in memory.
This approach works in-place.
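Here's a sketch of one way to get that complexity (a standard approach, not necessarily the official solution): walk the list once, reversing each node's next pointer as you go, and hold on to the previous and next nodes so nothing gets lost.

def reverse_linked_list(head_of_list):
    current_node = head_of_list
    previous_node = None

    # until we've walked off the end of the list
    while current_node is not None:
        # copy a pointer to the next node
        # before we overwrite current_node.next
        next_node = current_node.next

        # reverse this node's pointer
        current_node.next = previous_node

        # step forward in the (original) list
        previous_node = current_node
        current_node = next_node

    # previous_node is now the old tail, which is the new head
    return previous_node

Note that this modifies ("destroys") the input list as it goes, which is the usual trade-off with an in-place approach.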