Write a function for doing an in-place shuffle of a list.

The shuffle must be "uniform," meaning each item in the original list must have the same probability of ending up in each spot in the final list.

Assume that you have a function get_random(floor, ceiling) for getting a random integer that is >= floor and <= ceiling.

An in-place function modifies data structures or objects outside of its own stack frame. To make sense of that, it helps to review how the call stack works.
The Call Stack
The call stack is what a program uses to keep
track of function calls. The call stack is
made up of stack frames—one for
each function call.
For instance, say we called a function that
rolled two dice and printed the sum.
import random

def roll_die():
    return random.randint(1, 6)

def roll_two_and_sum():
    total = 0
    total += roll_die()
    total += roll_die()
    print(total)
roll_two_and_sum()
First, our program calls roll_two_and_sum(). It goes
on the call stack:
roll_two_and_sum()
That function calls roll_die(), which gets pushed
on to the top of the call stack:
roll_die()
roll_two_and_sum()
Inside of roll_die(), we call random.randint().
Here's what our call stack looks like then:
random.randint()
roll_die()
roll_two_and_sum()
When random.randint() finishes, we return back to
roll_die() by removing
("popping") random.randint()'s stack frame.
roll_die()
roll_two_and_sum()
Same thing when roll_die() returns:
roll_two_and_sum()
We're not done yet! roll_two_and_sum()
calls roll_die() again:
roll_die()
roll_two_and_sum()
Which calls random.randint() again:
random.randint()
roll_die()
roll_two_and_sum()
random.randint() returns, then roll_die() returns,
putting us back in roll_two_and_sum():
roll_two_and_sum()
Which calls print():
print()
roll_two_and_sum()
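By the way, you can watch this happen in a real Python program. Here's a quick sketch (our own, using the standard traceback module) that prints the frames currently on the call stack from inside roll_die():

import random
import traceback

def roll_die():
    # Print the frames currently on the call stack
    traceback.print_stack()
    return random.randint(1, 6)

def roll_two_and_sum():
    total = 0
    total += roll_die()
    total += roll_die()
    print(total)

roll_two_and_sum()
# The printed stack shows roll_two_and_sum() and roll_die()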
What's stored in a stack frame?
What actually goes in a function's
stack frame?
A stack frame usually stores:
Local variables
Arguments passed into the function
Information about the caller's stack frame
The return address—what the program should do
after the function returns (i.e.: where it should "return
to"). This is usually somewhere in the middle of the caller's
code.
Some of the specifics vary between processor architectures. For
instance, AMD64 (64-bit x86) processors pass some arguments in
registers and some on the call stack. And, ARM processors (common
in phones) store the return address in a special register instead
of putting it on the call stack.
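If you're curious, Python's standard inspect module lets you poke at some of this yourself. Here's a small sketch (our own example) that prints a frame's local variables and arguments, plus the caller's function name and current line (roughly the "return to" spot):

import inspect

def caller():
    callee(42)

def callee(argument):
    local_variable = 'hello'
    frame = inspect.currentframe()
    print(frame.f_locals)               # local variables and arguments in this frame
    print(frame.f_back.f_code.co_name)  # the caller's function name: 'caller'
    print(frame.f_back.f_lineno)        # the caller's current line (roughly the "return address")

caller()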
The Space Cost of Stack Frames
Each function call creates its own stack
frame, taking up space on the call stack. That's important
because it can impact the space complexity of an algorithm.
Especially when we use recursion.
For example, if we wanted to multiply all the numbers
between 1 and n,
we could use this recursive approach:
def product_1_to_n(n):
    return 1 if n <= 1 else n * product_1_to_n(n - 1)
What would the call stack look like
when n = 10?
First, product_1_to_n() gets called
with n = 10:
product_1_to_n() n = 10
This calls product_1_to_n() with
n = 9.
product_1_to_n() n = 9
product_1_to_n() n = 10
Which calls product_1_to_n()
with n = 8.
product_1_to_n() n = 8
product_1_to_n() n = 9
product_1_to_n() n = 10
And so on until we get to n = 1.
product_1_to_n() n = 1
product_1_to_n() n = 2
product_1_to_n() n = 3
product_1_to_n() n = 4
product_1_to_n() n = 5
product_1_to_n() n = 6
product_1_to_n() n = 7
product_1_to_n() n = 8
product_1_to_n() n = 9
product_1_to_n() n = 10
Look at the size of all those stack frames! The entire call stack
takes up O(n) space. That's right—we
have an O(n) space cost even though
our function itself doesn't create any data
structures!
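To see that growth directly, here's a sketch (our own instrumentation, using the standard inspect module) that prints how many frames are on the call stack at each recursive call:

import inspect

def product_1_to_n(n):
    # len(inspect.stack()) counts the frames currently on the call stack
    print('n = %d, stack depth = %d' % (n, len(inspect.stack())))
    return 1 if n <= 1 else n * product_1_to_n(n - 1)

product_1_to_n(5)
# The depth grows by one frame per recursive call: O(n) space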
What if we'd used an iterative approach instead of a recursive one?
def product_1_to_n(n):
    # We assume n >= 1
    result = 1
    for num in range(1, n + 1):
        result *= num
    return result
This version takes a constant amount of space. At the beginning of the loop,
the call stack looks like this:
product_1_to_n() n = 10, result = 1, num = 1
As we iterate through the loop, the local variables change, but we
stay in the same stack frame because we don't call any other
functions.
product_1_to_n() n = 10, result = 2, num = 2
product_1_to_n() n = 10, result = 6, num = 3
product_1_to_n() n = 10, result = 24, num = 4
In general, even though the compiler or interpreter will take
care of managing the call stack for you, it's important to consider the
depth of the call stack when analyzing the space complexity of an
algorithm.
Be especially careful with recursive functions!
They can end up building huge call stacks.
What happens if we run out of space? It's a stack
overflow! In Python 3.5+, you'll get
a RecursionError.
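For instance, this sketch (our own, reusing the recursive approach from above) blows past CPython's default limit of about 1,000 frames:

import sys

def product_1_to_n(n):
    # The recursive version from above
    return 1 if n <= 1 else n * product_1_to_n(n - 1)

print(sys.getrecursionlimit())  # usually 1000 by default
product_1_to_n(100000)          # raises RecursionError: maximum recursion depth exceeded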
If the very last thing
a function does is call
another function, then its stack frame
might not be needed anymore. The function could free up its stack frame before doing its final
call, saving space.
This is called tail call optimization
(TCO). If a recursive function is optimized with TCO, then it
may not end up with a big call stack.
Most languages don't provide TCO. Scheme
is one of the few languages that guarantee tail call
optimization. Some Ruby, C, and JavaScript
implementations may do it. Python and Java decidedly
don't.
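For example, here's a sketch of a tail-call-friendly version of product_1_to_n() (the accumulator argument is our own addition). In a language with TCO it would run in O(1) stack space; in Python it still builds an O(n) call stack:

def product_1_to_n(n, accumulator=1):
    if n <= 1:
        return accumulator
    # The recursive call is the very last thing this function does,
    # so a language with TCO could reuse this stack frame.
    # Python doesn't, so this still builds O(n) frames.
    return product_1_to_n(n - 1, accumulator * n)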
So, back to in-place functions: an in-place function modifies data structures or objects outside of its own stack frame (i.e.: stored on the process heap or in the stack frame of a calling function). Because of this, the changes made by the function remain after the call completes.
In-place algorithms are sometimes called
destructive, since the original input is
"destroyed" (or modified) during
the function call.
Careful: "In-place" does not mean "without
creating any additional variables!" Rather, it means
"without creating a new copy of the input." In general, an
in-place function will only create
additional variables that are O(1) space.
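For example, here's a sketch (our own, separate from the problem) of an in-place reversal whose only extra variables are two integer indexes, which is O(1) space:

def reverse_list_in_place(the_list):
    left_index = 0
    right_index = len(the_list) - 1
    # Two integer indexes: O(1) additional space, no copy of the input
    while left_index < right_index:
        the_list[left_index], the_list[right_index] = \
            the_list[right_index], the_list[left_index]
        left_index += 1
        right_index -= 1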
An out-of-place function
doesn't make any changes that are visible to
other functions. Usually,
those functions copy any data structures or objects
before manipulating and changing them.
In many languages, primitive values (integers,
floating point numbers, or characters) are copied when passed as
arguments, and more complex data structures
(lists, heaps, or hash tables) are
passed by
reference. Python behaves similarly in practice: everything is passed
as a reference to an object, but primitives like integers are immutable,
so a function can't change the caller's value through that reference.
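Here's a small sketch (with hypothetical function names of our own) showing the difference:

def set_to_zero(number):
    # Rebinds the local name only; the caller's variable is untouched
    number = 0

def append_zero(numbers):
    # Mutates the caller's list object in place
    numbers.append(0)

my_number = 5
my_numbers = [5]
set_to_zero(my_number)
append_zero(my_numbers)
print(my_number)   # 5
print(my_numbers)  # [5, 0]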
Here are two functions that do the same
operation on a list, except one is
in-place and the other is out-of-place:
def square_list_in_place(int_list):
    for index, element in enumerate(int_list):
        int_list[index] *= element

    # NOTE: no need to return anything - we modified
    # int_list in place


def square_list_out_of_place(int_list):
    # We allocate a new list with the length of the input list
    squared_list = [None] * len(int_list)

    for index, element in enumerate(int_list):
        squared_list[index] = element ** 2

    return squared_list
Working in-place is a good way to save time and
space. An in-place algorithm avoids the cost of
initializing or copying data structures, and it usually has
an O(1) space cost.
But be careful: an in-place algorithm can cause side effects.
Your input is "destroyed" or "altered," which can affect
code outside of your function. For
example:
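Say other code was still depending on original_list (this snippet is our own illustration, using square_list_in_place() from above):

original_list = [2, 3, 4, 5]
square_list_in_place(original_list)

print("original list: %s" % original_list)
# Prints: original list: [4, 9, 16, 25]
# Any code that still expected [2, 3, 4, 5] is out of luck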
Generally, out-of-place algorithms are considered safer
because they avoid side effects. You should only use an
in-place algorithm if you're space constrained or
you're positive you don't need the original input
anymore, even for debugging.
Gotchas
A common first idea is to walk through the list and swap each element with a random other element. Like so:
import random
def get_random(floor, ceiling):
    return random.randrange(floor, ceiling + 1)

def naive_shuffle(the_list):
    # For each index in the list
    for first_index in range(len(the_list)):
        # Grab a random other index
        second_index = get_random(0, len(the_list) - 1)

        # And swap the values
        if second_index != first_index:
            the_list[first_index], the_list[second_index] = \
                the_list[second_index], the_list[first_index]
However, this does not give a uniform random distribution.
Why? We could calculate the exact probabilities of two outcomes to show they aren't the same. But the math gets a little messy. Instead, think of it this way:
Suppose our list had 3 elements: [a, b, c]. This means it'll make 3 calls to get_random(0, 2). That's 3 random choices, each with 3 possibilities. So our total number of possible sets of choices is 3 * 3 * 3 = 27. Each of these 27 sets of choices is equally probable.
But how many possible outcomes do we have? If you paid attention in stats class you might know the answer is 3!, which is 6. Or you can just list them by hand and count:
a, b, c
a, c, b
b, a, c
b, c, a
c, b, a
c, a, b
But our function has 27 equally-probable sets of choices. 27 is not evenly divisible by 6. So some of our 6 possible outcomes will be achievable with more sets of choices than others.
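If you'd like to check this empirically, here's a short sketch (our own, separate from the problem) that enumerates all 27 equally likely sets of choices for [a, b, c], applies the same swap logic as naive_shuffle(), and counts the resulting orderings:

from collections import Counter
from itertools import product

def naive_shuffle_with_choices(items, choices):
    # Same swap logic as naive_shuffle(), but the "random" indices
    # come from a fixed sequence of choices
    the_list = list(items)
    for first_index, second_index in enumerate(choices):
        if second_index != first_index:
            the_list[first_index], the_list[second_index] = \
                the_list[second_index], the_list[first_index]
    return tuple(the_list)

counts = Counter(
    naive_shuffle_with_choices(['a', 'b', 'c'], choices)
    for choices in product(range(3), repeat=3)
)
print(counts)
# The 6 orderings don't all appear the same number of times,
# since 27 isn't evenly divisible by 6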
We can do this in a single pass, in O(n) time and O(1) space.
A common mistake is to have a mostly-uniform shuffle where an item is less likely to stay where it started than it is to end up in any given slot. Each item should have the same probability of ending up in each spot, including the spot where it starts.