Python Interview Questions: Mastering Technical Interviews

You've learned Python's basics—functions, loops, and data structures. Perhaps you've built some impressive projects. So, when your interviewer asks what language you want to use, the answer seems easy: "Python!"

Most interviewers will let you solve problems in any language, but that flexibility comes with a catch—they expect you to demonstrate genuine fluency in whatever language you pick. If you choose Python, they'll expect you to know more than what you'd pick up in a classroom setting.

In this guide, we'll explore the Python practices that separate classroom code from professional solutions. We'll dive into the patterns and principles that experienced developers use daily. Whether you're preparing for your first technical interview or looking to level up your Python skills, we'll help you transform your code from "it works" to "production-ready." Excited to think about Python like a pro? Let's dive in.

Topic 1: Why Python?

Like any tool, Python shines in some scenarios and falls short in others. When interviewing, be candid about whether Python is actually the best choice for the scenario provided. (And if it's not, don't hesitate to ask if you can switch languages!)

Here are Python's biggest strengths:

  • Readability and Expressiveness: Python's clean, English-like syntax makes code incredibly readable. Its "one obvious way to do it" philosophy means different developers will write similar-looking code, making it ideal for collaborative teams and code review.
  • Versatility: Python supports multiple programming paradigms. You can write procedural scripts, dive into functional programming with list comprehensions and lambda functions, or build complex object-oriented systems—whichever works best for the task at hand.
  • Rich Ecosystem: Python's package ecosystem is vast and well-maintained. Need web development? Django and Flask are battle-tested. Machine learning? TensorFlow and PyTorch lead the industry. Data analysis? Pandas and NumPy are optimized number crunchers. This means you rarely need to "reinvent the wheel."

In practice, Python excels at:

  • Rapid Prototyping: Python's simplicity makes it easy to get started and mock up a minimum viable product (MVP)
  • Data Science and ML: The ecosystem of supporting libraries for these tasks is unmatched
  • Web APIs and Services: Frameworks like FastAPI and Django REST make building robust APIs straightforward
  • Automation and Scripting: More powerful than Bash, more readable than Perl, and more portable than PowerShell

However, you should probably look elsewhere when:

  • Performance is critical: For real-time systems or performance-sensitive applications, consider C++ or Rust
  • Resources are constrained: On embedded systems or memory-limited environments, C is usually better
  • Building user interfaces: For web frontends, stick with JavaScript/TypeScript. For mobile apps, use Swift or Kotlin
  • Handling Massive Concurrency: Go and Rust offer better built-in support for parallel processing
  • Writing System-Level Code: For operating system components or drivers, C, C++, or Rust are more appropriate

Topic 2: Python's Mutable and Immutable Types

Understanding Python's distinction between mutable and immutable types is crucial for writing reliable, production-ready code. Simply put, immutable objects can't be modified after creation—any operation that appears to modify them actually creates a new object. Mutable objects, on the other hand, can be modified in place.

So, which items are mutable and which are immutable?

  • Immutable: Numbers, strings, byte strings, tuples, and frozen sets
  • Mutable: Nearly everything else—lists, sets, dictionaries, user-defined objects

When writing code, choose between mutable and immutable types with intention. If you know a value shouldn't ever change, use an immutable type to prevent accidental modifications. Conversely, if you're frequently modifying something, use a mutable type to avoid the overhead of creating new objects with each change.
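To make the distinction concrete, here's a minimal sketch (the variable names are our own, chosen for illustration):

```python
# A list (mutable) can be changed in place:
scores = [85, 92]
scores.append(78)
print(scores)              # [85, 92, 78]

# A tuple (immutable) rejects in-place modification:
point = (3, 4)
try:
    point[0] = 5
except TypeError as exc:
    print(exc)

# "Modifying" an immutable string really builds a new object;
# the original is untouched:
name = 'ada'
upper_name = name.upper()
print(name, upper_name)    # ada ADA
```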

Beware: Mutable Types and Shallow Copying

Suppose we're writing some code to track interview questions. We have a default template to get us started:

question_template = {
    'title': 'default title',
    'question': 'default question',
    'answer': 'default answer',
    'hints': []
}

Each time we make a new question, we copy the template and populate it with specifics:

def make_new_question(title, question, answer, hints=None):
    new_q = question_template.copy()
    # always require title, question, answer
    new_q['title'] = title
    new_q['question'] = question
    new_q['answer'] = answer
    # sometimes there aren't hints, that's fine. Otherwise, add them:
    if hints is not None:
        new_q['hints'].extend(hints)
    return new_q

Let's add a few questions and print them out:

question_1 = make_new_question("title1", "question1", "answer1", ["q1 hint1", "q1 hint2"])
question_2 = make_new_question("title2", "question2", "answer2")
question_3 = make_new_question("title3", "question3", "answer3", ["q3 hint1"])

print(question_1)
# Prints: {'title': 'title1', 'question': 'question1', 'answer': 'answer1', 'hints': ['q1 hint1', 'q1 hint2', 'q3 hint1']}
print(question_2)
# Prints: {'title': 'title2', 'question': 'question2', 'answer': 'answer2', 'hints': ['q1 hint1', 'q1 hint2', 'q3 hint1']}
print(question_3)
# Prints: {'title': 'title3', 'question': 'question3', 'answer': 'answer3', 'hints': ['q1 hint1', 'q1 hint2', 'q3 hint1']}

What happened?

When we copied the template (using copy), we got a shallow copy. While each question does have its own dictionary, the mutable items are still shared! Uh oh.

To fix this, either:

  • Use copy.deepcopy to create completely independent copies, or
  • Use immutable types inside your template to be copied
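Here's a sketch of the first fix applied to the question-template example, using copy.deepcopy so the nested hints list is duplicated too:

```python
import copy

question_template = {
    'title': 'default title',
    'question': 'default question',
    'answer': 'default answer',
    'hints': []
}

def make_new_question(title, question, answer, hints=None):
    # deepcopy duplicates nested mutable objects, so each question
    # gets its own independent 'hints' list
    new_q = copy.deepcopy(question_template)
    new_q['title'] = title
    new_q['question'] = question
    new_q['answer'] = answer
    if hints is not None:
        new_q['hints'].extend(hints)
    return new_q

q1 = make_new_question('title1', 'question1', 'answer1', ['q1 hint1'])
q2 = make_new_question('title2', 'question2', 'answer2')
print(q1['hints'])  # ['q1 hint1'] - no longer shared
print(q2['hints'])  # []
```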

Mutable Types and Function Default Parameters: A Subtle Interaction

Take a look at the code snippet below. What do you think will get printed?

def add_to_list(value, my_list=[]):
    my_list.append(value)
    return my_list

print(add_to_list(1))  # [1]
print(add_to_list(2))  # [1, 2] - Wait, what?

The output might surprise you! Why does the second print statement show values from the first call?

In Python, default arguments are evaluated once—when the function is defined—not each time the function is called. Because lists are mutable, each call modifies the same list that was created during function definition. This can lead to unexpected behavior, because multiple calls will share the same underlying mutable default. In larger applications where functions are called repeatedly, that's a recipe for unexpected bugs.

A good practice is to make all default values immutable. In this example, we use None and initialize a new list inside the function when necessary.

def add_to_list(value, my_list=None):
    if my_list is None:
        my_list = []
    my_list.append(value)
    return my_list

print(add_to_list(1))  # [1]
print(add_to_list(2))  # [2] - Each call gets a fresh list

This pattern—using immutable defaults and creating the mutable objects inside the function—is a common Python idiom you'll see in professional codebases. It ensures each function call starts with a fresh mutable object, preventing unexpected interactions between different calls.

Bonus: Immutable Numbers? What Does That Mean?

In Python, everything, including numbers, is an object. When you "change" a number, you actually get a new object:

a = 2024
print(id(a))  # 4372512560
a += 1
print(id(a))  # 4372509168 - a new object!

As a performance optimization, CPython creates singletons for commonly used integers. That means there's only one instance of those numbers that all code shares:

b = 5
c = 6
print(id(b), id(c))  # 4385709128 4385709160
b += 1
print(id(b), id(c))  # 4385709160 4385709160 - the same object, twice

Can you find all the integers that use singletons? Don't forget numbers less than zero!
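One way to probe this empirically (note this is a CPython implementation detail, not a language guarantee; in CPython the cached range is -5 through 256):

```python
def is_cached(n: int) -> bool:
    # Build two equal values independently, via string parsing, so the
    # compiler can't fold them into a single constant.
    a = int(str(n))
    b = int(str(n))
    # If both creations returned the very same object, n is cached.
    return a is b

print(is_cached(100))    # True in CPython: inside the cached range
print(is_cached(10**6))  # False in CPython: large ints get fresh objects
```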

Topic 3: Python's try, except, and finally: Exception Handling

Production code always runs into unexpected hiccups: a user claims they were born in 1792, a web server temporarily can't reach the internet, an expired API key makes a web request fail. Any of these edge cases can generate an exception which will terminate our program. In a production system, that's usually not what we want. Instead, we should handle the exception and continue running.

Basic Exception Handling

In Python, use a try block to isolate code that might raise an exception. After the try block, add a series of except blocks to catch and handle different kinds of errors. An optional finally block contains cleanup code that always runs—whether there were exceptions or not.

try:
    with open(config_file) as fd:
        parse_config(fd)
except FileNotFoundError:
    print(f'File {config_file} does not exist!')
except IsADirectoryError:
    print(f'File {config_file} is a directory!')
except EOFError:
    print(f'File {config_file} ended unexpectedly!')
finally:
    # Always runs - opportunity to clean up
    print('Done loading configuration file.')

In production code, you'll probably want to do more than just print inside the except blocks. The specifics vary depending on the application, but some common exception handling actions are:

  • Log the error in a structured log. Python's logging module shines here.
  • Retry the action that failed, with some delay in between attempts.
  • Fall back on sane behavior. In the example above, we might use a default configuration if we can't load the provided customized one.
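As an illustration of the retry idea, here's a hedged sketch (the helper name with_retries and the choice to treat OSError as the transient failure are our own, not from a particular library):

```python
import logging
import time

def with_retries(action, max_attempts=3, delay_seconds=1.0):
    """Run `action`, retrying on failure with a delay between attempts."""
    for attempt in range(1, max_attempts + 1):
        try:
            return action()
        except OSError as exc:  # catch only errors we expect to be transient
            logging.warning('Attempt %d/%d failed: %s', attempt, max_attempts, exc)
            if attempt == max_attempts:
                raise  # out of retries: let the caller decide what to do
            time.sleep(delay_seconds)
```

A real system might also grow the delay between attempts (exponential backoff) to avoid hammering a struggling service.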

Broad vs. Specific Exception Handling

You should only catch the exceptions that you're prepared to address. Each except statement should be tied to a specific Exception type to prevent it from accidentally catching more than expected. While the code below runs, it potentially masks a huge range of errors that might need more explicit handling.

try:
    complex_data_loading_and_processing()
except:  # Don't do this! Too broad - catches everything!
    pass  # Don't do this! No logging or handling - just silent failure

Interviewers love considering corner cases and ways things can go wrong. During the interview, you should ask whether the interviewer wants your code to be robust to these kinds of failures or not. Some want to see you write careful exception handling code, and others prefer that you focus the limited interview time on other topics.
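For contrast with the overly broad example above, here's a sketch of narrow, logged handling (load_settings and the JSON-config scenario are invented for illustration):

```python
import json
import logging

def load_settings(path: str) -> dict:
    """Load JSON settings, falling back to defaults on the failures we expect."""
    try:
        with open(path) as fd:
            return json.load(fd)
    except FileNotFoundError:
        # Expected case: first run, no settings yet - log it and fall back
        logging.warning('Settings file %s missing; using defaults', path)
        return {}
    except json.JSONDecodeError as exc:
        # Expected case: corrupted file - log the details and fall back
        logging.error('Settings file %s is malformed: %s', path, exc)
        return {}
    # Anything else (e.g. PermissionError) propagates: we aren't
    # prepared to handle it here, so we shouldn't catch it.
```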

Topic 4: Type Annotations and Hints in Python

Python has always been a dynamically-typed language, meaning that variables don't have explicit types associated with them. That said, since mid-2015 (Python 3.5), Python has had type hints—annotations specifying the types that variables and functions are expected to use. They're optional, but they'll make your code clearer, help catch bugs, and improve IDE support.

Type Annotation Syntax

Type hints can be added to function parameters and return values:

def calculate_discount(price: float, discount_percent: float) -> float:
    return price * (1 - discount_percent / 100)

Type hints can also be added to variables:

user_id: int = 12345
name: str = 'Antoine'

Python's typing module provides specifiers for more complex types.

from typing import List, Dict

def calculate_bill_total(items: List[str], prices: Dict[str, float], gratuity: float) -> float:
    return sum([prices[item] for item in items]) * (1 + gratuity)

names: List[str] = ['Alice', 'Bob', 'Charlie']

If you're doing something more specialized, the typing module also provides building blocks to make custom types.
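For instance, here's a sketch using a few of those building blocks (UserId, find_username, and total are hypothetical names for illustration):

```python
from typing import Dict, List, NewType, Optional, Union

# NewType creates a distinct name for plain ints that static
# checkers can tell apart from ordinary integers
UserId = NewType('UserId', int)

def find_username(user_id: UserId, directory: Dict[int, str]) -> Optional[str]:
    # Optional[str] means "a str, or None when the user is unknown"
    return directory.get(user_id)

# Union accepts any of the listed types
Price = Union[int, float]

def total(prices: List[Price]) -> float:
    return float(sum(prices))
```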

Type Annotation Enforcement

Python's type hints are not usually enforced at runtime. Static checkers like mypy verify them before the code ever runs; for runtime validation, look into libraries such as typeguard. It's most common to use these tools while developing code and then turn off type checking in production deployments.

If type annotations aren't enforced at runtime, does that make them worthless? Definitely not. Type hints make code more maintainable and explicitly communicate assumptions to other developers. Definitely consider adding them when working on a larger project with multiple developers, creating an API or library, or dealing with data structures that have a specific schema.

When discussing type hints in interviews, emphasize how they help in the development process but note their limitations in enforcement. Adding them to your own code helps the interviewer understand your intentions and also signals you're committed to code maintainability and documentation. A win-win!

Topic 5: Function Decorators in Python

Decorators are one of Python's most powerful features, yet they often trip up developers. At their core, decorators are just functions that modify other functions. That's especially valuable because it lets you add functionality to existing code without changing a function's internal implementation. You should reach for decorators when you want to add extra logic either before or after every call to a function.

A Simple Decorator Example

Let's look at a common example: timing how long a function takes to run. Without decorators, you might write something like this:

import time
import logging

def slow_method():
    pass  # Do something expensive here

start = time.time()
slow_method()
end = time.time()
logging.info(f'Function slow_method ran in {end - start} seconds.')

This works, but it's tedious and messy to add these lines everywhere slow_method is called. Decorators to the rescue!

import logging
from functools import wraps
from time import time
from typing import Any, Callable

def timer(func: Callable) -> Callable:
    '''Measures and logs the execution time of a function.'''
    @wraps(func)  # preserves func's name and docstring on the wrapper
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        start = time()
        result = func(*args, **kwargs)
        end = time()
        logging.info(f'{func.__name__} took {end - start:.2f} seconds to execute')
        return result
    return wrapper

@timer
def slow_method():
    pass  # Do something expensive here

Now every call to slow_method automatically includes timing, no matter where it's called from. Pretty nifty, right?

Common Uses of Decorators:

  • Monitoring function performance
  • Caching and memoizing function results
  • Validating input
  • Rate limiting for API calls
  • Access control and authentication
  • Acquiring and releasing exclusive locks

Python's ecosystem includes several useful decorators. The functools module's @lru_cache decorator is a particular standout, caching function results to speed up repetitive computations:

from functools import lru_cache

@lru_cache
def fibonacci(n: int) -> int:
    if n < 2:
        return n
    return fibonacci(n-1) + fibonacci(n-2)

Python's Class Decorators

A few built-in decorators are particularly helpful when defining classes.

  • The @property decorator makes it simple to control how object attributes are accessed and modified. As an example, if we were defining a Circle class, we could have an attribute for the circle's radius that had to be non-negative. Using decorators, it's easy to enforce that constraint.

    class Circle:
        def __init__(self, radius: float) -> None:
            self.radius = radius  # goes through the setter below, so the input is validated

        @property
        def radius(self) -> float:
            return self._radius

        @radius.setter
        def radius(self, value: float) -> None:
            if value < 0:
                raise ValueError('Radius must be >= 0!')
            self._radius = value

    my_circle = Circle(5.0)
    my_circle.radius = -2  # Calls radius.setter, raises ValueError
  • Usually, the first argument to every method inside a class is self—the object instance involved in the call. That said, sometimes you'll want to write class methods that don't take in an instance. To do that, use the @staticmethod decorator.

    class Circle:
        # Snip

        @staticmethod
        def shape_string() -> str:
            return 'circle'

    print(Circle.shape_string())  # prints 'circle'
  • Other times, you might want to write a class method that takes the class instead of a specific instance of the class. To do that, use the @classmethod decorator.

    from __future__ import annotations
    import math

    class Circle:
        # Snip

        @classmethod
        def initialize_from_area(cls, area: float) -> Circle:
            # area = pi * r^2, so r = sqrt(area / pi)
            radius = math.sqrt(area / math.pi)
            return cls(radius)

    unit_circle = Circle.initialize_from_area(math.pi)
    print(unit_circle.radius)  # prints 1.0

During interviews, watch for opportunities where decorators could improve your solution. Common signs include repeated setup/cleanup code, need for timing or logging, or operations that could benefit from caching. When you spot these opportunities, flag them to your interviewer and ask if they'd like you to implement the decorator or focus elsewhere. This demonstrates both your knowledge of Python's features and your ability to prioritize during time-constrained situations.

Topic 6: Threading and Python's Global Interpreter Lock

Python's Global Interpreter Lock (GIL) is a mutex that prevents multiple native threads from executing Python bytecode simultaneously. This can limit performance of parallel applications built with threading. Even if you have multiple threads that appear to be running concurrently, in practice only one will run at a time. Yikes.

The GIL is a great example of tradeoffs in design. The default Python implementation, CPython, has a GIL in order to simplify the underlying code. Other Python implementations, like Jython, don't rely on a GIL and handle the complexity of parallelism directly.

Implications for Developers

If you're using CPython, the GIL means that:

  • CPU-bound tasks (like number crunching) won't run faster with threading, because only one thread will be able to do computations at a time. Instead, use multiprocessing, which has a bit more overhead but does run different tasks on multiple CPUs.
  • I/O-bound tasks (like network requests or file operations) can still benefit significantly from threading, because most of the time threads will be waiting on I/O, not running Python code. Yay!
  • The underlying Python implementation matters. If you're not using CPython, you can skip the two points above. Do you know which one you're using? What does production use?
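A minimal multiprocessing sketch of the first point (cpu_heavy is a stand-in for real number crunching; the worker count is arbitrary):

```python
from multiprocessing import Pool

def cpu_heavy(n: int) -> int:
    # A stand-in for real CPU-bound work.
    return sum(i * i for i in range(n))

if __name__ == '__main__':
    with Pool(processes=4) as pool:
        # Each input runs in a separate process with its own interpreter
        # (and its own GIL), so the work truly runs in parallel.
        results = pool.map(cpu_heavy, [10_000, 20_000, 30_000])
    print(results)
```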

Parallel processing is a staple of performance-critical production environments. During a coding interview, you should be confident explaining Python's limitations with parallelism and know when to use threading vs. multiprocessing for the biggest speed boost.

Topic 7: Debugging Python Code—Beyond Printing

When an interviewer asks about debugging, they're often less interested in the specific tools you use and more interested in your systematic approach to problem-solving. That said, it's good to know about at least one debugging tool that's more robust than adding print statements. In Python, two common debugging options are the built-in debugger (pdb) and IDE visual debuggers.

Key Steps When Debugging

If asked to describe your debugging workflow, be sure to touch on each of the items below:

  1. Reproduce the Issue
    • Create a minimal example that demonstrates the bug
    • Document the exact steps to trigger the problem
    • Note the environment details (Python version, OS, relevant package versions)
  2. Quick Investigation
    • Add strategic print statements to trace program flow
    • Log variable values at key points
    • Check for obvious issues like None values or type mismatches
  3. Deep Investigation. Use a debugger to find the specific point where behavior diverges from expectation.
    • Set breakpoints at suspicious locations
    • Inspect variable values and call stacks
    • Step through code execution
    • Evaluate expressions in the current context
  4. Verify the Fix
    • Confirm the original test case now works
    • Add regression tests to prevent future occurrences
    • Check that the fix didn't introduce new problems
    • Document the solution for future reference

How to Use Python's pdb

During Step #3, it's often helpful to use a debugger. To trigger Python's built-in debugger, add a call to breakpoint at the point in your code where you want to debug. When that point runs, you'll be dropped into an interactive shell where you can inspect and modify the program state. You might:

  • Show a stack trace (where)
  • Print out the value of variables or expressions (p [expression])
  • Display source code around the current line (list)
  • Execute until the next line, then stop (next)
  • Add additional breakpoints (break [linenum | function])
  • Continue until the next breakpoint (continue)
  • Quit (quit)
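As a tiny sketch of where the breakpoint would go (the function and its inputs are hypothetical; the breakpoint() call is commented out so the snippet runs non-interactively):

```python
def average(values):
    total = sum(values)
    # To investigate a suspicious result, drop into pdb right here:
    # breakpoint()
    # In the pdb shell you could then run: p values / p total / where / next
    return total / len(values)

print(average([2, 4, 6]))  # 4.0
```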

Of course, it's unlikely you'll carry out extensive debugging sessions when coding on a whiteboard. Still, having fluency around using debuggers demonstrates programming maturity.

Heading Off Bugs During Development

Prevention is Better Than Cure: While debugging tools are invaluable, professional developers rely heavily on preventive measures so that, most of the time, they never need to reach for a debugger at all. Make sure to mention how your development cycle would incorporate:

  • Logging: Structured logging provides insights into production issues
  • Assert Statements: Catch invalid states early in development
  • Unit Tests: Identify problems before they reach production
  • Type Annotations: Catch type-related issues during development
  • Code Reviews: Get another pair of eyes on complex logic

Topic 8: Virtual Environments and Python Package Managers

If you're not using virtual environments yet, you should start. Virtual environments are absolutely essential for professional Python work—they're the difference between reliable, reproducible code and the dreaded "it works on my machine" dependency mess.

A virtual environment is like a fresh, isolated Python installation for each project. Think of it as a clean slate where you can install packages without worrying about conflicts with other projects.

Why virtual environments matter in practice:

  • Dependency Isolation: Different projects often need different versions of the same package. Project A might need TensorFlow 1.x while Project B requires 2.x. Virtual environments let each project get its own copy with the expected version.
  • Reproducibility: When you use a virtual environment and maintain a requirements.txt file, anyone can recreate your exact development environment with a single command. This is crucial for team collaboration and deployment.
  • Clean Testing: Virtual environments ensure your tests run against only the dependencies you've explicitly specified, catching missing dependencies before they become production issues.

Your tooling choices matter too. While the built-in venv module works fine, many professional developers use a tool like poetry that manages packages and virtual environments together. Being familiar with at least one of these tools (and ideally multiple!) shows you understand professional development practices and the importance of standardized environments.
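A typical venv workflow might look like the following sketch (assuming a Unix-like shell; the .venv directory name is just a common convention):

```shell
# Create an isolated environment in the .venv directory
python3 -m venv .venv

# Activate it for this shell session (on Windows: .venv\Scripts\activate)
source .venv/bin/activate

# Record the exact installed versions so teammates can reproduce them...
pip freeze > requirements.txt

# ...and recreate the environment elsewhere with a single command
pip install -r requirements.txt
```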

Topic 9: Generators and Lazy Evaluation in Python

Generators are one of Python's most powerful features, but they're often forgotten. Why? When working with small amounts of data, generators don't make that much of a difference. As data sizes grow though, using generators can be the difference between a program that runs quickly and one that runs out of memory and crashes. You should be comfortable recognizing where lazy evaluation would help make your code more efficient and scalable.

What are Generators in Python?

At their core, generators are functions that generate values on-demand rather than all at once. This "lazy evaluation" approach can dramatically reduce memory usage when working with large datasets or infinite sequences. Here's a simple example that illustrates the difference:

from typing import Generator, List

# Memory-intensive approach
def get_square_numbers(n: int) -> List[int]:
    return [x * x for x in range(n)]  # Creates entire list in memory

# Memory-efficient generator approach
def get_square_numbers(n: int) -> Generator[int, None, None]:
    for x in range(n):
        yield x * x  # Generates values one at a time

The generator version might look similar, but it's fundamentally different in how it uses resources. If n is 1 million, the list version creates a list with 1 million integers in memory all at once. The generator version? It holds just a single value in memory at any time.

When to use Python Generators?

Generators shine in real-world scenarios like:

  • Processing large files: Read and process the file line by line instead of loading it all into memory
  • API pagination: Fetch and yield results one page at a time
  • Infinite sequences: Model potentially infinite sequences without running out of memory
  • Data transformations: Chain operations together without creating intermediate lists
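The first and last patterns can be sketched as a pair of chained generators (read_lines and non_empty are illustrative names):

```python
from typing import Generator, Iterable

def read_lines(path: str) -> Generator[str, None, None]:
    # Yields one stripped line at a time; the whole file is never in memory.
    with open(path) as f:
        for line in f:
            yield line.strip()

def non_empty(lines: Iterable[str]) -> Generator[str, None, None]:
    # A chained transformation: no intermediate list is ever built.
    for line in lines:
        if line:
            yield line

# Usage: lines flow through the chain one at a time.
# count = sum(1 for _ in non_empty(read_lines('server.log')))
```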

Generator Gotcha: Multiple Iterations

One common pitfall with generators is trying to iterate through them multiple times. Once a generator is exhausted the first time, subsequent iterations will yield no values. You need to reinitialize the generator before each iteration.

As an example, suppose we want to create a 3×3 multiplication table:

[ [1, 2, 3], [2, 4, 6], [3, 6, 9] ]
matrix = []
iterator = iter(range(1, 4))  # an explicit iterator: it can only be consumed once
for row in iterator:
    matrix.append([row * i for i in iterator])

# Result: [[2, 3]] - Oops!

What happened?

Our iterator has three items: 1, 2, and 3. The first item (1) is consumed when we enter the loop, setting row = 1. The remaining two items (2 and 3) are consumed inside the list comprehension. There's nothing left to build up the remaining rows. Whoops!

A More Complex Example: Parsing Logs With Generators

Here's another example that's a bit more realistic. Suppose we have gigabytes of logs from across our computing cluster. Using generators, we can efficiently parse through the logs and aggregate statistics.

import re
from collections import Counter
from datetime import datetime
from typing import Generator

def parse_logs(log_file: str) -> Generator[dict, None, None]:
    error_pattern = re.compile(
        r'(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) '  # Timestamp
        r'(\w+) '                                   # Log level
        r'\[(\w+)\] '                               # Service name
        r'(.*)'                                     # Message
    )
    with open(log_file, 'r') as f:
        for line in f:
            match = error_pattern.match(line.strip())
            if match:
                timestamp_str, level, service, message = match.groups()
                yield {
                    'timestamp': datetime.strptime(timestamp_str, '%Y-%m-%d %H:%M:%S'),
                    'level': level,
                    'service': service,
                    'message': message,
                }

# Example usage:
def error_counts_by_service(log_file: str) -> dict:
    '''Analyze error logs to generate a service reliability report.'''
    service_errors = Counter()
    for entry in parse_logs(log_file):
        service = entry['service']
        service_errors[service] += 1
    return service_errors

Using generators is crucial when you're working with data that's too large to fit in memory. In an interview, showing that you know when and why to use generators demonstrates you know how to write code that scales to production workloads—a key trait for professional developers companies are trying to hire.

Topic 10: Asynchronous Python Programming with Asyncio

Traditional programming is synchronous, meaning that operations run sequentially. In synchronous programming, when a function is called, the caller has to wait for that function to return before it can continue to the next step.

That said, often multiple steps could run at the same time. This is particularly common when dealing with I/O operations like network requests, database queries, or file manipulation. Ideally, we should run those steps concurrently for better performance.

This is where asynchronous programming comes in. Think of it like a chef in a kitchen: instead of standing idle while water boils (I/O wait), they can prep ingredients for another dish (useful work). Python's asyncio module provides the tools to write such efficient, non-blocking code.

When to Use Asyncio

While Python offers several tools for concurrent programming, each serves a different purpose:

  • Threading: Best for I/O-bound tasks when you need to maintain shared state
  • Multiprocessing: Ideal for CPU-bound tasks that benefit from parallel computation
  • Asyncio: Perfect for I/O-bound operations, especially when handling many concurrent operations

Python Asyncio Core Concepts

In asyncio, a coroutine is a special type of function that can pause its execution while waiting for something (like a network response) and let other code run in the meantime. Think of a coroutine as a polite function that says "I'm waiting for something, so feel free to do other work until I'm ready to continue."

In Python, there's special syntax for working with asynchronous functions:

  • To declare a coroutine, use async def

    async def my_coroutine():
        pass
  • To pause execution of a coroutine (while waiting for something else), use await

    await some_coroutine()
  • To schedule a coroutine to run, create a Task using asyncio.create_task

    task = asyncio.create_task(my_coroutine())
    # Then, wait for the task to finish
    result = await task
  • To run several coroutines concurrently and wait for them all, use asyncio.gather or create a TaskGroup

    results = await asyncio.gather(my_coroutine(), my_coroutine(), my_coroutine())
  • To kick off the outermost coroutine, use asyncio.run

    asyncio.run(main())

A Simple Asyncio Example

Let's simulate an API service that processes orders:

import asyncio
import time
from typing import Dict, List

async def process_order(order_id: int, processing_time: float) -> Dict:
    '''Simulate processing a single order.'''
    print(f'Starting to process order {order_id}')
    await asyncio.sleep(processing_time)  # Simulate I/O work (e.g., database/API calls)
    print(f'Finished processing order {order_id}')
    return {
        'order_id': order_id,
        'status': 'completed',
        'processing_time': processing_time
    }

async def process_orders(orders: List[Dict]) -> List[Dict]:
    '''Process multiple orders concurrently.'''
    tasks = [
        process_order(order['id'], order['processing_time'])
        for order in orders
    ]
    return await asyncio.gather(*tasks)

async def main():
    # Simulate different orders with varying processing times
    orders = [
        {'id': 1, 'processing_time': 2},  # Takes 2 seconds
        {'id': 2, 'processing_time': 1},  # Takes 1 second
        {'id': 3, 'processing_time': 3},  # Takes 3 seconds
    ]

    print('Starting order processing...')
    start_time = time.time()
    results = await process_orders(orders)
    end_time = time.time()

    total_time = end_time - start_time
    print(f'\nProcessed {len(results)} orders in {total_time:.2f} seconds')
    print(f'Individual results: {results}')

# Run the async program
if __name__ == '__main__':
    asyncio.run(main())

Running this code produces output like:

Starting order processing...
Starting to process order 1
Starting to process order 2
Starting to process order 3
Finished processing order 2
Finished processing order 1
Finished processing order 3

Processed 3 orders in 3.01 seconds

Notice that even though the orders total 6 seconds of processing time (2+1+3), they complete in about 3 seconds because they run concurrently. This is the magic of asyncio—it efficiently handles multiple I/O-bound tasks without blocking.

When discussing asyncio in programming interviews:

  • Emphasize your understanding of when to use asyncio versus other concurrency tools. Show that you can identify I/O-bound bottlenecks where asyncio would be most beneficial.
  • Demonstrate awareness of the complexity trade-offs. While asynchronous code can be more complex to reason about, the performance benefits often justify this complexity for I/O-heavy applications.
  • Be prepared to discuss real-world applications. Common examples include:

    • Web scrapers handling multiple requests
    • API servers processing numerous concurrent connections
    • Data pipelines managing multiple file operations

Remember: The goal isn't just to show that you know how to use asyncio, but that you understand where and why it's appropriate to use it in production systems.

Topic 11: Python Linters: Code Quality and Style

Writing clean, maintainable code isn't just about making things work—it's about making them work well over time. Professional Python developers rely heavily on automated tools to maintain consistent code quality. Understanding these tools and Python's style conventions demonstrates that you think about code as a long-term investment, not just a one-off solution.

The Official Standard: PEP 8

PEP 8 is Python's official style guide. Some of the conventions it establishes include:

  • Indentation uses 4 spaces—not tabs
  • Maximum line length is 79 characters (though modern teams often use larger widths)
  • Use snake_case for functions and variables, PascalCase for classes
  • Spaces around operators: x = 1 + 2, not x=1+2
  • One space after commas: func(x, y), not func(x,y)
  • Two blank lines before top-level classes and functions

You should start writing PEP 8-compliant code now so that it grows into a habit over time.
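As a quick before-and-after, here's a hypothetical function (the names computeTotal and compute_total are made up for illustration) written against those conventions and then restyled per PEP 8:

```python
# Against convention: camelCase names, no spaces around operators or after commas
def computeTotal(prices,taxRate):
    return sum(prices)*(1+taxRate)


# PEP 8 style: snake_case names, spaces around operators and after commas
def compute_total(prices, tax_rate):
    return sum(prices) * (1 + tax_rate)


print(compute_total([10, 20], 0.5))  # 45.0
```

Both versions behave identically; the second is simply easier for another developer (or interviewer) to scan.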

Helpful Tools for Python Styling

Python's ecosystem includes several tools that help maintain consistent code in larger projects. On production development teams, it's likely they'll already have one set up to enforce style conventions across the code base. It's a good idea to be familiar with at least one of the common tools:

  • Pylint is the de facto standard for Python linting. It's comprehensive—highlighting style violations, logic bugs and code duplication. Well established, opinionated, and widely used across the community.
  • Flake8 combines a few different Python linters into one tool to flag deviations from PEP8, overly complicated code, and other logic bugs. It's particularly good at catching syntax errors and style violations without being as strict as Pylint about code organization.
  • Black, the "uncompromising code formatter," takes a different approach. Instead of just pointing out style issues, it automatically reformats your code to match a consistent style. This eliminates style discussions in code reviews—the team simply agrees to use Black, and formatting becomes automatic.

Regardless of whether you're coding on a whiteboard or inside an editor, make sure your code uses a consistent style, ideally one that's PEP8 compliant. Doing so shows the code you'll contribute will be consistent and follow standard conventions, making it easy for others to read and understand.

That last point is worth emphasizing again. Above all, you should make sure your code is legible and coherent. It can be tempting to show off Python's features like complex list comprehensions or fancy one-liners. Don't. It's fun to write this code, but the result isn't fun to read or maintain. And in professional development environments, maintainability is crucial.
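As an illustration, here's a contrived "clever" one-liner next to the plain loop a reviewer can actually follow (the input text is made up). Both build the same word-frequency dict for words longer than three letters:

```python
from functools import reduce

text = 'the quick brown fox jumps over the lazy dog'

# Clever but cryptic: a reduce() over a generator, rebuilding a dict each step
freqs = reduce(
    lambda acc, w: {**acc, w: acc.get(w, 0) + 1},
    (w for w in text.split() if len(w) > 3),
    {},
)

# Clear and boring: a plain loop
counts = {}
for word in text.split():
    if len(word) > 3:
        counts[word] = counts.get(word, 0) + 1

assert freqs == counts
print(counts)  # {'quick': 1, 'brown': 1, 'jumps': 1, 'over': 1, 'lazy': 1}
```

If you reach for the first version in an interview, expect to spend your remaining time explaining it.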

Topic 12: Python Unit Testing and Mocking

Production code isn't just about what happens when everything works perfectly—it's about being confident your code works correctly in all possible cases. That's where testing comes in, and Python's testing ecosystem makes it straightforward to write and maintain comprehensive tests.

Python Unit Tests With pytest

Python's pytest framework is the de facto standard for testing. To get a feel for it, let's work through a small example: writing tests for a User class that tracks whether individuals can vote or not.

# production code in user.py
class User:
    def __init__(self, name: str, age: int) -> None:
        if age < 0:
            raise ValueError('Age cannot be negative')
        self.name = name
        self.age = age

    def can_vote(self) -> bool:
        return self.age >= 18

Ideally, our tests should cover all the possible branches of the code. To start off, let's make a test that checks users are initialized sanely:

# test code in test_user.py
import pytest
from user import User


def test_user_creation():
    user = User('Alice', 25)
    assert user.name == 'Alice'
    assert user.age == 25

We should also test that negative ages are rejected with the expected exception:

def test_negative_age():
    with pytest.raises(ValueError):
        User('Bob', -1)

Finally, let's test the logic around voting age:

def test_voting_age():
    minor = User('Charlie', 16)
    adult = User('Diana', 18)
    assert not minor.can_vote()
    assert adult.can_vote()
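Those boundary cases are also a natural fit for pytest's parametrize decorator, which runs one test body over a table of inputs. Here's a self-contained sketch (the User class is repeated so it runs on its own):

```python
import pytest


class User:
    def __init__(self, name: str, age: int) -> None:
        if age < 0:
            raise ValueError('Age cannot be negative')
        self.name = name
        self.age = age

    def can_vote(self) -> bool:
        return self.age >= 18


@pytest.mark.parametrize('age, expected', [
    (17, False),  # just under the boundary
    (18, True),   # exactly at the boundary
    (19, True),   # just over
])
def test_can_vote_boundaries(age, expected):
    assert User('Test', age).can_vote() == expected
```

Parametrized tests make it cheap to add edge cases, so boundary conditions like "exactly 18" don't get skipped.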

Mocking External Dependencies in Python Testing

Real applications rarely exist in isolation—they call APIs, read files, or query databases. During testing, we don't want to depend on these external services. Mocking to the rescue!

Mocking allows us to tell the test environment how to handle calls to external libraries. Usually, we'll skip the external call entirely and instead provide a hard coded return value back. Here's what that might look like for a network request:

from unittest.mock import patch
import requests
from typing import Dict, Any


def get_user_data(user_id: int) -> Dict[str, Any]:
    response = requests.get(f'https://api.example.com/users/{user_id}')
    return response.json()


class MockResponse:
    def __init__(self, data: Dict[str, Any]) -> None:
        self._data = data

    def json(self) -> Dict[str, Any]:
        return self._data


def test_get_user_data():
    # Create a mock response object
    mock_response = MockResponse({'id': 1, 'name': 'Alice'})

    # Replace requests.get with our mock
    with patch('requests.get') as mock_get:
        mock_get.return_value = mock_response

        # Now we can test without hitting the real API
        data = get_user_data(1)
        assert data['name'] == 'Alice'
        mock_get.assert_called_once_with('https://api.example.com/users/1')

It's important to use mocking in a balanced way. Mock too much and your test suite might skip large portions of code. Mock too little and your test environment quickly grows complex. Usually, API calls to external libraries are a sweet spot for defining and using mocks in unit tests.

In interviews, demonstrating knowledge of testing shows you think about code quality and reliability. You might not have time to write full tests during the interview, but mentioning how you'd test your solution can set you apart from other candidates. Definitely highlight the edge cases you'd test against, which external components you'd mock, and how unit tests would fit into your larger testing strategy to ensure thorough code coverage.

Topic 13: Profiling Python Code to Optimize Performance

Understanding how your code performs is crucial for production applications. Python's ecosystem includes several built-in and third-party tools for identifying performance bottlenecks and optimizing code execution.

Python's Built-In Profiler: cProfile

Python's standard library comes with a powerful profiling tool: cProfile. Give it a function or script to run and it'll break down how much time was spent in various subroutines, quickly identifying hot paths to focus on optimizing.

For instance, say we had this code, which counts the number of unique words in a file:

import sys


def unique_words(filepath: str) -> int:
    unique = []
    with open(filepath) as fd:
        for line in fd:
            for word in line.strip().split():
                if unique.count(word) == 0:
                    unique.append(word)
    return len(unique)


unique_words(sys.argv[1])

Let's run this with cProfile:

$ python3 -m cProfile unique_words.py /usr/share/dict/words
         944524 function calls in 203.324 seconds

   Ordered by: cumulative time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000  203.324  203.324 {built-in method builtins.exec}
        1    0.002    0.002  203.324  203.324 unique_words.py:1(<module>)
        1    0.336    0.336  203.322  203.322 unique_words.py:3(unique_words)
   235976  202.924    0.001  202.924    0.001 {method 'count' of 'list' objects}
   235976    0.023    0.000    0.023    0.000 {method 'split' of 'str' objects}
   # snip

Oof. It takes a long time! We quickly see that we're spending a lot of time in count. That makes sense—list membership checking is O(n). A faster option is to use a set, which has O(1) membership checking.

import sys


def unique_words(filepath: str) -> int:
    unique = set()
    with open(filepath) as fd:
        for line in fd:
            for word in line.strip().split():
                if word not in unique:
                    unique.add(word)
    return len(unique)


unique_words(sys.argv[1])

Other Python Profiling Tools

In addition to cProfile, Python's ecosystem also includes:

  • line_profiler: Similar to cProfile, but profiles each source code line instead of each function call. Records how many times each line ran, the time per single run, and the total time.

    Here's the same slow example as above profiled with line_profiler. Again, the slow call to count jumps out as a performance bottleneck (99.8% of the runtime).

    $ kernprof -l -v unique_words.py /usr/share/dict/words
    Wrote profile results to unique_words.py.lprof
    Timer unit: 1e-06 s

    Total time: 203.646 s
    File: unique_words.py
    Function: unique_words at line 4

    Line #      Hits         Time  Per Hit   % Time  Line Contents
    ==============================================================
         4                                           @profile
         5                                           def unique_words(filepath):
         6         1          0.0      0.0      0.0      unique = []
         7         2       1273.0    636.5      0.0      with open(filepath) as fd:
         8    235977      85806.0      0.4      0.0          for line in fd:
         9    471952     186702.0      0.4      0.1              for word in line.strip().split():
        10    235976  203255415.0    861.3     99.8                  if unique.count(word) == 0:
        11    235976     116381.0      0.5      0.1                      unique.append(word)
        12         1          0.0      0.0      0.0      return unique
  • timeit: This package runs snippets of code repeatedly to determine how long they take. A quick demonstration shows how the built-in pow function compares to manually calculating our own exponentiation.

    # File: timing_test.py
    import timeit


    def slow_power(base: float, exp: int) -> float:
        result = 1
        for i in range(exp):
            result = result * base
        return result


    def fast_power(base: float, exp: int) -> float:
        return pow(base, exp)


    print(timeit.timeit(lambda: slow_power(5, 5000), number=1000))
    print(timeit.timeit(lambda: fast_power(5, 5000), number=1000))

    $ python3 timing_test.py
    1.7985437081661075
    0.026958625065162778

    A ~66x speed difference—wow!

  • memory_profiler: This package tracks memory usage line-by-line as the program runs. It's helpful for finding lines that are using a lot of memory and perhaps could be optimized for space. Here's a small example showing the space overhead difference between making a generator vs. storing an entire list:

    from memory_profiler import profile


    @profile
    def list_vs_range():
        a = range(1000000)
        b = list(range(1000000))


    list_vs_range()

    $ python3 memory_test.py
    Filename: memory_test.py

    Line #    Mem usage    Increment  Occurrences   Line Contents
    =============================================================
         3     21.1 MiB     21.1 MiB           1   @profile
         4                                         def list_vs_range():
         5     21.1 MiB      0.0 MiB           1       a = range(1000000)
         6     59.3 MiB     38.2 MiB           1       b = list(range(1000000))

    One additional tidbit: some programmers like to use list comprehensions to shorten their code by a few lines. But there's a memory cost! When you use a list comprehension this way, the entire list gets assembled in memory only to be immediately thrown away. That's a bit silly, so it's best to avoid it.

    # Log all the users that are getting a notification.

    # Bad - makes a large list for no reason!
    [
        logging.info(f'Sending a notification to {user.email}')
        for user in all_users
        if user.notifications_on
    ]

    # Better - no unnecessary memory overhead
    for user in all_users:
        if user.notifications_on:
            logging.info(f'Sending a notification to {user.email}')

    If you really do want a one-liner, a generator comprehension avoids the memory overhead of building a throwaway list. One caveat: generators are lazy, so the logging calls won't actually run until the generator is consumed. But really, a simple loop works perfectly!

    # Less memory overhead; no unnecessary list
    notifications = (
        logging.info(f'Sending a notification to {user.email}')
        for user in all_users
        if user.notifications_on
    )
    # Nothing is logged until the generator is consumed
    for _ in notifications:
        pass

Profiling and Optimization in Coding Interviews

When discussing performance in interviews, the goal is to write efficient code from the start:

  • Choose Data Structures Wisely: When writing code, reason about the operations you'll be doing and pick data structures accordingly. In our example above, using a set instead of a list for uniqueness checking shows you understand both the problem requirements and data structure operations.
  • Consider Complexity First: Before diving into profiling, analyze your algorithm's Big-O complexity. It's better to revise an O(n²) algorithm to be O(n log n) than to micro-optimize the O(n²) version.
  • Know Your Tools: Being able to discuss profiling tools shows you're familiar with professional development practices. Even if you can't remember exact syntax, understanding what tools exist and when to use them is valuable.
  • Follow the 80/20 Rule: Emphasize that in real-world scenarios, 80% of the runtime often comes from 20% of the code. Profiling helps identify that crucial 20% worth targeting for optimization.
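cProfile can also be driven from code rather than the command line, which is handy for zeroing in on that crucial 20% inside a larger program. Here's a minimal sketch (the busy_work function is made up for the demo):

```python
import cProfile
import pstats
import io


def busy_work() -> int:
    # Hypothetical hot path: repeated membership checks in a list
    seen = []
    for i in range(2000):
        if i % 7 not in seen:
            seen.append(i % 7)
    return len(seen)


profiler = cProfile.Profile()
profiler.enable()
result = busy_work()
profiler.disable()

# Dump the stats, sorted by cumulative time, into a string
buffer = io.StringIO()
stats = pstats.Stats(profiler, stream=buffer)
stats.sort_stats('cumulative').print_stats(5)  # show the top 5 entries

print(result)  # 7
print(buffer.getvalue().splitlines()[0])  # the profile's summary line
```

Wrapping just the suspect function keeps the report focused, instead of profiling an entire script run.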

Conclusion: Coding as a Professional

Throughout this guide, we've explored the gap between academic Python knowledge and professional development practices. From virtual environments that ensure reproducible code, to generators that handle large-scale data processing, to asyncio that manages concurrent operations efficiently, to systematic debugging approaches—these aren't just technical skills, they're the building blocks of code that's maintainable in a team environment and scalable to run production workloads.

What separates a junior developer from a senior one isn't just knowing more syntax or algorithms—it's understanding how to write maintainable, scalable code that others can work with. It's about making intentional trade-offs between simplicity and performance, between quick fixes and robust solutions. When interviewing, demonstrate this maturity by:

  • Discussing trade-offs explicitly: "We could use a list comprehension here, but a generator would be more memory efficient for large datasets."
  • Considering maintenance: "Let's add type hints and docstrings to make this more maintainable."
  • Thinking about scale: "This works for small files, but for production logs, we'd want to process them incrementally."
  • Planning for reliability: "We should wrap this I/O operation in a try/except and add proper exception handling."

Remember: Companies aren't just hiring someone to write code—they're hiring a professional developer who can solve real-world problems reliably and at scale. By mastering these concepts and understanding when to apply them, you'll demonstrate that you're ready for that responsibility.

Good luck!

Looking for more coding interview advice?

Check out our free 7-day email crash course on coding interview strategies. We talk about the key algorithmic patterns you can use to easily break down tricky technical interview questions.

