

Image by Author | Ideogram
Let's be honest. When you're learning Python, you're probably not thinking about performance. You're just trying to get your code to work! But here's the thing: making your Python code faster doesn't require you to become an expert programmer overnight.
With a few simple techniques that I'll show you today, you can improve your code's speed and memory usage significantly.
In this article, we'll walk through five practical, beginner-friendly optimization techniques together. For each one, I'll show you the "before" code (the way many beginners write it), the "after" code (the optimized version), and explain exactly why the improvement works and how much faster it gets.
🔗 Link to the code on GitHub
1. Replace Loops with List Comprehensions
Let's start with something you probably do all the time: creating new lists by transforming existing ones. Most beginners reach for a for loop, but Python has a much faster way to do this.
Before Optimization
Here's how most beginners would square a list of numbers:
import time

def square_numbers_loop(numbers):
    result = []
    for num in numbers:
        result.append(num ** 2)
    return result

# Let's test this with 1,000,000 numbers to see the performance
test_numbers = list(range(1000000))

start_time = time.time()
squared_loop = square_numbers_loop(test_numbers)
loop_time = time.time() - start_time
print(f"Loop time: {loop_time:.4f} seconds")
This code creates an empty list called result, then loops through each number in our input list, squares it, and appends it to the result list. Pretty straightforward, right?
After Optimization
Now let's rewrite this using a list comprehension:
def square_numbers_comprehension(numbers):
    return [num ** 2 for num in numbers]  # Create the entire list in one line

start_time = time.time()
squared_comprehension = square_numbers_comprehension(test_numbers)
comprehension_time = time.time() - start_time
print(f"Comprehension time: {comprehension_time:.4f} seconds")
print(f"Improvement: {loop_time / comprehension_time:.2f}x faster")
This single line, [num ** 2 for num in numbers], does exactly the same thing as our loop; it tells Python to "create a list where each element is the square of the corresponding element in numbers."
Output:
Loop time: 0.0840 seconds
Comprehension time: 0.0736 seconds
Improvement: 1.14x faster
Performance improvement: List comprehensions are often 30-50% faster than equivalent loops. The improvement is more noticeable when you work with very large iterables.
Why does this work? List comprehensions are implemented in C under the hood, so they avoid much of the overhead that comes with Python loops, things like variable lookups and function calls that happen behind the scenes.
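Comprehensions can also filter as they go, so loop-plus-if patterns collapse the same way. Here's a minimal sketch (the even_squares helper is just for illustration, not part of the benchmark above):

def even_squares(numbers):
    # Keep only the even numbers, then square them - one pass, no manual appends
    return [num ** 2 for num in numbers if num % 2 == 0]

print(even_squares(range(10)))  # [0, 4, 16, 36, 64]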
2. Choose the Right Data Structure for the Job
This one's huge, and it's something that can make your code hundreds of times faster with just a small change. The key is knowing when to use lists versus sets versus dictionaries.
Before Optimization
Let's say you want to find the common elements between two lists. Here's the intuitive approach:
def find_common_elements_list(list1, list2):
    common = []
    for item in list1:  # Go through each item in the first list
        if item in list2:  # Check if it exists in the second list
            common.append(item)  # If yes, add it to our common list
    return common

# Test with reasonably large lists
large_list1 = list(range(10000))
large_list2 = list(range(5000, 15000))

start_time = time.time()
common_list = find_common_elements_list(large_list1, large_list2)
list_time = time.time() - start_time
print(f"List approach time: {list_time:.4f} seconds")
This code loops through the first list, and for each item, it checks whether that item exists in the second list using if item in list2. The problem? When you do item in list2, Python has to scan through the entire second list until it finds the item. That's slow!
After Optimization
Here's the same logic, but using a set for faster lookups:
def find_common_elements_set(list1, list2):
    set2 = set(list2)  # Convert the list to a set (one-time cost)
    return [item for item in list1 if item in set2]  # Check membership in the set

start_time = time.time()
common_set = find_common_elements_set(large_list1, large_list2)
set_time = time.time() - start_time
print(f"Set approach time: {set_time:.4f} seconds")
print(f"Improvement: {list_time / set_time:.2f}x faster")
First, we convert the second list to a set. Then, instead of checking if item in list2, we check if item in set2. This tiny change makes membership testing nearly instantaneous.
Output:
List approach time: 0.8478 seconds
Set approach time: 0.0010 seconds
Improvement: 863.53x faster
Performance improvement: This can be hundreds of times faster for large datasets.
Why does this work? Sets use hash tables under the hood. When you check whether an item is in a set, Python doesn't search through every element; it uses the hash to jump directly to where the item should be. It's like having a book's index instead of reading every page to find what you want.
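If you don't need to preserve list1's order (or its duplicates), a plain set intersection expresses the same idea even more compactly; a quick sketch using the lists from above:

# Set intersection: same common values, but order and duplicates are not preserved
common_values = set(large_list1) & set(large_list2)
print(len(common_values))  # 5000 values in common (5000 through 9999)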
3. Use Python's Built-in Functions Whenever Possible
Python comes with tons of built-in functions that are heavily optimized. Before you write your own loop or custom function to do something, check whether Python already has a function for it.
Before Optimization
Here's how you might calculate the sum and maximum of a list if you didn't know about the built-ins:
def calculate_sum_manual(numbers):
    total = 0
    for num in numbers:
        total += num
    return total

def find_max_manual(numbers):
    max_val = numbers[0]
    for num in numbers[1:]:
        if num > max_val:
            max_val = num
    return max_val

test_numbers = list(range(1000000))

start_time = time.time()
manual_sum = calculate_sum_manual(test_numbers)
manual_max = find_max_manual(test_numbers)
manual_time = time.time() - start_time
print(f"Manual approach time: {manual_time:.4f} seconds")
The sum function starts with a total of 0, then adds each number to that total. The max function starts by assuming the first number is the maximum, then compares every other number to see if it's larger.
After Optimization
Here's the same thing using Python's built-in functions:
start_time = time.time()
builtin_sum = sum(test_numbers)
builtin_max = max(test_numbers)
builtin_time = time.time() - start_time
print(f"Built-in approach time: {builtin_time:.4f} seconds")
print(f"Improvement: {manual_time / builtin_time:.2f}x faster")
That's it! sum() adds up all the numbers in the list, and max() returns the largest one. Same result, much faster.
Output:
Manual approach time: 0.0805 seconds
Built-in approach time: 0.0413 seconds
Improvement: 1.95x faster
Performance improvement: Built-in functions are typically faster than equivalent manual implementations.
Why does this work? Python's built-in functions are written in C and heavily optimized, so the loop over your data runs at C speed rather than as interpreted Python bytecode.
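The same principle applies to many other built-ins; here's a quick sketch of a few that often replace hand-written loops (the values list is just an example):

values = [3, -7, 0, 12, 5]

print(min(values))                   # Smallest value: -7
print(sorted(values))                # New sorted list: [-7, 0, 3, 5, 12]
print(any(v < 0 for v in values))    # True: at least one value is negative
print(all(v < 100 for v in values))  # True: every value is below 100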
4. Perform Efficient String Operations with Join
String concatenation is something every programmer does, but most beginners do it in a way that gets dramatically slower as the strings grow longer.
Before Optimization
Here's how you might build a CSV string by concatenating with the + operator:
def create_csv_plus(data):
    result = ""  # Start with an empty string
    for row in data:  # Go through each row of data
        for i, item in enumerate(row):  # Go through each item in the row
            result += str(item)  # Add the item to our result string
            if i < len(row) - 1:  # If it's not the last item
                result += ","  # Add a comma
        result += "\n"  # Add a newline after each row
    return result

# Test data: 1000 rows with 10 columns each
test_data = [[f"item_{i}_{j}" for j in range(10)] for i in range(1000)]

start_time = time.time()
csv_plus = create_csv_plus(test_data)
plus_time = time.time() - start_time
print(f"String concatenation time: {plus_time:.4f} seconds")
This code builds our CSV string piece by piece. For each row, it goes through each item, converts it to a string, and adds it to our result. It adds commas between items and newlines between rows.
After Optimization
Here's the same code using the join method:
def create_csv_join(data):
    # For each row, join the items with commas, then join all rows with newlines
    return "\n".join(",".join(str(item) for item in row) for row in data)

start_time = time.time()
csv_join = create_csv_join(test_data)
join_time = time.time() - start_time
print(f"Join method time: {join_time:.4f} seconds")
print(f"Improvement: {plus_time / join_time:.2f}x faster")
This single line does a lot! The inner part, ",".join(str(item) for item in row), takes each row and joins all its items with commas. The outer part, "\n".join(...), takes all those comma-separated rows and joins them with newlines.
Output:
String concatenation time: 0.0043 seconds
Join method time: 0.0022 seconds
Improvement: 1.94x faster
Performance improvement: String joining is much faster than repeated concatenation for large strings.
Why does this work? When you use += to concatenate strings, Python creates a brand-new string object each time because strings are immutable. With large strings, this becomes incredibly wasteful. The join method figures out exactly how much memory it needs up front and builds the string once.
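Worth noting: for real CSV output you'd usually reach for the standard library's csv module, which handles quoting and escaping for you. A minimal sketch reusing the same test_data (the create_csv_module name is just for illustration):

import csv
import io

def create_csv_module(data):
    # Write rows into an in-memory buffer; csv handles separators and quoting
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerows(data)
    return buffer.getvalue()

print(create_csv_module(test_data)[:40])  # Peek at the start of the output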
5. Use Generators for Memory-Efficient Processing
Sometimes you don't need to store all your data in memory at once. Generators let you create data on demand, which can save huge amounts of memory.
Before Optimization
Here's how you might process a large dataset by storing everything in a list:
import sys

def process_large_dataset_list(n):
    processed_data = []
    for i in range(n):
        # Simulate some data processing
        processed_value = i ** 2 + i * 3 + 42
        processed_data.append(processed_value)  # Store each processed value
    return processed_data

# Test with 100,000 items
n = 100000
list_result = process_large_dataset_list(n)
list_memory = sys.getsizeof(list_result)
print(f"List memory usage: {list_memory:,} bytes")
This function processes numbers from 0 to n-1, applies a calculation to each one (squaring it, multiplying by 3, and adding 42), and stores all the results in a list. The problem is that we're keeping all 100,000 processed values in memory at once.
After Optimization
Here's the same processing using a generator:
def process_large_dataset_generator(n):
    for i in range(n):
        # Simulate some data processing
        processed_value = i ** 2 + i * 3 + 42
        yield processed_value  # Yield each value instead of storing it

# Create the generator (this doesn't process anything yet!)
gen_result = process_large_dataset_generator(n)
gen_memory = sys.getsizeof(gen_result)
print(f"Generator memory usage: {gen_memory:,} bytes")
print(f"Memory improvement: {list_memory / gen_memory:.0f}x less memory")

# Now we can process items one at a time
total = 0
for value in process_large_dataset_generator(n):
    total += value
    # Each value is processed on demand and can be garbage collected
The key difference is yield instead of append. The yield keyword makes this a generator function: it produces values one at a time instead of creating them all at once.
Output:
List memory usage: 800,984 bytes
Generator memory usage: 224 bytes
Memory improvement: 3576x less memory
Performance improvement: Generators can use orders of magnitude less memory for large datasets.
Why does this work? Generators use lazy evaluation: they only compute values when you ask for them. The generator object itself is tiny; it just remembers where it is in the computation.
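Generator expressions give you the same laziness inline, which pairs nicely with aggregating built-ins like sum(); a minimal sketch that never builds a list:

# sum() pulls values from the generator expression one at a time; nothing is materialized
total = sum(i ** 2 + i * 3 + 42 for i in range(100000))
print(f"Total: {total:,}")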
Conclusion
Optimizing Python code doesn't have to be intimidating. As we've seen, small changes in how you approach common programming tasks can yield dramatic improvements in both speed and memory usage. The key is developing an intuition for picking the right tool for each job.
Remember these core principles: use built-in functions when they exist, choose appropriate data structures for your use case, avoid unnecessary repeated work, and be mindful of how Python handles memory. List comprehensions, sets for membership testing, string joining, and generators for large datasets are all tools that belong in every beginner Python programmer's toolkit. Keep learning, keep coding!
Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she's working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.