Find two indices in a sorted array whose numbers sum to a given target. Sample Input: numbers = [1, 2, 4, 6, 10] target = 8. Sample Output: (1, 3).
The 'Pair Sum in Sorted Array' problem can be solved efficiently due to the sorted nature of the input. Here are two common approaches:
Solution 1: Two Pointers
This is the most efficient approach for a sorted array. Start with pointers at both ends of the array. Compare their sum to the target: if it matches, you've found the pair. If the sum is too small, increment the left pointer to increase the sum. If the sum is too large, decrement the right pointer to decrease the sum. Because the input is sorted, every pointer move discards impossible pairs and finds the answer in linear time.
from typing import List, Tuple
def pair_sum_sorted(numbers: List[int], target: int) -> Tuple[int, int]:
left, right = 0, len(numbers) - 1
while left < right:
current = numbers[left] + numbers[right]
if current == target:
return left, right
if current < target:
left += 1
else:
right -= 1
return -1, -1
if __name__ == "__main__":
values = [1, 2, 4, 6, 10]
goal = 8
print(pair_sum_sorted(values, goal))
Solution 2: Hash Map
Although the array is sorted, a hash map also solves this problem. Iterate through the array; for each number, compute the complement needed to reach the target and check whether that complement is already in the hash map. If it is, return the two indices; otherwise, store the current number and its index. This approach works for unsorted arrays as well, but for sorted arrays the two-pointer approach is generally preferred for its O(1) space complexity.
from typing import List, Tuple
def pair_sum_hash_map(numbers: List[int], target: int) -> Tuple[int, int]:
seen = {}
for index, number in enumerate(numbers):
complement = target - number
if complement in seen:
return seen[complement], index
seen[number] = index
return -1, -1
if __name__ == "__main__":
values = [1, 2, 4, 6, 10]
goal = 8
print(pair_sum_hash_map(values, goal))
Return 1-indexed positions of two numbers in a sorted array that add up to the target. Sample Input: numbers = [2, 3, 4, 7, 11] target = 15. Sample Output: [3, 5].
The 'Two Sum II (1-indexed)' problem leverages the sorted nature of the input array. Here are two common approaches:
Solution 1: Two Pointers
Two pointers traverse the sorted array from the outer edges while comparing the sum of their values to the target. Moving the left pointer right or the right pointer left is justified because sorting tells us how the sum changes. The returned indices are 1-based, as the problem statement requires.
from typing import List
def two_sum_ii(numbers: List[int], target: int) -> List[int]:
left, right = 0, len(numbers) - 1
while left < right:
current = numbers[left] + numbers[right]
if current == target:
return [left + 1, right + 1]
if current < target:
left += 1
else:
right -= 1
return [-1, -1]
if __name__ == "__main__":
nums = [2, 3, 4, 7, 11]
print(two_sum_ii(nums, 15))
Solution 2: Hash Map
Similar to the unsorted Two Sum problem, a hash map can be used. Iterate through the array, storing each number and its 1-indexed position. For each number, check if its complement (target - current number) exists in the hash map. If it does, return the stored index and the current index. This approach has a time complexity of O(N) and space complexity of O(N).
from typing import List
def two_sum_ii_hash_map(numbers: List[int], target: int) -> List[int]:
seen = {}
for index, number in enumerate(numbers):
complement = target - number
if complement in seen:
return [seen[complement] + 1, index + 1] # 1-indexed
seen[number] = index
return [-1, -1]
if __name__ == "__main__":
nums = [2, 3, 4, 7, 11]
print(two_sum_ii_hash_map(nums, 15))
List all unique triplets in an array that sum to zero. Sample Input: nums = [-1, 0, 1, 2, -1, -4]. Sample Output: [[-1, -1, 2], [-1, 0, 1]].
The 'Triplet Sum (3Sum)' problem requires finding all unique triplets that sum to zero. Due to the uniqueness constraint and the need for efficiency, sorting combined with two pointers is the standard approach. A hash set approach is also possible but generally less efficient for 3Sum.
Solution 1: Two Pointers with Sorting
The most common and efficient approach involves sorting the array first. Then, iterate through the sorted array, fixing one element (`value`). For the remaining part of the array, use a two-pointer approach (left and right) to find two elements that sum to `-value`. Careful handling of duplicates is crucial to ensure unique triplets in the result. Skipping duplicates at both the anchor and pointer levels avoids repeated triplets while keeping the complexity O(n^2).
from typing import List
def three_sum(nums: List[int]) -> List[List[int]]:
nums.sort()
result: List[List[int]] = []
for index, value in enumerate(nums):
if index > 0 and value == nums[index - 1]:
continue
left, right = index + 1, len(nums) - 1
while left < right:
total = value + nums[left] + nums[right]
if total == 0:
result.append([value, nums[left], nums[right]])
left += 1
right -= 1
while left < right and nums[left] == nums[left - 1]:
left += 1
while left < right and nums[right] == nums[right + 1]:
right -= 1
elif total < 0:
left += 1
else:
right -= 1
return result
if __name__ == "__main__":
print(three_sum([-1, 0, 1, 2, -1, -4]))
Solution 2: Hash Set
This approach also involves sorting the array first. Then, for each fixed element, instead of using two pointers, we use a hash set to find the remaining two elements that sum to `-value`. This can be slightly less efficient than the two-pointer approach for 3Sum due to hash set overhead, but it's a common pattern for k-Sum problems.
from typing import List
def three_sum_hash_set(nums: List[int]) -> List[List[int]]:
    nums.sort()
    result: List[List[int]] = []
    for i, num1 in enumerate(nums):
        if i > 0 and num1 == nums[i - 1]:
            continue
        seen = set()
        j = i + 1
        while j < len(nums):
            num2 = nums[j]
            complement = -num1 - num2
            if complement in seen:
                result.append([num1, complement, num2])
                # Skip duplicate values of num2 so each unique triplet is added once.
                while j + 1 < len(nums) and nums[j] == nums[j + 1]:
                    j += 1
            seen.add(num2)
            j += 1
    return result
if __name__ == "__main__":
print(three_sum_hash_set([-1, 0, 1, 2, -1, -4]))
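As a quick sanity check (an illustrative input, not one of the original samples), an all-equal array exercises the duplicate handling and should yield exactly one triplet:
if __name__ == "__main__":
    # Duplicate-heavy input: the skip logic should report [0, 0, 0] only once.
    print(three_sum_hash_set([0, 0, 0, 0]))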
Return the sum of three numbers that is closest to the target value. Sample Input: nums = [-1, 2, 1, -4], target = 1. Sample Output: 2.
The '3Sum Closest' problem asks us to find three integers in an array whose sum is closest to a given target. Here are two approaches:
Solution 1: Two Pointers with Sorting
This efficient approach starts by sorting the array. Then, for each element, we use a two-pointer technique on the remaining part of the array to find two other elements that, when combined with the fixed element, yield a sum closest to the target. We continuously track the minimum absolute difference and update the `closest` sum. Adjusting the pointer on the side that moves the total toward the target steadily improves the answer.
from typing import List
def three_sum_closest(nums: List[int], target: int) -> int:
nums.sort()
closest = nums[0] + nums[1] + nums[2] # Initialize with sum of first three elements
for index in range(len(nums) - 2):
left, right = index + 1, len(nums) - 1
while left < right:
total = nums[index] + nums[left] + nums[right]
if abs(target - total) < abs(target - closest):
closest = total
if total < target:
left += 1
elif total > target:
right -= 1
else:
return target # Found exact match
return closest
if __name__ == "__main__":
print(three_sum_closest([-1, 2, 1, -4], 1))
Solution 2: Brute Force
The brute force approach involves checking every possible combination of three numbers in the array. This is done using three nested loops. For each triplet, calculate its sum and compare it to the target, updating the `closest` sum if a better match is found. This approach has a time complexity of O(N^3) and is generally too slow for larger inputs, but it's simple to understand.
from typing import List
def three_sum_closest_brute_force(nums: List[int], target: int) -> int:
n = len(nums)
if n < 3:
raise ValueError("Array must contain at least 3 elements")
closest_sum = nums[0] + nums[1] + nums[2]
for i in range(n):
for j in range(i + 1, n):
for k in range(j + 1, n):
current_sum = nums[i] + nums[j] + nums[k]
if abs(target - current_sum) < abs(target - closest_sum):
closest_sum = current_sum
return closest_sum
if __name__ == "__main__":
print(three_sum_closest_brute_force([-1, 2, 1, -4], 1))
Check whether a cleaned string reads the same forward and backward. Sample Input: text = "A man, a plan, a canal: Panama". Sample Output: True.
The 'Valid Palindrome' problem involves checking if a string is a palindrome after filtering out non-alphanumeric characters and converting to lowercase. Here are two common approaches:
Solution 1: Two Pointers (In-place)
This efficient approach uses two pointers, one starting from the beginning and one from the end of the string. The pointers move inward, skipping any non-alphanumeric characters. At each step, the characters at the pointers are compared (case-insensitively). If they don't match, the string is not a palindrome. This method checks the palindrome property in one pass without extra storage.
def is_valid_palindrome(text: str) -> bool:
left, right = 0, len(text) - 1
while left < right:
while left < right and not text[left].isalnum():
left += 1
while left < right and not text[right].isalnum():
right -= 1
if text[left].lower() != text[right].lower():
return False
left += 1
right -= 1
return True
if __name__ == "__main__":
sample = "A man, a plan, a canal: Panama"
print(is_valid_palindrome(sample))
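Two illustrative edge cases (inputs chosen here for demonstration, reusing is_valid_palindrome from above) show the skipping behaviour: a string with no alphanumeric characters is vacuously a palindrome, while a genuine mismatch is rejected.
if __name__ == "__main__":
    # Only punctuation: both pointers skip everything, so the check passes.
    print(is_valid_palindrome(".,"))          # True
    # Cleaned form "raceacar" is not a palindrome.
    print(is_valid_palindrome("race a car"))  # False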
Solution 2: String Cleaning and Reversal
This approach first preprocesses the string by filtering out all non-alphanumeric characters and converting the remaining characters to lowercase. Then, it simply compares the cleaned string with its reversed version. If they are identical, the original string is a valid palindrome. This method is often more readable but uses O(N) extra space for the cleaned string.
def is_valid_palindrome_clean_reverse(text: str) -> bool:
cleaned_text = "".join(char.lower() for char in text if char.isalnum())
return cleaned_text == cleaned_text[::-1]
if __name__ == "__main__":
sample = "A man, a plan, a canal: Panama"
print(is_valid_palindrome_clean_reverse(sample))
Compute the maximum area of water a container can hold between vertical lines. Sample Input: heights = [1, 8, 6, 2, 5, 4, 8, 3, 7]. Sample Output: 49.
The 'Largest Container With Water' problem asks us to find two lines that, along with the x-axis, form a container that holds the most water. Here are two approaches:
Solution 1: Two Pointers
This optimal approach uses two pointers, one at the beginning and one at the end of the array. The area is calculated based on the height of the shorter line and the distance between them. To maximize the area, we always move the pointer of the shorter line inward. This is because moving the taller line inward will never increase the height (it's limited by the shorter line) but will decrease the width. Moving the shorter line inward might find a taller line, potentially increasing the area. This process continues until the pointers meet, achieving O(N) time complexity.
from typing import List
def max_water_container(heights: List[int]) -> int:
left, right = 0, len(heights) - 1
best = 0
while left < right:
height = min(heights[left], heights[right])
width = right - left
best = max(best, height * width)
if heights[left] < heights[right]:
left += 1
else:
right -= 1
return best
if __name__ == "__main__":
print(max_water_container([1, 8, 6, 2, 5, 4, 8, 3, 7]))
Solution 2: Brute Force
The brute force approach involves checking every possible pair of lines. For each pair, calculate the area they can contain (minimum of their heights multiplied by the distance between them) and keep track of the maximum area found. This involves nested loops, resulting in a time complexity of O(N^2), which is less efficient than the two-pointer approach but conceptually simple.
from typing import List
def max_water_container_brute_force(heights: List[int]) -> int:
max_area = 0
n = len(heights)
for i in range(n):
for j in range(i + 1, n):
height = min(heights[i], heights[j])
width = j - i
max_area = max(max_area, height * width)
return max_area
if __name__ == "__main__":
print(max_water_container_brute_force([1, 8, 6, 2, 5, 4, 8, 3, 7]))
Move all zeros in an array to the end while keeping the order of non-zero elements. Sample Input: values = [0, 1, 0, 3, 12]. Sample Output: [1, 3, 12, 0, 0].
The 'Shift Zeros to the End' problem requires moving all zeros to the end of an array while maintaining the relative order of non-zero elements. Here are two common in-place approaches:
Solution 1: Two Pointers (In-place Swap)
This approach uses two pointers: a `write_index` that tracks where the next non-zero element should be placed, and a `read_index` that iterates through the array. When a non-zero element is encountered by `read_index`, it is swapped with the element at `write_index`, and `write_index` is incremented. This effectively moves non-zero elements to the front while preserving their relative order. The remaining elements at the end will naturally be zeros.
from typing import List
def shift_zeros_to_end(values: List[int]) -> List[int]:
write_index = 0
for read_index, number in enumerate(values):
if number != 0:
values[write_index], values[read_index] = values[read_index], values[write_index]
write_index += 1
return values
if __name__ == "__main__":
print(shift_zeros_to_end([0, 1, 0, 3, 12]))
Solution 2: Stable Overwrite
Write the non-zero numbers in-place from left to right without swapping. A second pass fills the suffix with zeros. This avoids redundant swaps when the array already contains long runs of non-zero values.
from typing import List
def shift_zeros_stable(values: List[int]) -> List[int]:
write = 0
for number in values:
if number != 0:
values[write] = number
write += 1
for index in range(write, len(values)):
values[index] = 0
return values
if __name__ == "__main__":
print(shift_zeros_stable([0, 1, 0, 3, 12]))
Transform an array of numbers into its next lexicographical permutation in-place. Sample Input: sequence = [1, 2, 3]. Sample Output: [1, 3, 2].
Find the first descending pair from the right, swap the pivot with the smallest greater element in the suffix, and reverse the suffix. This minimal adjustment produces the next permutation in lexicographic order.
Solution 1: Primary Approach
from typing import List
def next_lexicographical_sequence(sequence: List[int]) -> List[int]:
pivot = len(sequence) - 2
while pivot >= 0 and sequence[pivot] >= sequence[pivot + 1]:
pivot -= 1
if pivot >= 0:
successor = len(sequence) - 1
while sequence[successor] <= sequence[pivot]:
successor -= 1
sequence[pivot], sequence[successor] = sequence[successor], sequence[pivot]
left, right = pivot + 1, len(sequence) - 1
while left < right:
sequence[left], sequence[right] = sequence[right], sequence[left]
left += 1
right -= 1
return sequence
if __name__ == "__main__":
print(next_lexicographical_sequence([1, 2, 3]))
Solution 2: Next Permutation via Reverse Search
Perform the textbook next permutation in a slightly different style: scan the suffix to locate the pivot and swap candidate using explicit reversal helpers. The logic mirrors the in-place STL implementation.
from typing import List
def next_permutation_reverse(sequence: List[int]) -> List[int]:
def reverse_range(start: int, end: int) -> None:
end -= 1
while start < end:
sequence[start], sequence[end] = sequence[end], sequence[start]
start += 1
end -= 1
pivot = len(sequence) - 2
while pivot >= 0 and sequence[pivot] >= sequence[pivot + 1]:
pivot -= 1
if pivot == -1:
sequence.reverse()
return sequence
successor = len(sequence) - 1
while sequence[successor] <= sequence[pivot]:
successor -= 1
sequence[pivot], sequence[successor] = sequence[successor], sequence[pivot]
reverse_range(pivot + 1, len(sequence))
return sequence
if __name__ == "__main__":
print(next_permutation_reverse([1, 2, 3]))
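The wrap-around case deserves a quick check (an illustrative input, reusing next_permutation_reverse from above): a fully descending sequence has no pivot, so the whole array is reversed to produce the smallest permutation.
if __name__ == "__main__":
    # No pivot exists, so the sequence wraps around to ascending order.
    print(next_permutation_reverse([3, 2, 1]))  # [1, 2, 3]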
Calculate how much rainwater can be trapped between bars of varying heights. Sample Input: heights = [0, 1, 0, 2, 1, 0, 1, 3, 2, 1, 2, 1]. Sample Output: 6.
Sweep pointers from both ends while maintaining the best wall height seen from each side. Whichever side is lower limits the water, so adding its deficit and advancing that pointer accumulates the trapped volume.
Solution 1: Primary Approach
from typing import List
def trapping_rain_water(heights: List[int]) -> int:
if not heights:
return 0
left, right = 0, len(heights) - 1
left_max = right_max = 0
trapped = 0
while left <= right:
if heights[left] <= heights[right]:
if heights[left] >= left_max:
left_max = heights[left]
else:
trapped += left_max - heights[left]
left += 1
else:
if heights[right] >= right_max:
right_max = heights[right]
else:
trapped += right_max - heights[right]
right -= 1
return trapped
if __name__ == "__main__":
print(trapping_rain_water([0, 1, 0, 2, 1, 0, 1, 3, 2, 1, 2, 1]))
Solution 2: Prefix/Suffix Max Arrays
Pre-compute the tallest bar to the left and right for each column. Water trapped above the column equals the minimum of those supports minus the column height. This trades extra memory for straightforward loops.
from typing import List
def trapping_rain_water_prefix(heights: List[int]) -> int:
if not heights:
return 0
left_max = [0] * len(heights)
right_max = [0] * len(heights)
left_max[0] = heights[0]
for i in range(1, len(heights)):
left_max[i] = max(left_max[i - 1], heights[i])
right_max[-1] = heights[-1]
for i in range(len(heights) - 2, -1, -1):
right_max[i] = max(right_max[i + 1], heights[i])
trapped = 0
for i, height in enumerate(heights):
trapped += max(0, min(left_max[i], right_max[i]) - height)
return trapped
if __name__ == "__main__":
print(trapping_rain_water_prefix([0, 1, 0, 2, 1, 0, 1, 3, 2, 1, 2, 1]))
Find two distinct indices in an unsorted array whose values sum to the target. Sample Input: nums = [2, 7, 11, 15] target = 9. Sample Output: [0, 1].
Process each value once, checking whether the complement needed to reach the target has already been stored; if not, record the current value with its index. The first time a complement turns up in the map you have the answer, giving O(n) time without sorting.
Solution 1: Primary Approach
from typing import List
def two_sum(nums: List[int], target: int) -> List[int]:
seen = {}
for index, number in enumerate(nums):
complement = target - number
if complement in seen:
return [seen[complement], index]
seen[number] = index
return [-1, -1]
if __name__ == "__main__":
print(two_sum([2, 7, 11, 15], 9))
Solution 2: Brute Force Search
Try every pair of indices until the target sum is found. This O(n²) scan is simple and works when the input sizes are tiny.
from typing import List
def two_sum_bruteforce(nums: List[int], target: int) -> List[int]:
for i in range(len(nums)):
for j in range(i + 1, len(nums)):
if nums[i] + nums[j] == target:
return [i, j]
return [-1, -1]
if __name__ == "__main__":
print(two_sum_bruteforce([2, 7, 11, 15], 9))
Determine whether any value appears at least twice in the array. Sample Input: nums = [1, 2, 3, 1]. Sample Output: True.
Insert every element into a set and exit early when you see a value twice. Set membership tests make the duplicate detection O(n).
Solution 1: Primary Approach
from typing import List
def contains_duplicate(nums: List[int]) -> bool:
seen = set()
for number in nums:
if number in seen:
return True
seen.add(number)
return False
if __name__ == "__main__":
print(contains_duplicate([1, 2, 3, 1]))
Solution 2: Sorting and Scan
Sort the numbers first, then any duplicate values become adjacent. The time complexity is dominated by the O(n log n) sort.
from typing import List
def contains_duplicate_sorted(nums: List[int]) -> bool:
nums = sorted(nums)
for i in range(1, len(nums)):
if nums[i] == nums[i - 1]:
return True
return False
if __name__ == "__main__":
print(contains_duplicate_sorted([1, 2, 3, 1]))
Check whether two strings consist of the same multiset of characters. Sample Input: first = "anagram", second = "nagaram". Sample Output: True.
Build frequency counts for both strings with Counter and compare them. Identical counts ensure the two strings share the same multiset of characters.
Solution 1: Primary Approach
from collections import Counter
def is_anagram(first: str, second: str) -> bool:
return Counter(first) == Counter(second)
if __name__ == "__main__":
print(is_anagram("anagram", "nagaram"))
Solution 2: Sorting Comparison
Two strings are anagrams if their sorted character sequences match. This approach uses O(n log n) time but keeps the implementation short.
def is_anagram_sorted(first: str, second: str) -> bool:
return sorted(first) == sorted(second)
if __name__ == "__main__":
print(is_anagram_sorted("listen", "silent"))
Count contiguous subarrays whose sum equals the target value K. Sample Input: nums = [1, 1, 1] target = 2. Sample Output: 2.
Track a running prefix sum and how many times each prefix value has occurred. A subarray ending at the current index sums to K exactly when the prefix value equal to the current sum minus K has been seen before.
Solution 1: Primary Approach
from typing import List
from collections import defaultdict
def count_subarray_sum(nums: List[int], target: int) -> int:
prefix_counts = defaultdict(int)
prefix_counts[0] = 1
running_sum = 0
total = 0
for number in nums:
running_sum += number
total += prefix_counts[running_sum - target]
prefix_counts[running_sum] += 1
return total
if __name__ == "__main__":
print(count_subarray_sum([1, 1, 1], 2))
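A second illustrative input (not from the original sample, reusing count_subarray_sum above) shows why prefix_counts is pre-seeded with {0: 1}: the subarray [1, 2] and the single element [3] both sum to 3, and the latter is only counted because the empty prefix is registered.
if __name__ == "__main__":
    # [1, 2] and [3] both sum to 3; the pre-seeded empty prefix counts the second one.
    print(count_subarray_sum([1, 2, 3], 3))  # 2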
Solution 2: Prefix Sum with Nested Scan
Build the prefix sum array and check every start/end pair. The double loop is O(n²) yet easy to reason about when constraints are small.
from typing import List
def subarray_sum_bruteforce(nums: List[int], k: int) -> int:
count = 0
prefix = [0]
for number in nums:
prefix.append(prefix[-1] + number)
for start in range(len(nums)):
for end in range(start + 1, len(nums) + 1):
if prefix[end] - prefix[start] == k:
count += 1
return count
if __name__ == "__main__":
print(subarray_sum_bruteforce([1, 1, 1], 2))
Find the length of the longest sequence of consecutive integers. Sample Input: nums = [100, 4, 200, 1, 3, 2]. Sample Output: 4.
Store every number in a set and only start counting sequences when a number lacks a predecessor. Extending forward from each start touches each element once, producing O(n) time.
Solution 1: Primary Approach
from typing import List
def longest_consecutive(nums: List[int]) -> int:
unique = set(nums)
best = 0
for number in unique:
if number - 1 not in unique:
length = 1
while number + length in unique:
length += 1
best = max(best, length)
return best
if __name__ == "__main__":
print(longest_consecutive([100, 4, 200, 1, 3, 2]))
Solution 2: Sorting then Sweep
Sort the numbers and count the length of each consecutive streak in one pass. This O(n log n) technique avoids the explicit hash set lookups.
from typing import List
def longest_consecutive_sorted(nums: List[int]) -> int:
if not nums:
return 0
nums = sorted(set(nums))
longest = current = 1
for i in range(1, len(nums)):
if nums[i] == nums[i - 1] + 1:
current += 1
longest = max(longest, current)
else:
current = 1
return longest
if __name__ == "__main__":
print(longest_consecutive_sorted([100, 4, 200, 1, 3, 2]))
Return the K most frequent integers in the array. Sample Input: nums = [1, 1, 1, 2, 2, 3] k = 2. Sample Output: [1, 2].
Use Counter to measure frequencies and a heap to extract the k largest counts. The heap avoids sorting the full list of unique values and keeps the work to O(n log k).
Solution 1: Primary Approach
from typing import List
import heapq
from collections import Counter
def top_k_frequent_elements(nums: List[int], k: int) -> List[int]:
counts = Counter(nums)
return [value for value, _ in heapq.nlargest(k, counts.items(), key=lambda pair: pair[1])]
if __name__ == "__main__":
print(top_k_frequent_elements([1, 1, 1, 2, 2, 3], 2))
Solution 2: Sort by Frequency
Count occurrences with a dictionary, sort the items by count in descending order, and take the first k keys. Slightly slower than the heap-based solution but concise.
from collections import Counter
from typing import List
def top_k_frequent_sorted(nums: List[int], k: int) -> List[int]:
counts = Counter(nums)
ordered = sorted(counts.items(), key=lambda item: item[1], reverse=True)
return [value for value, _ in ordered[:k]]
if __name__ == "__main__":
print(top_k_frequent_sorted([1, 1, 1, 2, 2, 3], 2))
Return the K most frequent words with lexicographic tie-breaking. Sample Input: words = ["i", "love", "leetcode", "i", "love", "coding"] k = 2. Sample Output: ['i', 'love'].
Count word occurrences and push (-frequency, word) pairs into a heap so lexicographic order breaks ties. Popping k entries yields the correct top list with the required ordering.
Solution 1: Primary Approach
from typing import List
import heapq
from collections import Counter
def top_k_frequent_strings(words: List[str], k: int) -> List[str]:
counts = Counter(words)
heap = [(-frequency, word) for word, frequency in counts.items()]
heapq.heapify(heap)
result = []
for _ in range(k):
_, word = heapq.heappop(heap)
result.append(word)
return result
if __name__ == "__main__":
print(top_k_frequent_strings(["i", "love", "leetcode", "i", "love", "coding"], 2))
Solution 2: Sort by Frequency & Lexicographic Tie
Sort the frequency table using a (-count, word) key so ties fall back to lexicographic order. This mirrors the requirements of the classic LeetCode task.
from collections import Counter
from typing import List
def top_k_frequent_words_sorted(words: List[str], k: int) -> List[str]:
counts = Counter(words)
ordered = sorted(counts.items(), key=lambda item: (-item[1], item[0]))
return [word for word, _ in ordered[:k]]
if __name__ == "__main__":
print(top_k_frequent_words_sorted(["i", "love", "leetcode", "i", "love", "coding"], 2))
Compact non-zero values to the front of the array and fill the rest with zeros. Sample Input: nums = [0, 5, 0, 3, 0, 9]. Sample Output: [5, 3, 9, 0, 0, 0].
Write non-zero numbers forward in-place and then fill the trailing slots with zeros. This pass compacts the array without allocating extra space.
Solution 1: Primary Approach
from typing import List
def zero_stripping(nums: List[int]) -> List[int]:
write = 0
for value in nums:
if value != 0:
nums[write] = value
write += 1
while write < len(nums):
nums[write] = 0
write += 1
return nums
if __name__ == "__main__":
print(zero_stripping([0, 5, 0, 3, 0, 9]))
Solution 2: Two-Pass Stable Copy
Copy all non-zero values into a new list, then extend with zeros to match the original length. This avoids in-place mutation and emphasises clarity.
from typing import List
def strip_zeros_copy(values: List[int]) -> List[int]:
cleaned = [number for number in values if number != 0]
cleaned.extend([0] * (len(values) - len(cleaned)))
return cleaned
if __name__ == "__main__":
print(strip_zeros_copy([0, 1, 0, 3, 12]))
Count triplets that form a geometric progression with common ratio r. Sample Input: values = [1, 4, 16, 64] ratio = 4. Sample Output: 2.
Maintain counts of values already seen (potential starting elements) and values still ahead (potential ending elements) as you scan. Treating each value as the middle of a triplet and multiplying the matching counts on both sides counts every valid chain exactly once.
Solution 1: Primary Approach
from typing import List
from collections import defaultdict
def count_geometric_triplets(values: List[int], ratio: int) -> int:
left = defaultdict(int)
right = defaultdict(int)
for value in values:
right[value] += 1
total = 0
for value in values:
right[value] -= 1
if value % ratio == 0:
total += left[value // ratio] * right[value * ratio]
left[value] += 1
return total
if __name__ == "__main__":
print(count_geometric_triplets([1, 4, 16, 64], 4))
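A ratio of 1 is a worthwhile illustrative check (input invented for the example): every value is its own multiple, so an all-equal array of length four contains C(4, 3) = 4 triplets.
if __name__ == "__main__":
    # ratio 1: any three equal values form a valid triplet, so expect 4 here.
    print(count_geometric_triplets([1, 1, 1, 1], 1))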
Solution 2: Backward Pair Counting
Traverse the numbers from right to left, treating each value as the middle element: one map counts candidates remaining to its left and the other counts values already visited on its right. The two hash maps mirror the forward approach and highlight the symmetric reasoning.
from collections import Counter, defaultdict
from typing import List
def count_triplets_backward(arr: List[int], ratio: int) -> int:
    left = Counter(arr)       # values at the current index and everything to its left
    right = defaultdict(int)  # values already visited to the right of the current index
    total = 0
    for value in reversed(arr):
        left[value] -= 1      # the current value acts as the middle element
        if value % ratio == 0:
            total += left[value // ratio] * right[value * ratio]
        right[value] += 1
    return total
if __name__ == "__main__":
print(count_triplets_backward([1, 4, 16, 64], 4))
Serialize a list of strings and decode it back without ambiguity. Sample Input: strings = ['lint', 'code', 'love', 'you']. Sample Output: 4#lint4#code4#love3#you ['lint', 'code', 'love', 'you'].
Prefix each string with its length and a separator when encoding, then read the length back to slice during decoding. Knowing exact lengths makes the concatenation reversible even when data contains delimiters.
Solution 1: Primary Approach
from typing import List
def encode(strings: List[str]) -> str:
return ''.join(f"{len(item)}#{item}" for item in strings)
def decode(encoded: str) -> List[str]:
result = []
index = 0
while index < len(encoded):
separator = encoded.find('#', index)
length = int(encoded[index:separator])
separator += 1
result.append(encoded[separator:separator + length])
index = separator + length
return result
if __name__ == "__main__":
original = ["lint", "code", "love", "you"]
compressed = encode(original)
print(compressed)
print(decode(compressed))
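Because decoding slices by length rather than splitting on the separator, the round trip survives strings that contain '#' themselves or are empty. A short illustrative check (hypothetical input, reusing encode and decode from above):
if __name__ == "__main__":
    tricky = ["a#b", "", "100#"]
    packed = encode(tricky)
    print(packed)          # 3#a#b0#4#100#
    print(decode(packed))  # ['a#b', '', '100#']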
Solution 2: Delimiter with Escaping
Prefix each string with its length and a separator, escaping occurrences of the separator. The decode step reconstructs the list by reading the length prefix.
from typing import List
DELIM = '#'
ESC = '\\'
def encode_with_escape(strings: List[str]) -> str:
encoded = []
for s in strings:
safe = s.replace(ESC, ESC + ESC).replace(DELIM, ESC + DELIM)
encoded.append(f"{len(safe)}{DELIM}{safe}")
return ''.join(encoded)
def decode_with_escape(data: str) -> List[str]:
result = []
index = 0
while index < len(data):
length_end = data.index(DELIM, index)
length = int(data[index:length_end])
index = length_end + 1
chunk = data[index:index + length]
result.append(chunk.replace(ESC + DELIM, DELIM).replace(ESC + ESC, ESC))
index += length
return result
if __name__ == "__main__":
sample = ["lint", "code", "love", "you"]
encoded = encode_with_escape(sample)
print(encoded)
print(decode_with_escape(encoded))
Reverse a singly linked list and return the new head node. Sample Input: 1 -> 2 -> 3. Sample Output: [3, 2, 1].
Iteratively reroute each node's next pointer to the previously seen node. Keeping a prev pointer gives an in-place reversal with O(1) extra space.
Solution 1: Primary Approach
from typing import Optional
class ListNode:
def __init__(self, value: int, next_node: Optional['ListNode'] = None) -> None:
self.value = value
self.next = next_node
def reverse_list(head: Optional[ListNode]) -> Optional[ListNode]:
previous = None
current = head
while current:
next_node = current.next
current.next = previous
previous = current
current = next_node
return previous
def to_list(head: Optional[ListNode]) -> list[int]:
values = []
while head:
values.append(head.value)
head = head.next
return values
if __name__ == "__main__":
head = ListNode(1, ListNode(2, ListNode(3)))
result = reverse_list(head)
print(to_list(result))
Solution 2: Recursive Reversal
Use recursion to reverse the tail first, then attach the head to the end. This highlights the natural recursive structure of linked lists.
from typing import Optional
class ListNode:
def __init__(self, val: int, next: Optional["ListNode"] = None) -> None:
self.val = val
self.next = next
def reverse_list_recursive(head: Optional[ListNode]) -> Optional[ListNode]:
if not head or not head.next:
return head
reversed_tail = reverse_list_recursive(head.next)
head.next.next = head
head.next = None
return reversed_tail
if __name__ == "__main__":
nodes = [ListNode(value) for value in [1, 2, 3, 4]]
for previous, current in zip(nodes, nodes[1:]):
previous.next = current
head = reverse_list_recursive(nodes[0])
values = []
node = head
while node:
values.append(node.val)
node = node.next
print(values)
Remove the k-th node from the end of a singly linked list. Sample Input: list = [1, 2, 3, 4, 5], k = 2. Sample Output: [1, 2, 3, 5].
Starting both pointers at a dummy node, advance the fast pointer k + 1 steps ahead, then move both until fast runs off the end. The slow pointer then sits on the node just before the target, allowing you to unlink it.
Solution 1: Primary Approach
from typing import Optional
class ListNode:
def __init__(self, value: int, next_node: Optional['ListNode'] = None) -> None:
self.value = value
self.next = next_node
def remove_kth_from_end(head: Optional[ListNode], k: int) -> Optional[ListNode]:
dummy = ListNode(0, head)
slow = fast = dummy
for _ in range(k + 1):
fast = fast.next
while fast:
slow = slow.next
fast = fast.next
slow.next = slow.next.next
return dummy.next
def to_list(head: Optional[ListNode]) -> list[int]:
values = []
while head:
values.append(head.value)
head = head.next
return values
if __name__ == "__main__":
node = ListNode(1, ListNode(2, ListNode(3, ListNode(4, ListNode(5)))))
updated = remove_kth_from_end(node, 2)
print(to_list(updated))
Solution 2: Two Pass Length Count
First walk the list to determine its length, then delete the target node using a second traversal. This approach emphasises clarity.
from typing import Optional
class ListNode:
def __init__(self, val: int, next: Optional["ListNode"] = None) -> None:
self.val = val
self.next = next
def remove_kth_from_end_length(head: Optional[ListNode], k: int) -> Optional[ListNode]:
dummy = ListNode(0, head)
length = 0
node = head
while node:
length += 1
node = node.next
steps = length - k
prev = dummy
for _ in range(steps):
prev = prev.next # type: ignore[assignment]
if prev and prev.next:
prev.next = prev.next.next
return dummy.next
if __name__ == "__main__":
nodes = [ListNode(value) for value in [1, 2, 3, 4, 5]]
for previous, current in zip(nodes, nodes[1:]):
previous.next = current
head = remove_kth_from_end_length(nodes[0], 2)
values = []
node = head
while node:
values.append(node.val)
node = node.next
print(values)
Find the node where two singly linked lists intersect. Sample Input: A: 3 -> 7 -> 8 -> 10 B: 99 -> 8 -> 10. Sample Output: 8.
Walk both lists and switch each pointer to the other list when it reaches the tail. After traversing equal total lengths they either meet at the intersection or reach None together.
Solution 1: Primary Approach
from typing import Optional
class ListNode:
def __init__(self, value: int) -> None:
self.value = value
self.next: Optional['ListNode'] = None
def get_intersection_node(first: Optional[ListNode], second: Optional[ListNode]) -> Optional[ListNode]:
pointer_a, pointer_b = first, second
while pointer_a is not pointer_b:
pointer_a = pointer_a.next if pointer_a else second
pointer_b = pointer_b.next if pointer_b else first
return pointer_a
if __name__ == "__main__":
shared = ListNode(8)
shared.next = ListNode(10)
head_a = ListNode(3)
head_a.next = ListNode(7)
head_a.next.next = shared
head_b = ListNode(99)
head_b.next = shared
node = get_intersection_node(head_a, head_b)
print(node.value if node else None)
Solution 2: Length Alignment
Align the lists by advancing the longer prefix so both pointers have the same remaining distance. Then move together until the intersection appears.
from typing import Optional
class ListNode:
def __init__(self, val: int, next: Optional["ListNode"] = None) -> None:
self.val = val
self.next = next
def get_intersection_node_length(head_a: Optional[ListNode], head_b: Optional[ListNode]) -> Optional[ListNode]:
def length(node: Optional[ListNode]) -> int:
count = 0
while node:
count += 1
node = node.next
return count
len_a = length(head_a)
len_b = length(head_b)
cur_a, cur_b = head_a, head_b
if len_a > len_b:
for _ in range(len_a - len_b):
cur_a = cur_a.next
else:
for _ in range(len_b - len_a):
cur_b = cur_b.next
while cur_a is not cur_b:
cur_a = cur_a.next if cur_a else None
cur_b = cur_b.next if cur_b else None
return cur_a
if __name__ == "__main__":
a1 = ListNode(4)
a2 = ListNode(1)
c1 = ListNode(8)
c2 = ListNode(4)
c3 = ListNode(5)
b1 = ListNode(5)
b2 = ListNode(6)
b3 = ListNode(1)
a1.next = a2
a2.next = c1
c1.next = c2
c2.next = c3
b1.next = b2
b2.next = b3
b3.next = c1
print(get_intersection_node_length(a1, b1).val)
Design an LRU cache with constant-time get and put operations. Sample Input: put(1,1), put(2,2), get(1), put(3,3), get(2). Sample Output: 1 -1.
Store nodes in a hashmap for O(1) access while a doubly linked list keeps items ordered by recency. Touching an item moves it to the tail, and evicting from the head removes the least recently used entry.
Solution 1: Primary Approach
from typing import Optional
class Node:
def __init__(self, key: int, value: int) -> None:
self.key = key
self.value = value
self.prev: Optional['Node'] = None
self.next: Optional['Node'] = None
class LRUCache:
def __init__(self, capacity: int) -> None:
self.capacity = capacity
self.nodes = {}
self.left = Node(0, 0)
self.right = Node(0, 0)
self.left.next = self.right
self.right.prev = self.left
def _remove(self, node: Node) -> None:
previous, nxt = node.prev, node.next
previous.next = nxt
nxt.prev = previous
def _add(self, node: Node) -> None:
previous = self.right.prev
previous.next = node
node.prev = previous
node.next = self.right
self.right.prev = node
def get(self, key: int) -> int:
if key not in self.nodes:
return -1
node = self.nodes[key]
self._remove(node)
self._add(node)
return node.value
def put(self, key: int, value: int) -> None:
if key in self.nodes:
self._remove(self.nodes[key])
node = Node(key, value)
self.nodes[key] = node
self._add(node)
if len(self.nodes) > self.capacity:
lru = self.left.next
self._remove(lru)
del self.nodes[lru.key]
if __name__ == "__main__":
cache = LRUCache(2)
cache.put(1, 1)
cache.put(2, 2)
print(cache.get(1))
cache.put(3, 3)
print(cache.get(2))
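Extending the sample sequence (the extra operations are illustrative, reusing LRUCache above) makes the recency ordering visible: after get(1), key 2 becomes the least recently used and is evicted first, and key 1 goes next when key 4 arrives.
if __name__ == "__main__":
    cache = LRUCache(2)
    cache.put(1, 1)
    cache.put(2, 2)
    print(cache.get(1))  # 1 (key 1 becomes most recently used)
    cache.put(3, 3)      # evicts key 2
    print(cache.get(2))  # -1
    cache.put(4, 4)      # evicts key 1
    print(cache.get(1))  # -1
    print(cache.get(3))  # 3
    print(cache.get(4))  # 4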
Solution 2: OrderedDict Backing
Leverage Python's OrderedDict to maintain access order automatically. Every lookup moves the key to the end, and eviction pops the oldest key from the front.
from collections import OrderedDict
class LRUCacheOrdered:
def __init__(self, capacity: int) -> None:
self.capacity = capacity
self.store: "OrderedDict[int, int]" = OrderedDict()
def get(self, key: int) -> int:
if key not in self.store:
return -1
self.store.move_to_end(key)
return self.store[key]
def put(self, key: int, value: int) -> None:
if key in self.store:
self.store.move_to_end(key)
self.store[key] = value
if len(self.store) > self.capacity:
self.store.popitem(last=False)
if __name__ == "__main__":
cache = LRUCacheOrdered(2)
cache.put(1, 1)
cache.put(2, 2)
print(cache.get(1))
cache.put(3, 3)
print(cache.get(2))
Determine whether a singly linked list is a palindrome. Sample Input: list = [1, 2, 2, 1]. Sample Output: True.
Find the midpoint, reverse the second half, and compare the two halves node by node. Restoring the list is optional, but the half reversal keeps the space usage constant.
Solution 1: Primary Approach
from typing import Optional
class ListNode:
def __init__(self, value: int, next_node: Optional['ListNode'] = None) -> None:
self.value = value
self.next = next_node
def is_palindrome_list(head: Optional[ListNode]) -> bool:
slow = fast = head
while fast and fast.next:
slow = slow.next
fast = fast.next.next
previous = None
current = slow
while current:
next_node = current.next
current.next = previous
previous = current
current = next_node
left, right = head, previous
while right:
if left.value != right.value:
return False
left = left.next
right = right.next
return True
if __name__ == "__main__":
head = ListNode(1, ListNode(2, ListNode(2, ListNode(1))))
print(is_palindrome_list(head))
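If the caller needs the list intact afterwards, the reversed half can be reversed back once the comparison finishes. A minimal sketch of that variant (same ListNode shape as above; the restoration step is added here for illustration):
from typing import Optional
class ListNode:
    def __init__(self, value: int, next_node: Optional['ListNode'] = None) -> None:
        self.value = value
        self.next = next_node
def _reverse(head: Optional[ListNode]) -> Optional[ListNode]:
    previous = None
    current = head
    while current:
        next_node = current.next
        current.next = previous
        previous = current
        current = next_node
    return previous
def is_palindrome_list_restore(head: Optional[ListNode]) -> bool:
    # Locate the midpoint with slow/fast pointers.
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
    # Reverse the second half, compare, then undo the reversal.
    second_head = _reverse(slow)
    result = True
    left, right = head, second_head
    while right:
        if left.value != right.value:
            result = False
            break
        left = left.next
        right = right.next
    _reverse(second_head)  # restore the original order of the second half
    return result
if __name__ == "__main__":
    head = ListNode(1, ListNode(2, ListNode(2, ListNode(1))))
    print(is_palindrome_list_restore(head))
    values = []
    node = head
    while node:
        values.append(node.value)
        node = node.next
    print(values)  # the original [1, 2, 2, 1] is intact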
Solution 2: Array Comparison
Copy the list values into an array and compare it with its reverse. This uses O(n) extra space but keeps the pointer manipulation minimal.
from typing import Optional
class ListNode:
def __init__(self, value: int, next_node: Optional["ListNode"] = None) -> None:
self.value = value
self.next = next_node
def is_palindrome_array(head: Optional[ListNode]) -> bool:
values = []
node = head
while node:
values.append(node.value)
node = node.next
return values == values[::-1]
if __name__ == "__main__":
head = ListNode(1, ListNode(2, ListNode(2, ListNode(1))))
print(is_palindrome_array(head))
Flatten a multilevel doubly linked list into a single-level list. Sample Input: 1 - 2 with child 3. Sample Output: [1, 2, 3].
Perform depth-first traversal with a stack that pushes next pointers before child pointers. Rewiring nodes as they are popped yields a single-level doubly linked list that preserves the original order.
Solution 1: Primary Approach
from typing import Optional
class Node:
def __init__(self, value: int, prev: Optional['Node'] = None, next_node: Optional['Node'] = None, child: Optional['Node'] = None) -> None:
self.value = value
self.prev = prev
self.next = next_node
self.child = child
def flatten(head: Optional[Node]) -> Optional[Node]:
if not head:
return None
stack = [head]
dummy = Node(0)
previous = dummy
while stack:
node = stack.pop()
previous.next = node
node.prev = previous
if node.next:
stack.append(node.next)
if node.child:
stack.append(node.child)
node.child = None
previous = node
dummy.next.prev = None
return dummy.next
if __name__ == "__main__":
head = Node(1)
head.next = Node(2, prev=head)
head.next.child = Node(3)
flat = flatten(head)
values = []
while flat:
values.append(flat.value)
flat = flat.next
print(values)
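A deeper, illustrative nesting (two extra nodes invented for this example, reusing Node and flatten from above) confirms that the child branch is spliced in before the saved next pointer, preserving depth-first order:
if __name__ == "__main__":
    # 1 - 2 - 5 at the top level, with 3 - 4 hanging off node 2 as a child list.
    head = Node(1)
    head.next = Node(2, prev=head)
    head.next.next = Node(5, prev=head.next)
    head.next.child = Node(3)
    head.next.child.next = Node(4, prev=head.next.child)
    flat = flatten(head)
    values = []
    while flat:
        values.append(flat.value)
        flat = flat.next
    print(values)  # [1, 2, 3, 4, 5]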
Solution 2: Recursive DFS
Recursively flatten each child list and splice it between the node and its next pointer. The helper returns the tail of the flattened section to stitch everything together.
from typing import Optional, Tuple
class Node:
def __init__(self, value: int, prev: Optional["Node"] = None, next_node: Optional["Node"] = None, child: Optional["Node"] = None) -> None:
self.value = value
self.prev = prev
self.next = next_node
self.child = child
def flatten_recursive(head: Optional[Node]) -> Optional[Node]:
if not head:
return None
def dfs(node: Optional[Node]) -> Tuple[Optional[Node], Optional[Node]]:
current = node
tail = node
while current:
next_node = current.next
if current.child:
child_head, child_tail = dfs(current.child)
current.child = None
current.next = child_head
child_head.prev = current
if next_node:
child_tail.next = next_node
next_node.prev = child_tail
tail = child_tail
current = child_tail
else:
tail = current
current = current.next
return node, tail
head, _ = dfs(head)
return head
if __name__ == "__main__":
head = Node(1)
head.next = Node(2, prev=head)
head.next.child = Node(3)
flat = flatten_recursive(head)
values = []
while flat:
values.append(flat.value)
flat = flat.next
print(values)
Merge two sorted linked lists into one sorted list. Sample Input: [1,3,5] and [2,4,6]. Sample Output: [1, 2, 3, 4, 5, 6].
Use a dummy node and advance a tail pointer while repeatedly choosing the smaller head node from the two lists. This in-place merge stitches the lists together in linear time.
Solution 1: Primary Approach
from typing import Optional
class ListNode:
def __init__(self, value: int, next_node: Optional['ListNode'] = None) -> None:
self.value = value
self.next = next_node
def merge_two_lists(first: Optional[ListNode], second: Optional[ListNode]) -> Optional[ListNode]:
dummy = tail = ListNode(0)
while first and second:
if first.value <= second.value:
tail.next = first
first = first.next
else:
tail.next = second
second = second.next
tail = tail.next
tail.next = first or second
return dummy.next
def to_list(head: Optional[ListNode]) -> list[int]:
values = []
while head:
values.append(head.value)
head = head.next
return values
if __name__ == "__main__":
a = ListNode(1, ListNode(3, ListNode(5)))
b = ListNode(2, ListNode(4, ListNode(6)))
merged = merge_two_lists(a, b)
print(to_list(merged))
Solution 2: Recursive Merge
Recursively pick the smaller head and delegate the merge of the remainder. The recursion depth equals the combined list length.
from typing import Optional
class ListNode:
def __init__(self, value: int, next_node: Optional["ListNode"] = None) -> None:
self.value = value
self.next = next_node
def merge_two_lists_recursive(first: Optional[ListNode], second: Optional[ListNode]) -> Optional[ListNode]:
if not first or not second:
return first or second
if first.value <= second.value:
first.next = merge_two_lists_recursive(first.next, second)
return first
second.next = merge_two_lists_recursive(first, second.next)
return second
if __name__ == "__main__":
a = ListNode(1, ListNode(3, ListNode(5)))
b = ListNode(2, ListNode(4, ListNode(6)))
merged = merge_two_lists_recursive(a, b)
values = []
while merged:
values.append(merged.value)
merged = merged.next
print(values)
Merge K sorted linked lists into a single sorted list. Sample Input: [[1,4], [1,3], [2,6]]. Sample Output: [1, 1, 2, 3, 4, 6].
Push the head of each list into a min-heap keyed by value and repeatedly pop the smallest node. Every extraction appends one node to the result and pushes its successor, giving O(N log k) performance.
Solution 1: Primary Approach
from typing import Optional, List
import heapq
class ListNode:
def __init__(self, value: int) -> None:
self.value = value
self.next: Optional['ListNode'] = None
def merge_k_lists(lists: List[Optional[ListNode]]) -> Optional[ListNode]:
heap: list[tuple[int, int, ListNode]] = []
for index, node in enumerate(lists):
if node:
heapq.heappush(heap, (node.value, index, node))
dummy = current = ListNode(0)
while heap:
value, index, node = heapq.heappop(heap)
current.next = node
current = current.next
if node.next:
heapq.heappush(heap, (node.next.value, index, node.next))
return dummy.next
if __name__ == "__main__":
a = ListNode(1)
a.next = ListNode(4)
b = ListNode(1)
b.next = ListNode(3)
c = ListNode(2)
c.next = ListNode(6)
merged = merge_k_lists([a, b, c])
values = []
while merged:
values.append(merged.value)
merged = merged.next
print(values)
Solution 2: Divide and Conquer
Repeatedly merge lists pairwise in a divide-and-conquer fashion. Each round halves the number of lists, giving O(N log k) complexity without a heap.
from typing import Optional, List
class ListNode:
def __init__(self, value: int, next_node: Optional["ListNode"] = None) -> None:
self.value = value
self.next = next_node
def merge_two_lists(first: Optional[ListNode], second: Optional[ListNode]) -> Optional[ListNode]:
if not first or not second:
return first or second
if first.value <= second.value:
first.next = merge_two_lists(first.next, second)
return first
second.next = merge_two_lists(first, second.next)
return second
def merge_k_lists_divide_and_conquer(lists: List[Optional[ListNode]]) -> Optional[ListNode]:
if not lists:
return None
interval = 1
result = lists[:]
while interval < len(result):
for i in range(0, len(result) - interval, interval * 2):
result[i] = merge_two_lists(result[i], result[i + interval])
interval *= 2
return result[0]
if __name__ == "__main__":
a = ListNode(1, ListNode(4))
b = ListNode(1, ListNode(3))
c = ListNode(2, ListNode(6))
merged = merge_k_lists_divide_and_conquer([a, b, c])
values = []
while merged:
values.append(merged.value)
merged = merged.next
print(values)
Reorder a list as L0 → Ln → L1 → Ln-1 → … without modifying node values. Sample Input: [1, 2, 3, 4]. Sample Output: [1, 4, 2, 3].
Split the list in half, reverse the second half, and weave nodes from the front and back alternately. This transforms the order without allocating new nodes.
Solution 1: Primary Approach
from typing import Optional
class ListNode:
def __init__(self, value: int, next_node: Optional['ListNode'] = None) -> None:
self.value = value
self.next = next_node
def reorder_list(head: Optional[ListNode]) -> Optional[ListNode]:
if not head or not head.next:
return head
slow = fast = head
while fast and fast.next:
slow = slow.next
fast = fast.next.next
previous = None
current = slow
while current:
next_node = current.next
current.next = previous
previous = current
current = next_node
first, second = head, previous
while second.next:
temp1, temp2 = first.next, second.next
first.next = second
second.next = temp1
first, second = temp1, temp2
return head
def to_list(head: Optional[ListNode]) -> list[int]:
values = []
while head:
values.append(head.value)
head = head.next
return values
if __name__ == "__main__":
node = ListNode(1, ListNode(2, ListNode(3, ListNode(4))))
reorder_list(node)
print(to_list(node))
Solution 2: Deque Interleave
Collect nodes into a deque, then rebuild the list by alternately popping from the left and right. The method is conceptually simple at the cost of O(n) extra space.
from collections import deque
from typing import Optional
class ListNode:
def __init__(self, value: int, next_node: Optional["ListNode"] = None) -> None:
self.value = value
self.next = next_node
def reorder_list_deque(head: Optional[ListNode]) -> Optional[ListNode]:
if not head:
return None
nodes = deque()
node = head
while node:
nodes.append(node)
node = node.next
dummy = ListNode(0)
tail = dummy
toggle = True
while nodes:
node = nodes.popleft() if toggle else nodes.pop()
node.next = None
tail.next = node
tail = tail.next
toggle = not toggle
return dummy.next
if __name__ == "__main__":
node = ListNode(1, ListNode(2, ListNode(3, ListNode(4))))
reordered = reorder_list_deque(node)
values = []
while reordered:
values.append(reordered.value)
reordered = reordered.next
print(values)
Detect a cycle in a linked list and return the node where the cycle begins. Sample Input: 1 -> 2 -> 3 -> points back to 2. Sample Output: 2.
First apply Floyd's tortoise-and-hare to confirm a cycle, then reset one pointer to head and walk both at the same speed. Their meeting point identifies the start of the loop.
Solution 1: Primary Approach
from typing import Optional
class ListNode:
def __init__(self, value: int) -> None:
self.value = value
self.next: Optional['ListNode'] = None
def detect_cycle(head: Optional[ListNode]) -> Optional[ListNode]:
slow = fast = head
while fast and fast.next:
slow = slow.next
fast = fast.next.next
if slow is fast:
break
else:
return None
slow = head
while slow is not fast:
slow = slow.next
fast = fast.next
return slow
if __name__ == "__main__":
head = ListNode(1)
head.next = ListNode(2)
head.next.next = ListNode(3)
head.next.next.next = head.next
node = detect_cycle(head)
print(node.value if node else None)
Solution 2: Hash Set Detection
Track visited nodes in a set. When a node repeats we return it as the cycle entrance. This trades O(n) space for immediate detection without pointer algebra.
from typing import Optional, Set
class ListNode:
def __init__(self, value: int) -> None:
self.value = value
self.next: Optional["ListNode"] = None
def detect_cycle_hashset(head: Optional[ListNode]) -> Optional[ListNode]:
seen: Set[ListNode] = set()
node = head
while node:
if node in seen:
return node
seen.add(node)
node = node.next
return None
if __name__ == "__main__":
head = ListNode(1)
head.next = ListNode(2)
head.next.next = ListNode(3)
head.next.next.next = head.next
entry = detect_cycle_hashset(head)
print(entry.value if entry else None)
Given the head of a singly linked list, return True if the list contains a cycle. A cycle exists if some node can be reached again by continuously following next pointers. (LeetCode 141.)
There are two canonical techniques: remembering all visited nodes, or running two pointers at different speeds to detect the cycle without extra memory.
Solution 1: Hash Set Detection
Track every visited node in a hash set. Reaching a node twice proves that we have looped back to an earlier point. The method runs in O(n) time with O(n) auxiliary memory.
from typing import Optional, Set
# Definition for singly-linked list.
class ListNode:
def __init__(self, val: int = 0, next: Optional['ListNode'] = None) -> None:
self.val = val
self.next = next
class Solution:
def hasCycle(self, head: Optional[ListNode]) -> bool:
seen: Set[ListNode] = set()
node = head
while node:
if node in seen:
return True
seen.add(node)
node = node.next
return False
if __name__ == "__main__":
a = ListNode(3)
b = ListNode(2)
c = ListNode(0)
d = ListNode(-4)
a.next = b
b.next = c
c.next = d
d.next = b # cycle back to node with value 2
print(Solution().hasCycle(a))
Solution 2: Floyd's Tortoise and Hare
Advance a slow pointer by one step and a fast pointer by two steps. If the fast pointer ever laps the slow pointer, a cycle exists; otherwise the fast pointer reaches the tail. This is the optimal O(1) space solution.
from typing import Optional
# Definition for singly-linked list.
class ListNode:
def __init__(self, val: int = 0, next: Optional['ListNode'] = None) -> None:
self.val = val
self.next = next
class Solution:
def hasCycle(self, head: Optional[ListNode]) -> bool:
slow = head
fast = head
while fast and fast.next:
slow = slow.next
fast = fast.next.next
if slow is fast:
return True
return False
if __name__ == "__main__":
a = ListNode(1)
b = ListNode(2)
c = ListNode(3)
a.next = b
b.next = c
c.next = b # cycle
print(Solution().hasCycle(a))
Return the middle node of a singly linked list. If there are two middle nodes, return the second middle node. (LeetCode 876.)
A counting pass gives the index of the middle, while the classic tortoise-and-hare technique finds it in one traversal. Both approaches appear frequently in interviews.
Solution 1: Two-Pass Counting
Traverse once to determine the length, then walk length // 2 steps from the head. This remains a valid baseline answer when you want to emphasise clarity.
from typing import Optional
class ListNode:
def __init__(self, val: int = 0, next: Optional['ListNode'] = None) -> None:
self.val = val
self.next = next
class Solution:
def middleNode(self, head: Optional[ListNode]) -> Optional[ListNode]:
length = 0
node = head
while node:
length += 1
node = node.next
steps = length // 2
node = head
for _ in range(steps):
node = node.next
return node
if __name__ == "__main__":
head = ListNode(1, ListNode(2, ListNode(3, ListNode(4, ListNode(5)))))
middle = Solution().middleNode(head)
print(middle.val if middle else None)
Solution 2: Fast & Slow Runner
Move one pointer two steps at a time and a second pointer one step at a time. When the fast pointer reaches the end, the slow pointer is at the desired middle.
from typing import Optional
class ListNode:
def __init__(self, val: int = 0, next: Optional['ListNode'] = None) -> None:
self.val = val
self.next = next
class Solution:
def middleNode(self, head: Optional[ListNode]) -> Optional[ListNode]:
slow = head
fast = head
while fast and fast.next:
slow = slow.next
fast = fast.next.next
return slow
if __name__ == "__main__":
head = ListNode(1, ListNode(2, ListNode(3, ListNode(4, ListNode(5, ListNode(6))))))
middle = Solution().middleNode(head)
print(middle.val if middle else None)
A happy number repeatedly replaces itself with the sum of the squares of its digits until the value becomes 1 or falls into a loop. Determine whether a given integer is happy. (LeetCode 202.)
The digit-square sequence either reaches 1 or loops forever. You can either keep track of visited values or reuse the tortoise-and-hare idea to find cycles without extra storage.
Solution 1: Track Seen Sums
Generate each digit-square sum and add it to a set. Encountering a repeated sum before hitting 1 means the process is cycling and the number is unhappy.
from typing import Set
class Solution:
def isHappy(self, n: int) -> bool:
seen: Set[int] = set()
value = n
while value != 1 and value not in seen:
seen.add(value)
value = self._next_sum(value)
return value == 1
def _next_sum(self, value: int) -> int:
total = 0
while value:
value, digit = divmod(value, 10)
total += digit * digit
return total
if __name__ == "__main__":
solver = Solution()
print(solver.isHappy(19)) # True
print(solver.isHappy(20)) # False
Solution 2: Floyd's Cycle Detection
Use slow and fast runners through the digit-square sequence. If the fast runner reaches 1 we are done; if the runners meet elsewhere the sequence is cycling.
class Solution:
def isHappy(self, n: int) -> bool:
slow = n
fast = self._next_sum(n)
while fast != 1 and slow != fast:
slow = self._next_sum(slow)
fast = self._next_sum(self._next_sum(fast))
return fast == 1
def _next_sum(self, value: int) -> int:
total = 0
while value:
value, digit = divmod(value, 10)
total += digit * digit
return total
if __name__ == "__main__":
solver = Solution()
print(solver.isHappy(19))
print(solver.isHappy(4))
Find the length of the longest substring without repeating characters. Sample Input: text = "abcabcbb". Sample Output: 3.
Track the last seen index of each character and slide the left boundary past duplicates. Maintaining the window size while expanding right reveals the maximum distinct substring length.
Solution 1: Primary Approach
def length_of_longest_substring(text: str) -> int:
last_seen = {}
start = best = 0
for index, letter in enumerate(text):
if letter in last_seen and last_seen[letter] >= start:
start = last_seen[letter] + 1
last_seen[letter] = index
best = max(best, index - start + 1)
return best
if __name__ == "__main__":
print(length_of_longest_substring("abcabcbb"))
Solution 2: Brute Force Expansion
Consider every starting index, expand until a duplicate appears, and track the best length. The O(n²) complexity is acceptable for tiny strings and illustrates the baseline reasoning.
from typing import Set
def length_of_longest_substring_bruteforce(text: str) -> int:
best = 0
for start in range(len(text)):
seen: Set[str] = set()
for end in range(start, len(text)):
if text[end] in seen:
break
seen.add(text[end])
best = max(best, end - start + 1)
return best
if __name__ == "__main__":
print(length_of_longest_substring_bruteforce("abcabcbb"))
Find the length of the longest substring that can be turned into a single repeated character by replacing at most k characters. Sample Input: text = "AABABBA", k = 1. Sample Output: 4.
Maintain a window and the count of its most frequent character; shrink when window length minus that frequency exceeds k. This invariant guarantees the window represents a valid replacement region.
Solution 1: Primary Approach
from collections import defaultdict
def character_replacement(text: str, k: int) -> int:
counts = defaultdict(int)
start = best = 0
max_count = 0
for end, letter in enumerate(text):
counts[letter] += 1
max_count = max(max_count, counts[letter])
while (end - start + 1) - max_count > k:
counts[text[start]] -= 1
start += 1
best = max(best, end - start + 1)
return best
if __name__ == "__main__":
print(character_replacement("AABABBA", 1))
Solution 2: Sliding Window with Recount
Maintain the window but recompute the highest frequency character after each contraction instead of tracking it lazily. This keeps the logic straightforward at the cost of extra counting work.
from collections import Counter
def character_replacement_recount(text: str, k: int) -> int:
counts = Counter()
left = 0
best = 0
for right, char in enumerate(text):
counts[char] += 1
while (right - left + 1) - max(counts.values()) > k:
counts[text[left]] -= 1
if counts[text[left]] == 0:
del counts[text[left]]
left += 1
best = max(best, right - left + 1)
return best
if __name__ == "__main__":
print(character_replacement_recount("AABABBA", 1))
Return the smallest substring of s that contains all characters of t. Sample Input: text = "ADOBECODEBANC", need = "ABC". Sample Output: BANC.
Expand the window until it covers all required characters, then contract from the left while counts stay satisfied. Keeping deficit counts lets us locate the smallest substring containing the pattern.
Solution 1: Primary Approach
from collections import Counter
def minimum_window_substring(text: str, need: str) -> str:
counter = Counter(need)
required = len(need)
start = best_start = 0
best_length = float('inf')
for end, letter in enumerate(text):
if counter[letter] > 0:
required -= 1
counter[letter] -= 1
while required == 0:
if end - start + 1 < best_length:
best_length = end - start + 1
best_start = start
counter[text[start]] += 1
if counter[text[start]] > 0:
required += 1
start += 1
return "" if best_length == float('inf') else text[best_start:best_start + best_length]
if __name__ == "__main__":
print(minimum_window_substring("ADOBECODEBANC", "ABC"))
Solution 2: Counter Driven Shrink
Use two Counters: one for the target and one for the window. Expand the window, then shrink while coverage remains valid. This mirrors the canonical approach but emphasises explicit coverage checks.
from collections import Counter
def min_window_counter(s: str, t: str) -> str:
if not t or not s:
return ""
need = Counter(t)
window = Counter()
have = 0
required = len(need)
left = 0
answer = (float('inf'), 0, 0)
for right, char in enumerate(s):
window[char] += 1
if char in need and window[char] == need[char]:
have += 1
while have == required and left <= right:
if right - left + 1 < answer[0]:
answer = (right - left + 1, left, right)
left_char = s[left]
window[left_char] -= 1
if left_char in need and window[left_char] < need[left_char]:
have -= 1
left += 1
if answer[0] == float('inf'):
return ""
return s[answer[1]:answer[2] + 1]
if __name__ == "__main__":
print(min_window_counter("ADOBECODEBANC", "ABC"))
Find all starting indices where an anagram of pattern p appears in s. Sample Input: text = "cbaebabacd", pattern = "abc". Sample Output: [0, 6].
Use a fixed-size sliding window while tracking the difference between pattern and window character counts. Whenever the difference map is zero, the current window is an anagram.
Solution 1: Primary Approach
from collections import Counter
from typing import List
def find_anagram_indices(text: str, pattern: str) -> List[int]:
pattern_length = len(pattern)
pattern_counts = Counter(pattern)
window_counts = Counter(text[:pattern_length])
indices = []
if window_counts == pattern_counts:
indices.append(0)
for i in range(pattern_length, len(text)):
entering = text[i]
leaving = text[i - pattern_length]
window_counts[entering] += 1
window_counts[leaving] -= 1
if window_counts[leaving] == 0:
del window_counts[leaving]
if window_counts == pattern_counts:
indices.append(i - pattern_length + 1)
return indices
if __name__ == "__main__":
print(find_anagram_indices("cbaebabacd", "abc"))
Solution 2: Sorted Fingerprint
Sort each window before comparison. The approach is O(n·m log m) but demonstrates the brute-force fingerprinting idea for strings of small length.
from typing import List
def find_anagrams_sorted(text: str, pattern: str) -> List[int]:
m = len(pattern)
target = ''.join(sorted(pattern))
positions: List[int] = []
for start in range(len(text) - m + 1):
window = text[start:start + m]
if ''.join(sorted(window)) == target:
positions.append(start)
return positions
if __name__ == "__main__":
print(find_anagrams_sorted("cbaebabacd", "abc"))
Locate the index where the target should be inserted to keep the array sorted. Sample Input: nums = [1, 2, 4, 4, 5], target = 4. Sample Output: 2.
Binary search for the first position whose value is not less than the target. Because a match tightens the right bound, the search converges on the leftmost such position, which is the correct insertion index even when duplicates are present.
Solution 1: Primary Approach
from typing import List
def lower_bound(nums: List[int], target: int) -> int:
left, right = 0, len(nums)
while left < right:
mid = (left + right) // 2
if nums[mid] < target:
left = mid + 1
else:
right = mid
return left
if __name__ == "__main__":
print(lower_bound([1, 2, 4, 4, 5], 4))
Solution 2: Bisect Module
Delegate to the standard library's bisect_left helper. It implements the same lower-bound semantics with well-tested C code.
from bisect import bisect_left
from typing import List
def lower_bound_bisect(nums: List[int], target: int) -> int:
return bisect_left(nums, target)
if __name__ == "__main__":
print(lower_bound_bisect([1, 2, 4, 4, 5], 4))
Return the first and last indices of a target value in a sorted array. Sample Input: nums = [5, 7, 7, 8, 8, 10], target = 8. Sample Output: [3, 4].
Run two binary searches, one biased left and one right, to locate the first and last positions of the target. Splitting the work into two targeted searches avoids any linear scan and keeps the whole routine O(log n).
Solution 1: Primary Approach
from typing import List
def find_range(nums: List[int], target: int) -> List[int]:
def boundary(search_left: bool) -> int:
left, right = 0, len(nums)
while left < right:
mid = (left + right) // 2
if nums[mid] > target or (search_left and nums[mid] == target):
right = mid
else:
left = mid + 1
return left
left_index = boundary(True)
if left_index == len(nums) or nums[left_index] != target:
return [-1, -1]
right_index = boundary(False) - 1
return [left_index, right_index]
if __name__ == "__main__":
print(find_range([5, 7, 7, 8, 8, 10], 8))
Solution 2: Bisect Boundaries
Use bisect_left for the first position and bisect_right for the exclusive upper bound. Subtracting one from the right boundary produces the last occurrence.
from bisect import bisect_left, bisect_right
from typing import List
def search_range_bisect(nums: List[int], target: int) -> List[int]:
left = bisect_left(nums, target)
if left == len(nums) or nums[left] != target:
return [-1, -1]
right = bisect_right(nums, target) - 1
return [left, right]
if __name__ == "__main__":
print(search_range_bisect([5, 7, 7, 8, 8, 10], 8))
Find the maximum cut length to obtain at least the required number of pieces. Sample Input: woods = [232, 124, 456], required = 7. Sample Output: 114.
Binary search on the candidate cut length and count how many pieces it would produce. If the length yields enough pieces it is feasible, otherwise shrink the range.
Solution 1: Primary Approach
from typing import List
def max_cut_length(woods: List[int], required: int) -> int:
left, right = 1, max(woods)
best = 0
while left <= right:
mid = (left + right) // 2
pieces = sum(length // mid for length in woods)
if pieces >= required:
best = mid
left = mid + 1
else:
right = mid - 1
return best
if __name__ == "__main__":
print(max_cut_length([232, 124, 456], 7))
Solution 2: Linear Decrement
Start from the maximum possible length and decrease until a feasible cut is found. Though it costs O(n · max_length) in the worst case, it demonstrates the search-space semantics clearly.
from typing import List
def max_cut_length_linear(woods: List[int], required: int) -> int:
candidate = max(woods)
while candidate > 0:
pieces = sum(length // candidate for length in woods)
if pieces >= required:
return candidate
candidate -= 1
return 0
if __name__ == "__main__":
print(max_cut_length_linear([232, 124, 456], 7))
Search for a target value in a rotated sorted array in logarithmic time. Sample Input: nums = [4, 5, 6, 7, 0, 1, 2], target = 0. Sample Output: 4.
Identify which side of the current midpoint is sorted and whether the target lies there. Redirecting the search to the sorted half keeps the time logarithmic despite the rotation.
Solution 1: Primary Approach
from typing import List
def search_rotated(nums: List[int], target: int) -> int:
left, right = 0, len(nums) - 1
while left <= right:
mid = (left + right) // 2
if nums[mid] == target:
return mid
if nums[left] <= nums[mid]:
if nums[left] <= target < nums[mid]:
right = mid - 1
else:
left = mid + 1
else:
if nums[mid] < target <= nums[right]:
left = mid + 1
else:
right = mid - 1
return -1
if __name__ == "__main__":
print(search_rotated([4, 5, 6, 7, 0, 1, 2], 0))
Solution 2: Pivot then Binary Search
First locate the smallest element with binary search to find the rotation pivot. Then perform a standard binary search within the appropriate segment.
from typing import List
def search_rotated_with_pivot(nums: List[int], target: int) -> int:
if not nums:
return -1
left, right = 0, len(nums) - 1
while left < right:
mid = (left + right) // 2
if nums[mid] > nums[right]:
left = mid + 1
else:
right = mid
pivot = left
left, right = 0, len(nums) - 1
if nums[pivot] <= target <= nums[right]:
left = pivot
else:
right = pivot - 1
while left <= right:
mid = (left + right) // 2
if nums[mid] == target:
return mid
if nums[mid] < target:
left = mid + 1
else:
right = mid - 1
return -1
if __name__ == "__main__":
print(search_rotated_with_pivot([4,5,6,7,0,1,2], 0))
Find the minimum element in a rotated sorted array without duplicates. Sample Input: nums = [3, 4, 5, 1, 2]. Sample Output: 1.
Compare mid against the rightmost value to decide which half is unsorted. The minimum must live in the unsorted segment, so narrowing there converges to the pivot.
Solution 1: Primary Approach
from typing import List
def find_min_rotated(nums: List[int]) -> int:
left, right = 0, len(nums) - 1
while left < right:
mid = (left + right) // 2
if nums[mid] > nums[right]:
left = mid + 1
else:
right = mid
return nums[left]
if __name__ == "__main__":
print(find_min_rotated([3, 4, 5, 1, 2]))
Solution 2: Linear Scan
Scan the array once and keep the smallest value seen so far. Although O(n), it is easy to reason about and works for small arrays.
from typing import List
def find_min_linear(nums: List[int]) -> int:
minimum = nums[0]
for value in nums[1:]:
if value < minimum:
minimum = value
return minimum
if __name__ == "__main__":
print(find_min_linear([4,5,6,7,0,1,2]))
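If the rotation point itself is wanted rather than just the minimum value, a short sketch (the helper name find_min_by_rotation_point is my own) can scan adjacent pairs and return the element just after the single descent; it assumes the array holds no duplicates, as the problem states.
from typing import List
def find_min_by_rotation_point(nums: List[int]) -> int:
    # The sorted order breaks at most once; the element right after that break is the minimum.
    for i in range(len(nums) - 1):
        if nums[i] > nums[i + 1]:
            return nums[i + 1]
    return nums[0]  # no break found, so the array was not rotated
if __name__ == "__main__":
    print(find_min_by_rotation_point([3, 4, 5, 1, 2]))  # 1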
Compute the median of two sorted arrays in logarithmic time. Sample Input: nums1 = [1, 3], nums2 = [2]. Sample Output: 2.0.
Binary search the partition of the smaller array so that left partitions contain half of the elements. Once both sides are ordered around the cut, the median follows from the bordering values.
Solution 1: Primary Approach
from typing import List
def median_two_sorted(nums1: List[int], nums2: List[int]) -> float:
if len(nums1) > len(nums2):
nums1, nums2 = nums2, nums1
m, n = len(nums1), len(nums2)
left, right = 0, m
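# the combined left half holds (m + n + 1) // 2 elements; the +1 keeps the extra element on the left when the total is odd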
half = (m + n + 1) // 2
while left <= right:
partition_m = (left + right) // 2
partition_n = half - partition_m
left_max_m = float('-inf') if partition_m == 0 else nums1[partition_m - 1]
right_min_m = float('inf') if partition_m == m else nums1[partition_m]
left_max_n = float('-inf') if partition_n == 0 else nums2[partition_n - 1]
right_min_n = float('inf') if partition_n == n else nums2[partition_n]
if left_max_m <= right_min_n and left_max_n <= right_min_m:
if (m + n) % 2 == 0:
return (max(left_max_m, left_max_n) + min(right_min_m, right_min_n)) / 2
return float(max(left_max_m, left_max_n))
if left_max_m > right_min_n:
right = partition_m - 1
else:
left = partition_m + 1
return 0.0
if __name__ == "__main__":
print(median_two_sorted([1, 3], [2]))
Solution 2: Merge then Median
Merge the arrays using the two-pointer technique and compute the median of the combined list. This matches the brute-force baseline many candidates start with.
from typing import List
def find_median_merge(nums1: List[int], nums2: List[int]) -> float:
merged: List[int] = []
i = j = 0
while i < len(nums1) and j < len(nums2):
if nums1[i] <= nums2[j]:
merged.append(nums1[i])
i += 1
else:
merged.append(nums2[j])
j += 1
merged.extend(nums1[i:])
merged.extend(nums2[j:])
n = len(merged)
if n % 2:
return float(merged[n // 2])
return (merged[n // 2 - 1] + merged[n // 2]) / 2
if __name__ == "__main__":
print(find_median_merge([1, 3], [2]))
Search for a target in a matrix treated as a flattened sorted list. Sample Input: matrix = [[1,3,5],[7,9,11],[13,15,17]], target = 9. Sample Output: True.
Treat the matrix as a flattened sorted list and perform binary search on indices. Mapping the midpoint back to row and column via divmod keeps the logic simple and logarithmic.
Solution 1: Primary Approach
from typing import List
def search_matrix(matrix: List[List[int]], target: int) -> bool:
if not matrix or not matrix[0]:
return False
rows, cols = len(matrix), len(matrix[0])
left, right = 0, rows * cols - 1
while left <= right:
mid = (left + right) // 2
row, col = divmod(mid, cols)
value = matrix[row][col]
if value == target:
return True
if value < target:
left = mid + 1
else:
right = mid - 1
return False
if __name__ == "__main__":
grid = [[1, 3, 5], [7, 9, 11], [13, 15, 17]]
print(search_matrix(grid, 9))
Solution 2: Row-wise Binary Search
Binary search each row individually until the target is found. The complexity is O(m log n) and mirrors the simple baseline solution.
from typing import List
def search_matrix_rows(matrix: List[List[int]], target: int) -> bool:
if not matrix or not matrix[0]:
return False
for row in matrix:
left, right = 0, len(row) - 1
while left <= right:
mid = (left + right) // 2
if row[mid] == target:
return True
if row[mid] < target:
left = mid + 1
else:
right = mid - 1
return False
if __name__ == "__main__":
grid = [
[1, 4, 7, 11, 15],
[2, 5, 8, 12, 19],
[3, 6, 9, 16, 22],
[10, 13, 14, 17, 24],
[18, 21, 23, 26, 30],
]
print(search_matrix_rows(grid, 5))
print(search_matrix_rows(grid, 20))
Find an index of a peak element that is not smaller than its neighbors. Sample Input: nums = [1, 3, 5, 4, 2]. Sample Output: 2.
Binary search by comparing mid with mid+1; climb toward the side that rises. Eventually the pointers converge on a peak because every move follows an uphill edge.
Solution 1: Primary Approach
from typing import List
def find_peak(nums: List[int]) -> int:
left, right = 0, len(nums) - 1
while left < right:
mid = (left + right) // 2
if nums[mid] < nums[mid + 1]:
left = mid + 1
else:
right = mid
return left
if __name__ == "__main__":
print(find_peak([1, 3, 5, 4, 2]))
Solution 2: Gradient Walk
Walk linearly until a peak is found by comparing neighbours. The method stops when the trend switches from rising to falling.
from typing import List
def find_peak_linear(nums: List[int]) -> int:
for i in range(1, len(nums) - 1):
if nums[i] >= nums[i - 1] and nums[i] >= nums[i + 1]:
return i
return 0 if nums[0] >= nums[-1] else len(nums) - 1
if __name__ == "__main__":
print(find_peak_linear([1, 3, 4, 3, 2]))
Pick an index according to weights using prefix sums and binary search. Sample Input: weights = [1, 3, 2]; value = 0.2 then 0.8. Sample Output: 1 2.
Precompute prefix sums of the weights and multiply the random value by the total weight. A binary search over the prefix array finds which bucket the sample falls into.
Solution 1: Primary Approach
from bisect import bisect_left
from typing import List
class WeightedPicker:
def __init__(self, weights: List[int]) -> None:
self.prefix = []
total = 0
for weight in weights:
total += weight
self.prefix.append(total)
self.total = total
def pick_by_value(self, value: float) -> int:
scaled = value * self.total
return bisect_left(self.prefix, scaled + 1e-9)
if __name__ == "__main__":
picker = WeightedPicker([1, 3, 2])
print(picker.pick_by_value(0.2))
print(picker.pick_by_value(0.8))
Solution 2: Prefix Array + Linear Pick
Create a prefix sum array and select a random weight by linearly scanning the prefix list. This removes the inner binary search at the cost of O(n) selection.
import random
from typing import List
class WeightedPickerLinear:
def __init__(self, weights: List[int]) -> None:
self.prefix: List[int] = []
total = 0
for weight in weights:
total += weight
self.prefix.append(total)
self.total = total
def pick_index(self) -> int:
target = random.randint(1, self.total)
for index, value in enumerate(self.prefix):
if target <= value:
return index
return len(self.prefix) - 1
if __name__ == "__main__":
picker = WeightedPickerLinear([1, 3, 2])
print([picker.pick_index() for _ in range(5)])
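For comparison, the standard library can perform the weighted draw directly; a minimal sketch (the helper name pick_index_stdlib is my own) that delegates to random.choices, which applies the same cumulative-weight idea internally.
import random
from typing import List
def pick_index_stdlib(weights: List[int]) -> int:
    # random.choices accepts per-item weights and returns a list of k samples.
    return random.choices(range(len(weights)), weights=weights, k=1)[0]
if __name__ == "__main__":
    random.seed(0)
    print([pick_index_stdlib([1, 3, 2]) for _ in range(5)])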
Determine whether a bracket string is well-formed. Sample Input: "()[]{}" and "(]". Sample Output: True False.
Push opening brackets onto a stack and pop when matching closers appear. Any mismatch or leftover entries shows the string is invalid.
Solution 1: Primary Approach
def is_valid_parentheses(text: str) -> bool:
mapping = {')': '(', ']': '[', '}': '{'}
stack: list[str] = []
for char in text:
if char in mapping:
if not stack or stack.pop() != mapping[char]:
return False
else:
stack.append(char)
return not stack
if __name__ == "__main__":
print(is_valid_parentheses("()[]{}"))
print(is_valid_parentheses("(]"))
Solution 2: String Collapsing
Repeatedly remove occurrences of (), [], and {} until the string stops changing. If the string becomes empty it was well-formed; otherwise mismatched brackets remain.
def is_valid_parentheses_collapse(text: str) -> bool:
previous = None
current = text
while current != previous:
previous = current
current = current.replace('()', '').replace('[]', '').replace('{}', '')
return current == ''
if __name__ == "__main__":
print(is_valid_parentheses_collapse("()[]{}"))
print(is_valid_parentheses_collapse("(]"))
For each element, find the next greater value to its right. Sample Input: [2, 1, 2, 4, 3]. Sample Output: [4, 2, 4, -1, -1].
Traverse from right to left while maintaining a decreasing stack of candidates. Popping smaller elements leaves the nearest larger element on top for each position.
Solution 1: Primary Approach
from typing import List
def next_greater_elements(nums: List[int]) -> List[int]:
result = [-1] * len(nums)
stack: list[int] = []
for index, value in enumerate(nums):
while stack and nums[stack[-1]] < value:
previous_index = stack.pop()
result[previous_index] = value
stack.append(index)
return result
if __name__ == "__main__":
print(next_greater_elements([2, 1, 2, 4, 3]))
Solution 2: Double Loop
For each index, scan to the right until a larger element appears. The quadratic runtime is acceptable for small arrays and clarifies the goal.
from typing import List
def next_greater_elements_bruteforce(nums: List[int]) -> List[int]:
result: List[int] = []
for i, value in enumerate(nums):
greater = -1
for j in range(i + 1, len(nums)):
if nums[j] > value:
greater = nums[j]
break
result.append(greater)
return result
if __name__ == "__main__":
print(next_greater_elements_bruteforce([2, 1, 2, 4, 3]))
Evaluate a basic arithmetic expression with parentheses and operators. Sample Input: "3 + 2 * (2 + 5)". Sample Output: 17.
Use two stacks, one for numbers and one for operators, and apply precedence when pushing operations. Evaluating as soon as higher precedence operations appear yields the correct result.
Solution 1: Primary Approach
def evaluate_expression(expression: str) -> int:
def apply(operator: str, b: int, a: int) -> int:
if operator == '+':
return a + b
if operator == '-':
return a - b
if operator == '*':
return a * b
return int(a / b)
precedence = {'+': 1, '-': 1, '*': 2, '/': 2}
values: list[int] = []
operators: list[str] = []
index = 0
while index < len(expression):
char = expression[index]
if char == ' ':
index += 1
continue
if char.isdigit():
number = 0
while index < len(expression) and expression[index].isdigit():
number = number * 10 + int(expression[index])
index += 1
values.append(number)
continue
if char == '(':
operators.append(char)
elif char == ')':
while operators and operators[-1] != '(':
op = operators.pop()
values.append(apply(op, values.pop(), values.pop()))
operators.pop()
else:
while operators and operators[-1] in precedence and precedence[operators[-1]] >= precedence[char]:
op = operators.pop()
values.append(apply(op, values.pop(), values.pop()))
operators.append(char)
index += 1
while operators:
op = operators.pop()
values.append(apply(op, values.pop(), values.pop()))
return values[-1]
if __name__ == "__main__":
print(evaluate_expression("3 + 2 * (2 + 5)"))
Solution 2: Recursive Descent Parser
Parse the grammar expression → term → factor with recursion. Each helper consumes characters and returns the evaluated sub-expression.
class ExpressionEvaluator:
def __init__(self, expression: str) -> None:
self.expression = expression.replace(' ', '')
self.index = 0
def parse(self) -> int:
return self._parse_expression()
def _parse_expression(self) -> int:
value = self._parse_term()
while self._peek() in ('+', '-'):
op = self._next()
rhs = self._parse_term()
if op == '+':
value += rhs
else:
value -= rhs
return value
def _parse_term(self) -> int:
value = self._parse_factor()
while self._peek() in ('*', '/'):
op = self._next()
rhs = self._parse_factor()
if op == '*':
value *= rhs
else:
value = int(value / rhs)
return value
def _parse_factor(self) -> int:
if self._peek() == '(':
self._next()
value = self._parse_expression()
self._next() # consume ')'
return value
return self._parse_number()
def _parse_number(self) -> int:
start = self.index
while self._peek().isdigit():
self._next()
return int(self.expression[start:self.index])
def _peek(self) -> str:
if self.index >= len(self.expression):
return ''  # end-of-input sentinel: neither a digit nor an operator, so every parsing loop stops cleanly
return self.expression[self.index]
def _next(self) -> str:
char = self.expression[self.index]
self.index += 1
return char
def evaluate_expression_recursive(expression: str) -> int:
return ExpressionEvaluator(expression).parse()
if __name__ == "__main__":
print(evaluate_expression_recursive("3 + 2 * (2 + 5)"))
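# Extra checks (not part of the original sample): an expression that ends in a plain number
# exercises the end-of-input sentinel in _peek, and division truncates toward zero as in Solution 1.
print(evaluate_expression_recursive("3+2"))  # 5
print(evaluate_expression_recursive("10/4"))  # 2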
Eliminate adjacent duplicate characters from a string repeatedly. Sample Input: "abbaca". Sample Output: "ca".
Use a stack to build the result, discarding characters when they match the top. Each collapse removes both duplicates and naturally handles cascading removals.
Solution 1: Primary Approach
def remove_adjacent_duplicates(text: str) -> str:
stack: list[str] = []
for char in text:
if stack and stack[-1] == char:
stack.pop()
else:
stack.append(char)
return ''.join(stack)
if __name__ == "__main__":
print(remove_adjacent_duplicates("abbaca"))
Solution 2: Recursion
Remove adjacent duplicates and then recurse on the shortened string until no changes remain.
def remove_adjacent_duplicates_recursive(text: str) -> str:
stack = []
for char in text:
if stack and stack[-1] == char:
stack.pop()
else:
stack.append(char)
reduced = ''.join(stack)
if reduced == text:
return reduced
return remove_adjacent_duplicates_recursive(reduced)
if __name__ == "__main__":
print(remove_adjacent_duplicates_recursive("abbaca"))
Simulate a FIFO queue using two LIFO stacks. Sample Input: push 1, push 2, peek, pop. Sample Output: 1 1.
Maintain an input stack for pushes and an output stack for pops and peeks. Only when the output stack is empty do you pour elements over, giving amortized O(1) operations.
Solution 1: Primary Approach
class MyQueue:
def __init__(self) -> None:
self.in_stack: list[int] = []
self.out_stack: list[int] = []
def push(self, value: int) -> None:
self.in_stack.append(value)
def _shift(self) -> None:
if not self.out_stack:
while self.in_stack:
self.out_stack.append(self.in_stack.pop())
def pop(self) -> int:
self._shift()
return self.out_stack.pop()
def peek(self) -> int:
self._shift()
return self.out_stack[-1]
def empty(self) -> bool:
return not self.in_stack and not self.out_stack
if __name__ == "__main__":
queue = MyQueue()
queue.push(1)
queue.push(2)
print(queue.peek())
print(queue.pop())
Solution 2: Deque Simulation
Use a deque to simulate the queue directly. Although the standard solution sticks to two stacks, Python's deque demonstrates the target behaviour succinctly.
from collections import deque
class MyQueueDeque:
def __init__(self) -> None:
self.data = deque()
def push(self, value: int) -> None:
self.data.append(value)
def pop(self) -> int:
return self.data.popleft()
def peek(self) -> int:
return self.data[0]
def empty(self) -> bool:
return not self.data
if __name__ == "__main__":
queue = MyQueueDeque()
queue.push(1)
queue.push(2)
print(queue.peek())
print(queue.pop())
Return the maximum for every sliding window of size k in an array. Sample Input: nums = [1,3,-1,-3,5,3,6,7], k = 3. Sample Output: [3, 3, 5, 5, 6, 7].
Keep a deque of indices in decreasing order of values; drop smaller elements from the back and expired indices from the front. The deque's head always holds the maximum for the current window.
Solution 1: Primary Approach
from collections import deque
from typing import List
def max_sliding_window(nums: List[int], k: int) -> List[int]:
result: list[int] = []
window = deque()
for index, value in enumerate(nums):
while window and window[0] <= index - k:
window.popleft()
while window and nums[window[-1]] <= value:
window.pop()
window.append(index)
if index >= k - 1:
result.append(nums[window[0]])
return result
if __name__ == "__main__":
print(max_sliding_window([1, 3, -1, -3, 5, 3, 6, 7], 3))
Solution 2: Max-Heap Window
Maintain a max-heap of (value, index) pairs and discard entries that fall out of the window. This approach is O(n log k) yet conceptually straightforward.
import heapq
from typing import List, Tuple
def max_sliding_window_heap(nums: List[int], k: int) -> List[int]:
heap: List[Tuple[int, int]] = []
result: List[int] = []
for index, value in enumerate(nums):
heapq.heappush(heap, (-value, index))
while heap[0][1] <= index - k:
heapq.heappop(heap)
if index >= k - 1:
result.append(-heap[0][0])
return result
if __name__ == "__main__":
print(max_sliding_window_heap([1, 3, -1, -3, 5, 3, 6, 7], 3))
Find the maximum rectangle area in a histogram. Sample Input: heights = [2,1,5,6,2,3]. Sample Output: 10.
Maintain a monotonic stack of increasing bar indices and compute areas when the sequence drops. Adding a sentinel zero height flushes the stack so every bar acts as the limiting height once.
Solution 1: Primary Approach
from typing import List
def largest_rectangle_area(heights: List[int]) -> int:
stack: list[int] = []
best = 0
heights.append(0)
for index, height in enumerate(heights):
while stack and heights[stack[-1]] > height:
h = heights[stack.pop()]
left = stack[-1] if stack else -1
width = index - left - 1
best = max(best, h * width)
stack.append(index)
heights.pop()
return best
if __name__ == "__main__":
print(largest_rectangle_area([2, 1, 5, 6, 2, 3]))
Solution 2: Divide and Conquer
Recursively compute the best area in the left half, right half, and the crossing rectangle that uses the minimum bar as a pivot. This averages O(n log n) on random inputs but degrades to O(n²) when the heights are sorted.
from typing import List
def largest_rectangle_divide_conquer(heights: List[int]) -> int:
def solve(start: int, end: int) -> int:
if start > end:
return 0
if start == end:
return heights[start]
min_index = min(range(start, end + 1), key=lambda i: heights[i])
area_with_min = heights[min_index] * (end - start + 1)
return max(
area_with_min,
solve(start, min_index - 1),
solve(min_index + 1, end),
)
return solve(0, len(heights) - 1)
if __name__ == "__main__":
print(largest_rectangle_divide_conquer([2, 1, 5, 6, 2, 3]))
Compute days until a warmer temperature for each day in the list. Sample Input: [73,74,75,71,69,72,76,73]. Sample Output: [1, 1, 4, 2, 1, 1, 0, 0].
Traverse temperatures while the stack tracks indices with unresolved warmer days. When a warmer day arrives, pop indices and record the distance to that day.
Solution 1: Primary Approach
from typing import List
def daily_temperatures(temperatures: List[int]) -> List[int]:
answer = [0] * len(temperatures)
stack: list[int] = []
for index, value in enumerate(temperatures):
while stack and temperatures[stack[-1]] < value:
previous = stack.pop()
answer[previous] = index - previous
stack.append(index)
return answer
if __name__ == "__main__":
print(daily_temperatures([73, 74, 75, 71, 69, 72, 76, 73]))
Solution 2: Reverse Traversal
Traverse the array from right to left, jumping forward using stored distances until a warmer day is found.
from typing import List
def daily_temperatures_reverse(temperatures: List[int]) -> List[int]:
n = len(temperatures)
answer = [0] * n
for i in range(n - 2, -1, -1):
j = i + 1
while j < n and temperatures[j] <= temperatures[i] and answer[j] != 0:
j += answer[j]
if j < n and temperatures[j] > temperatures[i]:
answer[i] = j - i
return answer
if __name__ == "__main__":
print(daily_temperatures_reverse([73, 74, 75, 71, 69, 72, 76, 73]))
Maintain the median of a dynamically updating list of numbers. Sample Input: stream = [1, 2, 3]. Sample Output: 2.0.
Keep a max-heap for the lower half and a min-heap for the upper half, rebalancing after each insertion. The median is either the top of one heap or the average of both tops depending on parity.
Solution 1: Primary Approach
import heapq
class MedianFinder:
def __init__(self) -> None:
self.small: list[int] = [] # max heap as negatives
self.large: list[int] = [] # min heap
def add_num(self, num: int) -> None:
heapq.heappush(self.small, -num)
heapq.heappush(self.large, -heapq.heappop(self.small))
if len(self.large) > len(self.small):
heapq.heappush(self.small, -heapq.heappop(self.large))
def find_median(self) -> float:
if len(self.small) > len(self.large):
return float(-self.small[0])
return (-self.small[0] + self.large[0]) / 2.0
if __name__ == "__main__":
finder = MedianFinder()
for number in [1, 2, 3]:
finder.add_num(number)
print(finder.find_median())
Solution 2: Sorted List Maintenance
Insert each number into a sorted list using bisect. The median is then taken directly from the list. Updates are O(n) but logic stays simple.
from bisect import insort
class MedianFinderSorted:
def __init__(self) -> None:
self.data: list[int] = []
def add_num(self, num: int) -> None:
insort(self.data, num)
def find_median(self) -> float:
n = len(self.data)
if n % 2:
return float(self.data[n // 2])
return (self.data[n // 2 - 1] + self.data[n // 2]) / 2
if __name__ == "__main__":
finder = MedianFinderSorted()
for number in [1, 2, 3]:
finder.add_num(number)
print(finder.find_median())
Return the K most frequent integers using a heap. Sample Input: nums = [1,1,1,2,2,3], k = 2. Sample Output: [1, 2].
Count frequencies and maintain a min-heap of size k keyed by frequency. When the heap grows past k, removing the smallest leaves the top k frequent elements.
Solution 1: Primary Approach
from typing import List
from collections import Counter
import heapq
def top_k_frequent(nums: List[int], k: int) -> List[int]:
counts = Counter(nums)
heap: list[tuple[int, int]] = []
for value, frequency in counts.items():
heapq.heappush(heap, (frequency, value))
if len(heap) > k:
heapq.heappop(heap)
return [value for _, value in sorted(heap, reverse=True)]
if __name__ == "__main__":
print(top_k_frequent([1, 1, 1, 2, 2, 3], 2))
Solution 2: Frequency Sort
Sort the frequency table descending by count and slice the top k values.
from collections import Counter
from typing import List
def top_k_frequent_sort(nums: List[int], k: int) -> List[int]:
counts = Counter(nums)
ordered = sorted(counts.items(), key=lambda item: item[1], reverse=True)
return [value for value, _ in ordered[:k]]
if __name__ == "__main__":
print(top_k_frequent_sort([1, 1, 1, 2, 2, 3], 2))
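For reference, a minimal standard-library sketch (the helper name top_k_frequent_most_common is my own) built on Counter.most_common; in CPython, most_common(k) is implemented with heapq.nlargest, so it is close in spirit to the heap solution above.
from collections import Counter
from typing import List
def top_k_frequent_most_common(nums: List[int], k: int) -> List[int]:
    # most_common(k) returns the k (value, count) pairs with the highest counts.
    return [value for value, _ in Counter(nums).most_common(k)]
if __name__ == "__main__":
    print(top_k_frequent_most_common([1, 1, 1, 2, 2, 3], 2))  # [1, 2]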
Merge multiple sorted linked lists using a min-heap. Sample Input: [[1,4], [2,3]]. Sample Output: [1, 2, 3, 4].
Push each list head into a min-heap and repeatedly pop the smallest node to append. Pushing the popped node's successor keeps the heap stocked until all nodes are output.
Solution 1: Primary Approach
from typing import Optional, List
import heapq
class ListNode:
def __init__(self, value: int) -> None:
self.value = value
self.next: Optional['ListNode'] = None
def merge_sorted_lists(lists: List[Optional[ListNode]]) -> Optional[ListNode]:
heap: list[tuple[int, int, ListNode]] = []
for index, node in enumerate(lists):
if node:
heapq.heappush(heap, (node.value, index, node))
dummy = tail = ListNode(0)
while heap:
_, index, node = heapq.heappop(heap)
tail.next = node
tail = tail.next
if node.next:
heapq.heappush(heap, (node.next.value, index, node.next))
return dummy.next
if __name__ == "__main__":
a = ListNode(1)
a.next = ListNode(4)
b = ListNode(2)
b.next = ListNode(3)
merged = merge_sorted_lists([a, b])
values = []
while merged:
values.append(merged.value)
merged = merged.next
print(values)
Solution 2: Sequential Merge
Repeatedly merge lists two at a time without a heap. Each merge runs in linear time, so merging k lists sequentially costs O(kN).
from typing import Optional, List
class ListNode:
def __init__(self, value: int, next_node: Optional["ListNode"] = None) -> None:
self.value = value
self.next = next_node
def merge_two(first: Optional[ListNode], second: Optional[ListNode]) -> Optional[ListNode]:
dummy = tail = ListNode(0)
while first and second:
if first.value <= second.value:
tail.next = first
first = first.next
else:
tail.next = second
second = second.next
tail = tail.next
tail.next = first or second
return dummy.next
def merge_k_lists_sequential(lists: List[Optional[ListNode]]) -> Optional[ListNode]:
result: Optional[ListNode] = None
for node in lists:
result = merge_two(result, node)
return result
if __name__ == "__main__":
a = ListNode(1, ListNode(4))
b = ListNode(1, ListNode(3))
c = ListNode(2, ListNode(6))
merged = merge_k_lists_sequential([a, b, c])
values = []
while merged:
values.append(merged.value)
merged = merged.next
print(values)
Sort an array where each element is at most k positions from its sorted spot. Sample Input: nums = [6,5,3,2,8,10,9], k = 3. Sample Output: [2, 3, 5, 6, 8, 9, 10].
Insert at most k+1 items into a min-heap and pop the smallest to produce the sorted order. Because each element is near its final position, the heap never grows beyond k+1 entries.
Solution 1: Primary Approach
from typing import List
import heapq
def sort_k_sorted_array(nums: List[int], k: int) -> List[int]:
heap = nums[:k + 1]
heapq.heapify(heap)
target = 0
for index in range(k + 1, len(nums)):
nums[target] = heapq.heappop(heap)
heapq.heappush(heap, nums[index])
target += 1
while heap:
nums[target] = heapq.heappop(heap)
target += 1
return nums
if __name__ == "__main__":
print(sort_k_sorted_array([6, 5, 3, 2, 8, 10, 9], 3))
Solution 2: Insertion Sort
Insertion sort performs well when each element is at most k positions away from its target location: every element shifts at most k times, giving O(nk) overall.
from typing import List
def sort_k_sorted_insertion(nums: List[int]) -> List[int]:
for i in range(1, len(nums)):
key = nums[i]
j = i - 1
while j >= 0 and nums[j] > key:
nums[j + 1] = nums[j]
j -= 1
nums[j + 1] = key
return nums
if __name__ == "__main__":
print(sort_k_sorted_insertion([3, 1, 2, 4, 5]))
Merge all overlapping intervals into disjoint segments. Sample Input: [[1,3],[2,6],[8,10],[15,18]]. Sample Output: [[1, 6], [8, 10], [15, 18]].
Sort intervals by start time and extend the current interval while overlaps exist. Once an interval no longer overlaps, append it and start a new merge window.
Solution 1: Primary Approach
from typing import List
def merge_intervals(intervals: List[List[int]]) -> List[List[int]]:
if not intervals:
return []
intervals.sort()
merged = [intervals[0]]
for start, end in intervals[1:]:
last_end = merged[-1][1]
if start <= last_end:
merged[-1][1] = max(last_end, end)
else:
merged.append([start, end])
return merged
if __name__ == "__main__":
print(merge_intervals([[1, 3], [2, 6], [8, 10], [15, 18]]))
Solution 2: Line Sweep
Treat interval endpoints as events and accumulate active segments. Whenever the active count drops to zero a merged interval completes.
from typing import List, Tuple
def merge_intervals_line_sweep(intervals: List[Tuple[int, int]]) -> List[Tuple[int, int]]:
events = []
for start, end in intervals:
events.append((start, 1))
events.append((end, -1))
events.sort(key=lambda event: (event[0], -event[1]))  # process starts before ends at the same point so touching intervals merge
merged: List[Tuple[int, int]] = []
active = 0
start = None
for point, delta in events:
if active == 0:
start = point
active += delta
if active == 0 and start is not None:
merged.append((start, point))
return merged
if __name__ == "__main__":
print(merge_intervals_line_sweep([(1, 3), (2, 6), (8, 10), (15, 18)]))
Insert a new interval into a sorted list and merge where necessary. Sample Input: intervals = [[1,2],[3,5],[6,7],[8,10],[12,16]], new = [4,8]. Sample Output: [[1, 2], [3, 10], [12, 16]].
Add the new interval in order, merging as long as overlaps remain, and then append the untouched remainder. This single pass handles both insertion and merging.
Solution 1: Primary Approach
from typing import List
def insert_interval(intervals: List[List[int]], new_interval: List[int]) -> List[List[int]]:
result = []
index = 0
while index < len(intervals) and intervals[index][1] < new_interval[0]:
result.append(intervals[index])
index += 1
while index < len(intervals) and intervals[index][0] <= new_interval[1]:
new_interval[0] = min(new_interval[0], intervals[index][0])
new_interval[1] = max(new_interval[1], intervals[index][1])
index += 1
result.append(new_interval)
result.extend(intervals[index:])
return result
if __name__ == "__main__":
print(insert_interval([[1, 2], [3, 5], [6, 7], [8, 10], [12, 16]], [4, 8]))
Solution 2: Append and Merge
Append the new interval to the list, sort by start, and reuse the interval merging helper to combine overlaps.
from typing import List, Tuple
def insert_interval_sort(intervals: List[Tuple[int, int]], new_interval: Tuple[int, int]) -> List[Tuple[int, int]]:
all_intervals = sorted(intervals + [new_interval])
merged: List[Tuple[int, int]] = []
for start, end in all_intervals:
if not merged or start > merged[-1][1]:
merged.append([start, end])
else:
merged[-1][1] = max(merged[-1][1], end)
return [tuple(interval) for interval in merged]
if __name__ == "__main__":
print(insert_interval_sort([(1, 3), (6, 9)], (2, 5)))
List every pair of intervals that overlap in time. Sample Input: [(1,5),(3,7),(8,10)]. Sample Output: [((1, 5), (3, 7))].
Sort intervals and for each start, scan forward while the next start lies before the current end. Early breaks after a non-overlap keep the nested loop practical.
Solution 1: Primary Approach
from typing import List, Tuple
def overlapping_pairs(intervals: List[Tuple[int, int]]) -> List[Tuple[Tuple[int, int], Tuple[int, int]]]:
intervals = sorted(intervals)
overlaps = []
for index in range(len(intervals)):
for next_index in range(index + 1, len(intervals)):
first = intervals[index]
second = intervals[next_index]
if second[0] <= first[1]:
overlaps.append((first, second))
else:
break
return overlaps
if __name__ == "__main__":
pairs = overlapping_pairs([(1, 5), (3, 7), (8, 10)])
print(pairs)
Solution 2: Brute Force Pairs
Check every pair of intervals for overlap using the max-start <= min-end condition, recording the index pair of each overlap.
from typing import List, Tuple
def overlapping_pairs_bruteforce(intervals: List[Tuple[int, int]]) -> List[Tuple[int, int]]:
overlaps: List[Tuple[int, int]] = []
for i in range(len(intervals)):
for j in range(i + 1, len(intervals)):
a_start, a_end = intervals[i]
b_start, b_end = intervals[j]
if max(a_start, b_start) <= min(a_end, b_end):
overlaps.append((i, j))
return overlaps
if __name__ == "__main__":
print(overlapping_pairs_bruteforce([(1, 5), (2, 3), (6, 8)]))
Determine the maximum number of simultaneous intervals. Sample Input: [[1,4],[2,5],[7,9]]. Sample Output: 2.
Convert starts and ends into sweep events and accumulate active counts as you traverse. The peak active count equals the maximum number of simultaneous intervals.
Solution 1: Primary Approach
from typing import List
def max_overlap(intervals: List[List[int]]) -> int:
events = []
for start, end in intervals:
events.append((start, 1))
events.append((end, -1))
events.sort()
active = best = 0
for _, change in events:
active += change
best = max(best, active)
return best
if __name__ == "__main__":
print(max_overlap([[1, 4], [2, 5], [7, 9]]))
Solution 2: Heap Based Sweep
Sort intervals by start time and push their end times into a min-heap. Removing expired intervals keeps the heap size equal to the number of simultaneous meetings.
import heapq
from typing import List, Tuple
def max_overlap_heap(intervals: List[Tuple[int, int]]) -> int:
intervals = sorted(intervals)
heap: List[int] = []
best = 0
for start, end in intervals:
while heap and heap[0] <= start:
heapq.heappop(heap)
heapq.heappush(heap, end)
best = max(best, len(heap))
return best
if __name__ == "__main__":
print(max_overlap_heap([(1, 4), (2, 5), (7, 9)]))
Return the minimum number of meeting rooms required. Sample Input: [[0,30],[5,10],[15,20]]. Sample Output: 2.
Sort start and end times separately and advance whichever happens first. Tracking the number of active meetings reveals how many rooms are required at once.
Solution 1: Primary Approach
from typing import List
def min_meeting_rooms(intervals: List[List[int]]) -> int:
starts = sorted(start for start, _ in intervals)
ends = sorted(end for _, end in intervals)
rooms = best = start_index = end_index = 0
while start_index < len(starts):
if starts[start_index] < ends[end_index]:
rooms += 1
best = max(best, rooms)
start_index += 1
else:
rooms -= 1
end_index += 1
return best
if __name__ == "__main__":
print(min_meeting_rooms([[0, 30], [5, 10], [15, 20]]))
Solution 2: Chronological Ordering
Split start and end times into separate arrays, sort them, and walk both pointers to count ongoing meetings.
from typing import List, Tuple
def min_meeting_rooms_chrono(intervals: List[Tuple[int, int]]) -> int:
starts = sorted(start for start, _ in intervals)
ends = sorted(end for _, end in intervals)
rooms = best = 0
i = j = 0
while i < len(starts):
if starts[i] < ends[j]:
rooms += 1
best = max(best, rooms)
i += 1
else:
rooms -= 1
j += 1
return best
if __name__ == "__main__":
print(min_meeting_rooms_chrono([(0, 30), (5, 10), (15, 20)]))
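For contrast with the chronological sweep, a hedged sketch (the helper name min_meeting_rooms_heap is my own) reuses the min-heap-of-end-times idea from the previous problem: the heap holds the end times of meetings currently in progress, and its largest size is the answer.
import heapq
from typing import List, Tuple
def min_meeting_rooms_heap(intervals: List[Tuple[int, int]]) -> int:
    heap: List[int] = []  # end times of the meetings currently occupying a room
    rooms = 0
    for start, end in sorted(intervals):
        while heap and heap[0] <= start:  # release rooms whose meeting has already ended
            heapq.heappop(heap)
        heapq.heappush(heap, end)
        rooms = max(rooms, len(heap))
    return rooms
if __name__ == "__main__":
    print(min_meeting_rooms_heap([(0, 30), (5, 10), (15, 20)]))  # 2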
Select non-overlapping weighted intervals to maximize total profit. Sample Input: [(1,2,50),(3,5,20),(6,19,100),(2,100,200)]. Sample Output: 250.
Sort jobs by end time and use DP to compare skipping a job with taking it plus the best compatible job found via binary search. The recurrence maximizes profit.
Solution 1: Primary Approach
from typing import List, Tuple
import bisect
def weighted_interval_scheduling(jobs: List[Tuple[int, int, int]]) -> int:
jobs = sorted(jobs, key=lambda job: job[1])
ends = [job[1] for job in jobs]
dp = [0] * (len(jobs) + 1)
for index, (start, end, profit) in enumerate(jobs, 1):
prev = bisect.bisect_right(ends, start, hi=index - 1)
dp[index] = max(dp[index - 1], dp[prev] + profit)
return dp[-1]
if __name__ == "__main__":
work = [(1, 2, 50), (3, 5, 20), (6, 19, 100), (2, 100, 200)]
print(weighted_interval_scheduling(work))
Solution 2: Recursive Memo
Sort the intervals by start time, then recursively pick or skip each one while memoising on the current index; after picking an interval, binary search for the first later interval that starts once it has ended.
from bisect import bisect_left
from functools import lru_cache
from typing import List, Tuple
def weighted_interval_scheduling_recursive(intervals: List[Tuple[int, int, int]]) -> int:
intervals = sorted(intervals, key=lambda x: x[0])  # sort by start time
starts = [start for start, _, _ in intervals]
@lru_cache(maxsize=None)
def dfs(index: int) -> int:
if index == len(intervals):
return 0
_, end, weight = intervals[index]
skip = dfs(index + 1)
next_index = bisect_left(starts, end, lo=index + 1)  # first later interval that starts at or after this one ends
take = weight + dfs(next_index)
return max(skip, take)
return dfs(0)
if __name__ == "__main__":
jobs = [(1, 3, 5), (2, 5, 6), (4, 6, 5), (6, 7, 4), (5, 8, 11), (7, 9, 2)]
print(weighted_interval_scheduling_recursive(jobs))
Find the minimum number of jumps needed to reach the last index. Sample Input: [2,3,1,1,4]. Sample Output: 2.
Walk the array while tracking the farthest index reachable in the current jump range. When you finish a range you increment the jump count, mirroring level-by-level BFS.
Solution 1: Primary Approach
from typing import List
def jump_game_min_jumps(nums: List[int]) -> int:
jumps = current_end = farthest = 0
for index in range(len(nums) - 1):
farthest = max(farthest, index + nums[index])
if index == current_end:
jumps += 1
current_end = farthest
return jumps
if __name__ == "__main__":
print(jump_game_min_jumps([2, 3, 1, 1, 4]))
Solution 2: Dynamic Programming
Compute the minimum jumps needed to reach each index by relaxing all reachable positions to the right.
from typing import List
def jump_game_dp(nums: List[int]) -> int:
n = len(nums)
dp = [float('inf')] * n
dp[0] = 0
for i in range(n):
far = min(n - 1, i + nums[i])
for j in range(i + 1, far + 1):
dp[j] = min(dp[j], dp[i] + 1)
return dp[-1]
if __name__ == "__main__":
print(jump_game_dp([2, 3, 1, 1, 4]))
Design a data structure that supports sumRange(left, right) returning the sum of the elements from left to right inclusive. The array is immutable. (LeetCode 303.)
A baseline solution recomputes the sum every time; the optimized answer precomputes prefix sums to answer each query in constant time.
Solution 1: Recompute on Demand
Store the original numbers and add them up for each query. This keeps the constructor trivial but answers each query in O(n).
from typing import List
class NumArray:
def __init__(self, nums: List[int]) -> None:
self.nums = nums
def sumRange(self, left: int, right: int) -> int:
total = 0
for index in range(left, right + 1):
total += self.nums[index]
return total
if __name__ == "__main__":
arr = NumArray([-2, 0, 3, -5, 2, -1])
print(arr.sumRange(0, 2))
print(arr.sumRange(2, 5))
Solution 2: Prefix Sum Array
Precompute a prefix array where prefix[i] is the sum of the first i elements. Each query becomes a subtraction of two prefix entries.
from typing import List
class NumArray:
def __init__(self, nums: List[int]) -> None:
self.prefix = [0]
for value in nums:
self.prefix.append(self.prefix[-1] + value)
def sumRange(self, left: int, right: int) -> int:
return self.prefix[right + 1] - self.prefix[left]
if __name__ == "__main__":
arr = NumArray([-2, 0, 3, -5, 2, -1])
print(arr.sumRange(0, 2))
print(arr.sumRange(2, 5))
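The same prefix construction can also lean on the standard library; a minimal sketch (the class name NumArrayAccumulate is my own) that builds the running sums with itertools.accumulate.
from itertools import accumulate
from typing import List
class NumArrayAccumulate:
    def __init__(self, nums: List[int]) -> None:
        self.prefix = [0, *accumulate(nums)]  # prefix[i] = sum of the first i values
    def sumRange(self, left: int, right: int) -> int:
        return self.prefix[right + 1] - self.prefix[left]
if __name__ == "__main__":
    arr = NumArrayAccumulate([-2, 0, 3, -5, 2, -1])
    print(arr.sumRange(0, 2))  # 1
    print(arr.sumRange(2, 5))  # -1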
Given an integer array nums and an integer k, return the number of non-empty subarrays whose sum is divisible by k. (LeetCode 974.)
The brute-force approach checks every subarray. The optimized solution stores prefix sums modulo k so matching remainders signal divisible subarrays.
Solution 1: Enumerate All Subarrays
Try every start index, extend to every end index, and count the subarrays whose sum mod k is zero. This is O(n²) but a good warm-up.
from typing import List
class Solution:
def subarraysDivByK(self, nums: List[int], k: int) -> int:
count = 0
for start in range(len(nums)):
running_sum = 0
for end in range(start, len(nums)):
running_sum += nums[end]
if running_sum % k == 0:
count += 1
return count
if __name__ == "__main__":
print(Solution().subarraysDivByK([4, 5, 0, -2, -3, 1], 5))
Solution 2: Prefix Remainder Counts
Maintain how many times each remainder has been seen. When the current remainder repeats, the numbers between those remainders form a sum divisible by k.
from typing import List
from collections import defaultdict
class Solution:
def subarraysDivByK(self, nums: List[int], k: int) -> int:
remainder_count = defaultdict(int)
remainder_count[0] = 1
prefix = 0
total = 0
for number in nums:
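# Python's % yields a remainder in [0, k) for positive k, so negative numbers need no extra adjustment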
prefix = (prefix + number) % k
total += remainder_count[prefix]
remainder_count[prefix] += 1
return total
if __name__ == "__main__":
print(Solution().subarraysDivByK([4, 5, 0, -2, -3, 1], 5))
Given an array nums, return a new array answer where answer[i] is the product of all elements of nums except nums[i]. Do not use division. (LeetCode 238.)
Prefix and suffix products provide the standard solution. You can either keep separate arrays or accumulate the products in-place for constant extra memory.
Solution 1: Prefix and Suffix Arrays
Compute prefix products and suffix products in two arrays and multiply them element-wise. This keeps the logic explicit, at the cost of O(n) additional memory.
from typing import List
class Solution:
def productExceptSelf(self, nums: List[int]) -> List[int]:
length = len(nums)
prefix = [1] * length
suffix = [1] * length
for i in range(1, length):
prefix[i] = prefix[i - 1] * nums[i - 1]
for i in range(length - 2, -1, -1):
suffix[i] = suffix[i + 1] * nums[i + 1]
return [prefix[i] * suffix[i] for i in range(length)]
if __name__ == "__main__":
print(Solution().productExceptSelf([1, 2, 3, 4]))
Solution 2: In-Place Prefix & Suffix Sweep
Reuse the output array: store the prefix product in the first pass, then multiply by a running suffix in the second pass. This achieves O(1) auxiliary space.
from typing import List
class Solution:
def productExceptSelf(self, nums: List[int]) -> List[int]:
result = [1] * len(nums)
prefix = 1
for i in range(len(nums)):
result[i] = prefix
prefix *= nums[i]
suffix = 1
for i in range(len(nums) - 1, -1, -1):
result[i] *= suffix
suffix *= nums[i]
return result
if __name__ == "__main__":
print(Solution().productExceptSelf([1, 2, 3, 4]))
Given the root of a binary tree, invert the tree and return its root. Every left/right child should be swapped recursively. (LeetCode 226.)
The inversion can be implemented with a straightforward depth-first recursion, or iteratively with a queue to avoid recursion depth limits.
4
/ \
2 7
/ \ / \
1 3 6 9
Solution 1: Recursive DFS
Swap the left and right children after inverting both subtrees. Recursion visits every node once and uses O(h) stack space.
from typing import Optional
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def invertTree(self, root: Optional[TreeNode]) -> Optional[TreeNode]:
if root is None:
return None
root.left, root.right = self.invertTree(root.right), self.invertTree(root.left)
return root
if __name__ == "__main__":
root = TreeNode(4,
TreeNode(2, TreeNode(1), TreeNode(3)),
TreeNode(7, TreeNode(6), TreeNode(9)))
inverted = Solution().invertTree(root)
print(inverted.left.val, inverted.right.val)
Solution 2: Iterative BFS
Use a queue and swap children as you pop nodes. This avoids recursion and still runs in linear time.
from collections import deque
from typing import Optional
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def invertTree(self, root: Optional[TreeNode]) -> Optional[TreeNode]:
if root is None:
return None
queue: deque[TreeNode] = deque([root])
while queue:
node = queue.popleft()
node.left, node.right = node.right, node.left
if node.left:
queue.append(node.left)
if node.right:
queue.append(node.right)
return root
if __name__ == "__main__":
root = TreeNode(2, TreeNode(1), TreeNode(3))
inverted = Solution().invertTree(root)
print(inverted.left.val, inverted.right.val)
Given the root of a binary tree, return True if it is height-balanced. A height-balanced tree is one where the depths of the two subtrees of every node differ by no more than one. (LeetCode 110.)
You can either compute heights top-down (simpler but O(n²) in the worst case) or perform a post-order traversal that carries heights up while checking balance in O(n).
1
/ \
2 2
/ \
3 4
Solution 1: Top-Down Height Check
Compute the height of each subtree with a helper function. The straightforward recursion is easy to explain but can revisit nodes repeatedly.
from typing import Optional
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def isBalanced(self, root: Optional[TreeNode]) -> bool:
if root is None:
return True
if abs(self._height(root.left) - self._height(root.right)) > 1:
return False
return self.isBalanced(root.left) and self.isBalanced(root.right)
def _height(self, node: Optional[TreeNode]) -> int:
if node is None:
return 0
return 1 + max(self._height(node.left), self._height(node.right))
if __name__ == "__main__":
root = TreeNode(1, TreeNode(2, TreeNode(3), TreeNode(4)), TreeNode(2))
print(Solution().isBalanced(root))
Solution 2: Bottom-Up Postorder
Return each subtree's height along with a balance flag. If any subtree is unbalanced, propagate failure upward immediately, yielding O(n) time.
from typing import Optional, Tuple
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def isBalanced(self, root: Optional[TreeNode]) -> bool:
_, balanced = self._dfs(root)
return balanced
def _dfs(self, node: Optional[TreeNode]) -> Tuple[int, bool]:
if node is None:
return 0, True
left_height, left_balanced = self._dfs(node.left)
right_height, right_balanced = self._dfs(node.right)
balanced = (
left_balanced and
right_balanced and
abs(left_height - right_height) <= 1
)
return 1 + max(left_height, right_height), balanced
if __name__ == "__main__":
root = TreeNode(1, TreeNode(2, TreeNode(3), TreeNode(4)), TreeNode(2))
print(Solution().isBalanced(root))
Given the root of a binary tree, return the values of the nodes visible from the right side in top-down order. (LeetCode 199.)
1
/ \
2 3
\ \
5 4
Solution 1: Level Order Traversal
Traverse level by level and capture the last node at each depth. The queue naturally maintains the breadth-first order.
from collections import deque
from typing import List, Optional
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def rightSideView(self, root: Optional[TreeNode]) -> List[int]:
if root is None:
return []
queue: deque[TreeNode] = deque([root])
view: List[int] = []
while queue:
level_size = len(queue)
for index in range(level_size):
node = queue.popleft()
if node.left:
queue.append(node.left)
if node.right:
queue.append(node.right)
if index == level_size - 1:
view.append(node.val)
return view
if __name__ == "__main__":
root = TreeNode(1, TreeNode(2, right=TreeNode(5)), TreeNode(3, right=TreeNode(4)))
print(Solution().rightSideView(root))
Solution 2: DFS with Depth Tracking
Run a depth-first search visiting right children before left. The first node encountered at each depth is the visible node.
from typing import List, Optional
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def rightSideView(self, root: Optional[TreeNode]) -> List[int]:
view: List[int] = []
self._dfs(root, 0, view)
return view
def _dfs(self, node: Optional[TreeNode], depth: int, view: List[int]) -> None:
if node is None:
return
if depth == len(view):
view.append(node.val)
self._dfs(node.right, depth + 1, view)
self._dfs(node.left, depth + 1, view)
if __name__ == "__main__":
root = TreeNode(1, TreeNode(2, right=TreeNode(5)), TreeNode(3, right=TreeNode(4)))
print(Solution().rightSideView(root))
Given the root of a binary tree, return the maximum width among all levels. The width of a level is the length between the leftmost and rightmost non-null nodes in a conceptual complete binary tree that the structure would fit into. (LeetCode 662.)
Track positional indices while traversing. A breadth-first sweep is the most direct, while a recursive strategy can achieve the same using depth-index maps.
1
/ \
3 2
/ \ \
5 3 9
Solution 1: BFS with Indices
Store each node with its index in the implicit complete tree. For every level, subtract the first index from the last to measure width.
from collections import deque
from typing import Optional
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def widthOfBinaryTree(self, root: Optional[TreeNode]) -> int:
if root is None:
return 0
queue: deque[tuple[TreeNode, int]] = deque([(root, 0)])
best = 0
while queue:
level_size = len(queue)
_, first_index = queue[0]
for _ in range(level_size):
node, index = queue.popleft()
normalized = index - first_index
best = max(best, normalized + 1)
if node.left:
queue.append((node.left, 2 * normalized))
if node.right:
queue.append((node.right, 2 * normalized + 1))
return best
if __name__ == "__main__":
root = TreeNode(1, TreeNode(3, TreeNode(5), TreeNode(3)), TreeNode(2, None, TreeNode(9)))
print(Solution().widthOfBinaryTree(root))
Solution 2: DFS with Depth Maps
During a depth-first traversal, remember the first index encountered at each depth and use it to compute widths as you unwind.
from typing import Dict, Optional
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def widthOfBinaryTree(self, root: Optional[TreeNode]) -> int:
first_index: Dict[int, int] = {}
return self._dfs(root, depth=0, index=0, first_index=first_index)
def _dfs(self, node: Optional[TreeNode], depth: int, index: int, first_index: Dict[int, int]) -> int:
if node is None:
return 0
if depth not in first_index:
first_index[depth] = index
current_width = index - first_index[depth] + 1
left_width = self._dfs(node.left, depth + 1, 2 * index, first_index)
right_width = self._dfs(node.right, depth + 1, 2 * index + 1, first_index)
return max(current_width, left_width, right_width)
if __name__ == "__main__":
root = TreeNode(1, TreeNode(3, TreeNode(5), TreeNode(3)), TreeNode(2, None, TreeNode(9, TreeNode(7), None)))
print(Solution().widthOfBinaryTree(root))
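To make the implicit heap indexing concrete, here is a small standalone trace (values and raw indices hard-coded from the first demo's tree, purely for illustration) of the per-level normalization both solutions rely on: width is always measured relative to the leftmost index recorded at each depth.
# Illustrative only: (value, raw index) pairs per level for the first demo tree.
levels = [
    [(1, 0)],
    [(3, 0), (2, 1)],
    [(5, 0), (3, 1), (9, 3)],  # 9 sits at 2 * 1 + 1 = 3 within its level
]
for level in levels:
    first = level[0][1]
    width = level[-1][1] - first + 1
    print([(val, idx - first) for val, idx in level], "width =", width)
# The widest level has width 4, matching widthOfBinaryTree above.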
Given the root of a binary tree, determine if it is a valid binary search tree (BST). A BST requires that every node satisfies left.val < node.val < right.val recursively. (LeetCode 98.)
Two common approaches are to maintain allowable value bounds when traversing, or to perform an inorder traversal and confirm the values appear in strictly increasing order.
2
/ \
1 3
Solution 1: Recursive Bounds
Pass down the allowable range for each node. If any node violates its range, the tree cannot be a BST.
from typing import Optional
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def isValidBST(self, root: Optional[TreeNode]) -> bool:
return self._validate(root, float('-inf'), float('inf'))
def _validate(self, node: Optional[TreeNode], low: float, high: float) -> bool:
if node is None:
return True
if not (low < node.val < high):
return False
return (
self._validate(node.left, low, node.val) and
self._validate(node.right, node.val, high)
)
if __name__ == "__main__":
root = TreeNode(2, TreeNode(1), TreeNode(3))
print(Solution().isValidBST(root))
Solution 2: Inorder Traversal
Inorder traversal of a BST must yield a strictly increasing sequence. Track the previous value seen to ensure the invariant holds.
from typing import Optional
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def isValidBST(self, root: Optional[TreeNode]) -> bool:
prev = None
stack: list[TreeNode] = []
node = root
while stack or node:
while node:
stack.append(node)
node = node.left
node = stack.pop()
if prev is not None and node.val <= prev:
return False
prev = node.val
node = node.right
return True
if __name__ == "__main__":
root = TreeNode(5, TreeNode(1), TreeNode(4, TreeNode(3), TreeNode(6)))
print(Solution().isValidBST(root))
Given the root of a binary search tree and two nodes p and q, return their lowest common ancestor node (the deepest node that has both as descendants). (LeetCode 235.)
The BST property lets you compare node values to guide the search. You can either walk iteratively or recurse until you find their split point.
6
/ \
2 8
/ \ / \
0 4 7 9
/ \
3 5
Solution 1: Iterative Search
Traverse from the root. If both targets are smaller, go left; if both are larger, go right. Otherwise the current node is the LCA.
from typing import Optional
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def lowestCommonAncestor(self, root: TreeNode, p: TreeNode, q: TreeNode) -> TreeNode:
node = root
while node:
if p.val < node.val and q.val < node.val:
node = node.left
elif p.val > node.val and q.val > node.val:
node = node.right
else:
return node
return root
if __name__ == "__main__":
root = TreeNode(6,
TreeNode(2, TreeNode(0), TreeNode(4, TreeNode(3), TreeNode(5))),
TreeNode(8, TreeNode(7), TreeNode(9)))
p = root.left
q = root.right
print(Solution().lowestCommonAncestor(root, p, q).val)
Solution 2: Recursive Split
A recursive version mirrors the iterative logic: recurse into the side that contains both nodes until they diverge.
from typing import Optional
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def lowestCommonAncestor(self, root: TreeNode, p: TreeNode, q: TreeNode) -> TreeNode:
if p.val < root.val and q.val < root.val:
return self.lowestCommonAncestor(root.left, p, q)
if p.val > root.val and q.val > root.val:
return self.lowestCommonAncestor(root.right, p, q)
return root
if __name__ == "__main__":
root = TreeNode(6,
TreeNode(2, TreeNode(0), TreeNode(4, TreeNode(3), TreeNode(5))),
TreeNode(8, TreeNode(7), TreeNode(9)))
p = root.left.right # node 4
q = root.left # node 2
print(Solution().lowestCommonAncestor(root, p, q).val)
Given the root of a binary tree and two nodes p and q, return their lowest common ancestor (LCA). The LCA is the deepest node that has both nodes as descendants. (LeetCode 236.)
A post-order DFS can determine the first node whose subtrees contain both targets. Alternatively, you can map parents with BFS and then walk ancestors upward.
3
/ \
5 1
/ \ / \
6 2 0 8
/ \
7 4
Solution 1: Post-Order DFS
Return the current node if it matches p or q, otherwise recurse on both children. When both sides return non-null, the current node is the LCA.
from typing import Optional
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def lowestCommonAncestor(self, root: Optional[TreeNode], p: TreeNode, q: TreeNode) -> Optional[TreeNode]:
if root is None or root is p or root is q:
return root
left = self.lowestCommonAncestor(root.left, p, q)
right = self.lowestCommonAncestor(root.right, p, q)
if left and right:
return root
return left or right
if __name__ == "__main__":
root = TreeNode(3,
TreeNode(5, TreeNode(6), TreeNode(2, TreeNode(7), TreeNode(4))),
TreeNode(1, TreeNode(0), TreeNode(8)))
solver = Solution()
ancestor = solver.lowestCommonAncestor(root, root.left, root.right)
print(ancestor.val if ancestor else None)
Solution 2: Parent Pointers
Perform a BFS to record each node's parent. Then climb from one node while marking visited ancestors, and walk the second node upward until you hit a visited ancestor.
from collections import deque
from typing import Dict, Optional, Set
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def lowestCommonAncestor(self, root: TreeNode, p: TreeNode, q: TreeNode) -> TreeNode:
parent: Dict[TreeNode, Optional[TreeNode]] = {root: None}
queue: deque[TreeNode] = deque([root])
while queue and (p not in parent or q not in parent):
node = queue.popleft()
if node.left:
parent[node.left] = node
queue.append(node.left)
if node.right:
parent[node.right] = node
queue.append(node.right)
ancestors: Set[TreeNode] = set()
node = p
while node:
ancestors.add(node)
node = parent[node]
node = q
while node not in ancestors:
node = parent[node]
return node
if __name__ == "__main__":
root = TreeNode(3,
TreeNode(5, TreeNode(6), TreeNode(2, TreeNode(7), TreeNode(4))),
TreeNode(1, TreeNode(0), TreeNode(8)))
solver = Solution()
ancestor = solver.lowestCommonAncestor(root, root.left.right.right, root.right)
print(ancestor.val)
Construct and return the binary tree from its preorder and inorder traversal arrays. All node values are unique. (LeetCode 105.)
Use preorder to choose the next root and locate it within the inorder array. Hashing inorder indices gives an O(n) recursion; an iterative stack approach is an alternative when you want to avoid explicit recursion.
3
/ \
9 20
/ \
15 7
Solution 1: Recursive Hash Map
Map each value to its inorder index and recursively build left/right subtrees based on the preorder pointer.
from typing import Dict, List, Optional
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def buildTree(self, preorder: List[int], inorder: List[int]) -> Optional[TreeNode]:
index_map: Dict[int, int] = {value: idx for idx, value in enumerate(inorder)}
self.pre_idx = 0
def helper(left: int, right: int) -> Optional[TreeNode]:
if left > right:
return None
root_val = preorder[self.pre_idx]
self.pre_idx += 1
root = TreeNode(root_val)
mid = index_map[root_val]
root.left = helper(left, mid - 1)
root.right = helper(mid + 1, right)
return root
return helper(0, len(inorder) - 1)
if __name__ == "__main__":
solver = Solution()
tree = solver.buildTree([3, 9, 20, 15, 7], [9, 3, 15, 20, 7])
print(tree.val)
Solution 2: Iterative Stack Construction
Use a stack to simulate recursion: advance through preorder, attaching nodes as left children until the inorder value matches the top, then unwind and add right children.
from typing import List, Optional
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def buildTree(self, preorder: List[int], inorder: List[int]) -> Optional[TreeNode]:
if not preorder:
return None
root = TreeNode(preorder[0])
stack = [root]
inorder_index = 0
for value in preorder[1:]:
node = stack[-1]
if node.val != inorder[inorder_index]:
node.left = TreeNode(value)
stack.append(node.left)
else:
while stack and stack[-1].val == inorder[inorder_index]:
node = stack.pop()
inorder_index += 1
node.right = TreeNode(value)
stack.append(node.right)
return root
if __name__ == "__main__":
solver = Solution()
tree = solver.buildTree([3, 9, 20, 15, 7], [9, 3, 15, 20, 7])
print(tree.right.left.val)
A path in a binary tree is any sequence of nodes where each pair of adjacent nodes has a parent-child relationship. The path does not need to pass through the root. Return the maximum path sum of any non-empty path. (LeetCode 124.)
Use post-order traversal to compute the best downward gain from each node while tracking the global best. An alternative is to return both downward gains and the best path up the recursion without global state.
-10
/ \
9 20
/ \
15 7
Solution 1: Post-Order with Global Tracker
For each node, compute the best gain from either subtree and update a global maximum with the path passing through that node.
from typing import Optional
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def maxPathSum(self, root: Optional[TreeNode]) -> int:
self.best = float('-inf')
self._gain(root)
return self.best
def _gain(self, node: Optional[TreeNode]) -> int:
if node is None:
return 0
left_gain = max(self._gain(node.left), 0)
right_gain = max(self._gain(node.right), 0)
self.best = max(self.best, node.val + left_gain + right_gain)
return node.val + max(left_gain, right_gain)
if __name__ == "__main__":
root = TreeNode(-10, TreeNode(9), TreeNode(20, TreeNode(15), TreeNode(7)))
print(Solution().maxPathSum(root))
Solution 2: Return Tuple of Gains
Avoid global state by returning both the best downward gain and the best path sum for each subtree, combining results as you unwind recursion.
from typing import Optional, Tuple
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def maxPathSum(self, root: Optional[TreeNode]) -> int:
_, best = self._helper(root)
return best
def _helper(self, node: Optional[TreeNode]) -> Tuple[int, int]:
if node is None:
return 0, float('-inf')
left_down, left_best = self._helper(node.left)
right_down, right_best = self._helper(node.right)
down_gain = node.val + max(left_down, right_down, 0)
through = node.val + max(left_down, 0) + max(right_down, 0)
best_here = max(left_best, right_best, through)
return down_gain, best_here
if __name__ == "__main__":
root = TreeNode(-10, TreeNode(9), TreeNode(20, TreeNode(15), TreeNode(7)))
print(Solution().maxPathSum(root))
Given the root of a binary tree, determine whether it is symmetric around its center—i.e., it is a mirror of itself. (LeetCode 101.)
Mirror symmetry can be verified recursively by pairing nodes from the left and right subtrees, or iteratively with a queue that checks mirrored children.
1
/ \
2 2
/ \ / \
3 4 4 3
Solution 1: Recursive Mirror Check
Recursively compare the left subtree of one node with the right subtree of the other and ensure both value and structure align.
from typing import Optional
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def isSymmetric(self, root: Optional[TreeNode]) -> bool:
return self._mirror(root.left, root.right) if root else True
def _mirror(self, left: Optional[TreeNode], right: Optional[TreeNode]) -> bool:
if left is None and right is None:
return True
if left is None or right is None or left.val != right.val:
return False
return self._mirror(left.left, right.right) and self._mirror(left.right, right.left)
if __name__ == "__main__":
root = TreeNode(1,
TreeNode(2, TreeNode(3), TreeNode(4)),
TreeNode(2, TreeNode(4), TreeNode(3)))
print(Solution().isSymmetric(root))
Solution 2: Iterative Queue
Use a queue of node pairs, always enqueuing mirrored children together. If any pair breaks the mirror condition, the tree is not symmetric.
from collections import deque
from typing import Optional
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def isSymmetric(self, root: Optional[TreeNode]) -> bool:
if root is None:
return True
queue: deque[tuple[Optional[TreeNode], Optional[TreeNode]]] = deque([(root.left, root.right)])
while queue:
left, right = queue.popleft()
if left is None and right is None:
continue
if left is None or right is None or left.val != right.val:
return False
queue.append((left.left, right.right))
queue.append((left.right, right.left))
return True
if __name__ == "__main__":
root = TreeNode(1,
TreeNode(2, TreeNode(3), TreeNode(4)),
TreeNode(2, TreeNode(4), TreeNode(3)))
print(Solution().isSymmetric(root))
Given the root of a binary tree, return the vertical order traversal: nodes grouped by column index, ordered top to bottom, with ties broken by value when nodes share the same row and column. (LeetCode 987.)
Track column and row coordinates per node. Either collect everything and sort, or traverse level by level, accumulating nodes per column while maintaining row order.
3
/ \
9 20
/ \
15 7
Solution 1: Coordinate Sorting
Store (column, row, value) triples during traversal, sort them, and group by column.
from collections import defaultdict, deque
from typing import Dict, List, Optional, Tuple
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def verticalTraversal(self, root: Optional[TreeNode]) -> List[List[int]]:
if root is None:
return []
nodes: List[Tuple[int, int, int]] = []
queue: deque[tuple[TreeNode, int, int]] = deque([(root, 0, 0)])
while queue:
node, column, row = queue.popleft()
nodes.append((column, row, node.val))
if node.left:
queue.append((node.left, column - 1, row + 1))
if node.right:
queue.append((node.right, column + 1, row + 1))
nodes.sort()
columns: Dict[int, List[int]] = defaultdict(list)
for column, _, value in nodes:
columns[column].append(value)
return [columns[col] for col in sorted(columns)]
if __name__ == "__main__":
root = TreeNode(3, TreeNode(9), TreeNode(20, TreeNode(15), TreeNode(7)))
print(Solution().verticalTraversal(root))
Solution 2: Level-Order Column Buckets
Process nodes breadth-first, collecting values for each column per level and sorting ties locally before appending.
from collections import defaultdict, deque
from typing import Dict, List, Optional
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def verticalTraversal(self, root: Optional[TreeNode]) -> List[List[int]]:
if root is None:
return []
columns: Dict[int, List[int]] = defaultdict(list)
queue: deque[tuple[TreeNode, int]] = deque([(root, 0)])
while queue:
level: Dict[int, List[int]] = defaultdict(list)
for _ in range(len(queue)):
node, column = queue.popleft()
level[column].append(node.val)
if node.left:
queue.append((node.left, column - 1))
if node.right:
queue.append((node.right, column + 1))
for column in level:
columns[column].extend(sorted(level[column]))
return [columns[col] for col in sorted(columns)]
if __name__ == "__main__":
root = TreeNode(3, TreeNode(9), TreeNode(20, TreeNode(15), TreeNode(7)))
print(Solution().verticalTraversal(root))
Given the root of a binary search tree and an integer k, return the k-th smallest value in the tree (1-indexed). (LeetCode 230.)
Inorder traversal of a BST yields values in ascending order. You can stop the traversal once the k-th element is seen, or precompute subtree sizes to navigate directly using order statistics.
        5
      /   \
     3     7
    / \   / \
   2   4 6   8
Solution 1: Inorder Traversal
Traverse the tree in-order keeping a counter. When the counter reaches k, return the current node's value.
from typing import Optional
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def kthSmallest(self, root: Optional[TreeNode], k: int) -> int:
stack: list[TreeNode] = []
node = root
remaining = k
while stack or node:
while node:
stack.append(node)
node = node.left
node = stack.pop()
remaining -= 1
if remaining == 0:
return node.val
node = node.right
raise ValueError("k is larger than the number of nodes")
if __name__ == "__main__":
root = TreeNode(5, TreeNode(3, TreeNode(2), TreeNode(4)), TreeNode(7, TreeNode(6), TreeNode(8)))
print(Solution().kthSmallest(root, 4))
Solution 2: Subtree Sizes
Precompute the size of each subtree, then walk left or right depending on how many nodes lie on the left of the current root.
from typing import Dict, Optional
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def kthSmallest(self, root: Optional[TreeNode], k: int) -> int:
sizes: Dict[TreeNode, int] = {}
def compute(node: Optional[TreeNode]) -> int:
if node is None:
return 0
sizes[node] = 1 + compute(node.left) + compute(node.right)
return sizes[node]
compute(root)
node = root
target = k
while node:
left_count = sizes.get(node.left, 0)
if target == left_count + 1:
return node.val
if target <= left_count:
node = node.left
else:
target -= left_count + 1
node = node.right
raise ValueError("k is larger than the number of nodes")
if __name__ == "__main__":
root = TreeNode(5, TreeNode(3, TreeNode(2), TreeNode(4)), TreeNode(7, TreeNode(6), TreeNode(8)))
print(Solution().kthSmallest(root, 4))
Design an algorithm to serialize and deserialize a binary tree so that the original structure is reproduced exactly. (LeetCode 297.)
Common codecs either perform breadth-first traversal recording null sentinels or use preorder DFS with sentinels. The same traversal order must be used for encoding and decoding.
      1
     / \
    2   3
       / \
      4   5
Solution 1: Level-Order Codec
Use a queue to process nodes level by level, appending # for null children. During decoding, rebuild the tree while reading values sequentially.
from collections import deque
from typing import Optional
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Codec:
def serialize(self, root: Optional[TreeNode]) -> str:
if root is None:
return ''
queue: deque[Optional[TreeNode]] = deque([root])
parts: list[str] = []
while queue:
node = queue.popleft()
if node is None:
parts.append('#')
continue
parts.append(str(node.val))
queue.append(node.left)
queue.append(node.right)
return ','.join(parts)
def deserialize(self, data: str) -> Optional[TreeNode]:
if not data:
return None
values = data.split(',')
root = TreeNode(int(values[0]))
queue: deque[TreeNode] = deque([root])
index = 1
while queue and index < len(values):
node = queue.popleft()
if values[index] != '#':
node.left = TreeNode(int(values[index]))
queue.append(node.left)
index += 1
if index < len(values) and values[index] != '#':
node.right = TreeNode(int(values[index]))
queue.append(node.right)
index += 1
return root
if __name__ == "__main__":
codec = Codec()
original = TreeNode(1, TreeNode(2), TreeNode(3, TreeNode(4), TreeNode(5)))
payload = codec.serialize(original)
restored = codec.deserialize(payload)
print(payload)
print(restored.right.left.val)
Solution 2: Preorder DFS Codec
Record nodes in preorder and use # placeholders for null children. Decoding reads the stream recursively, mirroring the traversal.
from typing import Iterator, Optional
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Codec:
def serialize(self, root: Optional[TreeNode]) -> str:
parts: list[str] = []
def dfs(node: Optional[TreeNode]) -> None:
if node is None:
parts.append('#')
return
parts.append(str(node.val))
dfs(node.left)
dfs(node.right)
dfs(root)
return ','.join(parts)
def deserialize(self, data: str) -> Optional[TreeNode]:
values: Iterator[str] = iter(data.split(',')) if data else iter(())
def dfs() -> Optional[TreeNode]:
value = next(values, None)
if value is None or value == '#':
return None
node = TreeNode(int(value))
node.left = dfs()
node.right = dfs()
return node
return dfs()
if __name__ == "__main__":
codec = Codec()
original = TreeNode(1, TreeNode(2), TreeNode(3, TreeNode(4), TreeNode(5)))
payload = codec.serialize(original)
restored = codec.deserialize(payload)
print(payload)
print(restored.right.val)
The diameter of a binary tree is the number of edges on the longest path between any two nodes. Compute the diameter. (LeetCode 543.)
Post-order traversal lets you compute subtree heights while keeping a running maximum of left-plus-right lengths. You can implement this with a global tracker or return both height and diameter from each call.
        1
       / \
      2   3
     / \
    4   5
Solution 1: Postorder with Global State
Compute heights recursively and update a shared maximum whenever a longer path is found.
from typing import Optional
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def diameterOfBinaryTree(self, root: Optional[TreeNode]) -> int:
self.best = 0
self._depth(root)
return self.best
def _depth(self, node: Optional[TreeNode]) -> int:
if node is None:
return 0
left = self._depth(node.left)
right = self._depth(node.right)
self.best = max(self.best, left + right)
return 1 + max(left, right)
if __name__ == "__main__":
root = TreeNode(1, TreeNode(2, TreeNode(4), TreeNode(5)), TreeNode(3))
print(Solution().diameterOfBinaryTree(root))
Solution 2: Return Height and Diameter
Have each recursive call return both the height and the best diameter found in that subtree, avoiding a shared variable.
from typing import Optional, Tuple
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def diameterOfBinaryTree(self, root: Optional[TreeNode]) -> int:
_, diameter = self._solve(root)
return diameter
def _solve(self, node: Optional[TreeNode]) -> Tuple[int, int]:
if node is None:
return 0, 0
left_height, left_diameter = self._solve(node.left)
right_height, right_diameter = self._solve(node.right)
height = 1 + max(left_height, right_height)
diameter = max(left_diameter, right_diameter, left_height + right_height)
return height, diameter
if __name__ == "__main__":
root = TreeNode(1, TreeNode(2, TreeNode(4), TreeNode(5)), TreeNode(3))
print(Solution().diameterOfBinaryTree(root))
Each node in a binary tree represents a house with some amount of money. Adjacent houses (parent/child) cannot both be robbed. Return the maximum amount you can rob. (LeetCode 337.)
Use tree DP: for every node compute the best value when you rob it and when you skip it. This can be implemented via post-order recursion or memoized recursion that considers robbing grandchildren.
      3
     / \
    2   3
     \    \
      3    1
Solution 1: Postorder DP
Return a pair (rob, skip) for each node, combining children's results to decide whether to rob the current node.
from typing import Optional, Tuple
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def rob(self, root: Optional[TreeNode]) -> int:
rob, skip = self._dfs(root)
return max(rob, skip)
def _dfs(self, node: Optional[TreeNode]) -> Tuple[int, int]:
if node is None:
return 0, 0
left_rob, left_skip = self._dfs(node.left)
right_rob, right_skip = self._dfs(node.right)
rob = node.val + left_skip + right_skip
skip = max(left_rob, left_skip) + max(right_rob, right_skip)
return rob, skip
if __name__ == "__main__":
root = TreeNode(3, TreeNode(2, right=TreeNode(3)), TreeNode(3, right=TreeNode(1)))
print(Solution().rob(root))
Solution 2: Memoized Recursion
Recursively consider robbing a node vs. skipping it and rely on memoization to avoid recomputing overlapping subproblems.
from functools import lru_cache
from typing import Optional
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def rob(self, root: Optional[TreeNode]) -> int:
@lru_cache(maxsize=None)
def dp(node: Optional[TreeNode]) -> int:
if node is None:
return 0
rob_current = node.val
if node.left:
rob_current += dp(node.left.left) + dp(node.left.right)
if node.right:
rob_current += dp(node.right.left) + dp(node.right.right)
skip_current = dp(node.left) + dp(node.right)
return max(rob_current, skip_current)
return dp(root)
if __name__ == "__main__":
root = TreeNode(3, TreeNode(2, right=TreeNode(3)), TreeNode(3, right=TreeNode(1)))
print(Solution().rob(root))
You are installing cameras on a binary tree. Each camera monitors its parent, itself, and its immediate children. Return the minimum number of cameras needed to monitor every node. (LeetCode 968.)
Classify nodes by whether they have a camera, are covered, or need coverage. A greedy DFS that installs cameras only when needed achieves the optimum; an alternative is dynamic programming that returns costs for each state.
      0
     / \
    0   0
       / \
      0   0
Solution 1: Greedy State DFS
Perform a post-order traversal. If any child needs a camera, place one at the current node; otherwise mark the node as covered or needing coverage.
from typing import Optional
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
NEEDS = 0
COVERED = 1
HAS_CAMERA = 2
def minCameraCover(self, root: Optional[TreeNode]) -> int:
self.cameras = 0
if self._dfs(root) == self.NEEDS:
self.cameras += 1
return self.cameras
def _dfs(self, node: Optional[TreeNode]) -> int:
if node is None:
return self.COVERED
left = self._dfs(node.left)
right = self._dfs(node.right)
if left == self.NEEDS or right == self.NEEDS:
self.cameras += 1
return self.HAS_CAMERA
if left == self.HAS_CAMERA or right == self.HAS_CAMERA:
return self.COVERED
return self.NEEDS
if __name__ == "__main__":
root = TreeNode(0, TreeNode(0), TreeNode(0, TreeNode(0), TreeNode(0)))
print(Solution().minCameraCover(root))
Solution 2: DP with Costs
For each node compute the cost of three states: placing a camera here, being covered by a child, or needing coverage. Choose the state that covers the parent with minimum cost.
from typing import Optional, Tuple
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def minCameraCover(self, root: Optional[TreeNode]) -> int:
with_cam, covered, needs = self._solve(root)
return min(with_cam, covered)
def _solve(self, node: Optional[TreeNode]) -> Tuple[int, int, int]:
if node is None:
return (float('inf'), 0, 0)
left_with, left_cov, left_need = self._solve(node.left)
right_with, right_cov, right_need = self._solve(node.right)
with_cam = 1 + min(left_with, left_cov, left_need) + min(right_with, right_cov, right_need)
covered = min(left_with + min(right_with, right_cov), right_with + min(left_with, left_cov))
needs = left_cov + right_cov
return with_cam, covered, needs
if __name__ == "__main__":
root = TreeNode(0, TreeNode(0), TreeNode(0, TreeNode(0), TreeNode(0)))
print(Solution().minCameraCover(root))
A node X is good if on the path from the root to X there are no nodes with a value greater than X. Count the number of good nodes in the tree. (LeetCode 1448.)
Maintain the maximum value seen on the current root-to-node path. This can be done via recursion or iteratively with a queue.
        3
       / \
      1   4
     /   / \
    3   1   5
Solution 1: DFS with Path Maximum
Pass the best value down the recursion and increment the count when the current node meets or exceeds it.
from typing import Optional
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def goodNodes(self, root: Optional[TreeNode]) -> int:
return self._dfs(root, float('-inf'))
def _dfs(self, node: Optional[TreeNode], best: int) -> int:
if node is None:
return 0
good = 1 if node.val >= best else 0
new_best = max(best, node.val)
return good + self._dfs(node.left, new_best) + self._dfs(node.right, new_best)
if __name__ == "__main__":
root = TreeNode(3, TreeNode(1, TreeNode(3)), TreeNode(4, TreeNode(1), TreeNode(5)))
print(Solution().goodNodes(root))
Solution 2: BFS Tracking Maximum
Use a queue storing pairs of (node, max_so_far) and process nodes iteratively.
from collections import deque
from typing import Optional
class TreeNode:
def __init__(self, val: int = 0, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.val = val
self.left = left
self.right = right
class Solution:
def goodNodes(self, root: Optional[TreeNode]) -> int:
if root is None:
return 0
queue: deque[tuple[TreeNode, int]] = deque([(root, root.val)])
good = 0
while queue:
node, best = queue.popleft()
if node.val >= best:
good += 1
new_best = max(best, node.val)
if node.left:
queue.append((node.left, new_best))
if node.right:
queue.append((node.right, new_best))
return good
if __name__ == "__main__":
root = TreeNode(3, TreeNode(1, TreeNode(3)), TreeNode(4, TreeNode(1), TreeNode(5)))
print(Solution().goodNodes(root))
For a tree rooted at node 0, compute for every node the size of its subtree and the sum of values inside it. (Adapted from LeetCode 1519/834 style tasks.)
A post-order traversal accumulates counts and sums from children. Alternatively, a rerooting technique can derive aggregates for all roots after an initial DFS.
        0
       / \
      1   2
     / \
    3   4
Solution 1: Postorder Accumulation
Traverse from the root, summing child contributions and storing count/sum pairs for each node.
from typing import Dict, List
def subtree_metrics(n: int, edges: List[List[int]], values: List[int]) -> Dict[int, Dict[str, int]]:
graph = [[] for _ in range(n)]
for u, v in edges:
graph[u].append(v)
graph[v].append(u)
result: Dict[int, Dict[str, int]] = {}
def dfs(node: int, parent: int) -> None:
size = 1
total = values[node]
for neighbor in graph[node]:
if neighbor == parent:
continue
dfs(neighbor, node)
size += result[neighbor]['count']
total += result[neighbor]['sum']
result[node] = {'count': size, 'sum': total}
dfs(0, -1)
return result
if __name__ == "__main__":
metrics = subtree_metrics(5, [[0,1],[0,2],[1,3],[1,4]], [5,1,2,3,4])
print(metrics[1])
Solution 2: Rerooting Extension
After computing subtree data, a second preorder pass reroots the tree to derive the aggregate for every node as the root (useful for problems like sum of distances or rerooted DP). The demo propagates total component size, which is trivially n for every root, but the same preorder pass can carry richer statistics.
from typing import Dict, List
def reroot_counts(n: int, edges: List[List[int]]) -> List[int]:
graph = [[] for _ in range(n)]
for u, v in edges:
graph[u].append(v)
graph[v].append(u)
size = [1] * n
parent = [-1] * n
def dfs(node: int) -> None:
for neighbor in graph[node]:
if neighbor == parent[node]:
continue
parent[neighbor] = node
dfs(neighbor)
size[node] += size[neighbor]
dfs(0)
answer = size[:]
def reroot(node: int) -> None:
for neighbor in graph[node]:
if neighbor == parent[node]:
continue
answer[neighbor] = answer[node]
reroot(neighbor)
reroot(0)
return answer
if __name__ == "__main__":
print(reroot_counts(5, [[0,1],[0,2],[1,3],[1,4]]))
Answer LCA queries on a rooted tree using binary lifting. Sample Input: tree edges as shown, query (3,4). Sample Output: 1.
Preprocess the tree with binary lifting tables that store 2^k ancestors. To answer queries, lift the deeper node until depths match and then lift both until they meet.
0
/ \
1 2
/ \ / \
3 4 5 6
Solution 1: Binary Lifting Table
from typing import List
import math
def build_lca(n: int, edges: List[List[int]], root: int = 0):
graph = [[] for _ in range(n)]
for u, v in edges:
graph[u].append(v)
graph[v].append(u)
log = math.ceil(math.log2(n + 1))
up = [[-1] * n for _ in range(log)]
depth = [0] * n
def dfs(node: int, parent: int) -> None:
up[0][node] = parent
for k in range(1, log):
prev = up[k - 1][node]
up[k][node] = -1 if prev == -1 else up[k - 1][prev]
for neighbor in graph[node]:
if neighbor == parent:
continue
depth[neighbor] = depth[node] + 1
dfs(neighbor, node)
dfs(root, -1)
def lift(node: int, dist: int) -> int:
bit = 0
while dist:
if dist & 1:
node = up[bit][node]
dist >>= 1
bit += 1
if node == -1:
break
return node
def lca(a: int, b: int) -> int:
if depth[a] < depth[b]:
a, b = b, a
a = lift(a, depth[a] - depth[b])
if a == b:
return a
for k in range(log - 1, -1, -1):
if up[k][a] != up[k][b]:
a = up[k][a]
b = up[k][b]
return up[0][a]
return lca
if __name__ == "__main__":
lca = build_lca(7, [[0,1],[0,2],[1,3],[1,4],[2,5],[2,6]])
print(lca(3, 4))
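As a standalone aside (separate from build_lca), the lifting step can be seen in isolation on a simple path graph: jumping a node upward by d steps decomposes d into powers of two and follows the precomputed ancestor table once per set bit. The names parents, up, and lift below are introduced here purely for illustration.
# Toy parent table for a path 0 <- 1 <- 2 <- ... <- 9: node i's parent is i - 1.
parents = [-1] + list(range(9))
LOG = 4  # 2^4 = 16 exceeds the 10 levels, so the table is tall enough

# up[k][v] is the 2^k-th ancestor of v, or -1 once the jump leaves the tree.
up = [parents[:]]
for k in range(1, LOG):
    prev = up[k - 1]
    up.append([-1 if prev[v] == -1 else prev[prev[v]] for v in range(len(parents))])

def lift(node: int, dist: int) -> int:
    bit = 0
    while dist and node != -1:
        if dist & 1:          # follow the 2^bit ancestor for each set bit of dist
            node = up[bit][node]
        dist >>= 1
        bit += 1
    return node

print(lift(9, 6))   # 3: six steps up the chain (6 = 4 + 2)
print(lift(9, 10))  # -1: the jump overshoots the root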
Solution 2: Parent Climb
Store only parent pointers and depths. To answer LCA queries, climb the deeper node one parent at a time until depths match, then ascend both nodes together.
from collections import deque
from typing import List, Dict
def build_lca_parent(n: int, edges: List[List[int]], root: int = 0):
graph = [[] for _ in range(n)]
for u, v in edges:
graph[u].append(v)
graph[v].append(u)
parent: Dict[int, int] = {root: -1}
depth = {root: 0}
queue = deque([root])
while queue:
node = queue.popleft()
for neighbor in graph[node]:
if neighbor in parent:
continue
parent[neighbor] = node
depth[neighbor] = depth[node] + 1
queue.append(neighbor)
def lca(a: int, b: int) -> int:
while depth[a] > depth[b]:
a = parent[a]
while depth[b] > depth[a]:
b = parent[b]
while a != b:
a = parent[a]
b = parent[b]
return a
return lca
if __name__ == "__main__":
lca = build_lca_parent(7, [[0,1],[0,2],[1,3],[1,4],[2,5],[2,6]])
print(lca(3, 4))
Count root-to-leaf paths whose node values can form a palindrome. Sample Input: root values [2,3,1,3,1,null,1]. Sample Output: 2.
Traverse root-to-leaf paths while flipping bits representing digit parity. A path is pseudo-palindromic when the mask has at most one bit set, which the bit test checks quickly.
2
/ \
3 1
/ \ \
3 1 1
Solution 1: Bitmask Parity DFS
from typing import Optional
class TreeNode:
def __init__(self, value: int, left: Optional['TreeNode'] = None, right: Optional['TreeNode'] = None) -> None:
self.value = value
self.left = left
self.right = right
def pseudo_palindromic_paths(root: Optional[TreeNode]) -> int:
def dfs(node: Optional[TreeNode], mask: int) -> int:
if not node:
return 0
mask ^= 1 << node.value
if not node.left and not node.right:
return int(mask & (mask - 1) == 0)
return dfs(node.left, mask) + dfs(node.right, mask)
return dfs(root, 0)
if __name__ == "__main__":
tree = TreeNode(2, TreeNode(3, TreeNode(3), TreeNode(1)), TreeNode(1, right=TreeNode(1)))
print(pseudo_palindromic_paths(tree))
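As a quick standalone check of the bit trick used above, the mask along the leftmost sample path 2 -> 3 -> 3 toggles one bit per digit; a final mask with at most one set bit (tested by mask & (mask - 1) == 0) means the digits can be rearranged into a palindrome, here 3, 2, 3.
# Illustrative only: parity mask evolution along the path 2 -> 3 -> 3.
mask = 0
for digit in (2, 3, 3):
    mask ^= 1 << digit              # flip the parity bit for this digit
    print(f"after {digit}: mask = {mask:04b}")
print(mask & (mask - 1) == 0)       # True: at most one digit has an odd count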
Solution 2: Frequency Counter
Maintain a frequency dictionary for digits along the path. A path is valid when at most one count is odd.
from collections import Counter
from typing import Optional
class TreeNode:
def __init__(self, value: int, left: Optional["TreeNode"] = None, right: Optional["TreeNode"] = None) -> None:
self.value = value
self.left = left
self.right = right
def pseudo_palindromic_paths_counter(root: Optional[TreeNode]) -> int:
def dfs(node: Optional[TreeNode], counts: Counter) -> int:
if not node:
return 0
counts[node.value] += 1
if not node.left and not node.right:
odd = sum(freq % 2 for freq in counts.values())
counts[node.value] -= 1
if counts[node.value] == 0:
del counts[node.value]
return int(odd <= 1)
total = dfs(node.left, counts) + dfs(node.right, counts)
counts[node.value] -= 1
if counts[node.value] == 0:
del counts[node.value]
return total
return dfs(root, Counter())
if __name__ == "__main__":
tree = TreeNode(2, TreeNode(3, TreeNode(3), TreeNode(1)), TreeNode(1, right=TreeNode(1)))
print(pseudo_palindromic_paths_counter(tree))
Select the largest set of non-adjacent nodes in a tree. Sample Input: tree edges [[0,1],[0,2],[1,3],[1,4]]. Sample Output: 3.
For each node compute the best value when you include it versus when you do not. Including forces children to be excluded, so taking the max of both states per node yields the optimum.
0
/ \
1 2
/ \
3 4
Solution 1: Include/Exclude DFS
from typing import List, Tuple
def max_independent_set(n: int, edges: List[List[int]]) -> int:
graph = [[] for _ in range(n)]
for u, v in edges:
graph[u].append(v)
graph[v].append(u)
def dfs(node: int, parent: int) -> Tuple[int, int]:
include = 1
exclude = 0
for neighbor in graph[node]:
if neighbor == parent:
continue
child_include, child_exclude = dfs(neighbor, node)
include += child_exclude
exclude += max(child_include, child_exclude)
return include, exclude
include_root, exclude_root = dfs(0, -1)
return max(include_root, exclude_root)
if __name__ == "__main__":
print(max_independent_set(5, [[0,1],[0,2],[1,3],[1,4]]))
Solution 2: Memoised Recursion
Use recursion with memoisation keyed by (node, parent_taken) to decide whether to include each node.
from functools import lru_cache
from typing import List
def max_independent_set_memo(n: int, edges: List[List[int]]) -> int:
graph = [[] for _ in range(n)]
for u, v in edges:
graph[u].append(v)
graph[v].append(u)
@lru_cache(maxsize=None)
def dfs(node: int, parent: int, take_parent: bool) -> int:
total = 0
for neighbor in graph[node]:
if neighbor == parent:
continue
total += best(neighbor, node)
if take_parent:
return total
take = 1
skip = 0
for neighbor in graph[node]:
if neighbor == parent:
continue
take += dfs(neighbor, node, True)
skip += best(neighbor, node)
return max(take, skip)
@lru_cache(maxsize=None)
def best(node: int, parent: int) -> int:
include = 1
exclude = 0
for neighbor in graph[node]:
if neighbor == parent:
continue
include += dfs(neighbor, node, True)
exclude += best(neighbor, node)
return max(include, exclude)
return best(0, -1)
if __name__ == "__main__":
print(max_independent_set_memo(5, [[0,1],[0,2],[1,3],[1,4]]))
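As a sanity check on the include/exclude recursion, a tiny exhaustive sweep over all bitmask subsets of nodes (purely illustrative, exponential in n) confirms the answer of 3 on the sample tree.
from typing import List
def max_independent_set_bruteforce(n: int, edges: List[List[int]]) -> int:
    best = 0
    for mask in range(1 << n):
        # Discard any subset that contains both endpoints of an edge.
        if any((mask >> u) & 1 and (mask >> v) & 1 for u, v in edges):
            continue
        best = max(best, bin(mask).count("1"))
    return best
if __name__ == "__main__":
    print(max_independent_set_bruteforce(5, [[0, 1], [0, 2], [1, 3], [1, 4]]))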
Assign colors to tree nodes so that adjacent nodes differ, minimizing total cost. Sample Input: tree edges [[0,1],[0,2],[1,3]] with the 3-color cost matrix used in the demos below. Sample Output: 10.
Use DFS to compute the minimum cost for each node given a chosen color by adding the cheapest compatible color for every child. Comparing colors per node yields the global minimum.
0
/ \
1 2
/
3
Solution 1: Bottom-Up Color DP
from typing import List
def min_paint_cost(tree: List[List[int]], costs: List[List[int]]) -> int:
n = len(tree)
colors = len(costs[0])
graph = [[] for _ in range(n)]
for u, v in tree:
graph[u].append(v)
graph[v].append(u)
def dfs(node: int, parent: int) -> List[int]:
dp = costs[node][:]
for neighbor in graph[node]:
if neighbor == parent:
continue
child_dp = dfs(neighbor, node)
for color in range(colors):
dp[color] += min(child_dp[c] for c in range(colors) if c != color)
return dp
return min(dfs(0, -1))
if __name__ == "__main__":
tree = [[0,1],[0,2],[1,3]]
costs = [
[3, 2, 7],
[5, 1, 4],
[2, 3, 6],
[4, 2, 3],
]
print(min_paint_cost(tree, costs))
Solution 2: Top-Down Memo
Memoise the cost of coloring a subtree with a specific parent color.
from functools import lru_cache
from typing import List
def min_paint_cost_topdown(tree: List[List[int]], costs: List[List[int]]) -> int:
n = len(tree)
colors = len(costs[0])
graph = [[] for _ in range(n)]
for u, v in tree:
graph[u].append(v)
graph[v].append(u)
@lru_cache(maxsize=None)
def dfs(node: int, parent: int, parent_color: int) -> int:
best = float('inf')
for color in range(colors):
if color == parent_color:
continue
total = costs[node][color]
for neighbor in graph[node]:
if neighbor == parent:
continue
total += dfs(neighbor, node, color)
best = min(best, total)
return best
return dfs(0, -1, -1)
if __name__ == "__main__":
tree = [[0,1],[0,2],[1,3]]
costs = [
[3, 2, 7],
[5, 1, 4],
[2, 3, 6],
[4, 2, 3],
]
print(min_paint_cost_topdown(tree, costs))
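As a cross-check of the DP (illustrative only, exponential in the number of nodes), enumerating every proper coloring of the demo tree with itertools.product reproduces the same minimum of 10.
from itertools import product
from typing import List
def min_paint_cost_bruteforce(tree: List[List[int]], costs: List[List[int]]) -> int:
    n = len(costs)
    colors = len(costs[0])
    best = float('inf')
    for assignment in product(range(colors), repeat=n):
        # Skip colorings where any edge has matching endpoint colors.
        if any(assignment[u] == assignment[v] for u, v in tree):
            continue
        best = min(best, sum(costs[node][assignment[node]] for node in range(n)))
    return best
if __name__ == "__main__":
    tree = [[0, 1], [0, 2], [1, 3]]
    costs = [
        [3, 2, 7],
        [5, 1, 4],
        [2, 3, 6],
        [4, 2, 3],
    ]
    print(min_paint_cost_bruteforce(tree, costs))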
Template for computing rerooted DP values for every node. Sample Input: n = 5, edges [[0,1],[0,2],[1,3],[1,4]]. Sample Output: [5, 5, 5, 5, 5].
First compute subtree contributions in a postorder pass, then propagate parent contributions outward in preorder. Combining down and up values gives every node's rerooted answer. With subtree size as the statistic, the rerooted answer is trivially n for every node, which makes the sample easy to verify; the template pays off when the statistic is richer.
0
/ \
1 2
/ \
3 4
Solution 1: Down-and-Up Passes
from typing import List
def reroot_template(n: int, edges: List[List[int]]) -> List[int]:
graph = [[] for _ in range(n)]
for u, v in edges:
graph[u].append(v)
graph[v].append(u)
down = [0] * n
answer = [0] * n
def post(node: int, parent: int) -> None:
total = 1
for neighbor in graph[node]:
if neighbor == parent:
continue
post(neighbor, node)
total += down[neighbor]
down[node] = total
def pre(node: int, parent: int, parent_value: int) -> None:
answer[node] = down[node] + parent_value
for neighbor in graph[node]:
if neighbor == parent:
continue
without_neighbor = answer[node] - down[neighbor]
pre(neighbor, node, without_neighbor)
post(0, -1)
pre(0, -1, 0)
return answer
if __name__ == "__main__":
print(reroot_template(5, [[0,1],[0,2],[1,3],[1,4]]))
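To show the same down/up skeleton carrying a non-trivial statistic, here is a sketch in the spirit of LeetCode 834 (sum of distances), rather than part of the original template; size, down, and answer are names introduced only for this sketch.
from typing import List
def sum_of_distances(n: int, edges: List[List[int]]) -> List[int]:
    graph = [[] for _ in range(n)]
    for u, v in edges:
        graph[u].append(v)
        graph[v].append(u)
    size = [1] * n    # nodes in each subtree when rooted at 0
    down = [0] * n    # sum of distances to nodes inside that subtree
    answer = [0] * n
    def post(node: int, parent: int) -> None:
        for neighbor in graph[node]:
            if neighbor == parent:
                continue
            post(neighbor, node)
            size[node] += size[neighbor]
            down[node] += down[neighbor] + size[neighbor]
    def pre(node: int, parent: int) -> None:
        for neighbor in graph[node]:
            if neighbor == parent:
                continue
            # Moving the root from node to neighbor brings size[neighbor] nodes
            # one step closer and pushes the remaining n - size[neighbor] farther.
            answer[neighbor] = answer[node] - size[neighbor] + (n - size[neighbor])
            pre(neighbor, node)
    post(0, -1)
    answer[0] = down[0]
    pre(0, -1)
    return answer
if __name__ == "__main__":
    print(sum_of_distances(5, [[0, 1], [0, 2], [1, 3], [1, 4]]))  # [6, 5, 9, 8, 8]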
Solution 2: Recompute per Root
For each node, perform a DFS treating it as the root and count the resulting subtree size. This O(n²) baseline clarifies what rerooting precomputation speeds up.
from typing import List
def reroot_counts_naive(n: int, edges: List[List[int]]) -> List[int]:
graph = [[] for _ in range(n)]
for u, v in edges:
graph[u].append(v)
graph[v].append(u)
def dfs(node: int, parent: int) -> int:
total = 1
for neighbor in graph[node]:
if neighbor == parent:
continue
total += dfs(neighbor, node)
return total
return [dfs(root, -1) for root in range(n)]
if __name__ == "__main__":
print(reroot_counts_naive(5, [[0,1],[0,2],[1,3],[1,4]]))
Implement a trie that supports insert, search, and prefix queries. Sample Input: insert "apple", search "apple", startsWith "app". Sample Output: True True.
Walk or create child nodes for each character during insert and mark finished words. Searching and prefix checks reuse the same traversal to confirm presence.
root
├─ 'a' -> (word)
└─ 'b' -> 'e' -> (word)
Solution 1: Trie Nodes
class TrieNode:
def __init__(self) -> None:
self.children: dict[str, 'TrieNode'] = {}
self.is_word = False
class Trie:
def __init__(self) -> None:
self.root = TrieNode()
def insert(self, word: str) -> None:
node = self.root
for char in word:
node = node.children.setdefault(char, TrieNode())
node.is_word = True
def search(self, word: str) -> bool:
node = self._find(word)
return bool(node and node.is_word)
def starts_with(self, prefix: str) -> bool:
return self._find(prefix) is not None
def _find(self, prefix: str) -> TrieNode | None:
node = self.root
for char in prefix:
if char not in node.children:
return None
node = node.children[char]
return node
if __name__ == "__main__":
trie = Trie()
trie.insert("apple")
print(trie.search("apple"))
print(trie.starts_with("app"))
Solution 2: Nested Dictionaries
Represent the trie as nested dictionaries with a special terminator key.
TERMINATOR = "#"
class DictionaryTrie:
def __init__(self) -> None:
self.root: dict[str, dict] = {}
def insert(self, word: str) -> None:
node = self.root
for char in word:
node = node.setdefault(char, {})
node[TERMINATOR] = {}
def search(self, word: str) -> bool:
node = self._find(word)
return node is not None and TERMINATOR in node
def starts_with(self, prefix: str) -> bool:
return self._find(prefix) is not None
def _find(self, prefix: str) -> dict | None:
node = self.root
for char in prefix:
if char not in node:
return None
node = node[char]
return node
if __name__ == "__main__":
trie = DictionaryTrie()
trie.insert("apple")
print(trie.search("apple"))
print(trie.starts_with("app"))
Add words and search with wildcard characters using a trie. Sample Input: add "bad", "dad"; search "b.d". Sample Output: True.
Store words in a trie and on encountering a '.' explore every child recursively. The branching search only explores feasible paths until a full match is found.
root
├─ 'b' -> 'a' -> 'd'
└─ 'd' -> 'a' -> 'd'
Solution 1: Trie with Wildcard DFS
class WordDictionary:
def __init__(self) -> None:
self.root = {}
def add_word(self, word: str) -> None:
node = self.root
for char in word:
node = node.setdefault(char, {})
node['#'] = True
def search(self, word: str) -> bool:
def dfs(node: dict[str, dict], index: int) -> bool:
if index == len(word):
return '#' in node
char = word[index]
if char == '.':
return any(dfs(next_node, index + 1) for key, next_node in node.items() if key != '#')
if char in node:
return dfs(node[char], index + 1)
return False
return dfs(self.root, 0)
if __name__ == "__main__":
dictionary = WordDictionary()
dictionary.add_word("bad")
dictionary.add_word("dad")
print(dictionary.search("b.d"))
Solution 2: Buckets by Length
Group words by length and compare character by character, treating '.' as a wildcard.
from collections import defaultdict
class WordDictionaryBuckets:
def __init__(self) -> None:
self.buckets = defaultdict(list)
def addWord(self, word: str) -> None:
self.buckets[len(word)].append(word)
def search(self, pattern: str) -> bool:
for candidate in self.buckets[len(pattern)]:
if all(p == '.' or p == c for p, c in zip(pattern, candidate)):
return True
return False
if __name__ == "__main__":
dictionary = WordDictionaryBuckets()
dictionary.addWord("bad")
dictionary.addWord("dad")
dictionary.addWord("mad")
print(dictionary.search("pad"))
print(dictionary.search(".ad"))
Find all dictionary words present on a board using DFS and a trie. Sample Input: board = [['o','a','a','n'], ...], words = ["oath","pea","eat","rain"]. Sample Output: ['eat', 'oath'].
Build a trie of target words and DFS from each board cell while pruning branches that are not prefixes. Removing found words from the trie avoids duplicate work and early exits dead branches.
Board:
o a a n
e t a e
i h k r
i f l v
Solution 1: Trie-Pruned Board DFS
from typing import List
def find_words(board: List[List[str]], words: List[str]) -> List[str]:
trie = {}
for word in words:
node = trie
for char in word:
node = node.setdefault(char, {})
node['$'] = word
rows, cols = len(board), len(board[0])
found: set[str] = set()
def dfs(row: int, col: int, node: dict[str, dict]) -> None:
char = board[row][col]
if char not in node:
return
next_node = node[char]
word = next_node.pop('$', None)
if word:
found.add(word)
board[row][col] = '#'
for delta_row, delta_col in ((1, 0), (-1, 0), (0, 1), (0, -1)):
new_row, new_col = row + delta_row, col + delta_col
if 0 <= new_row < rows and 0 <= new_col < cols and board[new_row][new_col] != '#':
dfs(new_row, new_col, next_node)
board[row][col] = char
if not next_node:
node.pop(char)
for r in range(rows):
for c in range(cols):
dfs(r, c, trie)
return sorted(found)
if __name__ == "__main__":
grid = [
['o', 'a', 'a', 'n'],
['e', 't', 'a', 'e'],
['i', 'h', 'k', 'r'],
['i', 'f', 'l', 'v'],
]
print(find_words(grid, ["oath", "pea", "eat", "rain"]))
Solution 2: Per-Word DFS
For each word, run a depth-first search across the board without using a trie.
from typing import List, Set, Tuple
def find_words_bruteforce(board: List[List[str]], words: List[str]) -> List[str]:
rows, cols = len(board), len(board[0])
def exists(word: str) -> bool:
def dfs(r: int, c: int, index: int, visited: Set[Tuple[int, int]]) -> bool:
if index == len(word):
return True
if not (0 <= r < rows and 0 <= c < cols):
return False
if (r, c) in visited or board[r][c] != word[index]:
return False
visited.add((r, c))
for dr, dc in ((1,0),(-1,0),(0,1),(0,-1)):
if dfs(r + dr, c + dc, index + 1, visited):
return True
visited.remove((r, c))
return False
for r in range(rows):
for c in range(cols):
if dfs(r, c, 0, set()):
return True
return False
found = []
for word in words:
if exists(word):
found.append(word)
return found
if __name__ == "__main__":
board = [
['o','a','a','n'],
['e','t','a','e'],
['i','h','k','r'],
['i','f','l','v'],
]
print(find_words_bruteforce(board, ["oath", "pea", "eat", "rain"]))
You are given a reference to a node in a connected undirected graph. Return a deep copy of the entire graph. Sample Input: adjacency list [[2,4],[1,3],[2,4],[1,3]] for node 1. Sample Output: cloned adjacency list [[2,4],[1,3],[2,4],[1,3]]. (LeetCode 133.)
The graph can contain cycles, so you must memoize cloned nodes to avoid infinite recursion. You can traverse with BFS or DFS, creating neighbor links as you discover new nodes.
1 -- 2
|    |
4 -- 3
Solution 1: BFS Cloning
Traverse level by level while maintaining a map from original nodes to their clones. Every time you discover a new node you enqueue it and wire the edge to the already-created clone.
from typing import Dict, List, Optional
from collections import deque
class Node:
def __init__(self, val: int = 0, neighbors: Optional[List['Node']] = None) -> None:
self.val = val
self.neighbors = neighbors or []
def clone_graph_bfs(node: Optional[Node]) -> Optional[Node]:
if node is None:
return None
clones: Dict[Node, Node] = {node: Node(node.val)}
queue: deque[Node] = deque([node])
while queue:
current = queue.popleft()
for neighbor in current.neighbors:
if neighbor not in clones:
clones[neighbor] = Node(neighbor.val)
queue.append(neighbor)
clones[current].neighbors.append(clones[neighbor])
return clones[node]
if __name__ == "__main__":
one = Node(1)
two = Node(2)
one.neighbors.append(two)
two.neighbors.append(one)
copy = clone_graph_bfs(one)
print(copy.val, [nbr.val for nbr in copy.neighbors])
Solution 2: DFS Cloning
Depth-first recursion mirrors the BFS approach but walks neighbors immediately. Memoization ensures each node is cloned once even in cycles.
from typing import Dict, List, Optional
class Node:
def __init__(self, val: int = 0, neighbors: Optional[List['Node']] = None) -> None:
self.val = val
self.neighbors = neighbors or []
def clone_graph_dfs(node: Optional[Node]) -> Optional[Node]:
def dfs(current: Optional[Node]) -> Optional[Node]:
if current is None:
return None
if current in clones:
return clones[current]
clone = Node(current.val)
clones[current] = clone
for neighbor in current.neighbors:
clone.neighbors.append(dfs(neighbor))
return clone
clones: Dict[Node, Node] = {}
return dfs(node)
if __name__ == "__main__":
one = Node(1)
two = Node(2)
three = Node(3)
one.neighbors.extend([two, three])
two.neighbors.append(one)
three.neighbors.append(one)
copy = clone_graph_dfs(one)
print(copy.val, sorted(n.val for n in copy.neighbors))
Given an m × n grid of '1's (land) and '0's (water), return the number of islands. An island is surrounded by water and is formed by connecting adjacent lands horizontally or vertically. Sample Input: grid = ["11000","11000","00100","00011"]. Sample Output: 3. (LeetCode 200.)
Every unvisited land cell can be flood-filled to mark the entire connected component. DFS or BFS works; you just need to avoid revisiting cells and stay within bounds.
1 1 0 0 0
1 1 0 0 0
0 0 1 0 0
0 0 0 1 1
Solution 1: Recursive DFS Flood Fill
Recursively explore four directions and mark visited cells as water. Each call covers one island entirely.
from typing import List
def num_islands_dfs(grid: List[List[str]]) -> int:
if not grid:
return 0
rows, cols = len(grid), len(grid[0])
def dfs(row: int, col: int) -> None:
if row < 0 or col < 0 or row >= rows or col >= cols or grid[row][col] != '1':
return
grid[row][col] = '0'
dfs(row + 1, col)
dfs(row - 1, col)
dfs(row, col + 1)
dfs(row, col - 1)
islands = 0
for r in range(rows):
for c in range(cols):
if grid[r][c] == '1':
islands += 1
dfs(r, c)
return islands
if __name__ == "__main__":
board = [list("11000"), list("11000"), list("00100"), list("00011")]
print(num_islands_dfs(board))
Solution 2: BFS Flood Fill
Iterative breadth-first search avoids recursion limits. Push neighbors onto a queue and mark cells as visited as soon as they are enqueued.
from collections import deque
from typing import List, Tuple
def num_islands_bfs(grid: List[List[str]]) -> int:
if not grid:
return 0
rows, cols = len(grid), len(grid[0])
islands = 0
for r in range(rows):
for c in range(cols):
if grid[r][c] != '1':
continue
islands += 1
grid[r][c] = '0'
queue: deque[Tuple[int, int]] = deque([(r, c)])
while queue:
row, col = queue.popleft()
for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
nr, nc = row + dr, col + dc
if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == '1':
grid[nr][nc] = '0'
queue.append((nr, nc))
return islands
if __name__ == "__main__":
board = [list("11000"), list("11000"), list("00100"), list("00011")]
print(num_islands_bfs(board))
Given an m × n grid where 0 represents empty, 1 represents a fresh orange, and 2 represents a rotten orange, return the minimum minutes needed so that every fresh orange rots or -1 if impossible. Sample Input: grid = [[2,1,1],[1,1,0],[0,1,1]]. Sample Output: 4. (LeetCode 994.)
Treat every initially rotten orange as a BFS source. Each layer represents one minute of spread, so the deepest layer that reaches fresh fruit is the answer.
2 1 1
1 1 0
0 1 1
Solution 1: Multi-Source BFS with Timestamps
Store the minute each orange becomes rotten directly in the queue. The answer is the maximum timestamp encountered after processing all reachable nodes.
from collections import deque
from typing import Deque, List, Tuple
def oranges_rotting_timestamps(grid: List[List[int]]) -> int:
rows, cols = len(grid), len(grid[0])
queue: Deque[Tuple[int, int, int]] = deque()
fresh = 0
for r in range(rows):
for c in range(cols):
if grid[r][c] == 2:
queue.append((r, c, 0))
elif grid[r][c] == 1:
fresh += 1
minutes = 0
while queue:
row, col, minute = queue.popleft()
minutes = max(minutes, minute)
for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
nr, nc = row + dr, col + dc
if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 1:
grid[nr][nc] = 2
fresh -= 1
queue.append((nr, nc, minute + 1))
return minutes if fresh == 0 else -1
if __name__ == "__main__":
board = [[2, 1, 1], [1, 1, 0], [0, 1, 1]]
print(oranges_rotting_timestamps(board))
Solution 2: Layered BFS
Process the queue level by level so each loop iteration represents exactly one minute. Stop as soon as all fresh oranges are converted.
from collections import deque
from typing import Deque, List, Tuple
def oranges_rotting_layers(grid: List[List[int]]) -> int:
rows, cols = len(grid), len(grid[0])
queue: Deque[Tuple[int, int]] = deque()
fresh = 0
for r in range(rows):
for c in range(cols):
if grid[r][c] == 2:
queue.append((r, c))
elif grid[r][c] == 1:
fresh += 1
minutes = 0
while queue and fresh:
for _ in range(len(queue)):
row, col = queue.popleft()
for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
nr, nc = row + dr, col + dc
if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 1:
grid[nr][nc] = 2
fresh -= 1
queue.append((nr, nc))
minutes += 1
return minutes if fresh == 0 else -1
if __name__ == "__main__":
board = [[2, 1, 1], [1, 1, 0], [0, 1, 1]]
print(oranges_rotting_layers(board))
Given an undirected adjacency list, determine whether the graph is bipartite (two-colorable). Sample Input: graph = [[1,3],[0,2],[1,3],[0,2]]. Sample Output: True. (LeetCode 785.)
Color the graph with two colors while traversing. Any edge that connects two nodes of the same color disproves bipartiteness. BFS and DFS differ only in traversal order.
0: [1, 3]
1: [0, 2]
2: [1, 3]
3: [0, 2]
Solution 1: BFS Two-Coloring
Breadth-first search colors neighbors with the opposite value. Restart the BFS for every disconnected component.
from collections import deque
from typing import Dict, List
def is_bipartite_bfs(graph: List[List[int]]) -> bool:
color: Dict[int, int] = {}
for start in range(len(graph)):
if start in color:
continue
queue = deque([start])
color[start] = 0
while queue:
node = queue.popleft()
for neighbor in graph[node]:
if neighbor not in color:
color[neighbor] = color[node] ^ 1
queue.append(neighbor)
elif color[neighbor] == color[node]:
return False
return True
if __name__ == "__main__":
print(is_bipartite_bfs([[1, 3], [0, 2], [1, 3], [0, 2]]))
Solution 2: DFS Two-Coloring
Depth-first search applies the same coloring constraint but dives down each branch. Early returns on conflicts keep the runtime linear.
from typing import Dict, List
def is_bipartite_dfs(graph: List[List[int]]) -> bool:
color: Dict[int, int] = {}
def dfs(node: int, value: int) -> bool:
if node in color:
return color[node] == value
color[node] = value
return all(dfs(neighbor, value ^ 1) for neighbor in graph[node])
return all(dfs(node, 0) for node in range(len(graph)) if node not in color)
if __name__ == "__main__":
print(is_bipartite_dfs([[1, 3], [0, 2], [1, 3], [0, 2]]))
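The coloring dictionary also recovers the two partitions themselves, which is often what a two-colorability question ultimately wants. A minimal sketch reusing the BFS coloring above (bipartition is an illustrative name, not part of the original problem):
from collections import deque
from typing import Dict, List, Optional, Tuple
def bipartition(graph: List[List[int]]) -> Optional[Tuple[List[int], List[int]]]:
    color: Dict[int, int] = {}
    for start in range(len(graph)):
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for neighbor in graph[node]:
                if neighbor not in color:
                    color[neighbor] = color[node] ^ 1
                    queue.append(neighbor)
                elif color[neighbor] == color[node]:
                    return None  # same color on both ends of an edge: not bipartite
    left = [node for node, c in color.items() if c == 0]
    right = [node for node, c in color.items() if c == 1]
    return left, right
if __name__ == "__main__":
    print(bipartition([[1, 3], [0, 2], [1, 3], [0, 2]]))  # ([0, 2], [1, 3])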
Given an m × n integer matrix, return the length of the longest strictly increasing path. You may move in the four cardinal directions. Sample Input: matrix = [[9,9,4],[6,6,8],[2,1,1]]. Sample Output: 4. (LeetCode 329.)
Every cell can only move to larger neighbors, so the graph of moves is a DAG. Depth-first search with memoization treats each cell as a subproblem, while a topological pass peels off cells by indegree to accumulate the longest chain length.
9 9 4
6 6 8
2 1 1
Solution 1: DFS with Memoization
Treat each cell as the start of a path and cache the best length leaving that cell. The recursion only explores valid increases, so every cell is solved once.
from typing import List
def longest_increasing_path_dfs(matrix: List[List[int]]) -> int:
if not matrix:
return 0
rows, cols = len(matrix), len(matrix[0])
cache = [[0] * cols for _ in range(rows)]
def dfs(row: int, col: int) -> int:
if cache[row][col]:
return cache[row][col]
best = 1
for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
nr, nc = row + dr, col + dc
if 0 <= nr < rows and 0 <= nc < cols and matrix[nr][nc] > matrix[row][col]:
best = max(best, 1 + dfs(nr, nc))
cache[row][col] = best
return best
return max(dfs(r, c) for r in range(rows) for c in range(cols))
if __name__ == "__main__":
grid = [[9, 9, 4], [6, 6, 8], [2, 1, 1]]
print(longest_increasing_path_dfs(grid))
Solution 2: Topological Layering
Build edges from each cell to every larger neighbor and count indegrees. Kahn's algorithm visits cells in increasing order, and each layer represents one extra step in the path.
from collections import deque
from typing import List, Tuple
def longest_increasing_path_topological(matrix: List[List[int]]) -> int:
if not matrix:
return 0
rows, cols = len(matrix), len(matrix[0])
indegree = [[0] * cols for _ in range(rows)]
graph: List[List[Tuple[int, int]]] = [[[] for _ in range(cols)] for _ in range(rows)]
for r in range(rows):
for c in range(cols):
for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
nr, nc = r + dr, c + dc
if 0 <= nr < rows and 0 <= nc < cols and matrix[nr][nc] > matrix[r][c]:
graph[r][c].append((nr, nc))
indegree[nr][nc] += 1
queue = deque([(r, c) for r in range(rows) for c in range(cols) if indegree[r][c] == 0])
length = 0
while queue:
for _ in range(len(queue)):
r, c = queue.popleft()
for nr, nc in graph[r][c]:
indegree[nr][nc] -= 1
if indegree[nr][nc] == 0:
queue.append((nr, nc))
length += 1
return length
if __name__ == "__main__":
grid = [[9, 9, 4], [6, 6, 8], [2, 1, 1]]
print(longest_increasing_path_topological(grid))
Given two words beginWord and endWord, and a dictionary of allowed words, return the length of the shortest transformation sequence from begin to end where only one letter can change at a time and each intermediate word must be in the dictionary. Sample Input: beginWord = "hit", endWord = "cog", wordList = ["hot","dot","dog","lot","log","cog"]. Sample Output: 5. (LeetCode 127.)
The problem is an unweighted shortest path in an implicit graph whose nodes are dictionary words. A standard BFS using wildcard buckets enumerates neighbors on demand, while bidirectional BFS reduces the branching factor by searching from both ends simultaneously.
hit -> hot -> dot -> dog -> cog
Words: hot, dot, dog, lot, log, cog
Solution 1: BFS with Intermediate Patterns
Group dictionary words by wildcard patterns (like "*it") so each BFS expansion visits only true neighbors. Clearing a bucket after use prevents redundant enqueues.
from collections import defaultdict, deque
from typing import Deque, Dict, List, Set, Tuple
def ladder_length_bfs(begin_word: str, end_word: str, word_list: List[str]) -> int:
if end_word not in word_list:
return 0
neighbors: Dict[str, List[str]] = defaultdict(list)
length = len(begin_word)
for word in word_list:
for index in range(length):
pattern = word[:index] + '*' + word[index + 1:]
neighbors[pattern].append(word)
queue: Deque[Tuple[str, int]] = deque([(begin_word, 1)])
visited: Set[str] = {begin_word}
while queue:
word, depth = queue.popleft()
if word == end_word:
return depth
for index in range(length):
pattern = word[:index] + '*' + word[index + 1:]
for candidate in neighbors[pattern]:
if candidate not in visited:
visited.add(candidate)
queue.append((candidate, depth + 1))
neighbors[pattern] = []
return 0
if __name__ == "__main__":
print(ladder_length_bfs("hit", "cog", ["hot", "dot", "dog", "lot", "log", "cog"]))
Solution 2: Bidirectional BFS
Expanding from both ends shrinks the shorter frontier every step, dramatically reducing the number of generated words for large dictionaries.
from typing import List, Set
def ladder_length_bidirectional(begin_word: str, end_word: str, word_list: List[str]) -> int:
dictionary: Set[str] = set(word_list)
if end_word not in dictionary:
return 0
begin_front = {begin_word}
end_front = {end_word}
depth = 1
alphabet = [chr(code) for code in range(ord('a'), ord('z') + 1)]
while begin_front and end_front:
if len(begin_front) > len(end_front):
begin_front, end_front = end_front, begin_front
next_front = set()
for word in begin_front:
word_chars = list(word)
for index in range(len(word_chars)):
original = word_chars[index]
for letter in alphabet:
if letter == original:
continue
word_chars[index] = letter
candidate = ''.join(word_chars)
if candidate in end_front:
return depth + 1
if candidate in dictionary:
dictionary.remove(candidate)
next_front.add(candidate)
word_chars[index] = original
begin_front = next_front
depth += 1
return 0
if __name__ == "__main__":
print(ladder_length_bidirectional("hit", "cog", ["hot", "dot", "dog", "lot", "log", "cog"]))
There are numCourses labeled 0 … numCourses-1, and an array prerequisites where [a, b] means you must take b before a. Return True if you can finish all courses. Sample Input: numCourses = 2, prerequisites = [[1,0]]. Sample Output: True. (LeetCode 207.)
This is a cycle-detection problem on a directed graph. Kahn's algorithm consumes nodes with zero in-degree, while a DFS with recursion stack flags when we revisit an active node. Either method verifies that no cycle exists.
0 -> [1]
1 -> []
Solution 1: Kahn's Algorithm
Maintain in-degrees for every course and repeatedly take any course with no remaining prerequisites. If you process all courses, no cycle exists.
from collections import deque
from typing import Deque, Dict, List
def can_finish_kahn(num_courses: int, prerequisites: List[List[int]]) -> bool:
adjacency: Dict[int, List[int]] = {course: [] for course in range(num_courses)}
indegree = [0] * num_courses
for course, prereq in prerequisites:
adjacency[prereq].append(course)
indegree[course] += 1
queue: Deque[int] = deque([course for course in range(num_courses) if indegree[course] == 0])
taken = 0
while queue:
course = queue.popleft()
taken += 1
for neighbor in adjacency[course]:
indegree[neighbor] -= 1
if indegree[neighbor] == 0:
queue.append(neighbor)
return taken == num_courses
if __name__ == "__main__":
print(can_finish_kahn(2, [[1, 0]]))
Solution 2: DFS Cycle Detection
Track three states for each node: unvisited, visiting, visited. Hitting a visiting node during DFS indicates a cycle and makes the schedule impossible.
from typing import List
def can_finish_dfs(num_courses: int, prerequisites: List[List[int]]) -> bool:
graph = [[] for _ in range(num_courses)]
for course, prereq in prerequisites:
graph[prereq].append(course)
state = [0] * num_courses # 0 = unvisited, 1 = visiting, 2 = visited
def dfs(node: int) -> bool:
if state[node] == 1:
return False
if state[node] == 2:
return True
state[node] = 1
for neighbor in graph[node]:
if not dfs(neighbor):
return False
state[node] = 2
return True
return all(dfs(course) for course in range(num_courses) if state[course] == 0)
if __name__ == "__main__":
print(can_finish_dfs(2, [[1, 0]]))
Given n nodes labeled 0 … n-1 and an undirected edge list, return the number of connected components. Sample Input: n = 5, edges = [[0,1],[1,2],[3,4]]. Sample Output: 2. (LeetCode 323.)
Every edge merges two communities. A disjoint set union keeps track of component parents and counts efficiently, while a DFS traversal marks all nodes reachable from each unvisited start.
n = 5
edges = [[0,1],[1,2],[3,4]]
Solution 1: Disjoint Set Union
Union operations collapse components using path compression and union by rank, and the remaining root count is the answer.
from typing import List
def count_components_union_find(n: int, edges: List[List[int]]) -> int:
parent = list(range(n))
rank = [0] * n
components = n
def find(node: int) -> int:
if parent[node] != node:
parent[node] = find(parent[node])
return parent[node]
def union(a: int, b: int) -> None:
nonlocal components
root_a, root_b = find(a), find(b)
if root_a == root_b:
return
if rank[root_a] < rank[root_b]:
root_a, root_b = root_b, root_a
parent[root_b] = root_a
if rank[root_a] == rank[root_b]:
rank[root_a] += 1
components -= 1
for u, v in edges:
union(u, v)
return components
if __name__ == "__main__":
print(count_components_union_find(5, [[0, 1], [1, 2], [3, 4]]))
Solution 2: DFS Traversal
Build adjacency lists and launch DFS from any unvisited node, marking all nodes it can reach. Each DFS run accounts for one component.
from typing import List
def count_components_dfs(n: int, edges: List[List[int]]) -> int:
graph = [[] for _ in range(n)]
for u, v in edges:
graph[u].append(v)
graph[v].append(u)
visited = [False] * n
def dfs(node: int) -> None:
stack = [node]
visited[node] = True
while stack:
current = stack.pop()
for neighbor in graph[current]:
if not visited[neighbor]:
visited[neighbor] = True
stack.append(neighbor)
components = 0
for node in range(n):
if not visited[node]:
components += 1
dfs(node)
return components
if __name__ == "__main__":
print(count_components_dfs(5, [[0, 1], [1, 2], [3, 4]]))
You are given a directed graph with n nodes, edge list (u, v, w), and a starting node k. Return the time it takes for all nodes to receive a signal sent from k, or -1 if unreachable. Sample Input: n = 4, times = [[2,1,1],[2,3,1],[3,4,1]], k = 2. Sample Output: 2. (LeetCode 743, Network Delay Time.)
All edge weights are non-negative, so Dijkstra's algorithm applies. A binary heap implementation runs in O((V+E) log V), while a dense-graph variant without a heap is simpler but O(V²).
2 -(1)-> 1
2 -(1)-> 3
3 -(1)-> 4
Solution 1: Dijkstra with Min-Heap
Maintain adjacency lists and pop the closest node from a priority queue. Each relaxation updates the queue with a better distance.
import heapq
from typing import List, Tuple
def network_delay_time_heap(times: List[Tuple[int, int, int]], n: int, k: int) -> int:
graph = [[] for _ in range(n + 1)]
for u, v, w in times:
graph[u].append((v, w))
distances = [float('inf')] * (n + 1)
distances[k] = 0
heap: List[Tuple[int, int]] = [(0, k)]
while heap:
dist, node = heapq.heappop(heap)
if dist > distances[node]:
continue
for neighbor, weight in graph[node]:
new_dist = dist + weight
if new_dist < distances[neighbor]:
distances[neighbor] = new_dist
heapq.heappush(heap, (new_dist, neighbor))
answer = max(distances[1:])
return answer if answer < float('inf') else -1
if __name__ == "__main__":
print(network_delay_time_heap([(2, 1, 1), (2, 3, 1), (3, 4, 1)], 4, 2))
Solution 2: Dense-Graph Dijkstra
If the graph is dense, a simple O(V²) Dijkstra that repeatedly selects the unvisited node with the smallest tentative distance can avoid heap overhead.
from typing import List, Tuple
def network_delay_time_dense(times: List[Tuple[int, int, int]], n: int, k: int) -> int:
distances = [[float('inf')] * (n + 1) for _ in range(n + 1)]
for node in range(1, n + 1):
distances[node][node] = 0
for u, v, w in times:
distances[u][v] = min(distances[u][v], w)
dist = [float('inf')] * (n + 1)
visited = [False] * (n + 1)
dist[k] = 0
for _ in range(n):
node = -1
best = float('inf')
for candidate in range(1, n + 1):
if not visited[candidate] and dist[candidate] < best:
best = dist[candidate]
node = candidate
if node == -1:
break
visited[node] = True
for neighbor in range(1, n + 1):
weight = distances[node][neighbor]
if weight < float('inf') and dist[node] + weight < dist[neighbor]:
dist[neighbor] = dist[node] + weight
answer = max(dist[1:])
return answer if answer < float('inf') else -1
if __name__ == "__main__":
print(network_delay_time_dense([(2, 1, 1), (2, 3, 1), (3, 4, 1)], 4, 2))
There are n cities labeled 1 … n and bidirectional roads with costs. Return the minimum cost to connect all cities, or -1 if impossible. Sample Input: n = 4, connections = [[1,2,1],[2,3,4],[1,4,3],[4,3,2]]. Sample Output: 6. (LeetCode 1135, Connecting Cities With Minimum Cost.)
A minimum spanning tree connects all nodes with the smallest total weight. Prim’s algorithm grows the tree from any seed, while Kruskal’s algorithm sorts edges and unions components greedily.
1-2 (1)
2-3 (4)
1-4 (3)
4-3 (2)
Solution 1: Prim's Algorithm with Heap
Start from any city and always add the cheapest edge to an unvisited city. A min-heap tracks the frontier edges.
import heapq
from typing import List, Tuple
def min_cost_connect_cities_prim(n: int, connections: List[Tuple[int, int, int]]) -> int:
graph = [[] for _ in range(n + 1)]
for u, v, cost in connections:
graph[u].append((cost, v))
graph[v].append((cost, u))
visited = [False] * (n + 1)
visited[0] = True # ignore index 0
total = 0
edges_added = 0
heap: List[Tuple[int, int]] = [(0, 1)]
while heap and edges_added < n:
cost, node = heapq.heappop(heap)
if visited[node]:
continue
visited[node] = True
total += cost
edges_added += 1
for next_cost, neighbor in graph[node]:
if not visited[neighbor]:
heapq.heappush(heap, (next_cost, neighbor))
return total if edges_added == n else -1
if __name__ == "__main__":
print(min_cost_connect_cities_prim(4, [(1, 2, 1), (2, 3, 4), (1, 4, 3), (4, 3, 2)]))
Solution 2: Kruskal with Union Find
Sort edges by cost and union their endpoints if they belong to different components. The sum of accepted edges is the MST cost.
from typing import List, Tuple
def min_cost_connect_cities_kruskal(n: int, connections: List[Tuple[int, int, int]]) -> int:
parent = list(range(n + 1))
rank = [0] * (n + 1)
def find(node: int) -> int:
if parent[node] != node:
parent[node] = find(parent[node])
return parent[node]
def union(a: int, b: int) -> bool:
root_a, root_b = find(a), find(b)
if root_a == root_b:
return False
if rank[root_a] < rank[root_b]:
root_a, root_b = root_b, root_a
parent[root_b] = root_a
if rank[root_a] == rank[root_b]:
rank[root_a] += 1
return True
total = 0
edges_used = 0
for u, v, cost in sorted(connections, key=lambda edge: edge[2]):
if union(u, v):
total += cost
edges_used += 1
return total if edges_used == n - 1 else -1
if __name__ == "__main__":
print(min_cost_connect_cities_kruskal(4, [(1, 2, 1), (2, 3, 4), (1, 4, 3), (4, 3, 2)]))
Given a directed graph with vertices 0 … n-1, return all strongly connected components (each max set of mutually reachable nodes). Sample Input: n = 5, edges = [[0,1],[1,2],[2,0],[1,3],[3,4]]. Sample Output: [[2,1,0],[4],[3]].
Strong connectivity can be found with Tarjan's single-pass low-link DFS or Kosaraju's two-pass approach. Both rely on depth-first search to discover cycles and condensation orderings.
0 -> 1
1 -> 2, 3
2 -> 0
3 -> 4
Solution 1: Tarjan's Low-Link DFS
Assign each node an index and lowest reachable index. When a node's low-link equals its index, you have found the head of an SCC and can pop the stack.
from typing import List
def tarjans_scc(n: int, edges: List[List[int]]) -> List[List[int]]:
graph = [[] for _ in range(n)]
for u, v in edges:
graph[u].append(v)
index = 0
stack: list[int] = []
on_stack = [False] * n
indices = [-1] * n
low_link = [0] * n
components: List[List[int]] = []
def strongconnect(node: int) -> None:
nonlocal index
indices[node] = low_link[node] = index
index += 1
stack.append(node)
on_stack[node] = True
for neighbor in graph[node]:
if indices[neighbor] == -1:
strongconnect(neighbor)
low_link[node] = min(low_link[node], low_link[neighbor])
elif on_stack[neighbor]:
low_link[node] = min(low_link[node], indices[neighbor])
if low_link[node] == indices[node]:
component = []
while True:
w = stack.pop()
on_stack[w] = False
component.append(w)
if w == node:
break
components.append(component)
for vertex in range(n):
if indices[vertex] == -1:
strongconnect(vertex)
return components
if __name__ == "__main__":
print(tarjans_scc(5, [[0,1],[1,2],[2,0],[1,3],[3,4]]))
Solution 2: Kosaraju's Two-Pass DFS
First capture a reverse topological order on the original graph, then DFS the transposed graph in that order to peel off SCCs.
from typing import List
def kosaraju_scc(n: int, edges: List[List[int]]) -> List[List[int]]:
graph = [[] for _ in range(n)]
transpose = [[] for _ in range(n)]
for u, v in edges:
graph[u].append(v)
transpose[v].append(u)
order: list[int] = []
seen = [False] * n
def dfs(node: int) -> None:
seen[node] = True
for neighbor in graph[node]:
if not seen[neighbor]:
dfs(neighbor)
order.append(node)
for node in range(n):
if not seen[node]:
dfs(node)
components: List[List[int]] = []
seen = [False] * n
def reverse_dfs(node: int, component: List[int]) -> None:
seen[node] = True
component.append(node)
for neighbor in transpose[node]:
if not seen[neighbor]:
reverse_dfs(neighbor, component)
for node in reversed(order):
if not seen[node]:
component: List[int] = []
reverse_dfs(node, component)
components.append(component)
return components
if __name__ == "__main__":
print(kosaraju_scc(5, [[0,1],[1,2],[2,0],[1,3],[3,4]]))
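The overview mentions condensation orderings: collapsing each SCC into a single node leaves only the edges that cross between components, and those always form a DAG. A minimal sketch built on the component lists returned above (condensation_edges is an illustrative helper name; the hard-coded SCC list matches the sample output):
from typing import List, Set, Tuple
def condensation_edges(edges: List[List[int]], components: List[List[int]]) -> Set[Tuple[int, int]]:
    # Map every vertex to the index of the SCC that contains it.
    component_of = {}
    for comp_id, component in enumerate(components):
        for vertex in component:
            component_of[vertex] = comp_id
    # Keep only edges whose endpoints lie in different SCCs; they form the condensation DAG.
    return {(component_of[u], component_of[v]) for u, v in edges if component_of[u] != component_of[v]}
if __name__ == "__main__":
    edges = [[0, 1], [1, 2], [2, 0], [1, 3], [3, 4]]
    sccs = [[2, 1, 0], [4], [3]]  # SCCs of the sample graph, e.g. from tarjans_scc above
    print(sorted(condensation_edges(edges, sccs)))  # [(0, 2), (2, 1)]: {0,1,2} -> {3} -> {4}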
Given a directed network with capacities, compute the maximum amount of flow that can be pushed from source to sink. Sample Input: capacity matrix below, source = 0, sink = 5. Sample Output: 5.
Max-flow is solved by repeatedly augmenting along residual paths until none remain. Edmonds-Karp uses BFS to find the shortest augmenting path in edge count, while Dinic accelerates the process with layered level graphs and blocking flows.
0 -> 1 (3)
0 -> 2 (2)
1 -> 3 (3)
1 -> 4 (2)
2 -> 1 (1)
2 -> 4 (2)
3 -> 5 (4)
4 -> 3 (1)
4 -> 5 (3)
Solution 1: Edmonds-Karp (BFS Augmentation)
Breadth-first search discovers the shortest residual path. Augmenting along that path and updating reverse edges repeats until no more paths exist.
from collections import deque
from typing import List, Optional
def edmonds_karp(capacity: List[List[int]], source: int, sink: int) -> int:
n = len(capacity)
flow = [[0] * n for _ in range(n)]
def bfs() -> Optional[List[int]]:
parent = [-1] * n
parent[source] = source
queue = deque([source])
while queue:
node = queue.popleft()
for neighbor in range(n):
residual = capacity[node][neighbor] - flow[node][neighbor]
if residual > 0 and parent[neighbor] == -1:
parent[neighbor] = node
if neighbor == sink:
return parent
queue.append(neighbor)
return None
max_flow = 0
parent = bfs()
while parent:
increment = float('inf')
node = sink
while node != source:
prev = parent[node]
increment = min(increment, capacity[prev][node] - flow[prev][node])
node = prev
node = sink
while node != source:
prev = parent[node]
flow[prev][node] += increment
flow[node][prev] -= increment
node = prev
max_flow += increment
parent = bfs()
return max_flow
if __name__ == "__main__":
capacity = [
[0, 3, 2, 0, 0, 0],
[0, 0, 0, 3, 2, 0],
[0, 1, 0, 0, 2, 0],
[0, 0, 0, 0, 0, 4],
[0, 0, 0, 1, 0, 3],
[0, 0, 0, 0, 0, 0],
]
print(edmonds_karp(capacity, 0, 5))
Solution 2: Dinic's Blocking Flow
Build a level graph with BFS, then send blocking flows with DFS until the level graph is saturated. Repeating level builds improves performance on dense networks.
from collections import deque
from typing import Deque, List
class Dinic:
def __init__(self, n: int) -> None:
self.n = n
self.graph: List[List[int]] = [[] for _ in range(n)]
self.edges: List[List[int]] = [] # [u, v, cap]
def add_edge(self, u: int, v: int, cap: int) -> None:
self.graph[u].append(len(self.edges))
self.edges.append([u, v, cap])
self.graph[v].append(len(self.edges))
self.edges.append([v, u, 0])
def max_flow(self, source: int, sink: int) -> int:
flow = 0
level = [-1] * self.n
def bfs() -> bool:
for i in range(self.n):
level[i] = -1
level[source] = 0
queue: Deque[int] = deque([source])
while queue:
node = queue.popleft()
for edge_id in self.graph[node]:
_, neighbor, cap = self.edges[edge_id]
if cap > 0 and level[neighbor] == -1:
level[neighbor] = level[node] + 1
queue.append(neighbor)
return level[sink] != -1
def dfs(node: int, pushed: int, ptr: List[int]) -> int:
if pushed == 0 or node == sink:
return pushed
while ptr[node] < len(self.graph[node]):
edge_id = self.graph[node][ptr[node]]
u, neighbor, cap = self.edges[edge_id]
if cap > 0 and level[neighbor] == level[node] + 1:
flow_sent = dfs(neighbor, min(pushed, cap), ptr)
if flow_sent:
self.edges[edge_id][2] -= flow_sent
self.edges[edge_id ^ 1][2] += flow_sent
return flow_sent
ptr[node] += 1
return 0
while bfs():
ptr = [0] * self.n
pushed = dfs(source, float('inf'), ptr)
while pushed:
flow += pushed
pushed = dfs(source, float('inf'), ptr)
return flow
if __name__ == "__main__":
dinic = Dinic(6)
edges = [
(0, 1, 3), (0, 2, 2),
(1, 3, 3), (1, 4, 2),
(2, 1, 1), (2, 4, 2),
(3, 5, 4), (4, 3, 1), (4, 5, 3),
]
for u, v, cap in edges:
dinic.add_edge(u, v, cap)
print(dinic.max_flow(0, 5))
Return any valid order to finish numCourses given prerequisite pairs [a, b] meaning b must precede a. Sample Input: numCourses = 4, prerequisites = [[1,0],[2,0],[3,1],[3,2]]. Sample Output: [0, 2, 1, 3]. (LeetCode 210.)
A valid schedule corresponds to a topological ordering of the prerequisite graph. Kahn's algorithm repeatedly removes zero in-degree courses, while a DFS postorder (if no cycle) yields the order in reverse finishing time.
0 -> [1, 2]
1 -> [3]
2 -> [3]
3 -> []
Solution 1: Kahn's Algorithm (BFS Order)
Track the number of unmet prerequisites for every course. Whenever a course reaches zero in-degree, add it to the queue and emit it into the final order.
from collections import deque
from typing import Deque, Dict, List
def course_schedule_order_bfs(num_courses: int, prerequisites: List[List[int]]) -> List[int]:
graph: Dict[int, List[int]] = {course: [] for course in range(num_courses)}
indegree = [0] * num_courses
for nxt, pre in prerequisites:
graph[pre].append(nxt)
indegree[nxt] += 1
queue: Deque[int] = deque([course for course in range(num_courses) if indegree[course] == 0])
order: List[int] = []
while queue:
course = queue.popleft()
order.append(course)
for neighbor in graph[course]:
indegree[neighbor] -= 1
if indegree[neighbor] == 0:
queue.append(neighbor)
return order if len(order) == num_courses else []
if __name__ == "__main__":
print(course_schedule_order_bfs(4, [[1, 0], [2, 0], [3, 1], [3, 2]]))
Solution 2: DFS Postorder
Perform DFS with a recursion stack. If you never encounter a back edge (cycle), the reverse postorder of the traversal provides a valid course ordering.
from typing import List
def course_schedule_order_dfs(num_courses: int, prerequisites: List[List[int]]) -> List[int]:
graph = [[] for _ in range(num_courses)]
for nxt, pre in prerequisites:
graph[pre].append(nxt)
order: List[int] = []
state = [0] * num_courses # 0 = unvisited, 1 = visiting, 2 = visited
cycle_detected = False
def dfs(node: int) -> None:
nonlocal cycle_detected
if cycle_detected:
return
state[node] = 1
for neighbor in graph[node]:
if state[neighbor] == 0:
dfs(neighbor)
elif state[neighbor] == 1:
cycle_detected = True
return
state[node] = 2
order.append(node)
for course in range(num_courses):
if state[course] == 0:
dfs(course)
if cycle_detected:
return []
order.reverse()
return order
if __name__ == "__main__":
print(course_schedule_order_dfs(4, [[1, 0], [2, 0], [3, 1], [3, 2]]))
Given a directed acyclic graph with vertices 0 … n-1, count how many distinct paths lead from start to end. Sample Input: n = 5, edges = [[0,1],[0,2],[1,3],[2,3],[3,4]], start = 0, end = 4. Sample Output: 2.
Acyclic structure lets us perform dynamic programming in topological order. Alternatively, memoized DFS accumulates the number of paths leaving each node without recomputation.
0 -> [1, 2]
1 -> [3]
2 -> [3]
3 -> [4]
Solution 1: Topological DP
Generate a topological ordering, seed the start node with 1 path, and push counts forward along edges.
from collections import deque
from typing import List
def count_paths_dag_topo(n: int, edges: List[List[int]], start: int, end: int) -> int:
graph = [[] for _ in range(n)]
indegree = [0] * n
for u, v in edges:
graph[u].append(v)
indegree[v] += 1
queue = deque([node for node in range(n) if indegree[node] == 0])
topo: List[int] = []
while queue:
node = queue.popleft()
topo.append(node)
for neighbor in graph[node]:
indegree[neighbor] -= 1
if indegree[neighbor] == 0:
queue.append(neighbor)
dp = [0] * n
dp[start] = 1
for node in topo:
for neighbor in graph[node]:
dp[neighbor] += dp[node]
return dp[end]
if __name__ == "__main__":
print(count_paths_dag_topo(5, [[0,1],[0,2],[1,3],[2,3],[3,4]], 0, 4))
Solution 2: DFS with Memoization
Treat the number of paths from a node as a memoized subproblem. Recursing only follows edges toward the sink since the DAG prevents cycles.
from functools import lru_cache
from typing import List
def count_paths_dag_dfs(n: int, edges: List[List[int]], start: int, end: int) -> int:
graph = [[] for _ in range(n)]
for u, v in edges:
graph[u].append(v)
@lru_cache(maxsize=None)
def dfs(node: int) -> int:
if node == end:
return 1
return sum(dfs(neighbor) for neighbor in graph[node])
return dfs(start)
if __name__ == "__main__":
print(count_paths_dag_dfs(5, [[0,1],[0,2],[1,3],[2,3],[3,4]], 0, 4))
You are given n cities, flights [u, v, cost], a source, destination, and an integer k. Return the cheapest price from source to destination with at most k stops (k+1 flights). Sample Input: n = 3, flights = [[0,1,100],[1,2,100],[0,2,500]], src = 0, dst = 2, k = 1. Sample Output: 200. (LeetCode 787.)
A layered Bellman-Ford keeps track of the best costs after i flights, while a modified Dijkstra that tracks remaining stops in the state can prune high-cost paths early.
0 -> 1 (100)
1 -> 2 (100)
0 -> 2 (500)
Solution 1: Layered Bellman-Ford
Iteratively relax edges k+1 times, but copy the distance array each round so a flight taken in iteration i doesn't chain into iteration i within the same layer.
import math
from typing import List
def cheapest_flight_within_k_stops_bf(n: int, flights: List[List[int]], src: int, dst: int, k: int) -> int:
dist = [math.inf] * n
dist[src] = 0
for _ in range(k + 1):
updated = dist[:]
for u, v, cost in flights:
if dist[u] + cost < updated[v]:
updated[v] = dist[u] + cost
dist = updated
return -1 if dist[dst] == math.inf else dist[dst]
if __name__ == "__main__":
tickets = [[0,1,100],[1,2,100],[0,2,500]]
print(cheapest_flight_within_k_stops_bf(3, tickets, 0, 2, 1))
Solution 2: Dijkstra with Stop Tracking
Push states (cost, city, remaining_stops) into a min-heap and allow visiting a city multiple times with more remaining stops if it offers a cheaper route.
import heapq
import math
from typing import List, Tuple
def cheapest_flight_within_k_stops_heap(n: int, flights: List[List[int]], src: int, dst: int, k: int) -> int:
graph = [[] for _ in range(n)]
for u, v, cost in flights:
graph[u].append((v, cost))
heap: List[Tuple[int, int, int]] = [(0, src, k + 1)]
best = [[math.inf] * (k + 2) for _ in range(n)]
best[src][k + 1] = 0
while heap:
cost, city, remaining = heapq.heappop(heap)
if city == dst:
return cost
if remaining == 0:
continue
for neighbor, price in graph[city]:
new_cost = cost + price
if new_cost < best[neighbor][remaining - 1]:
best[neighbor][remaining - 1] = new_cost
heapq.heappush(heap, (new_cost, neighbor, remaining - 1))
return -1
if __name__ == "__main__":
tickets = [[0,1,100],[1,2,100],[0,2,500]]
print(cheapest_flight_within_k_stops_heap(3, tickets, 0, 2, 1))
Given a weighted directed acyclic graph and a source node, compute the minimum distance to every node. Sample Input: n = 4, edges = [[0,1,2],[0,2,4],[1,2,1],[1,3,7],[2,3,3]], source = 0. Sample Output: [0, 2, 3, 6].
Topological order allows dynamic programming in linear time. You can either relax edges in forward topological order or memoize the minimum cost for each node via DFS.
0 -> 1 (2), 2 (4)
1 -> 2 (1), 3 (7)
2 -> 3 (3)
Solution 1: Topological Relaxation
Construct a topological ordering and relax outgoing edges exactly once per node. Because the graph is acyclic, every relaxation uses finalized distances.
import math
from typing import List, Tuple
def shortest_path_dag_topo(n: int, edges: List[List[int]], source: int) -> List[int]:
graph = [[] for _ in range(n)]
indegree = [0] * n
for u, v, w in edges:
graph[u].append((v, w))
indegree[v] += 1
stack = [node for node in range(n) if indegree[node] == 0]
topo: List[int] = []
while stack:
node = stack.pop()
topo.append(node)
for neighbor, _ in graph[node]:
indegree[neighbor] -= 1
if indegree[neighbor] == 0:
stack.append(neighbor)
dist = [math.inf] * n
dist[source] = 0
for node in topo:
if dist[node] == math.inf:
continue
for neighbor, weight in graph[node]:
dist[neighbor] = min(dist[neighbor], dist[node] + weight)
return dist
if __name__ == "__main__":
edges = [[0,1,2],[0,2,4],[1,2,1],[1,3,7],[2,3,3]]
print(shortest_path_dag_topo(4, edges, 0))
Solution 2: DFS with Memoization
Reverse the graph and compute the minimum distance to the source for each node using memoized recursion, walking only along incoming edges.
import math
from functools import lru_cache
from typing import List
def shortest_path_dag_dfs(n: int, edges: List[List[int]], source: int) -> List[int]:
incoming = [[] for _ in range(n)]
for u, v, w in edges:
incoming[v].append((u, w))
@lru_cache(maxsize=None)
def best(node: int) -> float:
if node == source:
return 0.0
answer = math.inf
for prev, weight in incoming[node]:
answer = min(answer, best(prev) + weight)
return answer
return [best(node) for node in range(n)]
if __name__ == "__main__":
edges = [[0,1,2],[0,2,4],[1,2,1],[1,3,7],[2,3,3]]
print(shortest_path_dag_dfs(4, edges, 0))
For an undirected weighted graph with nodes 1 … n, count the number of restricted paths from node 1 to node n where each step strictly decreases the shortest distance to node n. Sample Input: n = 4, edges = [[1,2,3],[3,2,1],[1,3,4],[2,4,2],[3,4,2],[1,4,7]]. Sample Output: 3. (LeetCode 1786.)
All restricted paths follow decreasing shortest-distance labels, so we can first run Dijkstra from node n. Memoized DFS or a bottom-up DP over nodes sorted by distance each accumulate path counts modulo 1e9+7.
1: 2 (3), 3 (4), 4 (7)
2: 1 (3), 3 (1), 4 (2)
3: 1 (4), 2 (1), 4 (2)
4: 1 (7), 2 (2), 3 (2)
Solution 1: Dijkstra + DFS Memo
Run Dijkstra from node n, then memoize the number of restricted paths from each node by only recursing to neighbors with smaller distances.
import heapq
from typing import List
MOD = 10**9 + 7
def count_restricted_paths_dfs(n: int, edges: List[List[int]]) -> int:
graph = [[] for _ in range(n + 1)]
for u, v, w in edges:
graph[u].append((v, w))
graph[v].append((u, w))
dist = [float('inf')] * (n + 1)
dist[n] = 0
heap = [(0, n)]
while heap:
d, node = heapq.heappop(heap)
if d > dist[node]:
continue
for neighbor, weight in graph[node]:
nd = d + weight
if nd < dist[neighbor]:
dist[neighbor] = nd
heapq.heappush(heap, (nd, neighbor))
memo = [-1] * (n + 1)
def dfs(node: int) -> int:
if node == n:
return 1
if memo[node] != -1:
return memo[node]
total = 0
for neighbor, _ in graph[node]:
if dist[neighbor] < dist[node]:
total = (total + dfs(neighbor)) % MOD
memo[node] = total
return total
return dfs(1)
if __name__ == "__main__":
edges = [[1,2,3],[3,2,1],[1,3,4],[2,4,2],[3,4,2],[1,4,7]]
print(count_restricted_paths_dfs(4, edges))
Solution 2: Dijkstra + Bottom-Up DP
After computing distances, sort nodes by distance descending and accumulate path counts in that order, ensuring prerequisites are processed first.
import heapq
from typing import List
MOD = 10**9 + 7
def count_restricted_paths_dp(n: int, edges: List[List[int]]) -> int:
graph = [[] for _ in range(n + 1)]
for u, v, w in edges:
graph[u].append((v, w))
graph[v].append((u, w))
dist = [float('inf')] * (n + 1)
dist[n] = 0
heap = [(0, n)]
while heap:
d, node = heapq.heappop(heap)
if d > dist[node]:
continue
for neighbor, weight in graph[node]:
nd = d + weight
if nd < dist[neighbor]:
dist[neighbor] = nd
heapq.heappush(heap, (nd, neighbor))
order = sorted(range(1, n + 1), key=lambda node: dist[node], reverse=True)
ways = [0] * (n + 1)
ways[n] = 1
for node in order:
for neighbor, _ in graph[node]:
if dist[neighbor] < dist[node]:
ways[node] = (ways[node] + ways[neighbor]) % MOD
return ways[1]
if __name__ == "__main__":
edges = [[1,2,3],[3,2,1],[1,3,4],[2,4,2],[3,4,2],[1,4,7]]
print(count_restricted_paths_dp(4, edges))
Given a grid containing empty cells '.', walls '#', a start cell '@', keys 'a'-'f', and matching locks 'A'-'F', return the length of the shortest walk that collects every key, or -1 if it is impossible. Sample Input: grid = ["@.a..","###.#","b.A.B"]. Sample Output: 8.
BFS over grid positions augmented with a key bitmask, unlocking doors when their key bit is present. The first time you reach a state holding all keys gives the shortest path length.
@ . a . .
# # # . #
b . A . B
Solution 1: BFS over Position and Key-Bitmask States
from typing import List, Tuple
from collections import deque
def shortest_collect_all_keys(grid: List[str]) -> int:
rows, cols = len(grid), len(grid[0])
start = (0, 0)
all_keys = 0
for r in range(rows):
for c in range(cols):
cell = grid[r][c]
if cell == '@':
start = (r, c)
if 'a' <= cell <= 'f':
all_keys |= 1 << (ord(cell) - ord('a'))
queue = deque([(start[0], start[1], 0, 0)])
visited = set([(start[0], start[1], 0)])
while queue:
r, c, mask, steps = queue.popleft()
if mask == all_keys:
return steps
for dr, dc in ((1,0),(-1,0),(0,1),(0,-1)):
nr, nc = r + dr, c + dc
if not (0 <= nr < rows and 0 <= nc < cols):
continue
cell = grid[nr][nc]
if cell == '#':
continue
new_mask = mask
if 'a' <= cell <= 'f':
new_mask |= 1 << (ord(cell) - ord('a'))
if 'A' <= cell <= 'F' and not (mask & (1 << (ord(cell) - ord('A')))):
continue
state = (nr, nc, new_mask)
if state not in visited:
visited.add(state)
queue.append((nr, nc, new_mask, steps + 1))
return -1
if __name__ == "__main__":
grid = ["@.a..", "###.#", "b.A.B"]
print(shortest_collect_all_keys(grid))
Solution 2: Dijkstra on State Graph
Run Dijkstra on the position+key state space. Equal edge costs make it equivalent to BFS but the priority queue structure extends naturally to weighted variants.
import heapq
from typing import List, Tuple
def shortest_collect_all_keys_dijkstra(grid: List[str]) -> int:
rows, cols = len(grid), len(grid[0])
start = (0, 0)
all_keys = 0
for r in range(rows):
for c in range(cols):
cell = grid[r][c]
if cell == '@':
start = (r, c)
if 'a' <= cell <= 'f':
all_keys |= 1 << (ord(cell) - ord('a'))
heap: list[Tuple[int, int, int, int]] = [(0, start[0], start[1], 0)]
seen = {(start[0], start[1], 0): 0}
while heap:
steps, r, c, mask = heapq.heappop(heap)
if mask == all_keys:
return steps
if seen[(r, c, mask)] < steps:
continue
for dr, dc in ((1,0),(-1,0),(0,1),(0,-1)):
nr, nc = r + dr, c + dc
if not (0 <= nr < rows and 0 <= nc < cols):
continue
cell = grid[nr][nc]
if cell == '#':
continue
new_mask = mask
if 'a' <= cell <= 'f':
new_mask |= 1 << (ord(cell) - ord('a'))
if 'A' <= cell <= 'F' and not (mask & (1 << (ord(cell) - ord('A')))):
continue
state = (nr, nc, new_mask)
if state not in seen or steps + 1 < seen[state]:
seen[state] = steps + 1
heapq.heappush(heap, (steps + 1, nr, nc, new_mask))
return -1
if __name__ == "__main__":
grid = ["@.a..", "###.#", "b.A.B"]
print(shortest_collect_all_keys_dijkstra(grid))
Count the number of shortest paths from node 0 to node n-1 in an undirected weighted graph, modulo 1e9+7. Sample Input: n = 4, roads = [[0,1,1],[0,2,1],[1,3,1],[2,3,1]]. Sample Output: 2.
Run Dijkstra while tracking the number of shortest paths to each node. When a new shortest distance is found replace the count, and when an equal distance appears add the way count modulo 1e9+7.
0 -> 1 (1)
0 -> 2 (1)
1 -> 3 (1)
2 -> 3 (1)
Solution 1: Dijkstra with Path Counting
from typing import List
import heapq
MOD = 10**9 + 7
def count_shortest_paths(n: int, roads: List[List[int]]) -> int:
graph = [[] for _ in range(n)]
for u, v, w in roads:
graph[u].append((v, w))
graph[v].append((u, w))
dist = [float('inf')] * n
ways = [0] * n
dist[0] = 0
ways[0] = 1
heap = [(0, 0)]
while heap:
d, node = heapq.heappop(heap)
if d > dist[node]:
continue
for neighbor, weight in graph[node]:
nd = d + weight
if nd < dist[neighbor]:
dist[neighbor] = nd
ways[neighbor] = ways[node]
heapq.heappush(heap, (nd, neighbor))
elif nd == dist[neighbor]:
ways[neighbor] = (ways[neighbor] + ways[node]) % MOD
return ways[n - 1]
if __name__ == "__main__":
roads = [[0,1,1],[0,2,1],[1,3,1],[2,3,1]]
print(count_shortest_paths(4, roads))
Solution 2: Bellman-Ford with Path Counts
Run the standard relaxation V-1 times to find the shortest distances, then accumulate path counts over the shortest-path DAG in order of increasing distance. Counting inside the relaxation loop itself would double count, because the same equal-distance edge is revisited on every pass. Although slower than Dijkstra, this avoids a priority queue and keeps the relaxation process explicit.
from typing import List
MOD = 10**9 + 7
def count_shortest_paths_bellman_ford(n: int, roads: List[List[int]]) -> int:
    # Phase 1: plain Bellman-Ford relaxation (both directions of each road).
    dist = [float('inf')] * n
    dist[0] = 0
    for _ in range(n - 1):
        updated = False
        for u, v, w in roads:
            for a, b in ((u, v), (v, u)):
                if dist[a] + w < dist[b]:
                    dist[b] = dist[a] + w
                    updated = True
        if not updated:
            break
    # Phase 2: count paths along the shortest-path DAG, visiting nodes by increasing distance.
    ways = [0] * n
    ways[0] = 1
    for node in sorted(range(1, n), key=lambda x: dist[x]):
        if dist[node] == float('inf'):
            continue
        for u, v, w in roads:
            for a, b in ((u, v), (v, u)):
                if b == node and dist[a] + w == dist[node]:
                    ways[node] = (ways[node] + ways[a]) % MOD
    return ways[n - 1]
if __name__ == "__main__":
roads = [[0,1,1],[0,2,1],[1,3,1],[2,3,1]]
print(count_shortest_paths_bellman_ford(4, roads))
Determine the length (in edges) of the longest path in a directed acyclic graph. Sample Input: n = 4, edges = [[0,1],[1,2],[0,2],[2,3]]. Sample Output: 3.
Topologically order nodes and relax edges to maximize distance instead of minimize. Because there are no cycles, each node's best distance is fixed once visited.
0 -> [1, 2]
1 -> [2]
2 -> [3]
3 -> []
Solution 1: Topological Order DP
from typing import List
def longest_path_dag(n: int, edges: List[List[int]]) -> int:
graph = [[] for _ in range(n)]
indegree = [0] * n
for u, v in edges:
graph[u].append(v)
indegree[v] += 1
stack = [node for node in range(n) if indegree[node] == 0]
topo = []
while stack:
node = stack.pop()
topo.append(node)
for neighbor in graph[node]:
indegree[neighbor] -= 1
if indegree[neighbor] == 0:
stack.append(neighbor)
dp = [float('-inf')] * n
for node in topo:
if dp[node] == float('-inf'):
dp[node] = 0
for neighbor in graph[node]:
dp[neighbor] = max(dp[neighbor], dp[node] + 1)
return max(dp)
if __name__ == "__main__":
edges = [[0,1],[1,2],[0,2],[2,3]]
print(longest_path_dag(4, edges))
Solution 2: DFS Memoization
Use depth-first search with memoisation to compute the longest path starting from each node.
from functools import lru_cache
from typing import List
def longest_path_dag_dfs(n: int, edges: List[List[int]]) -> int:
graph = [[] for _ in range(n)]
for u, v in edges:
graph[u].append(v)
@lru_cache(maxsize=None)
def dfs(node: int) -> int:
best = 0
for neighbor in graph[node]:
best = max(best, 1 + dfs(neighbor))
return best
return max(dfs(node) for node in range(n))
if __name__ == "__main__":
edges = [[0,1],[1,2],[0,2],[2,3]]
print(longest_path_dag_dfs(4, edges))
Find the minimal path sum from top-left to bottom-right in a grid. Sample Input: grid = [[1,3,1],[1,5,1],[4,2,1]]. Sample Output: 7.
Fill a DP grid where each cell stores its value plus the minimum of the top or left neighbor. This dynamic programming accumulates the cheapest cost to reach the bottom-right.
1 3 1
1 5 1
4 2 1
Solution 1: In-Place Grid DP
from typing import List
def min_path_sum(grid: List[List[int]]) -> int:
rows, cols = len(grid), len(grid[0])
for r in range(rows):
for c in range(cols):
if r == 0 and c == 0:
continue
top = grid[r - 1][c] if r > 0 else float('inf')
left = grid[r][c - 1] if c > 0 else float('inf')
grid[r][c] += min(top, left)
return grid[-1][-1]
if __name__ == "__main__":
matrix = [[1,3,1],[1,5,1],[4,2,1]]
print(min_path_sum(matrix))
Solution 2: Dijkstra on Grid
Treat grid cells as graph nodes and run Dijkstra with edge weights equal to cell values. This version generalises when movement costs vary.
import heapq
from typing import List, Tuple
def min_path_sum_dijkstra(grid: List[List[int]]) -> int:
rows, cols = len(grid), len(grid[0])
dist = [[float('inf')] * cols for _ in range(rows)]
dist[0][0] = grid[0][0]
heap: List[Tuple[int, int, int]] = [(grid[0][0], 0, 0)]
while heap:
cost, r, c = heapq.heappop(heap)
if cost > dist[r][c]:
continue
if (r, c) == (rows - 1, cols - 1):
return cost
for dr, dc in ((1,0),(0,1)):
nr, nc = r + dr, c + dc
if nr < rows and nc < cols:
nd = cost + grid[nr][nc]
if nd < dist[nr][nc]:
dist[nr][nc] = nd
heapq.heappush(heap, (nd, nr, nc))
return dist[-1][-1]
if __name__ == "__main__":
grid = [[1,3,1],[1,5,1],[4,2,1]]
print(min_path_sum_dijkstra(grid))
Enumerate every path from start to end within a DAG. Sample Input: graph = [[1,2],[3],[3],[]], start=0, end=3. Sample Output: [[0, 1, 3], [0, 2, 3]].
Run DFS with backtracking, appending nodes to the current path and recording when you hit the target. The DAG guarantee avoids cycles so recursion enumerates every path safely.
0 -> [1, 2]
1 -> [3]
2 -> [3]
3 -> []
Solution 1: DFS with Memoized Paths
from typing import List
def all_paths_dag(graph: List[List[int]], start: int, end: int) -> List[List[int]]:
memo = {}
def dfs(node: int) -> List[List[int]]:
if node == end:
return [[end]]
if node in memo:
return memo[node]
paths = []
for neighbor in graph[node]:
for path in dfs(neighbor):
paths.append([node] + path)
memo[node] = paths
return paths
return dfs(start)
if __name__ == "__main__":
dag = [[1,2],[3],[3],[]]
print(all_paths_dag(dag, 0, 3))
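Solution 1 above memoizes the list of paths leaving each node rather than backtracking explicitly. For reference, a minimal sketch of the plain backtracking enumeration described in the intro (all_paths_dag_backtracking is an illustrative name):
from typing import List
def all_paths_dag_backtracking(graph: List[List[int]], start: int, end: int) -> List[List[int]]:
    result: List[List[int]] = []
    path = [start]
    def dfs(node: int) -> None:
        if node == end:
            result.append(path[:])  # record a copy of the completed path
            return
        for neighbor in graph[node]:
            path.append(neighbor)
            dfs(neighbor)
            path.pop()  # backtrack so other branches can reuse the prefix
    dfs(start)
    return result
if __name__ == "__main__":
    print(all_paths_dag_backtracking([[1, 2], [3], [3], []], 0, 3))  # [[0, 1, 3], [0, 2, 3]]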
Solution 2: BFS Enumeration
Perform breadth-first search that carries the partial path and extends it to outgoing neighbors.
from collections import deque
from typing import List
def all_paths_dag_bfs(graph: List[List[int]]) -> List[List[int]]:
target = len(graph) - 1
queue = deque([[0]])
result = []
while queue:
path = queue.popleft()
node = path[-1]
if node == target:
result.append(path[:])
continue
for neighbor in graph[node]:
queue.append(path + [neighbor])
return result
if __name__ == "__main__":
graph = [[1,2], [3], [3], []]
print(all_paths_dag_bfs(graph))
Generate every permutation of a list of numbers. Sample Input: [1,2,3]. Sample Output: [[1,2,3],[1,3,2],[2,1,3],...].
Use backtracking to build orderings: either mark chosen elements in a used array or, as in the solution below, swap each candidate into the current position and recurse on the remaining suffix. Undoing each choice on the way back explores every ordering exactly once.
Solution 1: Backtracking with In-Place Swaps
from typing import List
def permutations(nums: List[int]) -> List[List[int]]:
result: List[List[int]] = []
def backtrack(start: int) -> None:
if start == len(nums):
result.append(nums[:])
return
for index in range(start, len(nums)):
nums[start], nums[index] = nums[index], nums[start]
backtrack(start + 1)
nums[start], nums[index] = nums[index], nums[start]
backtrack(0)
return result
if __name__ == "__main__":
print(permutations([1, 2, 3]))
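For completeness, a minimal sketch of the used-array variant mentioned above (permutations_used_array is an illustrative name); it yields the same orderings, generated in index order rather than swap order:
from typing import List
def permutations_used_array(nums: List[int]) -> List[List[int]]:
    result: List[List[int]] = []
    used = [False] * len(nums)
    path: List[int] = []
    def backtrack() -> None:
        if len(path) == len(nums):
            result.append(path[:])
            return
        for index, value in enumerate(nums):
            if used[index]:
                continue  # value already placed earlier in the current ordering
            used[index] = True
            path.append(value)
            backtrack()
            path.pop()  # undo the choice before trying the next value
            used[index] = False
    backtrack()
    return result
if __name__ == "__main__":
    print(permutations_used_array([1, 2, 3]))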
Solution 2: itertools.permutations
Use Python's itertools.permutations to generate permutations lazily.
from itertools import permutations
from typing import List
def permutations_itertools(nums: List[int]) -> List[List[int]]:
return [list(p) for p in permutations(nums)]
if __name__ == "__main__":
print(permutations_itertools([1, 2, 3]))
Generate every subset of a list of numbers. Sample Input: [1,2,3]. Sample Output: [[],[3],[2],[2,3],[1],...].
Recurse by either skipping or taking each element; once every element has been decided, record the assembled subset. The recursion tree therefore covers all possible subsets.
Solution 1: Include/Exclude Backtracking
from typing import List
def subsets(nums: List[int]) -> List[List[int]]:
result: List[List[int]] = []
def backtrack(index: int, path: List[int]) -> None:
if index == len(nums):
result.append(path[:])
return
backtrack(index + 1, path)
path.append(nums[index])
backtrack(index + 1, path)
path.pop()
backtrack(0, [])
return result
if __name__ == "__main__":
print(subsets([1, 2, 3]))
Solution 2: Iterative Expansion
Start with the empty subset and append each number to existing subsets to build new ones.
from typing import List
def subsets_iterative(nums: List[int]) -> List[List[int]]:
result = [[]]
for num in nums:
result += [subset + [num] for subset in result]
return result
if __name__ == "__main__":
print(subsets_iterative([1, 2, 3]))
Generate all k-length combinations from the numbers 1 through n. Sample Input: n = 4, k = 2. Sample Output: [[1,2],[1,3],[1,4],[2,3],[2,4],[3,4]].
Build combinations by growing a list with increasing indices until you reach size k. Advancing the start index ensures combinations are unique and lexicographically ordered.
Solution 1: Backtracking with a Start Index
from typing import List
def combinations(n: int, k: int) -> List[List[int]]:
result: List[List[int]] = []
def backtrack(start: int, path: List[int]) -> None:
if len(path) == k:
result.append(path[:])
return
for number in range(start, n + 1):
path.append(number)
backtrack(number + 1, path)
path.pop()
backtrack(1, [])
return result
if __name__ == "__main__":
print(combinations(4, 2))
Solution 2: itertools.combinations
Leverage itertools.combinations to generate combinations directly.
from itertools import combinations
from typing import List
def combinations_itertools(n: int, k: int) -> List[List[int]]:
return [list(combo) for combo in combinations(range(1, n + 1), k)]
if __name__ == "__main__":
print(combinations_itertools(4, 2))
Place N queens on an N×N board so none attack each other. Sample Input: n = 4. Sample Output: 2.
Place queens row by row while tracking occupied columns and diagonals in sets. Conflict checks prune impossible placements so backtracking explores only valid boards.
Solution 1: Row-by-Row Backtracking with Constraint Sets
from typing import List
def solve_n_queens(n: int) -> List[List[str]]:
solutions: List[List[str]] = []
columns = set()
diagonals = set()
anti_diagonals = set()
board = [['.'] * n for _ in range(n)]
def backtrack(row: int) -> None:
if row == n:
solutions.append([''.join(r) for r in board])
return
for col in range(n):
if col in columns or (row - col) in diagonals or (row + col) in anti_diagonals:
continue
columns.add(col)
diagonals.add(row - col)
anti_diagonals.add(row + col)
board[row][col] = 'Q'
backtrack(row + 1)
columns.remove(col)
diagonals.remove(row - col)
anti_diagonals.remove(row + col)
board[row][col] = '.'
backtrack(0)
return solutions
if __name__ == "__main__":
print(len(solve_n_queens(4)))
Solution 2: Permutation Columns
Assign each queen to a distinct column using permutations and filter placements that violate diagonals.
from itertools import permutations
from typing import List
def solve_n_queens_permutation(n: int) -> List[List[str]]:
solutions: List[List[str]] = []
for cols in permutations(range(n)):
if all(abs(cols[i] - cols[j]) != j - i for i in range(n) for j in range(i + 1, n)):
board = []
for row in range(n):
row_str = ['.'] * n
row_str[cols[row]] = 'Q'
board.append(''.join(row_str))
solutions.append(board)
return solutions
if __name__ == "__main__":
print(len(solve_n_queens_permutation(4)))
Determine if a word exists in a grid via sequential adjacent cells. Sample Input: board, word = "ABCCED". Sample Output: True.
Depth-first search from each cell while marking it as visited for the current path. Backtracking unmarks the cell so other paths can reuse it.
Solution 1: DFS with In-Place Marking
from typing import List
def exist(board: List[List[str]], word: str) -> bool:
rows, cols = len(board), len(board[0])
def dfs(row: int, col: int, index: int) -> bool:
if index == len(word):
return True
if row < 0 or col < 0 or row >= rows or col >= cols or board[row][col] != word[index]:
return False
temp = board[row][col]
board[row][col] = '#'
found = any(dfs(row + dr, col + dc, index + 1) for dr, dc in ((1,0),(-1,0),(0,1),(0,-1)))
board[row][col] = temp
return found
for r in range(rows):
for c in range(cols):
if dfs(r, c, 0):
return True
return False
if __name__ == "__main__":
grid = [
['A','B','C','E'],
['S','F','C','S'],
['A','D','E','E'],
]
print(exist(grid, "ABCCED"))
Solution 2: Trie Guided DFS
Build a trie of the target word to prune invalid paths early, reducing branching.
from typing import List
class TrieNode:
def __init__(self) -> None:
self.children: dict[str, "TrieNode"] = {}
self.is_end = False
def exist_with_trie(board: List[List[str]], word: str) -> bool:
root = TrieNode()
node = root
for char in word:
node = node.children.setdefault(char, TrieNode())
node.is_end = True
rows, cols = len(board), len(board[0])
def dfs(r: int, c: int, node: TrieNode, visited: set) -> bool:
if node.is_end:
return True
for dr, dc in ((1,0),(-1,0),(0,1),(0,-1)):
nr, nc = r + dr, c + dc
if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in visited:
ch = board[nr][nc]
if ch in node.children:
visited.add((nr, nc))
if dfs(nr, nc, node.children[ch], visited):
return True
visited.remove((nr, nc))
return False
for r in range(rows):
for c in range(cols):
ch = board[r][c]
if ch in root.children:
if dfs(r, c, root.children[ch], {(r, c)}):
return True
return False
if __name__ == "__main__":
board = [
['A','B','C','E'],
['S','F','C','S'],
['A','D','E','E'],
]
print(exist_with_trie(board, "ABCCED"))
Find all combinations that sum to the target using unlimited repetitions. Sample Input: candidates=[2,3,6,7], target=7. Sample Output: [[2, 2, 3], [7]].
Sort candidates and recursively add numbers while the remaining target is non-negative. Allowing repeated use of a number yet never stepping backward prevents duplicate combinations.
Solution 1: Sorted Backtracking with Pruning
from typing import List
def combination_sum(candidates: List[int], target: int) -> List[List[int]]:
candidates.sort()
result: List[List[int]] = []
def backtrack(start: int, remaining: int, path: List[int]) -> None:
if remaining == 0:
result.append(path[:])
return
for index in range(start, len(candidates)):
number = candidates[index]
if number > remaining:
break
path.append(number)
backtrack(index, remaining - number, path)
path.pop()
backtrack(0, target, [])
return result
if __name__ == "__main__":
print(combination_sum([2, 3, 6, 7], 7))
Solution 2: Iterative BFS
Use a queue to expand combination candidates level by level, ensuring non-decreasing order to avoid duplicates.
from collections import deque
from typing import List
def combination_sum_bfs(candidates: List[int], target: int) -> List[List[int]]:
candidates.sort()
queue = deque([([], 0, 0)])
result: List[List[int]] = []
while queue:
combination, start, total = queue.popleft()
if total == target:
result.append(combination)
continue
for index in range(start, len(candidates)):
value = candidates[index]
new_total = total + value
if new_total > target:
break
queue.append((combination + [value], index, new_total))
return result
if __name__ == "__main__":
print(combination_sum_bfs([2,3,6,7], 7))
Produce all letter combinations that a digit string could represent. Sample Input: digits = "23". Sample Output: ['ad','ae','af','bd','be','bf','cd','ce','cf'].
Map digits to letter sets and build strings by appending each possible letter for the current digit. When you reach the end of the digits you record the assembled combination.
Solution 1: Backtracking over Digits
from typing import List
def letter_combinations(digits: str) -> List[str]:
if not digits:
return []
mapping = {
'2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
'6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz'
}
result: List[str] = []
def backtrack(index: int, path: List[str]) -> None:
if index == len(digits):
result.append(''.join(path))
return
for char in mapping[digits[index]]:
path.append(char)
backtrack(index + 1, path)
path.pop()
backtrack(0, [])
return result
if __name__ == "__main__":
print(letter_combinations("23"))
Solution 2: Iterative Build
Iteratively build combinations by appending letters for each digit.
from typing import List
KEYPAD = {
'2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
'6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz'
}
def letter_combinations_iterative(digits: str) -> List[str]:
if not digits:
return []
result = ['']
for digit in digits:
result = [prefix + char for prefix in result for char in KEYPAD[digit]]
return result
if __name__ == "__main__":
print(letter_combinations_iterative("23"))
Fill a 9×9 Sudoku grid so each row, column, and subgrid is valid. Sample Input: partially filled 9x9 grid. Sample Output: solved grid rows.
Search for the next empty cell and try digits that obey row, column, and subgrid constraints. If a placement fails later you backtrack, eventually filling the board when a consistent assignment is found.
Solution 1: Backtracking with Row, Column, and Box Sets
from typing import List
def solve_sudoku(board: List[List[str]]) -> None:
rows = [set() for _ in range(9)]
cols = [set() for _ in range(9)]
boxes = [set() for _ in range(9)]
empties = []
for r in range(9):
for c in range(9):
value = board[r][c]
if value == '.':
empties.append((r, c))
else:
rows[r].add(value)
cols[c].add(value)
boxes[(r // 3) * 3 + (c // 3)].add(value)
def backtrack(index: int) -> bool:
if index == len(empties):
return True
r, c = empties[index]
b = (r // 3) * 3 + (c // 3)
for digit in map(str, range(1, 10)):
if digit not in rows[r] and digit not in cols[c] and digit not in boxes[b]:
board[r][c] = digit
rows[r].add(digit)
cols[c].add(digit)
boxes[b].add(digit)
if backtrack(index + 1):
return True
rows[r].remove(digit)
cols[c].remove(digit)
boxes[b].remove(digit)
board[r][c] = '.'
return False
backtrack(0)
if __name__ == "__main__":
puzzle = [
list("53..7...."),
list("6..195..."),
list(".98....6."),
list("8...6...3"),
list("4..8.3..1"),
list("7...2...6"),
list(".6....28."),
list("...419..5"),
list("....8..79"),
]
solve_sudoku(puzzle)
for row in puzzle:
print(''.join(row))
Solution 2: Bitmask Heuristics
Use bitmask representations for rows, columns, and boxes to accelerate search and choose the next cell with the fewest candidates.
from typing import List
def solve_sudoku_bitmask(board: List[List[str]]) -> None:
row_masks = [0] * 9
col_masks = [0] * 9
box_masks = [0] * 9
empties = []
def bit(d: int) -> int:
return 1 << d
for r in range(9):
for c in range(9):
if board[r][c] == '.':
empties.append((r, c))
else:
value = int(board[r][c]) - 1
mask = bit(value)
row_masks[r] |= mask
col_masks[c] |= mask
box_masks[(r // 3) * 3 + c // 3] |= mask
def dfs(index: int) -> bool:
if index == len(empties):
return True
# choose the cell with fewest options
min_idx = index
min_options = 10
for i in range(index, len(empties)):
r, c = empties[i]
used = row_masks[r] | col_masks[c] | box_masks[(r // 3) * 3 + c // 3]
options = 9 - bin(used).count('1')
if options < min_options:
min_options = options
min_idx = i
empties[index], empties[min_idx] = empties[min_idx], empties[index]
r, c = empties[index]
used = row_masks[r] | col_masks[c] | box_masks[(r // 3) * 3 + c // 3]
for digit in range(9):
mask = bit(digit)
if not (used & mask):
board[r][c] = str(digit + 1)
row_masks[r] |= mask
col_masks[c] |= mask
box_masks[(r // 3) * 3 + c // 3] |= mask
if dfs(index + 1):
return True
row_masks[r] &= ~mask
col_masks[c] &= ~mask
box_masks[(r // 3) * 3 + c // 3] &= ~mask
board[r][c] = '.'
return False
dfs(0)
if __name__ == "__main__":
puzzle = [
list("53..7...."),
list("6..195..."),
list(".98....6."),
list("8...6...3"),
list("4..8.3..1"),
list("7...2...6"),
list(".6....28."),
list("...419..5"),
list("....8..79"),
]
solve_sudoku_bitmask(puzzle)
for row in puzzle:
print(''.join(row))
Count distinct ways to climb a staircase by taking 1 or 2 steps. Sample Input: n = 5. Sample Output: 8.
Use the Fibonacci-style recurrence ways[n] = ways[n-1] + ways[n-2]. Building from base cases counts the ways to reach each step.
n = 5
dp: [1,1,2,3,5,8]
Solution 1: Primary Approach
def climb_stairs(n: int) -> int:
if n <= 2:
return n
first, second = 1, 2
for _ in range(3, n + 1):
first, second = second, first + second
return second
if __name__ == "__main__":
print(climb_stairs(5))
Solution 2: Memoised Recursion
Top-down recursion with memoisation mirrors the Fibonacci recurrence directly.
from functools import lru_cache
def climb_stairs_memo(n: int) -> int:
@lru_cache(maxsize=None)
def dfs(steps: int) -> int:
if steps <= 2:
return steps
return dfs(steps - 1) + dfs(steps - 2)
return dfs(n)
if __name__ == "__main__":
print(climb_stairs_memo(5))
Maximize value by choosing items within a weight capacity. Sample Input: weights=[2,3,4,5], values=[3,4,5,6], capacity=5. Sample Output: 7.
Iterate items and capacities backwards so each item is either taken once or skipped. Updating dp[capacity] = max(dp[capacity], dp[capacity-weight] + value) captures the best value for each weight.
Weights: [2,3,4,5]
Values : [3,4,5,6]
Capacity: 5
Solution 1: Primary Approach
from typing import List
def knapsack(weights: List[int], values: List[int], capacity: int) -> int:
n = len(weights)
dp = [[0] * (capacity + 1) for _ in range(n + 1)]
for i in range(1, n + 1):
for w in range(capacity + 1):
dp[i][w] = dp[i - 1][w]
if weights[i - 1] <= w:
dp[i][w] = max(dp[i][w], values[i - 1] + dp[i - 1][w - weights[i - 1]])
return dp[n][capacity]
if __name__ == "__main__":
print(knapsack([2, 3, 4, 5], [3, 4, 5, 6], 5))
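The explanation above describes a space-optimised variant that sweeps capacities backwards over a single array; a minimal hedged sketch of that form (same recurrence as the table version; the function name is illustrative):
from typing import List
def knapsack_1d(weights: List[int], values: List[int], capacity: int) -> int:
    # dp[w] = best value achievable within total weight w
    dp = [0] * (capacity + 1)
    for weight, value in zip(weights, values):
        # go backwards so each item is counted at most once
        for w in range(capacity, weight - 1, -1):
            dp[w] = max(dp[w], dp[w - weight] + value)
    return dp[capacity]
if __name__ == "__main__":
    print(knapsack_1d([2, 3, 4, 5], [3, 4, 5, 6], 5))  # 7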
Solution 2: Recursive Memo
Recursive exploration with memoisation on (index, capacity) provides a clear alternative viewpoint.
from functools import lru_cache
from typing import List
def knapsack_recursive(weights: List[int], values: List[int], capacity: int) -> int:
@lru_cache(maxsize=None)
def dfs(index: int, remaining: int) -> int:
if index == len(weights) or remaining == 0:
return 0
best = dfs(index + 1, remaining)
if weights[index] <= remaining:
best = max(best, values[index] + dfs(index + 1, remaining - weights[index]))
return best
return dfs(0, capacity)
if __name__ == "__main__":
print(knapsack_recursive([1,3,4], [15,20,30], 4))
Find the fewest coins needed to make up a target amount. Sample Input: coins=[1,2,5], amount=11. Sample Output: 3.
Initialize dp with infinity and relax each coin by taking dp[amount - coin] + 1. Iterating coins outermost allows unlimited use while keeping time O(n * amount).
Coins : [1,2,5]
Amount: 11
dp: [0,1,1,2,2,1,2,2,3,3,2,3]
Solution 1: Primary Approach
from typing import List
def coin_change(coins: List[int], amount: int) -> int:
dp = [float('inf')] * (amount + 1)
dp[0] = 0
for coin in coins:
for value in range(coin, amount + 1):
dp[value] = min(dp[value], dp[value - coin] + 1)
return dp[amount] if dp[amount] != float('inf') else -1
if __name__ == "__main__":
print(coin_change([1, 2, 5], 11))
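To make the relaxation step concrete, here is a hedged sketch that also records which coin last improved each dp entry so one optimal combination can be read back (the choice array and function name are illustrative additions, not part of the original):
from typing import List
def coin_change_with_plan(coins: List[int], amount: int) -> List[int]:
    dp = [float('inf')] * (amount + 1)
    choice = [-1] * (amount + 1)  # coin used in the best way to reach each value
    dp[0] = 0
    for coin in coins:
        for value in range(coin, amount + 1):
            if dp[value - coin] + 1 < dp[value]:
                dp[value] = dp[value - coin] + 1
                choice[value] = coin
    if dp[amount] == float('inf'):
        return []
    plan = []
    value = amount
    while value > 0:
        plan.append(choice[value])
        value -= choice[value]
    return plan
if __name__ == "__main__":
    print(coin_change_with_plan([1, 2, 5], 11))  # [5, 5, 1]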
Solution 2: BFS Levels
Treat each coin subtraction as an edge and run BFS, where the first time you reach zero coins marks the minimum number of steps.
from collections import deque
from typing import List
def coin_change_bfs(coins: List[int], amount: int) -> int:
queue = deque([(amount, 0)])
visited = {amount}
while queue:
remaining, steps = queue.popleft()
if remaining == 0:
return steps
for coin in coins:
nxt = remaining - coin
if nxt >= 0 and nxt not in visited:
visited.add(nxt)
queue.append((nxt, steps + 1))
return -1
if __name__ == "__main__":
print(coin_change_bfs([1, 2, 5], 11))
Maximize robbery amount with no two adjacent houses robbed. Sample Input: [2,7,9,3,1]. Sample Output: 12.
Track two rolling variables: robbing the current house plus the best two houses back, or skipping it to take the previous best. Taking the maximum at each step yields the optimal loot.
Houses: [2,7,9,3,1]
dp: [2,7,11,11,12]
Solution 1: Primary Approach
from typing import List
def house_robber(nums: List[int]) -> int:
previous, current = 0, 0
for value in nums:
previous, current = current, max(current, previous + value)
return current
if __name__ == "__main__":
print(house_robber([2, 7, 9, 3, 1]))
Solution 2: Memoised Recursion
Recursively decide per house whether to rob it or skip it, caching intermediate states.
from functools import lru_cache
from typing import List
def house_robber_memo(nums: List[int]) -> int:
@lru_cache(maxsize=None)
def dfs(index: int) -> int:
if index >= len(nums):
return 0
return max(nums[index] + dfs(index + 2), dfs(index + 1))
return dfs(0)
if __name__ == "__main__":
print(house_robber_memo([1,2,3,1]))
Maximize robbery on circularly arranged houses without adjacent robberies. Sample Input: [1,2,3,1]. Sample Output: 4.
Since houses form a circle, solve two linear robberies excluding either the first or the last house. The better of those two cases respects the circular adjacency constraint.
Circle: [1,2,3,1]
case1 dp: [1,2,4]
case2 dp: [2,3,3]
Solution 1: Primary Approach
from typing import List
def house_robber_circle(nums: List[int]) -> int:
if len(nums) == 1:
return nums[0]
def rob(line: List[int]) -> int:
previous, current = 0, 0
for value in line:
previous, current = current, max(current, previous + value)
return current
return max(rob(nums[:-1]), rob(nums[1:]))
if __name__ == "__main__":
print(house_robber_circle([1, 2, 3, 1]))
Solution 2: Split into Two Cases
Evaluate the best result on the ranges [0, n-2] and [1, n-1] separately using the linear robber DP.
from typing import List
def house_robber_circle_cases(nums: List[int]) -> int:
if len(nums) == 1:
return nums[0]
def rob_linear(array: List[int]) -> int:
prev = curr = 0
for value in array:
prev, curr = curr, max(curr, prev + value)
return curr
return max(rob_linear(nums[:-1]), rob_linear(nums[1:]))
if __name__ == "__main__":
    print(house_robber_circle_cases([2,3,2]))
Compute the length of the longest subsequence common to two strings. Sample Input: text1="abcde", text2="ace". Sample Output: 3.
Fill a DP table where matches extend the diagonal value and mismatches take the max of neighboring cells. Filling the table from the ends of both strings backwards, dp[0][0] ends up holding the LCS length.
text1: abcde
text2: ace
LCS matrix (prefix lengths)
Solution 1: Primary Approach
def longest_common_subsequence(text1: str, text2: str) -> int:
m, n = len(text1), len(text2)
dp = [[0] * (n + 1) for _ in range(m + 1)]
for i in range(m - 1, -1, -1):
for j in range(n - 1, -1, -1):
if text1[i] == text2[j]:
dp[i][j] = 1 + dp[i + 1][j + 1]
else:
dp[i][j] = max(dp[i + 1][j], dp[i][j + 1])
return dp[0][0]
if __name__ == "__main__":
print(longest_common_subsequence("abcde", "ace"))
Solution 2: Memoised Recursion
Use recursion with memoisation to match characters or skip them, mirroring the DP definition.
from functools import lru_cache
def longest_common_subsequence_memo(text1: str, text2: str) -> int:
@lru_cache(maxsize=None)
def dfs(i: int, j: int) -> int:
if i == len(text1) or j == len(text2):
return 0
if text1[i] == text2[j]:
return 1 + dfs(i + 1, j + 1)
return max(dfs(i + 1, j), dfs(i, j + 1))
return dfs(0, 0)
if __name__ == "__main__":
print(longest_common_subsequence_memo("abcde", "ace"))
Find the longest palindromic substring in a given string. Sample Input: "babad". Sample Output: "bab".
Expand around every center (single character and between characters) and track the longest symmetric window found. Center expansion works because palindromes mirror around their center.
String: babad
Centers -> best: 'bab'
Solution 1: Primary Approach
def longest_palindrome(text: str) -> str:
if len(text) < 2:
return text
def expand(left: int, right: int) -> str:
while left >= 0 and right < len(text) and text[left] == text[right]:
left -= 1
right += 1
return text[left + 1:right]
best = ""
for index in range(len(text)):
best = max(best, expand(index, index), expand(index, index + 1), key=len)
return best
if __name__ == "__main__":
print(longest_palindrome("babad"))
Solution 2: Expand Around Center
A variant of the same center expansion that tracks start/end indices instead of building substrings, still O(n²) time with O(1) extra memory.
from typing import Tuple
def longest_palindrome_expand(text: str) -> str:
if not text:
return ""
def expand(left: int, right: int) -> Tuple[int, int]:
while left >= 0 and right < len(text) and text[left] == text[right]:
left -= 1
right += 1
return left + 1, right
best = (0, 1)
for i in range(len(text)):
best = max(best, expand(i, i), expand(i, i + 1), key=lambda s: s[1] - s[0])
return text[best[0]:best[1]]
if __name__ == "__main__":
print(longest_palindrome_expand("babad"))
Find the contiguous subarray with the largest sum. Sample Input: [-2,1,-3,4,-1,2,1,-5,4]. Sample Output: 6.
Maintain a running sum that resets to the current value when it becomes negative. Tracking the best sum seen yields the maximum subarray total in linear time.
Array: [-2,1,-3,4,-1,2,1,-5,4]
max sum = 6
Solution 1: Primary Approach
from typing import List
def max_subarray(nums: List[int]) -> int:
best = current = nums[0]
for number in nums[1:]:
current = max(number, current + number)
best = max(best, current)
return best
if __name__ == "__main__":
print(max_subarray([-2,1,-3,4,-1,2,1,-5,4]))
Solution 2: Divide and Conquer
Split the array and compute the best subarray in the left half, right half, or crossing the midpoint.
from typing import List
def max_subarray_divide(nums: List[int]) -> int:
def helper(left: int, right: int) -> int:
if left == right:
return nums[left]
mid = (left + right) // 2
best_left = helper(left, mid)
best_right = helper(mid + 1, right)
left_sum = curr = nums[mid]
for i in range(mid - 1, left - 1, -1):
curr += nums[i]
left_sum = max(left_sum, curr)
right_sum = curr = nums[mid + 1]
for i in range(mid + 2, right + 1):
curr += nums[i]
right_sum = max(right_sum, curr)
best_cross = left_sum + right_sum
return max(best_left, best_right, best_cross)
return helper(0, len(nums) - 1)
if __name__ == "__main__":
print(max_subarray_divide([-2,1,-3,4,-1,2,1,-5,4]))
Compute the minimum edits needed to transform one string into another. Sample Input: word1="horse", word2="ros". Sample Output: 3.
Populate a matrix where each cell represents the edit distance between prefixes, using insertion, deletion, and replacement transitions. The final cell spells out the minimum edits needed.
word1: horse
word2: ros
dp[5][3] = 3 (table of size 6 x 4)
Solution 1: Primary Approach
def edit_distance(word1: str, word2: str) -> int:
m, n = len(word1), len(word2)
dp = [[0] * (n + 1) for _ in range(m + 1)]
for i in range(m + 1):
dp[i][0] = i
for j in range(n + 1):
dp[0][j] = j
for i in range(1, m + 1):
for j in range(1, n + 1):
if word1[i - 1] == word2[j - 1]:
dp[i][j] = dp[i - 1][j - 1]
else:
dp[i][j] = 1 + min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1])
return dp[m][n]
if __name__ == "__main__":
print(edit_distance("horse", "ros"))
Solution 2: Memoised Recursion
Recursively align prefixes of the two strings while caching results to avoid recomputation.
from functools import lru_cache
def edit_distance_memo(word1: str, word2: str) -> int:
@lru_cache(maxsize=None)
def dfs(i: int, j: int) -> int:
if i == len(word1):
return len(word2) - j
if j == len(word2):
return len(word1) - i
if word1[i] == word2[j]:
return dfs(i + 1, j + 1)
return 1 + min(
dfs(i + 1, j), # delete
dfs(i, j + 1), # insert
dfs(i + 1, j + 1), # replace
)
return dfs(0, 0)
if __name__ == "__main__":
print(edit_distance_memo("horse", "ros"))
Find the area of the largest square of ones in a binary matrix. Sample Input: binary grid. Sample Output: 4.
Use DP that looks at the top, left, and top-left neighbors; if the current cell is one, it grows the square by one plus the minimum of those neighbors. Squaring the size gives the area of the maximal square.
Matrix:
1 0 1 0 0
1 0 1 1 1
1 1 1 1 1
1 0 0 1 0
Solution 1: Primary Approach
from typing import List
def maximal_square(matrix: List[List[str]]) -> int:
if not matrix:
return 0
rows, cols = len(matrix), len(matrix[0])
dp = [[0] * (cols + 1) for _ in range(rows + 1)]
best = 0
for i in range(1, rows + 1):
for j in range(1, cols + 1):
if matrix[i - 1][j - 1] == '1':
dp[i][j] = 1 + min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1])
best = max(best, dp[i][j])
return best * best
if __name__ == "__main__":
grid = [list("10100"), list("10111"), list("11111"), list("10010")]
print(maximal_square(grid))
Solution 2: Prefix Sum Check
Compute a prefix sum matrix and binary search the square size, validating each candidate via area checks.
from typing import List
def maximal_square_prefix(matrix: List[List[str]]) -> int:
if not matrix:
return 0
rows, cols = len(matrix), len(matrix[0])
prefix = [[0] * (cols + 1) for _ in range(rows + 1)]
for r in range(rows):
for c in range(cols):
prefix[r + 1][c + 1] = (
prefix[r][c + 1] + prefix[r + 1][c] - prefix[r][c] + int(matrix[r][c])
)
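    # binary search on the side length is valid: an all-ones square of size s contains all-ones squares of every smaller size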
def can(size: int) -> bool:
for r in range(size, rows + 1):
for c in range(size, cols + 1):
total = prefix[r][c] - prefix[r - size][c] - prefix[r][c - size] + prefix[r - size][c - size]
if total == size * size:
return True
return False
left, right = 0, min(rows, cols)
while left < right:
mid = (left + right + 1) // 2
if can(mid):
left = mid
else:
right = mid - 1
return left * left
if __name__ == "__main__":
grid = [["1","0","1","0","0"],["1","0","1","1","1"],["1","1","1","1","1"],["1","0","0","1","0"]]
print(maximal_square_prefix(grid))
Determine if you can reach the last index given jump lengths. Sample Input: [2,3,1,1,4]. Sample Output: True.
Track the farthest reachable index while iterating and fail if you encounter an index beyond it. If you finish scanning with reach covering the last index, the jump is possible.
Nums: [2,3,1,1,4]
Reachable indices expand
Solution 1: Primary Approach
from typing import List
def can_jump(nums: List[int]) -> bool:
farthest = 0
for index, value in enumerate(nums):
if index > farthest:
return False
farthest = max(farthest, index + value)
return True
if __name__ == "__main__":
print(can_jump([2, 3, 1, 1, 4]))
Solution 2: Greedy Farther Jump
Walk the array while tracking the farthest index reachable so far.
from typing import List
def can_jump_greedy(nums: List[int]) -> bool:
far = 0
for index, value in enumerate(nums):
if index > far:
return False
far = max(far, index + value)
return True
if __name__ == "__main__":
print(can_jump_greedy([2,3,1,1,4]))
Compute the sum of distances from each node to all others in the tree. Sample Input: n=6 edges tree. Sample Output: [8,12,6,10,10,10].
Do a postorder traversal to compute subtree sizes and the distance sum from the root, then reroot to adjust distances for each child. Combining the two passes yields results for all nodes.
      0
     / \
    1   2
       /|\
      3 4 5
Solution 1: Primary Approach
from typing import List
def sum_of_distances_in_tree(n: int, edges: List[List[int]]) -> List[int]:
graph = [[] for _ in range(n)]
for u, v in edges:
graph[u].append(v)
graph[v].append(u)
count = [1] * n
result = [0] * n
def postorder(node: int, parent: int) -> None:
for neighbor in graph[node]:
if neighbor == parent:
continue
postorder(neighbor, node)
count[node] += count[neighbor]
result[node] += result[neighbor] + count[neighbor]
def preorder(node: int, parent: int) -> None:
for neighbor in graph[node]:
if neighbor == parent:
continue
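            # rerooting: the count[neighbor] nodes inside this subtree get one edge closer, the other n - count[neighbor] nodes one edge farther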
result[neighbor] = result[node] - count[neighbor] + (n - count[neighbor])
preorder(neighbor, node)
postorder(0, -1)
preorder(0, -1)
return result
if __name__ == "__main__":
print(sum_of_distances_in_tree(6, [[0,1],[0,2],[2,3],[2,4],[2,5]]))
Solution 2: Recompute with BFS
For each node, run BFS to sum pairwise distances. This O(n²) method contrasts with the two-pass DP.
from collections import deque
from typing import List
def sum_of_distances_bfs(n: int, edges: List[List[int]]) -> List[int]:
graph = [[] for _ in range(n)]
for u, v in edges:
graph[u].append(v)
graph[v].append(u)
def bfs(start: int) -> int:
visited = [False] * n
queue = deque([(start, 0)])
visited[start] = True
total = 0
while queue:
node, dist = queue.popleft()
total += dist
for neighbor in graph[node]:
if not visited[neighbor]:
visited[neighbor] = True
queue.append((neighbor, dist + 1))
return total
return [bfs(node) for node in range(n)]
if __name__ == "__main__":
print(sum_of_distances_bfs(6, [[0,1],[0,2],[2,3],[2,4],[2,5]]))
Find the minimum tour cost visiting all cities exactly once. Sample Input: 4-city distance matrix. Sample Output: 80.
Let dp[mask][i] represent the shortest route that visits the cities in mask and ends at i. Transition by trying unvisited cities, which keeps the complexity O(n^2 * 2^n).
0 10 15 20
10 0 35 25
15 35 0 30
20 25 30 0
Solution 1: Primary Approach
from typing import List
def traveling_salesman(dist: List[List[int]]) -> int:
n = len(dist)
dp = [[float('inf')] * n for _ in range(1 << n)]
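    # dp[mask][city] = cheapest route visiting exactly the cities in mask and ending at city; the next line seeds the tour at city 0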
dp[1][0] = 0
for mask in range(1 << n):
for u in range(n):
if not (mask & (1 << u)):
continue
for v in range(n):
if mask & (1 << v):
continue
next_mask = mask | (1 << v)
dp[next_mask][v] = min(dp[next_mask][v], dp[mask][u] + dist[u][v])
final_mask = (1 << n) - 1
return min(dp[final_mask][v] + dist[v][0] for v in range(n))
if __name__ == "__main__":
matrix = [
[0, 10, 15, 20],
[10, 0, 35, 25],
[15, 35, 0, 30],
[20, 25, 30, 0],
]
print(traveling_salesman(matrix))
Solution 2: Recursive Held-Karp
Implement the Held-Karp recursion with memoisation to compute the shortest Hamiltonian tour.
from functools import lru_cache
from typing import List
def traveling_salesman_memo(distance: List[List[int]]) -> int:
n = len(distance)
@lru_cache(maxsize=None)
def dfs(mask: int, last: int) -> int:
if mask == (1 << n) - 1:
return distance[last][0]
best = float('inf')
for city in range(n):
if not mask & (1 << city):
best = min(best, distance[last][city] + dfs(mask | (1 << city), city))
return best
return dfs(1, 0)
if __name__ == "__main__":
distance = [
[0, 10, 15, 20],
[10, 0, 35, 25],
[15, 35, 0, 30],
[20, 25, 30, 0],
]
    print(traveling_salesman_memo(distance))
Count numbers up to n without consecutive ones in binary representation. Sample Input: n = 5. Sample Output: 5.
Memoize over the digit position, tight bound, and previous bit while constructing numbers in binary. Skipping placements that repeat a one enforces the no-consecutive-ones rule.
n = 5 (101)
states: (pos, tight, prevOne)
Solution 1: Primary Approach
from functools import lru_cache
def count_without_consecutive_ones(n: int) -> int:
bits = list(map(int, bin(n)[2:]))
@lru_cache(None)
def dp(index: int, tight: bool, prev_one: bool) -> int:
if index == len(bits):
return 1
limit = bits[index] if tight else 1
total = 0
for digit in range(limit + 1):
if prev_one and digit == 1:
continue
total += dp(index + 1, tight and digit == limit, digit == 1)
return total
return dp(0, True, False)
if __name__ == "__main__":
print(count_without_consecutive_ones(5))
Solution 2: Precomputed Fibonacci
Use the Fibonacci-like relationship counting binary strings without consecutive ones up to the bit-length of n.
from typing import List
def find_integers_fibonacci(n: int) -> int:
fib: List[int] = [1, 2]
for _ in range(2, 31):
fib.append(fib[-1] + fib[-2])
prev_bit = 0
result = 0
for i in range(30, -1, -1):
if n & (1 << i):
result += fib[i]
if prev_bit:
return result
prev_bit = 1
else:
prev_bit = 0
return result + 1
if __name__ == "__main__":
print(find_integers_fibonacci(5))
Maximize coins gained by bursting balloons in an optimal order. Sample Input: [3,1,5,8]. Sample Output: 167.
Consider every interval and choose a balloon to burst last so the subproblems on either side are independent. Trying every last balloon and combining subresults gives the optimal coins.
Vals: [1,3,1,5,8,1]
dp[i][j] best coins
Solution 1: Primary Approach
from typing import List
def max_coins(nums: List[int]) -> int:
values = [1] + nums + [1]
n = len(values)
dp = [[0] * n for _ in range(n)]
for length in range(2, n):
for left in range(0, n - length):
right = left + length
for last in range(left + 1, right):
dp[left][right] = max(
dp[left][right],
values[left] * values[last] * values[right] + dp[left][last] + dp[last][right],
)
return dp[0][-1]
if __name__ == "__main__":
print(max_coins([3, 1, 5, 8]))
Solution 2: Top-Down Memo
Memoise the maximum coins obtained by bursting balloons in a subinterval, choosing the last balloon to pop.
from functools import lru_cache
from typing import List
def max_coins_memo(nums: List[int]) -> int:
balloons = [1] + [num for num in nums if num > 0] + [1]
@lru_cache(maxsize=None)
def dfs(left: int, right: int) -> int:
if left + 1 == right:
return 0
best = 0
for k in range(left + 1, right):
best = max(
best,
balloons[left] * balloons[k] * balloons[right] + dfs(left, k) + dfs(k, right),
)
return best
return dfs(0, len(balloons) - 1)
if __name__ == "__main__":
print(max_coins_memo([3, 1, 5, 8]))
Compute the minimum jumps to reach the last index using greedy expansion. Sample Input: [2,3,1,1,4]. Sample Output: 2.
Track the current jump range and the farthest reachable index within that range. When you exhaust the range you commit to another jump and extend the range using the farthest seen.
Solution 1: Primary Approach
from typing import List
def minimum_jumps(nums: List[int]) -> int:
jumps = current_end = farthest = 0
for index in range(len(nums) - 1):
farthest = max(farthest, index + nums[index])
if index == current_end:
jumps += 1
current_end = farthest
return jumps
if __name__ == "__main__":
print(minimum_jumps([2, 3, 1, 1, 4]))
Solution 2: Dynamic Programming
Fill a DP array where dp[i] stores the minimum jumps to reach index i by scanning all previous positions.
from typing import List
def jump_game_dp(nums: List[int]) -> int:
n = len(nums)
dp = [float('inf')] * n
dp[0] = 0
for i in range(n):
for step in range(1, nums[i] + 1):
if i + step < n:
dp[i + step] = min(dp[i + step], dp[i] + 1)
return dp[-1]
if __name__ == "__main__":
print(jump_game_dp([2, 3, 1, 1, 4]))
Determine a starting station that allows completing the circuit once. Sample Input: gas=[1,2,3,4,5], cost=[3,4,5,1,2]. Sample Output: 3.
Maintain a running fuel balance and a global balance; reset the starting station whenever the current balance drops below zero. A non-negative global balance guarantees the last reset index works.
Solution 1: Primary Approach
from typing import List
def can_complete_circuit(gas: List[int], cost: List[int]) -> int:
total = current = start = 0
for index in range(len(gas)):
total += gas[index] - cost[index]
current += gas[index] - cost[index]
if current < 0:
current = 0
start = index + 1
return start if total >= 0 else -1
if __name__ == "__main__":
print(can_complete_circuit([1,2,3,4,5], [3,4,5,1,2]))
Solution 2: Try All Starts
Attempt starting at each station and simulate the circuit. Suitable for small inputs to illustrate the feasibility condition.
from typing import List
def can_complete_circuit_bruteforce(gas: List[int], cost: List[int]) -> int:
n = len(gas)
for start in range(n):
tank = 0
valid = True
for step in range(n):
idx = (start + step) % n
tank += gas[idx] - cost[idx]
if tank < 0:
valid = False
break
if valid:
return start
return -1
if __name__ == "__main__":
print(can_complete_circuit_bruteforce([1,2,3,4,5], [3,4,5,1,2]))
Distribute candies so higher-rated children get more than neighbors. Sample Input: [1,0,2]. Sample Output: 5.
Give each child one candy, sweep left to right increasing when ratings rise, then sweep right to left to fix descending runs. The two passes satisfy neighbor constraints with the fewest candies.
Solution 1: Primary Approach
from typing import List
def candy(ratings: List[int]) -> int:
candies = [1] * len(ratings)
for i in range(1, len(ratings)):
if ratings[i] > ratings[i - 1]:
candies[i] = candies[i - 1] + 1
for i in range(len(ratings) - 2, -1, -1):
if ratings[i] > ratings[i + 1]:
candies[i] = max(candies[i], candies[i + 1] + 1)
return sum(candies)
if __name__ == "__main__":
print(candy([1, 0, 2]))
Solution 2: Priority Queue Fix
Process children in order of rating using a priority queue, adjusting neighbour candies when necessary.
import heapq
from typing import List
def candy_priority(ratings: List[int]) -> int:
n = len(ratings)
candies = [1] * n
heap = [(rating, index) for index, rating in enumerate(ratings)]
heapq.heapify(heap)
while heap:
_, index = heapq.heappop(heap)
if index > 0 and ratings[index] > ratings[index - 1]:
candies[index] = max(candies[index], candies[index - 1] + 1)
if index + 1 < n and ratings[index] > ratings[index + 1]:
candies[index] = max(candies[index], candies[index + 1] + 1)
return sum(candies)
if __name__ == "__main__":
print(candy_priority([1, 0, 2]))
Choose the maximum number of non-overlapping intervals sorted by end time. Sample Input: [(1,3),(2,5),(4,6),(6,7)]. Sample Output: 3.
Sort activities by finishing time and pick each one that starts after the previous selected activity ends. Choosing the earliest finishing task always leaves as much space as possible for later ones.
Solution 1: Primary Approach
from typing import List, Tuple
def select_max_activities(intervals: List[Tuple[int, int]]) -> int:
intervals.sort(key=lambda x: x[1])
count = 0
last_end = -float('inf')
for start, end in intervals:
if start >= last_end:
count += 1
last_end = end
return count
if __name__ == "__main__":
print(select_max_activities([(1, 3), (2, 5), (4, 6), (6, 7)]))
Solution 2: Dynamic Programming
Sort activities by finish time and use DP over prefixes: each interval is either skipped or combined with the best answer among intervals that end no later than its start.
from bisect import bisect_right
from typing import List, Tuple
def max_activities_dp(intervals: List[Tuple[int, int]]) -> int:
intervals = sorted(intervals, key=lambda item: item[1])
ends = [end for _, end in intervals]
dp = [0] * (len(intervals) + 1)
for i, (start, end) in enumerate(intervals, 1):
        j = bisect_right(ends, start)  # intervals ending at or before this start are compatible
dp[i] = max(dp[i - 1], dp[j] + 1)
return dp[-1]
if __name__ == "__main__":
print(max_activities_dp([(1,2),(3,4),(0,6),(5,7),(8,9),(5,9)]))
Build a prefix-free code with minimum weighted path length. Sample Input: frequencies for a,b,c,d. Sample Output: {'a': '00', 'b': '01', 'c': '10', 'd': '11'}.
Repeatedly merge the two least frequent symbols using a min-heap and prepend 0 or 1 to their codes. The accumulating merge cost yields the minimum weighted path length.
Solution 1: Primary Approach
from typing import Dict
import heapq
def huffman_code(frequencies: Dict[str, int]) -> Dict[str, str]:
heap = [[weight, [symbol, ""]] for symbol, weight in frequencies.items()]
heapq.heapify(heap)
while len(heap) > 1:
left = heapq.heappop(heap)
right = heapq.heappop(heap)
for pair in left[1:]:
pair[1] = '0' + pair[1]
for pair in right[1:]:
pair[1] = '1' + pair[1]
heapq.heappush(heap, [left[0] + right[0]] + left[1:] + right[1:])
return {symbol: code for symbol, code in heap[0][1:]}
if __name__ == "__main__":
mapping = huffman_code({'a': 5, 'b': 9, 'c': 12, 'd': 13})
print(mapping)
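The explanation mentions the minimum weighted path length; a small hedged helper (an assumed addition, not part of the original) shows how to compute it from a code mapping:
from typing import Dict
def weighted_path_length(frequencies: Dict[str, int], codes: Dict[str, str]) -> int:
    # total cost = sum over symbols of frequency * code length
    return sum(frequencies[symbol] * len(codes[symbol]) for symbol in frequencies)
if __name__ == "__main__":
    freqs = {'a': 5, 'b': 9, 'c': 12, 'd': 13}
    codes = {'a': '00', 'b': '01', 'c': '10', 'd': '11'}  # matches the mapping printed above
    print(weighted_path_length(freqs, codes))  # 78, the same as the accumulated merge costs 14 + 25 + 39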
Solution 2: Recursive Merge
Recursively combine the two least frequent symbols and accumulate code lengths without an explicit priority queue.
from typing import Dict, Tuple
def huffman_recursive(frequencies: Dict[str, int]) -> Dict[str, str]:
if len(frequencies) == 1:
key = next(iter(frequencies))
return {key: '0'}
items = sorted(frequencies.items(), key=lambda item: item[1])
(a, fa), (b, fb) = items[0], items[1]
merged = {key: value for key, value in items[2:]}
merged[a + b] = fa + fb
codes = huffman_recursive(merged)
result: Dict[str, str] = {}
for key, code in codes.items():
if key == a + b:
result[a] = code + '0'
result[b] = code + '1'
else:
result[key] = code
return result
if __name__ == "__main__":
print(huffman_recursive({'a':5,'b':9,'c':12,'d':13,'e':16,'f':45}))
Schedule profit-bearing jobs before deadlines to maximize profit. Sample Input: (profit, deadline) jobs. Sample Output: 100.
Sort jobs by deadline and push profits into a min-heap; if you schedule more jobs than the current deadline allows, drop the smallest profit. The remaining jobs maximize profit while meeting deadlines.
Solution 1: Primary Approach
from typing import List, Tuple
import heapq
def max_profit_with_deadlines(jobs: List[Tuple[int, int]]) -> int:
jobs.sort(key=lambda job: job[1])
heap: list[int] = []
for profit, deadline in jobs:
heapq.heappush(heap, profit)
if len(heap) > deadline:
heapq.heappop(heap)
return sum(heap)
if __name__ == "__main__":
tasks = [(35, 3), (30, 4), (25, 4), (20, 2), (15, 3), (10, 2), (5, 3)]
print(max_profit_with_deadlines(tasks))
Solution 2: Dynamic Programming
Use DP across time slots to decide whether to include each job, similar to knapsack over deadlines.
from typing import List, Tuple
def schedule_jobs_dp(jobs: List[Tuple[int, int]]) -> int:
# jobs: (deadline, profit)
jobs = sorted(jobs)
max_deadline = max(deadline for deadline, _ in jobs)
dp = [0] * (max_deadline + 1)
for deadline, profit in jobs:
for time in range(deadline, 0, -1):
dp[time] = max(dp[time], dp[time - 1] + profit)
return max(dp)
if __name__ == "__main__":
print(schedule_jobs_dp([(2, 100), (1, 19), (2, 27), (1, 25), (3, 15)]))
Sort a linked list in O(n log n) time using merge sort. Sample Input: [4,2,1,3]. Sample Output: [1, 2, 3, 4].
Split the list with fast/slow pointers, recursively sort each half, and merge them. Merge sort works well on linked lists because it only needs sequential access.
Solution 1: Primary Approach
from typing import Optional
class ListNode:
def __init__(self, value: int, next_node: Optional['ListNode'] = None) -> None:
self.value = value
self.next = next_node
def sort_list(head: Optional[ListNode]) -> Optional[ListNode]:
if not head or not head.next:
return head
slow = fast = head
prev = None
while fast and fast.next:
prev = slow
slow = slow.next
fast = fast.next.next
prev.next = None
left = sort_list(head)
right = sort_list(slow)
return merge(left, right)
def merge(left: Optional[ListNode], right: Optional[ListNode]) -> Optional[ListNode]:
dummy = tail = ListNode(0)
while left and right:
if left.value < right.value:
tail.next, left = left, left.next
else:
tail.next, right = right, right.next
tail = tail.next
tail.next = left or right
return dummy.next
def to_array(head: Optional[ListNode]) -> list[int]:
arr = []
while head:
arr.append(head.value)
head = head.next
return arr
if __name__ == "__main__":
node = ListNode(4, ListNode(2, ListNode(1, ListNode(3))))
sorted_head = sort_list(node)
print(to_array(sorted_head))
Solution 2: Recursive Split
The same recursive split as above; the only difference is that this merge compares with `<=`, keeping equal values in their original order (a stable merge).
from typing import Optional
class ListNode:
def __init__(self, value: int, next_node: Optional["ListNode"] = None) -> None:
self.value = value
self.next = next_node
def sort_list_recursive(head: Optional[ListNode]) -> Optional[ListNode]:
if not head or not head.next:
return head
slow = fast = head
prev = None
while fast and fast.next:
prev = slow
slow = slow.next
fast = fast.next.next
prev.next = None
left = sort_list_recursive(head)
right = sort_list_recursive(slow)
return merge(left, right)
def merge(left: Optional[ListNode], right: Optional[ListNode]) -> Optional[ListNode]:
dummy = tail = ListNode(0)
while left and right:
if left.value <= right.value:
tail.next, left = left, left.next
else:
tail.next, right = right, right.next
tail = tail.next
tail.next = left or right
return dummy.next
if __name__ == "__main__":
head = ListNode(4, ListNode(2, ListNode(1, ListNode(3))))
sorted_head = sort_list_recursive(head)
values = []
while sorted_head:
values.append(sorted_head.value)
sorted_head = sorted_head.next
print(values)
Sort an array in-place using the quicksort algorithm. Sample Input: [5,3,8,4,2]. Sample Output: [2, 3, 4, 5, 8].
Partition the array around a pivot and recursively sort the halves. The partition routine places elements less than the pivot before it and greater elements after it.
Solution 1: Primary Approach
from typing import List
def quick_sort(nums: List[int]) -> List[int]:
def sort(left: int, right: int) -> None:
if left >= right:
return
pivot = nums[(left + right) // 2]
i, j = left, right
while i <= j:
while nums[i] < pivot:
i += 1
while nums[j] > pivot:
j -= 1
if i <= j:
nums[i], nums[j] = nums[j], nums[i]
i += 1
j -= 1
sort(left, j)
sort(i, right)
sort(0, len(nums) - 1)
return nums
if __name__ == "__main__":
print(quick_sort([5, 3, 8, 4, 2]))
Solution 2: Python sort
Use Python's built-in Timsort via `sorted`, which is stable and highly optimised (it returns a new list rather than sorting in place).
from typing import List
def quick_sort_builtin(nums: List[int]) -> List[int]:
return sorted(nums)
if __name__ == "__main__":
print(quick_sort_builtin([3, 6, 8, 10, 1, 2, 1]))
Sort an array using the merge sort divide-and-conquer technique. Sample Input: [5,1,6,2,3,4]. Sample Output: [1, 2, 3, 4, 5, 6].
Recursively divide the array and merge sorted halves using a helper buffer. Each merge step combines two sorted subarrays into a larger sorted segment.
Solution 1: Primary Approach
from typing import List
def merge_sort(nums: List[int]) -> List[int]:
if len(nums) <= 1:
return nums
mid = len(nums) // 2
left = merge_sort(nums[:mid])
right = merge_sort(nums[mid:])
return merge(left, right)
def merge(left: List[int], right: List[int]) -> List[int]:
merged = []
i = j = 0
while i < len(left) and j < len(right):
if left[i] < right[j]:
merged.append(left[i])
i += 1
else:
merged.append(right[j])
j += 1
merged.extend(left[i:])
merged.extend(right[j:])
return merged
if __name__ == "__main__":
print(merge_sort([5, 1, 6, 2, 3, 4]))
Solution 2: Iterative Bottom-Up
Perform merge sort iteratively by merging runs of size 1, 2, 4, etc.
from typing import List
def merge_sort_bottom_up(nums: List[int]) -> List[int]:
width = 1
n = len(nums)
result = nums[:]
buffer = [0] * n
while width < n:
for i in range(0, n, 2 * width):
left, right = i, min(i + width, n)
end = min(i + 2 * width, n)
l, r = left, right
for k in range(left, end):
if l < right and (r >= end or result[l] <= result[r]):
buffer[k] = result[l]
l += 1
else:
buffer[k] = result[r]
r += 1
result, buffer = buffer, result
width *= 2
return result
if __name__ == "__main__":
print(merge_sort_bottom_up([5, 2, 4, 6, 1, 3]))
Return the k-th largest element from an unsorted array using a heap. Sample Input: nums=[3,2,1,5,6,4], k=2. Sample Output: 5.
Maintain a min-heap of size k holding the largest elements seen so far. Whenever a new value beats the heap's minimum, replace that minimum, leaving the k-th largest at the root.
Solution 1: Primary Approach
from typing import List
import heapq
def kth_largest(nums: List[int], k: int) -> int:
heap = nums[:k]
heapq.heapify(heap)
for value in nums[k:]:
if value > heap[0]:
heapq.heapreplace(heap, value)
return heap[0]
if __name__ == "__main__":
print(kth_largest([3,2,1,5,6,4], 2))
Solution 2: Sorting
Sort the array ascending and index from the end to find the k-th largest.
from typing import List
def kth_largest_sort(nums: List[int], k: int) -> int:
return sorted(nums)[-k]
if __name__ == "__main__":
print(kth_largest_sort([3,2,1,5,6,4], 2))
Sort an array containing 0s, 1s, and 2s in one pass. Sample Input: [2,0,2,1,1,0]. Sample Output: [0, 0, 1, 1, 2, 2].
Use three pointers (low, mid, high) to partition the array into values less than, equal to, and greater than 1: 0s end up before low, 1s between low and mid, and 2s after high. Swapping in place arranges them in one pass.
Solution 1: Primary Approach
from typing import List
def dutch_national_flag(nums: List[int]) -> List[int]:
low = mid = 0
high = len(nums) - 1
while mid <= high:
if nums[mid] == 0:
nums[low], nums[mid] = nums[mid], nums[low]
low += 1
mid += 1
elif nums[mid] == 1:
mid += 1
else:
nums[mid], nums[high] = nums[high], nums[mid]
high -= 1
return nums
if __name__ == "__main__":
print(dutch_national_flag([2, 0, 2, 1, 1, 0]))
Solution 2: Counting
Count occurrences of 0, 1, and 2, then overwrite the array with the correct number of each.
from typing import List
def sort_colors_count(nums: List[int]) -> List[int]:
counts = [0, 0, 0]
for num in nums:
counts[num] += 1
index = 0
for value, count in enumerate(counts):
for _ in range(count):
nums[index] = value
index += 1
return nums
if __name__ == "__main__":
print(sort_colors_count([2,0,2,1,1,0]))
Select the k-th smallest element in expected linear time. Sample Input: nums=[7,10,4,3,20,15], k=2 (0-indexed). Sample Output: 7.
Partition around a pivot and recurse only into the side containing the kth index. Ignoring the other side makes the expected runtime linear.
Solution 1: Primary Approach
from typing import List
import random
def quickselect(nums: List[int], k: int) -> int:
def partition(left: int, right: int, pivot_index: int) -> int:
pivot_value = nums[pivot_index]
nums[pivot_index], nums[right] = nums[right], nums[pivot_index]
store_index = left
for i in range(left, right):
if nums[i] < pivot_value:
nums[store_index], nums[i] = nums[i], nums[store_index]
store_index += 1
nums[right], nums[store_index] = nums[store_index], nums[right]
return store_index
target = k
left, right = 0, len(nums) - 1
while True:
pivot_index = random.randint(left, right)
pivot_index = partition(left, right, pivot_index)
if pivot_index == target:
return nums[pivot_index]
if pivot_index < target:
left = pivot_index + 1
else:
right = pivot_index - 1
if __name__ == "__main__":
print(quickselect([7, 10, 4, 3, 20, 15], 2))
Solution 2: Heap
Keep the k + 1 smallest values in a max-heap (stored as negated numbers) so the root is the k-th smallest, matching the 0-indexed problem statement.
import heapq
from typing import List
def quickselect_heap(nums: List[int], k: int) -> int:
    heap: List[int] = []  # max-heap of the k + 1 smallest values, stored as negatives
    for num in nums:
        heapq.heappush(heap, -num)
        if len(heap) > k + 1:
            heapq.heappop(heap)
    return -heap[0]
if __name__ == "__main__":
    print(quickselect_heap([7, 10, 4, 3, 20, 15], 2))
Find the k-th smallest element with worst-case linear time. Sample Input: [12,3,5,7,4,19,26], k=3 (0-indexed). Sample Output: 7.
Group elements into chunks of five, take their medians, and recursively pick a pivot from those medians. This guaranteed-good pivot keeps the selection algorithm O(n) in the worst case.
Solution 1: Primary Approach
from typing import List
def select_kth(nums: List[int], k: int) -> int:
if len(nums) <= 5:
return sorted(nums)[k]
medians = [select_kth(nums[i:i + 5], len(nums[i:i + 5]) // 2) for i in range(0, len(nums), 5)]
pivot = select_kth(medians, len(medians) // 2)
lows = [x for x in nums if x < pivot]
highs = [x for x in nums if x > pivot]
pivots = [x for x in nums if x == pivot]
if k < len(lows):
return select_kth(lows, k)
if k < len(lows) + len(pivots):
return pivot
return select_kth(highs, k - len(lows) - len(pivots))
if __name__ == "__main__":
print(select_kth([12, 3, 5, 7, 4, 19, 26], 3))
Solution 2: Randomised Quickselect
Use randomised quickselect which performs well on average despite worst-case quadratic behaviour.
import random
from typing import List
def median_randomised(nums: List[int], k: int) -> int:
def select(left: int, right: int, index: int) -> int:
if left == right:
return nums[left]
pivot_index = random.randint(left, right)
pivot_value = nums[pivot_index]
nums[pivot_index], nums[right] = nums[right], nums[pivot_index]
store = left
for i in range(left, right):
if nums[i] < pivot_value:
nums[store], nums[i] = nums[i], nums[store]
store += 1
nums[store], nums[right] = nums[right], nums[store]
if index == store:
return nums[store]
if index < store:
return select(left, store - 1, index)
return select(store + 1, right, index)
return select(0, len(nums) - 1, k)
if __name__ == "__main__":
print(median_randomised([7, 10, 4, 3, 20, 15], 2))
Count the number of set bits in a 32-bit integer. Sample Input: n = 0b101101. Sample Output: 4.
Repeatedly clear the lowest set bit with n &= n - 1 and count how many times you do so. Each iteration removes one set bit, so the runtime depends on the number of ones.
n = 0b101101
bits -> count = 4
Solution 1: Primary Approach
def hamming_weight(n: int) -> int:
count = 0
while n:
n &= n - 1
count += 1
return count
if __name__ == "__main__":
print(hamming_weight(0b101101))
Solution 2: Built-in Count
Use Python's built-in `bin` representation to count set bits.
def hamming_weight_builtin(n: int) -> int:
return bin(n).count('1')
if __name__ == "__main__":
print(hamming_weight_builtin(0b1011))
Compute the number of set bits for each number from 0 to n. Sample Input: n = 5. Sample Output: [0,1,1,2,1,2].
Build an array where count[i] equals count[i >> 1] plus the least significant bit. Reusing earlier results lets you fill the table in linear time.
n = 5
result: [0,1,1,2,1,2]
Solution 1: Primary Approach
from typing import List
def count_bits(n: int) -> List[int]:
dp = [0] * (n + 1)
for i in range(1, n + 1):
dp[i] = dp[i >> 1] + (i & 1)
return dp
if __name__ == "__main__":
print(count_bits(5))
Solution 2: Naive Count
Count bits for each number independently using the builtin counter.
from typing import List
def count_bits_naive(n: int) -> List[int]:
return [bin(i).count('1') for i in range(n + 1)]
if __name__ == "__main__":
print(count_bits_naive(5))
Reverse the bits of a 32-bit unsigned integer. Sample Input: 0b00000010100101000001111010011100. Sample Output: 0b111001011110000010100101000000.
Iterate 32 times, shifting the result left and appending the lowest bit of the input each round. Shifting the input right in tandem moves every bit to its mirrored position.
00000010100101000001111010011100
-> 00111001011110000010100101000000
Solution 1: Primary Approach
def reverse_bits(n: int) -> int:
result = 0
for _ in range(32):
result = (result << 1) | (n & 1)
n >>= 1
return result
if __name__ == "__main__":
print(bin(reverse_bits(0b00000010100101000001111010011100)))
Solution 2: String Reverse
Convert to a binary string, reverse it, and parse back.
def reverse_bits_string(n: int) -> int:
bits = f"{n:032b}"[::-1]
return int(bits, 2)
if __name__ == "__main__":
print(reverse_bits_string(0b00000010100101000001111010011100))
Find the element that appears once when all others appear twice. Sample Input: [2,2,1]. Sample Output: 1.
XOR all numbers together so duplicates cancel out, leaving the unique value. XOR is ideal because it is both commutative and self-inverse.
nums: [2,2,1]
xor -> 1
Solution 1: Primary Approach
from typing import List
def single_number(nums: List[int]) -> int:
result = 0
for num in nums:
result ^= num
return result
if __name__ == "__main__":
print(single_number([2, 2, 1]))
Solution 2: Set Arithmetic
Use the formula `2 * sum(set(nums)) - sum(nums)` to isolate the unique element.
from typing import List
def single_number_set(nums: List[int]) -> int:
return 2 * sum(set(nums)) - sum(nums)
if __name__ == "__main__":
print(single_number_set([4,1,2,1,2]))
Find the element that appears once when others appear three times. Sample Input: [2,2,3,2]. Sample Output: 3.
Count set bits at each position modulo three and rebuild the number from bits with remainder one. Bits from numbers that appear three times vanish, leaving the lone value.
nums: [2,2,3,2]
ones/twos tracking
Solution 1: Primary Approach
from typing import List
def single_number_three(nums: List[int]) -> int:
ones = twos = 0
for num in nums:
ones = (ones ^ num) & ~twos
twos = (twos ^ num) & ~ones
return ones
if __name__ == "__main__":
print(single_number_three([2, 2, 3, 2]))
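The explanation above speaks of counting set bits per position modulo three; a literal hedged sketch of that idea (a companion to the ones/twos trick above, with a 32-bit adjustment so negative inputs also work):
from typing import List
def single_number_three_bitcount(nums: List[int]) -> int:
    result = 0
    for bit in range(32):
        # bits contributed by values appearing three times cancel modulo 3
        if sum((num >> bit) & 1 for num in nums) % 3:
            result |= 1 << bit
    if result >= 1 << 31:
        result -= 1 << 32  # reinterpret the 32-bit pattern as a signed integer
    return result
if __name__ == "__main__":
    print(single_number_three_bitcount([2, 2, 3, 2]))  # 3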
Solution 2: Frequency Counter
Count occurrences with a dictionary and return the number with frequency 1.
from collections import Counter
from typing import List
def single_number_two_counter(nums: List[int]) -> int:
counts = Counter(nums)
for num, freq in counts.items():
if freq == 1:
return num
return -1
if __name__ == "__main__":
print(single_number_two_counter([0,1,0,1,0,1,99]))
Identify the missing number in the range [0, n] using XOR. Sample Input: [3,0,1]. Sample Output: 2.
XOR the indices and the array values together; everything cancels except the missing index. This avoids sorting or extra memory.
nums: [3,0,1]
xor with [0..3] => 2
Solution 1: Primary Approach
from typing import List
def missing_number(nums: List[int]) -> int:
xor = 0
for index, num in enumerate(nums):
xor ^= index ^ num
return xor ^ len(nums)
if __name__ == "__main__":
print(missing_number([3, 0, 1]))
Solution 2: Summation
Subtract the sum of the array from the expected sum of 0..n.
from typing import List
def missing_number_sum(nums: List[int]) -> int:
n = len(nums)
return n * (n + 1) // 2 - sum(nums)
if __name__ == "__main__":
print(missing_number_sum([3,0,1]))
Add two integers without using + or - operators. Sample Input: a = 5, b = 7. Sample Output: 12.
Use XOR for the partial sum and AND shifted left for the carries, repeating until the carry is zero. Handling sign bits with masks keeps the computation in 32-bit space.
a=5 (0101)
b=7 (0111)
carry shifts
Solution 1: Primary Approach
def add_without_plus(a: int, b: int) -> int:
mask = 0xFFFFFFFF
while b:
carry = (a & b) & mask
a = (a ^ b) & mask
b = (carry << 1) & mask
if a >> 31 & 1:
return ~((a ^ mask))
return a
if __name__ == "__main__":
print(add_without_plus(5, 7))
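As a quick check of the masking, add_without_plus(-2, 3) evaluates to 1 and add_without_plus(5, -7) to -2: the loop works on 32-bit masked values, and the final sign test converts a result with bit 31 set back into a negative Python integer.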
Solution 2: Python Addition
Python's arbitrary precision integers handle addition directly; useful baseline for comparison.
def add_with_plus(a: int, b: int) -> int:
return a + b
if __name__ == "__main__":
print(add_with_plus(5, 7))
Swap each pair of adjacent bits in a 32-bit integer. Sample Input: 0b101100. Sample Output: 0b11100.
Mask out even and odd bits separately, shift them toward each other, and OR the results. The bit masks ensure each pair swaps positions in constant time.
n = 0b101100
even mask 0xAAAAAAAA
odd mask 0x55555555
Solution 1: Primary Approach
def swap_odd_even_bits(n: int) -> int:
even = (n & 0xAAAAAAAA) >> 1
odd = (n & 0x55555555) << 1
return even | odd
if __name__ == "__main__":
print(bin(swap_odd_even_bits(0b101100)))
Solution 2: Bit-by-Bit
Iterate through bit positions swapping odd and even bits manually.
def swap_bits_loop(n: int) -> int:
result = 0
for bit in range(0, 32, 2):
even_bit = (n >> (bit + 1)) & 1
odd_bit = (n >> bit) & 1
result |= even_bit << bit
result |= odd_bit << (bit + 1)
return result
if __name__ == "__main__":
print(bin(swap_bits_loop(0b1010)))
Find the unique element when each other element appears twice. Sample Input: [1,1,2,2,3]. Sample Output: 3.
XOR all numbers so that pairs cancel and the solitary element remains. This works even when the lonely value appears only once among duplicates.
nums: [1,1,2,2,3]
xor accum = 3
Solution 1: Primary Approach
from typing import List
def lonely_integer(nums: List[int]) -> int:
value = 0
for num in nums:
value ^= num
return value
if __name__ == "__main__":
print(lonely_integer([1, 1, 2, 2, 3]))
Solution 2: Dictionary Count
Use a frequency dictionary to locate the value with count one.
from collections import Counter
from typing import List
def lonely_integer_counter(nums: List[int]) -> int:
counts = Counter(nums)
for num, freq in counts.items():
if freq == 1:
return num
return -1
if __name__ == "__main__":
print(lonely_integer_counter([1,2,3,2,1]))
Build a linearly independent XOR basis from a list of integers. Sample Input: [5,3,10]. Sample Output: [3, 5, 10].
Insert numbers into a basis from the highest bit downward, XORing with existing basis vectors to maintain independence. The resulting basis spans all achievable XOR combinations.
nums: [5,3,10]
basis -> [3, 5, 10]
Solution 1: Primary Approach
from typing import List
def xor_basis(nums: List[int]) -> List[int]:
basis = [0] * 32
for number in nums:
value = number
for bit in range(31, -1, -1):
if not (value & (1 << bit)):
continue
if not basis[bit]:
basis[bit] = value
break
value ^= basis[bit]
return [b for b in basis if b]
if __name__ == "__main__":
print(xor_basis([5, 3, 10]))
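As a hedged illustration of what the basis spans, a small helper (an assumed addition, not part of the original) greedily derives the maximum XOR obtainable from any subset of the input:
from typing import List
def max_subset_xor(basis: List[int]) -> int:
    # every basis vector has a distinct leading bit, so a high-to-low greedy pass is optimal
    best = 0
    for vector in sorted(basis, reverse=True):
        best = max(best, best ^ vector)
    return best
if __name__ == "__main__":
    print(max_subset_xor([3, 5, 10]))  # 15, e.g. 5 ^ 10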
Solution 2: Incremental Gaussian Elimination
Reduce each number against the existing basis vectors (kept sorted in descending order) and append it only if a non-zero remainder survives.
from typing import List
def xor_basis_gaussian(nums: List[int]) -> List[int]:
basis: List[int] = []
for num in nums:
value = num
for b in basis:
value = min(value, value ^ b)
if value:
basis.append(value)
basis.sort(reverse=True)
return basis
if __name__ == "__main__":
print(xor_basis_gaussian([3, 10, 5]))
Traverse a matrix in spiral order and return the elements. Sample Input: 3x3 matrix. Sample Output: [1,2,3,6,9,8,7,4,5].
Peel the matrix layer by layer by traversing the top row, right column, bottom row, and left column while shrinking the boundaries. This ordered walk lists the elements in spiral order.
Solution 1: Primary Approach
from typing import List
def spiral_order(matrix: List[List[int]]) -> List[int]:
result = []
if not matrix:
return result
top, bottom = 0, len(matrix) - 1
left, right = 0, len(matrix[0]) - 1
while top <= bottom and left <= right:
for col in range(left, right + 1):
result.append(matrix[top][col])
top += 1
for row in range(top, bottom + 1):
result.append(matrix[row][right])
right -= 1
if top <= bottom:
for col in range(right, left - 1, -1):
result.append(matrix[bottom][col])
bottom -= 1
if left <= right:
for row in range(bottom, top - 1, -1):
result.append(matrix[row][left])
left += 1
return result
if __name__ == "__main__":
print(spiral_order([[1,2,3],[4,5,6],[7,8,9]]))
Solution 2: Visited Walk
Walk the matrix using direction vectors and a visited set, turning right whenever the next cell is invalid.
from typing import List, Tuple
def spiral_order_visited(matrix: List[List[int]]) -> List[int]:
if not matrix:
return []
rows, cols = len(matrix), len(matrix[0])
visited = set()
order: List[int] = []
directions: List[Tuple[int, int]] = [(0,1), (1,0), (0,-1), (-1,0)]
r = c = d = 0
for _ in range(rows * cols):
order.append(matrix[r][c])
visited.add((r, c))
nr, nc = r + directions[d][0], c + directions[d][1]
if not (0 <= nr < rows and 0 <= nc < cols) or (nr, nc) in visited:
d = (d + 1) % 4
nr, nc = r + directions[d][0], c + directions[d][1]
r, c = nr, nc
return order
if __name__ == "__main__":
print(spiral_order_visited([[1,2,3],[4,5,6],[7,8,9]]))
Reverse digits of a 32-bit signed integer with overflow checks. Sample Input: 123. Sample Output: 321.
Extract digits from the absolute value, append them to the reversed number, and reapply the sign. An overflow check against 32-bit bounds guards against out-of-range results.
Solution 1: Primary Approach
def reverse_integer(x: int) -> int:
sign = -1 if x < 0 else 1
x = abs(x)
reversed_value = 0
while x:
reversed_value = reversed_value * 10 + x % 10
x //= 10
reversed_value *= sign
if reversed_value < -2**31 or reversed_value > 2**31 - 1:
return 0
return reversed_value
if __name__ == "__main__":
print(reverse_integer(123))
Solution 2: String Reverse
Convert to string, reverse, and clamp to signed 32-bit range.
def reverse_integer_string(x: int) -> int:
sign = -1 if x < 0 else 1
reversed_str = str(abs(x))[::-1]
value = sign * int(reversed_str)
return value if -2**31 <= value <= 2**31 - 1 else 0
if __name__ == "__main__":
print(reverse_integer_string(123))
Find the maximum number of points that lie on the same straight line. Sample Input: points = [(1,1),(2,2),(3,3),(0,4)]. Sample Output: 3.
For each anchor point compute normalized slopes to every other point while counting duplicates. Slope normalization via gcd clusters points lying on the same line.
Solution 1: Primary Approach
from typing import List, Tuple
from collections import defaultdict
from math import gcd
def max_points(points: List[Tuple[int, int]]) -> int:
if len(points) <= 2:
return len(points)
best = 0
for i, (x1, y1) in enumerate(points):
slopes = defaultdict(int)
duplicates = 1
for j in range(i + 1, len(points)):
x2, y2 = points[j]
if x1 == x2 and y1 == y2:
duplicates += 1
continue
            dx, dy = x2 - x1, y2 - y1
            g = gcd(dx, dy)
            dx //= g
            dy //= g
            if dx < 0 or (dx == 0 and dy < 0):
                # normalise the sign so points on opposite sides of the anchor share a key
                dx, dy = -dx, -dy
            slopes[(dx, dy)] += 1
current = duplicates
for count in slopes.values():
current = max(current, count + duplicates)
best = max(best, current)
return best
if __name__ == "__main__":
print(max_points([(1,1),(2,2),(3,3),(0,4)]))
Solution 2: Brute Force
Check every pair of points and count how many lie on the defined line.
from typing import List, Tuple
def max_points_bruteforce(points: List[Tuple[int, int]]) -> int:
if len(points) < 2:
return len(points)
best = 0
for i in range(len(points)):
for j in range(i + 1, len(points)):
count = 2
for k in range(len(points)):
if k == i or k == j:
continue
if (points[j][0] - points[i][0]) * (points[k][1] - points[i][1]) == (points[j][1] - points[i][1]) * (points[k][0] - points[i][0]):
count += 1
best = max(best, count)
return best
if __name__ == "__main__":
print(max_points_bruteforce([(1,1),(2,2),(3,3)]))
Determine the surviving index in the Josephus elimination process. Sample Input: n = 7, k = 3. Sample Output: 3.
The Josephus problem can be solved using various approaches, each with different trade-offs in terms of time complexity and conceptual understanding. The core idea is to find the position of the survivor in a circle where every k-th person is eliminated.
Solution 1: Iterative (Dynamic Programming)
Build the survivor's position bottom-up: with one person the survivor sits at position 0, and each added person shifts the survivor by k modulo the new circle size. This runs in O(N) time and O(1) space.
def josephus(n: int, k: int) -> int:
survivor = 0
for size in range(1, n + 1):
survivor = (survivor + k) % size
return survivor
if __name__ == "__main__":
print(josephus(7, 3))
Solution 2: Recursive
This is the recursive formulation of the same mathematical recurrence. The base case is when there is only one person left, they are the survivor (position 0). For `n` people, the survivor is found by taking the survivor of `n-1` people, shifting their position by `k`, and taking the result modulo `n`.
def josephus_recursive(n: int, k: int) -> int:
if n == 1:
return 0
return (josephus_recursive(n - 1, k) + k) % n
if __name__ == "__main__":
print(josephus_recursive(7, 3))
Solution 3: Linked List Simulation
This approach simulates the elimination process directly using a circular linked list. It's less efficient (O(N*K)) but provides a clear step-by-step understanding of the problem. We create a circular linked list of `n` people, then repeatedly skip `k-1` people and remove the `k`-th person until only one remains.
class Node:
def __init__(self, value):
self.value = value
self.next = None
def josephus_linked_list(n: int, k: int) -> int:
if n == 0: return -1
if n == 1: return 0
head = Node(0)
current = head
for i in range(1, n):
current.next = Node(i)
current = current.next
current.next = head # Make it circular
while current.next != current:
for _ in range(k - 1):
current = current.next
current.next = current.next.next # Remove the k-th person
return current.value
if __name__ == "__main__":
print(josephus_linked_list(7, 3))
Compute the n-th triangular number representing stacked dots. Sample Input: n = 5. Sample Output: 15.
Apply the closed-form formula n * (n + 1) // 2 to compute the triangular number directly. The arithmetic derivation comes from summing the first n integers.
Solution 1: Primary Approach
def triangle_number(n: int) -> int:
return n * (n + 1) // 2
if __name__ == "__main__":
print(triangle_number(5))
Solution 2: Iterative Sum
Accumulate numbers from 1 to n iteratively.
def triangle_number_iterative(n: int) -> int:
total = 0
for i in range(1, n + 1):
total += i
return total
if __name__ == "__main__":
print(triangle_number_iterative(5))
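Since the closed form is just the sum of the first n integers written compactly, a quick illustrative check is to compare it against sum(range(1, n + 1)) for a handful of values.
if __name__ == "__main__":
    # Sanity check: the closed form should agree with direct summation of 1..n.
    for n in range(10):
        assert n * (n + 1) // 2 == sum(range(1, n + 1))
    print("closed form agrees with the direct sum for n = 0..9")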
Determine point orientation and whether line segments intersect. Sample Input: segments (1,1)-(4,4) and (1,8)-(2,4). Sample Output: False.
Use cross products to classify the orientation of each ordered point triple as clockwise, counter-clockwise, or collinear. If the endpoints of each segment lie on opposite sides of the other segment, the segments intersect; collinear cases are handled separately with an on-segment bounding-box check. The four orientation tests combined decide the general case.
Solution 1: Primary Approach
from typing import Tuple
def orientation(p: Tuple[int, int], q: Tuple[int, int], r: Tuple[int, int]) -> int:
value = (q[1] - p[1]) * (r[0] - q[0]) - (q[0] - p[0]) * (r[1] - q[1])
if value == 0:
return 0
return 1 if value > 0 else 2
def on_segment(p: Tuple[int, int], q: Tuple[int, int], r: Tuple[int, int]) -> bool:
return min(p[0], r[0]) <= q[0] <= max(p[0], r[0]) and min(p[1], r[1]) <= q[1] <= max(p[1], r[1])
def segments_intersect(p1, q1, p2, q2) -> bool:
o1 = orientation(p1, q1, p2)
o2 = orientation(p1, q1, q2)
o3 = orientation(p2, q2, p1)
o4 = orientation(p2, q2, q1)
if o1 != o2 and o3 != o4:
return True
if o1 == 0 and on_segment(p1, p2, q1):
return True
if o2 == 0 and on_segment(p1, q2, q1):
return True
if o3 == 0 and on_segment(p2, p1, q2):
return True
if o4 == 0 and on_segment(p2, q1, q2):
return True
return False
if __name__ == "__main__":
print(segments_intersect((1, 1), (4, 4), (1, 8), (2, 4)))
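The collinear branches are the easiest to get wrong, so it is worth exercising them directly. The calls below are illustrative checks reusing segments_intersect from above: collinear overlapping segments (True), collinear but disjoint segments (False), and parallel segments (False).
if __name__ == "__main__":
    print(segments_intersect((0, 0), (4, 4), (2, 2), (6, 6)))  # True: collinear and overlapping
    print(segments_intersect((0, 0), (1, 1), (3, 3), (5, 5)))  # False: collinear but disjoint
    print(segments_intersect((0, 0), (2, 0), (0, 1), (2, 1)))  # False: parallel, never touch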
Solution 2: Vector Cross
Compute the orientation from an explicit cross product of the vectors p→q and p→r, and nest the on-segment check inside the intersection routine so the whole test lives in one function.
from typing import Tuple
def orientation_vector(p: Tuple[int, int], q: Tuple[int, int], r: Tuple[int, int]) -> int:
value = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
if value == 0:
return 0
return 1 if value > 0 else 2
def do_intersect_vector(p1: Tuple[int, int], q1: Tuple[int, int], p2: Tuple[int, int], q2: Tuple[int, int]) -> bool:
def on_segment(p: Tuple[int, int], q: Tuple[int, int], r: Tuple[int, int]) -> bool:
return min(p[0], r[0]) <= q[0] <= max(p[0], r[0]) and min(p[1], r[1]) <= q[1] <= max(p[1], r[1])
o1 = orientation_vector(p1, q1, p2)
o2 = orientation_vector(p1, q1, q2)
o3 = orientation_vector(p2, q2, p1)
o4 = orientation_vector(p2, q2, q1)
if o1 != o2 and o3 != o4:
return True
if o1 == 0 and on_segment(p1, p2, q1):
return True
if o2 == 0 and on_segment(p1, q2, q1):
return True
if o3 == 0 and on_segment(p2, p1, q2):
return True
if o4 == 0 and on_segment(p2, q1, q2):
return True
return False
if __name__ == "__main__":
print(do_intersect_vector((1,1),(4,4),(1,8),(2,4)))
Compute the convex hull of a set of planar points. Sample Input: the corners of a 2x2 square plus an interior point. Sample Output: [(0, 0), (2, 0), (2, 2), (0, 2)].
Sort the points lexicographically and build the hull in two passes (Andrew's monotone chain): sweep left to right for the lower hull and right to left for the upper hull, popping the last stacked point whenever the last two points and the new one fail to make a counter-clockwise turn. Concatenating the two chains, minus the duplicated endpoints, yields the hull in counter-clockwise order.
Solution 1: Primary Approach
from typing import List, Tuple
def convex_hull(points: List[Tuple[int, int]]) -> List[Tuple[int, int]]:
points = sorted(set(points))
if len(points) <= 1:
return points
def cross(o, a, b):
return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
lower = []
for point in points:
while len(lower) >= 2 and cross(lower[-2], lower[-1], point) <= 0:
lower.pop()
lower.append(point)
upper = []
for point in reversed(points):
while len(upper) >= 2 and cross(upper[-2], upper[-1], point) <= 0:
upper.pop()
upper.append(point)
return lower[:-1] + upper[:-1]
if __name__ == "__main__":
print(convex_hull([(0,0),(1,1),(2,2),(0,2),(2,0)]))
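Because points are popped whenever the cross product is <= 0, points that are collinear with a hull edge or lie inside the hull are discarded. The run below is an illustrative check reusing convex_hull from above: it adds a point on the bottom edge and one in the interior, and only the four corners survive.
if __name__ == "__main__":
    square_with_extras = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 0), (1, 1)]
    print(convex_hull(square_with_extras))  # [(0, 0), (2, 0), (2, 2), (0, 2)]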
Solution 2: Jarvis March
Use the gift wrapping (Jarvis march) algorithm: starting from the leftmost point, repeatedly choose the next hull point so that every other point lies to the left of the directed edge leading to it. This walks the hull counter-clockwise in O(N*H) time for H hull vertices.
from typing import List, Tuple
def convex_hull_jarvis(points: List[Tuple[int, int]]) -> List[Tuple[int, int]]:
    def cross(o: Tuple[int, int], a: Tuple[int, int], b: Tuple[int, int]) -> int:
        # Positive if b is counter-clockwise of the ray o->a, negative if clockwise, zero if collinear.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    if len(points) <= 1:
        return points
    hull: List[Tuple[int, int]] = []
    left_most = min(points)
    point_on_hull = left_most
    while True:
        hull.append(point_on_hull)
        endpoint = points[0]
        for candidate in points[1:]:
            # Replace the tentative endpoint whenever the candidate lies to its right.
            if endpoint == point_on_hull or cross(point_on_hull, endpoint, candidate) < 0:
                endpoint = candidate
        point_on_hull = endpoint
        if endpoint == left_most:
            break
    return hull
if __name__ == "__main__":
pts = [(0,0),(2,0),(2,2),(0,2),(1,1)]
print(convex_hull_jarvis(pts))
Rotate an N×N matrix 90 degrees clockwise in-place. Sample Input: [[1,2,3],[4,5,6],[7,8,9]]. Sample Output: [[7,4,1],[8,5,2],[9,6,3]].
First transpose the matrix to swap rows and columns, then reverse each row. The combination rotates the matrix 90 degrees clockwise in-place.
Solution 1: Primary Approach
from typing import List
def rotate_matrix(matrix: List[List[int]]) -> None:
n = len(matrix)
for i in range(n):
for j in range(i, n):
matrix[i][j], matrix[j][i] = matrix[j][i], matrix[i][j]
for row in matrix:
row.reverse()
if __name__ == "__main__":
mat = [[1,2,3],[4,5,6],[7,8,9]]
rotate_matrix(mat)
print(mat)
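To make the two-step decomposition concrete, the sketch below prints the intermediate state after the transpose and again after reversing each row; the helper name show_rotation_steps is ours, purely for illustration.
from typing import List
def show_rotation_steps(matrix: List[List[int]]) -> None:
    # Illustrative sketch: expose the intermediate states of the in-place rotation.
    n = len(matrix)
    for i in range(n):
        for j in range(i, n):
            matrix[i][j], matrix[j][i] = matrix[j][i], matrix[i][j]
    print("after transpose:   ", matrix)  # [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
    for row in matrix:
        row.reverse()
    print("after row reversal:", matrix)  # [[7, 4, 1], [8, 5, 2], [9, 6, 3]]
if __name__ == "__main__":
    show_rotation_steps([[1, 2, 3], [4, 5, 6], [7, 8, 9]])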
Solution 2: New Matrix
Create a rotated copy by writing the element at row r, column c of the original into row c, column n-1-r of a fresh matrix, then return the new matrix.
from typing import List
def rotate_matrix_copy(matrix: List[List[int]]) -> List[List[int]]:
n = len(matrix)
rotated = [[0] * n for _ in range(n)]
for r in range(n):
for c in range(n):
rotated[c][n - 1 - r] = matrix[r][c]
return rotated
if __name__ == "__main__":
print(rotate_matrix_copy([[1,2,3],[4,5,6],[7,8,9]]))
Set entire rows and columns to zero if an element is zero. Sample Input: [[1,1,1],[1,0,1],[1,1,1]]. Sample Output: [[1,0,1],[0,0,0],[1,0,1]].
Use the first row and first column as marker storage: record there which rows and columns need zeroing, after first remembering whether the first row or column themselves contain a zero. A second pass zeroes the marked rows and columns, so only constant extra space is used.
Solution 1: Primary Approach
from typing import List
def set_zeroes(matrix: List[List[int]]) -> None:
rows, cols = len(matrix), len(matrix[0])
first_row_zero = any(matrix[0][c] == 0 for c in range(cols))
first_col_zero = any(matrix[r][0] == 0 for r in range(rows))
for r in range(1, rows):
for c in range(1, cols):
if matrix[r][c] == 0:
matrix[r][0] = matrix[0][c] = 0
for r in range(1, rows):
if matrix[r][0] == 0:
for c in range(cols):
matrix[r][c] = 0
for c in range(1, cols):
if matrix[0][c] == 0:
for r in range(rows):
matrix[r][c] = 0
if first_row_zero:
for c in range(cols):
matrix[0][c] = 0
if first_col_zero:
for r in range(rows):
matrix[r][0] = 0
if __name__ == "__main__":
mat = [[1,1,1],[1,0,1],[1,1,1]]
set_zeroes(mat)
print(mat)
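The trickiest part of the marker approach is a zero that already sits in the first row or column, because those cells double as marker storage. The extra run below is an illustrative check reusing set_zeroes from above, placing a zero in the first row to exercise the first_row_zero flag.
if __name__ == "__main__":
    mat2 = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
    set_zeroes(mat2)
    print(mat2)  # [[0, 0, 0], [0, 4, 5], [0, 7, 8]]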
Solution 2: Extra Sets
Record rows and columns that contain zeros using two sets, then zero out the marked rows and columns.
from typing import List
def set_zeroes_extra_space(matrix: List[List[int]]) -> None:
rows = set()
cols = set()
for r in range(len(matrix)):
for c in range(len(matrix[0])):
if matrix[r][c] == 0:
rows.add(r)
cols.add(c)
for r in rows:
for c in range(len(matrix[0])):
matrix[r][c] = 0
for c in cols:
for r in range(len(matrix)):
matrix[r][c] = 0
if __name__ == "__main__":
grid = [[1,1,1],[1,0,1],[1,1,1]]
set_zeroes_extra_space(grid)
print(grid)