Top 10 Coding Patterns to Ace Any Coding Interview
Here are the most frequently asked coding patterns in interviews, along with pseudocode and use cases that can help you perform well in coding interviews:
1. Backtracking:
This is a general technique for finding all (or some) solutions to a computational problem. It incrementally builds candidates to the solution and abandons a candidate ("backtracks") as soon as it determines that the candidate cannot possibly be completed to a valid solution.
General pseudocode for a Backtracking Algorithm in Python.
def backtrack(solution, choices):
    if len(choices) == 0:
        if is_valid_solution(solution):
            return solution
        else:
            return None
    else:
        for choice in choices:
            solution.append(choice)
            if is_promising(solution):
                new_choices = generate_choices(solution)
                result = backtrack(solution, new_choices)
                if result is not None:
                    return result
            solution.pop()
        return None
In this example, the backtrack function takes in a solution list and a list of choices to make. The algorithm works recursively by adding a new choice to the solution and checking whether the partial solution is still promising. If it is, the algorithm generates a new set of choices based on the updated solution and continues exploring recursively. If the algorithm reaches a dead end, it removes the most recent choice from the solution and tries a different one.
Here are some functions that you would need to implement for this algorithm to work:
- is_valid_solution(solution): This function takes in a solution list and checks whether it is a valid solution. Depending on the problem you're trying to solve, this could involve checking constraints or requirements specific to the problem.
- is_promising(solution): This function takes in a solution list and checks whether it is worth exploring further. Depending on the problem, this could involve checking whether the current partial solution can possibly lead to a valid solution, given the constraints or requirements of the problem.
- generate_choices(solution): This function takes in a solution list and generates a list of possible choices that could be made next, based on the current solution. The choices list should not include any choices that have already been made in the solution.
Note that the specific implementation of these functions will depend on the problem you’re trying to solve with backtracking.
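To make this concrete, here is a minimal sketch of the pattern applied to the subset-sum problem (covered in the applications below). It assumes non-negative inputs, and the helper structure simply mirrors the pseudocode above rather than any canonical implementation:
def subset_sum(numbers, target):
    # Find one subset of non-negative `numbers` that sums to `target`, or return None.
    def backtrack(start, solution, remaining):
        if remaining == 0:
            return list(solution)              # valid solution found
        for i in range(start, len(numbers)):
            if numbers[i] <= remaining:        # plays the role of is_promising: never overshoot
                solution.append(numbers[i])    # make a choice
                result = backtrack(i + 1, solution, remaining - numbers[i])
                if result is not None:
                    return result
                solution.pop()                 # dead end: undo the choice and try another
        return None
    return backtrack(0, [], target)

print(subset_sum([3, 34, 4, 12, 5, 2], 9))     # prints [3, 4, 2]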
Applications of the Backtracking Algorithm.
Backtracking algorithms can be applied to a wide variety of problems in computer science and beyond. Here are some examples of applications of backtracking algorithms:
- Sudoku: Backtracking can be used to solve Sudoku puzzles by trying different possible values in each cell and recursively exploring the possibilities until a solution is found.
- N-Queens: Backtracking can be used to solve the N-Queens problem, which involves placing N queens on an NxN chessboard so that no two queens threaten each other.
- Graph coloring: Backtracking can be used to solve graph coloring problems, which involve assigning colors to vertices in a graph so that no two adjacent vertices have the same color.
- Subset sum: Backtracking can be used to solve the subset sum problem, which involves finding a subset of numbers in a set that add up to a target sum.
- Traveling salesman problem: Backtracking can be used to solve the traveling salesman problem, which involves finding the shortest possible route that visits each of a given set of cities exactly once and returns to the starting city.
- Cryptography: Backtracking can be used to break certain types of encryption, such as knapsack-based ciphers like the Merkle-Hellman knapsack cryptosystem.
These are just a few examples of the many applications of backtracking algorithms. In general, backtracking can be used to solve any problem that can be expressed as a search for a solution among a large number of possibilities, where some possibilities can be eliminated based on the results of previous searches.
2. Divide and conquer:
This is an algorithmic paradigm that involves dividing a problem into smaller subproblems, solving the subproblems, and then combining the solutions to the subproblems to solve the original problem.
General pseudocode for a Divide and Conquer Algorithm in Python.
def divide_and_conquer(problem):
    # Base case: a problem small enough to be solved directly
    if is_base_case(problem):
        return solve_directly(problem)
    # Divide the problem into smaller subproblems
    subproblems = divide(problem)
    # Solve each subproblem recursively
    solutions = []
    for subproblem in subproblems:
        solution = divide_and_conquer(subproblem)
        solutions.append(solution)
    # Combine the solutions of the subproblems
    return combine(solutions)
In this example, the divide_and_conquer function takes in a problem and solves it using the Divide and Conquer strategy. The algorithm works recursively: it first divides the problem into smaller subproblems using the divide function, solves each subproblem recursively with divide_and_conquer, and combines the subproblem solutions using the combine function. The recursion stops at subproblems small enough to solve directly.
Here are some functions that you would need to implement for this algorithm to work:
- divide(problem): This function takes in a problem and divides it into smaller subproblems. Depending on the problem you're trying to solve, this could involve splitting the problem into two smaller subproblems or breaking it down into several smaller ones.
- combine(solutions): This function takes in a list of solutions to the subproblems and combines them to form a solution to the original problem. Depending on the problem, this could involve merging the solutions of the subproblems, comparing them to find the best one, or using them to construct a larger solution.
You would also need a base-case check (is_base_case) and a direct solver (solve_directly) for subproblems that are small enough to solve without further division.
Note that the specific implementation of these functions will depend on the problem you’re trying to solve with the Divide and Conquer strategy.
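As a concrete instance, here is a minimal merge sort sketch: divide splits the list in half, the base case is a list of at most one element, and combine merges two sorted halves:
def merge_sort(items):
    # Base case: a list of 0 or 1 elements is already sorted
    if len(items) <= 1:
        return items
    # Divide: split the list into two halves and sort each recursively
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Combine: merge the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # prints [1, 2, 5, 5, 6, 9]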
Applications of Divide and Conquer Algorithm.
The Divide and Conquer algorithm is a powerful strategy that can be applied to a wide variety of problems in computer science and beyond. Here are some examples of applications of the Divide and Conquer algorithm:
- Sorting: The most famous Divide and Conquer algorithm is probably the merge sort algorithm, which uses the Divide and Conquer strategy to sort a list of elements by dividing it into smaller sublists, sorting them recursively, and then merging the sorted sublists into a larger sorted list.
- Binary search: Binary search is a classic example of a Divide and Conquer algorithm. Given a sorted list of elements, binary search repeatedly halves the search interval, discarding the half that cannot contain the target, until it finds the desired element.
- Matrix multiplication: The Divide and Conquer strategy can be used to multiply two matrices efficiently by dividing each matrix into four smaller submatrices, multiplying them recursively, and then combining the results.
- Closest pair of points: Given a set of points in a two-dimensional space, the Divide and Conquer strategy can be used to find the pair of points that are closest to each other. This involves dividing the points into two halves, finding the closest pairs in each half, and then checking if there is a closer pair that crosses the dividing line.
- Strassen’s algorithm for matrix multiplication: Strassen’s algorithm is a Divide and Conquer algorithm that improves on the standard matrix multiplication algorithm by reducing the number of multiplications required.
- Fast Fourier Transform: The Divide and Conquer strategy can be used to perform the Fast Fourier Transform (FFT), which is an important algorithm in signal processing and other areas.
These are just a few examples of the many applications of the Divide and Conquer algorithm. In general, Divide and Conquer can be used to solve any problem that can be expressed as a combination of smaller subproblems that can be solved independently and then combined to form a solution to the original problem.
3. Dynamic programming:
This is a method for solving complex problems by breaking them down into simpler subproblems, and storing the solutions to these subproblems to avoid recomputing them.
General pseudocode for a Dynamic Programming Algorithm in Python.
def dynamic_programming(problem):
    # Create a table to store solutions to subproblems
    table = create_table()
    # Initialize the base cases of the table
    initialize_table(table)
    # Fill in the remaining entries of the table
    for i in range(...):
        for j in range(...):
            table[i][j] = compute_solution(table, i, j)
    # Return the solution to the original problem
    return table[...][...]
In this example, the dynamic_programming function takes in a problem and solves it using the Dynamic Programming strategy. The algorithm works by creating a table to store solutions to subproblems, initializing the base cases of the table, and then filling in the remaining entries using a recurrence that combines the solutions of smaller subproblems.
Here are some functions that you would need to implement for this algorithm to work:
- create_table(): This function creates a table that will store solutions to subproblems. Depending on the problem you're trying to solve, the table might be a one-dimensional array, a two-dimensional grid, or some other data structure.
- initialize_table(table): This function initializes the base cases of the table. These are the solutions to the smallest subproblems, which can be solved directly without relying on other subproblems.
- compute_solution(table, i, j): This function computes the solution to the subproblem at position (i, j) in the table, using the solutions to smaller subproblems already stored in the table.
Note that the specific implementation of these functions will depend on the problem you’re trying to solve with the Dynamic Programming strategy. In addition, you may need to choose an appropriate formula or algorithm for computing solutions to subproblems, depending on the problem.
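For instance, here is a minimal sketch of the 0/1 Knapsack problem (discussed in the applications below) solved with a two-dimensional table, where table[i][j] holds the best value achievable using the first i items with capacity j:
def knapsack(values, weights, capacity):
    # table[i][j] = best value achievable with the first i items and capacity j
    n = len(values)
    table = [[0] * (capacity + 1) for _ in range(n + 1)]  # base cases: 0 items or 0 capacity
    for i in range(1, n + 1):
        for j in range(1, capacity + 1):
            table[i][j] = table[i - 1][j]                 # option 1: skip item i
            if weights[i - 1] <= j:                       # option 2: take item i if it fits
                table[i][j] = max(table[i][j],
                                  table[i - 1][j - weights[i - 1]] + values[i - 1])
    return table[n][capacity]

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # prints 220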
Dynamic Programming can be used to solve a wide variety of problems in computer science and beyond, including optimization problems, graph algorithms, and sequence alignment problems. The key idea behind Dynamic Programming is to break a complex problem down into simpler subproblems and then solve each subproblem only once, storing its solution in a table to be used later as needed. By reusing solutions to subproblems, Dynamic Programming can often achieve much better performance than brute-force search or other algorithms.
Applications of Dynamic Programming Algorithm.
Dynamic Programming is a powerful algorithmic technique that is widely used in computer science and beyond. Here are some common applications of Dynamic Programming:
- Optimization problems: Many optimization problems can be solved using Dynamic Programming. For example, the Knapsack problem, in which you have a limited capacity bag and must choose items to maximize the value you can carry, can be solved using a Dynamic Programming approach.
- Pathfinding algorithms: Dynamic Programming can be used to find the shortest path in a graph or grid. For example, the Bellman-Ford and Floyd-Warshall algorithms use Dynamic Programming to find shortest paths in weighted graphs.
- Sequence alignment: Dynamic Programming is commonly used in bioinformatics to align sequences of DNA, RNA, or protein molecules. By comparing and scoring the similarity of subsequence pairs, Dynamic Programming can efficiently identify the optimal alignment between two sequences.
- Game theory: Dynamic Programming can be used to solve game theory problems, such as finding the optimal strategy for a player in a game. For example, the minimax algorithm, which is used in game theory to minimize the maximum loss, can be implemented using Dynamic Programming.
- Image and signal processing: Dynamic Programming can be used to process and analyze images and signals. For example, the Seam Carving algorithm uses Dynamic Programming to resize an image by removing or adding seams of pixels in a way that minimizes the distortion of the image.
- Natural language processing: Dynamic Programming can be used to solve problems in natural language processing, such as text segmentation and part-of-speech tagging.
These are just a few examples of the many applications of Dynamic Programming. In general, Dynamic Programming is a versatile algorithmic technique that can be used to solve a wide variety of problems that involve breaking a complex problem down into simpler subproblems and reusing solutions to subproblems to achieve better performance.
4. Greedy algorithms:
This is a class of algorithms that follow the problem-solving heuristic of making the locally optimal choice at each stage with the hope of finding a global optimum.
General pseudocode for a Greedy Algorithm in Python.
def greedy_algorithm(problem):
    # Initialize the solution
    solution = []
    # Sort the elements of the problem in some way
    sorted_problem = sort_problem(problem)
    # Loop over the elements of the problem in the sorted order
    for element in sorted_problem:
        # Add the element to the solution if it doesn't violate any constraints
        if is_feasible(solution, element):
            solution.append(element)
    # Return the solution
    return solution
In this example, the greedy_algorithm function takes in a problem and solves it using the Greedy Algorithm strategy. The algorithm initializes the solution to an empty list, sorts the elements of the problem in some way, and then loops over the elements in sorted order. At each step, it adds the next element to the solution if doing so doesn't violate any constraints.
Here are some functions that you would need to implement for this algorithm to work:
- sort_problem(problem): This function sorts the elements of the problem in a way that is appropriate for the problem. Depending on the problem, you might sort the elements in increasing or decreasing order of some metric, or in a more complicated way.
- is_feasible(solution, element): This function checks whether adding the element to the solution violates any constraints. Depending on the problem, the constraints might be related to the size or capacity of the solution, or to some other property of the problem.
Note that the specific implementation of these functions will depend on the problem you’re trying to solve with the Greedy Algorithm strategy. In addition, you may need to choose an appropriate sorting method and constraint-checking method for your problem.
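To make this concrete, here is a minimal sketch of the classic activity-selection (interval scheduling) problem, where sorting by finish time plays the role of sort_problem and a no-overlap check plays the role of is_feasible:
def max_non_overlapping(intervals):
    # Greedy: sort intervals by finish time, then keep each interval
    # that starts after the previously selected interval ends.
    selected = []
    last_end = float('-inf')
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:      # feasibility check: no overlap
            selected.append((start, end))
            last_end = end
    return selected

print(max_non_overlapping([(1, 3), (2, 5), (4, 7), (6, 8)]))  # prints [(1, 3), (4, 7)]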
Greedy Algorithms can be used to solve a wide variety of problems in computer science and beyond, including optimization problems, scheduling problems, and graph algorithms. The key idea behind Greedy Algorithms is to make locally optimal choices at each step in the hope that they will lead to a globally optimal solution. By prioritizing the most promising options at each step, Greedy Algorithms can often achieve good performance and simple implementations. However, Greedy Algorithms may not always find the globally optimal solution, and they may be sensitive to the ordering of the elements or constraints of the problem.
Applications of Greedy Algorithms.
Greedy Algorithms are widely used in computer science and beyond to solve optimization problems that involve making a sequence of choices. Here are some common applications of Greedy Algorithms:
- Scheduling problems: Greedy Algorithms can be used to solve scheduling problems, such as minimizing the waiting time of jobs in a queue or maximizing the number of tasks completed within a given time frame.
- Graph algorithms: Many graph algorithms use greedy choices, such as Prim's and Kruskal's algorithms for finding the minimum spanning tree of a graph, or Dijkstra's algorithm for finding the shortest path in a weighted graph.
- Combinatorial optimization: Greedy Algorithms can be used to approximate combinatorial optimization problems, such as the Traveling Salesman Problem, where the nearest-neighbor heuristic repeatedly visits the closest unvisited city; the result is not guaranteed to be optimal, but it is often a reasonable starting point.
- Huffman coding: Greedy Algorithms are used in Huffman coding, a technique for compressing data that assigns shorter codes to more frequently occurring symbols.
- Data compression: Greedy Algorithms can be used in data compression algorithms, such as the Lempel-Ziv-Welch algorithm, which creates a dictionary of frequently occurring phrases in a piece of text and replaces them with shorter codes.
- Knapsack problem: The fractional Knapsack problem, in which you have a limited-capacity bag and can take fractions of items to maximize the value you carry, can be solved optimally with a Greedy Algorithm approach (the 0/1 variant generally requires Dynamic Programming).
These are just a few examples of the many applications of Greedy Algorithms. In general, Greedy Algorithms are useful when the problem can be broken down into a sequence of choices and there is a simple heuristic for making each choice. By making locally optimal choices at each step, Greedy Algorithms can often achieve good performance and simple implementations. However, Greedy Algorithms may not always find the globally optimal solution, and they may be sensitive to the ordering of the choices or constraints of the problem.
5. Brute force:
This is a straightforward method of solving a problem by trying all possible solutions and checking which one works.
General pseudocode for a Brute Force Algorithm in Python.
def brute_force_algorithm(problem):
    # Initialize the best solution to None
    best_solution = None
    # Loop over all possible solutions
    for solution in generate_solutions(problem):
        # Evaluate the solution and update the best solution if it's better
        if is_better(solution, best_solution):
            best_solution = solution
    # Return the best solution
    return best_solution
In this example, the brute_force_algorithm function takes in a problem and solves it using the Brute Force Algorithm strategy. The algorithm initializes the best solution to None, loops over all possible solutions, evaluates each one, and updates the best solution whenever it finds a better one.
Here are some functions that you would need to implement for this algorithm to work:
- generate_solutions(problem): This function generates all possible solutions to the problem. Depending on the problem, the number of possible solutions might be very large, so you might need to optimize this function or restrict the search space to make it tractable.
- is_better(solution, best_solution): This function evaluates whether the solution is better than the best_solution seen so far. Depending on the problem, "better" might mean maximizing or minimizing some metric or objective function.
Note that the specific implementation of these functions will depend on the problem you’re trying to solve with the Brute Force Algorithm strategy. In addition, you may need to optimize the generate_solutions function or restrict the search space to make the algorithm tractable for larger problem sizes.
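To illustrate, here is a minimal brute-force sketch for the 0/1 Knapsack problem, which enumerates every subset of items with itertools and keeps the best one that fits:
from itertools import combinations

def knapsack_brute_force(values, weights, capacity):
    # Try every subset of item indices and keep the best one that fits.
    items = range(len(values))
    best_value = 0
    for r in range(len(values) + 1):
        for subset in combinations(items, r):
            weight = sum(weights[i] for i in subset)
            value = sum(values[i] for i in subset)
            if weight <= capacity and value > best_value:
                best_value = value
    return best_value

print(knapsack_brute_force([60, 100, 120], [10, 20, 30], 50))  # prints 220
Enumerating all 2^n subsets is only feasible for small n, which is exactly the exponential blow-up discussed below.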
Brute Force Algorithms are a simple and general strategy for solving optimization problems. By systematically evaluating all possible solutions, Brute Force Algorithms can guarantee finding the optimal solution, if it exists. However, Brute Force Algorithms can be very slow for large problem sizes, since the number of possible solutions grows exponentially with the size of the problem. Therefore, Brute Force Algorithms are often used as a baseline for comparison with more sophisticated algorithms or for problems with small or medium-sized instances.
Applications of the Brute Force Algorithm.
The Brute Force Algorithm is a general strategy for solving optimization problems by evaluating all possible solutions. Here are some common applications:
- Password cracking: Brute Force Algorithm can be used to crack passwords by trying all possible combinations of characters until the correct one is found. This is a slow and computationally expensive approach, but it can be effective against weak passwords.
- Cryptography: Brute Force Algorithm can be used to break encryption by trying all possible keys until the correct one is found. This approach is often used in cryptanalysis, where the goal is to find weaknesses in cryptographic systems.
- Game solving: Brute Force Algorithm can be used to solve games by trying all possible moves and evaluating the outcome of each move. For example, chess engines often use Brute Force Algorithm to evaluate positions by looking ahead several moves and generating all possible move sequences.
- Optimization: Brute Force Algorithm can be used to solve optimization problems by evaluating all possible solutions and choosing the one that maximizes or minimizes the objective function. This approach can be effective for small or medium-sized problems, but it becomes impractical for larger problems due to the large number of possible solutions.
- Combinatorial problems: Brute Force Algorithm can be used to solve combinatorial problems, such as the Traveling Salesman Problem, by generating all possible tours and selecting the one with the minimum distance. However, this approach is only practical for small problem sizes due to the combinatorial explosion of possible solutions.
- Testing: Brute Force Algorithm can be used to test the correctness of algorithms or systems by generating all possible inputs and checking the outputs. This approach can be effective for finding edge cases or corner cases that might be missed by other testing strategies.
These are just a few examples of the many applications of Brute Force Algorithm. In general, Brute Force Algorithm is a useful strategy when the problem size is small or the number of possible solutions is manageable. However, for larger problem sizes, more sophisticated algorithms are needed to make the computation tractable.
6. Bit manipulation:
This is the act of using bitwise operators to manipulate individual bits in a value stored in a variable.
General pseudocode for a Bit Manipulation Algorithm in Python.
def bit_manipulation_algorithm(number):
    # Get the binary representation of the number (without the '0b' prefix)
    binary = bin(number)[2:]
    # Loop over each bit in the binary representation
    for i in range(len(binary)):
        # Flip the bit at position i
        flipped = binary[:i] + str(1 - int(binary[i])) + binary[i+1:]
        # Convert the flipped binary back to decimal and print it
        print(int(flipped, 2))
In this example, the bit_manipulation_algorithm function takes in a number and performs some bit manipulations on it. The algorithm gets the binary representation of the number, loops over each bit, flips it, converts the flipped binary back to decimal, and prints the result.
Here are some common bit manipulation operations that you can perform:
- Bitwise AND (&): Returns a 1 in each bit position where both operands have a 1.
- Bitwise OR (|): Returns a 1 in each bit position where either operand has a 1.
- Bitwise XOR (^): Returns a 1 in each bit position where exactly one operand has a 1.
- Bitwise NOT (~): Inverts all the bits of the operand.
- Left shift (<<): Shifts the bits of the operand to the left by a specified number of positions.
- Right shift (>>): Shifts the bits of the operand to the right by a specified number of positions.
Note that the specific bit manipulation operations and the implementation of the bit_manipulation_algorithm function will depend on the problem you're trying to solve with Bit Manipulation Algorithms.
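For reference, here is a short sketch of a few classic bit tricks built from the operators above; these are standard idioms, shown purely for illustration:
def is_power_of_two(n):
    # A power of two has exactly one set bit, so n & (n - 1) clears it to 0.
    return n > 0 and n & (n - 1) == 0

def count_set_bits(n):
    # Repeatedly clear the lowest set bit and count how many times it takes.
    count = 0
    while n:
        n &= n - 1
        count += 1
    return count

def get_bit(n, i):
    # Extract bit i by shifting it to position 0 and masking.
    return (n >> i) & 1

print(is_power_of_two(16), count_set_bits(13), get_bit(0b1010, 1))  # prints: True 3 1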
Bit Manipulation Algorithms are commonly used in low-level programming, such as device drivers, embedded systems, and cryptography. Bit Manipulation Algorithms can be used to perform fast and efficient bitwise operations, which are essential for manipulating hardware registers, encoding and decoding data, and implementing cryptographic algorithms.
Applications of the Bit Manipulation Algorithm.
Bit Manipulation Algorithms are a powerful tool for low-level programming and are used in a variety of applications. Here are some common applications of Bit Manipulation Algorithms:
- Data compression: Bit Manipulation Algorithms can be used to compress data by representing large numbers using fewer bits. For example, Huffman coding and arithmetic coding are popular compression algorithms that use Bit Manipulation Algorithms to encode and decode data.
- Cryptography: Bit Manipulation Algorithms are essential for implementing cryptographic algorithms, such as symmetric-key encryption, hash functions, and digital signatures. Bit Manipulation Algorithms can be used to perform bitwise operations that are crucial for manipulating and transforming data in a secure way.
- Computer networking: Bit Manipulation Algorithms are used in computer networking to represent and manipulate IP addresses, subnet masks, and port numbers. Bit Manipulation Algorithms can be used to perform bitwise operations that help with routing packets and managing network traffic.
- Embedded systems: Bit Manipulation Algorithms are used in embedded systems to manipulate hardware registers, control I/O pins, and interact with sensors and actuators. Bit Manipulation Algorithms can be used to perform bitwise operations that help with configuring and controlling hardware devices.
- Graphics programming: Bit Manipulation Algorithms are used in graphics programming to manipulate pixels, colors, and other image data. Bit Manipulation Algorithms can be used to perform bitwise operations that help with encoding and decoding image data and manipulating image masks and filters.
- Gaming: Bit Manipulation Algorithms are used in gaming to manipulate game states, perform collision detection, and manage game events. Bit Manipulation Algorithms can be used to perform bitwise operations that help with updating game objects and managing game data structures.
These are just a few examples of the many applications of Bit Manipulation Algorithms. In general, Bit Manipulation Algorithms are a powerful tool for low-level programming and can be used to perform fast and efficient bitwise operations that are essential for many applications in computer science and engineering.
7. Hash table:
A hash table is a data structure that is used to store keys and values in a way that allows for fast insertion and retrieval.
General pseudocode for a Hash Table in Python.
class HashTable:
    def __init__(self):
        self.capacity = 10
        # Each slot holds a list of (key, value) pairs (separate chaining)
        self.table = [[] for _ in range(self.capacity)]

    def hash(self, key):
        return hash(key) % self.capacity

    def insert(self, key, value):
        hash_value = self.hash(key)
        for i, (k, v) in enumerate(self.table[hash_value]):
            if k == key:
                # Key already present: update its value
                self.table[hash_value][i] = (key, value)
                return
        self.table[hash_value].append((key, value))

    def get(self, key):
        hash_value = self.hash(key)
        for k, v in self.table[hash_value]:
            if k == key:
                return v
        raise KeyError(key)
In this example, the HashTable class is defined, which has a capacity attribute that determines the size of the hash table. The hash method converts a key to an index in the hash table using the built-in hash function. The insert method inserts a key-value pair into the table (collisions are handled by chaining entries in a per-slot list), and the get method retrieves the value associated with a given key.
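A quick usage example of the HashTable class above:
ht = HashTable()
ht.insert('apple', 3)
ht.insert('banana', 7)
ht.insert('apple', 5)        # updates the existing key
print(ht.get('apple'))       # prints 5
print(ht.get('banana'))      # prints 7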
Hash Table Algorithms are commonly used for fast and efficient data storage and retrieval.
Applications of Hash Tables.
Hash tables are a widely used data structure that allows for fast and efficient storage and retrieval of data. Here are some common applications of hash tables:
- Databases: Hash tables are used in databases to store and retrieve data quickly. When a query is executed, the hash function is used to locate the relevant data in the hash table, allowing for fast data retrieval.
- Caching: Hash tables are used in caching systems to store frequently accessed data in memory for fast retrieval. For example, web browsers use hash tables to store cached web pages, allowing them to load quickly on subsequent visits.
- Compiler symbol tables: Hash tables are used in compilers to store information about program symbols, such as variables and functions. This information is used during the compilation process to generate executable code.
- Spell checkers: Hash tables are used in spell checkers to store a dictionary of words for fast lookup. When a user types a word, the spell checker uses a hash function to quickly check if the word is spelled correctly.
- Network routing tables: Hash tables are used in network routing systems to store information about network paths and routing decisions. When a packet is sent through the network, the routing table is consulted to determine the most efficient path for the packet to take.
- Password storage: Hash tables are used to store password information in many systems. When a user enters a password, the system hashes the password and compares it to the stored hash value to determine if the password is correct.
These are just a few examples of the many applications of hash tables. In general, the hash table is a powerful tool for fast and efficient data storage and retrieval and is used in many different areas of computer science and engineering.
8. Sliding window:
This is a technique used to analyze sequential data by dividing it into overlapping windows of a fixed length.
General pseudocode for a Sliding Window Algorithm in Python.
from typing import List

def sliding_window(nums: List[int], k: int) -> List[int]:
    # Initialize the window and the result
    window = nums[:k]
    result = [max(window)]
    # Slide the window and update the result
    for i in range(k, len(nums)):
        window.pop(0)           # drop the element leaving the window
        window.append(nums[i])  # add the element entering the window
        result.append(max(window))
    return result
In this example, the sliding_window function takes in a list of integers nums and a window size k, and returns a list of maximum values for each window of size k in nums. The function first initializes the window as the first k elements of nums, and the result as a list containing the maximum value of that initial window. It then slides the window over nums, popping the leftmost element of the window and appending the next element of nums to the right end, and appends the maximum value of the current window to the result list.
Sliding Window Algorithms are commonly used for solving problems that involve finding a subarray, substring or subsequence of a fixed length with certain properties.
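Note that the sketch above recomputes max over each window, giving O(n*k) time. A common refinement, shown here as an illustrative alternative, uses a monotonic deque to achieve O(n):
from collections import deque
from typing import List

def sliding_window_max(nums: List[int], k: int) -> List[int]:
    # dq stores indices of candidate maxima, with their values in decreasing order
    dq = deque()
    result = []
    for i, value in enumerate(nums):
        if dq and dq[0] <= i - k:            # drop the index that just left the window
            dq.popleft()
        while dq and nums[dq[-1]] <= value:  # drop smaller values; they can never be the max
            dq.pop()
        dq.append(i)
        if i >= k - 1:                       # window is full: front of the deque is the max
            result.append(nums[dq[0]])
    return result

print(sliding_window_max([1, 3, -1, -3, 5, 3, 6, 7], 3))  # prints [3, 3, 5, 5, 6, 7]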
Applications of the Sliding Window Algorithm.
Sliding Window Algorithm is a powerful tool for solving problems that involve finding a subarray, substring, or subsequence of a fixed length with certain properties. Here are some common applications of Sliding Window Algorithm:
- Maximum sum subarray: Sliding Window Algorithm can be used to find the subarray of length k with the maximum sum in a given array. This fixed-length variant of the maximum subarray problem has applications in data analysis, financial modeling, and signal processing.
- Longest substring with k distinct characters: Sliding Window Algorithm can be used to find the longest substring containing at most k distinct characters in a given string. This problem has applications in natural language processing, genetics, and data compression.
- Maximum average subarray of length k: Sliding Window Algorithm can be used to find the subarray of length k with the maximum average value in a given array. This problem has applications in financial modeling, stock market analysis, and sensor data analysis.
- Smallest subarray with sum greater than k: Sliding Window Algorithm can be used to find the smallest subarray with a sum greater than a given value k in a given array. This problem has applications in resource allocation, load balancing, and data partitioning.
- String concatenation: Sliding Window Algorithm can be used to find all substrings of a given string that are a concatenation of every word in a given list of words. This problem has applications in natural language processing, text mining, and search engine optimization.
- Moving averages: Sliding Window Algorithm can be used to calculate moving averages of time series data. This problem has applications in financial modeling, signal processing, and quality control.
These are just a few examples of the many applications of Sliding Window Algorithm. In general, Sliding Window Algorithm is a powerful tool for solving problems that involve finding a subarray, substring, or subsequence of a fixed length with certain properties.
9. Two pointers:
This is a technique used to traverse a list by maintaining two pointers that refer to different elements in the list, moving them toward each other from opposite ends or at different speeds, depending on the problem.
General pseudocode for a Two Pointers Algorithm in Python.
def two_pointers_algorithm(nums):
    n = len(nums)
    left, right = 0, n - 1
    while left < right:
        # Do something with nums[left] and nums[right]
        # Move the pointers based on some problem-specific condition
        if condition:
            left += 1
        else:
            right -= 1
In the pseudocode above, nums is a list of integers, n is the length of nums, and left and right are the indices of the left and right pointers, respectively.
The while loop runs as long as the left pointer left is less than the right pointer right. Inside the loop, you can perform some operations on the elements nums[left] and nums[right], and then move the pointers based on some condition. For example, if nums[left] + nums[right] > target, you can move the right pointer to the left by 1, because you need a smaller value to achieve the target sum. If nums[left] + nums[right] < target, you can move the left pointer to the right by 1, because you need a larger value to achieve the target sum.
Two Pointers Algorithm is often used for solving problems that involve finding pairs of elements in a sorted list that satisfy certain conditions, or finding subsequences with certain properties. It can also be used to solve some dynamic programming problems that involve maintaining a window or sliding window of elements.
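To make the target-sum discussion above concrete, here is a minimal two-sum sketch on a sorted list:
def two_sum_sorted(nums, target):
    # Assumes nums is sorted in ascending order; returns a pair of indices or None.
    left, right = 0, len(nums) - 1
    while left < right:
        current = nums[left] + nums[right]
        if current == target:
            return (left, right)
        elif current < target:
            left += 1     # need a larger value
        else:
            right -= 1    # need a smaller value
    return None

print(two_sum_sorted([1, 2, 4, 7, 11, 15], 15))  # prints (2, 4)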
Applications of Two Pointers Algorithm.
Two Pointers Algorithm is a powerful tool for solving problems that involve finding pairs of elements in a sorted list that satisfy certain conditions, or finding subsequences with certain properties. Here are some common applications of Two Pointers Algorithm:
- Two Sum Problem: Given a sorted array of integers nums and an integer target, find two numbers that add up to target. Two Pointers Algorithm can solve this problem in O(n) time by starting two pointers at both ends of the array and moving them towards the middle until the sum of the elements they point to equals the target value.
- Trapping Rain Water Problem: Given an array of non-negative integers representing an elevation map where the width of each bar is 1, compute how much water it is able to trap after raining. Two Pointers Algorithm can be used to solve this problem by starting two pointers at both ends of the array and moving them towards the middle until they meet. During the traversal, you can maintain the maximum heights of the left and right bars seen so far and calculate the amount of water that can be trapped at each position.
- Container With Most Water Problem: Given n non-negative integers a1, a2, ..., an, where each represents a point at coordinate (i, ai), n vertical lines are drawn such that the two endpoints of line i are at (i, ai) and (i, 0). Find two lines which, together with the x-axis, form a container that holds the most water. Two Pointers Algorithm can be used to solve this problem by starting two pointers at both ends of the array and moving them towards the middle until they meet. During the traversal, you can calculate the area of the container formed by the two lines pointed to by the pointers and update the maximum area.
- Reverse Pairs Problem: Given an array nums, we call (i, j) an important reverse pair if i < j and nums[i] > 2*nums[j]. This problem can be solved in O(n log n) time by using the merge sort algorithm with a slight modification that counts the important reverse pairs with a two-pointer pass during each merge.
- 3 Sum Problem: Given an array nums of n integers, find all unique triplets in the array which give a sum of zero. Two Pointers Algorithm can be used to solve this problem in O(n^2) time by sorting the array first, and then, for each element, using two pointers to find the pairs that complete a zero-sum triplet.
These are just a few examples of the many applications of Two Pointers Algorithm. In general, Two Pointers Algorithm is a powerful tool for solving problems that involve finding pairs of elements in a sorted list that satisfy certain conditions, or finding subsequences with certain properties.
10. Depth-first search (DFS):
This is an algorithm for traversing or searching tree or graph data structures. It starts at the root node and explores as far as possible along each branch before backtracking.
General pseudocode for a Depth-first search (DFS) Algorithm in Python.
# Initialize the graph and visited dictionary
graph = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F'],
    'D': [],
    'E': ['F'],
    'F': []
}
visited = {}

# DFS function
def dfs(node):
    # Mark the current node as visited
    visited[node] = True
    print(node, end=' ')
    # Recur for all the vertices adjacent to this vertex
    for neighbor in graph[node]:
        if not visited.get(neighbor):
            dfs(neighbor)

# Call the DFS function for each unvisited node in the graph
for node in graph.keys():
    if not visited.get(node):
        dfs(node)
In this pseudocode, we start by initializing the graph and the visited dictionary. The graph is represented as a dictionary where each key represents a node and its value represents the list of nodes adjacent to it.
The DFS function takes a node as input and marks it as visited. It then prints the node and recursively calls itself for all the adjacent nodes that have not been visited yet.
Finally, we call the DFS function for each unvisited node in the graph by iterating through the keys of the graph dictionary.
Note: because the final loop calls dfs for every unvisited node, this code also handles disconnected graphs, visiting each component in turn. If you only need to traverse the component containing a particular start node, a single call to dfs(start) is enough.
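Recursive DFS can also hit Python's recursion limit on very deep graphs; here is an equivalent iterative sketch that uses an explicit stack and the same graph dictionary defined above:
def dfs_iterative(graph, start):
    # Visit nodes depth-first using an explicit stack instead of recursion.
    visited = set()
    stack = [start]
    order = []
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        # Push neighbors in reverse so they are visited in their listed order
        for neighbor in reversed(graph[node]):
            if neighbor not in visited:
                stack.append(neighbor)
    return order

print(dfs_iterative(graph, 'A'))  # prints ['A', 'B', 'D', 'E', 'F', 'C']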
Applications of Depth-first search (DFS) Algorithm.
Depth-first search (DFS) is a versatile algorithm and can be used in various applications. Some of its common applications are:
- Finding connected components: DFS can be used to find all the connected components in a graph. We start by picking an unvisited node and performing DFS on it. Once we have visited all the nodes reachable from that node, we move on to the next unvisited node and repeat the process.
- Topological sorting: DFS can be used to perform a topological sort on a directed acyclic graph (DAG). In this application, we perform DFS from each unvisited node and push a node onto a stack once all the nodes reachable from it have been visited. Once every node has been processed, popping nodes from the stack yields a topological ordering.
- Finding cycles: DFS can be used to find cycles in a graph. We start by performing DFS on a node and marking it as visited. While we are visiting the adjacent nodes, if we encounter a visited node that is not the parent of the current node, we have found a cycle.
- Maze solving: DFS can be used to solve mazes. In this application, we start at the entrance of the maze and perform DFS to explore all the paths until we reach the exit.
- Finding strongly connected components: DFS can be used to find strongly connected components in a directed graph. In this application (Kosaraju's algorithm), we perform DFS on the graph, recording the order in which nodes finish. We then transpose the graph (reverse all the edges) and perform DFS again, starting from the nodes with the latest finish times. Each tree produced by this second DFS forms a strongly connected component, and we repeat the process for all unvisited nodes to find all the strongly connected components in the graph.
I hope this article helps you practice these coding patterns and elevate your performance in coding interviews!
Feel free to share any suggestions or views. Happy coding!
Good luck :-)