In the world of programming, writing code that works is just the beginning. The real challenge lies in writing code that works efficiently. Efficiency is key, especially when your programs need to handle large datasets or complex calculations. That's where time complexity comes in.
In this blog, we’ll explore what time complexity is, why it matters, how it's calculated, and the different notations used to describe it. We’ll also look at examples to make these concepts easier to understand.

Whether you're preparing for interviews or trying to strengthen your DSA skills, this topic is crucial. To go deeper into such foundational concepts, consider checking out Uncodemy's Data Structures and Algorithms course designed by industry professionals to give you real-world coding efficiency.
Time complexity is a theoretical measure of the execution time of an algorithm as a function of the input size. It tells us how the run time of an algorithm grows with respect to the input.
In simpler terms, time complexity helps you estimate how quickly your code's running time grows as the input grows. It doesn't measure actual time (like seconds or milliseconds), but rather the number of basic operations your code performs.
1. Scalability: You may write a program that works perfectly on small inputs but crashes or becomes too slow for large datasets. Time complexity predicts such behavior.
2. Optimization: It helps identify which part of the code is consuming time and guides developers to improve it.
3. Interview Preparation: Most technical interviews focus heavily on DSA and time complexity.
4. Resource Management: Reducing time complexity also often reduces energy and memory usage in embedded or constrained systems.
Let’s look at common time complexities from best to worst in terms of performance:
1. Constant Time - O(1)
The execution time remains the same regardless of input size.

```cpp
int getFirstElement(int arr[]) {
    return arr[0];
}
```

2. Logarithmic Time - O(log n)
The run time grows logarithmically with input. Binary search is a good example.
```cpp
int binarySearch(int arr[], int key, int size) {
    int low = 0, high = size - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;  // avoids overflow of (low + high)
        if (arr[mid] == key)
            return mid;
        else if (arr[mid] < key)
            low = mid + 1;
        else
            high = mid - 1;
    }
    return -1;  // key not found
}
```

3. Linear Time - O(n)
The run time grows proportionally with input size.
```cpp
void printArray(int arr[], int n) {
    for (int i = 0; i < n; i++)
        cout << arr[i] << " ";
}
```

4. Linearithmic Time - O(n log n)
A combination of linear and logarithmic complexity. Seen in algorithms like merge sort.
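Since merge sort is the standard example here, a minimal sketch of it in C++ may help (the function layout below is illustrative, not a production implementation):

```cpp
#include <vector>
using namespace std;

// Merge the two sorted halves arr[left..mid] and arr[mid+1..right].
void merge(vector<int>& arr, int left, int mid, int right) {
    vector<int> tmp;
    int i = left, j = mid + 1;
    while (i <= mid && j <= right)
        tmp.push_back(arr[i] <= arr[j] ? arr[i++] : arr[j++]);
    while (i <= mid) tmp.push_back(arr[i++]);
    while (j <= right) tmp.push_back(arr[j++]);
    for (int k = 0; k < (int)tmp.size(); k++)
        arr[left + k] = tmp[k];
}

// Halving the range gives about log n levels of recursion, and each
// level does O(n) total merge work, hence O(n log n) overall.
void mergeSort(vector<int>& arr, int left, int right) {
    if (left >= right) return;
    int mid = left + (right - left) / 2;
    mergeSort(arr, left, mid);
    mergeSort(arr, mid + 1, right);
    merge(arr, left, mid, right);
}
```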
5. Quadratic Time - O(n²)
Nested loops over the input make the run time grow quadratically. Seen in bubble sort, selection sort, etc.
```cpp
void bubbleSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++)
        for (int j = 0; j < n - i - 1; j++)
            if (arr[j] > arr[j + 1])
                swap(arr[j], arr[j + 1]);
}
```

6. Exponential Time - O(2^n)
The run time roughly doubles with each additional unit of input. The naive recursive Fibonacci implementation is the classic example.
```cpp
int fibonacci(int n) {
    if (n <= 1) return n;
    return fibonacci(n - 1) + fibonacci(n - 2);
}
```

Big O notation is used to classify algorithms according to their worst-case run time.
| Time Complexity | Name | Example Algorithm |
|---|---|---|
| O(1) | Constant | Accessing array element |
| O(log n) | Logarithmic | Binary Search |
| O(n) | Linear | Linear Search |
| O(n log n) | Linearithmic | Merge Sort |
| O(n²) | Quadratic | Bubble Sort |
| O(2^n) | Exponential | Recursive Fibonacci |
You can estimate an algorithm's time complexity by:
1. Counting the number of loops and recursive calls.
2. Ignoring constants (e.g., 2n is still O(n)).
3. Removing lower order terms (e.g., n² + n + 1 becomes O(n²)).
Example:
```cpp
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        // some constant-time operation
    }
}
```

Here, the outer loop runs n times, and for each of its iterations the inner loop runs n times. Total operations: n * n = n² → O(n²).
While time complexity deals with how fast a program runs, space complexity deals with how much memory it uses.
Some efficient algorithms may consume more memory (e.g., storing results for memoization), and vice versa.
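To make that tradeoff concrete, here is a sketch of memoizing the Fibonacci example from earlier (the helper names are ours): storing each result once cuts the time from O(2^n) to O(n), at the cost of O(n) extra memory.

```cpp
#include <vector>
using namespace std;

// Memoized Fibonacci: each value is computed once and cached,
// trading O(n) extra space for O(n) time instead of O(2^n).
long long fibMemo(int n, vector<long long>& memo) {
    if (n <= 1) return n;
    if (memo[n] != -1) return memo[n];  // reuse the cached result
    memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
    return memo[n];
}

long long fibonacciFast(int n) {
    vector<long long> memo(n + 1, -1);  // -1 marks "not yet computed"
    return fibMemo(n, memo);
}
```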
1. Use better algorithms: For example, merge sort (O(n log n)) scales far better than bubble sort (O(n²)). Quick sort is also O(n log n) on average, though its worst case is O(n²).
2. Avoid unnecessary loops and recursion.
3. Use data structures wisely: Sets, maps, heaps, etc., can reduce time complexity significantly.
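As an illustration of point 3, consider checking an array for duplicates (the function name below is ours): comparing every pair with nested loops is O(n²), but an `unordered_set` brings it down to O(n) on average.

```cpp
#include <unordered_set>
#include <vector>
using namespace std;

// O(n) average time: each hash-set lookup and insert is O(1) on average,
// versus the O(n^2) all-pairs comparison approach.
bool hasDuplicate(const vector<int>& arr) {
    unordered_set<int> seen;
    for (int x : arr) {
        if (seen.count(x)) return true;  // x appeared earlier
        seen.insert(x);
    }
    return false;
}
```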
Many interviewers focus not just on getting the correct output, but on how optimal your solution is.
Interviewers commonly ask you to state the time complexity of your solution and then push you to optimize it further.
If you’re aiming for placements or internships, solid DSA fundamentals (especially time and space complexity) are a must.
To master such topics from the ground up, you can explore Uncodemy’s DSA Course, where real-world examples and company-specific questions are covered in depth. Visit Uncodemy Blog to start now.
Time complexity is more than just a concept; it’s a mindset. Understanding it allows you to build applications that perform well at scale. Whether you're solving a simple problem or working on enterprise-level software, time complexity will always be a part of your decision-making.
By mastering this topic now, you’ll not only improve your coding but also stand out in interviews and professional settings.
If you're serious about acing your DSA and want guided mentorship, check out Uncodemy’s hands-on programs designed specifically for future software developers.