Why does this O(n^2) code execute faster than O(n)? [duplicate]

I have written code for two approaches to find out the first unique character in a string on LeetCode.

Problem Statement: Given a string, find the first non-repeating character in it and return its index. If it doesn't exist, return -1.

Sample Test Cases:

s = "leetcode", return 0.

s = "loveleetcode", return 2.

Approach 1 (O(n)) (correct me if I am wrong):

```java
class Solution {
    public int firstUniqChar(String s) {
        HashMap<Character, Integer> charHash = new HashMap<>();
        int res = -1;
        for (int i = 0; i < s.length(); i++) {
            Integer count = charHash.get(s.charAt(i));
            if (count == null) {
                charHash.put(s.charAt(i), 1);
            } else {
                charHash.put(s.charAt(i), count + 1);
            }
        }
        for (int i = 0; i < s.length(); i++) {
            if (charHash.get(s.charAt(i)) == 1) {
                res = i;
                break;
            }
        }
        return res;
    }
}
```

Approach 2 (O(n^2)):

```java
class Solution {
    public int firstUniqChar(String s) {
        char[] a = s.toCharArray();
        int res = -1;
        for (int i = 0; i < a.length; i++) {
            if (s.indexOf(a[i]) == s.lastIndexOf(a[i])) {
                res = i;
                break;
            }
        }
        return res;
    }
}
```

In approach 2, I think the complexity should be O(n^2), since each call to indexOf and lastIndexOf scans the string in O(n) and both are called inside an O(n) loop.

But when I run both solutions on LeetCode, I get a runtime of 19 ms for approach 2 and 92 ms for approach 1. I am confused; why does that happen?

I suppose LeetCode tests both small and large input values, covering best, worst and average cases.
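One way to probe this locally is to run both approaches side by side. The harness below is only a sketch (single-shot `System.nanoTime()` timings with no JIT warmup are noisy; JMH would be the rigorous tool), and the method names `approach1`/`approach2` are just labels for the two solutions above:

```java
import java.util.HashMap;

public class Compare {
    // Approach 1: two passes with a HashMap (hashing plus boxing of
    // char -> Character and int -> Integer on every character).
    static int approach1(String s) {
        HashMap<Character, Integer> counts = new HashMap<>();
        for (int i = 0; i < s.length(); i++) {
            counts.merge(s.charAt(i), 1, Integer::sum);
        }
        for (int i = 0; i < s.length(); i++) {
            if (counts.get(s.charAt(i)) == 1) return i;
        }
        return -1;
    }

    // Approach 2: indexOf/lastIndexOf are tight linear scans over the
    // string's characters, with no boxing or hashing per character.
    static int approach2(String s) {
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (s.indexOf(c) == s.lastIndexOf(c)) return i;
        }
        return -1;
    }

    public static void main(String[] args) {
        String s = "loveleetcode";
        long t0 = System.nanoTime();
        int r1 = approach1(s);
        long t1 = System.nanoTime();
        int r2 = approach2(s);
        long t2 = System.nanoTime();
        // Both return the same answer; the per-character work differs a lot.
        System.out.println(r1 + " " + r2 + " "
                + (t1 - t0) + "ns " + (t2 - t1) + "ns");
    }
}
```

On short strings, the cheap inner scans of approach 2 tend to win despite the worse asymptotic bound.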

Update:

I am aware that O(n^2) algorithms can outperform O(n) algorithms for all n below some threshold n1. In this question I wanted to understand why that is happening in this case, i.e. which part of Approach 1 makes it slower.
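(For reference: one commonly cited source of constant-factor overhead in Approach 1 is the `HashMap` itself, which boxes every `char` to `Character` and every count to `Integer`, and hashes on each access. A sketch of the same two-pass idea using a plain `int` count array instead, assuming the string contains only lowercase ASCII letters as the LeetCode constraints state:)

```java
class Solution {
    public int firstUniqChar(String s) {
        // Fixed-size count array: no hashing, no autoboxing.
        // Assumes only 'a'..'z' appear in s (per the problem constraints).
        int[] counts = new int[26];
        for (int i = 0; i < s.length(); i++) {
            counts[s.charAt(i) - 'a']++;
        }
        for (int i = 0; i < s.length(); i++) {
            if (counts[s.charAt(i) - 'a'] == 1) {
                return i;
            }
        }
        return -1;
    }
}
```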

LeetCode link to the question

Consider:

• f1(n) = n²
• f2(n) = n + 1000

Clearly f1 is O(n²) and f2 is O(n). For a small input (say, n = 5), we have f1(n) = 25 but f2(n) > 1000.

Just because one function (or time complexity) is O(n) and another is O(n²) doesn't mean that the former is smaller for all values of n, just that there is some n beyond which this will be the case.
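The crossover point can be found mechanically. A small sketch, with `f1` and `f2` mirroring the two functions above:

```java
public class Crossover {
    static long f1(long n) { return n * n; }     // f1(n) = n^2
    static long f2(long n) { return n + 1000; }  // f2(n) = n + 1000

    public static void main(String[] args) {
        // Find the smallest n at which the "O(n)" function wins,
        // i.e. the first n where n^2 exceeds n + 1000.
        long n = 0;
        while (f1(n) <= f2(n)) {
            n++;
        }
        System.out.println(n);
    }
}
```

With these particular constants the crossover is at n = 33: below that, the "slower" quadratic function is actually the smaller of the two.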