I'm trying to understand when we should sort an array instead of using repeated linear search. For instance, we have an array with *N = 1_000_000* elements. If we search *only once*, a simple *linear search* is enough, but if we search, say, 10 times, it may pay off to sort the array first at a cost of *O(n log n)* and then use binary search. For a given input size, how can I determine the threshold number of searches beyond which I should sort first?

You want to solve an inequality that can roughly be described as

`t * n > C * n * log(n) + t * log(n)`

where `t` is the number of lookups and `C` is a constant of the sort implementation (to be determined experimentally). Once you have estimated this constant, you can solve the inequality numerically (with some uncertainty, of course).
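As a sketch, the inequality can be rearranged into a closed form for the break-even number of lookups, `t > C * n * log2(n) / (n - log2(n))`, and `C` can be roughly estimated by timing one full scan against one sort. The function names (`threshold_queries`, `estimate_c`) and the calibration approach are my own assumptions, not a standard recipe; measure on your own data and hardware for real numbers.

```python
import math
import random
import time

def threshold_queries(n, c):
    """Smallest number of lookups t for which sorting pays off, from
    t * n > c * n * log2(n) + t * log2(n)
    =>  t > c * n * log2(n) / (n - log2(n))."""
    log_n = math.log2(n)
    return math.ceil(c * n * log_n / (n - log_n))

def estimate_c(n=100_000):
    """Rough empirical estimate of the sort constant C, expressed in
    units of one linear-scan element visit (an assumption of this
    sketch; timings are noisy, so average several runs in practice)."""
    data = [random.random() for _ in range(n)]

    start = time.perf_counter()
    sum(1 for v in data if v < -1.0)   # full scan that never matches
    scan_time = time.perf_counter() - start

    start = time.perf_counter()
    sorted(data)
    sort_time = time.perf_counter() - start

    per_visit = scan_time / n
    return sort_time / (per_visit * n * math.log2(n))

if __name__ == "__main__":
    c = estimate_c()
    print(f"estimated C ~ {c:.2f}")
    print("break-even lookups for n = 1_000_000:",
          threshold_queries(1_000_000, c))
```

Note that for large `n` the `log2(n)` term in the denominator is negligible, so the threshold is approximately `C * log2(n)` lookups, e.g. about `20 * C` for a million elements.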