My professor says this isn't an efficient algorithm to check whether a number is divisible by a number from 100,000 to 150,000. I'm having trouble finding a better way. Any help would be appreciated.
```c
unsigned short divisibility_check(unsigned long n)
{
    unsigned long i;
    for (i = 100000; i <= 150000; i++) {
        if (n % i == 0) {
            return 0;
        }
    }
    return 1;
}
```
Ok, so I've implemented the version with a prime sieve and factorization mentioned in the comments by m69, and it is... way faster than the naive approach. I must admit, I didn't expect this at all.
My notation: left = 100'000 and right = 150'000.

naive: your version.

naive_with_checks: your version with simple checks added:

- if (n < left) → no divisor
- else if (n <= right) → divisor
- else if (left * 2 >= right && n < left * 2) → no divisor
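These three checks can be sketched as follows (my own reconstruction; the function name and the tri-state return convention are mine, not from the measured code):

```cpp
#include <cstdint>

constexpr std::uint64_t left  = 100'000;
constexpr std::uint64_t right = 150'000;

// Returns 1 (divisor exists), 0 (no divisor), or -1 (quick checks are
// inconclusive; the full algorithm must run).
int quick_check(std::uint64_t n)
{
    if (n < left)
        return 0;   // every candidate divisor already exceeds n
    if (n <= right)
        return 1;   // left <= n <= right: n divides itself
    if (left * 2 >= right && n < left * 2)
        return 0;   // smallest multiple of any candidate is 2*left > n
    return -1;      // n >= 2*left: fall through to the expensive path
}
```

For the given bounds this settles every n below 200'000 immediately, which is exactly why the checks alone already shave off a large fraction of the work.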

factorization (with the above checks implemented):

- Precompute the Sieve of Eratosthenes for all primes up to right. This time is not measured.
- Factorize n (only with the primes from the previous step).
- Generate all subsets (backtracking, depth-first: i.e. generate p1^0 * p2^0 * p3^0 first, instead of p1^5 first) while the product is < left, or until the product lands in [left, right] (divisor found).

factorization_opt: an optimization of the previous algorithm where the subsets are not materialized (no vector of subsets is created); I just pass the current product from one backtracking iteration to the next.
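To make the factorization + backtracking idea concrete, here is a minimal C++ sketch in the spirit of factorization_opt (a reconstruction under my own naming, not the measured program):

```cpp
#include <cstdint>
#include <utility>
#include <vector>

constexpr std::uint64_t left  = 100'000;
constexpr std::uint64_t right = 150'000;

// Trial-divide n by the precomputed primes; returns (prime, exponent) pairs.
std::vector<std::pair<std::uint64_t, int>>
factorize(std::uint64_t n, const std::vector<std::uint64_t> &primes)
{
    std::vector<std::pair<std::uint64_t, int>> factors;
    for (std::uint64_t p : primes) {
        if (p * p > n) break;
        int e = 0;
        while (n % p == 0) { n /= p; ++e; }
        if (e) factors.emplace_back(p, e);
    }
    if (n > 1) factors.emplace_back(n, 1); // leftover prime factor
    return factors;
}

// Depth-first search over exponent choices. The current product is carried
// through the recursion (the _opt idea) instead of storing subsets. Exponent
// 0 is tried first, matching "p1^0 * p2^0 * p3^0 before p1^5" above, and any
// branch whose product already exceeds `right` is pruned.
bool has_divisor_in_range(const std::vector<std::pair<std::uint64_t, int>> &f,
                          std::size_t i = 0, std::uint64_t product = 1)
{
    if (product >= left && product <= right) return true;  // divisor found
    if (product > right || i == f.size())    return false; // prune / exhausted
    std::uint64_t q = 1;                                   // f[i].first ^ e
    for (int e = 0; e <= f[i].second; ++e) {
        if (has_divisor_in_range(f, i + 1, product * q)) return true;
        q *= f[i].first;
    }
    return false;
}
```

Every product of a subset of the prime powers of n is a divisor of n, so hitting [left, right] with any partial product answers the question without enumerating all divisors.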

Nominal Animal's version: I have also run his version on my system with the same range.
I have written the program in C++, so I won't share it here. I used std::uint64_t as the data type and checked all numbers from 1 to 500'000 to see if each is divisible by a number in the interval [100'000, 150'000]. All versions reached the same solution: 170'836 numbers with positive results.
The setup:

Hardware: Intel Core i7-920, 4 cores with HT (all algorithm versions are single-threaded), 2.66 GHz (boost 2.93 GHz), 8 MB SmartCache; memory: 6 GB DDR3 triple channel.

Compiler: Visual Studio 2017 (v141), Release x64 mode.
I must also add that I haven't profiled the programs, so there is definitely room to improve the implementations. However, this is enough here, as the goal is to find a better algorithm.
version                    elapsed time
naive                      167'378 ms  (yes, that's a thousands separator, i.e. 167 seconds)
naive_with_checks           97'197 ms
factorization                7'906 ms
factorization_opt            7'320 ms
Nominal Animal's version        14 ms
Some analysis:
For naive vs naive_with_checks: all the numbers in [1, 200'000] can be solved with just the simple checks. As these represent 40% of all the numbers checked, the naive_with_checks version does roughly 60% of the work naive does. The execution times reflect this, as the naive_with_checks runtime is ≅58% of the naive version's.
The factorization version is a whopping 12.3 times faster than naive_with_checks. That is indeed impressive. I haven't analyzed the time complexity of the algorithm.
And the final optimization brings a further 1.08x speedup. This is basically the time gained by removing the creation and copying of the small vectors of subset factors.
For those interested: the sieve precomputation, which is not included in the timings above, takes about 1 ms. And this is the naive implementation from Wikipedia, with no optimizations whatsoever.
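For completeness, such a plain sieve looks roughly like this (my own sketch along the lines of the Wikipedia pseudocode, not the measured code):

```cpp
#include <cstdint>
#include <vector>

// Plain Sieve of Eratosthenes: mark every multiple of each prime, starting
// from p*p; whatever stays unmarked is prime.
std::vector<std::uint64_t> sieve_primes(std::uint64_t limit)
{
    std::vector<bool> composite(limit + 1, false);
    std::vector<std::uint64_t> primes;
    for (std::uint64_t p = 2; p <= limit; ++p) {
        if (composite[p]) continue;
        primes.push_back(p);
        for (std::uint64_t m = p * p; m <= limit; m += p)
            composite[m] = true;
    }
    return primes;
}
```

At limit = 150'000 this touches well under a million cells, which is consistent with the ~1 ms precomputation time quoted above.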