Python Numpy vectorize nested for-loops for combinatorics


Given an n×n array A of positive real numbers, I'm trying to find the minimum, over all combinations of three rows of the 2-D array, of the maximum of their element-wise minimum. Using for-loops, that comes out to something like this:

import numpy as np

n = 100
np.random.seed(2)
A = np.random.rand(n, n)
global_best = np.inf

for i in range(n):
    for j in range(i + 1, n):
        for k in range(j + 1, n):
            # find the maximum of the element-wise minimum of the three vectors
            local_best = np.amax(np.array([A[i, :], A[j, :], A[k, :]]).min(0))
            # if local_best is lower than global_best, update global_best
            if local_best < global_best:
                global_best = local_best
                save_rows = [i, j, k]

print(global_best, save_rows)

For n = 100, the output should be this:

Out[]: 0.492652949593 [6, 41, 58] 

I have a feeling, though, that I could do this much faster using NumPy vectorization, and would certainly appreciate any help doing so. Thanks.

This solution is 5x faster for n=100:

import itertools
import numpy as np

coms = np.fromiter(itertools.combinations(np.arange(n), 3), 'i,i,i').view(('i', 3))
best = A[coms].min(1).max(1)
at = best.argmin()
global_best = best[at]
save_rows = coms[at]

print(global_best, save_rows)

The first line is a bit convoluted, but it turns the result of itertools.combinations into a NumPy array containing all possible [i, j, k] index combinations.

From there, it's a simple matter of indexing into A using all the possible index combinations, then reducing along the appropriate axes.
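To make the shapes concrete, here is a small sketch (with a toy n = 5, not part of the original answer) that builds the combination index array the plain way, via np.array(list(...)), and shows which axis each reduction collapses:

```python
import itertools
import numpy as np

# Toy size for illustration only; the question uses n = 100.
n = 5
rng = np.random.default_rng(0)
A = rng.random((n, n))

# All C(n, 3) = 10 row-index triples, shape (C, 3).
coms = np.array(list(itertools.combinations(range(n), 3)))

stacked = A[coms]                        # shape (C, 3, n): 3 rows per combination
best = stacked.min(axis=1).max(axis=1)   # min over the 3 rows, then max over columns

print(stacked.shape, best.shape)         # → (10, 3, 5) (10,)
```

Indexing A with a (C, 3) integer array yields a (C, 3, n) array, so `.min(1)` reduces across the three chosen rows and `.max(1)` then reduces across the n columns, leaving one score per combination.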

This solution consumes a lot more memory, since it materializes the full array A[coms] of all combinations. It saves time for smallish n (say, under 250), but for large n the memory traffic becomes the bottleneck and it may end up slower than the original loops.
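One way to keep the vectorized inner step while capping memory is to process the combinations in fixed-size batches. This is a sketch of that idea, not part of the original answer; `chunk` is an assumed batch size you would tune to your RAM:

```python
import itertools
import numpy as np

def min_max_min_chunked(A, chunk=100_000):
    """Batched version of the vectorized search (a sketch, not the
    original answer). Processes `chunk` index triples at a time so
    peak memory is O(chunk * n) instead of O(C(n, 3) * n)."""
    n = A.shape[0]
    triples = itertools.combinations(range(n), 3)
    global_best = np.inf
    save_rows = None
    while True:
        batch = list(itertools.islice(triples, chunk))
        if not batch:
            break
        coms = np.array(batch)                  # (B, 3)
        best = A[coms].min(axis=1).max(axis=1)  # (B,)
        at = best.argmin()
        if best[at] < global_best:
            global_best = best[at]
            save_rows = coms[at]
    return global_best, save_rows
```

Each batch does the same A[coms].min(1).max(1) reduction as the one-shot version, so the result is identical; only the peak allocation changes.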

