I was doing one of the course exercises on Codecademy for Python, and I had a few questions I couldn't seem to find an answer to:
Also, how would this code be affected if it were running with a massive list of numbers (thousands or millions)? Would it slow down as the list size increases, and are there better alternatives?
```python
numbers = [1, 1, 2, 3, 5, 8, 13]

def remove_duplicates(list):
    new_list = []
    for i in list:
        if i not in new_list:
            new_list.append(i)
    return new_list

remove_duplicates(numbers)
```
P.S. Why does this code not function the same?
```python
numbers = [1, 1, 2, 3, 5, 8, 13]

def remove_duplicates(list):
    new_list = []
    new_list.append(i for i in list if i not in new_list)
    return new_list
```
To answer the question in the title: Python has more efficient data types for this. A `list` is just a plain dynamic array, so `i not in new_list` has to scan the list from the front every time — with thousands or millions of elements, the function slows down quadratically as the list grows. If you want a faster way to check for values, use a `set` (or a `dict`), which hashes each object it stores and looks it up in a hash table in roughly constant time on average; I assume that is what you were thinking of when you mentioned "a quicker process".
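As a sketch, the usual order-preserving way to combine the two: keep the output in a list, but do the membership test against a `set` (the function and variable names here are just illustrative):

```python
def remove_duplicates(items):
    seen = set()      # hash-based membership tests, O(1) on average
    result = []
    for i in items:
        if i not in seen:   # constant-time check instead of scanning result
            seen.add(i)
            result.append(i)
    return result

print(remove_duplicates([1, 1, 2, 3, 5, 8, 13]))  # [1, 2, 3, 5, 8, 13]
```

This keeps the first occurrence of each value, like your loop, but stays fast on large inputs because no step rescans the output list.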
As to the second code snippet:
`list.append()` inserts whatever single value you give it at the end of the list. `i for i in list if i not in new_list` is a generator object, so `append` inserts that generator itself into the list as one element, rather than the values it would produce. `list.extend()` does what you want: it takes an iterable and appends each of its elements to the list.
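You can see the difference with a small example (variable names here are just for illustration):

```python
nums = [1, 2, 3]

a = []
a.append(x for x in nums)   # appends the generator object itself
print(len(a))               # 1 — a single element, which is a generator

b = []
b.extend(x for x in nums)   # consumes the generator, appending each value
print(b)                    # [1, 2, 3]
```

So `a` ends up as a one-element list containing a generator, while `b` contains the actual numbers.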