I have a list in Python that I generate as part of the program. I have a strong assumption that these are all different, and I check this with an assertion.
This is the way I do it now:
If there are two elements:
try:
    assert(x[0] != x[1])
except:
    print debug_info
    raise Exception("throw to caller")
If there are three:
try:
    assert(x[0] != x[1])
    assert(x[0] != x[2])
    assert(x[1] != x[2])
except:
    print debug_info
    raise Exception("throw to caller")
And if I ever have to do this with four elements I'll go crazy.
Is there a better way to ensure that all the elements of the list are unique?
# Given List
Alist = ['Mon', 'Tue', 'Wed']
print("The given list : ", Alist)
# Compare length for unique elements
if len(set(Alist)) == len(Alist):
    print("All elements are unique.")
else:
    print("All elements are not unique.")
Note that comparing against a set of size 1 checks the opposite condition -- that all the elements are identical:

if len(set(input_list)) == 1:
    # input_list has all identical elements
Using NumPy, the unique elements of an array can also be obtained. First convert the list with x = numpy.array(lst), then call numpy.unique(x) to get the unique values; numpy.unique() returns only the distinct values in the array.
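A minimal sketch of that idea, assuming NumPy is installed: compare the number of unique values against the length of the original list.

```python
import numpy as np

lst = [1, 2, 3, 2]
x = np.array(lst)
# np.unique returns the sorted distinct values of the array
unique_values = np.unique(x)
print(len(unique_values) == len(lst))  # False: 2 appears twice
```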
The all() function returns True if all items in an iterable are true, otherwise it returns False. If the iterable object is empty, the all() function also returns True.
Maybe something like this:
if len(x) == len(set(x)):
    print "all elements are unique"
else:
    print "elements are not unique"
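Tying this back to the original pattern, one O(n) check replaces the whole chain of pairwise assertions for any list length (debug_info here is a stand-in for whatever the asker actually prints):

```python
def assert_all_unique(x, debug_info=None):
    # set(x) drops duplicates, so a length mismatch means a repeat exists
    if len(x) != len(set(x)):
        print(debug_info)
        raise Exception("throw to caller")

assert_all_unique([1, 2, 3])  # passes silently
```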
The most popular answers are O(N) (good!-) but, as @Paul and @Mark point out, they require the list's items to be hashable. Both @Paul and @Mark's proposed approaches for unhashable items are general but take O(N squared) -- i.e., a lot.
If your list's items are not hashable but are comparable, you can do better... here's an approach that always works as fast as feasible given the nature of the list's items.
import itertools

def allunique(L):
    # first try sets -- fastest, if all items are hashable
    try:
        return len(L) == len(set(L))
    except TypeError:
        pass
    # next, try sort -- second fastest, if items are comparable
    try:
        L1 = sorted(L)
    except TypeError:
        pass
    else:
        return all(len(list(g)) == 1 for k, g in itertools.groupby(L1))
    # fall back to the slowest but most general approach
    return all(v not in L[i+1:] for i, v in enumerate(L))
This is O(N) where feasible (all items hashable), O(N log N) as the most frequent fallback (some items unhashable, but all comparable), O(N squared) where inevitable (some items unhashable, e.g. dicts, and some non-comparable, e.g. complex numbers).
Inspiration for this code comes from an old recipe by the great Tim Peters, which differed by actually producing a list of unique items (and was also so long ago that set was not around -- it had to use a dict...!-), but basically faced identical issues.
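A quick demonstration of the three paths, restating the function so the sketch is self-contained:

```python
import itertools

def allunique(L):
    # 1) hashable items: O(N) via set
    try:
        return len(L) == len(set(L))
    except TypeError:
        pass
    # 2) comparable items: O(N log N) via sort + groupby
    try:
        L1 = sorted(L)
    except TypeError:
        pass
    else:
        return all(len(list(g)) == 1 for k, g in itertools.groupby(L1))
    # 3) general fallback: O(N**2) pairwise scan
    return all(v not in L[i+1:] for i, v in enumerate(L))

# ints are hashable: set path
print(allunique([1, 2, 3]))                  # True
# lists are unhashable but comparable: sort path
print(allunique([[1], [2], [1]]))            # False
# dicts are unhashable and (in Python 3) not orderable: quadratic path
print(allunique([{'a': 1}, {'a': 2}]))       # True
```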