I want to remove specific duplicates from a list. With Perl I would do the task with this code:
my @list = ( 'a1', 'a1', 'b1', 'b1' );
my %seen;
@list = grep( !/a\d/ || !$seen{ $_ }++, @list );
and the desired result would be this:
@list = ( 'a1', 'b1', 'b1' );
How could I do this in Python 3 using a regular expression and a list comprehension? Thanks.
You can use itertools.chain and groupby:
>>> from itertools import chain, groupby
>>> l = ['a1', 'a1', 'b1', 'b1']
>>> list(chain(*[[i[0]] if 'a1' in i else i for i in [list(g) for _, g in groupby(sorted(l))]]))
['a1', 'b1', 'b1']
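The check above hard-codes 'a1'. If it should cover any element matching a\d, as in the Perl example, a minimal sketch of the same groupby idea (the variable names here are only illustrative) might be:
import re
from itertools import chain, groupby

l = ['a1', 'a1', 'b1', 'b1']
# group equal elements, keep only the first of each group whose value matches a\d,
# and keep every other group in full
groups = [list(g) for _, g in groupby(sorted(l))]
result = list(chain(*[[g[0]] if re.match(r'a\d', g[0]) else g for g in groups]))
print(result)  # ['a1', 'b1', 'b1']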
And if you just want to use a regex, you can join the elements into one string and then use re.sub. Note that this only works for this special case, where , is the delimiter:
>>> import re
>>> l = ['a1', 'a1', 'b1', 'b1']
>>> re.sub(r'(a1,)+', 'a1,', ','.join(sorted(l))).split(',')
['a1', 'b1', 'b1']
import re
from functools import reduce  # this import is not needed in Python 2.*

l = ['a1', 'a1', 'b1', 'b1']
# keep an element unless it matches a\d and is already in the accumulator
print(reduce(lambda acc, el: acc if re.match(r'a\d', el) and el in acc else acc + [el], l, []))
Sorry, this is a solution without list comprehensions. Is that strictly required?
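For reference, a minimal sketch of the same filtering written as a list comprehension, using a seen set as a side effect much like the Perl %seen hash (the variable names are only illustrative):
import re

l = ['a1', 'a1', 'b1', 'b1']
seen = set()
# keep an element unless it matches a\d and has already been seen;
# seen.add() returns None, so it only records the element as a side effect
result = [el for el in l
          if not re.match(r'a\d', el) or not (el in seen or seen.add(el))]
print(result)  # ['a1', 'b1', 'b1']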