I have a naive "parser" that simply does something like:

[x.split('=') for x in mystring.split(',')]

However, mystring can be something like 'foo=bar,breakfast=spam,eggs', so obviously the naive splitter will just not do it. I am limited to the Python 2.6 standard library for this, so for example pyparsing cannot be used.

The expected output is [('foo', 'bar'), ('breakfast', 'spam,eggs')].
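For illustration, here is what the naive splitter gives for that string (a quick interactive check):

>>> mystring = 'foo=bar,breakfast=spam,eggs'
>>> [x.split('=') for x in mystring.split(',')]
[['foo', 'bar'], ['breakfast', 'spam'], ['eggs']]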
I'm trying to do this with a regex, but am facing the following problems.

My first attempt, r'([a-z_]+)=(.+),?', gave me [('foo', 'bar,breakfast=spam,eggs')]. Obviously, making the .+ non-greedy does not solve the problem, so I'm guessing I have to somehow make the last comma (or $) mandatory. Doing just that, r'([a-z_]+)=(.+?)(?:,|$)', does not really work either, as now the stuff behind the comma in a value containing one is omitted, e.g. [('foo', 'bar'), ('breakfast', 'spam')].
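In interactive form, the two attempts look like this:

>>> import re
>>> s = 'foo=bar,breakfast=spam,eggs'
>>> re.findall(r'([a-z_]+)=(.+),?', s)
[('foo', 'bar,breakfast=spam,eggs')]
>>> re.findall(r'([a-z_]+)=(.+?)(?:,|$)', s)
[('foo', 'bar'), ('breakfast', 'spam')]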
I think I must use some sort of look-behind(?) operation.
The Question(s)
1. Which one do I use? or
2. How do I do that/this?
Edit: Based on daramarak's answer below, I ended up doing pretty much the same thing as abarnert later suggested, in a slightly more verbose form:
vals = [x.rsplit(',', 1) for x in data.split('=')]
vals[-1] = [','.join(vals[-1])]  # the last chunk is all value, so undo the rsplit there
ret = []
while vals:
    value = vals.pop()[0]
    key = vals[-1].pop()
    ret.append((key, value))
    if len(vals[-1]) == 0:
        break
ret.reverse()  # the pairs were collected back to front
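With data = 'foo=bar,breakfast=spam,eggs' this leaves ret as:

[('foo', 'bar'), ('breakfast', 'spam,eggs')]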
EDIT 2: Just to satisfy my curiosity: is this actually possible with pure regular expressions, i.e. so that re.findall() would return a list of 2-tuples?
Just for comparison purposes, here's a regex that seems to solve the problem as well:
([^=]+) # key
= # equals is how we tokenise the original string
([^=]+) # value
(?:,|$) # value terminator, either comma or end of string
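For what it's worth, the commented form above can be used as-is by compiling it with re.VERBOSE:

>>> import re
>>> pattern = re.compile(r'''
...     ([^=]+)   # key
...     =         # equals is how we tokenise the original string
...     ([^=]+)   # value
...     (?:,|$)   # value terminator, either comma or end of string
... ''', re.VERBOSE)

pattern.findall() then behaves exactly like the one-line re.findall() call shown below.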
The trick here is to restrict what you're capturing in the second group: .+ swallows the = sign, which is the character we can use to distinguish keys from values. The full regex uses no look-around or backreferences (so it should be compatible with something like re2, if that's desirable) and works on abarnert's examples.
Usage as follows:
re.findall(r'([^=]+)=([^=]+)(?:,|$)', 'foo=bar,breakfast=spam,eggs,blt=bacon,lettuce,tomato,spam=spam')
Which returns:
[('foo', 'bar'), ('breakfast', 'spam,eggs'), ('blt', 'bacon,lettuce,tomato'), ('spam', 'spam')]
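And applied to the original string from the question:

re.findall(r'([^=]+)=([^=]+)(?:,|$)', 'foo=bar,breakfast=spam,eggs')

returns [('foo', 'bar'), ('breakfast', 'spam,eggs')], which is the expected output.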
daramarak's answer either very nearly works, or works as-is; it's hard to tell from the way the sample output is formatted and the vague descriptions of the steps. But if it's the very-nearly-works version, it's easy to fix.
Putting it into code:
>>> bits=[x.rsplit(',', 1) for x in s.split('=')]
>>> kv = [(bits[i][-1], bits[i+1][0]) for i in range(len(bits)-1)]
The first line is (I believe) daramarak's answer. By itself, it gives you pairs of (value_i, key_i+1) instead of (key_i, value_i); the second line is the most obvious fix for that. With more intermediate steps, and a bit of output, to see how it works:
>>> s = 'foo=bar,breakfast=spam,eggs,blt=bacon,lettuce,tomato,spam=spam'
>>> bits0 = s.split('=')
>>> bits0
['foo', 'bar,breakfast', 'spam,eggs,blt', 'bacon,lettuce,tomato,spam', 'spam']
>>> bits = [x.rsplit(',', 1) for x in bits0]
>>> bits
[['foo'], ['bar', 'breakfast'], ['spam,eggs', 'blt'], ['bacon,lettuce,tomato', 'spam'], ['spam']]
>>> kv = [(bits[i][-1], bits[i+1][0]) for i in range(len(bits)-1)]
>>> kv
[('foo', 'bar'), ('breakfast', 'spam,eggs'), ('blt', 'bacon,lettuce,tomato'), ('spam', 'spam')]