Python parse text from multiple txt file

Seeking advice on how to mine items from multiple text files to build a dictionary.

This text file: https://pastebin.com/Npcp3HCM

Was manually transformed into this required data structure: https://drive.google.com/file/d/0B2AJ7rliSQubV0J2Z0d0eXF3bW8/view

There are thousands of such text files and they may have different section headings as shown in these examples:

  1. https://pastebin.com/wWSPGaLX
  2. https://pastebin.com/9Up4RWHu

I started off by reading the files:

from glob import glob

txtPth = '../tr-txt/*.txt'
txtFiles = glob(txtPth)

with open(txtFiles[0],'r') as tf:
    allLines = [line.rstrip() for line in tf]

sectionHeading = ['Corporate Participants',
                  'Conference Call Participants',
                  'Presentation',
                  'Questions and Answers']

for lineNum, line in enumerate(allLines):
    if line in sectionHeading:
        print(lineNum,allLines[lineNum])

My idea was to find the line numbers where section headings occur, extract the content between those line numbers, and then strip out separators like dashes. That didn't work, and I got stuck trying to create a dictionary of the following kind so that I can later run various natural language processing algorithms on the extracted items.

{file-name1: {
    date-time: [string],
    corporate-name: [string],
    corporate-participants: [name1, name2, name3],
    call-participants: [name4, name5],
    section-headings: {
        heading1: [
            {name1: [speechOrderNum, text-content]},
            {name2: [speechOrderNum, text-content]},
            {name3: [speechOrderNum, text-content]}],
        heading2: [
            {name1: [speechOrderNum, text-content]},
            {name2: [speechOrderNum, text-content]},
            {name3: [speechOrderNum, text-content]},
            {name2: [speechOrderNum, text-content]},
            {name1: [speechOrderNum, text-content]},
            {name4: [speechOrderNum, text-content]}],
        heading3: [text-content],
        heading4: [text-content]
    }
}}
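For what it's worth, the line-number slicing idea described above can be sketched like this (a minimal, self-contained illustration; the sample lines and headings are made up):

```python
# A minimal sketch of the "slice between heading line numbers" idea,
# using invented sample data in place of the real allLines.
allLines = ['Corporate Participants', 'Alice', 'Bob',
            'Presentation', 'Hello everyone.']
sectionHeading = ['Corporate Participants', 'Presentation']

# line numbers where headings occur, plus a sentinel for the last section
marks = [i for i, line in enumerate(allLines) if line in sectionHeading]
marks.append(len(allLines))

# pair consecutive heading positions and slice the lines between them
sections = {allLines[start]: allLines[start + 1:end]
            for start, end in zip(marks, marks[1:])}
# sections == {'Corporate Participants': ['Alice', 'Bob'],
#              'Presentation': ['Hello everyone.']}
```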

The challenge is that different files may have different headings and different numbers of headings. But there will always be a section called "Presentation", and there is very likely to be a "Questions and Answers" section. The section headings are always separated by a string of equals signs, and the content of different speakers is always separated by a string of dashes. The "speech order" in the Q&A section is indicated by a number in square brackets. The participants are always listed at the beginning of the document with an asterisk before their name, and their title is always on the next line.
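Given those conventions, pulling the participants out of the header block could be sketched like this (a hedged illustration; the sample header, names, and titles are invented):

```python
import re

# A sketch of extracting participants, assuming each name is prefixed
# with '*' and the title is always on the following line.
header = """\
Corporate Participants
 * John Smith
   Acme Corp - CEO
 * Jane Doe
   Acme Corp - CFO
"""

lines = [l.strip() for l in header.splitlines()]
participants = {}
for i, line in enumerate(lines):
    m = re.match(r"\*\s*(.+)", line)
    if m and i + 1 < len(lines):
        participants[m.group(1)] = lines[i + 1]  # title on the next line
# participants == {'John Smith': 'Acme Corp - CEO', 'Jane Doe': 'Acme Corp - CFO'}
```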

Any suggestions on how to parse the text files are appreciated. The ideal help would be guidance on how to produce such a dictionary (or another suitable data structure) for each file that can then be written to a database.

Thanks

--EDIT--

One of the files looks like this: https://pastebin.com/MSvmHb2e

In that file, the "Questions and Answers" section is mislabeled as "Presentation", and there is no other "Questions and Answers" section.

And final sample text: https://pastebin.com/jr9WfpV8

asked Apr 24 '17 by samkhan13


2 Answers

The comments in the code should explain everything. Let me know if anything is underspecified and needs more comments.

In short, I use regexes to find the '=' delimiter lines and subdivide the entire text into subsections, then handle each type of section separately for clarity's sake (so you can tell how each case is handled).

Side note: I use the words 'attendee' and 'author' interchangeably.

EDIT: Updated the code to sort based on the '[x]' pattern found right next to the attendee/author in the presentation/QA section. Also changed the pretty print part since pprint does not handle OrderedDict very well.

To strip additional whitespace, including '\n', from both ends of a string, simply do str.strip(). If you specifically need to strip only '\n', then do str.strip('\n').

I have modified the code to strip the whitespace in the talks.
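A quick illustration of the difference:

```python
# str.strip() removes all leading/trailing whitespace (spaces, tabs, '\n');
# str.strip('\n') removes only newlines at the ends, leaving spaces intact.
s = "  one\ntwo  \n"
s.strip()      # 'one\ntwo'
s.strip('\n')  # '  one\ntwo  '
```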

import json
import re
from collections import OrderedDict
from pprint import pprint


# Subdivides a collection of lines based on the delimiting regular expression.
# >>> example_string = ('=============================\n'
#                       'asdfasdfasdf\n'
#                       'sdfasdfdfsdfsdf\n'
#                       '=============================\n'
#                       'asdfsdfasdfasd\n'
#                       '=============================')
# >>> subdivide(example_string, "^=+")
# ['', 'asdfasdfasdf\nsdfasdfdfsdfsdf', 'asdfsdfasdfasd', '']
def subdivide(lines, regex):
    equ_pattern = re.compile(regex, re.MULTILINE)
    sections = equ_pattern.split(lines)
    sections = [section.strip('\n') for section in sections]
    return sections


# for processing sections with dashes in them, returns the heading of the section along with
# a dictionary where each key is the subsection's header, and each value is the text in the subsection.
def process_dashed_sections(section):

    subsections = subdivide(section, "^-+")
    heading = subsections[0]  # header of the section.
    d = {key: value for key, value in zip(subsections[1::2], subsections[2::2])}
    index_pattern = re.compile(r"\[(.+)\]", re.MULTILINE)

    # sort the dictionary by first capturing the pattern '[x]' and extracting the number 'x'.
    # This is then passed as the key function to 'sorted' to sort based on 'x'.
    def cmp(d):
        mat = index_pattern.findall(d[0])
        if mat:
            return int(mat[0])
        # Subsections containing '-'s but no '[x]' pattern cause issues;
        # defaulting to 0 here works around that.
        else:
            return 0

    o_d = OrderedDict(sorted(d.items(), key=cmp))
    return heading, o_d


# this is to rename the keys of 'd' dictionary to the proper names present in the attendees.
# it searches for the best match for the key in the 'attendees' list, and replaces the corresponding key.
# >>> d = {'mr. man   ceo of company   [1]' : ' This is talk a' ,
#  ...     'ms. woman  ceo of company    [2]' : ' This is talk b'}
# >>> l = ['mr. man', 'ms. woman']
# >>> new_d = assign_attendee(d, l)
# new_d = {'mr. man': 'This is talk a', 'ms. woman': 'This is talk b'}
def assign_attendee(d, attendees):
    new_d = OrderedDict()
    for key, value in d.items():
        a = [a for a in attendees if a in key]
        if len(a) == 1:
            # to strip out any additional whitespace anywhere in the text including '\n'.
            new_d[a[0]] = value.strip()
        elif len(a) == 0:
            # to strip out any additional whitespace anywhere in the text including '\n'.
            new_d[key] = value.strip()
    return new_d


if __name__ == '__main__':
    with open('input.txt', 'r') as infile:
        lines = infile.read()

        # regex pattern for matching the header of each section
        header_pattern = re.compile(r"^.*[^\n]", re.MULTILINE)

        # regex pattern for matching the sections that contain
        # the list of attendees (those that start with asterisks)
        ppl_pattern = re.compile(r"^(\s+\*)(.+)(\s.*)", re.MULTILINE)

        # regex pattern for matching sections with subsections in them.
        dash_pattern = re.compile(r"^-+", re.MULTILINE)

        ppl_d = dict()
        talks_d = dict()

        # Step 1. Divide the entire document into sections using the '=' divider
        sections = subdivide(lines, "^=+")
        header = []
        # Step2. Handle each section like a switch case
        for section in sections:

            # Handle headers
            if len(section.split('\n')) == 1:  # likely a header (assuming headers never span multiple lines)
                header = header_pattern.match(section).string

            # Handle attendees/authors
            elif ppl_pattern.match(section):
                ppls = ppl_pattern.findall(section)
                d = {key.strip(): value.strip() for (_, key, value) in ppls}
                ppl_d.update(d)

                # assuming that if the previous section was detected as a header, then this section will relate
                # to that header
                if header:
                    talks_d.update({header: ppl_d})

            # Handle subsections
            elif dash_pattern.findall(section):
                heading, d = process_dashed_sections(section)

                talks_d.update({heading: d})

            # Else it's just some random text.
            else:

                # assuming that if the previous section was detected as a header, then this section will relate
                # to that header
                if header:
                    talks_d.update({header: section})

        #pprint(talks_d)
        # To assign the talks material to the appropriate attendee/author. Still works if no match found.
        for key, value in talks_d.items():
            talks_d[key] = assign_attendee(value, ppl_d.keys())

        # ordered dict does not pretty print using 'pprint'. So a small hack to make use of json output to pretty print.
        print(json.dumps(talks_d, indent=4))
answered Oct 14 '22 by entrophy

Could you please confirm whether you only require the "Presentation" and "Questions and Answers" sections? Also, regarding the output, is it OK to dump a CSV format similar to what you have "manually transformed"?

Updated the solution to work for every sample file you provided.

The output covers cells "D:H" as per the "Parsed-transcript" file shared.

#state = ["other", "head", "present", "qa", "speaker", "data"]
# codes : 0, 1, 2, 3, 4, 5
def writecell(out, data):
    out.write(data)
    out.write(",")

def readfile(fname, outname):
    initstate = 0
    f = open(fname, "r")
    out = open(outname, "w")
    head = ""
    head_written = 0
    quotes = 0
    had_speaker = 0
    for line in f:
        line = line.strip()
        if not line: continue
        if initstate in [0,5] and not any([s for s in line if "=" != s]):
            if initstate == 5:
                out.write('"')
                quotes = 0
                out.write("\n")
            initstate = 1
        elif initstate in [0,5] and not any([s for s in line if "-" != s]):
            if initstate == 5:
                out.write('"')
                quotes = 0
                out.write("\n")
                initstate = 4
        elif initstate == 1 and line == "Presentation":
            initstate = 2
            head = "Presentation"
            head_written = 0
        elif initstate == 1 and line == "Questions and Answers":
            initstate = 3
            head = "Questions and Answers"
            head_written = 0
        elif initstate == 1 and not any([s for s in line if "=" != s]):
            initstate = 0
        elif initstate in [2, 3] and not any([s for s in line if ("=" != s and "-" != s)]):
            initstate = 4
        elif initstate == 4 and '[' in line and ']' in line:
            comma = line.find(',')
            speech_st = line.find('[')
            speech_end = line.find(']')
            if speech_st == -1:
                initstate = 0
                continue
            if comma == -1:
                firm = ""
                speaker = line[:speech_st].strip()
            else:
                speaker = line[:comma].strip()
                firm = line[comma+1:speech_st].strip()
            # the heading is written on every speaker row so the CSV columns stay aligned
            writecell(out, head)
            order = line[speech_st+1:speech_end]
            writecell(out, speaker)
            writecell(out, firm)
            writecell(out, order)
            had_speaker = 1
        elif initstate == 4 and not any([s for s in line if ("=" != s and "-" != s)]):
            if had_speaker:
                initstate = 5
                out.write('"')
                quotes = 1
            had_speaker = 0
        elif initstate == 5:
            line = line.replace('"', '""')
            out.write(line)
        elif initstate == 0:
            continue
        else:
            continue
    f.close()
    if quotes:
        out.write('"')
    out.close()

readfile("Sample1.txt", "out1.csv")
readfile("Sample2.txt", "out2.csv")
readfile("Sample3.txt", "out3.csv")

Details

In this solution there is a state machine which works as follows:

  1. detect whether a heading is present; if yes, write it
  2. detect speakers after the heading is written
  3. write the notes for that speaker
  4. switch to the next speaker, and so on...

You can later process the CSV files as you want. You can also populate the data in any format you want once the basic processing is done.
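For example, a row of the generated CSV could be read back with the standard csv module (the sample row here is invented, following the heading, speaker, firm, order, text layout written above):

```python
import csv
import io

# A small sketch of post-processing the generated CSV, using an in-memory
# sample row instead of the real out1.csv file.
sample = 'Presentation,John Smith,Acme Corp,1,"Hello, and welcome."\n'
rows = list(csv.reader(io.StringIO(sample)))
heading, speaker, firm, order, text = rows[0]
# the quoted last field keeps the comma inside the speech text intact
```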

Edit:

Please replace the function "writecell" with the following version, which quotes every field so that commas in the data do not break the CSV:

def writecell(out, data):
    data = data.replace('"', '""')
    out.write('"')
    out.write(data)
    out.write('"')
    out.write(",")
answered Oct 14 '22 by mangupt