Scraping with BeautifulSoup and multiple paragraphs

I'm trying to scrape a speech from a website using BeautifulSoup. I'm encountering problems, however, since the speech is divided into many different paragraphs. I'm extremely new to programming and am having trouble figuring out how to deal with this. The HTML of the page looks like this:

<span class="displaytext">Thank you very much. Mr. Speaker, Vice President Cheney, 
Members of Congress, distinguished guests, fellow citizens: As we gather tonight, our Nation is    
at war; our economy is in recession; and the civilized world faces unprecedented dangers. 
Yet, the state of our Union has never been stronger.
<p>We last met in an hour of shock and suffering. In 4 short months, our Nation has comforted the victims, 
begun to rebuild New York and the Pentagon, rallied a great coalition, captured, arrested, and  
rid the world of thousands of terrorists, destroyed Afghanistan's terrorist training camps, 
saved a people from starvation, and freed a country from brutal oppression. 
<p>The American flag flies again over our Embassy in Kabul. Terrorists who once occupied 
Afghanistan now occupy cells at Guantanamo Bay. And terrorist leaders who urged followers to 
sacrifice their lives are running for their own.

It continues like that for a while, with multiple paragraph tags. I'm trying to extract all of the text within the span.
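A minimal sketch of what's going on (assuming the newer bs4 package and a made-up snippet mimicking the markup above): because the `<p>` tags are never closed, the span's content gets split across several sibling nodes rather than sitting in a single string, which is why `.string` only sees the first piece.

```python
from bs4 import BeautifulSoup

# Made-up snippet mimicking the page's markup: unclosed <p> tags.
html = '<span class="displaytext">Intro text. <p>Para one. <p>Para two. </span>'
span = BeautifulSoup(html, 'html.parser').find('span')

print(len(span.contents))      # more than one child node
print(repr(span.contents[0]))  # only the leading text before the first <p>
```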

I've tried a couple of different ways to get the text, but both have failed to get the text that I want.

The first approach I tried is:

import urllib2,sys
from BeautifulSoup import BeautifulSoup, NavigableString

address = 'http://www.presidency.ucsb.edu/ws/index.php?pid=29644&st=&st1=#axzz1fD98kGZW'
html = urllib2.urlopen(address).read()

soup = BeautifulSoup(html)
thespan = soup.find('span', attrs={'class': 'displaytext'})
print thespan.string

which gives me:

Mr. Speaker, Vice President Cheney, Members of Congress, distinguished guests, fellow citizens: As we gather tonight, our Nation is at war; our economy is in recession; and the civilized world faces unprecedented dangers. Yet, the state of our Union has never been stronger.

That is the portion of the text up until the first paragraph tag. I then tried:

import urllib2,sys
from BeautifulSoup import BeautifulSoup, NavigableString

address = 'http://www.presidency.ucsb.edu/ws/index.php?pid=29644&st=&st1=#axzz1fD98kGZW'
html = urllib2.urlopen(address).read()

soup = BeautifulSoup(html)
thespan = soup.find('span', attrs={'class': 'displaytext'})
for section in thespan:
    paragraph = section.findNext('p')
    if paragraph and paragraph.string:
        print '>', paragraph.string
    else:
        print '>', section.parent.next.next.strip()

This gave me the text between the first paragraph tag and the second paragraph tag. So I'm looking for a way to get the entire text, not just sections of it.

asked Nov 30 '11 by user1074057



2 Answers

import urllib2,sys
from BeautifulSoup import BeautifulSoup

address = 'http://www.presidency.ucsb.edu/ws/index.php?pid=29644&st=&st1=#axzz1fD98kGZW'
soup = BeautifulSoup(urllib2.urlopen(address).read())

span = soup.find("span", {"class":"displaytext"})  # span.string gives you the first bit
paras = [x.contents[0] for x in span.findAllNext("p")]  # this gives you the rest
# use .contents[0] instead of .string to deal with last para that's not well formed

print "%s\n\n%s" % (span.string, "\n\n".join(paras))

As pointed out in the comments, the above does not work so well if the <p> tags contain more nested tags. This can be dealt with using:

paras = ["".join(x.findAll(text=True)) for x in span.findAllNext("p")]

However, that doesn't work too well with the last <p> that does not have a closing tag. A hacky workaround would be to treat that differently. For example:

import urllib2,sys
from BeautifulSoup import BeautifulSoup

address = 'http://www.presidency.ucsb.edu/ws/index.php?pid=29644&st=&st1=#axzz1fD98kGZW'
soup = BeautifulSoup(urllib2.urlopen(address).read())
span = soup.find("span", {"class":"displaytext"})  
paras = [x for x in span.findAllNext("p")]

start = span.string
middle = "\n\n".join(["".join(x.findAll(text=True)) for x in paras[:-1]])
last = paras[-1].contents[0]
print "%s\n\n%s\n\n%s" % (start, middle, last)
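For what it's worth, the newer bs4 package (BeautifulSoup 4) makes this much simpler, since `get_text()` collects every descendant text node in one call. A hedged sketch against a made-up snippet shaped like the page's markup:

```python
from bs4 import BeautifulSoup

# Made-up snippet shaped like the page's markup (unclosed <p> tags).
html = """<span class="displaytext">First sentence.
<p>Second paragraph.
<p>Third paragraph.
</span>"""

span = BeautifulSoup(html, 'html.parser').find('span', {'class': 'displaytext'})
text = span.get_text()  # all text inside the span, paragraphs included
```

`get_text()` also accepts a separator argument (e.g. `span.get_text('\n\n')`) if you want the paragraphs rejoined with blank lines.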
answered Nov 15 '22 by Shawn Chin


Here's how it would be done with lxml:

import lxml.html as lh

tree = lh.parse('http://www.presidency.ucsb.edu/ws/index.php?pid=29644&st=&st1=#axzz1fD98kGZW')

text = tree.xpath("//span[@class='displaytext']")[0].text_content()

Alternatively, the answers to this question cover how to achieve the same thing using BeautifulSoup: BeautifulSoup - easy way to obtain HTML-free contents

The helper function from the accepted answer:

def textOf(soup):
    return u''.join(soup.findAll(text=True))
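A quick usage sketch of that helper (against a hypothetical fragment, not the actual page) — the same `findAll(text=True)` call still works under the newer bs4 package:

```python
from bs4 import BeautifulSoup

def textOf(soup):
    return u''.join(soup.findAll(text=True))

# Hypothetical fragment with a nested tag and an unclosed <p>.
span = BeautifulSoup('<span>Hello <b>bold</b> text <p>and more',
                     'html.parser').find('span')
flat = textOf(span)  # all text nodes joined, markup stripped
```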
answered Nov 15 '22 by Acorn