Best practices for searchable archive of thousands of documents (pdf and/or xml)

Revisiting a stalled project and looking for advice on modernizing thousands of "old" documents and making them available via the web.

Documents exist in various formats, some obsolete: .doc, PageMaker, hardcopy (OCR), PDF, etc. Funds are available to migrate the documents into a 'modern' format, and many of the hardcopies have already been OCR'd into PDFs. We had originally assumed that PDF would be the final format, but we're open to suggestions (XML?).

Once all docs are in a common format we would like to make their contents available and searchable via a web interface. We'd like the flexibility to return only portions (pages?) of the entire document where a search 'hit' is found (I believe Lucene/Elasticsearch makes this possible?!?). Might it be more flexible if the content was all XML? If so, how/where should we store the XML? Directly in the database, or as discrete files in the filesystem? What about embedded images/graphs in the documents?

Curious how others might approach this. There is no "wrong" answer; I'm just looking for as many inputs as possible to help us proceed.

Thanks for any advice.

asked Jun 01 '12 by Meltemi



1 Answer

In summary: I'm going to be recommending ElasticSearch, but let's break the problem down and talk about how to implement it:

There are a few parts to this:

  1. Extracting the text from your docs to make them indexable
  2. Making this text available as full text search
  3. Returning highlighted snippets of the doc
  4. Knowing where in the doc those snippets are found to allow for paging
  5. Returning the full doc

What can ElasticSearch provide:

  1. ElasticSearch (like Solr) uses Tika to extract text and metadata from a wide variety of doc formats
  2. It, pretty obviously, provides powerful full text search. It can be configured to analyse each doc in the appropriate language, with stemming, boosting the relevance of certain fields (eg title more important than content), ngrams, etc. (ie standard Lucene stuff)
  3. It can return highlighted snippets for each search result
  4. It DOESN'T know where those snippets occur in your doc
  5. It can store the original doc as an attachment, or it can store and return the extracted text. But it'll return the whole doc, not a page.

You could just send the whole doc to ElasticSearch as an attachment, and you'd get full text search. But the sticking points are (4) and (5) above: knowing where you are in a doc, and returning parts of a doc.

Storing individual pages is probably sufficient for your where-am-I purposes (although you could equally go down to paragraph level), but you want them grouped in a way that a doc would be returned in the search results, even if search keywords appear on different pages.

First, the indexing part: storing your docs in ElasticSearch (a couple of sketches follow the list below):

  1. Use Tika (or whatever you're comfortable with) to extract the text from each doc. Leave it as plain text, or as HTML to preserve some formatting (forget about XML; there's no need for it).
  2. Also extract the metadata for each doc: title, authors, chapters, language, dates etc
  3. Store the original doc in your filesystem, and record the path so that you can serve it later
  4. In ElasticSearch, index a "doc" doc which contains all of the metadata, and possibly the list of chapters
  5. Index each page as a "page" doc, which contains:

    • A parent field which contains the ID of the "doc" doc (see "Parent-child relationship" below)
    • The text
    • The page number
    • Maybe the chapter title or number
    • Any metadata which you want to be searchable
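
For step 1 above, if you'd rather run the extraction yourself than rely on ElasticSearch's built-in attachment handling, the standalone Tika app can be driven from the command line. A minimal sketch (the jar version and file names are just examples):

# extract plain text (use --html instead if you want to keep some formatting)
java -jar tika-app-1.3.jar --text mydoc.pdf > mydoc.txt

# print the document metadata (title, author, dates, etc.)
java -jar tika-app-1.3.jar --metadata mydoc.pdf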

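For steps 4 and 5, the mapping and indexing calls could look something like this with the ElasticSearch REST API of that era. The field names match the queries further down, but treat this as a sketch rather than a definitive mapping:

# create the index with a parent-child mapping between "doc" and "page"
curl -XPUT 'http://127.0.0.1:9200/my_index' -d '
{
   "mappings" : {
      "doc" : {
         "properties" : {
            "title"   : { "type" : "string" },
            "authors" : { "type" : "string" },
            "path"    : { "type" : "string", "index" : "not_analyzed" }
         }
      },
      "page" : {
         "_parent" : { "type" : "doc" },
         "properties" : {
            "doc_id" : { "type" : "integer" },
            "page"   : { "type" : "integer" },
            "text"   : { "type" : "string" }
         }
      }
   }
}
'

# index a "doc" doc with its metadata and the path to the original file
curl -XPUT 'http://127.0.0.1:9200/my_index/doc/1' -d '
{
   "title"   : "An example document",
   "authors" : ["A. Author"],
   "path"    : "/archive/originals/example.pdf"
}
'

# index one of its "page" docs; the parent is given in the URL, and doc_id is
# also stored as a plain field because the queries below filter on it
curl -XPOST 'http://127.0.0.1:9200/my_index/page?parent=1' -d '
{
   "doc_id" : 1,
   "page"   : 1,
   "text"   : "extracted text of page one ..."
}
'
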
Now for searching. How you do this depends on how you want to present your results - by page, or grouped by doc.

Results by page are easy. This query returns a list of matching pages (each page is returned in full) plus a list of highlighted snippets from the page:

curl -XGET 'http://127.0.0.1:9200/my_index/page/_search?pretty=1' -d '
{
   "query" : {
      "text" : {
         "text" : "interesting keywords"
      }
   },
   "highlight" : {
      "fields" : {
         "text" : {}
      }
   }
}
'

Displaying results grouped by "doc" with highlights from the text is a bit trickier. It can't be done with a single query, but a little client-side grouping will get you there. One approach might be:

Step 1: Do a top-children-query to find the parent ("doc") whose children ("page") best match the query:

curl -XGET 'http://127.0.0.1:9200/my_index/doc/_search?pretty=1' -d '
{
   "query" : {
      "top_children" : {
         "query" : {
            "text" : {
               "text" : "interesting keywords"
            }
         },
         "score" : "sum",
         "type" : "page",
         "factor" : "5"
      }
   }
}
'

Step 2: Collect the "doc" IDs from the above query and issue a new query to get the snippets from the matching "page" docs:

curl -XGET 'http://127.0.0.1:9200/my_index/page/_search?pretty=1' -d '
{
   "query" : {
      "filtered" : {
         "query" : {
            "text" : {
               "text" : "interesting keywords"
            }
         },
         "filter" : {
            "terms" : {
               "doc_id" : [1, 2, 3]
            }
         }
      }
   },
   "highlight" : {
      "fields" : {
         "text" : {}
      }
   }
}
'

Step 3: In your app, group the results from the above query by doc and display them.
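
If the front end is thin, even a small shell pipeline can do that grouping. A rough sketch using jq (assuming it is available, and that step2_query.json holds the query body from Step 2):

# run the Step 2 query and keep the response
curl -s -XGET 'http://127.0.0.1:9200/my_index/page/_search' -d @step2_query.json > response.json

# group the hits by doc_id and collect their highlight snippets
jq '.hits.hits
    | group_by(._source.doc_id)
    | map({doc_id: .[0]._source.doc_id,
           snippets: [.[].highlight.text[]]})' response.json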

With the search results from the second query, you already have the full text of the page which you can display. To move to the next page, you can just search for it:

curl -XGET 'http://127.0.0.1:9200/my_index/page/_search?pretty=1' -d '
{
   "query" : {
      "constant_score" : {
         "filter" : {
            "and" : [
               {
                  "term" : {
                     "doc_id" : 1
                  }
               },
               {
                  "term" : {
                     "page" : 2
                  }
               }
            ]
         }
      }
   },
   "size" : 1
}
'

Or alternatively, give the "page" docs an ID of the form <doc_id>_<page_num> (eg 123_2); then you can just retrieve that page directly:

curl -XGET 'http://127.0.0.1:9200/my_index/page/123_2'
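
For that to work, the pages have to be indexed with those IDs in the first place (the parent still needs to be specified); something along these lines:

curl -XPUT 'http://127.0.0.1:9200/my_index/page/123_2?parent=123' -d '
{
   "doc_id" : 123,
   "page"   : 2,
   "text"   : "extracted text of page two ..."
}
'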

Parent-child relationship:

Normally, in ES (and most NoSQL solutions) each doc/object is independent - there are no real relationships. By establishing a parent-child relationship between the "doc" and the "page", ElasticSearch makes sure that the child docs (ie the "page") are stored on the same shard as the parent doc (the "doc").

This enables you to run the top-children-query which will find the best matching "doc" based on the content of the "pages".

answered Sep 22 '22 by DrTech