There are eight parts of speech in the English language: noun, pronoun, verb, adjective, adverb, preposition, conjunction, and interjection.
There are four major word classes: verb, noun, adjective, adverb.
Required Parts of a Sentence
The eight parts of speech (nouns, verbs, adjectives, prepositions, pronouns, adverbs, conjunctions, and interjections) form different parts of a sentence. However, to be a complete thought, a sentence needs only a subject (a noun or pronoun) and a predicate (a verb).
If you download just the database files from wordnet.princeton.edu/download/current-version, you can extract the words by running these commands:
egrep -o "^[0-9]{8}\s[0-9]{2}\s[a-z]\s[0-9]{2}\s[a-zA-Z_]*\s" data.adj | cut -d ' ' -f 5 > conv.data.adj
egrep -o "^[0-9]{8}\s[0-9]{2}\s[a-z]\s[0-9]{2}\s[a-zA-Z_]*\s" data.adv | cut -d ' ' -f 5 > conv.data.adv
egrep -o "^[0-9]{8}\s[0-9]{2}\s[a-z]\s[0-9]{2}\s[a-zA-Z_]*\s" data.noun | cut -d ' ' -f 5 > conv.data.noun
egrep -o "^[0-9]{8}\s[0-9]{2}\s[a-z]\s[0-9]{2}\s[a-zA-Z_]*\s" data.verb | cut -d ' ' -f 5 > conv.data.verb
Or, if you only want single words (no underscores):
egrep -o "^[0-9]{8}\s[0-9]{2}\s[a-z]\s[0-9]{2}\s[a-zA-Z]*\s" data.adj | cut -d ' ' -f 5 > conv.data.adj
egrep -o "^[0-9]{8}\s[0-9]{2}\s[a-z]\s[0-9]{2}\s[a-zA-Z]*\s" data.adv | cut -d ' ' -f 5 > conv.data.adv
egrep -o "^[0-9]{8}\s[0-9]{2}\s[a-z]\s[0-9]{2}\s[a-zA-Z]*\s" data.noun | cut -d ' ' -f 5 > conv.data.noun
egrep -o "^[0-9]{8}\s[0-9]{2}\s[a-z]\s[0-9]{2}\s[a-zA-Z]*\s" data.verb | cut -d ' ' -f 5 > conv.data.verb
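If you'd rather do it in one pass, here is a rough Perl sketch (mine, not part of the original answer) that applies the same pattern to all four data files; the file and output names match the commands above:

#!/usr/bin/perl
use strict;
use warnings;

# For each part of speech, pull the first word out of every synset
# line, exactly like the egrep/cut pipelines above. Drop the
# underscore from the character class to keep only single words,
# as in the second set of commands.
foreach my $pos (qw(adj adv noun verb)) {
    open(my $in,  '<', "data.$pos")      or die "data.$pos: $!";
    open(my $out, '>', "conv.data.$pos") or die "conv.data.$pos: $!";
    while (my $line = <$in>) {
        # Offset, file number, part of speech, word count, first word.
        if ($line =~ /^[0-9]{8} [0-9]{2} [a-z] [0-9]{2} ([a-zA-Z_]+) /) {
            print $out "$1\n";
        }
    }
    close($in);
    close($out);
}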
This is a highly ranked Google result, so I'm digging up this 2-year-old question to provide a far better answer than the existing one.
The "Kevin's Word Lists" page provides old lists from the year 2000, based on WordNet 1.6.
You are far better off going to https://wordnet.princeton.edu/download/current-version and downloading WordNet 3.0 (the Database-only version) or whatever the latest version is when you're reading this.
Parsing it is very simple: just apply the regex /^(\S+?)[\s%]/ to grab every word, then replace all "_" (underscores) in the results with spaces. Finally, dump your results to whatever storage format you want. You'll be given separate lists of adjectives, adverbs, nouns, and verbs, and even a special (very useless/useful depending on what you're doing) "sense" index, which maps word senses, i.e. the individual meanings of words such as "shirt" or "pungent", to their synsets.
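As a minimal sketch of that approach (mine, not the answerer's), reading whatever file you feed it on standard input:

#!/usr/bin/perl
use strict;
use warnings;

while (my $line = <>) {
    # Grab everything up to the first whitespace or '%' (the '%'
    # covers sense keys of the form word%1:06:00:: in index.sense).
    if ($line =~ /^(\S+?)[\s%]/) {
        my $word = $1;
        $word =~ tr/_/ /;    # underscores join multi-word entries
        print "$word\n";
    }
}

Run it as, say, perl extract_words.pl index.noun > nouns.txt (the script name is just an example). The license header lines at the top of each file start with whitespace, so the ^(\S+?) anchor skips them automatically.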
Enjoy! Remember to include their copyright notice if you're using it in a project.
As others have suggested, the WordNet database files are a great source for parts of speech. That said, the examples used to extract the words aren't entirely correct. Each line is actually a "synonym set" (synset) consisting of multiple synonyms and their definition. Around 30% of words appear only as synonyms, so extracting just the first word misses a large amount of data.
The line format is pretty simple to parse (search.c, function parse_synset), but if all you're interested in is the words, the relevant part of the line is formatted as:
NNNNNNNN NN a NN word N [word N ...]
These correspond to:
- the synset's byte offset within the file (8 digits)
- the lexicographer file number (2 digits)
- the part of speech (1 character)
- the number of words (2 digits, hexadecimal)
- that many pairs of a word and its lexical ID (1 hexadecimal digit)
For example, from data.adj:
00004614 00 s 02 cut 0 shortened 0 001 & 00004412 a 0000 | with parts removed; "the drastically cut film"
- s, corresponding to adjective (wnutil.c, function getpos)
- cut with lexical ID 0
- shortened with lexical ID 0

A short Perl script to simply dump the words from the data.* files:
#!/usr/bin/perl
while (my $line = <>) {
    # If no 8-digit byte offset is present, skip this line
    if ( $line !~ /^[0-9]{8}\s/ ) { next; }
    chomp($line);
    my @tokens = split(/ /, $line);
    shift(@tokens);    # Byte offset
    shift(@tokens);    # File number
    shift(@tokens);    # Part of speech
    my $word_count = hex(shift(@tokens));
    foreach ( 1 .. $word_count ) {
        my $word = shift(@tokens);
        $word =~ tr/_/ /;
        $word =~ s/\(.*\)//;    # Strip adjective markers such as "(p)"
        print $word, "\n";
        shift(@tokens);    # Lexical ID
    }
}
A gist of the above script can be found here.
A more robust parser which stays true to the original source can be found here.
Both scripts are used in a similar fashion: ./wordnet_parser.pl DATA_FILE.
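For example, on the data.adj synset shown earlier, the short script above prints both synonyms (surrounding output elided):

./wordnet_parser.pl data.adj
...
cut
shortened
...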
See Kevin's word lists, particularly the "Part Of Speech Database." You'll have to do some minimal text processing on your own to split the database into multiple files, but that can be done very easily with a few grep commands.
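For instance, assuming the database follows the Moby-style layout (each line a word, a delimiter, then one-letter part-of-speech codes such as N for noun and V for verb; check the readme for the actual delimiter and code letters), a hypothetical Perl splitter might look like:

#!/usr/bin/perl
use strict;
use warnings;

# Split a Moby-style part-of-speech database into one file per code.
# The tab delimiter and the pos.<code>.txt names are assumptions;
# consult the readme for the real format.
my %out;
while (my $line = <>) {
    chomp($line);
    my ($word, $codes) = split(/\t/, $line, 2);
    next unless defined $codes;
    foreach my $code (split(//, $codes)) {
        unless (exists $out{$code}) {
            open($out{$code}, '>', "pos.$code.txt")
                or die "pos.$code.txt: $!";
        }
        print { $out{$code} } "$word\n";
    }
}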
The license terms are available on the "readme" page.