I understand the differences between supervised and unsupervised learning:
Supervised Learning is a way of "teaching" the classifier, using labeled data.
Unsupervised Learning lets the classifier "learn by itself", for example, using clustering.
But what is "weakly supervised learning"? How does it classify its examples?
Weak supervision is a branch of machine learning aimed at acquiring more labeled data for supervised training and modeling when:
- the available labeled data is insufficient to obtain a supervised model with good performance, or
- the available labeled data is noisy or comes from an imprecise source.
Weak supervision is an approach to machine learning in which high-level and often noisier sources of supervision are used to create much larger training sets much more quickly than could otherwise be produced by manual supervision (i.e. labeling examples manually, one by one).
Weak supervision is a branch of machine learning where noisy, limited, or imprecise sources are used to provide a supervision signal for labeling large amounts of training data in a supervised learning setting.
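To make "noisier sources of supervision" concrete, here is a minimal sketch of programmatic labeling. The toy sentiment task, the heuristics, and the helper names are all hypothetical, not any specific library's API: a handful of hand-written labeling functions vote on unlabeled examples, producing a large but noisy training set for an ordinary supervised learner.

```python
# Minimal weak-supervision sketch (hypothetical toy sentiment task):
# cheap heuristic "labeling functions" generate noisy labels for unlabeled text.

POSITIVE, NEGATIVE, ABSTAIN = 1, 0, None

def lf_contains_great(text):
    # Heuristic: the word "great" suggests a positive label.
    return POSITIVE if "great" in text.lower() else ABSTAIN

def lf_contains_terrible(text):
    # Heuristic: the word "terrible" suggests a negative label.
    return NEGATIVE if "terrible" in text.lower() else ABSTAIN

def lf_exclamation(text):
    # Weak, noisy cue: exclamation marks often (not always) mark positive reviews.
    return POSITIVE if "!" in text else ABSTAIN

LABELING_FUNCTIONS = [lf_contains_great, lf_contains_terrible, lf_exclamation]

def weak_label(text):
    """Majority vote over all labeling functions; None if every function abstains."""
    votes = [lf(text) for lf in LABELING_FUNCTIONS]
    votes = [v for v in votes if v is not ABSTAIN]
    return max(set(votes), key=votes.count) if votes else None

unlabeled = ["A great movie!", "Terrible plot and worse acting.", "It was fine."]
weakly_labeled = [(t, weak_label(t)) for t in unlabeled if weak_label(t) is not None]
# weakly_labeled can now be fed to an ordinary supervised learner,
# trading label quality for label quantity.
```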
An example of semi-supervised learning is merging clustering and classification algorithms. Clustering algorithms are unsupervised machine learning approaches for grouping data based on similarity.
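As a sketch of that idea (synthetic data, scikit-learn assumed; this is one possible way to combine the two, not the only one), one can cluster all points, give each cluster the majority label of the few labeled points it contains, and then train a standard classifier on the propagated labels:

```python
# Minimal semi-supervised sketch: cluster everything, propagate the few known
# labels to whole clusters, then train a supervised classifier on the result.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

X, y_true = make_blobs(n_samples=300, centers=3, random_state=0)
labeled_idx = np.random.RandomState(0).choice(len(X), size=9, replace=False)  # only 9 labels known

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Each cluster takes the majority label of its labeled members (0 if it has none).
y_weak = np.empty(len(X), dtype=int)
for c in np.unique(clusters):
    members = labeled_idx[clusters[labeled_idx] == c]
    y_weak[clusters == c] = np.bincount(y_true[members]).argmax() if len(members) else 0

clf = LogisticRegression().fit(X, y_weak)   # supervised step on propagated labels
print("accuracy vs. true labels:", clf.score(X, y_true))
```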
As several comments below mention, the situation is not as simple as I originally wrote in 2013.
The generally accepted view now is that weak supervision refers to learning from noisy or imprecise label sources, while semi-supervised learning refers to combining a small amount of labeled data with a large amount of unlabeled data.
There are also classifications more in line with my original answer; for example, Zhi-Hua Zhou's 2017 survey "A brief introduction to weakly supervised learning" treats weak supervision as an umbrella term covering incomplete supervision (only a subset of the training data is labeled), inexact supervision (only coarse-grained labels are available), and inaccurate supervision (the given labels are noisy).
In short: In weakly supervised learning, you use a limited amount of labeled data.
How you select this data, and what exactly you do with it, depends on the method. In general, you use a limited amount of data that is easy to obtain and/or makes a real difference, and then learn the rest. I consider bootstrapping to be a method that can be used in weakly supervised learning, but as Ben's comment below shows, this is not a universally accepted view.
See, for example, Chris Biemann's 2007 dissertation for a nice overview; it says the following about bootstrapping/weakly-supervised learning:
Bootstrapping, also called self-training, is a form of learning that is designed to use even fewer training examples, and is therefore sometimes called weakly supervised. Bootstrapping starts with a few training examples, trains a classifier, and uses the thought-to-be-positive examples yielded by this classifier for retraining. As the set of training examples grows, the classifier improves, provided that not too many negative examples are misclassified as positive, which could lead to a deterioration of performance.
For example, in the case of part-of-speech tagging, one usually trains an HMM (or maximum-entropy, or whatever) tagger on tens of thousands of words, each annotated with its POS. In the case of weakly supervised tagging, you might simply use a very small corpus of a few hundred words. You get some tagger, use it to tag a corpus of a few thousand words, train a tagger on that, and use it to tag an even bigger corpus. Obviously, you have to be smarter than this, but it is a good start. (See this paper for a more advanced example of a bootstrapped tagger.)
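The bootstrapping loop described above can be sketched in a few lines. Below is a minimal self-training example, assuming scikit-learn and a generic synthetic dataset rather than a real POS-tagged corpus, with an arbitrary 0.95 confidence threshold for deciding which self-labeled examples to add back to the training set:

```python
# Minimal self-training (bootstrapping) sketch: start from a tiny labeled seed,
# repeatedly add the classifier's most confident predictions, and retrain.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, random_state=0)
seed = np.arange(30)                      # tiny seed set, analogous to the 100s of tagged words
X_lab, y_lab = X[seed], y[seed]
X_unlab = np.delete(X, seed, axis=0)

for round_ in range(5):
    clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    if len(X_unlab) == 0:
        break
    proba = clf.predict_proba(X_unlab)
    confident = proba.max(axis=1) > 0.95  # only trust high-confidence predictions
    if not confident.any():
        break
    # Grow the training set with self-labeled examples and retrain next round.
    X_lab = np.vstack([X_lab, X_unlab[confident]])
    y_lab = np.concatenate([y_lab, proba[confident].argmax(axis=1)])
    X_unlab = X_unlab[~confident]
```

Real bootstrapped taggers add safeguards (balancing classes, capping how many examples are added per round, stopping when quality degrades), which is the "being smarter than this" the paragraph above refers to.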
Note: weakly supervised learning can also refer to learning with noisy labels (such labels can but do not need to be the result of bootstrapping)