KeyphraseCountVectorizer

class keyphrase_vectorizers.keyphrase_count_vectorizer.KeyphraseCountVectorizer(spacy_pipeline: Union[str, spacy.language.Language] = 'en_core_web_sm', pos_pattern: str = '<J.*>*<N.*>+', stop_words: Union[str, List[str]] = 'english', lowercase: bool = True, workers: int = 1, spacy_exclude: List[str] = ['parser', 'attribute_ruler', 'lemmatizer', 'ner', 'textcat'], custom_pos_tagger: Optional[callable] = None, max_df: Optional[int] = None, min_df: Optional[int] = None, binary: bool = False, dtype: numpy.dtype = numpy.int64, decay: Optional[float] = None, delete_min_df: Optional[float] = None)

KeyphraseCountVectorizer converts a collection of text documents to a matrix of document-token counts. The tokens are keyphrases that are extracted from the text documents based on their part-of-speech tags. The matrix rows correspond to the documents and the columns to the unique keyphrases; each cell contains the number of times a keyphrase occurs in a document. The part-of-speech pattern of keyphrases can be defined via the pos_pattern parameter. By default, the extracted keyphrases consist of zero or more adjectives followed by one or more nouns. The list of extracted keyphrases matching the defined part-of-speech pattern can be retrieved after fitting via get_feature_names_out().

Attention

If the vectorizer is used for languages other than English, the spacy_pipeline and stop_words parameters must be customized accordingly. Additionally, the pos_pattern parameter has to be customized, as the spaCy part-of-speech tags differ between languages. Without customization, words will be tagged with the wrong part-of-speech tags and no stopwords will be removed. You may also have to exclude/include different pipeline components via the spacy_exclude parameter for the spaCy POS tagger to work properly.
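For example, a German setup might look like the following sketch (an illustration, assuming the de_core_news_sm pipeline is installed; German spaCy models use the TIGER/STTS tagset, where adjectives are tagged ADJA/ADJD and nouns NN/NE, hence the adjusted pos_pattern):

    from keyphrase_vectorizers import KeyphraseCountVectorizer

    german_docs = [
        "Schlagwortextraktion ist eine Aufgabe der automatischen Textzusammenfassung.",
    ]

    # German pipeline, German stopwords, and a POS pattern for TIGER tags
    vectorizer = KeyphraseCountVectorizer(
        spacy_pipeline='de_core_news_sm',
        pos_pattern='<ADJ.*>*<N.*>+',
        stop_words='german',
    )
    vectorizer.fit(german_docs)
    print(vectorizer.get_feature_names_out())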

Parameters:
  • spacy_pipeline (Union[str, spacy.Language], default='en_core_web_sm') – A spacy.Language object or the name of the spaCy pipeline used to tag the parts-of-speech in the text. The default is the ‘en_core_web_sm’ pipeline.

  • pos_pattern (str, default='<J.*>*<N.*>+') – The regex pattern of POS-tags used to extract a sequence of POS-tagged tokens from the text. The default selects only keyphrases that consist of zero or more adjectives followed by one or more nouns.

  • stop_words (Union[str, List[str]], default='english') – Language of the stopwords to remove from the documents, e.g. ‘english’. Supported options are the stopword lists available in NLTK. If stop_words is not None, the corresponding stopwords are removed from keyphrases. If given a list of custom stopwords, those are removed instead.

  • lowercase (bool, default=True) – Whether the returned keyphrases should be converted to lowercase.

  • workers (int, default=1) – How many workers to use for spaCy part-of-speech tagging. If set to -1, all available worker threads of the machine are used. spaCy uses the specified number of cores to tag documents with parts-of-speech. Depending on the platform, starting many processes with multiprocessing can add a lot of overhead. In particular, the default start method, spawn, used on macOS/OS X (as of Python 3.8) and on Windows can be slow. Therefore, carefully consider whether this option is really necessary.

  • spacy_exclude (List[str], default=['parser', 'attribute_ruler', 'lemmatizer', 'ner', 'textcat']) – A list of spaCy pipeline components that should be excluded during POS-tagging. Removing unneeded pipeline components can sometimes make a big difference and improve loading and inference speed.

  • custom_pos_tagger (callable, default=None) – A callable that expects a list of strings in a ‘raw_documents’ parameter and returns a list of (word token, POS-tag) tuples. If this parameter is not None, the custom tagger function is used to tag words with parts-of-speech, while the spaCy pipeline is ignored (see the sketch after this parameter list).

  • max_df (int, default=None) – During fitting, ignore keyphrases that have a document frequency strictly higher than the given threshold.

  • min_df (int, default=None) – During fitting, ignore keyphrases that have a document frequency strictly lower than the given threshold. This value is also called the cut-off in the literature.

  • binary (bool, default=False) – If True, all non-zero counts are set to 1. This is useful for discrete probabilistic models that model binary events rather than integer counts.

  • dtype (type, default=np.int64) – Type of the matrix returned by fit_transform() or transform().

  • decay (float, default=None) – A value in [0, 1] that determines by what percentage the frequencies in the previous bag-of-words matrix are decreased at each iteration. For example, a value of .1 decreases the frequencies in the bag-of-words matrix by 10% at each iteration.

  • delete_min_df (float, default=None) – At each iteration, delete words from the vocabulary that are below this minimum frequency. This keeps the resulting bag-of-words matrix small, so that it does not explode in size with a growing vocabulary. If decay is None, this behaves like min_df.
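A minimal end-to-end sketch of the default (English) behavior described above (the example documents are illustrative):

    from keyphrase_vectorizers import KeyphraseCountVectorizer

    docs = [
        "Supervised learning is the machine learning task of learning a function that "
        "maps an input to an output based on example input-output pairs.",
        "Keywords are defined as phrases that capture the main topics discussed in a document.",
    ]

    vectorizer = KeyphraseCountVectorizer()
    X = vectorizer.fit_transform(docs)         # document-keyphrase count matrix
    print(vectorizer.get_feature_names_out())  # extracted keyphrases
    print(X.toarray())                         # counts per document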
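As an illustration of the custom_pos_tagger contract, the sketch below wraps NLTK's Penn Treebank tagger (an assumption; any callable with the same signature works, and the required NLTK resources must be downloaded). Since NLTK also emits Penn Treebank tags such as JJ and NN, the default pos_pattern still applies:

    from typing import List, Tuple

    import nltk  # assumes the 'punkt' and 'averaged_perceptron_tagger' resources are available

    from keyphrase_vectorizers import KeyphraseCountVectorizer

    def nltk_pos_tagger(raw_documents: List[str]) -> List[Tuple[str, str]]:
        # Return one flat list of (word token, POS-tag) tuples across all documents,
        # matching the documented custom_pos_tagger contract.
        tags = []
        for doc in raw_documents:
            tags.extend(nltk.pos_tag(nltk.word_tokenize(doc)))
        return tags

    vectorizer = KeyphraseCountVectorizer(custom_pos_tagger=nltk_pos_tagger)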

build_tokenizer() → callable

Return a function that splits a string into a sequence of tokens.

Returns:

tokenizer – A function to split a string into a sequence of tokens.

Return type:

callable

fit(raw_documents: List[str]) → object

Learn the keyphrases that match the defined part-of-speech pattern from the list of raw documents.

Parameters:

raw_documents (iterable) – An iterable of strings.

Returns:

self – Fitted vectorizer.

Return type:

object

fit_transform(raw_documents: List[str]) → List[List[int]]

Learn the keyphrases that match the defined part-of-speech pattern from the list of raw documents and return the document-keyphrase matrix. This is equivalent to fit followed by transform, but more efficiently implemented.

Parameters:

raw_documents (iterable) – An iterable of strings.

Returns:

X – Document-keyphrase matrix.

Return type:

array of shape (n_samples, n_features)

get_feature_names() → List[str]

DEPRECATED: get_feature_names() is deprecated in scikit-learn 1.0 and will be removed in scikit-learn 1.2. Please use get_feature_names_out() instead.

Array mapping from feature integer indices to feature name.

Returns:

feature_names – A list of fitted keyphrases.

Return type:

list

get_feature_names_out() → ndarray of str objects

Get fitted keyphrases for transformation.

Returns:

feature_names_out – Transformed keyphrases.

Return type:

ndarray of str objects

get_params(deep=True)

Get parameters for this estimator.

Parameters:

deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

params – Parameter names mapped to their values.

Return type:

dict

inverse_transform(X: List[List[int]]) → List[List[str]]

Return keyphrases per document with nonzero entries in X.

Parameters:

X ({array-like, sparse matrix} of shape (n_samples, n_features)) – Document-keyphrase matrix.

Returns:

X_inv – List of arrays of keyphrases.

Return type:

list of arrays of shape (n_samples,)
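Continuing the usage sketch from above, inverse_transform recovers the keyphrases with nonzero counts per document:

    X = vectorizer.fit_transform(docs)
    for doc_keyphrases in vectorizer.inverse_transform(X):
        print(doc_keyphrases)  # array of keyphrases occurring in this document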

partial_fit(raw_documents: List[str]) → object

Perform a partial fit and update the internal list of keyphrases with out-of-vocabulary (OOV) keyphrases.

Parameters:

raw_documents (iterable) – An iterable of strings.

Returns:

self – Partially fitted vectorizer.

Return type:

object

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:

**params (dict) – Estimator parameters.

Returns:

self – Estimator instance.

Return type:

estimator instance
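For example, in a hypothetical scikit-learn Pipeline (an illustration, not part of this package), the nested form looks like this:

    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import Pipeline

    from keyphrase_vectorizers import KeyphraseCountVectorizer

    pipe = Pipeline([('vectorizer', KeyphraseCountVectorizer()), ('clf', MultinomialNB())])
    # The <component>__<parameter> form reaches into the nested vectorizer
    pipe.set_params(vectorizer__lowercase=False, vectorizer__pos_pattern='<N.*>+')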

transform(raw_documents: List[str]) → List[List[int]]

Transform documents to document-keyphrase matrix. Extract token counts out of raw text documents using the keyphrases fitted with fit.

Parameters:

raw_documents (iterable) – An iterable of strings.

Returns:

X – Document-keyphrase matrix.

Return type:

sparse matrix of shape (n_samples, n_features)

update_bow(raw_documents: List[str]) → csr_matrix

Create or update the bag-of-keywords matrix.

Update the bag-of-keywords matrix by adding the newly transformed documents. This may add empty columns if new words are found and/or empty rows if new topics are found.

During this process, the previous bag-of-keywords matrix is decayed if self.decay has been set during init. Similarly, words whose frequency does not exceed self.delete_min_df are removed from the vocabulary and the bag-of-keywords matrix.

Parameters:

raw_documents (iterable) – An iterable of strings.

Returns:

X_ – Bag-of-keywords matrix.

Return type:

scipy.sparse.csr_matrix
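A hedged sketch of how partial_fit and update_bow might be combined for streaming documents; document_batches is an assumed placeholder for an iterable of lists of strings, and decay and delete_min_df are set at init as described above:

    from keyphrase_vectorizers import KeyphraseCountVectorizer

    vectorizer = KeyphraseCountVectorizer(decay=0.1, delete_min_df=2)

    for batch in document_batches:  # assumed: iterable of lists of strings
        vectorizer.partial_fit(batch)      # add new (OOV) keyphrases to the vocabulary
        X_ = vectorizer.update_bow(batch)  # decayed, cumulative bag-of-keywords matrix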

KeyphraseTfidfVectorizer

class keyphrase_vectorizers.keyphrase_tfidf_vectorizer.KeyphraseTfidfVectorizer(spacy_pipeline: Union[str, spacy.language.Language] = 'en_core_web_sm', pos_pattern: str = '<J.*>*<N.*>+', stop_words: Union[str, List[str]] = 'english', lowercase: bool = True, workers: int = 1, spacy_exclude: List[str] = ['parser', 'attribute_ruler', 'lemmatizer', 'ner'], custom_pos_tagger: Optional[callable] = None, max_df: Optional[int] = None, min_df: Optional[int] = None, binary: bool = False, dtype: numpy.dtype = numpy.float64, decay: Optional[float] = None, delete_min_df: Optional[float] = None, norm: str = 'l2', use_idf: bool = True, smooth_idf: bool = True, sublinear_tf: bool = False)

KeyphraseTfidfVectorizer converts a collection of text documents to a normalized tf or tf-idf document-token matrix. The tokens are keyphrases that are extracted from the text documents based on their part-of-speech tags. The matrix rows correspond to the documents and the columns to the unique keyphrases; each cell contains the tf or tf-idf value, depending on the parameter settings. The part-of-speech pattern of keyphrases can be defined via the pos_pattern parameter. By default, the extracted keyphrases consist of zero or more adjectives followed by one or more nouns. The list of extracted keyphrases matching the defined part-of-speech pattern can be retrieved after fitting via get_feature_names_out().

Attention

If the vectorizer is used for languages other than English, the spacy_pipeline and stop_words parameters must be customized accordingly. Additionally, the pos_pattern parameter has to be customized, as the spaCy part-of-speech tags differ between languages. Without customization, words will be tagged with the wrong part-of-speech tags and no stopwords will be removed. You may also have to exclude/include different pipeline components via the spacy_exclude parameter for the spaCy POS tagger to work properly.

Tf means term-frequency while tf-idf means term-frequency times inverse document-frequency. This is a common term-weighting scheme in information retrieval that has also found good use in document classification.

The goal of using tf-idf instead of the raw frequencies of occurrence of a token in a given document is to scale down the impact of tokens that occur very frequently in a given corpus and that are hence empirically less informative than features that occur in a small fraction of the training corpus.

The formula that is used to compute the tf-idf for a term t of a document d in a document set is tf-idf(t, d) = tf(t, d) * idf(t), and the idf is computed as idf(t) = log [ n / df(t) ] + 1 (if smooth_idf=False), where n is the total number of documents in the document set and df(t) is the document frequency of t; the document frequency is the number of documents in the document set that contain the term t. The effect of adding “1” to the idf in the equation above is that terms with zero idf, i.e., terms that occur in all documents in a training set, will not be entirely ignored. (Note that the idf formula above differs from the standard textbook notation that defines the idf as idf(t) = log [ n / (df(t) + 1) ]).

If smooth_idf=True (the default), the constant “1” is added to the numerator and denominator of the idf as if an extra document was seen containing every term in the collection exactly once, which prevents zero divisions: idf(t) = log [ (1 + n) / (1 + df(t)) ] + 1.

Furthermore, the formulas used to compute tf and idf depend on parameter settings that correspond to the SMART notation used in IR as follows:

Tf is “n” (natural) by default, “l” (logarithmic) when sublinear_tf=True. Idf is “t” when use_idf is given, “n” (none) otherwise. Normalization is “c” (cosine) when norm='l2', “n” (none) when norm=None.
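To make the two idf variants concrete, the following sketch computes both by hand for a toy corpus (plain NumPy; the natural logarithm is used, as in scikit-learn):

    import numpy as np

    n = 4                      # total number of documents
    df = np.array([1, 2, 4])   # document frequencies of three keyphrases

    idf_plain = np.log(n / df) + 1                # smooth_idf=False
    idf_smooth = np.log((1 + n) / (1 + df)) + 1   # smooth_idf=True

    # A keyphrase occurring in all 4 documents still gets idf = 1, not 0:
    print(idf_plain)   # ≈ [2.386, 1.693, 1.0]
    print(idf_smooth)  # ≈ [1.916, 1.511, 1.0]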

Parameters:
  • spacy_pipeline (Union[str, spacy.Language], default='en_core_web_sm') – A spacy.Language object or the name of the spaCy pipeline used to tag the parts-of-speech in the text. The default is the ‘en_core_web_sm’ pipeline.

  • pos_pattern (str, default='<J.*>*<N.*>+') – The regex pattern of POS-tags used to extract a sequence of POS-tagged tokens from the text. The default selects only keyphrases that consist of zero or more adjectives followed by one or more nouns.

  • stop_words (Union[str, List[str]], default='english') – Language of the stopwords to remove from the documents, e.g. ‘english’. Supported options are the stopword lists available in NLTK. If stop_words is not None, the corresponding stopwords are removed from keyphrases. If given a list of custom stopwords, those are removed instead.

  • lowercase (bool, default=True) – Whether the returned keyphrases should be converted to lowercase.

  • workers (int, default=1) – How many workers to use for spaCy part-of-speech tagging. If set to -1, all available worker threads of the machine are used. spaCy uses the specified number of cores to tag documents with parts-of-speech. Depending on the platform, starting many processes with multiprocessing can add a lot of overhead. In particular, the default start method, spawn, used on macOS/OS X (as of Python 3.8) and on Windows can be slow. Therefore, carefully consider whether this option is really necessary.

  • spacy_exclude (List[str], default=['parser', 'attribute_ruler', 'lemmatizer', 'ner']) – A list of spaCy pipeline components that should be excluded during POS-tagging. Removing unneeded pipeline components can sometimes make a big difference and improve loading and inference speed.

  • custom_pos_tagger (callable, default=None) – A callable that expects a list of strings in a ‘raw_documents’ parameter and returns a list of (word token, POS-tag) tuples. If this parameter is not None, the custom tagger function is used to tag words with parts-of-speech, while the spaCy pipeline is ignored (see the custom tagger sketch under KeyphraseCountVectorizer above).

  • max_df (int, default=None) – During fitting, ignore keyphrases that have a document frequency strictly higher than the given threshold.

  • min_df (int, default=None) – During fitting, ignore keyphrases that have a document frequency strictly lower than the given threshold. This value is also called the cut-off in the literature.

  • binary (bool, default=False) – If True, all non-zero counts are set to 1. This is useful for discrete probabilistic models that model binary events rather than integer counts.

  • dtype (type, default=np.float64) – Type of the matrix returned by fit_transform() or transform().

  • decay (float, default=None) – A value in [0, 1] that determines by what percentage the frequencies in the previous bag-of-words matrix are decreased at each iteration. For example, a value of .1 decreases the frequencies in the bag-of-words matrix by 10% at each iteration.

  • delete_min_df (float, default=None) – At each iteration, delete words from the vocabulary that are below this minimum frequency. This keeps the resulting bag-of-words matrix small, so that it does not explode in size with a growing vocabulary. If decay is None, this behaves like min_df.

  • norm ({'l1', 'l2'}, default='l2') – Each output row will have unit norm, either ‘l2’ (the sum of squares of vector elements is 1; the cosine similarity between two vectors is their dot product when the l2 norm has been applied) or ‘l1’ (the sum of absolute values of vector elements is 1).

  • use_idf (bool, default=True) – Enable inverse-document-frequency reweighting. If False, idf(t) = 1.

  • smooth_idf (bool, default=True) – Smooth idf weights by adding one to document frequencies, as if an extra document was seen containing every term in the collection exactly once. Prevents zero divisions.

  • sublinear_tf (bool, default=False) – Apply sublinear tf scaling, i.e. replace tf with 1 + log(tf).
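A minimal usage sketch with the default settings (l2-normalized, smoothed tf-idf; the example documents are illustrative):

    from keyphrase_vectorizers import KeyphraseTfidfVectorizer

    docs = [
        "Supervised learning is the machine learning task of learning a function that "
        "maps an input to an output based on example input-output pairs.",
        "Keywords are defined as phrases that capture the main topics discussed in a document.",
    ]

    vectorizer = KeyphraseTfidfVectorizer()
    X = vectorizer.fit_transform(docs)         # tf-idf-weighted document-keyphrase matrix
    print(vectorizer.get_feature_names_out())  # extracted keyphrases
    print(X.toarray())                         # l2-normalized tf-idf values per document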

build_tokenizer() → callable

Return a function that splits a string into a sequence of tokens.

Returns:

tokenizer – A function to split a string into a sequence of tokens.

Return type:

callable

fit(raw_documents: List[str]) → object

Learn the keyphrases that match the defined part-of-speech pattern and idf from the list of raw documents.

Parameters:

raw_documents (iterable) – An iterable of strings.

Returns:

self – Fitted vectorizer.

Return type:

object

fit_transform(raw_documents: List[str]) → List[List[float]]

Learn the keyphrases that match the defined part-of-speech pattern and idf from the list of raw documents, then return the document-keyphrase matrix. This is equivalent to fit followed by transform, but more efficiently implemented.

Parameters:

raw_documents (iterable) – An iterable of strings.

Returns:

X – Tf-idf-weighted document-keyphrase matrix.

Return type:

sparse matrix of shape (n_samples, n_features)

get_feature_names() → List[str]

DEPRECATED: get_feature_names() is deprecated in scikit-learn 1.0 and will be removed in scikit-learn 1.2. Please use get_feature_names_out() instead.

Array mapping from feature integer indices to feature name.

Returns:

feature_names – A list of fitted keyphrases.

Return type:

list

get_feature_names_out() → ndarray of str objects

Get fitted keyphrases for transformation.

Returns:

feature_names_out – Transformed keyphrases.

Return type:

ndarray of str objects

get_params(deep=True)

Get parameters for this estimator.

Parameters:

deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

params – Parameter names mapped to their values.

Return type:

dict

inverse_transform(X: List[List[int]]) → List[List[str]]

Return keyphrases per document with nonzero entries in X.

Parameters:

X ({array-like, sparse matrix} of shape (n_samples, n_features)) – Document-keyphrase matrix.

Returns:

X_inv – List of arrays of keyphrases.

Return type:

list of arrays of shape (n_samples,)

partial_fit(raw_documents: List[str]) → object

Perform a partial fit and update the internal list of keyphrases with out-of-vocabulary (OOV) keyphrases.

Parameters:

raw_documents (iterable) – An iterable of strings.

Returns:

self – Partially fitted vectorizer.

Return type:

object

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:

**params (dict) – Estimator parameters.

Returns:

self – Estimator instance.

Return type:

estimator instance

transform(raw_documents: List[str]) → List[List[float]]

Transform documents to document-keyphrase matrix. Uses the keyphrases and document frequencies (df) learned by fit (or fit_transform).

Parameters:

raw_documents (iterable) – An iterable of strings.

Returns:

X – Tf-idf-weighted document-keyphrase matrix.

Return type:

sparse matrix of shape (n_samples, n_features)

update_bow(raw_documents: List[str]) → csr_matrix

Create or update the bag-of-keywords matrix.

Update the bag-of-keywords matrix by adding the newly transformed documents. This may add empty columns if new words are found and/or empty rows if new topics are found.

During this process, the previous bag-of-keywords matrix is decayed if self.decay has been set during init. Similarly, words whose frequency does not exceed self.delete_min_df are removed from the vocabulary and the bag-of-keywords matrix.

Parameters:

raw_documents (iterable) – An iterable of strings.

Returns:

X_ – Bag-of-keywords matrix.

Return type:

scipy.sparse.csr_matrix