skoot.model_validation.DistHypothesisValidator

class skoot.model_validation.DistHypothesisValidator(cols=None, as_df=True, alpha=0.05, action='warn', categorical_strategy='ratio')[source]

Validate test distributions using various hypothesis tests.

The distribution validator learns statistics from the training set and then validates that the test set features match their expected distributions. This can be useful for model validation tasks where model monitoring needs to take place.

For continuous (float) features, a two-tailed T-test will be applied to the test data to ensure it matches the distribution of the training data. For categorical (int, object) features, we compare the frequencies of different categorical levels within a tolerance of alpha.

Note: this class is NaN-safe, meaning if it is used early in your pipeline when you still have NaN values in your features, it will still function!
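The continuous check described above can be sketched with scipy. This is a minimal illustration of the idea, not skoot's internal code: `matches_training` and the stand-in arrays are invented for the example, and the stored `(mean, standard_dev, n_obs)` tuple mirrors the `statistics_` attribute documented below.

```python
import numpy as np
from scipy import stats

# Stand-in training/test features (evenly spaced values keep the
# example deterministic)
train = np.linspace(-3.0, 3.0, 500)
test_same = np.linspace(-3.0, 3.0, 200)      # same center as training
test_shifted = np.linspace(-1.0, 5.0, 200)   # mean shifted by +2

# Statistics learned at fit time, as in statistics_: (mean, standard_dev, n_obs)
mean, std, n = train.mean(), train.std(ddof=1), train.size

def matches_training(test, alpha=0.05):
    # Two-tailed t-test of the test feature against the stored
    # training statistics; p < alpha rejects the null hypothesis
    _, p = stats.ttest_ind_from_stats(
        mean, std, n, test.mean(), test.std(ddof=1), test.size)
    return p >= alpha

print(matches_training(test_same))     # True
print(matches_training(test_shifted))  # False
```

A shifted test distribution fails the check, while one centered like the training data passes.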

Parameters:

cols : array-like, shape=(n_features,)

The names of the columns on which to apply the transformation. Unlike other BasePDTransformer instances, if cols is None, it will only fit the numerical columns, since statistics such as standard deviation cannot be computed on categorical features. For column types that are integers or objects, the ratio of frequency for each class level will be compared to the expected ratio within a tolerance of alpha.

as_df : bool, optional (default=True)

Whether to return a Pandas DataFrame in the transform method. If False, will return a Numpy ndarray instead. Since most skoot transformers depend on explicitly-named DataFrame features, the as_df parameter is True by default.

alpha : float, optional (default=0.05)

The alpha value for the T-test or level ratio comparison. If the resulting p-value is less than alpha, the null hypothesis is rejected, meaning the variable likely follows a different distribution than the training set.

action : str or unicode, optional (default="warn")

The default action for handling validation mismatches. Options include "warn", "raise" or "ignore". If action is "raise", a ValueError will be raised on a mismatch.

categorical_strategy : str, unicode or None, optional (default="ratio")

How to validate categorical features. The default, "ratio", compares the ratio of each level's frequency to the overall count of samples in the feature within an absolute tolerance of alpha. If None, no validation is performed on categorical features.
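The "ratio" strategy can be sketched in a few lines of pandas. This is a hedged illustration of the documented behavior, not skoot's implementation: `levels_match`, `ratios`, and the toy Series are invented for the example.

```python
import pandas as pd

def ratios(s):
    # Frequency of each level relative to the number of samples
    return s.value_counts(normalize=True)

train = pd.Series(["a"] * 70 + ["b"] * 30)
test_ok = pd.Series(["a"] * 68 + ["b"] * 32)   # ratios within 0.05 of training
test_bad = pd.Series(["a"] * 40 + ["b"] * 60)  # "a" ratio off by 0.30

def levels_match(train_s, test_s, alpha=0.05):
    r_train, r_test = ratios(train_s), ratios(test_s)
    # Compare each training level's ratio within absolute tolerance alpha;
    # a level absent from the test set counts as ratio 0.0
    return all(abs(r_train[lvl] - r_test.get(lvl, 0.0)) <= alpha
               for lvl in r_train.index)

print(levels_match(train, test_ok))   # True  (0.70 vs 0.68)
print(levels_match(train, test_bad))  # False (0.70 vs 0.40)
```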

Attributes

statistics_ (list, shape=(n_features,)) A list of tuples over the training features. For continuous features: (mean, standard_dev, n_obs). For categorical features: (present_levels, present_counts, n_obs).
fit_cols_ (list) The list of column names on which the transformer was fit. This is used to validate the presence of the features in the test set during the transform stage.

Notes

This class is NaN-safe, meaning that if it is used early in your pipeline while you still have NaN values in your features, it will still function. This is a double-edged sword, since computing np.nanmean on a feature of mostly-NaN values will not be very meaningful.
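The caveat about mostly-NaN features can be seen directly with numpy; the toy array below is invented for the example.

```python
import numpy as np

x = np.array([1.0, np.nan, np.nan, np.nan, 2.0])

# NaN-aware statistics still compute without error...
print(np.nanmean(x))                   # 1.5
# ...but only 2 of the 5 values actually inform the result
print(np.count_nonzero(~np.isnan(x)))  # 2
```

The learned statistics look valid, yet they describe only a small fraction of the feature's observations.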

Methods

fit(X[, y]) Fit the transformer.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
set_params(**params) Set the parameters of this estimator.
transform(X) Validate the features in the test dataframe.
__init__(cols=None, as_df=True, alpha=0.05, action='warn', categorical_strategy='ratio')[source]

Initialize self. See help(type(self)) for accurate signature.

fit(X, y=None)[source]

Fit the transformer.

Parameters:

X : pd.DataFrame, shape=(n_samples, n_features)

The Pandas frame to fit. The frame will only be fit on the prescribed cols (see __init__) or all if cols is None.

y : array-like or None, shape=(n_samples,), optional (default=None)

Pass-through for sklearn.pipeline.Pipeline.

fit_transform(X, y=None, **fit_params)[source]

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters:

X : numpy array of shape [n_samples, n_features]

Training set.

y : numpy array of shape [n_samples]

Target values.

Returns:

X_new : numpy array of shape [n_samples, n_features_new]

Transformed array.

get_params(deep=True)[source]

Get parameters for this estimator.

Parameters:

deep : boolean, optional

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

params : mapping of string to any

Parameter names mapped to their values.

set_params(**params)[source]

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns:

self
transform(X)[source]

Validate the features in the test dataframe.

This method will apply the validation test over each prescribed feature, and raise or warn appropriately.

Parameters:

X : pd.DataFrame, shape=(n_samples, n_features)

The Pandas frame to validate. The operation will be applied to a copy of the input data, and the result will be returned.

Returns:

X : pd.DataFrame or np.ndarray, shape=(n_samples, n_features)

The operation is applied to a copy of X, and the result set is returned.
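The warn/raise/ignore behavior described for the action parameter can be sketched as follows; `handle_mismatch` is a hypothetical helper written for this example, not skoot's internal function.

```python
import warnings

def handle_mismatch(msg, action="warn"):
    # Dispatch on the configured action: raise, warn, or silently ignore
    if action == "raise":
        raise ValueError(msg)
    if action == "warn":
        warnings.warn(msg)
    # action == "ignore": do nothing

# A "warn" action surfaces the mismatch but lets the pipeline continue
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    handle_mismatch("feature 'x' failed the t-test", action="warn")
print(len(caught))  # 1

# A "raise" action aborts with a ValueError
try:
    handle_mismatch("feature 'x' failed the t-test", action="raise")
except ValueError as e:
    print("raised:", e)
```

Using "warn" during transform is convenient for model monitoring, since drifted features are reported without stopping the scoring pipeline.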
