interpretability



some interesting papers on interpretable machine learning, largely organized based on this interpretable ml review (murdoch et al. 2019) and notes from this interpretable ml book (molnar 2019).

reviews

definitions

The definition of interpretability I find most useful is that given in murdoch et al. 2019: basically that interpretability requires a pragmatic approach in order to be useful. As such, interpretability is only defined with respect to a specific audience + problem and an interpretation should be evaluated in terms of how well it benefits a specific context. It has been defined and studied more broadly in a variety of works:

overviews

  • Towards a Generic Framework for Black-box Explanation Methods (henin & metayer 2019)
    • sampling - selection of inputs to submit to the system to be explained
    • generation - analysis of links between selected inputs and corresponding outputs to generate explanations
      1. proxy - approximates model (ex. rule list, linear model)
      2. explanation generation - explains the proxy (ex. just give most important 2 features in rule list proxy, ex. LIME gives coefficients of linear model, Shap: sums of elements)
    • interaction (with the user)
    • this is a super useful way to think about explanations (especially local), but doesn’t work for SHAP / CD which are more about how much a variable contributes rather than a local approximation

evaluating interpretability

Evaluating interpretability can be very difficult (largely because it rarely makes sense to talk about interpretability outside of a specific context). The best possible evaluation of interpretability requires benchmarking it with respect to the relevant audience in a context. For example, if an interpretation claims to help understand radiology models, it should be tested based on how well it helps radiologists when actually making diagnoses. The papers here try to find more generic alternative ways to evaluate interp methods (or just define desiderata to do so).

basic failures

adv. vulnerabilities

  • Interpretation of Neural Networks is Fragile (ghorbani et al. 2018)
    • minor perturbations to inputs can drastically change DNN interpretations
  • How can we fool LIME and SHAP? Adversarial Attacks on Post hoc Explanation Methods (slack, …, singh, lakkaraju, 2020)
    • we can build classifiers which use important features (such as race) but explanations will not reflect that
    • basically, the classifier behaves differently on the out-of-distribution perturbations that LIME and SHAP rely on
  • Fooling Neural Network Interpretations via Adversarial Model Manipulation (heo, joo, & moon 2019) - can change model weights so that it keeps predictive accuracy but changes its interpretation
    • motivation: could falsely look like a model is “fair” because it places little saliency on sensitive attributes
      • output of model can still be checked regardless
    • fooled interpretation generalizes to entire validation set
    • can force the new saliency to be whatever we like
      • passive fooling - highlighting uninformative pixels of the image
      • active fooling - highlighting a completely different object, the firetruck
    • model does not actually change that much - predictions when manipulating pixels in order of saliency remain similar, very different from random (fig 4)
  • Counterfactual Explanations Can Be Manipulated (slack,…,singh, 2021) - minor changes in the training objective can drastically change counterfactual explanations

intrinsic interpretability (i.e. how can we fit a simpler model)

For an implementation of many of these models, see the python imodels package.
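
A minimal sketch of what using the package looks like, assuming the sklearn-style `fit`/`predict` API that imodels advertises; the exact class names (e.g. `FIGSClassifier`) and arguments may differ across versions:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

from imodels import FIGSClassifier  # assumption: class name as in recent imodels releases

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = FIGSClassifier(max_rules=10)  # sparsity knob (argument name may vary by version)
model.fit(X_train, y_train)
print(model)  # most imodels estimators print their learned rules / trees
print(accuracy_score(y_test, model.predict(X_test)))
```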

decision rules overview

📌 see also notes on logic

  • 2 basic concepts for a rule
    • coverage = support
    • accuracy = confidence = consistency
      • measures for rules: precision, info gain, correlation, m-estimate, Laplace estimate
  • these algorithms usually don’t support regression, but you can get regression by cutting the outcome into intervals
  • why might these be useful?
  • connections
    • every decision list is a (one-sided) decision tree
    • every decision tree can be expressed as an equivalent decision list (by listing each path to a leaf as a decision rule)
    • leaves of a decision tree (or a decision list) form a decision set
  • recent work directly optimizes the performance metric (e.g., accuracy) with soft or hard sparsity constraints on the tree size, where sparsity is measured by the number of leaves in the tree using:
    1. mathematical programming, including mixed integer programming (MIP) / SAT solvers
    2. stochastic search through the space of trees
    3. customized dynamic programming algorithms that incorporate branch-and-bound techniques for reducing the size of the search space


rule sets

Rule sets commonly look like a series of independent if-then rules. Unlike trees / lists, these rules can be overlapping and might not cover the whole space. Final predictions can be made via majority vote, using the most accurate rule, or averaging predictions. Sometimes also called rule ensembles.
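
A toy sketch (not from any particular paper) of how an overlapping, non-exhaustive rule set can be turned into predictions by voting; the rules and features here are made up for illustration:

```python
# illustrative rules; in practice these come from a rule-learning algorithm
rules = [
    {"if": lambda x: x["age"] > 60 and x["bp"] > 140, "then": 1},
    {"if": lambda x: x["bmi"] < 20, "then": 0},
    {"if": lambda x: x["bp"] > 160, "then": 1},
]

def predict(x, default=0):
    votes = [r["then"] for r in rules if r["if"](x)]  # rules may overlap or not fire at all
    if not votes:
        return default  # the rule set need not cover the whole space
    return round(sum(votes) / len(votes))  # majority vote; could instead use the most accurate rule

print(predict({"age": 70, "bp": 150, "bmi": 25}))  # -> 1
```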

rule lists

  • oneR algorithm - select feature that carries most information about the outcome and then split multiple times on that feature
  • sequential covering - keep trying to cover more points sequentially (see the sketch after this list)
  • pre-mining frequent patterns (want them to apply to a large amount of data and not have too many conditions)
    • FP-Growth algorithm (borgelt 2005) is fast
    • Apriori + Eclat do the same thing, but with different speeds
  • interpretable classifiers using rules and bayesian analysis (letham et al. 2015)
    • start by pre-mining frequent patterns rules
      • current approach does not allow for negation (e.g. not diabetes) and must split continuous variables into categorical somehow (e.g. quartiles)
      • mines things that frequently occur together, but doesn’t look at outcomes in this step - okay (since this is all about finding rules with high support)
    • learn rules w/ prior for short rule conditions and short lists
      • start w/ random list
      • sample new lists by adding/removing/moving a rule
      • at the end, return the list that had the highest probability
    • scalable bayesian rule lists (yang et al. 2017) - faster algorithm for computing
      • doesn’t return entire posterior
    • learning certifiably optimal rules lists (angelino et al. 2017) - even faster optimization for categorical feature space
      • can get upper / lower bounds for loss = risk + $\lambda$ * listLength
      • doesn’t return entire posterior
  • Expert-augmented machine learning (gennatas et al. 2019)
    • make rule lists, then compare the outcomes for each rule with what clinicians think should be outcome for each rule
    • look at rules with biggest disagreement and engineer/improve rules or penalize unreliable rules
  • Fast and frugal heuristics: The adaptive toolbox (gigerenzer et al. 1999) - makes rule lists that can split on either node of the tree each time
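
A minimal numpy sketch of the sequential-covering idea from the list above: greedily learn one high-precision rule (here just a single-feature threshold), remove the examples it covers, and repeat. Real algorithms (e.g. CN2, RIPPER) use much better rule search and pruning; everything here is illustrative:

```python
import numpy as np

def learn_one_rule(X, y):
    """Return (feature, threshold, sign) of the single-split rule with highest precision on class 1."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for sign in (1, -1):  # sign=1 means "x_j > t", sign=-1 means "x_j < t"
                covered = sign * X[:, j] > sign * t
                if covered.sum() == 0:
                    continue
                precision = y[covered].mean()
                if best is None or precision > best[0]:
                    best = (precision, j, t, sign)
    return best[1:]

def sequential_covering(X, y, n_rules=3):
    rules, X_rem, y_rem = [], X.copy(), y.copy()
    for _ in range(n_rules):
        if y_rem.sum() == 0:  # all positives are covered
            break
        j, t, sign = learn_one_rule(X_rem, y_rem)
        rules.append((j, t, sign))
        covered = sign * X_rem[:, j] > sign * t
        X_rem, y_rem = X_rem[~covered], y_rem[~covered]  # keep only the uncovered points
    return rules

rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = ((X[:, 0] > 0.7) | (X[:, 2] < 0.2)).astype(int)
print(sequential_covering(X, y))
```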

trees

Trees suffer from the fact that they have to cover the entire decision space and often we end up with replicated subtrees.

  • optimal trees
    • motivation
      • cost-complexity pruning (breiman et al. 1984 ch 3) - greedily prune while minimizing loss + $\lambda \cdot (\text{numLeaves})$ (see the sklearn sketch after this list)
      • replicated subtree problem (Bagallo & Haussler, 1990) - they propose iterative algorithms to try to overcome it
    • Generalized and Scalable Optimal Sparse Decision Trees (lin…rudin, seltzer, 2020)
      • optimize for $\min L(X, y) + \lambda \cdot (\text{numLeaves})$
      • full decision tree optimization is NP-hard (Laurent & Rivest, 1976)
      • can optimize many different losses (e.g. accuracy, AUC)
      • speedups: use dynamic programming, prune the search-space with bounds
        • How Smart Guessing Strategies Can Yield Massive Scalability Improvements for Sparse Decision Tree Optimization (mctavish…rudin, seltzer, 2021)
        • hash trees with bit-vectors that represent similar trees using shared subtrees
          • tree is a set of leaves
        • derive many bounds
          • e.g. if know best loss so far, know shouldn’t add too many leaves since each adds $\lambda$ to the total loss
          • e.g. similar-support bound - if two features are similar, then bounds for splitting on the first can be used to obtain bounds for the second
      • optimal sparse decision trees (hu et al. 2019) - previous paper, slower
        • bounds: Upper Bound on Number of Leaves, Leaf Permutation Bound
    • optimal classification trees methodology paper (bertsimas & dunn, 2017) - solve optimal tree with expensive, mixed-integer optimization - realistically, usually too slow
      • $\begin{array}{cl} \min & \overbrace{R_{xy}(T)}^{\text{misclassification err}}+\alpha|T| \\ \text{s.t.} & N_{x}(l) \geq N_{\min} \quad \forall l \in \text{leaves}(T) \end{array}$
      • $|T|$ is the number of branch nodes in tree $T$
      • $N_x(l)$ is the number of training points contained in leaf node $l$
      • optimal classification trees vs PECARN (bertsimas et al. 2019)
    • Learning Optimal Fair Classification Trees (jo et al. 2022)
    • Better Short than Greedy: Interpretable Models through Optimal Rule Boosting (boley, …, webb, 2021) - find optimal tree ensemble (only works for very small data)
  • connections with boosting
    • Fast Interpretable Greedy-Tree Sums (FIGS) (tan et al. 2022) - extend cart to learn concise tree ensembles 🌳 ➡️ 🌱+🌱
      • very nice results for generalization + disentanglement
    • AdaTree - learn Adaboost stumps then rewrite as a tree (grossmann, 2004) 🌱+🌱 ➡️ 🌳
      • note: easy to rewrite boosted stumps as tree (just repeat each stump for each node at a given depth)
    • MediBoost - again, learn boosted stumps then rewrite as a tree (valdes…solberg 2016) 🌱+🌱 ➡️ 🌳 but with 2 tweaks:
      • shrinkage: use membership function that accelerates convergence to a decision (basically shrinkage during boosting)
      • prune the tree in a manner that does not affect the tree’s predictions
        • prunes branches that are impossible to reach by tracking the valid domain for every attribute (during training)
        • post-prune the tree bottom-up by recursively eliminating the parent nodes of leaves with identical predictions
    • AddTree = additive tree - learn a single tree, but rather than using only the current node’s data to decide the next split, also allow the remaining data to influence this split, with a potentially different weight (luna, …, friedman, solberg, valdes, 2019)
      • the weight is chosen as a hyperparameter
  • bayesian trees
    • Bayesian Treed Models (chipman et al. 2001) - impose priors on tree parameters
      • treed models - fit a model (e.g. linear regression) in leaf nodes
      • tree structure e.g. depth, splitting criteria
      • values in terminal nodes conditioned on tree structure
      • residual noise’s standard deviation
      • Stochastic gradient boosting (friedman 2002) - boosting where at each iteration a subsample of the training data is used
    • BART: Bayesian additive regression trees (chipman et al. 2008) - learns an ensemble of tree models using MCMC on a distr. imbued with a prior (not interpretable)
      • pre-specify number of trees in ensemble
      • MCMC step: add split, remove split, switch split
      • cycles through the trees one at a time
  • history
    • automatic interaction detection (AID) regression trees (Morgan & Sonquist, 1963)
    • THeta Automatic Interaction Detection (THAID) classification trees (Messenger & Mandell, 1972)
    • Chi-squared Automatic Interaction Detector (CHAID) (Kass, 1980)
    • CART: Classification And Regression Trees (Breiman et al. 1984) - splits on GINI
    • ID3 (Quinlan, 1986)
    • C4.5 (Quinlan, 1993) - splits on binary entropy instead of GINI
  • open problems
    • ensemble methods
    • improvements in splitting criteria, missing variables
    • longitudinal data, survival curves
  • misc
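
As a concrete counterpart to the cost-complexity pruning bullet above, sklearn exposes CART-style pruning directly via `ccp_alpha`; this is the greedy baseline that the optimal-tree papers improve on. A minimal sketch (dataset and alpha sweep are arbitrary):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# candidate alphas for the penalty loss + alpha * (numLeaves)
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_tr, y_tr)
for alpha in path.ccp_alphas[::5]:
    tree = DecisionTreeClassifier(ccp_alpha=alpha, random_state=0).fit(X_tr, y_tr)
    print(f"alpha={alpha:.4f}  leaves={tree.get_n_leaves()}  acc={tree.score(X_te, y_te):.3f}")
```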

linear (+algebraic) models

supersparse models

gams (generalized additive models)

symbolic regression

Symbolic regression learns a symbolic expression (e.g. a mathematical formula) for a function from data, with priors on which kinds of symbols (e.g. sin, exp) are more “difficult”.
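
A toy brute-force sketch of the idea (real systems use genetic programming or neural guidance over full expression trees); the symbol library, complexity penalties, and target formula are made up for illustration:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 3, size=200)
y = 2 * np.sin(x) + 0.5 * x  # ground-truth formula to recover

basis = {"x": x, "sin(x)": np.sin(x), "exp(x)": np.exp(x), "log(x)": np.log(x)}
complexity = {"x": 0.0, "sin(x)": 0.1, "exp(x)": 0.2, "log(x)": 0.2}  # prior: fancier symbols cost more

best = None
for terms in itertools.combinations(basis, 2):  # all 2-term formulas from the library
    A = np.column_stack([basis[t] for t in terms])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    score = np.mean((A @ coef - y) ** 2) + sum(complexity[t] for t in terms)
    if best is None or score < best[0]:
        best = (score, terms, coef)

print(best[1], np.round(best[2], 2))  # expect ('x', 'sin(x)') with coefficients ~[0.5, 2.0]
```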

example-based = case-based (e.g. prototypes, nearest neighbor)

  • ProtoPNet: This looks like that (2nd paper) (chen, …, rudin, 2018) - learn convolutional prototypes that are smaller than the original input size
    • use L2 distance in repr space to measure distance between patches and prototypes (see the numpy sketch after this list)

    • loss function
      • require the filters to be identical to the latent representation of some training image patch
      • cluster image patches of a particular class around the prototypes of the same class, while separating image patches of different classes
    • maxpool class prototypes so spatial location doesn’t matter
      • also get heatmap of where prototype was activated (only max really matters)
    • train in 3 steps
      • train everything: classification + clustering around intraclass prototypes + separation between interclass prototypes (last layer fixed to 1s / -0.5s)
      • project prototypes to data patches
      • learn last (linear) layer
    • ProtoNets: original prototypes paper (li, …, rudin, 2017)
      • uses encoder/decoder setup
      • encourage every prototype to be similar to at least one encoded input
      • results: learned prototypes in fact look like digits
        • correct class prototypes go to correct classes
      • loss: classification + reconstruction + distance to a training point
  • This Looks Like That, Because … Explaining Prototypes for Interpretable Image Recognition (nauta et al. 2020)
    • add textual quantitative information about visual characteristics deemed important by the classification model e.g. colour hue, shape, texture, contrast and saturation
  • Neural Prototype Trees for Interpretable Fine-Grained Image Recognition (nauta et al. 2021) - build decision trees on top of prototypes
    • performance is slightly poor until they use ensembles
  • XProtoNet: Diagnosis in Chest Radiography With Global and Local Explanations (kim et al. 2021)
    • alter ProtoPNet to use dynamically sized patches for prototype matching rather than fixed-size patches
  • TesNet: Interpretable Image Recognition by Constructing Transparent Embedding Space (wang et al. 2021) - alter ProtoPNet to get “orthogonal” basis concepts
  • ProtoPShare: Prototype Sharing for Interpretable Image Classification and Similarity Discovery (Rymarczyk et al. 2020) - share some prototypes between classes with data-dependent merge pruning

    • merge “similar” prototypes, where similarity is measured as dist of all training patches in repr. space
    • ProtoMIL: Multiple Instance Learning with Prototypical Parts for Fine-Grained Interpretability (Rymarczyk et al. 2021)
    • Interpretable Image Classification with Differentiable Prototypes Assignment (rymarczyk et al. 2021)
  • These do not Look Like Those: An Interpretable Deep Learning Model for Image Recognition (singh & yow, 2021) - all weights for prototypes are either 1 or -1
  • Towards Explainable Deep Neural Networks (xDNN) (angelov & soares 2019) - more complex version of using prototypes
  • Self-Interpretable Model with Transformation Equivariant Interpretation (wang & wang, 2021)
    • generate data-dependent prototypes for each class and formulate the prediction as the inner product between each prototype and the extracted features
      • interpretation is hadamard product of prototype and extracted features (prediction is sum of this product)
    • interpretations can be easily visualized by upsampling from the prototype space to the input data space
    • regularization
      • reconstruction regularizer - regularizes the interpretations to be meaningful and comprehensible
        • for each image, enforce each prototype to be similar to its corresponding class’s latent repr.
      • transformation regularizer - constrains the interpretations to be transformation equivariant
    • self-consistency score quantifies the robustness of interpretation by measuring the consistency of interpretations to geometric transformations
  • Case-Based Reasoning for Assisting Domain Experts in Processing Fraud Alerts of Black-Box Machine Learning Models
  • ProtoAttend: Attention-Based Prototypical Learning (arik & pfister, 2020) - unlike ProtoPNet, each prediction is made as a weighted combination of similar input samples (like nearest-neighbor)
  • Explaining Latent Representations with a Corpus of Examples (crabbe, …, van der schaar 2021) - for an individual prediction,
    1. Which corpus examples explain the prediction issued for a given test example?
    2. What features of these corpus examples are relevant for the model to relate them to the test example?
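
A small numpy sketch of the ProtoPNet-style similarity computation referenced above; shapes and values are stand-ins, and the distance-to-similarity transform follows the log-ratio form used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
feat = rng.normal(size=(7, 7, 64))      # conv feature map: H x W x C
prototypes = rng.normal(size=(10, 64))  # 10 prototypes living in the same C-dim space

patches = feat.reshape(-1, 64)  # every spatial location is a candidate patch
d2 = ((patches[None, :, :] - prototypes[:, None, :]) ** 2).sum(-1)  # (10, 49) squared L2 distances
sim = np.log((d2 + 1) / (d2 + 1e-4))  # small distance -> large similarity
sim_map = sim.reshape(10, 7, 7)       # heatmap of where each prototype activates
scores = sim_map.max(axis=(1, 2))     # "max-pool" over locations: only the best match matters
print(scores.shape)                   # (10,) -> fed into the final linear layer
```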

interpretable neural nets

connecting dnns and rules

  • interpretable
    • Distilling a Neural Network Into a Soft Decision Tree (frosst & hinton 2017) - distills a DNN into a DNN-like tree in which a sigmoid neuron at each node decides which path to follow (see the routing sketch after this list)
      • training on distilled DNN predictions outperforms training on original labels
      • to make the decision closer to a hard cut, can multiply by a large scalar before applying sigmoid
      • parameters updated with backprop
      • regularization to encourage all paths to be used with equal probability
    • Learning Binary Decision Trees by Argmin Differentiation (zantedeschi et al. 2021)
      • argmin differentiation - solving an optimization problem as a differentiable module within a parent problem tackled with gradient-based optimization methods
      • relax hard splits into soft ones and learn via gradient descent
    • [Optimizing for Interpretability in Deep Neural Networks with Tree Regularization](https://www.jair.org/index.php/jair/article/view/12558) (wu…doshi-velez, 2021) - regularize DNN prediction function towards tree (potentially only for some region)
    • Adaptive Neural Trees (tanno et al. 2019) - adaptive neural tree mechanism with trainable nodes, edges, and leaves
  • loosely interpretable
    • Attention Convolutional Binary Neural Tree for Fine-Grained Visual Categorization (ji et al. 2020)
    • Oblique Decision Trees from Derivatives of ReLU Networks (lee & jaakkola, 2020)
      • locally constant networks (which are derivatives of relu networks) are equivalent to trees
      • they perform well and can use DNN tools e.g. Dropconnect on them
      • note: deriv wrt to input can be high-dim
        • they define the locally constant network (LCN) scalar prediction as the derivative w.r.t. every parameter, transposed with the activations of every corresponding neuron
        • approximately locally constant network ALCN: replace ReLU $\max(0, x)$ with softplus $\log(1+\exp(x))$
        • ensemble locally constant net ELCN - ensemble of LCNs or ALCNs (they train via boosting)
    • TAO: Alternating optimization of decision trees, with application to learning sparse oblique trees (carreira-perpinan, tavallali, 2018)
      • Minimize loss function with sparsity of features at each node + predictive performance
      • Algorithm: update each node one at a time while keeping all others fixed (finds a local optimum of loss)
      • Fast for 2 reasons
        • separability - nodes which aren’t on same path from root-to-leaf can be optimized separately
        • reduced problem - for any given node, we only solve a binary classification where the labels are the predictions that a point would be given if it were sent left/right
          • in this work, solve binary classification by approximating it with sparse logistic regression
      • TAO trees with boosting performs well + works for regression (Zharmagambetov and Carreira-Perpinan, 2020)
      • TAO trees with bagging performs well (Carreira-Perpiñán & Zharmagambetov, 2020)
      • Learning a Tree of Neural Nets (Zharmagambetov and Carreira-Perpinan, 2020) - use neural net rather than binary classification at each node
      • Also use TAO trained on neural net features to speed up / improve the network
  • incorporating prior knowledge
    • Controlling Neural Networks with Rule Representations (seo, …, pfister, 21)
      • DEEPCTRL - encodes rules into DNN
        • one encoder for rules, one for data
          • both are concatenated with stochastic parameter $\alpha$ (which also weights the loss)
          • at test time, can select $\alpha$ to vary the contribution of the rule part (e.g. if the rule doesn’t apply to a certain point)
        • training
          • normalize losses initially to ensure they are on the same scale
          • some rules can be made differentiable in a straightforward way: $r(x, \hat y) \leq \tau \to \max (r(x, \hat y ) - \tau, 0)$, but can’t do this for everything e.g. decision tree rules
          • rule-based loss is defined by looking at predictions for perturbations of the input
        • evaluation
          • verification ratio - fraction of samples that satisfy the rule
        • see also Lagrangian Duality for Constrained Deep Learning (fioretto et al. 2020)
    • RRL: A Scalable Classifier for Interpretable Rule-Based Representation Learning (wang et al. 2020)
      • Rule-based Representation Learner (RRL) - automatically learns interpretable non-fuzzy rules for data representation
      • project RRL to a continuous space and propose a novel training method, called Gradient Grafting, that can directly optimize the discrete model using gradient descent
    • Harnessing Deep Neural Networks with Logic Rules (hu, …, xing, 2020) - iterative distillation method that transfers the structured information of logic rules into the weights of neural networks
  • just trying to improve performance
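
A small numpy sketch of the soft routing idea from the frosst & hinton bullet above, for a fixed depth-2 tree with made-up parameters; every step is differentiable, so the whole thing can be trained with backprop:

```python
import numpy as np

def sigmoid(z, beta=1.0):
    return 1 / (1 + np.exp(-beta * z))  # larger beta -> closer to a hard split

rng = np.random.default_rng(0)
d = 5                                  # input dimension
W = rng.normal(size=(3, d))            # one linear filter per inner node (3 inner nodes)
b = rng.normal(size=3)
leaf_logits = rng.normal(size=(4, 3))  # class logits at each of the 4 leaves (3 classes)

def predict_proba(x):
    p = sigmoid(W @ x + b)             # p[i] = prob of going right at inner node i
    path = np.array([                  # probability of reaching each leaf
        (1 - p[0]) * (1 - p[1]),       # left,  left
        (1 - p[0]) * p[1],             # left,  right
        p[0] * (1 - p[2]),             # right, left
        p[0] * p[2],                   # right, right
    ])
    leaf_probs = np.exp(leaf_logits) / np.exp(leaf_logits).sum(1, keepdims=True)
    return path @ leaf_probs           # mixture over leaf distributions

print(predict_proba(rng.normal(size=d)))
```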

constrained models

  • different constraints in tensorflow lattice
    • e.g. monotonicity, convexity, unimodality (unique peak), pairwise trust (model has higher slope for one feature when another feature is in particular value range)
    • e.g. regularizers = laplacian (flatter), hessian (linear), wrinkle (smoother), torsion (independence between feature contributions)
    • lattice regression (garcia & gupta, 2009) - learn keypoints of look-up table and at inference time interpolate the table
      • to learn, view as kernel method and then learn linear function in the kernel space
    • Monotonic Calibrated Interpolated Look-Up Tables (gupta et al. 2016)
      • speed up $D$-dimensional interpolation to $O(D \log D)$
      • follow-up work: Deep Lattice Networks and Partial Monotonic Functions (you,…,gupta, 2017) - use many layers
  • monotonicity constraints in histogram-based gradient boosting (see sklearn and the sketch below)
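
A short sklearn sketch of the monotonicity constraints mentioned above; `monotonic_cst` takes one entry per feature (1 increasing, -1 decreasing, 0 unconstrained). The data here is synthetic:

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = HistGradientBoostingRegressor(monotonic_cst=[1, -1])  # increasing in x0, decreasing in x1
model.fit(X, y)

grid = np.column_stack([np.linspace(0, 1, 5), np.full(5, 0.5)])  # sweep x0, hold x1 fixed
print(model.predict(grid))  # non-decreasing by construction
```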

misc models

bayesian models

programs

  • program synthesis - automatically find a program in an underlying programming language that satisfies some user intent
    • ex. program induction - given a dataset consisting of input/output pairs, generate a (simple?) program that produces the same pairs
  • Programs as Black-Box Explanations (singh et al. 2016)
  • probabilistic programming - specify graphical models via a programming language

posthoc interpretability (i.e. how can we interpret a fitted model)

Note that in this section we also include importances that work directly on the data (e.g. we do not first fit a model, rather we do nonparametric calculations of importance)

model-agnostic

  • local surrogate (LIME) - fit a simple model locally to one point and interpret that
    • select data perturbations and get new predictions
      • for tabular data, this is just varying the values around the prediction
      • for images, this is turning superpixels on/off
      • superpixels determined in unsupervised way
    • weight the new samples based on their proximity
    • train a kernel-weighted, interpretable model on these points
    • LEMNA - like lime but uses lasso + small changes
  • anchors (ribeiro et al. 2018) - find biggest square region of input space that contains input and preserves same output (with high precision)
    1. does this search via iterative rules
  • What made you do this? Understanding black-box decisions with sufficient input subsets
    • want to find smallest subsets of features which can produce the prediction
      • other features are masked or imputed
  • VIN (hooker 04) - variable interaction networks - global explanation based on detecting additive structure in a black-box, based on ANOVA
  • local-gradient (baehrens et al. 2010) - direction of highest slope towards a particular class / other class
  • golden eye (henelius et al. 2014) - randomize different groups of features and search for groups which interact
  • shapley value - average marginal contribution of a feature value across all possible sets of feature values (see the brute-force sketch after this list)
    • “how much does prediction change on average when this feature is added?”
    • tells us the difference between the actual prediction and the average prediction
    • estimating: all possible sets of feature values have to be evaluated with and without the j-th feature
      • this includes sets of different sizes
      • to evaluate, take expectation over all the other variables, fixing this variable’s value
    • shapley sampling value - sample instead of exactly computing
      • quantitative input influence is similar to this…
    • satisfies 3 properties
      • local accuracy - basically, explanation scores sum to original prediction
      • missingness - features with $x’_i=0$ have 0 impact
      • consistency - if a model changes so that some simplified input’s contribution increases or stays the same regardless of the other inputs, that input’s attribution should not decrease.
    • interpretation: Given the current set of feature values, the contribution of a feature value to the difference between the actual prediction and the mean prediction is the estimated Shapley value
    • recalculate via sampling other features in expectation
    • followup propagating shapley values (chen, lundberg, & lee 2019) - can work with stacks of different models
    • averaging these across dataset can be misleading (okeson et al. 2021)
  • probes - check if a representation (e.g. BERT embeddings) learned a certain property (e.g. POS tagging) by seeing if we can predict this property (maybe linearly) directly from the representation
    • problem: if the post-hoc probe is a complex model (e.g. MLP), it can accurately predict a property even if that property isn’t really contained in the representation
    • potential solution: benchmark against control tasks, where we construct a new random task to predict given a representation, and see how well the post-hoc probe can do on that task
  • Explaining individual predictions when features are dependent: More accurate approximations to Shapley values (aas et al. 2019) - tries to more accurately compute conditional expectation
  • Feature relevance quantification in explainable AI: A causal problem (janzing et al. 2019) - argues we should just use unconditional expectation
  • quantitative input influence - similar to shap but more general
  • permutation importance - increase in the prediction error after we permuted the feature’s values
    • $\mathbb E[Y] - \mathbb E[Y\vert X_{\sim i}]$
    • If features are correlated, the permutation feature importance can be biased by unrealistic data instances (PDP problem)
    • not the same as model variance
    • Adding a correlated feature can decrease the importance of the associated feature
  • L2X: information-theoretical local approximation (chen et al. 2018) - locally assign feature importance based on mutual information with function
  • Learning Explainable Models Using Attribution Priors + Expected Gradients - like doing integrated gradients in many directions (e.g. by using other points in the training batch as the baseline)
    • can use this prior to help improve performance
  • Variable Importance Clouds: A Way to Explore Variable Importance for the Set of Good Models
  • All Models are Wrong, but Many are Useful: Learning a Variable’s Importance by Studying an Entire Class of Prediction Models Simultaneously (Aaron, Rudin, & Dominici 2018)
  • Interpreting Black Box Models via Hypothesis Testing
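
A brute-force sketch of the Shapley value computation described above, for one prediction with a handful of features; "removing" a feature is approximated here by averaging the model over background values for it (one of several conventions), and the model/data are placeholders:

```python
from itertools import combinations
from math import factorial

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = X[:, 0] + 2 * X[:, 1] * X[:, 2]
model = RandomForestRegressor(random_state=0).fit(X, y)
x, background = X[0], X[:50]

def value(S):
    """Expected prediction when only the features in S are fixed to x's values."""
    Xb = background.copy()
    Xb[:, list(S)] = x[list(S)]
    return model.predict(Xb).mean()

d = len(x)
phi = np.zeros(d)
for i in range(d):
    others = [j for j in range(d) if j != i]
    for k in range(d):
        for S in combinations(others, k):
            weight = factorial(k) * factorial(d - k - 1) / factorial(d)
            phi[i] += weight * (value(S + (i,)) - value(S))

print(np.round(phi, 2), "sum:", round(phi.sum(), 2))  # attributions sum to f(x) - E[f] (local accuracy)
```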

vim (variable importance measure) framework

  • VIM
    1. a quantitative indicator that quantifies the change of model output value w.r.t. the change or permutation of one or a set of input variables
    2. an indicator that quantifies the contribution of the uncertainties of one or a set of input variables to the uncertainty of model output variable
    3. an indicator that quantifies the strength of dependence between the model output variable and one or a set of input variables.
  • difference-based - deriv-based methods, local importance measure, morris’ screening method
    • LIM (local importance measure) - like LIME
      • can normalize weights by values of x, y, or ratios of their standard deviations
      • can also decompose variance to get the covariances between different variables
      • can approximate derivative via adjoint method or smth else
    • morris’ screening method
      • take a grid of local derivs and look at the mean / std of these derivs
      • can’t distinguish between nonlinearity / interaction
    • using the squared derivative allows for a close connection w/ sobol’s total effect index
      • can extend this to taking derivs wrt different combinations of variables
  • parametric regression
    • correlation coefficient, linear reg coefficients
    • partial correlation coefficient (PCC) - wipe out correlations due to other variables
      • do a linear regression using the other variables (on both X and Y) and then look only at the residuals
    • rank regression coefficient - better at capturing nonlinearity
    • could also do polynomial regression
    • more techniques (e.g. relative importance analysis RIA)
      • nonparametric regression
        • use something like LOESS, GAM, projection pursuit
        • rank variables by doing greedy search (add one var at a time) and seeing which explains the most variance
  • hypothesis test
    • grid-based hypothesis tests: splitting the sample space (X, Y) into grids and then testing whether the patterns of sample distributions across different grid cells are random
      • ex. see if means vary
      • ex. look at entropy reduction
    • other hypothesis tests include the squared rank difference, 2D kolmogorov-smirnov test, and distance-based tests
  • variance-based vim (sobol’s indices)
    • ANOVA decomposition - decompose model into conditional expectations $Y = g_0 + \sum_i g_i (X_i) + \sum_i \sum_{j > i} g_{ij} (X_i, X_j) + \dots + g_{1,2,…, p}$
      • $g_0 = \mathbb E (Y)$, $\; g_i = \mathbb E(Y \vert X_i) - g_0$, $\; g_{ij} = \mathbb E (Y \vert X_i, X_j) - g_i - g_j - g_0$, $\; \dots$
      • take variances of these terms
      • if there are correlations between variables some of these terms can misbehave
      • note: $V(Y) = \sum_i V (g_i) + \sum_i \sum_{j > i} V(g_{ij}) + … V(g_{1,2,…,p})$ - variances are orthogonal and all sum to total variance
      • anova decomposition basics - factor function into means, first-order terms, and interaction terms
    • $S_i$: Sobol’s main effect index: $S_i = V(g_i)=V(E(Y \vert X_i))=V(Y)-E(V(Y \vert X_i))$ (see the Monte Carlo sketch after this list)
      • small value indicates $X_i$ is non-influential
      • usually used to select important variables
    • $S_{Ti}$: Sobol’s total effect index - include all terms (even interactions) involving a variable
    • equivalently, $V(Y) - V(E[Y \vert X_{\sim i}])$
      • usually used to screen unimportant variables
        • it is common to normalize these indices by the total variance $V(Y)$
      • three methods for computation - Fourier amplitude sensitivity test, meta-model, MCMC
      • when features are correlated, these can be strange (often inflating the main effects)
        • can consider $X_i^{\text{Correlated}} = E(X_i \vert X_{\sim i})$ and $X_i^{\text{Uncorrelated}} = X_i - X_i^{\text{Correlated}}$
    • this can help us understand the contributions that come from different features, as well as the correlations between features (e.g. $S_i^{\text{Uncorrelated}} = V(E[Y \vert X_i^{\text{Uncorrelated}}])/V(Y)$)
    • efficiently compute SHAP values directly from data (williamson & feng, 2020 icml)
  • moment-independent vim
    • want more than just the variance of the output variables
    • e.g. delta index = average dist. between $f_Y(y)$ and $f_{Y \vert X_i}(y)$ when $X_i$ is fixed over its full distr.
      • $\delta_i = \frac 1 2 \mathbb E \int \vert f_Y(y) - f_{Y\vert X_i} (y) \vert dy = \frac 1 2 \int \int \vert f_{Y, X_i}(y, x_i) - f_Y(y) f_{X_i}(x_i) \vert dy \,dx_i$
      • moment-independent because it depends on the density, not just any moment (like a measure of dependence between $y$ and $X_i$)
    • can also look at KL, max dist..
  • graphic vim - like curves
    • e.g. scatter plot, meta-model plot, regional VIMs, parametric VIMs
    • CSM - relative change of model output mean when range of $X_i$ is reduced to any subregion
    • CSV - same thing for variance
  • A Simple and Effective Model-Based Variable Importance Measure
    • measures the feature importance (defined as the variance of the 1D partial dependence function) of one feature conditional on different, fixed points of the other feature. When the variance is high, the features interact with each other; if it is zero, they don’t interact.
  • Learning to Explain: Generating Stable Explanations Fast (situ et al. 2021) - train a model on “teacher” importance scores (e.g. SHAP) and then use it to quickly predict importance scores on new examples
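
A quick Monte Carlo sketch of Sobol's main effect index $S_i = V(E[Y \vert X_i]) / V(Y)$ referenced above, estimated by simply binning on $X_i$ (crude but readable; Saltelli-style pick-freeze estimators are the standard approach). The test function is made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
X = rng.uniform(-1, 1, size=(n, 3))
Y = X[:, 0] + 2 * X[:, 1] ** 2 + 0.1 * rng.normal(size=n)  # X2 is irrelevant

def main_effect(i, n_bins=50):
    bins = np.quantile(X[:, i], np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(X[:, i], bins) - 1, 0, n_bins - 1)
    cond_means = np.array([Y[idx == b].mean() for b in range(n_bins)])  # approx E[Y | X_i in bin]
    return cond_means.var() / Y.var()  # V(E[Y | X_i]) / V(Y)

print([round(main_effect(i), 2) for i in range(3)])  # X0, X1 important; X2 ~ 0
```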

importance curves

  • pdp plots - marginals (force value of plotted var to be what you want it to be); see the sketch after this list
  • possible solution: Marginal plots M-plots (bad name - uses conditional, not marginal)
    • only use points conditioned on certain variable
    • problem: this bakes things in (e.g. if two features are correlated and only one important, will say both are important)
  • ALE-plots - take points conditioned on value of interest, then look at differences in predictions around a window
    • this gives pure effect of that var and not the others
    • needs an order (i.e. might not work for categorical)
    • doesn’t give you individual curves
    • recommended very highly by the book…
    • they integrate as you go…
  • summary of how each type of plot (PDP, M, ALE) calculates the effect of a feature at a certain grid value v:
    • Partial Dependence Plots: “Let me show you what the model predicts on average when each data instance has the value v for that feature. I ignore whether the value v makes sense for all data instances.”
    • M-Plots: “Let me show you what the model predicts on average for data instances that have values close to v for that feature. The effect could be due to that feature, but also due to correlated features.”
    • ALE plots: “Let me show you how the model predictions change in a small “window” of the feature around v for data instances in that window.”
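
A compact sketch contrasting the PDP and M-plot computations summarized above, at a single grid value v for feature j; the model and data are placeholders (sklearn also provides `sklearn.inspection.partial_dependence` for the PDP part):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X[:, 0] + X[:, 1] ** 2
model = GradientBoostingRegressor(random_state=0).fit(X, y)

j, v = 0, 0.5

# PDP: force x_j = v for *every* instance, even where that combination is unrealistic
X_pdp = X.copy()
X_pdp[:, j] = v
pdp_value = model.predict(X_pdp).mean()

# M-plot: average predictions only over instances whose x_j is already close to v
near = np.abs(X[:, j] - v) < 0.25
m_value = model.predict(X[near]).mean()

print(round(pdp_value, 3), round(m_value, 3))
```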

tree ensembles

  • mean decrease impurity = MDI = Gini importance
  • Breiman proposes permutation tests = MDA: Breiman, Leo. 2001. “Random Forests.” Machine Learning 45 (1). Springer: 5–32
    • conditional variable importance for random forests (strobl et al. 2008)
      • propose permuting conditioned on the values of variables not being permuted
        • to find region in which to permute, define the grid within which the values of $X_j$ are permuted for each tree by means of the partition of the feature space induced by that tree
      • many scores (such as MDI, MDA) measure marginal importance, not conditional importance
        • as a result, correlated variables get importances which are too high
  • Extracting Optimal Explanations for Ensemble Trees via Logical Reasoning (zhang et al. ‘21) - OptExplain: extracts global explanation of tree ensembles using logical reasoning, sampling, + optimization
  • treeshap (lundberg, erion & lee, 2019): prediction-level (see the shap-package sketch after this list)
    • individual feature attribution: want to decompose prediction into sum of attributions for each feature
      • each thing can depend on all features
    • Saabas method: basic thing for tree
      • you get a pred at end
      • count up change in value at each split for each variable
    • three properties
      • local acc - decomposition is exact
      • missingness - features that are already missing are attributed no importance
        • for missing feature, just (weighted) average nodes from each split
      • consistency - if F(X) relies more on a certain feature j, $F_j(x)$ should not decrease
        • however the Saabas method doesn’t change $F_j(X)$ for $F’(x) = F(x) + x_j$
    • these 3 properties imply we want shap values
    • average increase in func value when selecting i (given all subsets of other features)
    • for binary features with totally random splits, same as Saabas
    • can cluster based on explanation similarity (fig 4)
      • can quantitatively evaluate based on clustering of explanations
    • their fig 8 - qualitatively can see how different features alter the output
    • gini importance is like weighting all of the orderings
  • Explainable AI for Trees: From Local Explanations to Global Understanding (lundberg et al. 2019)
    • shap-interaction scores - distribute among pairwise interactions + local effects
    • plot lots of local interactions together - helps detect trends
    • propose doing shap directly on loss function (identify how features contribute to loss instead of prediction)
    • can run supervised clustering (where SHAP score is the label) to get meaningful clusters
      • alternatively, could do smth like CCA on the model output
  • understanding variable importances in forests of randomized trees (louppe et al. 2013)
    • consider fully randomized trees
      • assume all categorical
      • randomly pick feature at each depth, split on all possibilities
      • also studied by biau 2012
      • extreme case of random forest w/ binary vars?
    • real trees are harder: correlated vars and stuff mask results of other vars lower down
    • asymptotically, randomized trees might actually be better
  • Actionable Interpretability through Optimizable Counterfactual Explanations for Tree Ensembles (lucic et al. 2019)
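
A short sketch using the shap package's TreeExplainer for the treeshap bullet above (assumes shap is installed; the API shown is the commonly documented one and may differ slightly across versions). Model and data are placeholders:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = X[:, 0] + 2 * X[:, 1] * X[:, 2]
model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])  # one attribution per feature per prediction
print(np.round(shap_values, 2))
print(explainer.expected_value)             # attributions + expected value = prediction (local accuracy)
```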

neural nets (dnns)

dnn visualization

  • good summary on distill
  • visualize intermediate features
    1. visualize filters by layer - doesn’t really work past layer 1
    2. decoded filter - rafegas & vanrell 2016 - project filter weights into the image space - pooling layers make this harder
    3. deep visualization - yosinski 15
    4. Understanding Deep Image Representations by Inverting Them (mahendran & vedaldi 2014) - generate image given representation
    5. pruning for identifying critical data routing paths - prune net (while preserving prediction) to identify neurons which result in critical paths
  • penalizing activations
    • interpretable cnns (zhang et al. 2018) - penalize activations to make filters slightly more interpretable
      • could also just use specific filters for specific classes…
    • teaching compositionality to cnns - mask features by objects
  • approaches based on maximal activation
    • images that maximally activate a feature
      • deconv nets - Zeiler & Fergus (2014) use deconvnets (zeiler et al. 2011) to map features back to pixel space
        • given one image, get the activations (e.g. maxpool indices) and use these to get back to pixel space
        • everything else does not depend on the original image
        • might want to use optimization to generate an image that maximally activates a feature instead of picking from the training set - before this, erhan et al. did this for unsupervised features; dosovitskiy et al. 16 train a generative deconv net to create images from neuron activations; aubry & russel 15 do a similar thing
      • deep dream - reconstruct image from feature map
      • could use natural image prior
      • could train deconvolutional NN
      • also called deep neuronal tuning - GD to find an image that optimally excites filters (see the sketch after this list)
    • neuron feature - weighted average version of a set of maximum activation images that capture essential properties - rafegas_17
      • can also define color selectivity index - angle between first PC of color distribution of NF and intensity axis of opponent color space
      • class selectivity index - derived from classes of images that make NF
    • saliency maps for each image / class
      • simonyan et al 2014
    • Diagnostic Visualization for Deep Neural Networks Using Stochastic Gradient Langevin Dynamics - sample deep dream images generated by gan
  • Zoom In: An Introduction to Circuits (olah et al. 2020)
    • study of inceptionV1 (GoogLeNet)
    • some interesting neuron clusters: curve detectors, high-low freq detectors (useful for finding background)
  • an overview of early vision (olah et al. 2020)
    • many groups
      • conv2d0: gabor, color-contrast, other
      • conv2d1: low-freq, gabor-like, color contrast, multicolor, complex gabor, color, hatch, other
      • conv2d2: color contrast, line, shifted line, textures, other, color center-surround, tiny curves, etc.
  • curve-detectors (cammarata et al. 2020)
  • curve-circuits (cammarata et al. 2021)
    • engineering curve circuit from scratch
  • posthoc prototypes
    • counterfactual explanations - like adversarial, counterfactual explanation describes smallest change to feature vals that changes the prediction to a predefined output
      • maybe change fewest number of variables not their values
      • counterfactual should be reasonable (have likely feature values)
      • human-friendly
      • usually multiple possible counterfactuals (Rashomon effect)
      • can use optimization to generate counterfactual
      • anchors - opposite of counterfactuals, once we have these other things won’t change the prediction
    • prototypes (assumed to be data instances)
      • prototype = data instance that is representative of lots of points
      • criticism = data instances that is not well represented by the set of prototypes
      • examples: k-medoids or MMD-critic
        • selects prototypes that minimize the discrepancy between the data + prototype distributions
  • Architecture Disentanglement for Deep Neural Networks (hu et al. 2021) - “NAD learns to disentangle a pre-trained DNN into sub-architectures according to independent tasks”
  • Explaining Deep Learning Models with Constrained Adversarial Examples
  • Understanding Deep Architectures by Visual Summaries
  • Semantics for Global and Local Interpretation of Deep Neural Networks
  • Iterative augmentation of visual evidence for weakly-supervised lesion localization in deep interpretability frameworks
  • explaining image classifiers by counterfactual generation
    • generate changes (e.g. with GAN in-filling) and see if pred actually changes
    • can search for smallest sufficient region and smallest destructive region
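
A generic activation-maximization sketch of the "GD to find an image that optimally excites a filter" idea above, written with torch/torchvision; the model, layer, and channel choices are placeholders (randomly initialized weights, so the result is only illustrative):

```python
import torch
import torchvision

model = torchvision.models.googlenet(weights=None).eval()  # inceptionV1-style net; random weights for the sketch
img = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)

activation = {}
model.inception4a.register_forward_hook(lambda m, i, o: activation.update(out=o))  # grab an intermediate layer

for _ in range(100):
    optimizer.zero_grad()
    model(img)
    loss = -activation["out"][0, 7].mean()  # maximize the mean activation of channel 7
    loss.backward()
    optimizer.step()
# img now (roughly) shows what excites that channel; real pipelines add image priors / regularizers
```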

dnn concept-based explanations

  • concept activation vectors
    • Given: a user-defined set of examples for a concept (e.g., ‘striped’), and random examples, labeled training-data examples for the studied class (zebras)
      • given trained network
      • TCAV can quantify the model’s sensitivity to the concept for that class. CAVs are learned by training a linear classifier to distinguish between the activations produced by a concept’s examples and those of random examples, at any layer (see the sketch after this list)
      • CAV - vector orthogonal to the classification boundary
      • TCAV uses the derivative of the CAV direction wrt input
    • automated concept activation vectors - given a set of concept discovery images, each image is segmented at different resolutions to find concepts that are captured best at different sizes; after removing duplicate segments, each segment is resized to the original input size, giving a pool of resized segments of the discovery images; these segments are mapped to the model’s activation space at a bottleneck layer, and clustering with outlier removal discovers the concepts associated with the target class; the output is a set of discovered concepts for each class, sorted by their importance in prediction
  • On Completeness-aware Concept-Based Explanations in Deep Neural Networks
  • Interpretable Basis Decomposition for Visual Explanation (zhou et al. 2018) - decompose activations of the input image into semantically interpretable components pre-trained from a large concept corpus
  • Explaining in Style: Training a GAN to explain a classifier in StyleSpace (lang et al. 2021)
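
A numpy/sklearn sketch of the CAV/TCAV recipe described above: fit a linear classifier separating concept activations from random activations, take the boundary normal as the CAV, then score how often class gradients point along it. All arrays here are stand-ins for real network activations and gradients:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
acts_concept = rng.normal(loc=1.0, size=(100, 512))  # layer activations for 'striped' examples
acts_random = rng.normal(loc=0.0, size=(100, 512))   # layer activations for random examples

clf = LogisticRegression(max_iter=1000).fit(
    np.vstack([acts_concept, acts_random]),
    np.r_[np.ones(100), np.zeros(100)],
)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])    # vector orthogonal to the decision boundary

# gradients of the zebra logit w.r.t. the same layer, one per zebra image (stand-in values)
grads = rng.normal(loc=0.2, size=(50, 512))
tcav_score = (grads @ cav > 0).mean()                # fraction of class examples with positive directional derivative
print(round(tcav_score, 2))
```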

dnn causal-motivated attribution

  • Explaining The Behavior Of Black-Box Prediction Algorithms With Causal Learning - specify some interpretable features and learn a causal graph of how the classifier uses these features (sani et al. 2021)
    • partial ancestral graph (PAG) (zhang 08) is a graphical representation which includes
      • directed edges (X $\to$ Y means X is a causal ancestor of Y)
      • bidirected edges (X $\leftrightarrow$ Y means X and Y are both caused by some unmeasured common factor(s), e.g., X ← U → Y )
      • partially directed edges (X $\circ \to$ Y or X $\circ-\circ$ Y ) where the circle marks indicate ambiguity about whether the endpoints are arrows or tails
      • PAGs may also include additional edge types to represent selection bias
    • given a model’s predictions $\hat Y$ and some potential causes $Z$, learn a PAG among them all
      • assume $\hat Y$ is a causal non-ancestor of $Z$ (there is no directed path from $\hat Y$ into any element of $Z$)
      • search for a PAG and not DAG bc $Z$ might not include all possibly relevant variables
  • Neural Network Attributions: A Causal Perspective (Chattopadhyay et al. 2019)
    • the neural network architecture is viewed as a Structural Causal Model, and a methodology to compute the causal effect of each feature on the output is presented
  • CXPlain: Causal Explanations for Model Interpretation under Uncertainty (schwab & karlen, 2019)
    • model-agnostic - efficiently query model to figure out which inputs are most important
    • pixel-level attributions
  • Amnesic Probing: Behavioral Explanation with Amnesic Counterfactuals (elazar…goldberg, 2020)
    • instead of simple probing, generate counterfactuals in representations and see how final prediction changes
      • remove a property (e.g. part of speech) from the repr. at a layer using Iterative Nullspace Projection (INLP) (Ravfogel et al., 2020)
        • iteratively tries to predict the property linearly, then removes these directions
  • Bayesian Interpolants as Explanations for Neural Inferences (mcmillan 20)
    • if $A \implies B$, interpolant $I$ satisfies $A\implies I$, $I \implies B$ and $I$ expressed only using variables common to $A$ and $B$
      • here, $A$ is model input, $B$ is prediction, $I$ is activation of some hidden layer
    • Bayesian interpolants show $P(B \vert A) \geq \alpha^2$ when $P(I \vert A) \geq \alpha$ and $P(B \vert I) \geq \alpha$

dnn feature importance

dnn textual explanations

interactions

model-agnostic interactions

How interactions are defined and summarized is a very difficult thing to specify. For example, interactions can change based on monotonic transformations of features (e.g. $y= a \cdot b$, $\log y = \log a + \log b$). Nevertheless, when one has a specific question it can make sense to pursue finding and understanding interactions.

  • build-up = context-free, less faithful: score is contribution of only variable of interest ignoring other variables
  • break-down = occlusion = context-dependent, more faithful: score is contribution of variable of interest given all other variables (e.g. permutation test - randomize var of interest from right distr.)
  • H-statistic: 0 for no interaction, 1 for complete interaction (see the sketch after this list)
    • how much of the variance of the output of the joint partial dependence is explained by the interaction instead of the individuals
    • \[H^2_{jk} = \underbrace{\sum_i \big[\overbrace{PD(x_j^{(i)}, x_k^{(i)})}^{\text{interaction}} \overbrace{- PD(x_j^{(i)}) - PD(x_k^{(i)})}^{\text{individual}}\big]^2}_{\text{sum over data points}} \: / \: \underbrace{\sum_i \big[PD(x_j^{(i)}, x_k^{(i)})\big]^2}_{\text{normalization}}\]
    • alternatively, using ANOVA decomp: $H_{jk}^2 = \sum_i g_{ij}^2 / \sum_i (\mathbb E [Y \vert X_i, X_j])^2$
    • same assumptions as PDP: features need to be independent
  • alternatives
    • variable interaction networks (Hooker, 2004) - decompose pred into main effects + feature interactions
    • PDP-based feature interaction (greenwell et al. 2018)
  • feature-screening (feng ruan’s work)
    • want to find beta which is positive when a variable is important
    • idea: maximize difference between (distances for interclass) and (distances for intraclass)
    • using an L1 distance yields better gradients than an L2 distance
  • ANOVA - factorial method to detect feature interactions based on differences among group means in a dataset
  • Automatic Interaction Detection (AID) - detects interactions by subdividing data into disjoint exhaustive subsets to model an outcome based on categorical features
  • Shapley Taylor Interaction Index (STI) (Dhamdhere et al., 2019) - extends shap to all interactions
  • Faith-Shap: The Faithful Shapley Interaction Index (tsai, yeh, & ravikumar, 2019)
    • SHAP axioms for interactions no longer specify a unique interaction index
    • here, adopt the viewpoint of Shapley values as coefficients of the most faithful linear approximation to the pseudo-Boolean coalition game value function
  • gradient-based methods (originally Friedman and Popescu, 2008 then later used with many models such as logit)
    • test if partial derivatives for some subset (e.g. $x_1, …, x_p$) are nonzero \(\mathbb{E}_{\mathbf{x}}\left[\frac{\partial^p f(\mathbf{x})}{\partial x_{i_{1}} \partial x_{i_{2}} \ldots \partial x_{i_p}}\right]^{2}>0\)
    • doesn’t work well for piecewise functions (e.g. Relu) and computationally expensive
  • include interactions explicitly then run lasso (e.g. bien et al. 2013)
  • methods for finding frequent item sets
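
A sketch of the pairwise H-statistic above, with partial-dependence functions computed by brute force and centered before comparison (Friedman & Popescu's definition uses centered PDs); model and data are placeholders:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = X[:, 0] * X[:, 1] + X[:, 2]  # true interaction between features 0 and 1
model = GradientBoostingRegressor(random_state=0).fit(X, y)

def pd(features, values):
    """Partial dependence at one point: force those features to `values`, average predictions."""
    Xm = X.copy()
    Xm[:, features] = values
    return model.predict(Xm).mean()

def h2(j, k, n=100):
    pd_jk = np.array([pd([j, k], X[i, [j, k]]) for i in range(n)])
    pd_j = np.array([pd([j], X[i, [j]]) for i in range(n)])
    pd_k = np.array([pd([k], X[i, [k]]) for i in range(n)])
    pd_jk, pd_j, pd_k = (v - v.mean() for v in (pd_jk, pd_j, pd_k))  # center each PD function
    return ((pd_jk - pd_j - pd_k) ** 2).sum() / (pd_jk ** 2).sum()

print(round(h2(0, 1), 2), round(h2(0, 2), 2))  # large for the interacting pair, ~0 otherwise
```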

tree-based interactions

  • iterative random forest (basu et al. 2018)
    • interaction scoring - find interactions as features which co-occur on paths (using RIT algorithm)
    • repeated refitting
      • fit RF and get MDI importances
      • iteratively refit RF, weighting probability of feature being selected by its previous MDI
  • Additive groves (Sorokina, caruana, & riedewald 2007) propose using a random forest with and without an interaction (forcibly removed) to detect feature interactions - very slow
  • SBART: Bayesian regression tree ensembles that adapt to smoothness and sparsity (linero & yang, 2018) - adapts BART to sparsity
  • DP-Forests: bayesian decision tree ensembles for interaction detection (du & linero, 2018)
    • Bayesian tree ensembles (e.g. BART) generally detect too many (high-order) interactions
    • Dirichlet-process forests (DP-Forests) emphasize low-order interactions
      • create groups of trees which each use non-overlapping features
      • hopefully, each group learns a single low-order interaction
      • dirichlet process prior is used to pick the number of groups
        • $\alpha$ parameter for Dirichlet process describes how strongly to enforce this grouping
    • interaction definition: $x_{j}$ and $x_{k}$ interact if $f_{0}(x)$ cannot be written as $f_{0}(x)=f_{0 \backslash j}(x)+f_{0 \backslash k}(x)$ where $f_{0 \backslash j}$ and $f_{0 \backslash k}$ do not depend on $x_{j}$ and $x_{k}$ respectively

dnn interactions

linear interactions

example-based explanations

  • influential instances - want to find important data points
  • deletion diagnostics - delete a point and see how much it changed
  • influence funcs (koh & liang, 2017): use the Hessian (of size $|\theta| \times |\theta|$) to give the effect of upweighting a point (see the sketch after this list)
    • influence functions = infinitesimal approach - upweight one point by an infinitesimally small weight and see how much the estimate changes (e.g. calculate the first derivative)
    • influential instance - when data point removed, has a strong effect on the model (not necessarily same as an outlier)
    • requires access to gradient (e.g. nn, logistic regression)
    • take single step with Newton’s method after upweighting loss
    • yield change in parameters by removing one point
    • yield change in loss at one point by removing a different point (by multiplying above by chain rule)
    • yield change in parameters by modifying one point
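
A compact numpy sketch of the influence-function recipe above, using ridge regression as a stand-in (everything is available in closed form there); it compares the Hessian-based approximation with the exact deletion diagnostic of refitting without the point:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 200, 5, 1e-2
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def fit(Xf, yf):
    """Ridge solution for loss (1/2n) sum (x'theta - y)^2 + (lam/2) ||theta||^2."""
    return np.linalg.solve(Xf.T @ Xf / len(yf) + lam * np.eye(d), Xf.T @ yf / len(yf))

theta = fit(X, y)
H = X.T @ X / n + lam * np.eye(d)         # Hessian of the regularized average loss
i = 3
grad_i = (X[i] @ theta - y[i]) * X[i]     # gradient of point i's loss at the fitted parameters
approx = np.linalg.solve(H, grad_i) / n   # predicted parameter change if point i is removed
exact = fit(np.delete(X, i, 0), np.delete(y, i, 0)) - theta

print(np.round(approx, 4))
print(np.round(exact, 4))                 # closely matches the influence-function approximation
```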

model summarization

distillation

  • usually distillation refers to training a surrogate model on the predictions of the original model, but here I use it more loosely (see the sketch after this list)
  • model distillation (model-agnostic)
    • Trepan - approximate model w/ a decision tree
    • BETA (lakkaraju et al. 2017) - approximate model by a rule list
  • set of methods for extracting rules from DNNs
  • exact distillation
    • Born-again tree ensembles (vidal et al. 2020) - efficient algorithm to exactly find a minimal tree which reproduces the predictions of a tree ensemble
  • Knowledge Distillation as Semiparametric Inference (dao…mackey, 2021)
    • background on when kd should succeed
      • probabilities more informative than labels (hinton, vinyals, & dean, 2015)
      • linear students exactly mimic linear teachers (phuong & lampert, 2019)
      • students can learn at a faster rate given knowledge of datapoint difficulty (lopez-paz et al. 2015)
      • regularization for kernel ridge regression (mobahi farajtabar, & bartlett, 2020)
      • teacher class probabilities are proxies for the true bayes class probabilities $\mathbb E [Y \vert x]$
    • adjustments
      • teacher underfitting $\to$ loss correction
      • teacher overfitting $\to$ cross-fitting (chernozhukov et al. 2018) - like cross-validation, fit student only to held-out predictions
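
A minimal sketch of (non-exact) distillation as described above: train a black-box teacher, fit a small decision tree to the teacher's predictions rather than the original labels, and report fidelity to the teacher; models and dataset are placeholders:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
teacher = RandomForestClassifier(random_state=0).fit(X, y)

student = DecisionTreeClassifier(max_depth=3, random_state=0)
student.fit(X, teacher.predict(X))  # mimic the teacher, not the labels

fidelity = (student.predict(X) == teacher.predict(X)).mean()  # how well the surrogate mimics the teacher
accuracy = (student.predict(X) == y).mean()
print(round(fidelity, 3), round(accuracy, 3))
```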

different problems / perspectives

improving models

complementarity

complementarity - ML should focus on points hard for humans + seek human input on points hard for ML

  • note: goal of ML isn’t to learn categories but learn things that are associated with actions
  • Predict Responsibly: Improving Fairness and Accuracy by Learning to Defer (madras et al. 2018) - adaptive rejection learning - build on rejection learning considering the strengths/weaknesses of humans
  • Learning to Complement Humans (wilder et al. 2020) - 2 approaches for how to incorporate human input
    • discriminative approach - jointly train predictive model and policy for deferring to human (with a cost for deferring)
    • decision-theoretic approach - train predictive model + policy jointly based on value of information (VOI)
    • do real-world experiments w/ humans to validate: scientific discovery (a galaxy classification task) & medical diagnosis (detection of breast cancer metastasis)
  • Gaining Free or Low-Cost Transparency with Interpretable Partial Substitute (wang, 2019) - given a black-box model, find a subset of the data for which predictions can be made using a simple rule-list (tong wang has a few papers like this)
    • Interpretable Companions for Black-Box Models (pan, wang, et al. 2020) - offer an interpretable, but slightly less accurate model for each decision
      • human experiment evaluates how much humans are able to tolerate
    • Hybrid Predictive Models: When an Interpretable Model Collaborates with a Black-box Model (wang & lin, 2021) - use interpretable model on subset where it works
      • objective function considers predictive accuracy, model interpretability, and model transparency (defined as the percentage of data processed by the interpretable substitute)
    • Partially Interpretable Estimators (PIE): Black-Box-Refined Interpretable Machine Learning (wang et al. 2021) - interpretable model for individual features and black-box model captures feature interactions (on residuals)

human-in-the-loop (HITL)

  • rethinking human-ai interaction (blog)
    • humans-as-backup - most common hitl system - human checks prediction if it is low confidence
      • e.g. driving, content moderation, machine translation
      • this system can behave worse if humans (e.g. drivers, doctors) expect it to work autonomously
        • e.g. translators in conjunction w/ google translate are worse than raw translations
    • change the loop: how can humans direct where / when machine aid is helpful?
      • e.g. instead of post-editing a machine translation, can provide auto fill-ins as a translator types
      • e.g. restrict interacting with chat bots to a set of choices
      • interactive optimization procedures like human-guided search - give people mechanisms to supply “soft” constraints and knowledge by adding constraints as the search evolves or by manually modifying computer-generated solutions
    • change the inputs: how can we make it more natural for humans to specify what they want?
    • change the outputs: how can we help humans understand and solve their own problems?
      • e.g. use ML in agent-based modeling
  • Making deep neural networks right for the right scientific reasons by interacting with their explanations (schramowski, … kersting, 2020) - scientist iteratively provides feedback on the DNN’s explanation (a sketch of this style of explanation-feedback loss follows this list)
  • POTATO: exPlainable infOrmation exTrAcTion framewOrk (kovacs et al. 2022) - humans select rules using graph-based features for text classification
  • A Survey of Human-in-the-loop for Machine Learning (wu…he, 2021)
    • HITL data processing
      • ex. identify key examples to annotate to improve performance / reduce discriminatory bias
    • HITL during training
  • Interactive Disentanglement: Learning Concepts by Interacting with their Prototype Representations (stammer…schramowski, kersting, 2021)
    • humans provide weak supervision, tested on toy datasets
  • Human-in-the-loop Extraction of Interpretable Concepts in Deep Learning Models (zhao et al. 2021)
    • human knowledge and feedback are combined to train a concept extractor
    • identifying visual concepts that negatively affect model performance suggests data-augmentation strategies that consistently improve it
  • Fanoos: Multi-Resolution, Multi-Strength, Interactive Explanations for Learned Systems (bayani & mitsch, 2022) - framework combining formal verification techniques, heuristic search, and user interaction to explore explanations at the desired level of granularity and fidelity
    • supports questions about sets of states, e.g. “when do you …” or “what do you do when …”
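
A minimal sketch of learning from feedback on explanations, in the spirit of schramowski et al. 2020 (which builds on the “right for the right reasons” idea of ross et al. 2017); the exact penalty form and the name `rrr_loss` are assumptions here. A human marks input regions that should be irrelevant, and the loss penalizes input gradients there:

```python
# hedged sketch of penalizing explanations on human-marked irrelevant regions
import torch
import torch.nn.functional as F

def rrr_loss(model, x, y, irrelevant_mask, lam=10.0):
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    # input gradient of the summed log-probabilities (a simple saliency map)
    grads = torch.autograd.grad(F.log_softmax(logits, dim=-1).sum(), x, create_graph=True)[0]
    # penalize saliency wherever the human marked the input as irrelevant
    return ce + lam * (irrelevant_mask * grads ** 2).sum()
```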

recourse

recourse - can a person obtain a desired prediction from a fixed model by changing actionable input variables (goes beyond standard explainability)

  • Actionable Recourse in Linear Classification (ustun et al. 2019) - a toy sketch for the linear case follows this list
    • want the model to provide recourse through actionable inputs (e.g. income) rather than immutable variables (e.g. age, marital status)
      • features whose required changes would be drastic are treated as effectively immutable
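
A toy sketch of recourse for a linear classifier, restricted to single-feature changes; ustun et al. 2019 actually solve an integer program over “flipsets”, so this is only meant to illustrate the setup (function and argument names are made up):

```python
# toy recourse for a linear classifier: score(x) = w @ x + b, with score < 0 meaning
# a denied outcome; among actionable features, find the single-feature change of
# smallest magnitude that brings the score back to 0
import numpy as np

def single_feature_recourse(w, b, x, actionable_idx):
    score = w @ x + b
    if score >= 0:
        return None  # already receives the desired prediction
    best = None
    for j in actionable_idx:
        if w[j] == 0:
            continue  # this feature cannot move the score
        delta = -score / w[j]  # change to x[j] that makes the score exactly 0
        if best is None or abs(delta) < abs(best[1]):
            best = (j, delta)
    return best  # (feature index, required change), or None if no actionable fix exists
```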

interp for rl

  • heatmaps
  • visualize most interesting states / rollouts
  • language explanations
  • interpretable intermediate representations (e.g. bounding boxes for autonomous driving)
  • policy extraction - distill a simple model from a bigger model (e.g. neural net -> tree); a rough sketch follows this list
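
A rough sketch of policy extraction via plain behavioral cloning (a simplification of approaches like VIPER, which reweight states by Q-value gaps); assumes an old-style gym environment whose `reset()` returns the state and a `neural_policy` callable returning a discrete action:

```python
# distill a neural policy into a small decision tree by imitating its actions
# on states visited during rollouts
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def extract_tree_policy(env, neural_policy, n_episodes=50, max_depth=4):
    states, actions = [], []
    for _ in range(n_episodes):
        s, done = env.reset(), False
        while not done:
            a = neural_policy(s)
            states.append(s)
            actions.append(a)
            s, _, done, *_ = env.step(a)
    # small decision tree imitating the neural policy on visited states
    return DecisionTreeClassifier(max_depth=max_depth).fit(np.array(states), np.array(actions))
```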

differential privacy

  • a model (or mechanism) is differentially private if its output distribution changes by at most a multiplicative factor $e^\epsilon$ when a single datapoint is added or removed
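
A toy illustration of the definition via the Laplace mechanism: a counting query has sensitivity 1 (adding or removing one record changes the count by at most 1), so releasing the count plus Laplace(1/ε) noise satisfies ε-differential privacy (function name is illustrative):

```python
# Laplace mechanism for a counting query (sensitivity 1)
import numpy as np

def private_count(records, predicate, epsilon=0.5, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    true_count = sum(predicate(r) for r in records)
    # noise with scale 1/epsilon masks any single individual's contribution
    return true_count + rng.laplace(scale=1.0 / epsilon)
```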

interpretation over sets / perturbations

These papers don’t quite connect to prediction, but are generally about finding stable interpretations across a set of models / choices.

  • Exploring the cloud of variable importance for the set of all good models (dong & rudin, 2020)
  • All Models are Wrong, but Many are Useful: Learning a Variable’s Importance by Studying an Entire Class of Prediction Models Simultaneously (fisher, rudin, & dominici, 2019) - also had title Model class reliance: Variable importance measures for any machine learning model class, from the “Rashomon” perspective
    • model reliance = MR - like permutation importance, measures how much a model relies on covariates of interest for its accuracy (a rough sketch of computing MR for one feature appears at the end of this section)
      • defined (for a feature) as the ratio of expected loss after permuting (with all possible permutation pairs) to before permuting
        • could also be defined as a difference or using predictions rather than loss
      • connects to U-statistics - can show unbiasedness, etc.
      • related to Algorithm Reliance (AR) - fitting with/without a feature and measuring the difference in loss (see gevrey et al. 03)
    • model-class reliance = MCR = highest/lowest degree of MR within a class of well-performing models
      • with some assumptions on model class complexity (in the form of a covering number), can create uniform bounds on estimation error
      • MCR can be efficiently computed for (regularized) linear / kernel linear models
    • Rashomon set = class of well-performing models
      • “Rashomon” effect of statistics - many prediction models may fit the data almost equally well (breiman 01)
      • “This set can be thought of as representing models that might be arrived at due to differences in data measurement, processing, filtering, model parameterization, covariate selection, or other analysis choices”
      • can study these tools for describing rank of risk predictions, variance of predictions, e.g. confidence intervals
    • confidence intervals - can get finite-sample interval for anything, not just loss (e.g. norm of coefficients, prediction for a specific point)
    • connections to causality
      • when the function is a conditional expectation, MR is similar to many quantities studied in the causal literature
      • conditional importance measures a different notion (takes away things attributed to spurious variables)
        • can be hard to do conditional permutation well when some feature pairs are rare so can use weighting, matching, or imputation
    • here, application is to see on COMPAS dataset whether one can build an accurate model which doesn’t rely on race / sex (in order to audit black-box COMPAS models)
  • A Theory of Statistical Inference for Ensuring the Robustness of Scientific Results (coker, rudin, & king, 2018)
    • Inference = process of using facts we know to learn about facts we do not know
    • hacking intervals - the range of a summary statistic one may obtain given a class of possible endogenous manipulations of the data
      • prescriptively constrained hacking intervals - explicitly define reasonable analysis perturbations
        • ex. hyperparameters (e.g. k in kNN), matching algorithm, adding a new feature
      • tethered hacking intervals - take any model with small enough loss on the data
        • rather than choosing $\alpha$, we choose error tolerance
        • for MLE, equivalent to profile likelihood confidence intervals
        • ex. SVM distance from point to boundary, Kernel regression prediction for a specific new point, feature selection
        • ex. linear regression ATE, individual treatment effect
      • PCS intervals could be seen as slightly broader, including data cleaning and problem translations
    • different theories of inference have different counterfactual worlds
      • p-values - data from a superpopulation
      • Fisher’s exact p-values - fix the data and randomize counterfactual treatment assignments
      • Causal sensitivity analysis - unmeasured confounders from a defined set
      • bayesian credible intervals - redrawing the data from the same data generating process, given the observed data and assumed prior and likelihood model
      • hacking intervals - counterfactual researchers making counterfactual analysis choices
    • 2 approaches to replication
      • replicating studies - replication rates are generally very low
      • p-curve approach: look at distr. of p-values, check if lots of things are near 0.05
  • A study in Rashomon curves and volumes: A new perspective on generalization and model simplicity in machine learning (semenova, rudin, & parr, 2020)
    • rashomon ratio - ratio of the volume of the set of accurate models to the volume of the hypothesis space
      • can use this to perform model selection over different hypothesis spaces using empirical risk v. rashomon ratio (rashomon curve)
    • pattern Rashomon ratio - considers unique predictions on the data (called “patterns”) rather than the count of functions themselves.
  • Underspecification Presents Challenges for Credibility in Modern Machine Learning (D’Amour et al. 2020)
    • different models can achieve the same validation accuracy but perform differently wrt different data perturbations
    • shortcuts = spurious correlations cause failure because of ambiguity in the data
    • stress tests probe a broader set of requirements
      • ex. subgroup analyses, domain shift, contrastive evaluations (looking at transformations of an individual example, such as counterfactual notions of fairness)
    • suggestions
      • need to test models more thoroughly
      • need criteria to select among good models (e.g. explanations)
  • Predictive Multiplicity in Classification (marx et al. 2020)
    • predictive multiplicity = ability of a prediction problem to admit competing models with conflicting predictions
  • A general framework for inference on algorithm-agnostic variable importance (williamson et al. 2021)
  • An Automatic Finite-Sample Robustness Metric: When Can Dropping a Little Data Make a Big Difference? (broderick et al. 2021)
    • Approximate Maximum Influence Perturbation - method to assess the sensitivity of econometric analyses to the removal of a small fraction of the data
    • results of several economics papers can be overturned by removing less than 1% of the sample
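
A rough sketch of computing model reliance (MR) for a single feature, using one random permutation rather than averaging over all permutation pairs as in fisher, rudin, & dominici (2019); names below are illustrative:

```python
# MR as a ratio: expected loss after permuting the feature / original expected loss
import numpy as np

def model_reliance(model, X, y, feature_idx, loss_fn, seed=0):
    rng = np.random.default_rng(seed)
    base_loss = loss_fn(y, model.predict(X))
    X_perm = X.copy()
    X_perm[:, feature_idx] = rng.permutation(X_perm[:, feature_idx])
    perm_loss = loss_fn(y, model.predict(X_perm))
    return perm_loss / base_loss  # MR > 1 means the model relies on this feature
```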

misc new papers