An introduction to counterfactual explanations, using the approach developed by Microsoft Corporation, India.
This study applied SimCLR to EHR data to detect rare diseases. Rare diseases are generally underrepresented classes in any clinical dataset, so using SimCLR (contrastive learning) can boost their classification accuracy.
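The core of SimCLR is the NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss, which pulls together two augmented views of the same sample and pushes apart all other samples in the batch. A minimal numpy sketch (illustrative only; the function name and defaults here are not from the paper):

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Illustrative NT-Xent loss sketch. z1, z2: (N, d) embeddings of two
    augmented views of the same N samples; (z1[i], z2[i]) are positives."""
    z = np.concatenate([z1, z2], axis=0)              # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize -> cosine sims
    sim = z @ z.T / temperature                       # (2N, 2N) similarity matrix
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # positive index i <-> i+n
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)  # -log softmax at the positive
    return loss.mean()
```

The loss shrinks when paired views embed close together and other pairs stay dissimilar, which is what drives useful representations for the downstream rare-disease classifier.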
TIME is a useful tool for explaining the predictions made by a model. It is very similar to Grad-CAM, but it also works with tabular data, and it can provide global explainability as well.
This research paper builds an entropy layer for an end-to-end explainable AI framework. Unlike LIME and Grad-CAM, Entropy Net does not rely on any auxiliary tool or model to explain its predictions. Entropy Net also focuses on reducing the number of concepts required to explain a prediction.
REP-Net is an alternative to traditional transfer learning for on-board model training, with a focus on memory efficiency.
Training GANs requires large amounts of data. With small datasets, the discriminator often overfits, producing meaningless feedback to the generator. One solution to training GANs on smaller data is adaptive data augmentation.
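The adaptive part can be sketched as a feedback loop in the style of StyleGAN2-ADA: estimate how much the discriminator is overfitting (e.g., via the fraction of real images it classifies as real) and nudge the augmentation probability up or down accordingly. A minimal sketch, assuming hypothetical names and a 0.6 target heuristic:

```python
def update_aug_probability(p, real_logits, target=0.6, step=0.01):
    """One ADA-style adjustment step (sketch; names and constants are
    illustrative). r_t estimates discriminator overfitting as the fraction
    of real samples with positive logits; p is nudged up when r_t exceeds
    the target and down otherwise, clamped to [0, 1]."""
    r_t = sum(1.0 for l in real_logits if l > 0) / len(real_logits)
    p += step if r_t > target else -step
    return min(max(p, 0.0), 1.0), r_t
```

Because augmentation strength tracks overfitting instead of being fixed, the discriminator keeps providing a useful training signal even on small datasets.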
Including domain knowledge can increase explainability without compromising model performance. This paper mainly discusses a preprocessing technique that incorporates domain knowledge in a way that makes it useful for explaining the model's predictions.
Incorporating domain knowledge into neural networks is a creative, case-specific approach. This paper modifies the loss function of a fully connected network with domain knowledge from kinetics, which helps the model make precise predictions in its regression task.
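One common way to fold kinetics into a loss function is to add a penalty for violating a known rate law on top of the ordinary data-fit term. A hypothetical sketch, assuming first-order decay dC/dt = -kC as the constraint (the rate law, constants, and names here are illustrative, not the paper's):

```python
import numpy as np

def knowledge_augmented_loss(y_pred, y_true, t, k=0.1, lam=1.0):
    """Sketch of a physics-informed loss: data MSE plus a penalty for
    violating first-order decay kinetics dC/dt = -k*C. The constraint,
    k, and lam are illustrative assumptions."""
    mse = np.mean((y_pred - y_true) ** 2)
    dydt = np.gradient(y_pred, t)        # finite-difference time derivative
    residual = dydt + k * y_pred         # zero when the kinetics hold exactly
    return mse + lam * np.mean(residual ** 2)
```

Predictions consistent with the rate law incur almost no penalty, while physically implausible fits are pushed away even when they match the training points.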
Optimizes the change in consumed nutrients while driving users toward their desired diet.
LIME is a useful tool for explaining the predictions made by a model. LIME can explain any model regardless of its type; it works by fitting a linear model in the vicinity of the sample to be explained.