Explanations by Counterfactual Argument in Recommendation Systems
Abstract
Recent advances in Artificial Intelligence (AI) and Machine Learning (ML) rely on increasingly complex models. Because of their complexity, these models behave as black boxes and raise the question of whether their decision process can be trusted, especially in high-cost decision scenarios. To address this problem, users of such systems can ask for an explanation of a decision, which the system can provide in various ways. One way of generating these explanations is through Counterfactual (CF) arguments. Although there is an ongoing debate on how AI should generate such explanations, whether by correlation or by causal inference, in Recommendation Systems (RecSys) the aim is to generate them with a minimum number of oracle calls while keeping the explanations near-optimal in length (e.g., in terms of user interactions). In this study we analyze the nature of CFs and different methods for generating them (e.g., a model-agnostic approach and Genetic Algorithms (GA)), along with measures of explanation quality. Extensive experiments show that CFs can be generated through multiple approaches and that selecting optimal CFs improves the explanations.
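To make the idea concrete: a counterfactual explanation for a recommendation is a small set of the user's past interactions whose removal would change the recommended item, found while limiting calls to the recommender oracle. The sketch below is a minimal greedy illustration of this general idea, assuming a hypothetical `recommend(interactions)` oracle and a simple list of interaction IDs; it is not the specific method evaluated in the dissertation.

```python
from typing import Callable, List, Optional


def greedy_counterfactual(
    interactions: List[str],
    recommend: Callable[[List[str]], str],
    max_oracle_calls: int = 100,
) -> Optional[List[str]]:
    """Greedily search for a small set of interactions whose removal
    changes the top recommendation (a counterfactual explanation).

    Returns the removed interactions, or None if no counterfactual is
    found within the oracle-call budget.
    """
    original = recommend(interactions)  # 1 oracle call
    calls = 1
    remaining = list(interactions)
    removed: List[str] = []

    while remaining and calls < max_oracle_calls:
        for i, item in enumerate(remaining):
            candidate = remaining[:i] + remaining[i + 1:]
            calls += 1
            if recommend(candidate) != original:
                # Removing this interaction (together with any earlier
                # removals) flips the recommendation: a counterfactual.
                removed.append(item)
                return removed
            if calls >= max_oracle_calls:
                return None
        # No single removal flips the output at this stage; drop the first
        # interaction as a crude heuristic and keep searching on the
        # smaller history.
        removed.append(remaining.pop(0))

    return None
```

The greedy strategy keeps the number of oracle calls roughly quadratic in the history length in the worst case and returns a short, though not necessarily minimal, counterfactual set, reflecting the abstract's twin concerns of few oracle calls and near-optimal explanation length.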