Interpretable Machine Learning with Applications to Computational Materials Science

Machine learning and deep learning have proven successful across scientific fields such as computer vision, natural language processing, and recommendation systems. As models grow more complex, with more parameters and more intricate architectures, they can achieve higher predictive accuracy when trained on larger datasets. Despite their predictive power, however, these models are notorious for being black boxes: humans cannot understand why a fitted model arrives at its predictions. This lack of transparency is a significant concern, particularly in high-stakes applications such as healthcare and finance, where the reasoning behind a prediction is critical. Interpretable machine learning has therefore attracted increasing attention, with researchers seeking to shed light on the internal workings of black-box models. In light of this, this thesis proposes a general framework for understanding and visualizing the behavior of black-box models and explores its applications in computational materials science. The aim is to provide a deeper understanding of how black-box models operate and to improve model transparency and human trust.
