Machine learning of microstructure-property relationships in materials with robust features from foundational vision transformers
Machine learning of microstructure-property relationships from data is an emerging approach in computational materials science. Most existing machine learning efforts focus on developing task-specific models for each microstructure-property relationship. We propose using pre-trained foundational vision transformers to extract task-agnostic microstructure features, followed by lightweight machine learning of a microstructure-dependent property. We demonstrate our approach with pre-trained state-of-the-art vision transformers (CLIP, DINOv2, SAM) in two case studies on machine learning of: (i) the elastic modulus of two-phase microstructures, based on simulation data; and (ii) the Vickers hardness of Ni-base and Co-base superalloys, based on experimental data published in the literature. Our results show the potential of foundational vision transformers for robust microstructure representation and efficient machine learning of microstructure-property relationships, without the need for expensive task-specific training or fine-tuning of bespoke deep learning models.
To test the approach on your own dataset, start with the `Feature_Extraction_ViTs` notebook under either case study to extract features from your images with the pre-trained ViTs.
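For orientation, below is a minimal sketch of the two-step workflow the notebooks implement: task-agnostic features from a frozen, pre-trained ViT (here DINOv2 loaded via `torch.hub`), followed by a lightweight property model. The ridge regressor and the `image_paths`/`hardness` variables are illustrative placeholders, not the repository's exact code.

```python
# Minimal sketch (not the repository's exact pipeline): extract frozen
# DINOv2 features from microstructure images, then fit a lightweight regressor.
import torch
from torchvision import transforms
from PIL import Image
from sklearn.linear_model import Ridge

# Load a pre-trained DINOv2 ViT from torch.hub; no fine-tuning is performed.
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),  # side length divisible by the patch size 14
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(image_paths):
    """Return one task-agnostic feature vector (CLS token) per image."""
    feats = []
    for path in image_paths:
        img = Image.open(path).convert("RGB")
        x = preprocess(img).unsqueeze(0)   # shape [1, 3, 224, 224]
        feats.append(model(x).squeeze(0))  # DINOv2 forward returns the CLS embedding
    return torch.stack(feats).numpy()

# Lightweight property model on top of the frozen ViT features;
# `image_paths` and `hardness` stand in for your own dataset.
# X = extract_features(image_paths)
# reg = Ridge(alpha=1.0).fit(X, hardness)
```

Because the ViT stays frozen, only the small regressor on top is trained, which is what keeps the approach cheap compared to fine-tuning a bespoke deep model.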
The dataset was manually extracted from the following papers:
| Paper | DOI |
| --- | --- |
| Paper 1 | Link |
| Paper 2 | Link |
| Paper 5 | Link |
| Paper 6 | Link |
| Paper 7 | Link |
| Paper 15 | Link |
| Paper 16 | Link |
| Paper 19 | Link |
| Paper 20 | Link |
| Paper 21 | Link |
| Paper 22 | Link |
| Paper 23 | Link |
| Paper 24 | Link |
| Paper 25 | Link |
| Paper 26 | Link |
| Paper 27 | Link |
| Paper 28 | Link |
| Paper 29 | Link |
| Paper 30 | Link |