Overview

Machine learning methods are already a common part of scientific practice. However, the most accurate models are currently black boxes, whose inner workings are difficult to determine. Even accurate predictions may rest on a misinterpretation of the data, for example because the training data are incomplete. Explainable machine learning models attempt to address this problem. Their primary goal is to explain how the output of a model (such as a deep neural network) is formed for given input data. A critical research question is how to choose the "vocabulary" of terms in which the explanation is presented to humans. If these explanatory principles work properly, however, the same vocabulary can be used during model building, and models can be constructed through human-machine interaction in the learning process. This approach is still in its infancy, but it has the potential to significantly speed up development and reduce the number of experiments. The sponsor is working on model development in plasma physics and astronomy. A minimal illustration of what such a post-hoc explanation can look like is sketched below.
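
To make the idea of an explanation "vocabulary" more concrete, here is a minimal sketch of one widely used post-hoc technique, permutation feature importance, applied to a generic black-box classifier. This is only an illustration under assumed tooling (scikit-learn, a toy dataset, a small neural network); it is not the sponsor's method, and the dataset, model, and feature names are placeholders.

```python
# Sketch: explaining a black-box classifier with permutation feature importance.
# All choices below (dataset, model size, metric) are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small "black-box" model: a scaled multi-layer perceptron.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0),
).fit(X_train, y_train)

# Explanation step: shuffle each feature on held-out data and measure how much
# accuracy drops. Features with large drops form the "vocabulary" in which this
# particular explanation of the model's behaviour is phrased.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

The same ranking of human-interpretable terms could, in principle, be fed back into model building, for instance to prune features or to let a domain expert veto explanations that contradict known physics, which is the human-machine interaction loop hinted at above.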

https://www.utia.cas.cz/cs