Reliable and Logically Transparent Artificial Intelligence
Major research project: "Reliable and Logically Transparent Artificial Intelligence: Technology, Verification and Application in Socially Significant and Infectious Diseases"
The project aims to create a new generation of artificial intelligence and machine learning systems capable of detecting and promptly correcting their own errors, in particular by producing logically explainable decisions. Such systems are needed above all in sensitive areas, such as biomedical applications, where human lives may depend on the decisions made.
The main result of the project should be new methods and technologies for overcoming the two main barriers in machine learning and artificial intelligence systems: the problem of errors and the problem of explicitly explaining decisions. Neither problem has yet been satisfactorily solved, and both require new research.
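To make the error problem concrete, the sketch below shows one simple form the "detect and correct errors" idea can take: a separate corrector watches a base classifier's confidence and flags low-margin predictions as likely errors, so they can be abstained on or handed to a human. The synthetic data, the deliberately crude base rule, and the margin threshold are all hypothetical illustrations, not the project's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-class data (a stand-in for biomedical features).
X0 = rng.normal([-1.0, 0.0], 0.8, size=(200, 2))
X1 = rng.normal([+1.0, 0.0], 0.8, size=(200, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

# A deliberately imperfect "legacy" base classifier: sign of the first feature.
base_pred = (X[:, 0] > 0).astype(int)
errors = (base_pred != y).astype(int)  # 1 where the base model is wrong

# Corrector: the base model's margin (distance from its decision boundary)
# is an explainable confidence signal; errors cluster at low margin.
margin = np.abs(X[:, 0])
threshold = 0.5 * (margin[errors == 1].mean() + margin[errors == 0].mean())
flagged = margin < threshold  # predictions the corrector distrusts

recall = flagged[errors == 1].mean()       # fraction of true errors caught
false_alarm = flagged[errors == 0].mean()  # correct predictions flagged anyway
print(f"errors caught: {recall:.0%}, false alarms: {false_alarm:.0%}")
```

The flagged cases would be abstained on or routed to an expert; the rule "flag when the margin is below the threshold" is itself a one-line logical explanation of why a prediction was distrusted, which is the property the project requires of its correctors.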
The new technology for reliable and explainable neural network-based AI will be applied in a wide range of strategically important areas where reliability and logical explainability are critical. These areas include: analysis of big biomedical data and detection of ultra-early predictors of diseases, analysis of climate data and prediction of extreme events, engineering of new materials, quantum and optical technologies, and development of neural networks that implement the brain's information-processing and computational (intelligent) functions.
The project consortium includes scientists from Lobachevsky University, ITMO University, the Institute of Applied Mathematics RAS, the Institute of Systems Programming RAS, and Privolzhsky Research Medical University, as well as leading foreign researchers from France, Germany, the UK, Sweden, and Italy, and more than 40 young scientists, postgraduates, and students.