Advances in medical informatics and the widespread collection of patient data have led to the development of new medical decision support systems. Some are simply used to guide the medical practitioner, while others generate decisions automatically. Known under various names ("expert systems", "decision support systems", "medical AI"), they can be used for diagnosis, prognosis, or care. They all function by means of algorithms that process data. These algorithms vary in complexity and in their degree of supervision, but they are also frequently opaque. Yet these tools are becoming increasingly important in the medical world. It therefore seems necessary to better inform users and beneficiaries – doctors and patients – of their operating logic, their limits, and their possible biases. This study suggests strengthening European law (and French national law) on this point. Greater intelligibility of the algorithms in decision support systems would meet the ethical requirements of medical decision making and the objective of shared medical decision making made in good conscience.