Machine learning holds tremendous potential for innovation in healthcare, but applying it in this domain poses novel and interesting challenges. Data privacy is paramount, applications require high confidence in model quality, and practitioners demand explainable and comprehensible models. Ultimately, practitioners and patients alike must be able to trust these methods. In our research group on Trustworthy Machine Learning we tackle these challenges, investigating novel approaches to privacy-preserving federated learning, the theoretical foundations of deep learning, and the collaborative training of explainable models.
- Full Ph.D. position (requires a Master's degree)
- Scientific software developer position
- Master's and Bachelor's thesis projects
If you are interested, please send an email to Michael Kamp.