KI-Update Deep-Dive: Hospitals Develop Their Own Language Models

by Sophie Williams

The use of artificial intelligence in healthcare is expanding as hospitals and clinics across Europe begin developing their own specialized language models, according to recent reporting. Researchers at the Leibniz AI Lab, led by Wolfgang Nejdl of the L3S Research Center at Leibniz University Hannover, have been working since 2020 on medical applications of AI, particularly in leukemia treatment. Their work involves analyzing genomic and clinical data from pediatric patients, often in collaboration with the Hannover Medical School (MHH). By leveraging data from MHH's network of clinics across Europe, the team has developed a system that improves risk stratification for young leukemia patients, helping determine whether a more aggressive or a less intensive therapy is appropriate.

In an interview with heise online, Nejdl emphasized that while AI has the potential to enhance patient care, especially in cases where disease progression is difficult to predict, research models such as BioGPT and BioMedLM require robust safeguards, underscoring the importance of responsible development in medical AI.

Public acceptance of AI-driven health tools is growing, despite ongoing concerns. A Bitkom survey cited by heise online found that 71 percent of Germans view the use of AI in healthcare positively. Nearly half (45 percent) already consult chatbots like ChatGPT, Gemini, or Copilot to assess symptoms or learn about medical conditions. More than half of those users (55 percent) trust the responses they receive, and half believe AI helps them understand symptoms better than traditional online searches. Yet skepticism remains: over two-thirds of respondents worry about data misuse, and nearly 70 percent fear reduced human interaction in medical care. Still, 76 percent support doctors using AI assistance where possible, almost half believe AI could outperform physicians in certain diagnostic scenarios, and 46 percent said they would consent to their health data being used to train AI models.

As integration continues, experts highlight ongoing challenges. A May 2025 analysis from The Decoder outlined five key barriers slowing the adoption of chatbots in clinical settings, ranging from technical limitations to regulatory and trust-related issues. These hurdles underscore that while enthusiasm for medical AI is rising, widespread deployment will require addressing both technological and societal concerns.
