Patient trust in AI: a significant challenge for the implementation of this technology


Artificial Intelligence (AI) has become a highly useful tool spanning all aspects of modern life, including medicine. However, despite the benefits it offers, its implementation in the healthcare sector poses ethical challenges and raises concerns about patient trust in this technology. It is crucial to understand the context of using such technology in medicine, particularly in rehabilitation therapies. Additionally, addressing ethical issues is essential to enhance patient confidence in AI.

Artificial Intelligence in Medicine: A Context to Increase Patient Trust in AI

AI in medicine refers to the use of machine learning models to sift through medical data and uncover insights that enhance health outcomes and patient experiences. Common applications of AI in medical settings include supporting clinical decision-making and image analysis. These tools assist doctors in making decisions regarding treatments, medications, and patient needs, and are also utilized to analyze medical images for injuries or other findings.
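As a minimal illustration of the clinical decision support mentioned above, a rule-based check might flag out-of-range vital signs for a clinician's attention. The thresholds and field names below are illustrative assumptions, not clinical guidance; real systems are validated and their outputs reviewed by trained professionals.

```python
# Illustrative sketch of a rule-based clinical decision-support check.
# Thresholds are assumptions for demonstration, not medical advice.

def flag_vitals(vitals: dict) -> list[str]:
    """Return human-readable alerts for out-of-range vital signs."""
    alerts = []
    if vitals.get("heart_rate", 0) > 100:
        alerts.append("tachycardia: heart rate above 100 bpm")
    if vitals.get("spo2", 100) < 92:
        alerts.append("low oxygen saturation: SpO2 below 92%")
    if vitals.get("temp_c", 37.0) >= 38.0:
        alerts.append("fever: temperature at or above 38.0 C")
    return alerts

if __name__ == "__main__":
    patient = {"heart_rate": 110, "spo2": 95, "temp_c": 38.5}
    for alert in flag_vitals(patient):
        print(alert)
```

Modern systems replace such hand-written rules with machine learning models trained on large datasets, but the role is the same: surfacing findings for a human clinician to act on.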

How long has AI been used in medicine?

The use of Artificial Intelligence (AI) in medicine dates back several decades, although it has become more prominent in recent years with the digital transformation accelerated by the COVID-19 pandemic. Expert systems, that is, rule-based logical systems, emerged in the 1960s and 1970s.

Subsequently, the development of AI algorithms took place in the 1980s. These algorithms enabled the automatic analysis of large amounts of medical data, driving the interpretation of medical images such as X-rays and magnetic resonance imaging.

Later on, AI expanded into big data analysis and precision medicine, enabling the development of predictive models and new medical discoveries.

Do assistive robots instill more patient confidence in AI?

The use of robots in medicine began around 1985, coinciding with the transformation of industrial robots into precision machines to aid doctors. However, with advances in AI, assistive robots are becoming increasingly autonomous and capable of complementing the skills of human doctors.

Currently, assistive robots have three main areas of application: robot-assisted surgery, nursing, and rehabilitation. These robots can perform a variety of automated services, from providing companionship and entertainment to carrying out assistance and rehabilitation measures. They can even remind patients to take their medication.

AI also plays a significant role in facial recognition and in monitoring the health and activities of individuals in need of care. While these systems offer many advantages, they also pose challenges in terms of trust and acceptance among patients and healthcare professionals.

Why is patient trust in AI still low?

Undoubtedly, AI has the potential to improve healthcare, but there are also risks associated with its implementation in this field. The World Health Organization has warned about the improper use of AI, which can lead to misdiagnoses or incorrect treatments. Therefore, it is crucial for AI tools to be developed following scientific and ethical parameters, and decisions should always be made by trained healthcare professionals.

The lack of patient trust in AI can be attributed to several reasons:

  • Many patients have concerns about the privacy and security of their data.
  • They fear that AI may make mistakes.
  • They are afraid that robots may replace doctors.

However, there are also reasons why patients might trust AI:

  • The possibility of receiving more personalized care.
  • Having access to healthcare 24/7.
  • Reduction of waiting times for medical attention.

Positions For and Against

In this regard, some studies reveal the level of trust and the reservations of doctors, patients, and family members regarding the use of AI. On one hand, a significant portion of those surveyed feel more comfortable if the technology is limited to administrative tasks, such as billing or scheduling patient appointments; they would object, however, if it took on more personal functions, such as diagnosis and treatment.

By contrast, others believe that AI can play a more significant role in the healthcare sector. Unsurprisingly, younger adults and those with higher levels of education are more in favor of this idea. Many respondents expect AI to reduce errors in healthcare and increase diagnostic accuracy. Nevertheless, a majority believe that incorporating AI will harm relationships between patients and healthcare providers.

How to Increase Patient Trust in AI?

To increase patient trust in AI, providers of medical AI and robotics tools, together with therapists, can take several key measures:

  1. Transparently inform patients about the benefits and limitations of AI.
  2. Follow ethical guidelines and use AI responsibly, supported by scientific evidence.
  3. Protect privacy and data security, which are crucial to building patient trust: patient data must be encrypted, stored securely, and accessible only to authorized personnel.
  4. Educate patients about the use of AI and address their questions and concerns.
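One common building block of the data-protection measure above is pseudonymization: replacing a direct patient identifier with a keyed, irreversible token before data is shared or analyzed. The sketch below uses Python's standard-library HMAC for this; the key handling and identifier format are assumptions, and a production system would also need encryption at rest, access control, and audit logging.

```python
import hashlib
import hmac

# Illustrative sketch of patient-data pseudonymization. The secret key
# and identifier format are assumptions for demonstration only.

def pseudonymize(patient_id: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed, irreversible token."""
    digest = hmac.new(secret_key, patient_id.encode("utf-8"),
                      hashlib.sha256).hexdigest()
    return digest[:16]  # shortened token for readability

# In practice the key would live in a secure vault, never in source code.
key = b"hospital-secret-key"
token = pseudonymize("patient-0042", key)
print(token)
```

Because the token is derived with a secret key, the same patient always maps to the same token (so records can still be linked), while anyone without the key cannot recover the original identifier.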

To increase the general public’s trust in AI products, factors such as representation, user feedback, ease of explanation and understanding, testability, communication, and socialization should be considered. These aspects will help establish initial trust and build strong relationships between humans and AI.

Overall, trust in AI in the healthcare sector is a personal decision for patients. Some patients may feel comfortable trusting AI for their healthcare, while others may prefer the intervention of human doctors. It is important to respect individual preferences and ensure that the implementation of AI is done ethically and transparently.

At Inrobics, we strengthen trust in AI and robotics.

Inrobics proposes a disruptive rehabilitation model that uses artificial intelligence and social robots. Thanks to this model, we have helped many people with functional or neurological limitations improve their quality of life. Our system, which successfully combines both technologies, has been tested in both group and individual therapies. Along the same lines, our algorithms capture knowledge about each patient, enabling fully personalized sessions tailored to the individual's physical and cognitive condition.

Moreover, the robot can recognize the person and create narratives based on their preferences. We also monitor and objectively measure the range of motion of the user's joints. This yields precise, objective, and reliable data, from which we generate reports for family members and therapists on the individual's condition and progress. All of these capabilities strengthen patients' trust in AI and robotics for therapy. Contact us and request a free demonstration!
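To give a sense of how joint movement can be measured objectively, the sketch below computes the angle at a joint from three 3D keypoints, as produced by camera-based pose trackers. The point names and coordinates are illustrative assumptions; it does not describe Inrobics' actual pipeline.

```python
import math

# Hedged sketch: joint angle (in degrees) from three 3D keypoints.
# Coordinates are illustrative; real pose data comes from a tracker.

def joint_angle(a, b, c):
    """Angle at joint b formed by the segments b->a and b->c."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# Example: elbow angle from shoulder, elbow, and wrist positions.
shoulder, elbow, wrist = (0, 1, 0), (0, 0, 0), (1, 0, 0)
print(round(joint_angle(shoulder, elbow, wrist)))  # → 90
```

Tracking this angle over the course of a therapy program gives the kind of objective range-of-motion data a report for therapists and family members could be built on.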



Fernando Fernández

Full Professor in the Department of Computer Science at Universidad Carlos III de Madrid (UC3M) since October 2005 and former CEO of Adact Solution, SL. He has received various awards and grants, such as the predoctoral FPU fellowship from the Ministry of Education (MEC) and the MEC-Fulbright postdoctoral fellowship. In 2020, he was honored with one of the JPMorgan AI Research Awards. He has published more than 50 articles in journals and conferences in the field of AI. Since July 2022, he has been on sabbatical as a Visiting Researcher at Texas Robotics, University of Texas at Austin, conducting his research and laying the groundwork for internationalization. A distinguished professional with solid international training and experience in technology and innovation, he is driven by the desire to apply rigorous scientific methods to develop and validate innovative solutions in the health and technology sectors.