
AI, HEALTH CARE AND LAW: PART 2

In the context of AI in health care, Jurisdiction plays a significant role, specifically in determining and adjudicating the legal issues that arise. Given the complexity of AI Algorithms, the risk of data breaches is likely to be high. Consider a case where an AI system is developed by a party in Jurisdiction “A”, is used by a doctor in Jurisdiction “B” to analyse and predict the health outcomes of a patient in Jurisdiction “C”, and the results of that analysis and prediction are shared with a party located in Jurisdiction “D” for the benefit of the patient and to obtain the best possible results. If a data breach occurs, which Jurisdiction should adjudicate the breach becomes an important question requiring significant consideration.

• CYBER SECURITY

The use of AI in health care, by and large, involves large data sets containing sensitive and personal data of patients, used to perform complex analysis and cognitive tasks instantly. AI Algorithms can be subject to Cyber Security breaches and can also be misused for committing Cyber Crimes, Medical Crimes, and other Crimes; Cyber Security therefore assumes a significant role in the context of AI in health care.

Let us consider a scenario wherein an AI Algorithm is compromised and data is altered erroneously, which can entirely change the outcome and lead to a catastrophe. Constant monitoring of the AI Algorithm therefore becomes necessary for Cyber Security purposes; a minimal illustration of one such monitoring control is sketched below. Cyber Crimes such as unauthorised access, hacking, data manipulation and tampering, man-in-the-middle attacks, data theft, denial of service and distributed denial of service attacks can be considered common Cyber Crimes in the context of AI in health care. In addition, Cyber murders can also take place in the context of AI in health care, as explained above.
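
By way of a simple, hypothetical illustration of such monitoring (and not a prescription of any particular tool), the sketch below checks that a deployed model file and its reference data have not been silently altered; the file names and the manifest format are assumptions made purely for illustration.

```python
# Minimal sketch: detect tampering with a deployed AI model or its data files
# by comparing current SHA-256 hashes against a trusted manifest.
# File names and the manifest layout are hypothetical.

import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifacts(manifest_path: Path) -> bool:
    """Compare each listed artifact against its recorded trusted hash."""
    manifest = json.loads(manifest_path.read_text())  # e.g. {"model.bin": "<hex digest>"}
    all_ok = True
    for name, expected in manifest.items():
        if sha256_of(Path(name)) != expected:
            print(f"ALERT: {name} has changed since it was approved (possible tampering)")
            all_ok = False
    return all_ok

# Hypothetical usage, run on a schedule as part of routine monitoring:
# verify_artifacts(Path("trusted_hashes.json"))
```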

AI Algorithms must be tested frequently for different vulnerabilities and must incorporate anti-malware and anti-spyware protections. Let us take another example: an AI Algorithm is accessed without authorisation and the data possessed by the AI Algorithm gets into the hands of different players such as Health Care Providers, Medical Equipment Sellers, Pharmacies, Specialised Medical Hospitals, Doctors, Pharma Companies, and other Stakeholders. These Stakeholders may then offer services and products specifically targeted at the patients, since they are aware of their medical and health conditions.

• MEDICO LEGAL CONSIDERATIONS

DOCTOR-PATIENT RELATIONSHIP: ETHICS

If an AI Algorithm assumes the role of a Doctor during interactions with patients, the entire concept of the Doctor-Patient relationship changes. Given that the Doctor-Patient relationship is based on trust and confidentiality, in this kind of scenario machine ethics assumes significant importance, specifically regarding the moral behaviour of AI Algorithms. It is also imperative to consider whether Human Beings have a duty to observe ethical standards while designing and creating AI Algorithms. Even when a patient is aware that they are interacting with a machine, it is still possible that such interactions may evoke intense emotions, specifically during counselling and therapy related sessions. The Doctor-Patient relationship usually involves disclosure of sensitive personal data and other details of the Patient to the Doctor. The patient is usually prepared to make such disclosure because they believe that the information is protected by the Doctor in a confidential manner, and any breach of confidentiality can result in damage claims by the aggrieved patient.

Given that an AI Algorithm is not a person and that it possesses a great deal of the patient’s data after such interactions, the question that has to be decided is whether the patient information held by the AI Algorithm is protected by the duty of confidentiality. Further, recording of the patient’s data becomes a prerequisite for the Algorithms to analyse medical records, suggest medical treatments, assess medical and health conditions, and conduct counselling and therapeutic sessions. If the patient is subject to multi-disciplinary therapies, there is a possibility that the AI Algorithms will connect and interact with other Algorithms in order to serve the best interests of the patient. Given this scenario, the question of confidentiality will have to be examined in the context of such interacting Algorithms.

It is imperative to note that patients need to be protected beyond the Doctor-Patient relationship, since disclosure of their information might otherwise have a wider impact, including on relationships and job opportunities. Given that AI Algorithms store large amounts of the patient’s personal data, which might be shared not just with the doctor but also with family members and others, such protection may prove challenging.

Let us take an example where the personal and sensitive data of the patient possessed by an AI Algorithm is used to increase the patient’s health insurance premium. The predictive capability of AI brings significant ethical concerns into health care. If AI is used for health and medical predictions, such information can be included in a patient’s electronic health records; therefore, anybody accessing those health records could also access the medical and health predictions. This kind of access could lead to discrimination, including discrimination in employment.

Health predictions through AI can also cause psychological harm. For example, many people could be traumatised if they come to know that they are likely to suffer cognitive decline in the later part of their lives. Further, AI health predictions may be erroneous, and many factors can contribute to such errors. If the data used to develop the Algorithm is flawed, or if it draws on medical information or records that contain errors, then the output of the Algorithm will be incorrect; patients may then suffer discrimination or psychological harm when in fact they are not at risk of the predicted ailments. In these scenarios, who will be legally accountable is an interesting question.

• ALGORITHMIC FAIRNESS AND BIAS

Questions of Algorithmic Fairness in health care usually arise where Algorithmic predictions are used to support decision making that is intended to benefit patients. It is important to understand that Algorithmic Fairness also involves ethical, political, and constitutional concerns. Machine-learning Algorithms make predictions using mathematical models that are not programmed explicitly but are instead developed from rules that associate variables with outcomes in a specified data set. Algorithmic Fairness is assuming greater significance given the rapid expansion of Algorithmic prediction in health care, and the focus is on how the principles of Algorithmic Fairness should be applied in clinical decision making. Under the general principle of Fairness, similar individuals should not be subject to differential treatment on the basis of attributes such as Race, Gender, Ethnicity, Religion, Creed or National Origin. In the context of AI in health care, Algorithmic Fairness is maintained if no differential treatment results from analysing the medical and health conditions of two similar patients; a minimal sketch of such a check follows.
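
As a minimal, hypothetical sketch of the “similar patients, similar treatment” idea (the risk model, feature names, and weights below are invented for illustration and do not reflect any real clinical model), one can test whether changing only a protected attribute changes an otherwise identical patient’s prediction:

```python
# Minimal sketch: check whether two otherwise identical patients receive the same
# prediction when only a protected attribute differs. Model and features are hypothetical.

from typing import Callable, Dict

Patient = Dict[str, object]

def consistent_for_attribute(model: Callable[[Patient], float],
                             patient: Patient,
                             protected_attr: str,
                             alternative_value: object,
                             tolerance: float = 1e-9) -> bool:
    """True if flipping only the protected attribute leaves the prediction unchanged."""
    counterpart = dict(patient)
    counterpart[protected_attr] = alternative_value
    return abs(model(patient) - model(counterpart)) <= tolerance

def risk_model(p: Patient) -> float:
    """Hypothetical risk score that (improperly) depends on gender."""
    score = 0.02 * p["age"] + 0.5 * p["biomarker"]
    if p["gender"] == "female":
        score += 0.1  # differential treatment that the check should expose
    return score

patient = {"age": 55, "biomarker": 1.2, "gender": "female"}
print(consistent_for_attribute(risk_model, patient, "gender", "male"))  # False -> unfair
```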

However, there is a possibility that the Algorithms used for prediction can inadvertently be biased or unfair in decision making, despite broad commitments to Algorithmic Fairness; data or other sampling issues can lead to biased predictions. Consistent and substantial differences in the treatment of medical conditions between patients of different Genders, Races, and other groups also contribute to clinical bias and disparities in health care.

In the context of AI in health care, Algorithmic Bias occurs when different outcomes and predictions are provided for similar patients. Let us take the example of Thyroid disease: a Thyroid diagnosis is an imperfect proxy for actual thyroid incidence, and recorded rates of Thyroid disease are considered relatively high in obese people compared to others. If AI prediction models are routinely developed on these parameters to target the screening of obese people for high Thyroid levels, this itself could lead to Bias and mistargeting, as the sketch below illustrates.
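
To make the proxy problem concrete, the toy numbers below (entirely invented for illustration) show how a label such as “recorded diagnosis” can capture true cases unevenly across groups; a model trained on that label would inherit the gap:

```python
# Minimal sketch: a proxy label (recorded diagnoses) may capture true cases unevenly
# across groups, so a model trained on it inherits that gap. All counts are invented.

def proxy_coverage(diagnosed: int, true_cases: int) -> float:
    """Fraction of true cases that the proxy label actually records."""
    return diagnosed / true_cases if true_cases else 0.0

groups = {
    "obese":     {"diagnosed": 90, "true_cases": 100},  # proxy records 90% of cases
    "non_obese": {"diagnosed": 40, "true_cases": 100},  # proxy records only 40%
}

for name, g in groups.items():
    print(name, "proxy coverage:", proxy_coverage(g["diagnosed"], g["true_cases"]))

# A screening model optimised against the "diagnosed" label would systematically
# under-detect the group whose cases are under-recorded, even with equal true incidence.
```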

Let us consider the example of a pregnancy test. Suppose an AI Algorithm is designed to predict pregnancy by gauging the presence of the Human Chorionic Gonadotropin hormone in urine. If a male patient’s urine also shows the presence of this hormone, the AI Algorithm would, on these parameters alone, attribute pregnancy to the male person. It can be inferred that an AI Algorithm that uses data blindly, without regard to Gender, may be less effective than an AI Algorithm trained on data sets that include information on gender; a minimal sketch of this contrast is given below. In this context, it should be understood that eliminating Bias from the data initially fed into AI Algorithms is challenging, since such training data may itself reflect historical Bias. The concept of debiasing is being developed in order to address the issue of Algorithmic Bias. These issues bring in a huge number of potential legal challenges when using AI Algorithms in health care, since the health care data involved is sensitive and personal.
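
The contrast between a rule that ignores gender and one that uses it can be shown with a deliberately simplified, hypothetical sketch (real clinical prediction models are nothing this crude):

```python
# Minimal sketch of the pregnancy example: a rule that looks only at the hCG marker
# versus one that also considers the patient's sex. Data and rules are hypothetical.

from dataclasses import dataclass

@dataclass
class Sample:
    hcg_detected: bool  # hCG hormone detected in the urine sample
    sex: str            # "female" or "male"

def marker_only_rule(s: Sample) -> bool:
    """Predicts pregnancy from the hCG marker alone, blind to sex."""
    return s.hcg_detected

def sex_aware_rule(s: Sample) -> bool:
    """Treats hCG in a male sample as something other than pregnancy."""
    return s.hcg_detected and s.sex == "female"

samples = [
    Sample(hcg_detected=True,  sex="female"),  # plausibly pregnant
    Sample(hcg_detected=True,  sex="male"),    # hCG present for another reason
    Sample(hcg_detected=False, sex="female"),
]

for s in samples:
    print(s.sex, "marker-only:", marker_only_rule(s), "sex-aware:", sex_aware_rule(s))
```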
