ALGORITHMIC EXCLUSION AND DISABILITY: CAN INDIAN LAW RESPOND TO AI-DRIVEN DISCRIMINATION?

by Nandhana V and Adiya Garg, Fourth Year Students

AI-Driven Discrimination Against Persons with Disabilities

The provision of public services, employment opportunities, education, social welfare, and medical facilities is now shaped by AI. AI systems often act as invisible decision-makers that discriminate against persons with disabilities without any rationale. Explicit examples of AI-based discrimination are rare; instead, it occurs indirectly through inaccessible digital interfaces, rigid automated assessments, and proxy variables. Recruitment software penalizes employment gaps caused by medical treatment. AI-based interview software that analyses speech and facial expressions can misread disability-related characteristics as a lack of capability. Online proctoring software for computer-based exams flags job applicants who use assistive devices or display involuntary movements as suspicious, leading to the cancellation of assessments without any direct human intervention. AI systems thus appear to be objective decision-makers while being highly discriminatory towards persons with disabilities.

Inaccessibility compounds this exclusion. As public authorities and institutions pursue digital-first or digital-only approaches, AI-powered platforms are increasingly the sole means of exercising rights. Inaccessibility appears in chatbots that do not comprehend non-standard speech, automated complaint mechanisms that are incompatible with screen readers, and welfare sites with inaccessibly implemented verification processes. Such inaccessibility is particularly egregious because it entrenches existing inequalities without any justification, while presenting itself as progress rather than oppression.

The Rights of Persons with Disabilities Act and Algorithmic Discrimination

Though it does not refer to artificial intelligence specifically, the Rights of Persons with Disabilities Act, 2016 provides a strong legal foundation for addressing discrimination driven by artificial intelligence. The Act is concerned with the impact an action has on the enjoyment of rights, not with the intent behind the action or system. This makes it particularly useful for algorithmic discrimination, which is often systematic yet unintentional.

Section 3 of the Act guarantees equality and, as part of that guarantee, recognizes reasonable accommodation. The Act treats the denial of reasonable accommodation as discrimination, acknowledging that mere equal treatment is insufficient to achieve substantive equality. The provision is violated when AI technology applied in employment, education, or service delivery fails to accommodate disability-related variables. A public authority cannot escape this duty on the ground that no human made the decision.

Reasonable accommodation becomes particularly crucial where AI systems are deployed. When such systems result in exclusion, the failure to provide an alternative violates the rights of persons with disabilities under the Act. Accommodation may entail human review of exam recordings initially flagged by proctoring software, manual re-evaluation of candidates rejected by automated screening, or, where AI platforms prove inadequate, accessible non-digital mechanisms. These obligations are not discretionary: unless the authority can demonstrate an excessive or disproportionate burden, these rights are justiciable.

The Act further places a positive duty on the State to ensure that systems are accessible. Where AI-enabled systems serve as access points to public services, their inaccessibility violates the law. An inaccessible AI system is not merely bad technology; it is illegal, because it denies equal access to rights protected under the Act.

Constitutional and International Law Constraints on AI Systems

Constitutional law reinforces the application of disability rights principles to AI-driven decision-making, particularly where the State deploys the system. Article 14 of the Constitution prohibits arbitrary administrative action. Opaque AI that applies rigid automated thresholds or undisclosed criteria can be challenged on this ground, especially where decisions affect social welfare, employment, or access to education. Such a system is procedurally unfair chiefly because its decisions cannot be explained.

Article 16 also constrains the use of automated systems in public employment. AI recruitment platforms that fail to offer reasonable accommodation, or that systematically eliminate disabled candidates, violate the Constitution's guarantee of equality of opportunity. Article 21 is engaged as well, especially where an AI system subjects persons with disabilities to invasive monitoring or assessment without adequate remedy.

India's obligations under the United Nations Convention on the Rights of Persons with Disabilities are also relevant. The Convention prioritizes equality, accessibility, and inclusion in every aspect of life, including the use of technology. The Indian judiciary's reliance on the UNCRPD supports purposive interpretations of the disability legislation to meet this new form of AI-driven exclusion.

Gaps, Challenges, and the Way Forward

Even so, the current legal framework faces several problems. Because disability is inferred through proxies rather than processed directly, algorithmic discrimination is often difficult to prove. Proprietary AI systems limit transparency, making it harder for affected persons to gather evidence. Indian data protection law does not yet offer a clear right to explanation or a guaranteed right to human review of automated decision-making.

Addressing these issues does not require a full redesign of disability law. Rather, targeted interventions are needed to operationalize existing rights in AI contexts. Public authorities should be obliged to perform disability-focused impact assessments before AI systems are deployed in high-stakes domains such as hiring, education, welfare, or healthcare. Procurement policies should mandate accessibility, auditability, and accommodation by design. Where an AI system plays a significant role in a decision, individuals should receive notice, and clear procedural protections should guarantee human review and effective redress.

Conclusion

Discrimination against persons with disabilities by artificial intelligence is not a problem of the future; it already affects access to rights. Indian disability law, particularly the Rights of Persons with Disabilities Act read together with the constitutional provisions, already supplies a strong normative and legal framework to address these issues. The task is to enforce existing rights with technological understanding and to treat algorithmic exclusion as a form of discrimination. If that is done, the law can ensure that artificial intelligence supports inclusion rather than exclusion.