AI in Indian Workplaces: Balancing Innovation and Fairness
Seeing the People Behind the Algorithms
Artificial Intelligence is no longer a distant idea; it is part of everyday life in India’s tech and service sectors. From large corporations using AI to sift through thousands of job applications to teams tracking productivity through digital dashboards, today’s workforce operates under the increasing influence of algorithms. Yet many employees remain unaware of how these systems shape their careers.
Consider the case of a Bengaluru-based software engineer (name withheld) who was blindsided by a low “engagement score” in her quarterly review. The algorithm, which measured her online presence, login times, message responsiveness, and keystroke frequency, automatically excluded her from the promotion cycle. When she raised concerns, her manager simply replied, “The system decides.” No one could explain how the score was calculated or whether it could be wrong.
At NITES LEGAL, we view this not as an isolated story but as a symptom of an emerging workplace dilemma: how do we preserve fairness and dignity when machines begin influencing human potential? The question is no longer whether AI can help, but how to ensure it does not reinforce bias, erode transparency, or deny workers their voice.
What the Law Says, Even If It Does Not Say “AI” Yet
India’s legal framework has not explicitly caught up with artificial intelligence, but the principles embedded in existing labour laws already provide strong protections. The laws that govern industrial relations, wages, and workplace safety rest on fairness, reasoned decision-making, and the protection of worker dignity, and those principles apply even when decisions are made by algorithms.
If an AI-driven assessment results in a termination or demotion without clear reasons, it could violate provisions of the Industrial Disputes Act, 1947 and the Industrial Relations Code, 2020, which require due process and fair hearing before adverse employment action. Similarly, gender-biased algorithms used in hiring or evaluations may breach the Equal Remuneration Act, 1976 (now integrated into the Code on Wages, 2019), which mandates equal treatment for work of equal value.
India’s Digital Personal Data Protection Act, 2023 (DPDP Act) further strengthens this framework by requiring consent, purpose limitation, and proportionality in data processing. Employers deploying AI-based monitoring systems must justify their necessity and ensure that employees’ personal data is collected and processed responsibly. While detailed guidelines and enforcement mechanisms are still being rolled out, companies are expected to act in anticipation of oversight by the forthcoming Data Protection Board of India.
Indian courts have long upheld the principles of natural justice and reasoned decision-making. These doctrines mean that every employee is entitled to understand and respond to the reasons behind an adverse action. An algorithm that produces cryptic results without explanation may therefore be vulnerable to legal challenge for violating constitutional and procedural fairness principles.
When Algorithms Reflect Human Bias
AI systems learn from data, and that data reflects society’s flaws. Globally, recruitment and performance tools have shown tendencies to marginalise women, older applicants, or candidates from less privileged backgrounds. The Indian context is no exception.
Imagine a hiring algorithm trained predominantly on historical data from male engineers in major cities. When used for screening new candidates, it may systematically undervalue a qualified woman or a rural applicant simply because those profiles differ from its success template. Some Indian startups have experimented with facial-recognition tools claiming to measure confidence, but these systems have stumbled over biases related to darker skin tones, regional accents, or disabilities.
While foreign companies have faced legal and reputational backlash for similar issues, Indian firms are just beginning to grapple with them. Discriminatory AI is not just an ethical lapse; it can breach Article 14 of the Constitution (Right to Equality) and intersect with workplace laws such as the POSH Act, 2013, especially when AI tools monitor behaviour or assist in internal investigations without proper consent or transparency.
Recognising these risks, the next question for employers is how to design safeguards that preserve fairness without compromising innovation.
A Practical Roadmap for Responsible AI
Businesses do not need to choose between efficiency and ethics. The first step is transparency. Employees deserve to know where and how AI systems are influencing decisions. HR policies and employment contracts should clearly outline the role of automation in recruitment, promotions, appraisals, or disciplinary actions.
The second step is governance. Organisations should form AI ethics committees comprising experts from HR, law, compliance, and technology. Much like POSH or grievance committees, these bodies can review algorithmic design, test outcomes for bias, and ensure documentation of accountability.
Third, human oversight must remain central. The principle of “human in the loop” should guide all critical employment decisions. No termination, demotion, or warning should occur without a responsible manager reviewing the AI’s recommendation. Although Indian courts have not yet mandated this, experience from the EU and US suggests that judicial scrutiny will soon demand it.
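The "human in the loop" principle can be made concrete at the system-design level. The sketch below is a minimal, hypothetical illustration of one way to enforce it: critical actions cannot be finalised without a named reviewer and written reasons, which also creates the documented record that due-process requirements favour. The class names, action labels, and fields are illustrative assumptions, not a reference to any real HR system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    """An AI system's suggested action (hypothetical structure)."""
    employee_id: str
    action: str          # e.g. "warning", "demotion", "termination"
    model_score: float

@dataclass
class Decision:
    """The finalised outcome, including the human reviewer's reasons."""
    recommendation: Recommendation
    approved: bool
    reviewer: str
    rationale: str       # written reasons, supporting a fair-hearing record

# Actions treated as critical employment decisions in this sketch.
CRITICAL_ACTIONS = {"warning", "demotion", "termination"}

def finalize(rec: Recommendation, reviewer: Optional[str],
             rationale: str) -> Decision:
    """Refuse to auto-approve a critical action: it requires a named
    human reviewer and recorded reasons before it takes effect."""
    if rec.action in CRITICAL_ACTIONS and not (reviewer and rationale.strip()):
        raise ValueError(
            "critical action requires human review and written reasons")
    return Decision(rec, approved=True,
                    reviewer=reviewer or "system", rationale=rationale)
```

The design point is that the gate lives in code, not in policy alone: an unreviewed termination recommendation simply cannot pass through.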
Regular algorithmic audits, whether internal or external, should evaluate whether AI systems align with Indian standards of fairness, diversity, and labour law. Such audits can detect unintended patterns, such as consistent under-rating of women or employees from certain regions, and help companies proactively correct bias.
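One screening heuristic such an audit can apply is the "four-fifths rule" used in US EEOC selection guidance: if one group's selection rate falls below 80% of the most-selected group's rate, the tool warrants closer scrutiny. The sketch below applies it to hypothetical selection counts; the group labels and numbers are invented for illustration, and a real audit would involve proper statistical testing alongside this check.

```python
def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants in a group who passed the AI screen."""
    return selected / total if total else 0.0

def four_fifths_check(rates: dict) -> dict:
    """Compare each group's selection rate to the highest rate.
    Returns False for a group whose rate is below 80% of the top rate."""
    top = max(rates.values())
    return {group: (rate / top) >= 0.8 for group, rate in rates.items()}

# Hypothetical audit data: candidates screened by an AI hiring tool.
rates = {
    "group_a": selection_rate(selected=45, total=100),  # 0.45
    "group_b": selection_rate(selected=30, total=100),  # 0.30
}
flags = four_fifths_check(rates)
# group_b's rate (0.30) is about 67% of group_a's (0.45),
# below the 80% threshold, so the audit would flag it for review.
```

A flag from a check like this is not proof of unlawful discrimination; it is the trigger for the human investigation and correction the audit process exists to enable.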
Employees also need empowerment. Workers have a right to question AI-generated decisions and to seek transparent explanations. As labour jurisprudence evolves, the right to know why a decision was made will become as vital as the right to be heard.
Looking Ahead: Technology That Respects Human Dignity
The intersection of AI, privacy, and workplace law touches the most fundamental value in employment: human dignity. Responsible employers understand that AI should augment, not replace, human empathy. Trust remains the foundation of healthy workplaces, and trust requires clarity and openness.
At NITES LEGAL, we believe India is at a defining crossroads. As AI becomes embedded in HR and compliance systems, regulators and courts must evolve to protect employees without stifling innovation. We are closely observing global developments, from the European Union’s AI Act to the US EEOC’s guidance on automated hiring, to help Indian employers align with emerging ethical and legal norms.
Our mission is clear: technology must serve fairness, not undermine it. A workplace powered by AI should reflect India’s constitutional promise of equality, respect, and justice for every worker. As the line between human judgment and machine logic blurs, ethical governance will define the most trusted organisations of the future.
At NITES LEGAL, we remain committed to helping employers and employees co-create workplaces that are both cutting-edge and compassionate.