New Delhi: Days before the India AI Impact Summit 2026, two watchdog groups have raised the alarm over the political use of artificial intelligence in India. The Internet Freedom Foundation and the Centre for the Study of Organised Hate released a report titled "India AI Impact Summit 2026: AI Governance at the Edge of Democratic Backsliding".
The report states that generative AI has become a tool to demonise minorities. It cites an AI-generated video uploaded by the BJP's Assam unit on X. The video, titled "No Mercy", showed Assam Chief Minister Himanta Biswa Sarma shooting at two visibly Muslim men. One image appeared to depict Opposition leader Gaurav Gogoi wearing a skullcap. The report lists similar examples from Delhi, Chhattisgarh and Karnataka.
Researchers also tested popular text-to-image tools such as Meta AI, Microsoft Copilot, ChatGPT and Adobe Firefly. They found weak safety controls in local languages: harmful prompts produced images reinforcing stereotypes against Muslims.
On surveillance, the report flags an AI tool announced by Maharashtra Chief Minister Devendra Fadnavis, developed in collaboration with IIT Bombay. The tool seeks to identify alleged Bangladeshi immigrants and Rohingya refugees through speech analysis. Linguistic experts question its accuracy because Bengali dialects overlap across India and Bangladesh.
The report also criticises facial recognition systems used by police, noting high error rates and a lack of transparency. It links AI errors to exclusion from welfare schemes and the deletion of valid voters from electoral rolls. The authors urge policy reform, transparency from industry, and public oversight to protect rights.