IIT Madras receives $1 mn investment from Google for its new Centre for Responsible AI


The new AI centre at IIT Madras will fund research initiatives and provide datasets for AI applications.

The US technology behemoth Google has been named the first “platinum consortium” member of the Centre for Responsible Artificial Intelligence (CeRAI) at the Indian Institute of Technology Madras (IIT Madras), recognising its commitment with an initial investment of $1 million.

The announcement was made during the Centre’s first-ever panel discussion and workshop on Monday. The centre will fund research initiatives and provide datasets for AI applications.

On April 27, Rajeev Chandrasekhar, the Union Minister of State for Electronics and Information Technology, formally inaugurated CeRAI.

The Centre has formed partnerships with the trade association Nasscom, the Southern Indian Chambers of Commerce and Industry (SICCI), the policy think tank Vidhi Centre for Legal Policy, and the think tank Research and Information Systems (RIS), which is associated with the Ministry of External Affairs (MEA).

These partnerships seek to promote the responsible use of AI by establishing academic curricula, investigating AI’s ramifications, creating a participatory AI framework, and coaching companies to develop responsible AI applications. According to a news release, The Indus Entrepreneurs (TIE), a startup mentoring and incubation organisation, will also be affiliated with CeRAI.

According to a news release from IIT Madras, CeRAI will seek to “formulate sector-specific guidelines and recommendations for policymakers” as part of its advocacy work for AI policies.

Balaraman Ravindran, head of CeRAI and the Robert Bosch Centre for Data Science and AI at IIT Madras, made the following remarks at the opening: “When AI models and their predictions are to be deployed in various critical sectors, such as healthcare, manufacturing, banking, and finance, it is important for them to be explainable and interpretable. Additionally, they must offer performance assurances for data integrity, privacy, and robustness of decision-making that are appropriate to the applications in which they are implemented.”

Abhishek Singh, managing director and chief executive of the Centre’s Digital India Corporation, stated that it is crucial for researchers and policymakers to “be aware of the risks and challenges while using technologies for solving societal problems, ensuring access to healthcare, making healthcare more affordable, education more inclusive, and agriculture more productive.”

These sectors have particular needs that require tailored solutions, and thus an unbiased and non-discriminatory AI framework, he continued.

Of course, this is not the first time that business, government, and academia have come together to discuss the creation of ethical AI applications in India. In November of last year, the policy think tank Niti Aayog produced a discussion paper on the use of responsible AI in creating the nation’s facial recognition technology infrastructure. The Ministry of Electronics and IT (MeitY), the National e-Governance Division (NeGD), and Nasscom have also released a responsible AI development “toolkit” to promote the creation of applications and policies for the “IndiaAI” programme.

While debates in Europe over the accountability and explainability of AI models have raised concerns about how the emerging technology should be regulated, Union IT Minister Ashwini Vaishnaw stated in Parliament on April 6 that the Centre does not intend to adopt legislation to regulate AI development. He did, however, concede that the development of AI raises ethical issues, such as racial bias, discrimination, invasion of privacy, and lack of transparency in AI decision-making.

Vaishnaw noted in his statement that the Centre is working to standardise and promote “best practices” for the development of ethical AI models.