As part of our initiative to engage with law and policy makers, the Technology and Society Initiative at CPR launched a new series on ‘Navigating Interactions between Technology and Policy’. The focus audience for this series is Legislative Assistants to Members of Parliament (LAMP) fellows, parliamentary aides and others directly involved with law and policy making in India. This three-part series of workshops, consisting of talks and presentations by experts from within and outside CPR, followed by lively interactions, aims to shed light on current debates pertaining to technology.
The first workshop in this series, focusing on informational privacy in the digital age, was conducted at the CPR Conference Room on 22nd August. The discussion had three segments, each led by a key resource person through an individual presentation, followed by an open round of questions and answers moderated by Ananth Padmanabhan, Visiting Fellow at CPR.
The workshop offered insights into several critical aspects of ‘informational privacy’: the meaning of such privacy in the context of digital technologies; deficiencies in the current legal and policy framework in safeguarding it; the proposed regulatory framework to address this gap, namely the Personal Data Protection Bill, along with its significance and potential impact; and the need to constantly engage with this theme in the light of emerging technologies like automated facial recognition.
The first resource person, Lalit Panda, Research Fellow at Vidhi Centre for Legal Policy, discussed the fundamentals of privacy, focusing on the need for sui generis protection and the concepts and tools required to offer it. Privacy entails control over personal information, which in turn secures the freedom of thought and political association, the freedom to make self-defining life choices, and freedom from surveillance. The debate on secrecy versus privacy needs to be seen in the light of justificatory principles and values that are rooted in social norms and political structures, and that broadly impact economic development as well. These principles and values have found new meaning in the digital era, as all-pervasive technologies expand exponentially alongside conditions of weak regulatory capacity. The possession of data by third parties, and the permanence and traceability of information on individuals, complicate matters further. Panda therefore emphasised the need for special data protection rules.
Nehaa Chaudhari, public policy lead at Ikigai Law, built on this insight and detailed the regulatory structures and the compliance and enforcement regime envisaged under the draft Personal Data Protection Bill, 2018. The bill is centred on the concepts of personal and sensitive personal data, the obligations of data fiduciaries who make determinative choices on how such personal data may be processed, and the rights of citizens whose data is being collected, processed and monetised. Under the draft bill, both data fiduciaries and data processors have obligations, though the latter carry fewer of them. The bill fundamentally reiterates well-accepted data processing principles, including purpose limitation, collection limitation, lawful processing, notice and consent, data accountability, quality, and storage limitation. Fiduciaries must adopt ‘privacy by design’ in their business operations and ensure that privacy is protected at all points of processing; by design, their systems should be able to anticipate, identify and prevent harms to data principals. Chaudhari discussed the role of two key players, the Data Protection Authority of India (DPAI) and the Central Government, in shaping regulations and enforcement norms under the bill. The DPAI is in charge of enforcement, but the Central Government has a crucial role in tasking this independent regulator. The DPAI is designed as a highly centralised regulatory institution with wide-ranging authority, including the power to issue directions and codes of practice addressed to data fiduciaries and processors, request information from them, conduct inquiries, appoint investigators, and decide cases on their merits and pass final orders. The DPAI is also tasked with increasing public awareness of data protection.
With these multiple and diverse functions vested in it, the DPAI runs the risk of becoming an overburdened regulatory body with limited resources and capacity.
Responding to the complex question of how emerging technologies interact with privacy, Smriti Parsheera, Fellow at the National Institute of Public Finance and Policy, took a deep dive into facial recognition technologies (FRTs). FRTs are essentially algorithms that run captured video feeds against pre-existing databases of facial images to identify matches between the two. As compelling research studies have shown, FRTs are prone to high error rates. Even where they work, it is usually for verifications done with the cooperation or consent of individuals, such as at airports, in classrooms, or through biometric attendance systems at workplaces. The importance of discussing this technology stems from the availability of large databases of pictures uploaded to social media accounts or collected through CCTVs, and from the computational advances in processing these large data sets. Alongside FRTs, drones and self-driving cars that use similar software could potentially interfere with individual and collective privacy. In India, FRTs were earlier proposed for use as part of Aadhaar identification and authentication, though this project finds little mention now. Additionally, the National Crime Records Bureau has issued a tender for an Automated Facial Recognition System, and airports have initiated DigiYatra for seamless and hassle-free check-in. While the technology debate swings between ‘convenience and efficiency’ on one side and ‘human errors and privacy concerns’ on the other, national security remains the dominant rationale offered by public authorities in support of this technology. In this scenario, one needs to ask and answer the following questions: Is this technology reliable enough, given instances of false identification of women and people of colour?
Can it be considered legally tenable, considering that data collected ubiquitously for altogether different purposes is being processed without the consent of individuals? Would it end up being a tool that supports discrimination, given this technology's reliance on biased input feeds? Parsheera stressed the need to apply the Puttaswamy standards in a rights-based assessment of this technology, as well as the need for self-regulatory and ethical frameworks that work alongside statutory protection.