Volume 6 (2023-24)
Each volume of Journal of Data Protection & Privacy consists of four 100-page issues published both in print and online.
The articles published in Volume 6 are listed below.
Volume 6 Number 4
-
Editorial
The outcome of the US Election will determine the future of Big Tech and data privacy
Ardi Kolah, Founding Editor-in-Chief, Journal of Data Protection & Privacy -
Practice Papers
The AI Act: A compass for approaching legislation with innovative and disruptive effects
Rocco Panetta, Managing partner at Panetta Law Firm, Chairman and CEO at Panetta Consulting Group, and Vincenzo Tiani, Resident partner, Panetta Law Firm
This paper aims to provide an overview of the EU regulation on artificial intelligence (hereafter the AI Act) by navigating through its main proposals and innovations. A geopolitical framework will be provided with reference to the balance to be struck between innovation and the protection of fundamental rights, and to the decisions, taken during the inter-institutional negotiations, that led to the final version of the AI Act. The overview will cover each risk category, including how the regulation deals with general-purpose AI, which was not considered in the European Commission's first draft published in April 2021. In addition to the risk categories, the governance model and innovation support measures will also be analysed. The paper is addressed to practitioners, academics, policy makers and postgraduate researchers who are approaching the AI Act for the first time.
Keywords: Artificial Intelligence Act; EU regulation; data protection; risk-based approach; high-risk AI; regulatory compliance -
Synthetic data and European General Data Protection Regulation: Ethics, quality and legality of data sharing
Shalini Dwivedi, VP, Head of Medical Writing and Clinical Trial Transparency, Krystelis
Synthetic data is increasingly being used across the financial services, clinical research, manufacturing and transport industries. In clinical research, use cases for synthetic data include secondary analysis to identify novel treatment pathways, to develop healthcare policies, to evaluate research methods and, importantly, to evaluate research hypotheses without exposing real patients to potentially harmful experimental treatments. Methods for creating synthetic data in a manner that protects the privacy of clinical trial participants while preserving the utility of the data for analysis are rapidly evolving. Challenges remain, however, including obtaining appropriate consent for the use of real patient data in the creation of synthetic datasets, eliminating bias in synthetic data and ensuring that data privacy concerns can be addressed.
Keywords: synthetic data; GDPR; anonymisation; personal data; data sharing; clinical trial transparency; data privacy; data protection; data transfer; reidentification risk -
High-fidelity synthetic patient data applications and privacy considerations
Puja Myles, Director, Clinical Practice Research Datalink, Medicines and Healthcare products Regulatory Agency, Colin Mitchell, Head of Humanities, PHG Foundation, University of Cambridge, Elizabeth Redrup Hill, Senior Policy Analyst (Law and Regulation), PHG Foundation, Luca Foschini, President, Sage Bionetworks, Zhenchen Wang, Head of Data Analytics and Machine Learning, Scientific Data and Insights, Medicines and Healthcare products Regulatory Agency (MHRA)
This paper explores the potential applications of high-fidelity synthetic patient data in the context of healthcare research, including challenges and benefits. The paper starts by defining synthetic data, types of synthetic data and approaches to generating synthetic data. It then discusses the potential applications of synthetic data beyond its use as a privacy-enhancing technology, and current debates around whether synthetic data should be considered personal data and, therefore, subjected to privacy controls to minimise reidentification risks. This is followed by a discussion of privacy preservation approaches and privacy metrics that can be applied in the context of synthetic data. The paper includes a case study based on synthetic electronic healthcare record data from the Clinical Practice Research Datalink on how privacy concerns due to reidentification have been addressed in order to make this data available for research purposes. The authors conclude that synthetic data, particularly high-fidelity synthetic patient data, has the potential to add value over and above real data for public health, and that it is possible to address privacy concerns and make synthetic data available via a combination of privacy measures applied during the synthetic data generation process and post-generation reidentification risk assessments as part of data protection impact assessments.
Keywords: synthetic patient data; re-identification risk; privacy metrics; CPRD; data governance; differential privacy -
Research Papers
Protecting privacy in the digital marketplace: A comparative study of legal mechanisms for consumer rights in the metaverse
Akshay Baburao Yadav, Research Scholar, M.S. Ramaiah University of Applied Science, Teaching Associate, National Law School of India University, Consumer International Next Generation Leaders Network, and Prashant Desai, Professor and Dean, School of Law, Dayananda Sagar University
The promotion of fair trade and consumer protection lies at the heart of every nation. During the struggles of the COVID-19 pandemic, the world witnessed huge growth in e-commerce markets, which scaled new heights by reducing physical barriers and maintaining social distancing. E-commerce entities have been trying to fill the gaps in their business by incorporating a new type of e-commerce market, business to avatar (B2A), through the metaverse. These technologies provide personalised recommendations and have been very profitable for e-commerce platforms and consumers alike. However, this win-win situation is marred by increasing consumer privacy and data protection concerns. This paper explores the legal regulations protecting consumer privacy in India, analysing whether the existing legislation adequately addresses contemporary privacy challenges, especially when compared to the robust legal frameworks in the US and EU. Based on the research, the authors observe that: (i) the adoption of metaverse applications in e-commerce has fundamentally benefited consumers; (ii) the developing e-commerce market has been adversely affected by consumer privacy and data protection concerns; and (iii) to balance the development of e-commerce industries with building consumer trust, regulators need to improve the level of supervision and impose substantial penalties for data breaches. Finally, a way forward is suggested to fulfil the constitutional mandate for the state to protect the fundamental right to privacy of every individual.
Keywords: digital marketing; privacy and data protection; consumer protection; consumer trust; metaverse -
American Privacy Rights Act: A first glance at the US Congress’s newest comprehensive privacy bill
Lothar Determann, Brian Hengesbaugh, Partner & Chair, NA IP & Technology Practice, and Avi Toltzis, Knowledge Lawyer, Intellectual Property and Technology, Baker McKenzie
The recently introduced American Privacy Rights Act (APRA) represents the latest attempt to pass a comprehensive federal privacy law in the US that would govern privacy generally across the country. The draft bill proposes novel compromises on controversial topics such as federal pre-emption and rights of private action, which need refinement and are likely to be changed in the legislative process. The attempt to cover not-for-profit entities without accounting for their different purposes seems ill-conceived and raises constitutional concerns. This paper examines the APRA in its constitutional, historical and policy contexts.
Keywords: APRA; American Privacy Rights Act; pre-emption; right of private action; CCPA; GDPR; data privacy; consumer privacy; United States; federal; DSARs; transparency; privacy notice; privacy compliance -
Recognising generative and autonomous AI as a ‘juridical person’
Prabuddha Ganguli, CEO, Vision-IPR and Adjunct Faculty, Centre of Excellence on Intellectual Property, Indian Institute of Technology, Jodhpur
In the current era, there is a significant global effort to establish legal and regulatory frameworks for the responsible use of artificial intelligence (AI). The discussions surrounding autonomous AI highlight challenges related to its technological transparency and, often, opacity. Despite the widespread application of AI in various fields, debates persist on decision-making processes, the necessity for safe and fair outcomes and the need for regulatory platforms ensuring compliance and governance in AI implementation. Issues such as the ‘authorship’ and ‘inventorship’ of autonomously generated creations, particularly in cases like the ‘device for autonomous bootstrapping of unified sentience’, have sparked intense debates and legal proceedings in multiple jurisdictions, including the UK, USA, Australia, Germany, New Zealand, Taiwan and the EU. The Supreme Court of India's 2019 judgment presents a detailed analysis supporting the recognition of idols as juristic persons. The judgment provides a sufficient basis for the creative recognition of ‘generative and autonomous AI’ as a ‘juridical person’. Such recognition would entitle the AI system to a patent both as an inventor and as an applicant, satisfying all the essential requirements of the Indian Patents Act. Alternatively, an appropriate `sui generis` system will have to be developed in various jurisdictions based on some commonly accepted principles.
Keywords: artificial intelligence (AI); legal aspects; regulatory guidelines; autonomous; inventions; patent; copyright; IP ownership -
Book Reviews
The Right to Data Protection: Individual and Structural Dimensions of Data Protection in EU Law by Felix Bieker
Reviewed by Dr Jacob Kornbeck, Legal Officer, Brussels, Belgium -
Determann’s Field Guide to Artificial Intelligence Law: International Corporate Compliance by Lothar Determann
Reviewed by Ardi Kolah, Founding Editor-in-Chief, Journal of Data Protection & Privacy -
Privacy and AI: Protecting Individuals in the Age of AI by Federico Marengo
Reviewed by Ardi Kolah, Founding Editor-in-Chief, Journal of Data Protection & Privacy
Volume 6 Number 3
-
Editorial
Will ‘AI for All’ become a reality for 1.4 billion citizens in India this year?
Ardi Kolah, Founding Editor-in-Chief, Journal of Data Protection & Privacy -
Practice Papers
Synthetic data and data protection laws
Giuseppe D’Acquisto, Garante per la protezione dei dati personali
While synthetic data has the potential to address privacy concerns associated with real-world data in many scenarios — ranging from health applications to machine learning — organisations must proceed carefully to ensure they do not inadvertently violate the General Data Protection Regulation or other data protection laws. The boundaries between the processing of personal and anonymised data are sometimes overlooked, creating potential risks for individuals' rights and freedoms. This paper starts from a definition of synthetic data and, after reviewing some foreseeable use cases (also promoted by forthcoming sectoral legislation in the areas of artificial intelligence and data sharing), addresses the conditions set out in data protection laws (first and foremost the EU General Data Protection Regulation) for considering a set of data properly anonymised, as well as the phases of synthetic data generation and use in which personal data might still be processed, proposing some reflections towards genuine legal compliance.
Keywords: synthetic data; privacy enhancing technologies; data protection compliance; anonymisation; identification risks -
The protection of personal data according to the civil and criminal Moroccan laws in light of jurisprudence
Anass Gaagouch, Sidi Mohamed Ben Abdellah University of Fez
With the growing dependence on information technology for personal data processing and the diverse activities requiring data use, risks to privacy have increased. Violations are more prevalent in the computer age. In this regard, the study of the protection of personal data, as formulated, has both theoretical and practical importance. This paper aims to analyse two aspects of personal data protection within Moroccan law. It encompasses civil protection based on civil liability rules, and penal protection as guaranteed by the Moroccan Penal Code. Taking into account Moroccan judicial practice and jurisprudence in this field and incorporating insights from over 28 relevant judgments, this analysis will be conducted independently of the provisions of Law No. 09-08 governing privacy and data protection. It also considers the widespread use of the Civil Code and Penal Code by Moroccan courts as foundations in matters involving personal data protection.
Keywords: Moroccan data protection law; personal data and cybercrimes; civil liability; contractual liability; personal data protection; privacy; image protection; data falsification; Moroccan jurisprudence -
Research Papers
AI and data privacy in big-tech: A new frontier in the digital market
Summayah Muncey, Riyadh, Saudi Arabia
In recent years, there have been increased risks and challenges to data privacy and protection, specifically in digital markets where personal data is often a key competitive asset and a source of market power.1 Large digital platforms (LDPs) hold a dominant position in the digital market, which they continue to strengthen by using personal data to effectively target consumers and influence their decisions.2 AI has recently made remarkable progress and breakthroughs in various domains and applications, allowing LDPs that adopt it to process an unprecedented amount of data.3 Although the rapid evolution of AI has the potential to revolutionise human life in abundant ways, AI also poses significant challenges and risks to data privacy and protection as it can collect, process and use large amounts of personal data from various sources, make decisions or predictions based on complex and opaque models, attack or manipulate personal data or systems and affect the privacy rights and expectations of individuals in public spaces. This paper argues that it is crucial to strike a balance between the benefits and opportunities offered by AI and the rights and interests of individuals and society concerning data privacy, protection and competition law. It also discusses potential safeguards that individuals, organisations and governments can adopt to achieve this equilibrium.
Keywords: data privacy; data protection; artificial intelligence; machine learning; generative artificial intelligence; digital market; monopolies; competition law -
Data colonialism on Facebook for personalised advertising: The discrepancy of privacy concerns and the privacy paradox
Seyha Chan, University of Melbourne
In the digital era, Facebook serves as one of the leading sources of personal data and monetises that information for advertising purposes. This study aims to investigate users' perceptions regarding personal data collection and privacy concerns on Facebook and to explore the mediation between privacy concerns and the privacy paradox in relation to personalised advertising outcomes. To address this gap, data was collected through a mixed-method approach by conducting an online survey with 155 respondents, followed by five in-depth semi-structured interviews. The study indicated that most respondents were concerned about their personal information being aggregated and monetised by Facebook without users' consent. Most survey respondents never/rarely clicked on Facebook advertising formats such as photo, video, stories, messenger, carousel, slideshow, collection and playable advertising. However, survey and interview data divulged that they sometimes clicked on and engaged in influencer advertising because they regarded this content as informative, attractive and reliable for product reviews. The study discovered that respondents felt ambivalent towards personalised advertising and sometimes traded their privacy to get immediate benefits from the advertising. Their privacy decision-making process was affected by a cost–benefit calculation, which resulted in what is known as the ‘privacy paradox’, where an individual intentionally divulges personal information on social media despite stating privacy concerns.
Keywords: privacy concerns; privacy paradox; Facebook advertising; personalised advertising; advertising engagement; advertising avoidance -
Peculiarities of processing children’s personal data in mobile applications and games
Salome Sigua, Ivane Javakhishvili Tbilisi State University
Children and young people are often at the forefront of grasping the new and exciting opportunities the Internet can offer, such as playing, communicating, experimenting with relationships and identities, learning, creating and expressing themselves. It is estimated that, globally, one in three Internet users is under the age of 18. This paper examines how online platforms (Meta, Google, TikTok, etc.) protect children's personal data, how they determine whether consent is given by the correct person and whether consent can be used as the lawful ground for processing personal data. The paper also considers the role of a parent or guardian in giving consent for the processing of a child's personal data. It examines questions such as: what is the digital age, what does the General Data Protection Regulation say regarding the protection of children's personal data and what examples do European countries provide in this regard?
Keywords: children's personal data; consent; digital age; online marketing; parental control -
Preserving privacy in European health research: The case of synthetic data
Sara Bonomi, France and Georgia Vasileiadou, Luxembourg
This paper investigates the role of synthetic data in the field of health research, with a particular focus on data protection. More specifically, it aims to clarify whether this new technology represents an alternative to more classic anonymisation techniques. The analysis is based on a review of the existing literature; nevertheless, it is noted that the majority of contributions focus on the technical aspects of synthetic data and machine learning, while fewer legal studies have been conducted on this topic. The outcome of this study shows that, by using synthetic data which respects the ‘privacy by design’ principle (although the identifiability risk still exists), researchers are no longer preoccupied with the question of re-identification and can instead focus on the quality and utility of synthetic datasets. After examining the different solutions applied to enshrine privacy, however, this paper concludes that there is a need to regulate the use of artificially generated data for research and machine learning purposes.
Keywords: data protection; synthetic data; anonymisation; identifiability; machine learning -
Book Reviews
Privacy and AI: Protecting Individuals in the Age of AI, by Federico Marengo
Reviewed by Steve Wilkinson, M&C Saatchi World Services -
Landmark Cases in Privacy Law, edited by Paul Wragg and Peter Coe
Reviewed by Ardi Kolah, Founding Editor-in-Chief, Journal of Data Protection & Privacy -
Information Technology Law, 5th Edition, by Andrew Murray
Volume 6 Number 2
-
Editorial
Made in India: Opportunity to lead the way in AI regulation
Ardi Kolah, Founding Editor-in-Chief, Journal of Data Protection & Privacy -
Practice papers
Benchmarking the Indian Digital Personal Data Protection Act 2023 against data protection frameworks in Singapore, the EU, US and Australia
Mathew Chacko and Shambhavi Mishra, Spice Route Legal
This paper benchmarks the Indian Digital Personal Data Protection Act 2023 in comparison to its global counterparts and identifies ambiguities, defects and issues with the new law that might complicate compliance by global businesses. The paper suggests reforms and approaches based on learnings from best practices under data protection laws in the EU, Singapore, the USA and Australia.
Keywords: Digital Personal Data Protection Act 2023; GDPR; PDPA; APPs; Indian data laws; American Data Privacy and Protection Act -
The EU–US data privacy framework and the impact on companies in the EEA and USA compared to other international data transfer mechanisms
Lothar Determann, Michaela Nebel and Michael Schmidl, Baker McKenzie
Third time's a charm? Companies in the European Economic Area, Switzerland and the UK (EEA+) are considering the pros and cons of the third attempt of the EU Commission and US government to establish interoperability between their data protection and privacy law systems, after the demise of the US Safe Harbor Program and the EU–US Privacy Shield. Should US companies register? Are the efforts worth the potential benefits, given that the new programme has already been challenged and may be invalidated like previous programmes for reasons that businesses cannot control? Should companies that were already enrolled in the previous programmes accept automatic enrolment or leave the programme? Can and should companies in the EEA+ rely on EU–US Data Privacy Framework (DPF) registration for international transfers? Or insist on registration in addition to standard contractual clauses (EU SCC 2021) or other compliance mechanisms? Are data transfer impact assessments (DTIAs) still required for transfers to the US? Should they be updated? This paper seeks to help companies find answers to these questions and (I) outlines the background and context of the Adequacy Decision, (II) explains how US companies can join the DPF, (III) discusses the impact of the Adequacy Decision, (IV) summarises requirements for other compliance mechanisms for international data transfers under the GDPR, (V) compares the DPF to other transfer compliance mechanisms and (VI) provides practical considerations and a summary.
Keywords: EU–US Data Privacy Framework; EU–US Privacy Shield; US Safe Harbor Program; GDPR; data protection law; international data transfers; data transfer impact assessments; three hurdles -
Balancing AI innovation with data protection: A closer look at the EU AI Act
Sean Musch and Michael Charles Borrelli, AI & Partners, and Charles Kerrigan, CMS
In this paper the authors explore the intricate relationship between artificial intelligence (AI) innovation and data protection within the framework of the EU AI Act.1 This groundbreaking legislation addresses the challenges that the rapid advancement of AI poses to safeguarding individual privacy rights. The paper analyses the EU AI Act's provisions, including in-scope entities, extraterritorial applicability, AI system classification, permitted usage and breach notification. It delves into the protection of individuals' fundamental rights, transparency and consent mechanisms. Ultimately, the study underscores the EU AI Act's significance in shaping responsible AI development amid evolving data protection concerns.
Keywords: artificial intelligence; EU AI Act; AI; data protection; privacy -
The proportionality between trade secret and privacy protection: How to strike the right balance when designing generative AI tools
Anna Popowicz-Pazdej, Dentons and University of Wroclaw
Conflict between the right to privacy and data protection and the right to protect trade secrets must be regarded as more or less inevitable. The balancing of different rights is an issue of fundamental importance in data protection and privacy. Navigating the spectrum between the protection of technological development and the protection of fundamental rights is crucial to ensure safer implementation of generative AI tools. The proportionality principle serves as a globally recognised legal instrument to resolve the existing conflicts of fundamental rights. This poses the question of whether there is a need to reevaluate the balance between the disclosure of technical aspects and privacy and data protection rights and related obligations imposed on privacy engineers when developing generative AI tools. The concept of the proportionality principle, which is composed of the test of necessity, suitability and proportionality `sensu stricto`, can address the most vital tensions or interactions between these rights (especially when supported by the application of the appropriate legal framework). Therefore, this paper contributes not only to the discussion of a balanced approach when implementing AI tools, but also presents some general considerations for the global, regional and country-specific legal regulations (including different types of regulations and modes of enforcement when taking into account some technical aspects of the AI tools) within artificial intelligence that can support achieving this aim. To this end, this paper could be of value not only for lawmakers and developers of generative AI systems, but equally for practitioners, including law firms, to navigate the complex ethical and regulatory landscape in a thoughtful and cautious way.
Keywords: AI tools; artificial intelligence; conflicts of rights; trade secret; right to privacy and data protection; proportionality principle; fundamental rights; automated decision making; weights; coefficients; explainable AI; algorithmic decision -
Research papers
The path to privacy: Navigating the global data protection landscape
Noémie Weinbaum, UKG
This paper embarks on a comprehensive analysis of the global data protection climate, emphasising the imperative of adhering to the stringent data protection standards exemplified by the EU. By discussing the prominent case of a significant fine imposed on Meta1 and the adoption of new data protection legislation across the world, the paper underscores the mounting importance of privacy and data protection in an increasingly digital and interconnected world. Additionally, it delves into the practicalities of ensuring data protection, examining strategies such as data minimisation and de-identification and exploring the role of privacy by design principles in safeguarding privacy rights while facilitating technological advancement.
Keywords: privacy by design; Meta; EU–US adequacy decision; AI; India DPDPA; international business; practical strategies; de-identification; encryption; data localisation -
The challenge of defining artificial intelligence in the EU AI Act
Theodore S. Boone, Corvinus University Budapest and Dentons
The EU Commission, the EU Council and the EU Parliament have each issued their own versions of the text of a new EU AI Act. Throughout the gestation of the EU AI Act a core and complex question has arisen: how should ‘AI system’ be defined in the EU AI Act? This paper examines the evolution of the definition of ‘AI system’ in the draft EU AI Act. This paper suggests that, in order to achieve the EU’s goals, a definition of ‘AI system’ which is clear and cannot be modified outside the EU legislative process, which is sufficiently broad to accommodate future technological developments and which focuses on systems which make predictions, recommendations and decisions would appear to be both the most practical and the most appropriate.
Keywords: AI; artificial intelligence; AI system; EU; EU AI Act; regulation -
Video surveillance and the right to privacy in the AI era: Proposed new rules
Konstantinos Kouroupis, Frederick University
Artificial intelligence is omnipresent in many areas of our lives. It can provide numerous services, involving the processing of undefined amounts of personal data and the use of algorithms. Its connection with privacy is therefore very strong. A proposed Regulation on AI, the EU Artificial Intelligence Act (also known as the AI Act), is about to come into force in 2024. This paper aims to demonstrate the great impact of AI on the right to privacy when video-surveillance technology is being used. The introduction of AI tools lends a particular context to that practice. Thus, through a descriptive methodology, this paper attempts to demonstrate the nature, scope and aim of the draft AI Act. The study then puts special emphasis on the governance of video-surveillance systems, especially the governance of facial recognition, as originally regulated under the AI Act. Additionally, following a close study of national policies, a critical approach is taken to the new rules on the issue, which have been proposed by MEPs. Achieving a human-centric dimension for AI is of primary interest. Consequently, this paper also aims to offer original and fruitful suggestions for the regulation of video-surveillance systems in the new AI era.
Keywords: privacy; AI; mass surveillance; face recognition; remote identification system -
Book reviews
Data Protection Without Data Protectionism: The Right to Protection of Personal Data and Data Transfers in EU Law and International Trade Law by Tobias Naef
Reviewed by Dr Jacob Kornbeck -
Your Privacy Is Important To Us! Restoring Human Dignity in Data-Driven Marketing by Jan Trzaskowski
-
Containing Big Tech: How to Protect our Civil Rights, Economy, and Democracy by Tom Kemp
-
Automated Decision-Making and Effective Remedies: The New Dynamics in the Protection of EU Fundamental Rights in the Area of Freedom, Security and Justice by Simona Demková
Reviewed by Ardi Kolah, Founding Editor-in-Chief, Journal of Data Protection & Privacy -
In memoriam
Spiros Simitis (1934–2023)
Peter J. Hustinx
Volume 6 Number 1
-
Editorial
Third time lucky for the European Commission? Let's hope so
Ardi Kolah, Founding Editor-in-Chief, Journal of Data Protection & Privacy -
Practice papers
As interest in using artificial intelligence increases, can UK and EU compliance legislation keep pace with the rate of change?
Steve Wilkinson, Freelance Data Protection Officer
Legislation usually follows technological developments, in this case the advancement of artificial intelligence (AI). AI could assist predictions of case outcomes for litigators by methodically reviewing vast data lakes related to previous judgments, reviewing the issues in each related case along with the conclusions the judge reached. This paper therefore discusses proposals from both the UK and EU for regulating the use of AI, including the EU's draft Artificial Intelligence Act (AIA) and AI Liability Directive (AILD), guidance from the UK's Information Commissioner's Office (ICO), as well as recommendations from organisations such as the Organisation for Economic Co-operation and Development (OECD). The development of common AI definitions, technical standards and related tools can assist in the requirement for international harmonisation through other mechanisms, as well as judicial awareness of the impending issue. The key areas of research will focus on the following: proposed legislation, existing legislation, journals and books. Case law will also be reviewed to ascertain any awareness from the judiciary as to the complexities related to AI.
Keywords: artificial intelligence; risk assessment; UK General Data Protection Regulation (GDPR); liability; tort -
A reflection on the UAE's new data protection law: A comparative approach with GDPR
Laroussi Chemlali, Ajman University, Leila Benseddik, Canadian University Dubai, and Abdesselam Salmi, Ajman University
On 2nd January, 2022, the United Arab Emirates' (UAE) new Federal Decree-Law on the Protection of Personal Data came into force, becoming the first federal-level law to address the processing of personal data. This law, which is largely influenced by major international privacy and data protection legislation, in particular the European General Data Protection Regulation (GDPR), is intended to align UAE data protection standards with global standards and principles, and follows the recent trend of privacy and data protection legislation in the Gulf Cooperation Council region. This paper follows a comparative approach, highlighting the key aspects of this law through the lens of the GDPR, in an attempt to provide an overview of the requirements that should be taken into consideration by companies operating or wishing to establish themselves in the UAE.
Keywords: data protection; data protection law; data processing; General Data Protection Regulation (GDPR); privacy; United Arab Emirates (UAE) -
Japan's PrivacyMark system as a good illustration of certification mechanisms
Masao Horibe, Hitotsubashi University
Privacy and data protection certification mechanisms have increasingly been attracting great interest. Japan was one of the first countries in the world to introduce privacy and data protection certification mechanisms and privacy and data protection seals and marks. At the prefectural level, this began in 1990, and at the national level, in 1998. JIPDEC (then the Japan Information Processing and Development Corporation) launched the PrivacyMark system in April 1998. Applications from private enterprises are assessed by JIPDEC or one of 19 designated assessment bodies. There were 1,380 registered assessors (391 lead assessors, 282 assessors and 707 provisional assessors) as at 1st April, 2022. There are three assessor training bodies. The number of registered entities has been increasing year by year, and as at 10th May, 2023, the number of PrivacyMark Entities is 17,447.
Keywords: privacy; data protection; PrivacyMark; JIS Q 15001; JIPDEC; granting body; designated assessment bodies; assessors -
Pilot project lighthouse: A proposed GDPR compliant methodology for analysing special categories of personal data
Collin R. Walke, Hall Estill
The General Data Protection Regulation (GDPR) is designed, in part, to prevent discrimination in algorithmic decision making. However, the GDPR's requirements, as well as EU member states' implementing laws, often make testing for bias using special categories of data, such as race, either impractical or impossible. This paper argues that the pilot project lighthouse methodology is a GDPR compliant method for bias-testing special categories of data in algorithms. This paper finds that the pilot project lighthouse methodology is permissible in the majority of EU member states and argues that to the extent pilot project lighthouse methodology would be prohibited by either the GDPR or an individual member state's implementing laws, the same are contrary to the letter and intent of the GDPR.
Keywords: General Data Protection Regulation (GDPR); algorithmic bias; EU; special categories of data -
What is left of consent when it is deemed consent: A data protection experiment in India
Indranath Gupta and Paarth Naithani, Jindal Global Law School
Recently, the latest draft data protection legislation in India, the Digital Personal Data Protection Bill, 2022, introduced the concept of deemed consent. Among other situations, consent can be deemed to be given through voluntary participation rather than an express statement. This paper positions deemed consent by situating it in recent discussions around consent. Deemed consent, as it stands, sits uncomfortably within the data protection rubric. The paper suggests that the proposed structure of deemed consent in India needs alteration and may be adequately amended with effective learning emanating from jurisdictions like the UK, Canada and Singapore.
Keywords: deemed consent; consent; data protection; India; Digital Personal Data Protection Bill; Personal Data Protection Bill -
Ethics is nothing other than reverence for life . . . and data
Sascha Francis Schneider, Alight Solutions
Ethics may be, by far, the most overlooked aspect of data protection programmes, and not because people do not consider ‘being ethical’ important when processing data but because the regulator does not require processing to be ethical and omits it from data protection and privacy regulations, relegating it to non-binding guidance or general recommendations. This is not surprising, considering that ethics is neither actively taught to legal professionals during law school nor explicitly detailed in the applicable codes of conduct. When referring to ethical or moral behaviour in a legal framework, religious morals and ethics should be avoided; the reference should instead be linked to present-day society and the world that surrounds us, taking into consideration the modern practices and technologies that each and every one of us is confronted with on a daily basis, and the fact that this technology will continue to evolve. This paper focuses on just a couple of those modern scenarios where ethics should be a key component of data processing yet is somehow overlooked. When thinking of marketing practices, the days of the salesperson knocking on our doors are long gone, and everything is virtual now. No matter where you go on the Internet, you are confronted with banners and pop-ups, never-ending sections which ask you about your date of birth, your address, or the last time you enjoyed a hot coffee. Sometimes one does not even realise how much data is requested or how one can suddenly feel bad about not providing certain information. Whereas the salesperson knocking on your door talked you into buying stuff you did not need, marketing these days has taken another approach to selling you something, which consists of collecting considerable amounts of data from individuals, exploiting social engineering procedures and thereby manipulating the individual's actual will. In EU legislation, under the GDPR, ‘consent’ must be ‘freely given, specific, informed and unambiguous’, but it is not established how this consent may be induced or collected, leading to obscure practices that go against that ‘free’ will and are commonly known as ‘dark patterns’. Legislatively speaking, ethics is not widely considered in the general protection of data, but there is one area in which ethics is not only considered but made a key component and a fundamental aspect: artificial intelligence (AI). Certain principles are expected to be embedded in any AI system for it to be deemed trustworthy, and from the field of AI and its focus on ethics it is possible to learn how to improve not only automated interactions with a machine but also how to protect data in general.
Keywords: ethics; consent; GDPR; dark patterns; AI -
Book reviews
Handbook on Crime and Technology
Don Hummer and James M. Byrne (eds) -
California Privacy Law: Practical Guide and Commentary: US Federal and California Law, Fifth Edition
Lothar Determann -
The Fight for Privacy: Protecting Dignity, Identity and Love in the Digital Age
Danielle Keats Citron -
Regulating Social Network Sites: Data Protection, Copyright and Power
Asma Vranaki -
Reviewed by Ardi Kolah, Founding Editor-in-Chief, Journal of Data Protection & Privacy