DPO Newsletter: March 2025

Click here to download our newsletter.

 

IN BRIEF:

 

  • SANCTIONS – 2024 review of the sanctions and corrective measures imposed by the CNIL, and a company sanctioned for excessive surveillance of its employees.
  • ARTIFICIAL INTELLIGENCE – Clarification of the definition of AI systems by the European Commission and new recommendations from the CNIL to support responsible AI.
  • ANONYMIZATION/PSEUDONYMIZATION – A search engine put on notice by the CNIL and publication of guidelines by the European Data Protection Board.
  • RIGHT OF ACCESS – European coordinated action identifies gaps in the implementation of the right of access.
  • TRANSFERS OUTSIDE THE EUROPEAN UNION – Publication of the CNIL's guide on data transfer impact assessments.

 

I. SANCTIONS TO REMEMBER

 

a. 2024 report on the CNIL’s sanctions

 

In 2024, the Commission Nationale de l'Informatique et des Libertés ("CNIL") (France) issued 87 sanctions, including 69 under the simplified procedure (here). This significant increase compared to 2023 (42 sanctions) and 2022 (21 sanctions) is explained by the increasingly frequent use of the simplified procedure (almost three times more often than in 2023).

 

Under its ordinary procedure, the CNIL sanctioned companies in particular for:

  • Commercial prospecting: in particular, failure to collect individuals' prior consent before sending commercial communications.
  • Health data processing: in particular with regard to anonymisation (e.g. clarification of the qualification of data processed in health data warehouses).

 

Under its simplified procedure, the CNIL sanctioned in particular (i) failures to cooperate with the CNIL, (ii) failures to comply with the exercise of rights, (iii) failures to minimise data, (iv) breaches of personal data security, and (v) breaches of the rules on cookies.

 

b. Excessive surveillance of employees: €40,000 fine for a company in the real estate sector

 

The CNIL, by deliberation SAN-2024-021 of December 19, 2024 (here), imposed a fine of €40,000 on a company in the real estate sector for excessive surveillance of its employees, carried out through working-time and performance monitoring software and a continuous video surveillance system covering employees' work and break areas. The CNIL identified several shortcomings, in particular:

 

  • Excessive surveillance: (i) the continuous recording of employees' images and sounds is contrary to the principle of data minimization (Article 5 of the GDPR); and (ii) there was no legal basis for implementing the endpoint monitoring software (Article 6 of the GDPR).

  • Lack of information: oral information about the implementation of the monitoring software does not meet the requirement of accessibility over time and, in the absence of a written record, its completeness cannot be established (Articles 12 and 13 of the GDPR).

  • Lack of security measures: the CNIL recalled the reinforced requirement for individualized access to administrator accounts, which carry very extensive rights over personal data; here, several employees shared the same access to data from the surveillance software (Article 32 of the GDPR).

  • Lack of impact assessment (DPIA): the systematic monitoring of employees at their workstations required a DPIA to be formalized (Article 35 of the GDPR).

 

II. TOWARDS RESPONSIBLE AI

 

a. Definition of AI systems: the European Commission's new guidelines

 

On 6 February 2025, the European Commission adopted guidelines on the definition of artificial intelligence ("AI") systems to help stakeholders determine whether a software system qualifies as an AI system. Note that these guidelines do not address general-purpose AI models. The Commission identified and clarified the seven elements that make up the definition of "AI system" introduced in Article 3(1) of Regulation (EU) 2024/1689 on AI:

 

  • “Machine-based system”: AI systems must be computationally driven and based on machine operations.

  • “that is designed to operate at varying levels of autonomy”: the capacity of systems to infer is key to ensuring their autonomy: an AI system must operate with a certain reasonable degree of independence of action (which excludes systems requiring full manual human involvement and intervention).

  • “and that may exhibit adaptiveness after deployment”: adaptiveness (the system's self-learning capacity) is an optional, non-decisive condition.

  • “and that, for explicit or implicit objectives”: explicit (encoded) or implicit (inferred from behavior or assumptions) objectives are internal and refer to the goals and results of the tasks to be performed. They form part of the broader notion of the “purpose” of the AI system, which corresponds to the context in which it is designed and how it must be operated.

  • “infers, from the input it receives, how to generate outputs”: this notion refers to the building phase of the AI system and is therefore broader than the mere use phase. The Commission distinguishes AI systems from other forms of software that have only a limited capacity to analyse patterns and autonomously adjust their output.

  • “such as predictions, content, recommendations, or decisions”: AI systems are distinguished by their ability to generate nuanced results, leveraging complex models or expertly defined rules. The Commission details each of these terms.

  • “that can influence physical or virtual environments”: AI systems are not passive; they actively impact the environments in which they are deployed.

 

 

b. The CNIL’s new recommendations for responsible AI

 

On February 7, 2025, the CNIL published new recommendations to support the development of responsible AI in compliance with the GDPR (here). They cover both informing individuals and the exercise of their rights:

 

  • Information: the data controller must inform individuals when their personal data is used to train an AI model. This information can be adapted to the risks to individuals and to operational constraints, and may therefore sometimes be limited to general information (when individuals cannot be contacted individually) and/or global information (when many sources are used, for example by indicating only categories of sources).
  • Rights of individuals: the CNIL invites stakeholders to take privacy into account from the design stage of the model (e.g. anonymization strategy, non-disclosure of confidential data). Exercising rights in the context of AI models can be difficult, and refusing a rights request can sometimes be justified. Where these rights must be guaranteed, the CNIL will take into account the reasonable solutions available and may adjust the applicable time limits.

 

III. ANONYMIZATION AND PSEUDONYMIZATION UNDER DEBATE

 

a. The new EDPB Guidelines on pseudonymisation

 

On 16 January 2025, the European Data Protection Board (EDPB) adopted new guidelines 01/2025 on pseudonymisation, which are subject to public consultation until 14 March 2025.

 

Pseudonymisation means that personal data can no longer be attributed to a specific data subject without the use of additional information (Article 4(5) GDPR). Pseudonymised data remains personal data because a risk of re-identification of the data subjects persists.

 

The EDPB states that pseudonymisation can (i) facilitate reliance on the legal basis of legitimate interest, provided that all other requirements of the GDPR are met, (ii) help ensure compatibility with the original purpose in the context of further processing, and (iii) help organisations comply with their obligations relating to the GDPR principles, data protection by design and by default, and security.

 

The EDPB also analyses a set of robust technical measures to prevent unauthorised re-identification. Recommended techniques include hashing with a secret key or a salt, separation of the information needed for attribution, and strict access control.
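By way of illustration only (this sketch is ours, not part of the EDPB guidelines), keyed hashing produces a stable pseudonym that cannot feasibly be reversed or recomputed without the secret key, which must be stored separately from the pseudonymised dataset and under strict access control:

```python
import hashlib
import hmac
import secrets

def pseudonymise(identifier: str, key: bytes) -> str:
    """Derive a stable pseudonym from an identifier using a keyed hash (HMAC-SHA256)."""
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# The secret key is the "additional information" within the meaning of Article 4(5)
# GDPR: it must be kept separately from the pseudonymised data, under access control.
key = secrets.token_bytes(32)

print(pseudonymise("jane.doe@example.com", key))  # same key + input -> same pseudonym
```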

 

Note that these guidelines should be read in light of Case C-413/23 pending before the Court of Justice of the European Union between the European Data Protection Supervisor and the Single Resolution Board (SRB). In that case, pseudonymised data was transferred by the SRB to Deloitte for the purposes of an analysis assignment. In his Opinion of 6 February 2025, the Advocate General invites the Court to rule on whether the recipient of pseudonymised data who does not have reasonable means to re-identify the data subjects could be considered not to be processing personal data insofar as the risk of identification is ‘non-existent or insignificant’.

 

IV. SPOTLIGHT ON THE RIGHT OF ACCESS

 

The CNIL and the European Data Protection Supervisor took part in a coordinated enforcement action of the European Data Protection Board to evaluate the implementation of the right of access to personal data.

 

During 2024, the CNIL inspected public and private bodies, chosen on the basis of complaints received, and issued several reminders of their legal obligations. It notes that the organizational measures implemented by these organizations to handle right-of-access requests are sometimes insufficient. Organizations should (i) provide information about the processing, (ii) include a copy of the data processed, and (iii) not systematically exclude certain processing operations or categories of personal data from their responses.

 

The EDPS monitored the handling of right-of-access requests by the EU institutions, bodies, offices and agencies and highlighted in its report of 16 January 2025: (i) the low volume of requests, (ii) the decentralised management of requests, (iii) the difficulty of distinguishing access requests from other types of requests, (iv) the excessive processing of data caused by the verification of applicants' identity, and (v) the difficulty of reconciling the protection of the rights and freedoms of others with individuals' right of access. The EDPS invites controllers and processors to refer to Guidelines 01/2022 on the right of access of data subjects.

 

V. IMPACT ASSESSMENTS OF DATA TRANSFERS

 

On January 31, 2025, the CNIL published the final version of its guide on data transfer impact assessments (AITD) (here) to help data exporters assess the level of protection in destination countries located outside the European Economic Area and the need for additional safeguards. This analysis is required when the transfer is based on one of the tools of Article 46 of the GDPR (standard contractual clauses, binding corporate rules, etc.), i.e. when the destination country does not benefit from an adequacy decision and the transfer is not carried out under one of the derogations of Article 49 of the GDPR.

 

The guide proposes a six-step methodology:

  • Identify the data concerned and the actors involved;
  • Choose the appropriate transfer tool;
  • Analyze risks related to the laws and practices of the third country;
  • Determine appropriate additional measures (e.g. encryption or anonymization; an illustrative sketch follows this list);
  • Implement these additional measures and any necessary procedural steps;
  • Reassess the compliance of the transfer at appropriate intervals.
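
To illustrate the "additional measures" step (a sketch of ours, not taken from the CNIL guide), a classic supplementary measure is to encrypt the data before export while the decryption key remains with the exporter inside the EEA, for example using the Fernet recipe from the Python cryptography library:

```python
# pip install cryptography
from cryptography.fernet import Fernet

# The key is generated and retained by the exporter within the EEA; only the
# ciphertext leaves the EEA, so the importer cannot read the data on its own.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b'{"name": "Jane Doe", "email": "jane.doe@example.com"}'
ciphertext = cipher.encrypt(record)  # this is what is transferred abroad

# Decryption remains possible only where the key is held:
assert cipher.decrypt(ciphertext) == record
```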

 

This publication follows a public consultation that allowed the CNIL to adapt its guide to companies' practical realities and to take into account the latest opinions of the European Data Protection Board.

 

DPO Newsletter: February 2025

🚨 DPO Newsletter: What You Need to Know! 🔒

 

 

🔥 Our latest issue is out, covering key decisions, upcoming regulations, and major trends to watch:

 

 

In this edition:

🚫 Record-breaking fines – Orange (€50M), Meta (€251M), and OpenAI (€15M) hit with major sanctions.
📉 Data Transfers outside the EU – The EU General Court condemns the European Commission for unlawful data transfers to the U.S.
📢 GDPR Certification for Processors – The CNIL opens a public consultation. Be ready for what’s next!
⚠️ Deceptive Cookie Banners – Time’s up for several website publishers ordered to comply.
🤖 Responsible AI – The EDPB sets the tone for AI development within GDPR rules.
📊 2025-2028 Strategic Focus – CNIL’s roadmap to secure the digital future.

 

 

👉 Stay sharp and anticipate the impact on your business!

 

 

Should you have any questions, do not hesitate to contact us: contact@joffeassocies.com

 

 

DPO Newsletter: October 2024

🚨 New DPO Newsletter Alert! 🚨

Our latest issue is out, covering key GDPR updates and regulatory changes. 🏛️📜

Highlights:
📈 CNIL’s rise in simplified sanctions: 28 cases in 9 months
🏥 €800,000 fine for health data breach by CEGEDIM SANTE
🧑‍⚖️ CJEU rulings on GDPR enforcement
📜 EDPB guidelines on cookies, legitimate interest, and subcontracting

 

DPO NEWSLETTER: AN UPDATE FROM THE IT-DATA TEAM

Download the newsletter here

 

1) CNIL SANCTION: COMPANY SAF LOGISTICS FINED 200,000 EUROS

On 18 September 2023, the Commission Nationale de l’Informatique et des Libertés (CNIL) fined the Chinese air freight company SAF LOGISTICS €200,000 and published the penalty on its website.

The severity of this penalty is justified by the seriousness of the breaches committed by the company:

 

  • Failure to comply with the principle of minimisation (Article 5(1)(c) of the GDPR): the data controller must only collect data that is necessary for the purpose of the processing. In this case, the company was collecting personal data on its employees' family members (identity, contact details, job title, employer and marital status), which had no apparent use.

 

  • Unlawful collection of sensitive data (Article 9 of the GDPR) and of data relating to offences, convictions and security measures (Article 10): in this case, employees were asked to provide sensitive data such as blood group, ethnicity and political affiliation. As a matter of principle, the collection of sensitive data is prohibited. By way of exception, it is permitted if it is legitimate with regard to the purpose of the processing and the data controller has an appropriate legal basis, which was not the case here. Furthermore, SAF LOGISTICS collected and kept extracts from the criminal records of employees working in air freight who had already been cleared by the competent authorities following an administrative enquiry; such collection therefore did not appear necessary.

 

  • Failure to cooperate with the supervisory authority (Article 31 of the GDPR): the CNIL also considered that the company had deliberately attempted to obstruct the inspection procedure. SAF LOGISTICS had only partially translated a form written in Chinese, omitting the fields relating to ethnicity and political affiliation. Note that a lack of cooperation is an aggravating factor in determining the amount of the penalty imposed by the supervisory authority.

 

2) THE CONTROLLER AND THE PROCESSOR ARE BOTH LIABLE IN THE EVENT OF FAILURE TO CONCLUDE A DATA PROCESSING AGREEMENT

 

On 29 September 2023, the Belgian Data Protection Authority (the "APD") issued a decision shedding interesting light on the controller's and processor's obligations and on the late correction of GDPR breaches. In this regard, the APD stated that:

 

  • Both the controller and the processor breached Article 28 of the GDPR by failing to enter into a data processing agreement (DPA) at the outset of the processing. The obligation to conclude a contract, or to be bound by another binding legal act, falls on the controller and the processor alike, not on the controller alone.
  • A retroactivity clause in the DPA does not compensate for the absence of a contract at the relevant time: only the date of signature of the DPA should be taken into account to assess the compliance of the processing concerned. The APD pointed out that allowing such retroactivity would let companies evade the temporal application of the obligation set out in Article 28(3) of the GDPR, whereas the GDPR itself provided for a two-year period between its entry into force and its entry into application precisely so that all entities concerned could achieve compliance gradually while guaranteeing the protection of data subjects' rights.

 

3) A NEW COMPLAINT HAS BEEN LODGED AGAINST THE OPENAI START-UP BEHIND THE CHATGPT GENERATIVE ARTIFICIAL INTELLIGENCE SYSTEM

The Polish Data Protection Office has opened an investigation following a complaint filed by Polish researcher Lukasz Olejnik against the start-up OpenAI in September 2023. The complaint alleges numerous failures of the chatbot to comply with the General Data Protection Regulation (GDPR).

 

Breaches of the GDPR raised by the complaint

 

The complaint identifies numerous breaches of the GDPR, including a violation of the following articles:

 

  • Article 5 on the principles of fair processing, purpose limitation and data accuracy;
  • Article 6 on the legal basis for processing;
  • Articles 12 and 14 on information for data subjects;
  • Article 15 on the data subject’s right of access to information on the processing of his or her data;
  • Article 16 on the right of data subjects to rectify inaccurate personal data.

 

According to the complainant, the legitimate interests pursued by OpenAI hardly seem to outweigh the invasion of users' privacy.

 

Repeated complaints against OpenAI

This is not the first time that ChatGPT has been the target of such accusations since it went online. Eight complaints have been lodged worldwide this year for breaches of personal data protection. These include:

 

  • The absence of consent from individuals to the processing of their data
  • Inaccurate data processing
  • No filter to check the age of individuals
  • Failure to respect the right to object.

 

The “scraping” technique used by this artificial intelligence (a technique that automatically extracts large amounts of information from one or more websites) was addressed by the CNIL back in 2020 in a series of recommendations aimed at regulating the practice in the context of commercial canvassing. The CNIL's inspections in this area identified a number of breaches of data protection legislation, including:

 

  • Failure to inform those targeted by canvassing;
  • The absence of consent from individuals prior to canvassing;
  • Failure to respect their right to object.

 

Towards better regulation of artificial intelligence?

In April 2021, the European Commission put forward a proposal for a regulation laying down new measures to ensure that artificial intelligence systems used in the European Union are safe, transparent, ethical and under human control. The proposal classifies systems according to their characteristics and purposes as presenting an unacceptable, high, limited or minimal risk.

Pending the entry into force of this regulation, the CNIL is working to provide concrete answers to the issues raised by artificial intelligence. To this end, in May 2023 it deployed an action plan intended to pave the way for a regulatory framework enabling the operational deployment of AI systems that respect personal data.

 


4) TRANSFER OF DATA TO THE UNITED STATES

On 10 July 2023, the European Commission adopted a new adequacy decision allowing transatlantic data transfers, known as the Data Privacy Framework (DPF).

Since 10 July, companies subject to the GDPR have therefore been able to transfer personal data to US companies certified under the DPF without recourse to the European Commission's standard contractual clauses and additional measures.

It should be noted that the United Kingdom has also signed an agreement with the United States on data transfers, which will come into force on 12 October 2023.

As a reminder, on 16 July 2020, the Court of Justice of the European Union (CJEU) invalidated the Privacy Shield, the previous adequacy decision allowing the transfer of personal data to the United States.

 

1) The content of the Data Privacy Framework

The decision of 10 July 2023 formalises a number of binding guarantees in an attempt to remedy the weaknesses of the Privacy Shield, which was invalidated two years earlier.

 

a) The new obligations

In order to benefit from this new framework and receive personal data from European residents, American companies will have to:

 

  • Declare their adherence to the DPF's personal data protection principles (data minimisation, retention periods, security, etc.);
  • Provide certain mandatory information: the name of the organisation concerned, a description of the purposes for which the transfer of personal data is necessary, the personal data covered by the certification and the verification method chosen;
  • Formalise a privacy policy in line with the DPF principles and specify the relevant independent recourse mechanism available to data subjects, as well as the body responsible for ensuring compliance with these principles.

 

On 17 July 2023, the US Department of Commerce launched the Data Privacy Framework website, offering companies a one-stop shop for signing up to the DPF and listing the companies that have signed up.

Participating US companies must conduct annual self-assessments to demonstrate their compliance with the DPF requirements. In the event of a breach of these principles, the US Department of Commerce may impose sanctions.

It should be noted that companies already affiliated to the Privacy Shield are automatically affiliated to the DPF provided that they update their privacy policy before 10 October 2023.

 

b) The creation of a Data Protection Review Court

The DPF is innovative in that it establishes a Data Protection Review Court (DPRC) to provide EU residents with easier, impartial and independent access to remedies, and to ensure that breaches of the rules under the EU-US framework are dealt with effectively. The Court has investigative powers and can order binding corrective measures, such as the deletion of illegally imported data.

 

c) A new appeal mechanism for EU nationals

The planned appeal mechanism will operate at two levels:

 

  • Initially, the complaint will be lodged with the competent national authority (for example, the CNIL in France). This authority will be the complainant's point of contact and will provide all information relating to the procedure. The complaint is forwarded to the United States via the European Data Protection Board (EDPB), where it is examined by the Civil Liberties Protection Officer, who decides whether or not there has been a breach.
  • The complainant may appeal against the decision of the Civil Liberties Protection Officer to the DPRC. In each case, the DPRC will appoint a special advocate with the necessary experience to assist the complainant.

 

Other remedies such as arbitration are also available.

 

2) Future developments: new legal battles?

This new legal framework will be subject to periodic reviews, the first of which is scheduled for the year following the entry into force of the adequacy decision. These reviews will be carried out by the European Commission, the relevant American authorities (U.S. Department of Commerce, Federal Trade Commission and U.S. Department of Transportation) and by various representatives of the European data protection authorities.

Despite the introduction of these new safeguards, legal challenges have already begun.

On 6 September 2023, French MP Philippe Latombe (MoDem) lodged two actions with the CJEU seeking the annulment of the DPF.

Max Schrems, president of the Austrian privacy protection association Noyb, which brought the actions against the previous agreements (Safe Harbor and Privacy Shield), is likely to follow suit.

 

5) ISSUES SURROUNDING THE MATERIAL SCOPE OF THE GDPR

A controversial Opinion by an Advocate General concerning the material scope of the GDPR could, if followed by the CJEU, significantly limit the application of the GDPR in many sectors of activity (Case C-115/22).

In this case, the full name of an Austrian sportswoman, who had tested positive for doping, was published on the publicly accessible website of the independent Austrian Anti-Doping Agency (NADA).

The sportswoman asked the Austrian Independent Arbitration Commission (USK) to review this decision. The USK questioned, in particular, the compatibility with the GDPR of publishing on the Internet the personal data of a professional athlete who committed a doping offence, and therefore referred a question to the CJEU for a preliminary ruling.

The Advocate General considers that the GDPR is not applicable in this case insofar as the anti-doping rules essentially regulate the social and educational functions of sport rather than its economic aspects. Indeed, there are currently no rules of EU law relating to Member States' anti-doping policies; in the absence of a link between anti-doping policies and EU law, the GDPR cannot regulate such processing activities.

 

This analysis is based on Article 2(2)(a) of the GDPR, which states:

 

“This Regulation does not apply to the processing of personal data:

(a) in the course of an activity which falls outside the scope of Union law;”.

The scope of the Union's intervention is variable and imprecise, leading to uncertainty as to the GDPR's application in certain sectors.

In the alternative, should the GDPR apply, the Advocate General considers that the Austrian legislature's decision to require the public disclosure of the personal data of professional athletes who violate anti-doping rules is not subject to a proportionality test under the regulation.

However, the Advocate General’s conclusions are not binding on the CJEU. The European Court’s decision is therefore eagerly awaited, as it will clarify the application of the GDPR.

 


[1] In March 2023, the Italian data protection authority (Garante) went so far as to temporarily suspend ChatGPT on its territory because of a suspected breach of European Union data protection rules: OpenAI had failed to implement an age verification system for users. Following this event, on 28 July 2023 a US class action denounced the accessibility of the services to minors under the age of 13, as well as the use of "scraping" methods on platforms such as Instagram, Snapchat and even Microsoft Teams.

[2] Proposal for a Regulation laying down harmonised rules on artificial intelligence.

The “cyber-score” law comes into force: what are the new obligations for platform operators?

In the Senate report of 16 February 2022 on the introduction of cybersecurity certification for digital platforms aimed at the general public, Senator Anne-Catherine Loisier pointed out that, despite a steady increase in cyber attacks[1], companies were not changing their behaviour in the face of the threat[2].

 

In recognition of the fact that cybersecurity is an essential counterpart to the digital economy and, more broadly, to the digitalization of all areas of society, the legislator has imposed new obligations on platform operators.

Act no. 2022-309 of 3 March 2022 for the introduction of cybersecurity certification for digital platforms aimed at the general public (known as the "Cyber-score Act") introduced into the French Consumer Code[3] an obligation to inform consumers about the level of security of platform operators and of the data they host.

This law introduces an obligation for digital operators to inform users of their services about the level of security of their data, which was not provided for in the General Data Protection Regulation (GDPR): the GDPR only requires that personal data security measures be put in place, without informing data subjects of their robustness[4].

The new article L.111-7-3 of the French Consumer Code states that:

 

“Operators of online platforms (…) whose activity exceeds one or more thresholds defined by decree shall carry out a cybersecurity audit, the results of which shall be presented to the consumer (…), covering the security and location of the data they host, directly or via a third party, and their own security (…)”.

The audit referred to in the first paragraph is carried out by audit service providers qualified by the Agence nationale de la sécurité des systèmes d’information.

(…)

The result of the audit is presented to the consumer in a legible, clear and comprehensible manner and is accompanied by a complementary presentation or expression, by means of a colour information system.”

 

The cyber-score law came into force on 1 October 2023.

The implementing decree and the order specifying its application are awaiting publication.

 

1. Who is affected by this communication obligation?

 

The scope of application is particularly broad as it concerns (i) online platform operators as defined in Article L.111-7 of the French Consumer Code and (ii) persons providing number-independent interpersonal communications services whose activity exceeds a certain threshold set by decree. The draft decree provides for a threshold of 25 million unique visitors per month from French territory as from 2024[5]. The legislator's aim is not to penalise very small businesses (VSEs), SMEs or innovative start-ups providing online services.

 

In concrete terms, digital platforms (marketplaces, comparison sites, search engines, social networks, etc.), messaging services and videoconferencing software intended for the general public are subject to the obligation to carry out cyber-security audits and to communicate the results to the public, provided they exceed the threshold of 25 million unique visitors per month from French territory as from 2024.

 

 

2. How does a cyber-security audit work?

 

The operators concerned will have to use an information systems security audit service provider (PASSI) qualified by the French National Agency for Information Systems Security (ANSSI).

The audit will be carried out by the service provider in a non-intrusive manner, on the basis of open and freely accessible information, and will cover the security and location of the data. In this respect, a location within the European Union is a guarantee of data security, both in terms of the application of the GDPR and in terms of digital sovereignty.

However, data location is not the only criterion to be considered. The draft order provides for the following control points[6]:

 

  • Organisation and governance (cyber insurance, security certification, etc.)
  • Data protection (security measures relating to data hosting, exposure of data to extraterritorial legislation, sharing of data with third parties)
  • Knowledge and control of the digital service (mapping of information processed by the digital service and sensitivity, mapping of service providers, existence of network partitioning mechanisms to protect the digital service from a rebound attack on shared environments).
  • Level of outsourcing (location of digital service hosting infrastructures in the EU, etc.)
  • Level of exposure on the Internet (regular security scans, implementation of a solution to protect against denial of service (DDoS), user identification/authentication management, etc.).
  • Security incident handling system
  • Digital service audits (regular security audits before the digital service goes live, bug bounty programmes, etc.)
  • Raising awareness of cyber-risks and the fight against fraud (raising awareness of cyber-security risks, warning users of the risks of scams and fraud, recommended precautions, etc.)
  • Secure development (OWASP rules, etc.)

 

It should be noted that the control points mentioned above must already be considered by businesses as part of their GDPR compliance.

 

We will have to wait for the publication of the order to have an exhaustive list of the cyber-security audit checkpoints.

 

3. How must the cyber-score be displayed?

 

Following the example of the "Nutri-Score", the legislator requires economic operators to publish a "cyber-score" on their website. The draft order states that the marking must be displayed prominently on the home screen, and that the cyber-score audit rating and the date of the audit must appear prominently in the online service's legal notices.

 

[Screenshot from the draft order setting the criteria for the application of Law no. 2022-309 of 3 March 2022 on cybersecurity certification of digital platforms intended for the general public.]

 

The result of any cyber-audit must be clearly displayed and accessible on the operator’s website.

The aim is to enable consumers to be better informed about the protection of their online data.

 

4. What are the penalties for failing to display a cyber-score?

 

In the event of failure to comply with this obligation, and in accordance with Article L131-4 of the French Consumer Code, the operator is liable to an administrative fine imposed by the DGCCRF of up to €75,000 for an individual and €375,000 for a legal entity.

In addition, a low cyber-score will inevitably damage the image of the operator concerned and reduce the confidence of users of its site.

 

***

 

In this context, it is essential for the companies concerned to put in place the appropriate technical and organisational security measures now.

The IT/Data department at Joffe & Associés can help you ensure that your platforms are compliant (GDPR compliance, securing relations with third parties, cyber-security awareness, etc.).


 

[1] According to the report, 54% of businesses said they had suffered at least one cyber-attack in 2021, and 30% of cyber-attacks led to the theft of personal, strategic or technical data.

[2] Senate report no. 503, p. 6: https://www.senat.fr/rap/l21-503/l21-5031.pdf

[3] Article L.111-7-3 of the Consumer Code

[4] Article 32 of the GDPR

[5] https://www.entreprises.gouv.fr/files/files/secteurs-d-activite/numerique/ressources/consultations/projet-decret-cyberscore.pdf

[6] https://www.entreprises.gouv.fr/files/files/secteurs-d-activite/numerique/ressources/consultations/projet-arrete-cyberscore.pdf

DIGITAL FINANCE: FINANCIAL SECTOR PLAYERS MUST ANTICIPATE THE NEW DORA REGULATION NOW

European regulation no. 2022/2554 on Digital Operational Resilience for the financial sector (“DORA“) was adopted on December 14, 2022 and will apply from January 17, 2025.

 

The aim of this regulation is to reinforce the technological security and smooth operation of the financial sector. It lays down security requirements so that financial services can withstand and recover from disruptions and threats linked to information and communication technologies (“ICT“) throughout the European Union.

It applies to a wide range of players in the financial sector and their technology partners, including credit institutions, investment firms, payment institutions, asset management companies, insurance companies and third-party ICT service providers operating in the financial services sector.

 

The DORA regulation is structured around five chapters, which lay down a set of rules with a major impact on internal security procedures and the contractual relations of players in the financial sector.

 

The main measures are as follows:

 

1° ICT risk management

 

The DORA regulation requires the adoption of internal governance and control frameworks to ensure effective and prudent management of all ICT risks.

Financial entities will also need to put in place an ICT risk management framework tailored to their activities, enabling them to deal with ICT risks quickly and efficiently.

 

As a preventive measure, they must:

 

  • Use and maintain appropriate, reliable and technologically resilient ICT systems, protocols and tools;
  • Identify all forms of ICT risk;
  • Ensure permanent monitoring and control of the operation of ICT systems and tools;
  • Implement mechanisms to detect abnormal activity;
  • Define continuous improvement processes and measures, a business continuity policy, a backup policy, and restoration and recovery procedures and methods.

 

The companies concerned will need to have the capacity and manpower to gather information on vulnerabilities, cyber threats and ICT-related incidents. As part of this, they will have to carry out post-incident reviews following major incidents that have disrupted their core activities.

 

The new regulations also require the formalization of crisis communication plans to promote responsible disclosure of major ICT-related incidents.

 

It should be noted that the regulation provides a simplified ICT risk management framework for certain small players, such as small and non-interconnected investment firms.

 

2° ICT-related incident reporting

 

Financial entities are required to formalize and implement an ICT-related incident management process for the management, classification and reporting of incidents. The DORA regulation introduces a standard methodology for classifying security incidents according to specific criteria (duration of the incident, criticality of services affected, number of clients or financial counterparts affected, etc.).
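
To illustrate the idea of criteria-based classification (the thresholds below are purely hypothetical; the actual criteria and values are set by DORA and its technical standards), a minimal sketch:

```python
from dataclasses import dataclass

# Hypothetical thresholds, for illustration only; DORA's technical standards
# define the real classification criteria and their values.
MAJOR_DURATION_HOURS = 24
MAJOR_CLIENTS_AFFECTED = 10_000

@dataclass
class IctIncident:
    duration_hours: float
    critical_services_affected: bool
    clients_affected: int

def is_major(incident: IctIncident) -> bool:
    """Classify an ICT incident as 'major' if any illustrative threshold is met."""
    return (
        incident.duration_hours >= MAJOR_DURATION_HOURS
        or incident.critical_services_affected
        or incident.clients_affected >= MAJOR_CLIENTS_AFFECTED
    )

print(is_major(IctIncident(2.0, True, 500)))  # True: a critical service was affected
```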

 

Financial entities will be obliged to report ICT-related incidents classified as major to competent national authorities designated according to the type of financial entity (notably the ACPR and AMF in France). These notifications will have to be made within deadlines subsequently set by the European supervisory authorities.

 

In the event of a “major” incident affecting the financial interests of clients, financial entities will also have to inform the latter, as soon as they become aware of the incident, of the measures taken to mitigate its effects.

 

3° Digital operational resilience testing

 

In order to assess their preparedness in the event of ICT-related incidents, and to implement corrective measures where necessary, financial sector players will need to formalize a robust digital operational resilience testing program, comprising a series of assessments, tests, methodologies, practices and tools to be applied.

 

Every three years, they will also have to carry out threat-based penetration tests, performed by independent, certified testers.

 

4° Managing ICT third-party risks

 

The DORA regulation introduces general principles to be respected by financial entities in their relations with ICT third-party service providers.

 

They will need to adopt a third-party risk strategy, and keep a record of information relating to all contractual agreements concerning the use of ICT services provided by ICT third-party service providers.

 

At least once a year, financial entities must provide the competent authorities with information on new agreements relating to the use of ICT services, and must inform them of any draft contractual agreements concerning the use of such services supporting critical functions.

The regulation also requires financial entities to enter into contracts with ICT third-party service providers only if the latter meet appropriate information security standards.

 

The rights and obligations between financial entities and ICT third-party service providers must be defined in a written contract, which must include the following conditions:

 

  • A clear and exhaustive description of the services provided;
  • Where the ICT services will be provided and what data will be processed;
  • Provisions on the accessibility, availability, integrity, security and protection of personal data;
  • Service level descriptions;
  • The obligation for the ICT third-party service providers to provide the financial entity with assistance in the event of an ICT incident, at no extra cost or at a cost determined ex ante;
  • The ICT third-party service provider's obligation to cooperate fully with the competent authorities;
  • Right of termination and minimum notice period.

 

Where ICT third-party service providers supply ICT services supporting critical or important functions, contracts will need to define additional conditions including:

 

  • The provider’s obligation to cooperate in threat-based penetration testing;
  • The obligation for the service provider to implement contingency plans and put in place security measures providing an appropriate level of security;
  • Unlimited rights of access, inspection and audit by the financial entity;
  • Exit strategies, such as setting an appropriate mandatory transition period.

 

In addition, the regulation introduces an oversight framework for critical ICT third-party service providers, designated on the basis of a series of criteria (systemic effect on service provision in the event of failure, systemic importance of the financial entities dependent on the provider, degree of substitutability of the provider, etc.). These critical providers will be monitored against a set of requirements: security requirements, risk management processes, availability, continuity, governance arrangements, etc.

 

These service providers will be assessed by the supervisory bodies, which will have the power to request information, carry out general inspections and on-site checks, and make recommendations.

 

5° Information-sharing

 

The DORA regulation introduces guidelines for the exchange of information between financial entities on cyber threats. These exchanges should aim in particular to improve the digital operational resilience of financial entities, and must be carried out in full respect of business confidentiality. In addition, financial entities will be required to notify the competent authorities of their participation in such information-sharing arrangements.

 

Lastly, the regulation provides for the various competent authorities to have powers of supervision, investigation and sanction in the event of non-compliance with its provisions.

 

The Member States will be responsible for laying down the rules providing for administrative sanctions and appropriate remedies in the event of a breach of the DORA regulation, and for ensuring their effective implementation. It should be noted that, unlike the GDPR, the DORA regulation does not provide for a ceiling in the event of a pecuniary penalty but requires that penalties be “effective, proportionate and dissuasive“.

 

Our IT-Digital and Data team at Joffe & Associés is at your disposal to support you in your compliance process in order to best anticipate the implementation of this regulation, particularly when negotiating contracts with ICT service providers but also to audit current contracts. Note that the DORA regulation has a broader scope than the French decree of November 3, 2014.

ADOPTION OF THE ARTIFICIAL INTELLIGENCE ACT BY THE EUROPEAN PARLIAMENT: WHAT DOES IT MEAN?

On Wednesday 14 June 2023, the European Parliament adopted its position on the Artificial Intelligence Act ("AI Act"), a regulation governing the development and use of artificial intelligence (AI) within the European Union. The text, said to hold the record for legislative amendments, is now being discussed by the Member States in the Council, with the aim of reaching an agreement by the end of the year.

 

While the date on which the AI Act will come into force remains uncertain, companies involved in the AI sector have every interest in anticipating this future regulation.

 

What are the main measures?

 

Objectives

 

The regulation harmonises Member States’ legislation on AI systems, thereby providing legal certainty that is conducive to innovation and investment in this field. The text is intended to be protective but balanced, so as not to hinder the development of the innovation needed to meet the challenges of the future (the fight against climate change, the environment, health).

 

Like the General Data Protection Regulation (GDPR), which follows the same logic throughout its articles, the AI Act sets itself up as a global benchmark.

 

The scope of application is deliberately broad in order to avoid any circumvention of the regulations. It applies both to AI suppliers (who develop or have developed an AI system with a view to placing it on the market or putting it into service under their own name or brand) and to users (who use an AI system under their own authority, except where the system is used in the context of a personal non-professional activity).

 

In practical terms, it applies to:

  • suppliers, established in the EU or in a third country, who place AI systems on the market or put them into service in the EU;
  • users of AI systems located in the EU;
  • suppliers and users of AI systems located in a third country, where the results generated by the system are used in the EU.

 

A risk-based approach

 

Artificial intelligence is defined as the ability to generate results such as content, predictions, recommendations or decisions that influence the environment with which the system interacts, whether in a physical or digital dimension. The regulation adopts a risk-based approach and introduces a distinction between uses of AI that create an unacceptable risk, a high risk, and a low or minimal risk.

 

 

Regarding high-risk AI systems:

 

The following minimum requirements must be met:

 

  • Establish a risk management system: a continuous iterative process running throughout the life cycle of a high-risk AI system, which must be periodically and methodically updated.
  • Ensure the quality of datasets: training, validation and test datasets will have to meet quality criteria, in particular being relevant, representative, error-free and complete, notably in order to avoid “algorithmic discrimination”.
  • Formalise technical documentation: technical documentation containing all the information needed to assess the conformity of a high-risk AI system must be drawn up and kept up to date before the system is placed on the market or put into service.
  • Provide for traceability: the design and development of high-risk AI systems should include features for the automatic recording of events (“logs”) during the operation of these systems.
  • Provide transparent information: high-risk AI systems must be accompanied by a user manual containing information on the characteristics of the AI (identity and contact details of the supplier, characteristics, capabilities and performance limits of the AI system, human oversight measures, etc.) that is accessible and understandable to users.
  • Provide for human oversight: effective oversight by natural persons must be ensured throughout the period of use of the AI system.
  • Ensure system security: high-risk AI systems must be designed and developed to achieve an appropriate level of accuracy, robustness and cybersecurity, and to perform consistently in those respects throughout their life cycle.

 

All players in the supply chain – suppliers, importers and distributors alike – are subject to these obligations, so everyone will have to assume their responsibilities and be even more vigilant.

 

In particular, suppliers must:

  • demonstrate compliance with the above minimum requirements by maintaining technical documentation;
  • subject their AI systems to a conformity assessment procedure before they are placed on the market or put into service;
  • take the necessary corrective measures to bring the AI system into compliance, withdraw it or recall it;
  • cooperate with national authorities;
  • notify the supervisory authorities of the Member State where the incident occurred of any serious incidents or malfunctions involving a high-risk AI system placed on the market, no later than 15 days after becoming aware of them.

It should be noted that these obligations also apply to the manufacturer of a product that incorporates a high-risk AI system.

 

  • The importer of a high-risk AI system will have to ensure that the supplier of this AI system has followed the appropriate conformity assessment procedure, that the technical documentation is established and that the system bears the required conformity marking and is accompanied by the required documentation and instructions for use.
  • Distributors will also have to check that the high-risk AI system they intend to place on the market bears the required CE conformity marking, that it is accompanied by the required documentation and instructions for use, and that the supplier and importer of the system, as the case may be, have complied with their obligations.

 

Enforcement and governance

 

At national level, the Member States will have to designate one or more competent national authorities, including the national supervisory authority responsible for monitoring the application and implementation of the Regulation.

 

A European Artificial Intelligence Committee (made up of the national supervisory authorities) will be set up to provide advice and assistance to the European Commission, in particular on the consistent application of the Regulation within the EU. The conformity assessment of AI systems will be carried out by notified bodies, designated by the competent national authorities provided that they comply with a set of requirements relating in particular to their independence, competence and absence of conflicts of interest.

 

 

Support for SMEs and start-ups, through AI regulatory sandboxes and other measures to reduce the regulatory burden

 

Regulatory AI sandboxes will provide a controlled environment to facilitate the development, testing and validation of innovative AI systems for a limited time before they are brought to market or commissioned according to a specific plan.

 

Penalties

 

The AI Act provides for three penalty ceilings depending on the nature of the offence:

 

  • Administrative fines of up to €30,000,000 or, if the offender is a company, up to 6% of its total worldwide annual turnover in the previous financial year for:

— non-compliance with the ban on artificial intelligence practices;

— non-compliance of the AI system with the requirements relating to data quality criteria.

  • Failure of the AI system to comply with the requirements or obligations of the other provisions of the AI Act will be subject to an administrative fine of up to €20,000,000 or, if the offender is a company, up to 4% of its total worldwide annual turnover in the previous financial year.
  • Providing incorrect, incomplete or misleading information to notified bodies and competent national authorities in response to a request will be subject to an administrative fine of up to €10,000,000 or, if the offender is a company, up to 2% of its total worldwide annual turnover in the preceding financial year, whichever is greater.

Influencers under regulatory scrutiny

Law no. 2023-451 of June 9, 2023 aimed at regulating commercial influence and combating the abuses of influencers on social networks was published in the Journal Officiel on June 10.

 

New definitions

 

The law now defines the activity of commercial influence, carried out by “natural or legal persons who, for remuneration, communicate content to the public by electronic means with a view to promoting, directly or indirectly, goods, services or any cause whatsoever”, as well as the activity of influencer agent, which consists of “representing or putting in contact, for remuneration” persons engaging in commercial influence.

 

Certain activities prohibited or more tightly supervised, and in all cases an obligation of transparency

While influencers must already comply with existing legal provisions governing advertising practices for product placements, they must also refrain from any direct or indirect promotion of medicinal treatments, cosmetic surgery, alcoholic or nicotine-containing products, certain financial products and services (notably crypto-currencies), sports betting subscriptions or products involving wild animals. They will also have to comply with provisions governing the promotion of gambling.

 

In addition, to better inform subscribers and young users of social networks, influencers will have to indicate, in a clear, legible and identifiable manner, the terms “advertising” or “commercial collaboration” in the case of partnerships, and “retouched images” or “virtual images” on their photos and videos affected by the use of filters or artificial intelligence processes.

 

Greater responsibility for influencers to combat drop-shipping

 

In order to address the dropshipping phenomenon, influencers will henceforth be fully liable to buyers, within the meaning of Article 15 of the LCEN, for the products they sell on their social networks. They will therefore have to provide the buyer with the information required under Article L. 221-5 of the French Consumer Code, as well as the identity of the supplier, and ensure that the products are available and lawful, in particular that they are not counterfeit.

 

More formal contracts, including for influencers based abroad

 

Influencers will have to formalize written contracts with their agents and advertisers, when the sums involved exceed a certain threshold, to be defined within an implementing decree. These contracts will have to include several mandatory clauses (concerning, for example, remuneration conditions, submission to French law, missions entrusted, etc.). The law also stipulates that the advertiser, its agent and the influencer will be “jointly and severally liable for any damage caused to third parties in the performance of the influencing contract between them”.

 

These obligations apply to all influencers targeting a French audience, including foreign-based influencers. The latter will be required to designate a legal or natural person within the European Union who will be criminally liable in the event of an infringement. The text also requires influencers operating outside the European Union or the European Economic Area to take out civil liability insurance in the Union.

 

As for the platforms hosting influencer content, they must allow Internet users to report any content that does not comply with the new provisions on commercial influence.

 

Greater powers for the DGCCRF

 

In addition to its supervisory role, the DGCCRF (Direction Générale de la Concurrence, de la Consommation et de la Répression des Fraudes) now has enhanced powers to impose injunctions, fines and formal notices against influencers. The DGCCRF has set up a 15-strong commercial influence unit.

 

In the event of infringement of the obligations laid down in this text, influencers risk up to 2 years’ imprisonment and a fine of 300,000 euros, and may be banned from exercising their profession.

 

They may also be banned, permanently or temporarily, “from exercising the professional or social activity in the exercise or on the occasion of the exercise of which the offence was committed”.

 

 

Publication of a guide to good conduct

 

In order to assist influencers in bringing their content and activities into compliance, the government has published a guide to good conduct. The sector is now awaiting the implementing decrees, which should provide details of the changes made for the activity of content creators.

 


Article by Véronique Dahan, Emilie de Vaucresson, Thomas Lepeytre and Romain Soiron.

PUBLICATION OF A FRENCH DECREE ON ELECTRONIC TERMINATION OF CONTRACTS

Article L. 215-1-1 of the Consumer Code introduced by the law of 16 August 2022 on emergency measures to protect purchasing power has created an obligation to facilitate the electronic termination of contracts.

 

The French decree no. 2023-417, published on 31 May 2023 and entered into force on 1 June 2023, sets out the terms and conditions for terminating contracts electronically.

 

It requires professionals to provide fast, easy, direct and permanent access enabling consumers and non-professionals to notify a professional of the termination of a contract.

 

In concrete terms, this functionality must be presented as “terminate your contract” or a similar unambiguous wording and be directly and easily accessible on the interface from which the consumer can conclude contracts electronically. The professional may include a reminder of the information on cancellation conditions, but must refrain from requiring the consumer to create a personal space in order to access it.

 

This termination feature must also include sections enabling consumers to provide the professional with information proving their identity, identifying the contract concerned and, where appropriate, justifying any legitimate grounds for early termination. In such cases, the professional must provide a postal address and an e-mail address, or include a feature for sending proof of the legitimate grounds.

Finally, the decree stipulates that once these sections have been completed, consumers must be able to access a page presenting a summary of the termination, enabling them to check and amend the information provided before notifying their request.

 

As a reminder, any failure to comply with the provisions of this article L. 215-1-1 is punishable by an administrative fine of up to €15,000 for an individual and €75,000 for a legal entity.

 


 

Article by Emilie de Vaucresson, Amanda Dubarry and Camille Leflour.

Data transfers to the United States – Record €1.2 billion fine for Meta Ireland

Article written by Emilie de Vaucresson, Amanda Dubarry and Camille Leflour.

 

On 22 May 2023, the Irish Data Protection Commission (the “DPC”), acting as lead supervisory authority, announced that it had fined Meta Ireland a record €1.2 billion for infringing Article 46(1) of the GDPR by transferring personal data to the U.S. without implementing appropriate safeguards.

 

Since the invalidation of the Privacy Shield, Meta Ireland had been implementing these transfers on the basis of the standard contractual clauses, in conjunction with additional measures that the DPC considered insufficient in light of the risks to the rights and freedoms of data subjects. The data of its European users is indeed stored in the United States, exposing them to potential surveillance by the US authorities.

 

The investigation was initially launched in August 2020 as part of a cooperation procedure. The draft decision prepared by the DPC was then submitted to its counterpart regulators in the EU/EEA, some of which objected to it, and the matter was referred to the European Data Protection Board (the “EDPB”).

 

On the basis of the EDPB’s decision, the DPC adopted the final decision under which Meta Ireland is required:

  • to suspend any future transfers of personal data to the United States within 5 months from the date of notification of the decision to Meta Ireland;
  • to pay an administrative fine of €1.2 billion – the highest fine ever imposed under the GDPR – justified by the seriousness of the alleged breaches by Facebook’s parent company, which has millions of users in Europe, involving a huge volume of data transferred in violation of the GDPR; and
  • to bring its processing operations into compliance with the GDPR by ceasing the unlawful processing, including storage, in the United States of personal data of EU/EEA users transferred without safeguards, within 6 months from the date of notification of the DPC’s decision to Meta Ireland.

In the words of Andrea Jelinek, Chair of the EDPB, “this sanction is a strong signal to organizations that serious breaches have considerable consequences”. It comes amid increasing scrutiny of the GAFAMs: this is the fourth fine imposed on Meta Ireland in six months.

 

For its part, Meta Ireland describes the fine as “unjustified and unnecessary” and intends to seek its suspension in court. In this context, the social network hopes that the European Commission will soon adopt the new draft adequacy decision for data transfers to the United States.

 

For the time being, as long as no agreement has been reached between Europe and the United States on the framework for data flows, we would remind you that merely signing standard contractual clauses is not sufficient to ensure a GDPR-compliant data transfer. It is necessary to verify that the recipient of the data in the United States has implemented additional safeguards ensuring the confidentiality of the data and preventing access by the American authorities.