
AI and Employment in Greece

27 March 2025

Adopting an AI governance policy has become imperative for businesses in the technological era. Awareness and transparency regarding the logic, procedures and principles of AI systems in the workplace are considered a fundamental precondition for their use.

The Greek legal framework imposes transparency obligations where AI systems are involved in decision-making. In particular, under L. 4961/2022, companies that use AI systems affecting any decision-making process concerning employees or prospective employees, including employment conditions and/or the selection, recruitment or evaluation process, must provide sufficient and explicit information before the system is used (transparency principle). Such information must contain, at a minimum, an analysis of how a decision is taken. Cases requiring prior consultation with employees under the national legal framework (Presidential Decree 240/2006) are excluded. As a result, private entities must be aware of:

  1. how the AI system works,
  2. the weights on which decisions are based,
  3. how the AI system was trained and on what historical data,
  4. the possibility of errors and the ways to address them,
  5. whether data are used for training purposes by the AI provider and where such processing takes place,
  6. how the AI system decides when confronted with similar scenarios and circumstances, and when human involvement takes place.

In general, private entities must ensure compliance with the principle of equal treatment and combat discrimination in employment on the grounds of sex, race, colour, national or ethnic origin, genetic features, descent, religious or other beliefs, disability or chronic illness, age, marital or social status, sexual orientation, gender identity or gender characteristics.

More importantly, the Artificial Intelligence Act (Regulation (EU) 2024/1689) prohibits the use of AI systems to infer the emotional state of individuals in situations related to the workplace and education (Art. 5(1)(f), rec. 44).

The following AI systems are classified as high risk:

  1. AI systems used in employment for the recruitment and selection of persons,
  2. for making decisions affecting the terms of work-related relationships, promotion, and the termination of work-related contractual relationships,
  3. for allocating tasks on the basis of individual behaviour or personal traits,
  4. for analysing and filtering job applications,
  5. for monitoring or evaluating persons in work-related relationships (Art. 6(2), Annex III and rec. 57).

The reasoning for such classification is that throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of such persons may also undermine their fundamental rights to data protection and privacy.

The requirements for high-risk AI systems according to the AI Act are:

  1. to have a risk-management system in place,
  2. to ensure the quality and relevance of data sets used,
  3. to have technical documentation and record-keeping,
  4. to abide by the transparency principle,
  5. to provide information to deployers,
  6. to ensure human oversight and robustness, accuracy and cybersecurity

(Arts 8-15 of the AI Act).

Exceptionally, the AI systems referred to above shall not be considered high risk where they do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making. This is the case, in particular, where: (a) the AI system is intended to perform a narrow procedural task; (b) the AI system is intended to improve the result of a previously completed human activity; (c) the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment without proper human review; or (d) the AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III (Art. 6(3)).

To conclude, businesses must implement an AI governance framework, audit their AI systems, embed compliance by design, and retain human involvement in critical decision-making.

Marios D. Sioufas
Deputy Managing Partner
LL.M. in Intellectual Property Law – Queen Mary University of London
