News Alerts

Bill to regulate Artificial Intelligence Systems is introduced to the Chilean Chamber of Deputies
May 13, 2024


On May 7, 2024, the President of the Republic of Chile sent to the Chamber of Deputies a bill (the “bill”) regulating Artificial Intelligence (“AI”) systems, which is the result of the work carried out on a previous bill, the National AI Policy, comparative experience, and several international recommendations on AI ethics.

Specifically, the bill aims to promote the creation, development, innovation, and implementation of AI systems that are at the service of people, and that safeguard democratic principles, the rule of law and people's fundamental rights.

The main points of the bill are summarized below:

Scope of application

The bill will apply, on the one hand, to suppliers and implementers of AI systems, whether domiciled in Chilean territory or abroad, whose output information is used in Chile; and, on the other, to importers and distributors of AI systems, together with their representatives.

Likewise, the following are excluded from the scope of application: AI systems for national defense purposes; research, testing, and development of AI systems prior to their introduction on the market; and AI components provided under free and open-source licenses.

Principles applicable to AI systems

The bill sets out several principles to be observed, in the development and use of AI systems, by the operators subject to its provisions, including:

  • Principle of human intervention and supervision: the development and use of AI systems must be carried out as tools at the service of people, respecting people’s dignity, and personal autonomy.
  • Principle of privacy and data governance: AI systems will be developed and used in accordance with current regulations on privacy and data protection, also ensuring the interoperability of the data used.
  • Principle of transparency and explainability: traceability and explainability shall be provided, ensuring that individuals know and are aware that they are communicating with an AI system.
  • Principle of diversity, non-discrimination, and equity: AI systems should be developed and used in a way that promotes equal access, gender diversity, and cultural diversity, avoiding discriminatory effects and the selection biases that may generate them.
  • Principle of protection of consumer rights: AI systems shall ensure fair treatment, delivery of accurate, timely and transparent information, and safeguard freedom of choice and safety in consumption.

Classification of AI systems

In line with European regulation, the bill regulates AI systems according to their risk level, classifying them as follows:

1.- AI systems of unacceptable risk

Firstly, unacceptable-risk AI systems are those that are incompatible with respect for and the guarantee of individuals' fundamental rights, and their introduction on the market is therefore prohibited.

By way of example, unacceptable-risk systems would include those for subliminal manipulation, which induce actions that damage people's physical and/or mental health, or those for generic social classification, which classify people according to their behavior, socioeconomic status, or personal characteristics, resulting in prejudicial or discriminatory treatment.

2.- High-risk AI systems

Secondly, high-risk AI systems are those that may adversely affect the health and safety of individuals, their fundamental rights, the environment, or the rights of consumers.

Thus, the bill submits high-risk AI systems to rules relating to i) risk management; ii) data governance; iii) technical documentation; iv) system of records; v) transparency; vi) human oversight; and vii) cybersecurity.

In line with the above, when a high-risk AI system does not comply with such rules, the respective operator must immediately take the necessary measures to deactivate it, withdraw it from the market, or recall it.

3.- Limited-risk AI systems

Thirdly, the bill states that limited-risk AI systems are those that present non-significant risks of manipulation, deception, or error in their interaction with natural persons.

These systems must be provided under transparent conditions, ensuring that people are informed that they are interacting with a machine.

4.- AI systems without evident risk

Finally, all AI systems that do not fall into the other categories are systems without evident risk.

Institutionality and governance

Regarding the institutional framework for AI regulation, the bill contemplates, on the one hand, the creation of the Artificial Intelligence Technical Advisory Council (the “Council”) and, on the other, entrusts supervisory and sanctioning powers to the Personal Data Protection Agency (the “Agency”), whose creation is contemplated in the bill that amends the current Law No. 19,628 on the Protection of Private Life, currently in the legislative process.

1.- Artificial Intelligence Technical Advisory Council

Regarding the Council, the bill defines it as a permanent advisory body that will advise the Ministry of Science, Technology, Knowledge, and Innovation (the “Ministry”) on matters related to the development, promotion, and continuous improvement of AI systems in the country.

Thus, the Council will have the following functions:

  • Proposing to the Ministry a list of high-risk and limited-risk AI systems.
  • Advising the Ministry on the scope of the rules applicable to operators of high-risk and limited-risk AI systems.
  • Proposing to the Ministry the guidelines for the development of controlled AI system test sites.

2.- Personal Data Protection Agency

The supervision and enforcement of the provisions of the bill will correspond to the Agency, whose functions in relation to AI regulation will be:

  • To oversee compliance with the provisions of the bill and its regulations, for which purpose it may require any operator to deliver any information necessary to comply with it.
  • To determine the infractions incurred by those who breach the provisions or fail to comply with the obligations of the bill, exercising the sanctioning power over them.
  • To resolve requests and claims made by affected persons against those who commit infractions.

Infractions and associated penalties

For the purposes of the exercise of the Agency's powers, the bill contemplates three types of infractions: minor, serious, and very serious, depending on the type of AI system involved.

Thus, failure to comply with the transparency obligations regarding limited-risk AI systems will be a minor infraction, punishable with a fine of up to 5,000 Monthly Tax Units (“UTM”); failure to comply with the rules established for high-risk AI systems will be a serious infraction, punishable with a fine of up to 10,000 UTM; and, finally, the commissioning or use of an unacceptable-risk AI system will be a very serious infraction, punishable with a fine of up to 20,000 UTM.



AUTHORS: Guillermo Carey, José Ignacio Mercado, Stefano De Cristofaro, Ricardo Alonso


