Statement on the Responsible Use of Generative Artificial Intelligence by Lexum

In accordance with its mission, vision, and values, Lexum is dedicated to developing and using cutting-edge technologies to improve access to legal information and promote its comprehension by the individuals who use its systems and services (the “Users”). Recent advances in generative artificial intelligence (AI) represent a substantial step towards achieving these objectives.

Generative AI refers to systems capable of autonomously generating content based on machine learning models trained on vast datasets.

Lexum adheres to widely recognized standards and values for the ethical and responsible use of generative AI. Additionally, Lexum actively monitors legislative developments and best practices in this field and takes the necessary steps to ensure proactive compliance.

Lexum integrates generative AI into its products and services primarily to automate the generation of textual analyses and summaries. This usage entails risks that are considered limited and unlikely to cause significant harm to its Users. This statement sets out the guiding principles that govern risk management related to Lexum’s current use of generative AI. These principles are regularly reviewed to reflect technological advancements and the evolving regulatory framework.

Well-being, Fairness and Sustainability

The use of generative AI by Lexum prioritizes the protection of human interests, fundamental rights, dignity, and privacy, in order to contribute positively to society.

  • Systems that use generative AI are designed and deployed in a way that prevents discrimination, reduces inequalities, and promotes fair access to information for all communities, including vulnerable or underrepresented groups.
  • Generated content aims to help individuals and communities make decisions and gain greater control over their lives, by effectively assisting Users in finding and understanding legal information.
  • Technological decisions favour efficient approaches that conserve the resources required to develop, test, and deploy the systems that produce generated content.

Human Oversight and Reliability

Individuals involved in the development and use of generative AI by Lexum are identifiable and accountable, with human supervision in critical decision-making related to the reliability of the generated content.

  • The use of generative AI is monitored and tested by humans at every stage of the design, development, and deployment of Lexum’s information systems and services to minimize the risk of errors or inaccuracies.
  • Feedback mechanisms are available to Users, allowing them to report any anomalies.
  • When errors or inaccuracies do occur, they are proactively managed through documentation and continuous improvement of the relevant systems.
  • Generative AI is presented to Users as a tool that assists, rather than replaces, human judgment.

Transparency and Explainability

Lexum’s development processes for systems incorporating generative AI are structured to ensure transparency and explainability, thereby enhancing accountability.

  • Processes, methodologies, language models, and data sources are documented to ensure that any errors or inaccuracies can be accurately traced.
  • The use of generative AI is systematically disclosed to Users accessing the generated content, enabling them to give informed consent to its use and to assess the potential risks of errors or inaccuracies.
  • The sources of information from which AI-generated content is automatically produced are clearly indicated to Users who consult that content.

Safety and Data Protection

Generative AI is used by Lexum in a manner that ensures reliable and secure operation throughout its lifecycle, with mechanisms in place to identify, assess, and mitigate potential risks.

  • Systems utilizing generative AI are secured against cyberattacks and against the unauthorized dissemination of proprietary or confidential data pertaining to Lexum employees, clients, or Users.
  • Providers of the language models used must ensure that these models do not retain or use data beyond what is necessary for generating the requested content. Applicable contractual clauses are closely monitored to ensure the continued protection of confidentiality.
  • Providers operating within the jurisdiction of Users and clients are prioritized to ensure better compliance with local regulations and enhanced data protection.

Updated: 2025-08-28
