Artificial Intelligence (AI)
Impact of AI "Hallucinations" in the Legal Sector: Current Challenges and Risks
Paloma Firgaira
2026-01-31
5 min read
Artificial intelligence (AI) has established itself as a fundamental tool in numerous sectors, including the legal field. However, its improper use has led to concerning situations, especially when it is used to draft legal documents without proper supervision. Recently, several cases have been detected where AI has generated non-existent jurisprudence, causing serious problems in judicial proceedings.
A recent example occurred in the Canary Islands, where the Criminal Chamber of the Superior Court of Justice of the Canary Islands (TSJC) is investigating a lawyer for submitting a motion that cited jurisprudence and official documents allegedly generated by AI, which turned out to be false. The court suspects that the lawyer relied too heavily on the information provided by the tool without verifying its authenticity, leading to an investigation that could result in disciplinary sanctions.
This is not an isolated case. In September 2024, the Superior Court of Justice of Navarre opened an investigation into a lawyer who, when filing a complaint, cited an article of the Colombian Penal Code believing it to be from its Spanish counterpart, owing to an incorrect reference generated by ChatGPT that the professional failed to review. Although the case did not end in sanctions, it highlighted the risks of blindly trusting AI.
According to Carlota Planas, a lawyer specializing in new technologies, the use of AI in the legal field has become so widespread that legal publishers such as Aranzadi, Lefebvre, and vLex already offer specialized tools. However, AI continues to make errors, known as "hallucinations," which consist of generating seemingly truthful information with no real basis, as explained by Concepción Campos, advisor to the General Council of Spanish Lawyers (CGAE).
In the legal sector, these "hallucinations" can translate into the invention of rulings, doctrinal citations, or non-existent legal interpretations. Campos points out that the lack of digital training in the sector exacerbates the problem, as only those who understand the limitations of AI can establish verification protocols and distinguish which tasks require human intervention.
Ramon Àngel Casanova, a member of the governing board of the Barcelona Bar Association (ICAB), emphasizes that errors often occur due to the absence of human supervision. Miguel Hermosa, president of the CGAE's digital justice subcommittee, stresses that responsibility always lies with the lawyer, not with the AI, and that professional supervision is essential to avoid risks.
The Superior Court of Justice of Navarre and the Council of Bars and Law Societies of Europe (CCBE) insist that lawyers must verify any results generated by AI before using them, understanding both their capabilities and limitations. The CCBE, in a guide published in October 2025, stresses the importance of understanding the risks associated with the use of these technologies.
The misuse of AI in the legal field is not exclusive to Spain; numerous cases have also been reported in the United States. Although the ICAB has not yet received formal complaints, its officials believe it is only a matter of time.

Source: lavanguardia.com