ITM AI tools: convergence of legal research and practice

In the age of advancing digitalization and the ubiquitous implementation of artificial intelligence in almost all areas of life, the jurisprudential examination of the associated normative implications plays a key role. The Institute for Information, Telecommunications and Media Law (ITM) is dedicated to this challenge with a particular focus on linking theoretical legal doctrine with practical application experience.

The current discussion on the legal framework for artificial intelligence, which has gained new momentum in particular with the adoption of Regulation (EU) 2024/1689 of the European Parliament and of the Council on harmonized rules for artificial intelligence (AI Regulation, “AI Act”), often suffers from a significant lack of practical application experience. The qualification of certain AI systems as high-risk systems in accordance with Art. 6 in conjunction with Annex III of the AI Regulation, the implementation of appropriate risk management and data governance systems, the guarantee of sufficient transparency, and compliance with copyright and data protection requirements pose complex challenges for legal scholars and practitioners alike.

The ITM has set itself the goal of bridging the often noticeable gap between legal theory on the one hand and technological practice on the other through an innovative approach: the independent development and systematic investigation of AI-supported systems enable a well-founded analysis of the legal, ethical and social implications of digital transformation processes. In this way, the Institute aims not only to contribute to the academic debate, but also to develop concrete implementation strategies for the legally compliant integration of AI systems into legal education.

The tools and application aids presented below are the result of this research and development work and are available to students and teachers as practical instruments. On the one hand, they serve to increase efficiency and improve quality in legal education; on the other hand, they also function as research objects for the continuous evaluation of the performance, limits and legal implications of artificial intelligence in an academic context. The Institute invites all interested parties to participate in this research process by actively using and critically reflecting on the systems provided.

New: RechtsMentor

RechtsMentor accompanies teachers and students alike through legal education! This AI corrects papers, suggests topic ideas, creates individual practice cases and optimizes seminars – leaving more time for learning and teaching!

REF-GPT

REF-GPT helps trainee lawyers process case files efficiently! This AI analyzes the facts of a case, drafts detailed A and B opinions, checks for procedural peculiarities and generates final decrees – for structured, time-saving case processing.

IT contracts

Our first chatbot is designed to help users find their way through the maze of German IT and media law. With access to extensive ITM resources, the GPT can answer questions, draft IT contracts and find relevant case law.

The progressive integration of AI systems such as ChatGPT, Claude or other generative AI applications into the academic context marks a significant change in the educational landscape. These technologies offer numerous possibilities for supporting and enriching teaching and learning processes, ranging from information research and the structuring of complex content to self-monitoring. In the course of this development, educational institutions are required to define an appropriate regulatory framework that preserves academic integrity on the one hand and does not unnecessarily restrict innovative forms of learning on the other.

In this context, the categorical defensiveness towards AI technologies that can still be observed in many educational settings is proving increasingly problematic. While academic institutions are sometimes still debating whether and how to integrate AI, professional practice has long since answered this question: AI tools have become standard working instruments in almost all industries and fields of activity. Students who do not learn to use these technologies competently during their education start their professional careers at a significant competitive disadvantage. A central task of higher education is to prepare the students entrusted to us as well as possible for the demands of their future careers – a responsibility that, in view of the transformative effect of AI on almost all areas of work, requires the considered inclusion of these technologies in teaching and learning processes.

An educational practice that ignores this reality or perceives AI primarily as a threat to academic traditions runs the risk of missing the actual qualification requirements of an increasingly digitalized working and living environment.

Assessing the appropriateness of AI use requires a differentiated approach guided primarily by the skills objectives to be achieved. While independent problem solving without technological assistance may be essential in certain contexts, in other scenarios the ability to use AI-generated content effectively and evaluate it critically can itself be a valuable skill. Ultimately, this assessment cannot be fully determined by standardized sets of rules; rather, it emerges from users' reflective engagement with their own learning process and the goals associated with it.

The development of informed judgment regarding the appropriate use of digital tools is itself an important educational goal.

Transparency remains an indispensable element of academic discourse, and disclosure of AI use should be understood less as a restrictive requirement and more as an integral part of honest academic practice. The exact form of this disclosure – be it in the form of comprehensive documentation of the prompts used or summary references to supporting technologies – can vary depending on the context and should not become a bureaucratic obstacle to innovative forms of teaching and learning.

Of particular relevance are the examination regulations of the respective faculties, which contain binding specifications on the permissibility of aids and their documentation. These can vary considerably – from explicit prohibitions on the use of AI and detailed disclosure requirements to liberal regulations that expressly permit its use under certain conditions. Students and lecturers alike are obliged to familiarize themselves with these specific regulations, as they define the binding legal framework for the use of AI in examinations and violations may be considered an attempt to cheat. This applies regardless of whether or not they consider the regulations to be too restrictive.

The quality of AI output – its factual accuracy, timeliness and balance – remains a key challenge that requires critical evaluation and verification by users. The promotion of appropriate skills for the critical evaluation and responsible use of AI systems should therefore be seen as an integral part of contemporary educational concepts. We would like to contribute to this with our own tools. A constructive dialog between teachers and students that incorporates different perspectives on the use of AI and opens up shared spaces for reflection can make a valuable contribution here.

Looking to the future, the integration of AI systems into the academic education landscape is likely to increase further and give rise to new forms of teaching and learning. Rather than inhibiting this development through excessive regulation, it seems more appropriate to promote a reflective, competence-oriented approach to these technologies that strengthens the autonomy and judgment of learners and enables them to make informed decisions about the appropriate use of digital tools. Higher education can thus make an important contribution to preparing students for a working and living environment increasingly shaped by AI systems without compromising fundamental scientific values.

The custom GPTs developed by us, the Institute for Information, Telecommunications and Media Law (ITM), and presented on this page are based on OpenAI’s technology and are hosted on its infrastructure.

    1. Data processing by OpenAI: Any interaction you have with our GPTs (including your inputs/prompts and the responses generated by the GPTs) is processed directly on OpenAI’s systems. All data processing is subject exclusively to OpenAI’s policies and data protection provisions. As the institute operating these GPTs, we would like to emphasize expressly that we have no technical means of accessing the messages (prompts) you enter, the content of the conversations or other usage data. Our access is limited to the configuration and management of the GPTs themselves and does not extend to the data generated during your use of them. For detailed information on data processing by OpenAI, please refer to its official documentation, in particular the Data Processing Addendum (DPA): https://openai.com/policies/data-processing-addendum/

    2. Use of external interfaces (APIs): Some of our GPTs use external interfaces (APIs), for example those of Elsevier or other services, to perform specific tasks. If you use functions that require such a connection, the data needed to process your request is sent directly from OpenAI to the respective third-party provider; the illustrative sketch below outlines this flow. Even in this case, we as an institute neither process nor store any data from these API calls. Data processing takes place exclusively between OpenAI and the respective third-party provider, which processes the data solely to fulfill your request.
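To make this data path concrete, here is a minimal, purely illustrative Python sketch of the flow described above. The endpoint, parameter names and function are hypothetical stand-ins, not the institute's or OpenAI's actual implementation; the point is simply that the request travels from the AI platform directly to the third-party provider, without ever passing through our systems.

```python
# Purely illustrative sketch of the data flow described above.
# The endpoint and parameters are hypothetical stand-ins; this is not
# the actual implementation used by OpenAI or the ITM.
import requests

# Hypothetical third-party service endpoint (e.g. a literature search API).
THIRD_PARTY_API = "https://api.example-provider.com/search"


def platform_side_call(user_query: str) -> dict:
    """Models the API call executed on the platform's infrastructure.

    Only the data needed to answer the request is forwarded to the
    third-party provider; the GPT operator is not on the data path and
    neither sees nor stores the request or the response.
    """
    response = requests.get(
        THIRD_PARTY_API,
        params={"q": user_query},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()
```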

Important note for users: Since data processing is carried out by external parties (OpenAI and, where applicable, third-party API providers) and we have no access to your entries, we ask you to handle the information you enter into the GPTs with care. Avoid entering personal data or confidential information unless it is absolutely necessary.