The rapid ascendancy of Artificial Intelligence (AI) has fundamentally altered the landscape of engineering design. AI is no longer merely a tool in the hands of engineers but an agent capable of autonomous decision-making, reshaping economic structures and individual rights. Consequently, the traditional technocratic approach to engineering education—focused solely on neural network architecture or optimisation—is no longer sufficient. This presentation argues that in the age of AI, legal regulation has evolved from a post-design compliance check into a fundamental design boundary condition, comparable to the laws of thermodynamics or material tensile strength. We posit that the engineer of the future must possess specific "legal literacy" to navigate a volatile regulatory environment, transforming from a narrow technical specialist into a "T-shaped" professional who combines deep technical expertise with broad contextual understanding.
The first section of the presentation offers a comparative analysis of the global regulatory landscape, defining the new constraints for engineering design. We examine the divergent requirements of the world’s three dominant economic blocs: the European Union’s risk-based ex-ante regulation (EU AI Act), the United States’ ex-post liability and litigation-focused model, and China’s content-centric control. Crucially, we demonstrate how these legal frameworks translate into concrete engineering tasks. For instance, the EU’s prohibition of "unacceptable risk" systems defines a "negative design space" that students must learn to recognise early in the design process. Furthermore, legal mandates for data governance transform the technical task of data cleaning into a requirement for detecting statistical bias and ensuring representativeness. Similarly, the "duty of care" principle in US law and China’s requirements for explainability (XAI) necessitate that engineers design software architectures that are transparent, auditable, and capable of human oversight, rejecting "black box" operations where they hinder legal accountability.
The second section addresses specific legal competencies required for daily decision-making. We explore the crisis of Intellectual Property (IP) in the era of generative AI, where students must learn "code hygiene" to segregate AI-generated snippets from proprietary work and manage the legal risks of open-source licensing in training data. We also analyse the shifting nature of liability and agency. Using the Moffatt v. Air Canada case—where a chatbot was deemed an agent of the company—we illustrate the necessity of designing Retrieval-Augmented Generation (RAG) systems with strict "guardrails" that prioritise static ground truth over generative hallucinations. Additionally, we discuss the implications of strict liability versus negligence in autonomous systems, emphasising that "state-of-the-art" defences may not suffice if a system causes harm.
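The guardrail principle drawn from the Moffatt v. Air Canada discussion above—never let a generative model improvise on policy questions that a vetted source already answers—can be sketched in a few lines. This is a minimal illustration, not a production design: the `STATIC_POLICY` store, the `generate_freely` stub, and the keyword-matching logic are all hypothetical placeholders for a curated, legally reviewed knowledge base and a real LLM call.

```python
# Hypothetical sketch of a guardrail layer for a customer-facing RAG chatbot:
# answers about vetted policy topics come verbatim from the static source,
# and only out-of-scope questions fall through to free generation.

# Stand-in for a curated, legally vetted knowledge base (assumption).
STATIC_POLICY = {
    "bereavement fare": "Bereavement fare requests must follow the published "
                        "refund policy; the agent cannot promise retroactive refunds.",
}

def generate_freely(question: str) -> str:
    # Stub for an LLM call; a real system would query a model here.
    return "(model-generated answer)"

def answer(question: str) -> tuple[str, str]:
    """Return (answer_text, provenance).

    Policy topics are answered only from the vetted static source, so the
    company's chatbot never "hallucinates" a commitment it must then honour.
    """
    q = question.lower()
    for topic, vetted_text in STATIC_POLICY.items():
        if topic in q:
            return vetted_text, "static"      # ground truth wins over generation
    return generate_freely(question), "generative"
```

In a classroom exercise, students could extend this by replacing the keyword match with retrieval scoring and by logging provenance for auditability—exactly the kind of transparency the preceding section ties to legal accountability.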
The third section provides innovative methodological recommendations for curriculum development, moving away from traditional lecture-based legal instruction, which is often ineffective for engineering students. We propose the "Legal Autopsy" method, in which students reverse-engineer real-world legal failures (such as the Uber self-driving fatality) to understand how technical design flaws, including automation bias, lead to criminal or civil liability. We advocate for Interdisciplinary Clinical Legal-Engineering Education, in which mixed teams of law and engineering students conduct compliance audits of real AI products, simulating the industrial environment. Furthermore, we suggest incorporating "Red Teaming" into lab exercises, where students actively try to "break" models to induce discriminatory or illegal outputs, thereby reinforcing legal boundaries through practical experience.
Finally, the presentation addresses the implementation challenges, specifically the short "half-life" of knowledge in a rapidly shifting regulatory environment. Given that university accreditation cycles cannot keep pace with legislation such as the EU AI Act or rapidly evolving generative AI measures, we argue that legal competencies cannot be fully imparted within traditional BSc/MSc frameworks. Instead, we propose a shift toward a continuous, lifelong learning model utilising micro-credentials—short, competency-based courses (e.g., "AI Compliance Auditor") developed in cooperation with industry partners. This approach ensures that the integration of legal competencies serves not as a bureaucratic burden but as a risk-management tool that enables the development of socially responsible and sustainable AI systems.