Artificial intelligence is entering higher engineering education, including courses that prepare sustainability engineers to work with environmental management tools such as Life Cycle Assessment, Material Flow Analysis, Environmental Impact Assessment, and carbon accounting methods. These tools require students to handle complex datasets, define system boundaries, interpret multi-criteria indicators, and justify environmental trade-offs. Many students struggle with these steps because they demand both extensive data handling and analytical judgment. In recent semesters, several environmental management courses have begun to integrate AI-supported guidance. In this study, a short survey captured student perceptions of that integration, focusing on how students experienced AI support during analytical tasks and how it shaped their learning process.
The study involved a group of master's-level engineering students enrolled in a course on environmental management for sustainability professionals. The course included practical assignments that required students to model product systems, interpret environmental indicators, and compare improvement scenarios. The survey was conducted immediately after students used AI in these assignments. The questionnaire included rating-scale items and open-ended reflection prompts. Responses covered ease of use, perceived effect on understanding, trust in the explanations, and personal preference for learning support.
The results show a pattern of cautious acceptance. Many students reported that AI support reduced the time spent searching for background data and standards, and that the guidance helped them define system boundaries and functional units more clearly. Several respondents described a stronger focus on interpreting environmental impact results rather than on collecting and formatting input data. Students also noted that AI offered step-wise clarification during complex interpretation tasks, including allocation decisions and sensitivity checks. Together, these responses indicate a shift in cognitive effort from mechanical tasks toward conceptual reasoning.
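The allocation decisions and sensitivity checks mentioned above can be made concrete with a small sketch. The example below uses hypothetical figures (not data from the survey) to show the kind of calculation students face: partitioning the emissions of a shared multi-output process between two co-products by mass and by economic value, and then checking how sensitive the economic result is to a price assumption.

```python
# Illustrative sketch with hypothetical values: partitioning allocation
# for a two-output process, as practiced in LCA coursework.

def allocate(total_burden, shares):
    """Split a total environmental burden across co-products
    in proportion to the given shares (mass, revenue, energy, ...)."""
    total = sum(shares.values())
    return {product: total_burden * s / total for product, s in shares.items()}

total_co2 = 100.0  # kg CO2-eq emitted by the shared process (hypothetical)

mass = {"product_A": 800.0, "product_B": 200.0}      # kg produced
revenue = {"product_A": 500.0, "product_B": 1500.0}  # EUR earned

by_mass = allocate(total_co2, mass)      # {'product_A': 80.0, 'product_B': 20.0}
by_value = allocate(total_co2, revenue)  # {'product_A': 25.0, 'product_B': 75.0}

# Simple sensitivity check: under economic allocation, how does product A's
# burden shift if its price doubles?
revenue_alt = {"product_A": 1000.0, "product_B": 1500.0}
by_value_alt = allocate(total_co2, revenue_alt)  # product_A rises to 40.0
```

The point of such an exercise is not the arithmetic, which is trivial, but the judgment: the choice of allocation basis changes product A's burden from 80 to 25 kg CO2-eq, and a single price assumption shifts it again. This is exactly the kind of open-ended decision where students asked for instructor oversight.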
At the same time, several concerns surfaced. Some students questioned the reliability of AI explanations for advanced methodological issues, such as allocation rules in multi-output processes or the treatment of uncertainty, and stressed the importance of instructor oversight and verification against trusted references. A smaller group worried about over-reliance on AI when facing open-ended analytical decisions. These responses suggest that while AI strengthens accessibility and clarity, it must be integrated carefully under clear academic expectations.
The survey results suggest several implications for pedagogy. First, AI works best as a structured guidance tool where the instructor sets boundaries for use and clarifies where human judgment holds priority. Second, instructors need to design assignments that require students to explain the reasoning behind each environmental decision step, including verification of AI-supported suggestions. Third, introducing AI does not reduce the instructor's role. Instead, the instructor shifts from demonstrating procedural steps to reinforcing conceptual interpretation, critical evaluation, and justification of analytical outcomes.
The findings support the integration of AI in environmental management pedagogy when the goal is to strengthen systems thinking and interpretive skills. Students respond positively when AI decreases repetitive effort and increases focus on reasoning. At the same time, meaningful learning requires that students reflect on and verify AI-supported guidance. AI then becomes a support structure for student engagement rather than a replacement for analytical judgment.