Proposed for MDSI Doctoral Training Program

Applied AI Engineering Lab
Reliable AI Systems for TUM Researchers

A hands-on course teaching doctoral researchers to build production-grade AI systems with emphasis on reliability, safety, and real-world deployment—perfectly aligned with MDSI's relAI initiative.

98% Alignment with MDSI Mission
6 Hands-On Lab Modules
3 Delivery Format Options
100% Production Frameworks
Perfect Fit

Why This Course Belongs at MDSI

Five critical alignment points that make this course essential for TUM's doctoral researchers

🛡️

relAI Perfect Match

Every module emphasizes reliable, trustworthy AI—from safe deployment to security and governance. Your relAI initiative focuses on reliability research; this course teaches reliability engineering.

100% Alignment
🌐

Interdisciplinary by Design

Mixed-domain team projects force cross-specialty collaboration. Researchers from biology, physics, CS, and social sciences build systems together—exactly MDSI's vision.

Core Mission
🎯

Career Preparation

Unlike pure theory courses, this course prepares researchers for both academia (research tools, reproducibility) and industry (production frameworks, MLOps, deployment).

Dual Path
🏆

Certificate Integration

Fits as a core MDSI technical module and supports all three focus tracks (Research, Entrepreneurship, Communication). Course deliverables satisfy the certificate requirements of each track.

Full Integration
🔧

Engineering Discipline

Researchers can learn algorithms in many courses. What is missing is production engineering: deployment, monitoring, security, and governance. This course fills that specific gap.

Unique Value

Turnkey Solution

All materials are ready, in three flexible delivery formats, with no development work required from MDSI. The instructor brings proven production expertise, and the course can launch immediately.

Ready Now
Curriculum

6 Modules from Foundation to Mastery

Theory-first, practice-heavy approach building production AI systems step by step

1

AI Engineering Foundations

Ground researchers in LLM architectures, adaptation strategies (RAG vs. fine-tuning), and prompt engineering patterns (chain-of-thought, ReAct, verification).

LLMs vs LRMs · RAG Architecture · Evaluation Metrics · Prompt Engineering
Lab: Build domain-specific RAG system with citation verification and evaluation
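
To give a flavor of what this lab builds, here is a minimal sketch of citation verification in a RAG pipeline. The toy corpus, the keyword-overlap retriever (standing in for a real embedding index), and the [docN] citation format are illustrative assumptions, not the course's actual codebase.

```python
# Minimal sketch: retrieve supporting passages, then verify that every
# source cited in the answer was actually among the retrieved documents.
# Keyword overlap stands in for embedding-based retrieval.
import re

CORPUS = {
    "doc1": "Retrieval-augmented generation grounds LLM answers in retrieved documents.",
    "doc2": "Fine-tuning adapts model weights to a domain but requires labeled data.",
    "doc3": "Citation verification checks that claimed sources support the answer.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_terms = set(re.findall(r"\w+", query.lower()))
    ranked = sorted(
        CORPUS,
        key=lambda d: len(q_terms & set(re.findall(r"\w+", CORPUS[d].lower()))),
        reverse=True,
    )
    return ranked[:k]

def verify_citations(answer: str, retrieved_ids: list[str]) -> bool:
    """Reject answers that cite documents which were never retrieved."""
    cited = re.findall(r"\[(doc\d+)\]", answer)
    return bool(cited) and all(c in retrieved_ids for c in cited)

# An answer citing doc1 passes; citing an unretrieved document would fail.
ids = retrieve("How does retrieval-augmented generation ground answers?")
answer = "RAG grounds answers in retrieved documents [doc1]."
print(ids, verify_citations(answer, ids))
```
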
2

MLOps & Production Pipelines

Master experiment tracking, deployment strategies (canary, shadow), monitoring, and drift detection for reliable ML systems.

Experiment Tracking · Deployment Patterns · Drift Detection · Rollback Strategies
Lab: Deploy model with canary release, monitoring dashboard, and automatic rollback
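
The canary-release idea from this lab can be sketched in a few lines. The placeholder models, the 5% canary share, and the 10% error budget below are illustrative assumptions rather than course defaults.

```python
# Route a small share of traffic to the candidate model and roll back
# automatically when its observed error rate exceeds the error budget.
import random

def stable_model(x): return x >= 0.5   # placeholder "known-good" model
def canary_model(x): return x >= 0.3   # placeholder candidate model

def canary_rollout(requests, canary_share=0.05, error_budget=0.10):
    errors, served = 0, 0
    for x, label in requests:
        if random.random() < canary_share:
            served += 1
            if canary_model(x) != label:
                errors += 1
            # Automatic rollback: stop the canary once it breaches the budget
            # (after a minimum sample size, to avoid noisy early decisions).
            if served >= 20 and errors / served > error_budget:
                return "rolled_back", errors / served
        else:
            stable_model(x)
    return "promoted", (errors / served if served else 0.0)

xs = [random.random() for _ in range(5000)]
traffic = [(x, x >= 0.5) for x in xs]   # ground truth matches the stable model
print(canary_rollout(traffic))          # the weaker canary should be rolled back
```
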
3

Digital Twins & Simulation

Learn simulation-based validation, what-if scenarios, intent verification systems, and evidence-based decision making.

Digital Twin Concepts · Counterfactual Reasoning · Intent Verification · Risk Assessment
Lab: Build what-if simulation engine with policy verification and evidence packs
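
A minimal sketch of the what-if pattern covered here, assuming invented state fields (replicas, load) and policy limits; the evidence pack is simply a JSON record of the simulated decision.

```python
# Apply a proposed change to a copy of the system state, check the projected
# outcome against policy limits, and emit an evidence pack for the decision.
import copy
import json
from datetime import datetime, timezone

POLICY = {"max_cpu_utilization": 0.85, "min_replicas": 2}

def simulate(state: dict, change: dict) -> dict:
    """Project the change on a copy of the state; never touch production state."""
    projected = copy.deepcopy(state)
    projected.update(change)
    projected["cpu_utilization"] = projected["load"] / max(projected["replicas"], 1)
    return projected

def verify_policy(projected: dict) -> list[str]:
    violations = []
    if projected["cpu_utilization"] > POLICY["max_cpu_utilization"]:
        violations.append("cpu_utilization above limit")
    if projected["replicas"] < POLICY["min_replicas"]:
        violations.append("replica count below minimum")
    return violations

def evidence_pack(state, change, projected, violations) -> str:
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "current": state, "proposed_change": change,
        "projected": projected, "violations": violations,
        "approved": not violations,
    }, indent=2)

state = {"replicas": 4, "load": 2.8}
change = {"replicas": 3}
projected = simulate(state, change)
print(evidence_pack(state, change, projected, verify_policy(projected)))
```
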
4

Agentic AI Fundamentals

Architect AI agents with planning, tools, memory, and reflection. Implement safety constraints and human-in-the-loop workflows.

Agent Architecture · Tool Use · Safety Guardrails · ReAct Pattern
Lab: Build autonomous agent with tool allowlists and approval workflows
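
The core safety mechanics of this lab can be sketched as follows; the tool names, allowlist contents, and approval prompt are hypothetical examples.

```python
# An agent may only call tools on an explicit allowlist, and destructive
# tools additionally require human approval before execution.
ALLOWED_TOOLS = {"search_papers", "summarize", "delete_dataset"}
REQUIRES_APPROVAL = {"delete_dataset"}

TOOLS = {
    "search_papers": lambda q: f"3 papers found for '{q}'",
    "summarize": lambda text: text[:60] + "...",
    "delete_dataset": lambda name: f"dataset '{name}' deleted",
}

def run_tool(name: str, arg: str, approver=input) -> str:
    if name not in ALLOWED_TOOLS:
        return f"BLOCKED: '{name}' is not on the tool allowlist"
    if name in REQUIRES_APPROVAL:
        # Human-in-the-loop gate for irreversible actions.
        answer = approver(f"Agent wants to run {name}({arg!r}). Approve? [y/N] ")
        if answer.strip().lower() != "y":
            return f"DENIED: human approval not granted for '{name}'"
    return TOOLS[name](arg)

print(run_tool("search_papers", "drift detection"))
print(run_tool("send_email", "hello"))  # not allowlisted, blocked outright
print(run_tool("delete_dataset", "raw_scans", approver=lambda _: "n"))  # denied
```
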
5

Multi-Agent Systems

Design agent collaboration patterns, implement communication protocols (A2A, ACP, MCP), and orchestrate complex workflows.

LangGraph · Agent Protocols · Coordination Patterns · Failure Recovery
Lab: Build multi-agent collaboration system with state management
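
Below is a framework-agnostic sketch of the writer/reviewer coordination pattern: two agents exchange work through an explicit shared state object, with a bounded loop so a failed review cannot recurse forever. In the lab this orchestration would typically be expressed with LangGraph; the toy agent logic here is a stand-in.

```python
from dataclasses import dataclass, field

@dataclass
class SharedState:
    task: str
    draft: str = ""
    feedback: list[str] = field(default_factory=list)
    approved: bool = False

def writer(state: SharedState) -> SharedState:
    note = f" (revised after: {state.feedback[-1]})" if state.feedback else ""
    state.draft = f"Draft answer for '{state.task}'{note}"
    return state

def reviewer(state: SharedState) -> SharedState:
    # Toy acceptance rule: approve only drafts that acknowledge a revision.
    if "revised" in state.draft:
        state.approved = True
    else:
        state.feedback.append("please address reviewer comments")
    return state

def orchestrate(task: str, max_rounds: int = 3) -> SharedState:
    state = SharedState(task=task)
    for _ in range(max_rounds):   # failure recovery: bounded retries
        state = reviewer(writer(state))
        if state.approved:
            break
    return state

print(orchestrate("summarize drift-detection results"))
```
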
6

AI Security & Governance

Implement security controls, privacy-preserving techniques (differential privacy, federated learning), and governance frameworks for trustworthy AI deployment.

Threat Models · Zero-Trust · Differential Privacy · Audit Systems
Lab: Secure AI system with audit logging and privacy controls
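
A minimal sketch of the audit-logging and pseudonymization idea from this lab; the log path, salt handling, and record fields are illustrative assumptions.

```python
# Every model interaction is written to an append-only audit log, with user
# identifiers pseudonymized so the trail is reviewable without exposing
# personal data, and only metadata (not raw content) is recorded.
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = "audit.log"          # in production: append-only, access-controlled
SALT = b"replace-with-a-secret"  # assumption: salt stored outside the codebase

def pseudonymize(user_id: str) -> str:
    """One-way hash so audit entries cannot be trivially linked to a person."""
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()[:16]

def audited_query(user_id: str, prompt: str, model=lambda p: f"echo: {p}") -> str:
    response = model(prompt)
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": pseudonymize(user_id),
        "prompt_chars": len(prompt),      # log metadata, not raw content
        "response_chars": len(response),
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
    return response

print(audited_query("researcher-42", "Summarize the incident report."))
```
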

Capstone Project

Teams design and build complete AI systems addressing real research problems in their domains. Includes architecture, implementation, evaluation, safety controls, and presentation.

Konrad Zuse School of Excellence in Reliable AI

Built for relAI Researchers

Every module emphasizes reliability, safety, and trustworthiness—the core mission of TUM's relAI initiative

Verification-First

Every AI output verified against ground truth before deployment—no blind trust in model outputs.

Safety Guardrails

Tool allowlists, schema validation, human oversight, and rollback readiness built into every system.

Observability

Monitoring, drift detection, performance tracking, and alerting—know when systems deviate from expected behavior.

Security-First

Threat models, prompt injection defense, data isolation, and privacy-preserving techniques integrated from day one.

Governance

Audit trails, evidence packs, approval workflows, and compliance frameworks for accountable AI.

Simulation-Based

Digital twins enable safe experimentation—test changes in sandbox before production deployment.

Meet Your Instructor

Eduard Dulharu, MSc

CTO & Founder, vExpertAI | Production AI Expert | Critical Infrastructure Specialist

Eduard Dulharu
MSc Information Security

Bridging Research and Production

As CTO of vExpertAI, Eduard specializes in building AI-powered solutions for critical infrastructure—combining deep expertise in information security, network architecture, and production AI systems. His work focuses on making AI reliable, secure, and trustworthy for real-world deployment.

  • Production LLM Systems: RAG architectures, multi-agent orchestration, autonomous systems
  • AI Security: Threat modeling, privacy-preserving ML, governance frameworks
  • Critical Infrastructure: Network automation, reliability engineering, 24/7 operations
  • Academic Rigor: MSc Information Security, research methodology, formal verification
"Teaching at TUM has been a dream of mine. I'm passionate about training researchers who can bridge the gap between innovation and reliability—advancing both science and practical impact. MDSI's relAI focus is exactly where my expertise and values align."
What You'll Achieve

Learning Outcomes

Skills that advance both academic research and industry career paths

Build Complete AI Systems

  • Design end-to-end architectures
  • Implement RAG to multi-agent systems
  • Deploy with production frameworks
  • Handle real-world complexity

Deploy Safely & Reliably

  • Monitoring and drift detection
  • Canary deployments and rollback
  • SLO definition and tracking
  • Incident response procedures

Secure & Govern AI

  • Implement threat mitigations
  • Privacy-preserving techniques
  • Audit trails and compliance
  • Zero-trust architectures

Evaluate Rigorously

  • Define appropriate metrics
  • Measure system reliability
  • Identify failure modes
  • Report results transparently

Collaborate Across Domains

  • Work in mixed-discipline teams
  • Communicate technical concepts
  • Integrate diverse requirements
  • Build shared understanding

Identify Research Opportunities

  • Spot open problems in AI systems
  • Connect theory to practice
  • Generate dissertation topics
  • Publish systems research

Ready to Bring This to MDSI?

Let's discuss how Applied AI Engineering Lab can strengthen MDSI's doctoral training program and support the relAI initiative.

Flexible Delivery Options

3-Day Intensive
Immersive workshop format

📅
6 Weekly Sessions
3-hour sessions over six weeks

🔄
Hybrid Model
2-day kickoff followed by weekly follow-up sessions