This module introduces learners to the fundamentals of AI assistants and large language models (LLMs). It covers how major AI platforms compare, the core transformer technology behind modern models, and key design principles for conversational interfaces.
Through interactive challenges and cross-industry case studies, participants will explore real-world implementations of AI systems and gain practical insight into how chat-based interfaces are transforming workflows. The module concludes with a multi-part self-review quiz to consolidate understanding and assess applied knowledge.
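To make the "core transformer technology" concrete, here is a minimal sketch of scaled dot-product attention, the central operation inside transformer-based LLMs. The dimensions and random inputs are purely illustrative and not tied to any specific model covered in the module.

```python
# A toy implementation of scaled dot-product attention:
# Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity between each query and each key
    weights = softmax(scores, axis=-1)   # each query's attention weights over all keys
    return weights @ V                   # weighted sum of value vectors

# Toy example: 3 tokens with 4-dimensional embeddings.
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4)
```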
This module provides a structured journey through the art and science of prompt engineering. Learners will explore the key elements of effective prompting, discover techniques for generating structured data with large language models, and practice iterative refinement to improve outputs.
The module also introduces advanced topics such as adversarial prompting and real-world use cases for JSON and table generation. Through guided exercises, self-assessment quizzes, and reflective activities, participants will strengthen their prompting skills and develop a professional approach to working with AI systems.
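As a taste of the structured-data techniques covered here, the sketch below prompts a model for JSON and validates the reply. The `call_llm` function is a hypothetical stand-in for whichever chat-completion client you use; the field names are illustrative only.

```python
# A minimal sketch of JSON generation with an LLM: constrain the prompt to a
# fixed schema, then validate the reply before using it.
import json

PROMPT_TEMPLATE = """Extract the following fields from the text and reply
with JSON only, using exactly these keys: "name", "role", "years_experience".

Text: {text}
"""

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder; replace with a real model call.
    return '{"name": "Ada Lovelace", "role": "Analyst", "years_experience": 10}'

def extract_structured(text: str) -> dict:
    reply = call_llm(PROMPT_TEMPLATE.format(text=text))
    try:
        data = json.loads(reply)  # reject replies that are not valid JSON
    except json.JSONDecodeError:
        raise ValueError(f"Model did not return valid JSON: {reply!r}")
    missing = {"name", "role", "years_experience"} - data.keys()
    if missing:
        raise ValueError(f"Missing expected keys: {missing}")
    return data

print(extract_structured("Ada Lovelace has worked as an analyst for 10 years."))
```

Validating and re-prompting on failure is the usual pattern for the iterative refinement practiced in this module.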
This module offers an in-depth exploration of the latest AI-driven tools shaping modern software development and research. Learners will review cutting-edge coding assistants, analyze comparative findings from leading AI deep-research engines, and examine best practices for verifying AI-generated content.
Through hands-on exercises and a final self-review project, participants will gain practical experience with professional-grade AI development workflows and learn how to critically evaluate emerging technologies in the rapidly evolving AI tool landscape.
This module takes learners beyond the fundamentals of AI development into the tools, workflows, and architectures that power advanced agentic systems. It introduces modern coding techniques, the MCP (Model Context Protocol) ecosystem, and practical ways to integrate PRD (Product Requirements Document)-driven planning into AI projects.
Participants will learn how to leverage frameworks like Taskmaster and GitHub’s Spec Kit to streamline project design, automate planning, and accelerate development. The module concludes with a hands-on exercise where learners create their first functional AI-powered solution using these advanced techniques.
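For orientation, the sketch below shows what a tiny MCP tool server can look like, assuming the official MCP Python SDK (`mcp` package) and its FastMCP interface; consult the SDK documentation for the current API. The tool itself is a trivial illustration, not part of any course project.

```python
# A minimal MCP (Model Context Protocol) tool server sketch using FastMCP.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

if __name__ == "__main__":
    mcp.run()  # serve the tool over stdio so an MCP-aware client can call it
```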
This module explores how to tailor large language models (LLMs) and retrieval-augmented generation (RAG) systems to specific use cases. Learners will compare key customization approaches — prompt engineering, fine-tuning, and RAG — and discover how each can optimize AI model performance.
Through detailed lessons and hands-on practice, participants will learn to design and build robust RAG pipelines, evaluate emerging small language models like Phi-3, and understand the evolving trends in fine-tuning for 2025.
The module culminates in a practical project where learners create and submit their own RAG-based AI assistant.
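To preview the final project, here is a toy RAG loop: retrieve the most relevant documents, then place them in the prompt as context. Real pipelines use vector embeddings and a vector store rather than bag-of-words similarity, and `call_llm` is a hypothetical stand-in for your model endpoint.

```python
# A toy retrieval-augmented generation (RAG) pipeline sketch.
import math
from collections import Counter

DOCS = [
    "Phi-3 is a family of small language models.",
    "Retrieval-augmented generation grounds answers in retrieved documents.",
    "Fine-tuning adapts a pretrained model with task-specific data.",
]

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    qv = vectorize(query)
    ranked = sorted(DOCS, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    return "(model answer would appear here)"  # hypothetical placeholder

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)

print(answer("What does retrieval-augmented generation do?"))
```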