AI Acumen Learning Journey (Module 5)

This module explores how to tailor large language models (LLMs) and retrieval-augmented generation (RAG) systems to specific use cases. Learners will compare key customization approaches — prompt engineering, fine-tuning, and RAG — and discover how each can optimize AI model performance.
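As a taste of what that comparison looks like in practice, the short Python sketch below shows the same question handled three ways. It is only illustrative: the call_llm helper is a hypothetical placeholder, not an API from the course materials.

# A minimal sketch contrasting the three customization approaches.
# `call_llm` is a hypothetical placeholder for whatever model API the course uses.

def call_llm(prompt: str) -> str:
    """Placeholder: stands in for a real LLM call (e.g. a hosted chat API)."""
    return f"<model response to a {len(prompt)}-character prompt>"

question = "What is our refund policy for digital products?"

# 1. Prompt engineering: change only the instructions given to an unchanged model.
engineered_prompt = (
    "You are a concise support agent. Answer in two sentences.\n"
    f"Question: {question}"
)
print(call_llm(engineered_prompt))

# 2. RAG: retrieve relevant documents at query time and inject them as context.
retrieved_context = "Policy doc: Digital products can be refunded within 14 days."
rag_prompt = (
    f"Answer using only the context below.\nContext: {retrieved_context}\n"
    f"Question: {question}"
)
print(call_llm(rag_prompt))

# 3. Fine-tuning: no prompt tricks at inference time; instead, labelled examples
#    like this one are used offline to update the model's weights.
training_example = {
    "prompt": question,
    "completion": "Digital products can be refunded within 14 days of purchase.",
}
print(training_example)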

Through detailed lessons and hands-on practice, participants will learn to design and build robust RAG pipelines, evaluate emerging small language models like Phi-3, and understand the evolving trends in fine-tuning for 2025. 

The module culminates in a practical project where learners create and submit their own RAG-based AI assistant.
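For orientation, the following minimal sketch shows the general shape such an assistant can take. It assumes a toy keyword-overlap retriever and a placeholder generate_answer step in place of a real embedding store and LLM call, and it is not the exercise's reference solution.

# A minimal, dependency-free sketch of a RAG-style assistant.
# Real submissions would typically swap the keyword scorer for embeddings plus a
# vector store, and `generate_answer` for an actual LLM call; both are placeholders.

DOCUMENTS = [
    "Phi-3 is a family of small language models released by Microsoft.",
    "A RAG pipeline retrieves relevant documents and passes them to the model as context.",
    "Fine-tuning updates model weights on task-specific examples.",
]

def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Score documents by word overlap with the query and return the best matches."""
    query_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(query_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def generate_answer(question: str, context: list[str]) -> str:
    """Placeholder generation step: a real assistant would call an LLM here."""
    prompt = "Context:\n" + "\n".join(context) + f"\n\nQuestion: {question}\nAnswer:"
    return f"[LLM would answer based on a {len(prompt)}-character prompt]"

if __name__ == "__main__":
    question = "What does a RAG pipeline do?"
    context = retrieve(question, DOCUMENTS)
    print(generate_answer(question, context))

In practice, the retrieval step would rank chunks by vector similarity over embeddings, and the assembled prompt would be routed to an actual model rather than a stub.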

Responsible: Lahoucine Ben Brahim
Last updated: 02-10-2025
Duration: 4 hours 48 minutes
Members: 2
Basic
  • Module 5: Customizing AI & Building Bespoke Solutions
    8 lessons · 4 hr 8 min
    • RAG vs Fine-Tuning vs Prompt Engineering: Optimizing AI Models
    • RAG vs. fine-tuning vs. prompt engineering
    • The fundamentals of building a robust RAG pipeline
    • Building a Robust RAG Pipeline: The Complete Guide for 2025
    • Tiny but mighty: The Phi-3 small language models with big potential
    • Boring is good
    • LLM Fine-Tuning in 2025
    • Guide to LLM Fine-Tuning in 2025
  • Practical Exercise: Build Your Own RAG AI Assistant
    2 lessons · 40 min
    • Practical Exercise: Build Your Own RAG AI Assistant
    • Submit your own RAG AI Assistant