AI Acumen Learning Journey - Module 5

This course explores how to tailor large language models (LLMs) and retrieval-augmented generation (RAG) systems to specific use cases. Learners will compare key customization approaches — prompt engineering, fine-tuning, and RAG — and discover how each can optimize AI model performance.

Through detailed lessons and hands-on practice, participants will learn to design and build robust RAG pipelines, evaluate emerging small language models such as Phi-3, and understand how LLM fine-tuning is evolving in 2025.

The module culminates in a practical project where learners create and submit their own RAG-based AI assistant; a minimal starter sketch of the retrieve-then-generate pattern appears after the course outline below.

Responsible: Lahoucine Ben Brahim
Last Update: 02/10/2025
Completion Time: 4 hours 48 minutes
Members: 2
Level: Basic
  • Module 5: Customizing AI & Building Bespoke Solutions
    8 Lessons · 4 hrs 8 mins
    • RAG vs Fine-Tuning vs Prompt Engineering: Optimizing AI Models
    • RAG vs. fine-tuning vs. prompt engineering
    • The fundamentals of building a robust RAG pipeline
    • Building a Robust RAG Pipeline: The Complete Guide for 2025
    • Tiny but mighty: The Phi-3 small language models with big potential
    • Boring is good
    • LLM Fine-Tuning in 2025
    • Guide to LLM Fine-Tuning in 2025
  • Practical Exercise: Build Your Own RAG AI Assistant
    2 Lessons · 40 mins
    • Practical Exercise: Build Your Own RAG AI Assistant
    • Submit your own RAG AI Assistant
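
As a starting point for the practical exercise, below is a minimal, illustrative sketch of the retrieve-then-generate pattern a RAG assistant follows. It is not part of the course materials: the toy word-overlap retriever stands in for a real embedding model and vector store, the final LLM call is left to whichever model or API the learner chooses, and all names (documents, retrieve, build_prompt) are hypothetical.

```python
# Hypothetical, minimal RAG sketch: index a few passages, retrieve the most
# relevant ones by word overlap, and assemble a grounded prompt for an LLM.
from collections import Counter

documents = [
    "RAG augments a language model with passages retrieved from a knowledge base.",
    "Fine-tuning updates model weights using task-specific training examples.",
    "Prompt engineering shapes model behaviour through instructions and examples.",
]

def tokenize(text: str) -> Counter:
    """Tiny bag-of-words tokenizer, standing in for a real embedding model."""
    return Counter(text.lower().split())

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query and return the top k."""
    q = tokenize(query)
    return sorted(docs, key=lambda d: sum((tokenize(d) & q).values()), reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Combine retrieved passages and the user question into one grounded prompt."""
    joined = "\n".join(f"- {passage}" for passage in context)
    return f"Answer using only the context below.\n\nContext:\n{joined}\n\nQuestion: {query}"

if __name__ == "__main__":
    question = "How does RAG differ from fine-tuning?"
    prompt = build_prompt(question, retrieve(question, documents))
    print(prompt)  # send this prompt to the LLM or API of your choice
```

A production pipeline would typically replace the word-overlap scorer with dense embeddings and a vector database, and add steps such as chunking, re-ranking, and grounding checks.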