Intro to AI Agents: Best industry practices for modern AI development
Data Sanity invites you to join the course "Intro to AI Agents" — a series of 8 sessions that combine lectures and workshops on AI agent development. The course covers LLM and AI agent fundamentals, modern agentic ecosystems (LangChain, LangGraph, and more), and practical exercises for building efficient AI-powered tools. It also includes a collaborative take-home project — a great opportunity to put your newly acquired skills into practice under the supervision of an experienced mentor.
Requirements: Basic knowledge of Python and Jupyter notebooks (we'll use Google Colab). Some prior experience with LLMs and API interfaces is helpful but not required.
Registration: The course is open for both university students and external participants, including working professionals. It is 100% free, but the number of seats is limited. To apply, please complete the sign-up form by March 1, 23:59 CET.
Selection: If the number of registrations exceeds the classroom capacity, participants will be selected based on the information provided in the sign-up form.
Partners: This course is developed in collaboration with AI Future Hub, and hosted by the Faculty of Mathematics (Matematički fakultet), University of Belgrade.
Follow us on social media to stay updated on this course and our other events.
Location: Belgrade, Faculty of Mathematics, Studentski trg 16
Deadline: March 1, 23:59 CET
Course Overview
Learning Objectives
The aim of the course is to give students a systematic understanding of how modern LLM-based tools are developed: from the underlying principles of their operation and basic prompt engineering techniques to the design, implementation, and evaluation of AI-powered applications. By the end of the course, participants will be able to:
- > Design efficient prompts for LLMs
- > Build RAG pipelines and LLM-based agents
- > Prototype complex AI pipelines with LangChain and LangGraph
- > Build, evaluate, optimize, and deploy agentic applications
Assessment and Evaluation
The course grade is based on the following components:
- > A short theoretical quiz at the beginning of each session
- > Homework assignments
- > Collaborative project
Curriculum
- Session 1 (March 21). Introduction to AI & LLM
Lecture: Course overview. Fundamentals of AI, Machine Learning and Deep Learning. LLM intro: tokenization, architecture, pretext tasks. Training phases: pre-training, supervised fine-tuning, RLHF. Core properties of LLMs: hallucinations, stochasticity, non-human reasoning patterns, limitations. LLM evaluation and review of progress.
Workshop: Environment setup. First API call to an LLM. Completion APIs. Inference parameters. Understanding logits and token probabilities. Simple prompting techniques.
- Session 2 (March 28). Prompt Engineering
Lecture: Evolution of Prompt Engineering. Core techniques: zero-shot, few-shot, Chain-of-Thought. Structured output. Reasoning vs non-reasoning models. Decline of pure Prompt Engineering. Context Engineering fundamentals. Schema-Guided Reasoning. Prompt Engineering vs Context Engineering.
Workshop: Structured output with Pydantic. Advanced prompting techniques. Schema-Guided Reasoning. Automation of complex processes with a single prompt. Introduction to LangChain: simple chains and output parsers.
- Session 3 (April 4). Retrieval-Augmented Generation (RAG)
Lecture: RAG fundamentals. Embeddings and chunking strategies. Vector databases. Basic RAG pipeline. Comparison with classical information retrieval (BM25). Advanced RAG techniques. Overview of open-source and closed-source RAG ecosystems.
Workshop: Document preprocessing tools and vector databases in LangChain. Basic RAG pipelines in LangChain. Introduction to LangGraph. Basic RAG pipelines in LangGraph. Advanced RAG techniques.
- Session 4 (April 18). AI Workflows & Introduction to AI Agents
Lecture: AI workflows fundamentals: orchestration patterns, comparison to SGR. AI agents fundamentals: core concept, memory and tools. Use cases and limitations. AI workflows vs AI agents.
Workshop: AI workflows in LangGraph. Explicit State Modeling. Implementation of orchestration patterns. Orchestration patterns in practice.
- Session 5 (April 25). AI Agents
Lecture: AI agent architectures: ReAct, Plan-then-Act. Advanced tool use. Introduction to Model Context Protocol (MCP). Agent Skills. Open-source frameworks for AI agents vs commercial SDKs (OpenAI Agents, Claude Agent SDK, Google ADK). Multi-agent systems.
Workshop: Building a ReAct agent in LangGraph. Adding tools and integrating MCP servers. Development of Agent Skills. Comparing open-source implementation with commercial SDK approaches.
Additional material: Building APIs (FastAPI) and GUIs (Streamlit, Gradio).
- Session 6 (May 9). Testing and Observability in LLM Systems
Lecture: Evaluation methodologies for LLM-based applications. Common metrics. Offline vs Online evaluation. LLM-as-a-judge. Evaluation of AI agents. Guardrails. Overview of evaluation frameworks (LangSmith, Weave, DeepEval, LangFuse, etc.).
Workshop: Building an evaluation pipeline. Writing automated tests for RAG and agent systems. Logging, tracing, and monitoring frameworks.
- Session 7 (May 16). System Design, Production & Deployment
Lecture: System design for LLM-based applications. LLM as a part of software. Cost management and scalability. Local LLMs vs. Cloud LLMs. Ollama vs. vLLM. Deployment of LLM-based applications. Existing cloud solutions.
Workshop: Local Inference Stack. Deployment to cloud. Model Abstraction Layer. Deployment demo with observability.
- Session 8 (May 23). Project Presentations & Wrap-up
Presentation of collaborative projects. Final grading and course wrap-up.
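As a small taste of the material: Session 3 compares RAG retrieval with classical information retrieval, and the classical baseline (Okapi BM25) fits in a few lines of plain Python. This is an illustrative sketch only, with naive whitespace tokenization, the standard k1/b defaults, and invented example documents; it is not the course's reference code.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each document against the query with Okapi BM25."""
    tokenized = [d.lower().split() for d in docs]  # naive tokenization
    N = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / N  # average document length
    # Document frequency: in how many documents each term appears
    df = Counter()
    for d in tokenized:
        for term in set(d):
            df[term] += 1
    scores = []
    for d in tokenized:
        tf = Counter(d)  # term frequency within this document
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log((N - df[term] + 0.5) / (df[term] + 0.5) + 1)
            norm = tf[term] + k1 * (1 - b + b * len(d) / avgdl)
            score += idf * tf[term] * (k1 + 1) / norm
        scores.append(score)
    return scores

docs = [
    "retrieval augmented generation combines search with an llm",
    "vector databases store embeddings for semantic search",
    "large language models can hallucinate facts",
]
scores = bm25_scores("semantic search embeddings", docs)
best = max(range(len(docs)), key=scores.__getitem__)
print(best)  # → 1 (the document mentioning embeddings and semantic search)
```

In the sessions, this lexical baseline is contrasted with embedding-based retrieval, where documents are ranked by vector similarity rather than exact term overlap.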