Build with Andrew
If you've never written code before, this course is for you. In less than 30 minutes, you'll learn to describe an idea in words and let AI transform it into an app for you.
Grow your AI career with foundational specializations and skill-specific short courses taught by leaders in the field.



Build real-world applications from the command line using Gemini CLI, Google's open-source agentic coding assistant that coordinates local tools and cloud services to automate coding and creative workflows.


Build agentic systems to parse documents and extract information grounded in visual components like charts, tables, and forms.


Turn proof-of-concept agent demos into production-ready systems using observability, evaluation, and deployment tools from NVIDIA's NeMo Agent Toolkit.


Build advanced retrieval systems that represent images with multiple vectors, enabling fine-grained matching between text queries and visual content for accurate multi-modal search.


Build AI agents that write and execute code to accomplish tasks, running safely in sandboxed cloud environments that protect your systems from untrusted code.


Speed up and reduce the costs of your AI agents by implementing semantic caching that reuses responses based on meaning rather than exact text.
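The idea behind semantic caching can be sketched in a few lines. This is a toy illustration, not the course's implementation: the character-count "embedding" and the 0.95 threshold are stand-ins for a learned sentence-embedding model and a tuned similarity cutoff.

```python
import math

def embed(text):
    # Toy bag-of-letters embedding; a real cache would use a
    # learned sentence-embedding model instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    """Reuse a stored response when a new query is close in meaning,
    not just identical in text."""

    def __init__(self, threshold=0.95):
        self.threshold = threshold
        self.entries = []  # list of (embedding, response) pairs

    def get(self, query):
        q = embed(query)
        best, best_sim = None, 0.0
        for emb, response in self.entries:
            sim = cosine(q, emb)
            if sim > best_sim:
                best, best_sim = response, sim
        return best if best_sim >= self.threshold else None

    def put(self, query, response):
        self.entries.append((embed(query), response))
```

A near-duplicate query like "What is the capital of France?" then hits the cached answer for "what is the capital of France" without a second model call, while an unrelated query falls through to the model.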


Learn to code with AI in Jupyter notebooks. Use Jupyter AI to generate code, get explanations, and analyze data.


Integrate data governance into your agent's workflow to ensure it handles data safely, securely, and accurately.

Build real-time voice AI agents, from simple to multi-agent podcast systems, using Google’s Agent Development Kit.


Build an LLM app that uses tools from the Box MCP server to discover Box files and extract text from them. Transform it into a multi-agent system that communicates using A2A.


Construct a knowledge graph and use it to enable your AI agent to find and call the right APIs in the right order.




Build a multi-agent system that plans, designs, and constructs a knowledge graph.


Explore, build, and refine codebases with Claude Code.

Build reliable LLM applications with structured outputs and validated data using Pydantic.
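The core idea, parsing a model's free-form JSON reply into a validated, typed object, can be sketched with the standard library alone; Pydantic automates this kind of check (and far more) with its `BaseModel` classes. The `Invoice` schema and `parse_llm_output` helper below are illustrative, not from the course.

```python
import json
from dataclasses import dataclass, fields

@dataclass
class Invoice:
    vendor: str
    total: float

def parse_llm_output(raw: str) -> Invoice:
    """Validate an LLM's JSON reply against the expected schema,
    failing loudly instead of passing bad data downstream."""
    data = json.loads(raw)
    kwargs = {}
    for f in fields(Invoice):
        if f.name not in data:
            raise ValueError(f"missing field: {f.name}")
        value = data[f.name]
        ok = isinstance(value, f.type) or (f.type is float and isinstance(value, int))
        if not ok:
            raise TypeError(f"{f.name} should be {f.type.__name__}")
        kwargs[f.name] = f.type(value)
    return Invoice(**kwargs)
```

A malformed reply (a missing field, a string where a number belongs) raises immediately, which is the reliability win structured outputs provide.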



Build agents that communicate and collaborate across different frameworks using ACP.


Build multimodal and long-context GenAI applications using Llama 4 open models, API, and Llama tools.


Build, debug, and optimize AI agents using DSPy and MLflow.


Build AI apps that access tools, data, and prompts using the Model Context Protocol.


Build responsive, scalable, and human-like AI voice applications.


Learn how to generate structured outputs to power production-ready LLM software applications.


Build agents that write and execute code to perform complex tasks, using Hugging Face’s smolagents.


Build systems with MemGPT agents that can autonomously manage their memory.


Build agents that navigate and interact with websites, and learn how to make them more reliable.


Design, build, and deploy apps with an AI coding agent in an integrated web development environment.


Learn to build AI agents with long-term memory with LangGraph, using LangMem for memory management.


Build an event-driven agentic workflow to process documents and fill forms using RAG and human-in-the-loop feedback.


Learn to build, debug, and deploy applications with an Agentic AI-powered integrated development environment.


Understand and implement the attention mechanism, a key element of transformer-based LLMs, using PyTorch.
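The course builds this in PyTorch; as a dependency-free sketch, scaled dot-product attention is just softmax(QK^T / sqrt(d)) applied to V, shown here with plain lists standing in for tensors.

```python
import math

def softmax(row):
    m = max(row)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

def matmul(a, b):
    # a: n x k, b: k x m -> n x m
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(col) for col in zip(*a)]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = len(K[0])
    scores = matmul(Q, transpose(K))
    weights = [softmax([s / math.sqrt(d) for s in row]) for row in scores]
    return matmul(weights, V)
```

Each output row is a convex combination of the value rows, weighted by how strongly the query matches each key, which is the intuition the course develops at tensor scale.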


Learn how to systematically evaluate, improve, and iterate on AI agents using structured assessments.


Understand the transformer architecture that powers LLMs to use them more effectively.


Learn how an AI assistant is built to use computers and accomplish tasks on them.




Build LLM apps that can process very long documents using the Jamba model.


Learn how to use and prompt OpenAI's o1 model for complex reasoning tasks.


Learn to use OpenAI Canvas to write, code, and create more effectively in collaboration with AI.



Learn to build with LLMs by creating a fun interactive game from scratch.


Move your LLM-powered applications beyond proof-of-concept and into production with the added control of guardrails.


Build agents that collaborate to solve complex business tasks.


Efficiently handle time-varying workloads with serverless agentic workflows and responsible agents built on Amazon Bedrock.


Try out the features of the new Llama 3.2 models to build AI applications with multimodality.


Build faster and more relevant vector search for your LLM applications.


Learn best practices for multimodal prompting using Google’s Gemini model.


Learn a flexible framework to build a variety of complex AI applications.



Systematically improve the accuracy of LLM applications with evaluation, prompting, and memory tuning.


Learn how to build embedding models and how to create effective semantic retrieval systems.


Learn how to securely fine-tune large language models (LLMs) on private data using federated methods, enhancing privacy, minimizing the risk of data leakage, and improving efficiency through Parameter-Efficient Fine-Tuning (PEFT) and Differential Privacy.


Build and fine-tune LLMs across distributed data using a federated learning framework for better privacy.


Learn the essential steps to pretrain a large language model from scratch.


Optimize the efficiency, security, query processing speed, and cost of your RAG applications.


Train your machine learning models using cleaner energy sources.


Learn to apply function-calling to expand LLM and agent application capabilities.
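The pattern behind function calling is small: register tools, let the model describe a call as JSON, then dispatch it. The `{"name": ..., "arguments": {...}}` shape below is an assumption for illustration (real provider APIs use similar but provider-specific structures), and `get_weather` is a stub.

```python
import json

TOOLS = {}

def tool(fn):
    """Register a function the LLM is allowed to call."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_weather(city: str) -> str:
    # Stub tool; a real one would query a weather API.
    return f"Sunny in {city}"

def dispatch(llm_reply: str) -> str:
    """Execute the function call described in the model's JSON reply."""
    call = json.loads(llm_reply)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])
```

The application, not the model, runs the function; the model only chooses which tool to invoke and with what arguments.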


Interact with tabular data and SQL databases using natural language, enabling more efficient and accessible data analysis.
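The execution half of that loop can be sketched with the standard library's `sqlite3`: the model translates a question into SQL, and the app runs it. The `sales` table, demo rows, and `run_generated_sql` helper are illustrative; a real app should validate generated SQL (read-only, allow-listed tables) before executing it.

```python
import sqlite3

def run_generated_sql(sql: str):
    """Execute SQL (as an LLM might generate it) against a demo table."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)",
                     [("north", 120.0), ("south", 80.0), ("north", 50.0)])
    rows = conn.execute(sql).fetchall()
    conn.close()
    return rows

# Question: "What are total sales per region?" -> the model might emit:
generated = "SELECT region, SUM(amount) FROM sales GROUP BY region"
```

Running the generated query returns one aggregated row per region, turning a natural-language question into a concrete answer over tabular data.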



Build agentic AI workflows using LangChain's LangGraph and Tavily's agentic search.



Use the AutoGen framework to build multi-agent systems with diverse roles and capabilities for implementing complex AI applications.


Deploy AI for edge devices and smartphones. Learn model conversion, quantization, and how to modify for deployment on diverse devices.


Automate business workflows with multi-AI agent systems. Exceed the performance of prompting a single LLM by designing and prompting a team of AI agents through natural language.


Build smarter search and RAG applications for multimodal retrieval and generation.


Build autonomous agents that intelligently navigate and analyze your data. Learn to develop agentic RAG systems using LlamaIndex, enabling powerful document Q&A and summarization. Gain valuable skills in guiding agent reasoning and debugging.


Customize model compression with advanced quantization techniques. Try out different variants of Linear Quantization, including symmetric vs. asymmetric mode, and different granularities.
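One of those variants, symmetric linear quantization, fits in a few lines: pick a scale so the largest-magnitude weight maps to the edge of the integer range, round, and dequantize by multiplying back. A per-tensor int8 sketch (real toolkits add per-channel granularity, asymmetric zero-points, and more):

```python
def quantize_symmetric(values, bits=8):
    """Symmetric linear quantization: q = round(x / scale), with the
    scale chosen so max |x| maps to the top of the signed int range."""
    qmax = 2 ** (bits - 1) - 1  # 127 for int8
    scale = max(abs(v) for v in values) / qmax
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]
```

The round-trip error of any value is at most one quantization step (the scale), which is the trade-off quantization makes for a 4x smaller footprint versus float32.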


Learn prompt engineering for vision models using Stable Diffusion, and advanced techniques like object detection and in-painting.


Explore Mistral's open-source and commercial models, and leverage Mistral's JSON mode to generate structured LLM responses. Use Mistral's API to call user-defined functions for enhanced LLM capabilities.


Learn how to quantize any open-source model. Learn to compress models with the Hugging Face Transformers library and the Quanto library.


Improve your RAG system to retrieve diverse data types. Learn to extract and normalize content from a wide variety of document types, such as PDFs, PowerPoints, and HTML files.


Learn how to make safer LLM apps through red teaming. Learn to identify and evaluate vulnerabilities in large language model (LLM) applications.


Build a full-stack web application that uses RAG capabilities to chat with your data. Learn to build a RAG application in JavaScript, using an intelligent agent to answer queries.


Understand how LLMs predict the next token and how techniques like KV caching can speed up text generation. Write code to serve LLM applications efficiently to multiple users.
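Why KV caching helps can be shown by counting work. In the toy model below, `project` stands in for the matrix multiplies a transformer layer performs per token: without a cache, every decoding step reprojects the whole prefix (quadratic total work); with a cache, each step projects only the newest token (linear). The functions and the 0.5 weight are illustrative only.

```python
CALLS = {"count": 0}

def project(emb, weight):
    """Toy per-token projection; the counter tracks how many
    projections are performed in total."""
    CALLS["count"] += 1
    return [x * weight for x in emb]

def keys_no_cache(tokens):
    # Without caching: each decoding step recomputes keys for the
    # entire prefix, so total work grows quadratically.
    for t in range(1, len(tokens) + 1):
        ks = [project(tok, 0.5) for tok in tokens[:t]]
    return ks

def keys_with_cache(tokens):
    # With a KV cache: each step projects only the newest token.
    cache = []
    for tok in tokens:
        cache.append(project(tok, 0.5))
    return cache
```

Both paths produce identical keys; for a 4-token sequence the uncached path performs 10 projections versus 4 with the cache, and the gap widens with sequence length.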


Learn how to build and use knowledge graph systems to improve your retrieval augmented generation applications. Use Neo4j's query language Cypher to manage and retrieve data.


Learn how to easily build AI applications using open-source models and Hugging Face tools. Find and filter open-source models on Hugging Face Hub.


Learn best practices for prompting and selecting among Meta Llama 2 & 3 models. Interact with Meta Llama 2 Chat, Code Llama, and Llama Guard models.


Learn how to deploy an LLM-based application into production using serverless technology. Learn to prompt and customize LLM responses with Amazon Bedrock.


Learn to build six applications powered by vector databases, including semantic search, retrieval augmented generation (RAG), and anomaly detection.


Learn how to create an automated CI pipeline to evaluate your LLM applications on every change, for faster and safer development.


Learn LLMOps best practices as you design and automate steps to fine-tune and deploy an LLM for a specific task.


Expand your toolkit with LangChain.js, a JavaScript framework for building with LLMs. Understand the fundamentals of using LangChain to orchestrate and chain modules.


Learn advanced retrieval techniques to improve the relevancy of retrieved results. Learn to recognize poor query results and use LLMs to improve queries.


Get an introduction to tuning and evaluating LLMs using Reinforcement Learning from Human Feedback (RLHF) and fine-tune the Llama 2 model.



Learn advanced RAG retrieval methods like sentence-window and auto-merging that outperform baselines, and evaluate and iterate on your pipeline's performance.


Learn how to evaluate the safety and security of your LLM applications and protect against risks. Monitor and enhance security measures to safeguard your apps.


Design and execute real-world applications of vector databases. Build efficient, practical applications, including hybrid and multilingual searches.


Learn about the latest advancements in LLM APIs and use LangChain Expression Language (LCEL) to compose and customize chains and agents.


Learn how to prompt an LLM to help improve, debug, understand, and document your code. Use LLMs to simplify your code and enhance productivity.


Learn how to accelerate the application development process with text embeddings for sentence and paragraph meaning.


Learn Microsoft's open-source orchestrator, Semantic Kernel, and use LLM building blocks such as memory, connectors, chains, and planners in your apps.



Learn to use LLMs to enhance search and summarize results, using Cohere Rerank and embeddings for dense retrieval.


Learn MLOps tools for managing, versioning, debugging, and experimenting in your ML workflow.


Create and demo machine learning applications quickly. Share your app with teammates and beta testers on Hugging Face Spaces.


Create a chatbot with LangChain to interface with your private data and documents. Learn from LangChain creator, Harrison Chase.


Learn to break down complex tasks, automate workflows, chain LLM calls, and get better outputs from LLMs. Evaluate LLM inputs and outputs for safety and relevance.

Learn and build diffusion models from the ground up, understanding each step. Learn about diffusion models in use today and implement algorithms to speed up sampling.


Use the powerful and extensible LangChain framework, using prompts, parsing, memory, chains, question answering, and agents.


Learn the fundamentals of prompt engineering for ChatGPT. Learn effective prompting, and how to use LLMs for summarizing, inferring, transforming, and expanding.