Grow your AI career with foundational specializations and skill-specific short courses taught by leaders in the field.
Build a solid data analytics foundation using industry standard and AI tools to extract insights, make decisions, and solve real-world business problems.

Build neural networks (CNNs, RNNs, LSTMs, Transformers) and apply them to speech recognition, NLP, and more using Python and TensorFlow.
Learn practical prompt engineering and pair programming techniques with LLMs to write, test, and improve your code.

Learn foundational AI concepts through an intuitive visual approach, then write the code needed to implement the algorithms and math behind ML.
Explore the fundamental mathematics toolkit of machine learning: calculus, linear algebra, statistics, and probability.
Learn the core principles of building, optimizing, and deploying deep learning models using PyTorch.
Learn to use the TensorFlow API, and gain best practices and hands-on experience in one of the most in-demand deep learning frameworks.

Learn the fundamentals of prompt engineering for ChatGPT: effective prompting, and how to use LLMs for summarizing, inferring, transforming, and expanding.

Use the powerful and extensible LangChain framework, working with prompts, parsing, memory, chains, question answering, and agents.

Learn and build diffusion models from the ground up, understanding each step. Learn about diffusion models in use today and implement algorithms to speed up sampling.

Learn to break down complex tasks, automate workflows, chain LLM calls, and get better outputs from LLMs. Evaluate LLM inputs and outputs for safety and relevance.

Create a chatbot with LangChain to interface with your private data and documents. Learn from LangChain creator, Harrison Chase.

Create and demo machine learning applications quickly. Share your app with teammates and beta testers on Hugging Face Spaces.

Learn MLOps tools for managing, versioning, debugging, and experimenting in your ML workflow.

Learn to use LLMs to enhance search and summarize results, using Cohere Rerank and embeddings for dense retrieval.

Learn Microsoft's open-source orchestrator, Semantic Kernel, and use LLM building blocks such as memory, connectors, chains, and planners in your apps.

Learn how to accelerate the application development process with text embeddings that capture sentence and paragraph meaning.
Learn how to prompt an LLM to help improve, debug, understand, and document your code. Use LLMs to simplify your code and enhance productivity.

Learn about the latest advancements in LLM APIs and use LangChain Expression Language (LCEL) to compose and customize chains and agents.

Design and execute real-world applications of vector databases. Build efficient, practical applications, including hybrid and multilingual searches.

Learn how to evaluate the safety and security of your LLM applications and protect against risks. Monitor and enhance security measures to safeguard your apps.

Learn advanced RAG retrieval methods like sentence-window and auto-merging that outperform baselines, and evaluate and iterate on your pipeline's performance.

Get an introduction to tuning and evaluating LLMs using Reinforcement Learning from Human Feedback (RLHF) and fine-tune the Llama 2 model.

Learn advanced retrieval techniques to improve the relevance of your results. Learn to recognize poor query results and use LLMs to improve queries.
Expand your toolkit with LangChain.js, a JavaScript framework for building with LLMs. Understand the fundamentals of using LangChain to orchestrate and chain modules.

Learn LLMOps best practices as you design and automate steps to fine-tune and deploy an LLM for a specific task.

Learn how to create an automated CI pipeline to evaluate your LLM applications on every change, for faster and safer development.

Learn to build six applications powered by vector databases, including semantic search, retrieval augmented generation (RAG), and anomaly detection.

Learn how to deploy an LLM-based application into production using serverless technology. Learn to prompt and customize LLM responses with Amazon Bedrock.

Learn best practices for prompting and selecting among Meta Llama 2 & 3 models. Interact with Meta Llama 2 Chat, Code Llama, and Llama Guard models.

Learn how to easily build AI applications using open-source models and Hugging Face tools. Find and filter open-source models on Hugging Face Hub.

Learn how to build and use knowledge graph systems to improve your retrieval augmented generation applications. Use Neo4j's query language, Cypher, to manage and retrieve data.

Understand how LLMs predict the next token and how techniques like KV caching can speed up text generation. Write code to serve LLM applications efficiently to multiple users.

Build a full-stack web application that uses RAG capabilities to chat with your data. Learn to build a RAG application in JavaScript, using an intelligent agent to answer queries.

Learn how to make safer LLM apps through red teaming. Learn to identify and evaluate vulnerabilities in large language model (LLM) applications.

Improve your RAG system to retrieve diverse data types. Learn to extract and normalize content from a wide variety of document types, such as PDFs, PowerPoints, and HTML files.

Learn how to quantize any open-source model. Learn to compress models with the Hugging Face Transformers library and the Quanto library.

Explore Mistral's open-source and commercial models, and leverage Mistral's JSON mode to generate structured LLM responses. Use Mistral's API to call user-defined functions for enhanced LLM capabilities.

Learn prompt engineering for vision models using Stable Diffusion, and advanced techniques like object detection and in-painting.

Customize model compression with advanced quantization techniques. Try out different variants of Linear Quantization, including symmetric vs. asymmetric mode, and different granularities.

Build autonomous agents that intelligently navigate and analyze your data. Learn to develop agentic RAG systems using LlamaIndex, enabling powerful document Q&A and summarization. Gain valuable skills in guiding agent reasoning and debugging.

Build smarter search and RAG applications for multimodal retrieval and generation.

Automate business workflows with multi-AI agent systems. Exceed the performance of prompting a single LLM by designing and prompting a team of AI agents through natural language.

Deploy AI for edge devices and smartphones. Learn model conversion, quantization, and how to modify for deployment on diverse devices.

Use the AutoGen framework to build multi-agent systems with diverse roles and capabilities for implementing complex AI applications.

Build agentic AI workflows using LangChain's LangGraph and Tavily's agentic search.

Interact with tabular data and SQL databases using natural language, enabling more efficient and accessible data analysis.

Learn to apply function-calling to expand LLM and agent application capabilities.

Train your machine learning models using cleaner energy sources.

Optimize the efficiency, security, query processing speed, and cost of your RAG applications.

Learn the essential steps to pretrain a large language model from scratch.

Build and fine-tune LLMs across distributed data using a federated learning framework for better privacy.

Learn Python programming with AI assistance. Gain skills writing, testing, and debugging code efficiently, and create real-world AI applications.

Learn how to securely fine-tune large language models (LLMs) with private data using federated methods. Enhance data privacy, minimize the risk of data leakage, and optimize efficiency through Parameter-Efficient Fine-Tuning (PEFT) and Differential Privacy.

Learn how to build embedding models and how to create effective semantic retrieval systems.

Systematically improve the accuracy of LLM applications with evaluation, prompting, and memory tuning.

Learn a flexible framework to build a variety of complex AI applications.

Learn best practices for multimodal prompting using Google’s Gemini model.
Build an interactive system for querying video content using multimodal AI.

Build faster and more relevant vector search for your LLM applications.

Try out the features of the new Llama 3.2 models to build AI applications with multimodality.

Efficiently handle time-varying workloads with serverless agentic workflows and responsible agents built on Amazon Bedrock.

Build agents that collaborate to solve complex business tasks.

Build systems with MemGPT agents that can autonomously manage their memory.

Move your LLM-powered applications beyond proof-of-concept and into production with the added control of guardrails.

Learn to build with LLMs by creating a fun interactive game from scratch.

Learn to use OpenAI Canvas to write, code, and create more effectively in collaboration with AI.

Understand the transformer architecture that powers LLMs to use them more effectively.

Build LLM apps that can process very long documents using the Jamba model.

Learn how to use and prompt OpenAI's o1 model for complex reasoning tasks.

Learn how an AI Assistant is built to use and accomplish tasks on computers.

Understand and implement the attention mechanism, a key element of transformer-based LLMs, using PyTorch.

Learn generative AI's capabilities and limitations. Get an overview of real-world examples and its impact on business and society to build effective strategies.

Learn how to systematically evaluate, improve, and iterate on AI agents using structured assessments.

Learn about AI technologies and how to use them. Examine AI's societal impact, and learn to navigate this technological shift.

Build an event-driven agentic workflow to process documents and fill forms using RAG and human-in-the-loop feedback.

Learn to build, debug, and deploy applications with an Agentic AI-powered integrated development environment.

Learn to build AI agents with long-term memory with LangGraph, using LangMem for memory management.

Learn how to generate structured outputs to power production-ready LLM software applications.

Design, build, and deploy apps with an AI coding agent in an integrated web development environment.

Build agents that navigate and interact with websites, and learn how to make them more reliable.

Build agents that write and execute code to perform complex tasks, using Hugging Face’s smolagents.
Build responsive, scalable, and human-like AI voice applications.

Build AI apps that access tools, data, and prompts using the Model Context Protocol.

Build, debug, and optimize AI agents using DSPy and MLflow.

Build multimodal and long-context GenAI applications using Llama 4 open models, API, and Llama tools.

Build agents that communicate and collaborate across different frameworks using ACP.

Build practical multi-agent systems that collaborate, use tools and memory, and scale reliably to production.

Build reliable LLM applications with structured outputs and validated data using Pydantic.
Gain fundamental understanding and the practical knowledge to develop production-ready RAG applications, from architecture to deployment and evaluation.

Explore, build, and refine codebases with Claude Code.

Build a multi-agent system that plans, designs, and constructs a knowledge graph.
Construct a knowledge graph and use it to enable your AI agent to find and call the right APIs in the right order.

Build an LLM app that uses tools from the Box MCP server to discover Box files and extract text from them. Transform it into a multi-agent system that communicates using A2A.
Build agentic AI systems that take action through iterative, multi-step workflows, in this course taught by Andrew Ng.
Integrate data governance into your agent's workflow to ensure it handles data safely, securely, and accurately.
Build real-time voice AI agents, from simple to multi-agent podcast systems, using Google’s Agent Development Kit.
Learn to code with AI in Jupyter notebooks. Use Jupyter AI to generate code, get explanations, and analyze data.
Speed up and reduce the costs of your AI agents by implementing semantic caching that reuses responses based on meaning rather than exact text.
Build AI agents that write and execute code to accomplish tasks, running safely in sandboxed cloud environments that protect your systems from untrusted code.
Build advanced retrieval systems that represent images with multiple vectors, enabling fine-grained matching between text queries and visual content for accurate multi-modal search.
Turn proof-of-concept agent demos into production-ready systems using observability, evaluation, and deployment tools from Nvidia's NeMo Agent Toolkit.
If you've never written code before, this course is for you. In less than 30 minutes, you'll learn to describe an idea in words and let AI transform it into an app for you.
Build agentic systems to parse documents and extract information grounded in visual components like charts, tables, and forms.
Build real-world applications from the command line using Gemini CLI, Google's open-source agentic coding assistant that coordinates local tools and cloud services to automate coding and creative workflows.
