What are agents?
At Anthropic, we categorize all these variations as agentic systems, but draw an important architectural distinction between workflows and agents:
- Workflows are systems where LLMs and tools are orchestrated through predefined code paths.
- Agents, on the other hand, are systems where LLMs dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks.
When (and when not) to use agents
Agentic systems often trade latency and cost for better task performance, and you should consider when this tradeoff makes sense.
- When more complexity is warranted, workflows offer predictability and consistency for well-defined tasks.
- Agents are the better option when flexibility and model-driven decision-making are needed at scale.
- For many applications, however, optimizing single LLM calls with retrieval and in-context examples is usually enough.
When and how to use frameworks
There are many frameworks that make agentic systems easier to implement, including:
- LangGraph from LangChain;
- Amazon Bedrock's AI Agent framework;
- Rivet, a drag and drop GUI LLM workflow builder; and
- Vellum, another GUI tool for building and testing complex workflows.
We suggest that developers start by using LLM APIs directly: many patterns can be implemented in a few lines of code. If you use a framework, ensure you understand the underlying code.
Building blocks, workflows, and agents
Building block: The augmented LLM
An LLM enhanced with augmentations such as retrieval, tools, and memory. Our current models can actively use these capabilities: generating their own search queries, selecting appropriate tools, and determining what information to retain.
We suggest focusing on two key aspects of the implementation:
- tailoring these capabilities to your specific use case
- ensuring they provide an easy, well-documented interface for your LLM
The Model Context Protocol allows developers to integrate with a growing ecosystem of third-party tools through a simple client implementation.
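The augmentation idea can be sketched in a few lines. This is a minimal illustration, not a real implementation: `call_llm` is a hypothetical stand-in for a model API, and `retrieve` is a stubbed retrieval step over a toy knowledge base.

```python
def call_llm(prompt: str) -> str:
    # Stub: a real implementation would call a model API here.
    if "capital" in prompt and "France: Paris" in prompt:
        return "Paris"
    return "unknown"

def retrieve(query: str) -> str:
    # Hypothetical retrieval augmentation: look up facts relevant to the query.
    kb = {"capital of France": "France: Paris"}
    return next((v for k, v in kb.items() if k in query), "")

def augmented_answer(question: str) -> str:
    context = retrieve(question)  # the augmentation step
    return call_llm(f"Context: {context}\nQuestion: {question}")
```

The point is the shape: the model call is wrapped so that retrieval (and, in a fuller version, tools and memory) feed into the prompt through one well-documented interface.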
Workflow: Prompt chaining
This workflow is ideal for situations where the task can be easily and cleanly decomposed into fixed subtasks. The main goal is to trade off latency for higher accuracy, by making each LLM call an easier task.
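A chaining sketch, with a programmatic gate between steps. `call_llm` is again a deterministic stub so the example runs; the structure (output of one call becomes input to the next, with an early-exit check) is what the pattern prescribes.

```python
def call_llm(prompt: str) -> str:
    # Stub: echoes the part after the colon, uppercased, so the chain is runnable.
    return prompt.split(":", 1)[1].strip().upper()

def chain(task: str) -> str:
    outline = call_llm(f"Draft an outline for: {task}")
    if not outline:  # gate: stop early if an intermediate step failed
        raise ValueError("outline step failed")
    return call_llm(f"Write the final text from this outline: {outline}")
```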
Workflow: Routing
Routing classifies an input and directs it to a specialized followup task. Without this workflow, optimizing for one kind of input can hurt performance on other inputs. Routing works well for complex tasks where there are distinct categories that are better handled separately, and where classification can be handled accurately, either by an LLM or a more traditional classification model/algorithm.
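A routing sketch. The classifier here is a keyword stub; in practice it could be an LLM call or a trained classifier, and each handler would be a prompt specialized for its category.

```python
def classify(query: str) -> str:
    # Stub classifier; a real system might use an LLM or a traditional model.
    return "refund" if "refund" in query.lower() else "general"

HANDLERS = {
    "refund": lambda q: f"[refund flow] {q}",
    "general": lambda q: f"[general flow] {q}",
}

def route(query: str) -> str:
    return HANDLERS[classify(query)](query)
```

Because each handler only ever sees its own category, its prompt can be optimized without hurting the others.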
Workflow: Parallelization
LLMs work simultaneously on a task and have their outputs aggregated programmatically. Parallelization is effective when the divided subtasks can be parallelized for speed, or when multiple perspectives or attempts are needed for higher confidence results. For complex tasks with multiple considerations, LLMs generally perform better when each consideration is handled by a separate LLM call, allowing focused attention on each specific aspect.
- Sectioning: Breaking a task into independent subtasks run in parallel.
- Voting: Running the same task multiple times to get diverse outputs.
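Both variations above can be sketched with a thread pool: sectioning fans independent subtasks out in parallel, while voting runs the same task several times and keeps the majority answer. `call_llm` is a deterministic stub.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def call_llm(prompt: str) -> str:
    # Stub: answers "yes" to safety prompts, otherwise reverses the input.
    return "yes" if "safe" in prompt else prompt[::-1]

def sectioning(subtasks: list[str]) -> list[str]:
    # Sectioning: run independent subtasks in parallel.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(call_llm, subtasks))

def voting(prompt: str, n: int = 3) -> str:
    # Voting: run the same task n times, keep the most common output.
    with ThreadPoolExecutor() as pool:
        votes = list(pool.map(call_llm, [prompt] * n))
    return Counter(votes).most_common(1)[0][0]
```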
Workflow: Orchestrator-workers
A central LLM dynamically breaks down tasks, delegates them to worker LLMs, and synthesizes their results.
The key difference from parallelization is its flexibility: subtasks aren't pre-defined, but are determined by the orchestrator based on the specific input.
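A sketch of that difference: the subtask list is produced at runtime by a planner (stubbed here; in a real system the orchestrator LLM emits it based on the input), then each subtask is delegated and the results are synthesized.

```python
def plan(task: str) -> list[str]:
    # Stub planner; a real orchestrator LLM would decide these dynamically.
    return [f"{task} :: part {i}" for i in range(1, 3)]

def worker(subtask: str) -> str:
    # Stub worker; a real worker would be its own LLM call.
    return f"done({subtask})"

def orchestrate(task: str) -> str:
    results = [worker(s) for s in plan(task)]
    return " | ".join(results)  # synthesis step, stubbed as a join
```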
Workflow: Evaluator-optimizer
One LLM call generates a response while another provides evaluation and feedback in a loop. This workflow is particularly effective when we have clear evaluation criteria and when iterative refinement provides measurable value.
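The generate/evaluate loop can be sketched as follows. Both calls are stubs (the evaluator's criterion is a toy check for the word "sources"); the loop structure, with feedback folded back into the next draft, is the point.

```python
def generate(prompt: str, feedback: str = "") -> str:
    # Stub generator: appends any feedback to the prompt as the new draft.
    return (prompt + " " + feedback).strip()

def evaluate(draft: str) -> tuple[bool, str]:
    # Stub criterion: the draft must mention "sources".
    ok = "sources" in draft
    return ok, "" if ok else "cite your sources"

def refine(prompt: str, max_rounds: int = 3) -> str:
    feedback = ""
    draft = prompt
    for _ in range(max_rounds):
        draft = generate(prompt, feedback)
        ok, feedback = evaluate(draft)
        if ok:
            return draft
    return draft
```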
Agents
Agents are emerging in production as LLMs mature in key capabilities—understanding complex inputs, engaging in reasoning and planning, using tools reliably, and recovering from errors.
Agents can handle sophisticated tasks, but their implementation is often straightforward. They are typically just LLMs using tools based on environment feedback in a loop. It is therefore crucial to design toolsets and their documentation clearly and thoughtfully.
Agents can be used for open-ended problems where it’s difficult or impossible to predict the required number of steps, and where you can’t hardcode a fixed path. You must have some level of trust in its decision-making. Agents’ autonomy makes them ideal for scaling tasks in trusted environments.
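The basic agent loop described above is short in code: the model proposes an action, the environment responds, and the loop repeats until the model signals completion (or a step budget runs out, one way of keeping trust bounded). `call_llm` and the search tool are stubs; the loop shape is what matters.

```python
def tool_search(query: str) -> str:
    # Stub tool; a real tool needs a clear, well-documented interface.
    return f"results for '{query}'"

def call_llm(history: list[str]) -> str:
    # Stub policy: search once, then finish with the last observation.
    if not any(h.startswith("observation:") for h in history):
        return "search: agents"
    return "final: " + history[-1]

def run_agent(task: str, max_steps: int = 5) -> str:
    history = [f"task: {task}"]
    for _ in range(max_steps):
        action = call_llm(history)
        if action.startswith("final:"):
            return action.removeprefix("final: ")
        _, query = action.split(":", 1)  # e.g. "search: agents"
        history.append("observation: " + tool_search(query.strip()))
    return "stopped: step limit reached"
```

Note the hard `max_steps` cap: because the model, not the code, decides the path, a stopping condition is the minimal safeguard for running such a loop autonomously.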
Summary
Success in the LLM space isn’t about building the most sophisticated system. It’s about building the right system for your needs. Start with simple prompts, optimize them with comprehensive evaluation, and add multi-step agentic systems only when simpler solutions fall short.
Try to follow three core principles:
- Maintain simplicity in your agent's design.
- Prioritize transparency by explicitly showing the agent's planning steps.
- Carefully craft your agent-computer interface (ACI) through thorough tool documentation and testing.
- Author: Rendi.W
- Link: https://rendi.fun/article/build-effective-agents
- Notice: This article is licensed under CC BY-NC-SA 4.0; please credit the source when reposting.