
AWS Bedrock AgentCore: A Practical Introduction for Building AI Agents

AWS · Bedrock AgentCore · AI Agents · Strands

Building production-ready AI agents is hard. You need to handle model orchestration, memory persistence, tool integration, authentication, scaling, and deployment — all while keeping costs under control. AWS Bedrock AgentCore aims to solve this by providing a managed platform specifically designed for agentic AI workflows.

At HTX2, we’ve been building on AgentCore since its launch. Here’s a practical introduction based on our real-world experience.

What Is Bedrock AgentCore?

Amazon Bedrock AgentCore is a suite of services designed to accelerate AI agent development, deployment, and management. Unlike general-purpose ML platforms, AgentCore offers specialized infrastructure for agentic workflows:

  • AgentCore Runtime — Serverless deployment and scaling for your agents
  • AgentCore Memory — Persistent short-term and long-term memory
  • AgentCore Gateway — Transform existing APIs into agent tools via MCP
  • AgentCore Code Interpreter — Secure Python execution in isolated sandboxes
  • AgentCore Browser — Managed Chrome browser for web interaction
  • AgentCore Identity — Authentication and access management

The key insight: AgentCore doesn’t force you into a specific AI framework. It works with Strands Agents, LangChain, LangGraph, CrewAI, or even plain Python.

Getting Started: Your First Agent

The fastest path is using the Strands framework with the AgentCore Runtime SDK:

from bedrock_agentcore import BedrockAgentCoreApp
from strands import Agent

app = BedrockAgentCoreApp()
agent = Agent(model="us.anthropic.claude-3-7-sonnet-20250219-v1:0")

@app.entrypoint
def my_agent(payload):
    # payload is the JSON body sent at invocation time
    result = agent(payload.get("prompt", "Hello"))
    return {"result": result.message}

if __name__ == "__main__":
    app.run()

Deploy with two commands:

agentcore configure --entrypoint my_agent.py
agentcore launch

That’s it. AgentCore handles containerization, IAM roles, health checks, and scaling automatically.

Memory: The Game Changer

What makes AgentCore genuinely useful is its memory system. Most AI agent frameworks treat each conversation as stateless. AgentCore provides two types of persistent memory:

Short-Term Memory (STM)

Stores raw conversation turns within a session. When a user returns to the same session, the agent picks up where it left off. Events expire after a configurable period (e.g., 7 days).
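The session-scoped, expiring event model can be sketched locally. The `SessionStore` class below is a hypothetical illustration of the semantics only, not the AgentCore API: turns attach to a session ID and fall out of scope once older than the expiry window.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical local model of STM semantics (not the AgentCore SDK):
# events are scoped to a session and expire after a configurable period.
class SessionStore:
    def __init__(self, event_expiry_days=7):
        self.expiry = timedelta(days=event_expiry_days)
        self.events = {}  # session_id -> list of (timestamp, turn)

    def add_turn(self, session_id, turn, now=None):
        now = now or datetime.now(timezone.utc)
        self.events.setdefault(session_id, []).append((now, turn))

    def get_turns(self, session_id, now=None):
        # Only turns younger than the expiry window come back.
        now = now or datetime.now(timezone.utc)
        return [t for ts, t in self.events.get(session_id, [])
                if now - ts < self.expiry]

store = SessionStore(event_expiry_days=7)
store.add_turn("session-1", {"role": "user", "content": "Hi"})
store.add_turn("session-1", {"role": "assistant", "content": "Hello!"})
print(len(store.get_turns("session-1")))  # 2: same session resumes with history
```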

Long-Term Memory (LTM)

This is where it gets interesting. LTM uses extraction strategies to automatically pull out:

  • User preferences — “I prefer Python”, “Use dark mode”
  • Semantic facts — “My company uses AWS”, “Budget is €50K”

These extracted memories persist across sessions. A user can start a new conversation, and the agent still knows their preferences and context.

from bedrock_agentcore.memory import MemoryClient

client = MemoryClient(region_name='eu-central-1')

# Create LTM with extraction strategies
memory = client.create_memory_and_wait(
    name="CustomerAssistant",
    strategies=[
        {"userPreferenceMemoryStrategy": {
            "name": "prefs",
            "namespaces": ["/user/preferences/"]
        }},
        {"semanticMemoryStrategy": {
            "name": "facts",
            "namespaces": ["/user/facts/"]
        }}
    ],
    event_expiry_days=30
)
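The namespace paths in the strategies above are what keep extracted memories organized and retrievable across sessions. Here is a purely local sketch of that routing idea, using a hypothetical in-memory record list rather than the MemoryClient retrieval API:

```python
# Hypothetical local illustration of LTM namespaces (not the MemoryClient
# API): extracted records live under namespace paths and are fetched by
# prefix, independent of the session that produced them.
records = [
    {"namespace": "/user/preferences/alice", "text": "Prefers Python"},
    {"namespace": "/user/facts/alice", "text": "Company uses AWS"},
    {"namespace": "/user/preferences/bob", "text": "Prefers dark mode"},
]

def by_namespace(records, prefix):
    return [r["text"] for r in records if r["namespace"].startswith(prefix)]

# All preference records, regardless of actor or session:
print(by_namespace(records, "/user/preferences/"))
```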

Gateway: Connecting Agents to Your APIs

AgentCore Gateway lets you expose existing APIs (Lambda functions, REST APIs, or any OpenAPI-compatible service) as tools that your agent can use. It uses the Model Context Protocol (MCP) standard, which means tools are discoverable and self-describing.

For example, you can expose a Lambda-based calculator, a database query function, or a CRM lookup as MCP tools — and your agent automatically discovers and uses them.
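Self-describing here means each tool ships a name, a description, and a JSON Schema for its input, following the MCP tool format. Below is a hypothetical calculator tool descriptor with a stand-in dispatcher; the actual Lambda target and Gateway wiring are omitted:

```python
# An MCP tool is self-describing: name, description, and a JSON Schema
# for its input. This descriptor is a hypothetical calculator example.
calculator_tool = {
    "name": "add_numbers",
    "description": "Add two numbers and return the sum.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "a": {"type": "number"},
            "b": {"type": "number"},
        },
        "required": ["a", "b"],
    },
}

def call_tool(tool, arguments):
    # Stand-in for the Gateway invoking the backing Lambda.
    missing = [k for k in tool["inputSchema"]["required"] if k not in arguments]
    if missing:
        raise ValueError(f"missing arguments: {missing}")
    return arguments["a"] + arguments["b"]

print(call_tool(calculator_tool, {"a": 2, "b": 3}))  # 5
```

Because the schema travels with the tool, an MCP-aware agent can discover the argument shape at runtime instead of being hard-coded against the API.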

When to Use AgentCore vs. Rolling Your Own

Use AgentCore when:

  • You need production-grade memory persistence
  • Your agent must handle multiple concurrent sessions
  • You want managed scaling without Kubernetes complexity
  • Security and IAM integration with AWS services is required
  • You’re already in the AWS ecosystem

Consider alternatives when:

  • You need sub-100ms latency (AgentCore adds some overhead)
  • You’re building a simple chatbot without tool use
  • Your stack is entirely non-AWS
  • You need full control over the inference pipeline

Cost Considerations

AgentCore follows AWS’s pay-per-use model. The key cost components are:

  • Runtime: Per-invocation pricing (similar to Lambda)
  • Memory: Per-event storage and retrieval
  • Foundation models: Standard Bedrock pricing for model inference

For our HTX2 landing page AI agent, the total AgentCore cost stays well within the AWS free tier — approximately $0–2/month for moderate traffic.
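A back-of-envelope model over those three components makes estimates like this reproducible. The rates below are placeholders for illustration only, not actual AWS prices; substitute current figures from the Bedrock and AgentCore pricing pages:

```python
# Simple monthly cost model over the three components above.
# All rates are PLACEHOLDERS, not real AWS pricing.
def monthly_cost(invocations, memory_events, model_tokens,
                 rate_per_invocation=0.0001,   # placeholder rate
                 rate_per_event=0.00005,       # placeholder rate
                 rate_per_1k_tokens=0.001):    # placeholder rate
    return (invocations * rate_per_invocation
            + memory_events * rate_per_event
            + model_tokens / 1000 * rate_per_1k_tokens)

# e.g. 1,000 invocations, 2,000 memory events, 500K model tokens in a month
print(round(monthly_cost(1_000, 2_000, 500_000), 2))
```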

What We’ve Built with AgentCore

At HTX2, our own website’s AI assistant runs on AgentCore. It uses:

  • Strands Agents as the orchestration framework
  • Amazon Nova Lite for fast, cost-effective inference
  • S3 Vectors for context retrieval
  • Lambda (ARM64) for the API layer

The entire stack deploys via AWS CDK and costs less than a cup of coffee per month.

Getting Started

  1. Install the SDK: pip install bedrock-agentcore strands-agents
  2. Configure AWS credentials: aws configure
  3. Create your agent file with @app.entrypoint
  4. Deploy: agentcore configure && agentcore launch
  5. Test: agentcore invoke '{"prompt": "Hello"}'

The AWS AgentCore documentation provides comprehensive guides for each service.


HTX2 builds AI agents using AWS Bedrock AgentCore and Strands. See our portfolio or get in touch.