Tool System
Overview
VIBE AI agents have access to 50+ native tools for web search, browser automation, code execution, file operations, and more. Agents can also reach 1000+ additional tools through the Model Context Protocol (MCP) and 200+ app integrations through Composio.
Tool Categories Summary
- Web Search: 3 tools (Tavily, Firecrawl)
- Browser Automation: 6 tools (Playwright)
- Code Execution: 4 tools (Python, Node.js, Shell, E2B)
- File Operations: 6 tools (Read, write, edit, list, delete, upload)
- Wallet & Payments: 5 tools (Coinbase Payments MCP, x402)
- Data Providers: 13 providers (Nansen, DataAPI, Telegram, Twitter, etc.)
- Media Generation: 2 tools (Gemini image, Gemini video)
- Memory: 1 tool (Mem0-style memory operations)
- Knowledge Base: 1 tool (RAG operations)
- MCP Tools: 1000+ via MCP protocol
- Composio Tools: 200+ app integrations
Tool Categories
🔍 Web Search
| Tool | Provider | Description |
|---|---|---|
| web_search | Tavily | General web search with real-time results |
| deep_search | Tavily | In-depth research with multiple sources |
| web_scrape | Firecrawl | Extract structured data from web pages |
🌐 Browser Automation
| Tool | Provider | Description |
|---|---|---|
| browser_navigate | Playwright | Navigate to URL |
| browser_click | Playwright | Click elements |
| browser_type | Playwright | Type text |
| browser_screenshot | Playwright | Capture page screenshot |
| browser_extract | Playwright | Extract data from page |
| browser_evaluate | Playwright | Execute JavaScript in page |
Features:
- Headless browser automation
- Full JavaScript execution
- Cookie and session management
- Screenshot capture
- Element interaction
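Agents typically chain these browser tools: navigate, interact, then capture. The sketch below models that call sequence with a stand-in class that merely records actions; it is illustrative only, not the real Playwright-backed implementation, and the method and selector names are assumptions:

```python
# Stand-in Browser that records the call sequence an agent would issue.
# The real tools drive a headless Playwright browser; this class only
# mirrors the tool names in the table above for illustration.
class Browser:
    def __init__(self):
        self.actions = []

    def navigate(self, url):
        self.actions.append(("navigate", url))

    def click(self, selector):
        self.actions.append(("click", selector))

    def type(self, selector, text):
        self.actions.append(("type", selector, text))

    def screenshot(self):
        self.actions.append(("screenshot",))

# A typical agent flow: open a page, fill a field, submit, capture proof.
b = Browser()
b.navigate("https://example.com/login")
b.type("#user", "agent")
b.click("#submit")
b.screenshot()
```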
💻 Code Execution
| Tool | Provider | Description |
|---|---|---|
| execute_python | Sandbox | Run Python code with full stdlib |
| execute_node | Sandbox | Run JavaScript/Node.js code |
| execute_shell | Sandbox | Run shell commands (Bash) |
| code_interpreter | E2B | Advanced code execution with packages |
Supported Languages:
- Python 3.11+
- Node.js 18+
- Shell (Bash)
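Conceptually, code execution boils down to running the snippet in a fresh interpreter with a timeout and capturing its output. A minimal sketch using `subprocess` (illustrative only: the real tools run inside isolated sandbox containers, never on the host):

```python
import subprocess
import sys

def execute_python(code: str, timeout: float = 10.0) -> dict:
    # Run the snippet in a fresh interpreter and capture its output.
    # Illustrative sketch: the production execute_python tool runs the
    # same kind of subprocess inside an isolated sandbox container.
    proc = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, timeout=timeout,
    )
    return {
        "stdout": proc.stdout,
        "stderr": proc.stderr,
        "exit_code": proc.returncode,
    }

result = execute_python("print(2 + 2)")
# result["stdout"] is "4\n", result["exit_code"] is 0
```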
📁 File Operations
| Tool | Provider | Description |
|---|---|---|
| file_read | Sandbox | Read file contents |
| file_write | Sandbox | Write to file |
| file_edit | Sandbox | Edit existing file |
| file_list | Sandbox | List directory |
| file_delete | Sandbox | Delete files |
| file_upload | Sandbox | Upload files to sandbox |
Sandbox Environment:
- Isolated Docker containers (Daytona)
- No host system access
- Automatic cleanup
- Resource limits enforced
💰 Wallet & Payments
| Tool | Provider | Description |
|---|---|---|
| check_wallet_balance | Coinbase Payments MCP | Check balance across networks |
| send_transaction | Coinbase Payments MCP | Send tokens, interact with contracts |
| get_wallet_address | Coinbase Payments MCP | Get wallet address |
| call_x402_service | x402 | Pay for services via x402 protocol |
| get_transaction_history | Coinbase Payments MCP | View transaction history |
Supported Networks:
- Base (default, low fees)
- Ethereum
- Polygon
- Arbitrum
- Optimism
- And more via Coinbase Developer Platform
📊 Data Providers (13 Providers)
VIBE AI integrates with 13 specialized data providers through a unified Data Providers Tool:
Crypto & Trading Providers
| Provider | Description | Key Features |
|---|---|---|
| Nansen | Premium on-chain intelligence | Smart Money tracking, wallet profiling, Token God Mode (20+ endpoints) |
| DataAPI | Social insights and trend analysis | KOL tracking, discoveries, mindshare analysis (19 endpoints) |
| FourMeme | Memecoin analytics | Trending meme tokens, market data |
| PumpFun Scraper | Solana token discovery | New coin listings from pump.fun |
| TopTraders | Multi-chain trader analytics | Top traders by token (Solana, ETH, Base, BSC) |
| Blokiments | Blockchain metrics | Token metrics, trending research, multi-network |
| EVA AI | Web3 security analysis | Contract audits, token audits, LP lock analysis |
Social & Communication Providers
| Provider | Description | Key Features |
|---|---|---|
| Telegram | Messaging and channel intelligence | Channel monitoring, messaging, community analysis (10 endpoints) |
| Twitter/X | Social media data | Tweets, user profiles, trends, analytics |
Traditional Data Providers
| Provider | Description | Key Features |
|---|---|---|
| Yahoo Finance | Financial market data | Stock prices, market indicators, financial news |
| LinkedIn | Professional network data | Profiles, company data, posts |
| Amazon | E-commerce data | Product information, prices, reviews |
| Zillow | Real estate data | Property values, listings, market trends |
| ActiveJobs | Job listings | Employment opportunities |
Usage Example
```python
# Access any data provider through the unified tool
result = await data_providers_tool.query(
    service_name="nansen",
    route="smart_money_wallets",
    params={"token": "ETH", "limit": 10}
)

# The same interface works for every provider
result = await data_providers_tool.query(
    service_name="telegram",
    route="search_channels",
    params={"query": "crypto", "limit": 5}
)
```
Creating Custom Tools
Basic Tool
```python
from vibe_tools import Tool, ToolResult, openapi_schema

class MyCustomTool(Tool):
    """Description of what your tool does"""

    name = "my_custom_tool"
    description = "A custom tool for a specific task"

    @openapi_schema({
        "name": "my_action",
        "description": "Perform a specific action",
        "parameters": {
            "type": "object",
            "properties": {
                "input": {
                    "type": "string",
                    "description": "Input for the action"
                }
            },
            "required": ["input"]
        }
    })
    async def my_action(self, input: str) -> ToolResult:
        try:
            # Your implementation
            result = await self.process(input)
            return self.success_response(
                result=result,
                message="Action completed"
            )
        except Exception as e:
            return self.fail_response(str(e))
```
Tool with External API
```python
class ExternalAPITool(Tool):
    """Tool that calls an external API"""

    def __init__(self, api_key: str):
        super().__init__()
        self.api_key = api_key
        self.client = ExternalClient(api_key)

    @openapi_schema({...})
    async def fetch_data(self, query: str) -> ToolResult:
        try:
            response = await self.client.query(query)
            return self.success_response(result=response)
        except RateLimitError:
            return self.fail_response("Rate limit exceeded")
        except AuthError:
            return self.fail_response("Authentication failed")
```
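To see the success/fail pattern end to end without the `vibe_tools` package, the interface can be mocked in a few lines. The field names on `ToolResult` here are assumptions made for illustration; only the method names (`success_response`, `fail_response`) come from the examples above:

```python
import asyncio
from dataclasses import dataclass

# Minimal stand-ins mirroring the vibe_tools interface used above;
# the ToolResult fields are illustrative assumptions.
@dataclass
class ToolResult:
    success: bool
    result: object = None
    error: str = ""

class Tool:
    def success_response(self, result=None, message="") -> ToolResult:
        return ToolResult(success=True, result=result)

    def fail_response(self, error: str) -> ToolResult:
        return ToolResult(success=False, error=error)

class EchoTool(Tool):
    async def echo(self, text: str) -> ToolResult:
        # Validate input, then return a structured result either way.
        if not text:
            return self.fail_response("empty input")
        return self.success_response(result=text.upper())

ok = asyncio.run(EchoTool().echo("hi"))
bad = asyncio.run(EchoTool().echo(""))
```

Returning a structured result for both outcomes (rather than raising) lets the LLM loop inspect failures and retry or re-plan.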
MCP Integration
What is MCP?
The Model Context Protocol (MCP) is an open standard for connecting AI models to external tools and data sources.
Adding MCP Servers
```yaml
# In agent configuration
mcp_servers:
  - name: "filesystem"
    type: "stdio"
    command: "npx"
    args: ["-y", "@modelcontextprotocol/server-filesystem"]
  - name: "github"
    type: "stdio"
    command: "npx"
    args: ["-y", "@modelcontextprotocol/server-github"]
    env:
      GITHUB_TOKEN: "${GITHUB_TOKEN}"
  - name: "custom"
    type: "sse"
    url: "https://my-mcp-server.com/sse"
```
Supported MCP Types
| Type | Description |
|---|---|
| stdio | Standard I/O communication |
| sse | Server-Sent Events |
| http | HTTP endpoints |
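All three transports carry the same JSON-RPC 2.0 messages; only the framing differs. A client session starts with an `initialize` request, sketched below (the client name, version, and protocol version string are illustrative values):

```python
import json

# An MCP initialize request as it would appear on the wire.
# MCP messages are JSON-RPC 2.0; field values here are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "vibe-ai", "version": "1.0"},
    },
}
wire = json.dumps(request)
```

Over stdio this line is written to the server's stdin; over SSE/HTTP it is the body of a POST.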
Composio Integration
Overview
Composio provides 200+ pre-built integrations.
Supported Integrations
| Category | Examples |
|---|
| Social | Telegram, Twitter, Discord, Slack |
| Productivity | Google Drive, Notion, Airtable |
| Development | GitHub, GitLab, Linear, Jira |
| Finance | Stripe, QuickBooks |
| Email | Gmail, Outlook |
Configuration
```yaml
composio:
  enabled: true
  apps:
    - name: "telegram"
      actions: ["send_message", "read_messages"]
    - name: "github"
      actions: ["create_issue", "create_pr"]
```
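The `actions` lists act as a per-app allowlist: only the listed actions are exposed to the agent. A sketch of how such a config could be enforced (illustrative; Composio's own SDK handles this internally):

```python
# The parsed form of the config above, as app -> allowed actions.
config = {
    "telegram": ["send_message", "read_messages"],
    "github": ["create_issue", "create_pr"],
}

def is_allowed(app: str, action: str) -> bool:
    # Anything not explicitly listed is denied.
    return action in config.get(app, [])

# The full set of tool names the agent would see.
available = [f"{app}.{action}"
             for app, actions in config.items()
             for action in actions]
```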
Tool Execution Flow
```
1. LLM decides to use tool
        │
        ▼
2. ToolManager validates parameters
        │
        ▼
3. Route to appropriate executor:
   ├── Native tool → Direct execution
   ├── MCP tool    → MCP executor
   └── Composio    → Composio handler
        │
        ▼
4. Execute in sandbox (if needed)
        │
        ▼
5. Return ToolResult
        │
        ▼
6. Add result to context
        │
        ▼
7. Continue LLM loop
```
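Steps 2-5 of the flow above can be compressed into a small dispatch function: look the tool up, route on its origin, execute, and wrap the outcome in a structured result. This is a sketch under assumed names (`registry`, handler tuples), not the actual ToolManager:

```python
def dispatch(tool_name: str, params: dict, registry: dict) -> dict:
    # 2. Look up and validate the requested tool.
    entry = registry.get(tool_name)
    if entry is None:
        return {"success": False, "error": f"unknown tool: {tool_name}"}
    # 3. Route on origin: "native", "mcp", or "composio" would each map
    #    to a different executor; here every kind shares one code path.
    kind, handler = entry
    try:
        result = handler(**params)                  # 4. execute
        return {"success": True, "result": result}  # 5. wrap as result
    except Exception as e:
        return {"success": False, "error": str(e)}

# A toy registry with one native tool.
registry = {
    "add": ("native", lambda a, b: a + b),
}
out = dispatch("add", {"a": 2, "b": 3}, registry)
```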
Best Practices
Error Handling
```python
import logging

logger = logging.getLogger(__name__)

async def my_tool(self, param: str) -> ToolResult:
    try:
        result = await self.execute(param)
        return self.success_response(result=result)
    except ValidationError as e:
        return self.fail_response(f"Invalid input: {e}")
    except TimeoutError:
        return self.fail_response("Operation timed out")
    except Exception as e:
        logger.error(f"Tool error: {e}")
        return self.fail_response("Internal error")
```
Rate Limiting
```python
from ratelimit import limits, sleep_and_retry

class RateLimitedTool(Tool):
    # Note: sleep_and_retry waits with time.sleep, which blocks the event
    # loop; in heavily async code prefer an asyncio-native limiter such
    # as aiolimiter's AsyncLimiter.
    @sleep_and_retry
    @limits(calls=10, period=60)
    async def rate_limited_action(self, param: str) -> ToolResult:
        return await self.execute(param)
```
Caching
```python
class CachedTool(Tool):
    # functools.lru_cache does not work on async methods: it would cache
    # the coroutine object itself, which can only be awaited once. Use an
    # explicit cache keyed on the parameters instead.
    def __init__(self):
        super().__init__()
        self._cache: dict[str, ToolResult] = {}

    async def cached_action(self, param: str) -> ToolResult:
        if param not in self._cache:
            self._cache[param] = await self.fetch_expensive_data(param)
        return self._cache[param]
```
Next: Memory System →