Python SDK
Official Python SDK for MCPulse analytics.
Installation
pip install mcpulse
Quick Start
Basic Usage
from mcpulse import AnalyticsTracker, Config
# Create configuration
config = Config(
    server_id="my-mcp-server",
    grpc_endpoint="localhost:9090",
    api_key="your-api-key"  # Get from MCPulse dashboard
)
# Create tracker
tracker = AnalyticsTracker(config)
# Track tool calls with decorator
@tracker.track_tool_call("calculate")
def calculate(operation: str, a: float, b: float):
    if operation == "add":
        return a + b
    elif operation == "multiply":
        return a * b
    raise ValueError(f"Unknown operation: {operation}")
# Use the tool - metrics are automatically tracked
result = calculate(operation="add", a=5, b=3)
print(f"Result: {result}")
# Close tracker when done
tracker.close()
With MCP Server
Integrate MCPulse into an MCP server using the official mcp
package:
from mcp.server.fastmcp import FastMCP
from mcpulse import AnalyticsTracker, Config
# Initialize MCPulse tracker
mcpulse_config = Config(
    server_id="my-mcp-server",
    grpc_endpoint="localhost:9090",
    api_key="your-api-key"
)
tracker = AnalyticsTracker(mcpulse_config)
# Create MCP server
mcp = FastMCP("My Server")
@mcp.tool()
@tracker.track_tool_call("calculate")
async def calculate(a: float, b: float) -> float:
    """Add two numbers with analytics tracking."""
    return a + b
@mcp.tool()
@tracker.track_tool_call("search")
async def search(query: str) -> list[dict]:
    """Search database with tracking."""
    # Your search implementation
    results = await db.search(query)
    return results
# Run server
async def main():
    try:
        await mcp.run_stdio_async()
    finally:
        tracker.flush()
        tracker.close()

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
Configuration
from mcpulse import Config
config = Config(
    # Server identification (required)
    server_id="my-mcp-server",
    # Transport configuration
    grpc_endpoint="localhost:9090",
    # Authentication (required if MCPulse has auth enabled)
    api_key="mcpulse_...",
    # Collection settings
    enable_param_collection=True,  # Collect tool parameters
    async_mode=True,  # Use background thread for flushing
    buffer_size=100,  # Metrics buffer size
    flush_interval=5.0,  # Flush every 5 seconds
    max_batch_size=1000,  # Max metrics per batch
    # Privacy
    sanitize_params=True,  # Redact sensitive parameters
    sensitive_keys=["password", "token", "api_key", "secret"],
    # Sampling
    sample_rate=1.0,  # 1.0 = 100%, 0.1 = 10%
    # Retry
    max_retries=3,
    retry_backoff=1.0,  # seconds
    # Timeout
    timeout=10.0,
    # Protocol metadata
    protocol_version="2024-11-05",
    client_name="mcpulse-python",
    client_version="0.1.0"
)
# Validate configuration
config.validate()
Environment Variables
Configure using environment variables:
export MCP_ANALYTICS_SERVER_ID=my-server
export MCP_ANALYTICS_GRPC_ENDPOINT=localhost:9090
export MCP_ANALYTICS_API_KEY=your-api-key
export MCP_ANALYTICS_ASYNC=true
export MCP_ANALYTICS_BUFFER_SIZE=100
export MCP_ANALYTICS_FLUSH_INTERVAL=5
export MCP_ANALYTICS_SANITIZE=true
export MCP_ANALYTICS_SAMPLE_RATE=1.0
Then create config without arguments:
config = Config() # Loads from environment
Key Features
Decorator Tracking
The simplest way to track tool calls:
@tracker.track_tool_call("tool_name")
def my_tool(param1: str, param2: int):
    # Your implementation
    return result
The decorator:
- Automatically captures duration
- Records parameters (with sanitization)
- Catches and records errors (see the sketch after this list)
- Associates with current session
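For example, a failing call is still visible to the caller; the sketch below assumes the decorator re-raises the original exception after recording it as an error metric:
@tracker.track_tool_call("divide")
def divide(a: float, b: float) -> float:
    return a / b  # a ZeroDivisionError here is recorded as an error metric

try:
    divide(a=1, b=0)
except ZeroDivisionError:
    pass  # the call still raises; the failure was already captured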
Async Support
Works seamlessly with async functions:
@tracker.track_tool_call("async_search")
async def search_database(query: str):
await asyncio.sleep(0.1) # Simulate async operation
return [{"id": 1, "title": "Result"}]
# Use it
result = await search_database(query="test")
Session Tracking
Group related tool calls into sessions:
# Start a session
with tracker.session() as session_id:
    print(f"Session ID: {session_id}")

    # All tracked calls within this context are associated with the session
    calculate(operation="add", a=5, b=3)
    calculate(operation="multiply", a=4, b=7)
    search_database(query="test")

# Session is automatically ended when exiting context
Custom session ID:
with tracker.session(session_id="custom-session-123"):
    # Tool calls here use your custom session ID
    pass
Manual Tracking
For more control, track metrics manually:
import time
from datetime import datetime
start = time.time()
try:
    result = some_operation()
    status = "success"
    error = None
except Exception as e:
    status = "error"
    error = str(e)
    raise
finally:
    duration_ms = (time.time() - start) * 1000
    tracker.track_manual(
        tool_name="custom_tool",
        parameters={"key": "value"},
        duration_ms=duration_ms,
        status=status,
        error_message=error,
        timestamp=datetime.utcnow()
    )
Parameter Sanitization
Sensitive parameters are automatically redacted:
@tracker.track_tool_call("login")
def login(username: str, password: str):
    # password will be redacted in stored metrics
    return authenticate(username, password)
# Stored parameters: {"username": "john", "password": "[REDACTED]"}
Configure sensitive keys:
config = Config(
    sanitize_params=True,
    sensitive_keys=["password", "token", "api_key", "secret", "auth"]
)
Sampling
Control metric collection with sampling:
# Collect 10% of metrics
config = Config(sample_rate=0.1)
# Or dynamically:
tracker.config.sample_rate = 0.5 # 50%
Manual Flushing
Force flush buffered metrics:
tracker.flush() # Blocks until flushed
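For short-lived scripts, one option is to flush and close at interpreter exit (a small sketch using the standard-library atexit module together with the flush() and close() methods above; _shutdown is just an illustrative helper name):
import atexit

def _shutdown():
    tracker.flush()  # push any buffered metrics
    tracker.close()  # close and clean up the tracker

atexit.register(_shutdown)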
Query API
Query analytics data from MCPulse:
import grpc
from mcpulse.query import QueryClient
from mcpulse.types import TimeRange
from datetime import datetime, timedelta
# Create gRPC channel
channel = grpc.insecure_channel("localhost:9090")
# Create query client
client = QueryClient(channel=channel, api_key="your-api-key")
# List all servers
servers, pagination = client.list_servers(limit=10)
for server in servers:
print(f"Server: {server.name} ({server.id})")
# Get server metrics
time_range = TimeRange(
    start=datetime.now() - timedelta(days=1),
    end=datetime.now()
)
metrics = client.get_server_metrics(
server_id="my-server",
time_range=time_range,
interval="1h"
)
print(f"Total Calls: {metrics.total_calls}")
print(f"Success Rate: {metrics.success_rate * 100:.1f}%")
# Get tool performance
tools, _ = client.get_tools(
server_id="my-server",
time_range=time_range,
limit=10,
sort_by="call_count"
)
for tool in tools:
print(f"{tool.name}: {tool.call_count} calls")
# Get errors
errors, _ = client.get_errors(
server_id="my-server",
time_range=time_range,
limit=20
)
for error in errors:
print(f"{error.tool_name}: {error.error_message}")
channel.close()
Testing
Mock for Testing
Test without a running MCPulse server:
from unittest.mock import Mock, patch
from mcpulse import AnalyticsTracker, Config
config = Config(
server_id="test-server",
grpc_endpoint="localhost:9090",
async_mode=False # Synchronous for easier testing
)
tracker = AnalyticsTracker(config)
# Mock the collector
with patch.object(tracker.collector, 'collect') as mock_collect:
@tracker.track_tool_call("test_function")
def test_function(x: int) -> int:
return x * 2
result = test_function(5)
assert result == 10
assert mock_collect.called # Verify metric was collected
tracker.close()
Run Tests
# Using uv
uv run pytest
# With coverage
uv run pytest --cov=mcpulse_python --cov-report=html
# Or use Makefile
make test
make test-cov
Examples
The SDK includes complete examples in the examples/
directory:
- basic_usage.py - Simple decorator usage
- query_example.py - Querying analytics data
- mcp_server_integration.py - Full MCP server with tracking
Run examples:
# Basic usage
make example-basic
# Query API
make example-query
# MCP server (requires mcp package)
uv pip install ".[mcp]"
python examples/mcp_server_integration.py
Troubleshooting
Import Errors
If you see import errors:
# Ensure package is installed
uv pip install -e .
# Check Python path
python -c "import mcpulse_python; print(mcpulse_python.__file__)"
gRPC Connection Errors
Verify MCPulse server is running:
# Test connectivity
grpcurl -plaintext localhost:9090 list
# Check MCPulse server logs
docker compose logs mcpulse-server
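You can also probe the endpoint from Python (a sketch using the grpcio client API directly; adjust the endpoint to match your configuration):
import grpc

channel = grpc.insecure_channel("localhost:9090")
try:
    # Wait up to 5 seconds for the channel to become ready
    grpc.channel_ready_future(channel).result(timeout=5)
    print("MCPulse gRPC endpoint is reachable")
except grpc.FutureTimeoutError:
    print("Cannot reach localhost:9090 - is the MCPulse server running?")
finally:
    channel.close()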
Authentication Errors
If you get unauthorized errors (see the diagnostic sketch after this list):
- Verify your API key is correct
- Check that auth is enabled in MCPulse config
- Ensure API key has correct permissions
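To see which case you are hitting, you can trigger a synchronous flush and inspect the failure (a sketch that assumes the SDK lets the underlying grpc.RpcError propagate; depending on the SDK's retry handling, the error may instead be logged internally):
import grpc

try:
    tracker.flush()
except grpc.RpcError as e:
    if e.code() == grpc.StatusCode.UNAUTHENTICATED:
        print("API key rejected - check the key configured in MCPulse")
    else:
        print(f"gRPC failure: {e.code().name}")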
API Reference
AnalyticsTracker
Main tracking interface.
Methods:
- __init__(config: Config) - Create tracker
- track_tool_call(tool_name, enable_param_collection=None) - Decorator
- session(session_id=None) - Session context manager
- track_manual(...) - Manual tracking
- flush() - Flush buffered metrics
- close() - Close and cleanup
QueryClient
Query analytics data.
Methods:
- list_servers(limit, offset) - List servers
- get_server(server_id) - Get server details
- get_server_metrics(server_id, time_range, interval) - Get metrics
- get_tools(server_id, time_range, ...) - List tools
- get_tool_timeline(server_id, tool_name, ...) - Time-series data
- get_errors(server_id, time_range, ...) - Get errors
- get_sessions(query) - Get sessions
Config
Configuration object.
Fields:
- server_id (str, required)
- grpc_endpoint (str, required)
- api_key (str, optional)
- buffer_size (int, default: 100)
- flush_interval (float, default: 5.0)
- sanitize_params (bool, default: True)
- sample_rate (float, default: 1.0)