Model Context Protocol

MCP

Connect bugAgent to any MCP-compatible AI client.

File, classify, and manage bugs, feature requests, and more directly from your AI coding assistant. No context switching, no copy-paste — just describe the issue and bugAgent handles the rest.

Getting Started

The bugAgent MCP server lets AI clients create, query, and manage bug reports, feature requests, enhancements, and more through the Model Context Protocol. It runs locally and communicates with bugAgent's cloud API.

1. Get your API key — Sign up at app.bugagent.com and generate an API key from the console.
2. Configure your AI client — Add bugAgent as an MCP server in your client's config (see setup below).
3. Start filing bugs — Describe a bug in natural language and bugAgent auto-classifies, enriches, and stores it.

Quick Example
# Create a bug report
"File a bug: Login button is unresponsive on iOS Safari.
Steps: tap login, nothing happens. Expected: navigate to
dashboard. Severity: high."

# bugAgent auto-classifies as UI bug, severity high

# File a feature request
"Feature request: Add dark mode toggle to the
settings page. Users have asked for this in surveys."

# Auto-classified as feature-request, severity medium

Setup

Install

No global install required. Use npx to run the MCP server on demand:

npx @bugagent/mcp-server

Configure your API key

When you first connect, bugAgent will prompt you for your API key. You can also set it via environment variable:

export BUGAGENT_API_KEY=ba_live_your_key_here

Get your API key from the bugAgent console.

MCP Client Configuration

Add the following to your MCP client's configuration file:

mcp.json
{
  "mcpServers": {
    "bugagent": {
      "command": "npx",
      "args": ["-y", "@bugagent/mcp-server"],
      "env": {
        "BUGAGENT_API_KEY": "ba_live_your_key_here"
      }
    }
  }
}

💡 Replace ba_live_your_key_here with your actual API key from the console.

MCP Features

The bugAgent MCP server provides tools for:

🐛 Bug Report Management

  • create_bug_report — File a new report with auto-classification across 19 types — bugs, feature requests, enhancements, technical debt, and more (title: 3-500 chars). Set format_description: true to auto-reformat the description into a structured template using AI. Pass time_spent_seconds to track QA effort.
  • list_bug_reports — List and filter reports (max 100 per page)
  • get_bug_report — Get full details of a report by ID. Returns qualityScore (integer 1–10) and qualityBreakdown (object with 10 dimension scores: reproductionSteps, expectedVsActual, environmentDetails, evidence, rootCauseAnalysis, impactAssessment, contextAndHistory, heuristicsAndOracles, clarityAndStructure, actionability — each 0.0–1.0)
  • update_bug_report — Update fields on an existing report (includes time_spent_seconds for timer tracking)
  • classify_bug — Classify a description into one of 19 report types (bugs, features, enhancements, etc.) with confidence score
  • flush_reports — Bulk delete old reports (admin only)
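
The quality scoring returned by get_bug_report can be sanity-checked client-side. The server's exact formula isn't documented here, so the sketch below assumes a simple mean of the ten 0.0–1.0 dimension scores, scaled into the documented 1–10 range:

```python
# Hypothetical: recompute a 1-10 quality score from the ten 0.0-1.0
# dimension scores in qualityBreakdown. The real server formula may
# weight dimensions differently; this is an illustration only.
def quality_score(breakdown: dict) -> int:
    mean = sum(breakdown.values()) / len(breakdown)
    # Scale 0.0-1.0 to 1-10, clamping into the documented range.
    return max(1, min(10, round(mean * 10)))

breakdown = {
    "reproductionSteps": 0.9, "expectedVsActual": 0.8,
    "environmentDetails": 0.7, "evidence": 0.6,
    "rootCauseAnalysis": 0.5, "impactAssessment": 0.8,
    "contextAndHistory": 0.4, "heuristicsAndOracles": 0.3,
    "clarityAndStructure": 0.9, "actionability": 0.7,
}
print(quality_score(breakdown))  # 7
```

Treat the mapping as a placeholder; only the returned qualityScore is authoritative.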

📊 Usage & Analytics

  • get_usage — Check usage against plan limits
  • get_stats — Daily counts, type/severity/status breakdowns

📁 Project Management

  • list_projects — List available projects
  • create_project — Create a new project (auto-becomes default if first)
  • delete_project — Permanently delete a project and all associated data (bug reports, automations, test cases, mobile apps, schedules, geo snaps, notes, time entries). Only owner/manager. Cannot delete last project. Storage is freed automatically

🔐 Authentication & Account

  • register_account — Create a new account (password: 8-128 chars, rate limited: 5/15min)
  • login — Sign in and receive access tokens (rate limited: 5/15min)
  • update_profile — Update display name
  • change_password — Change account password
  • get_settings / update_settings — Manage preferences

🔑 API Key Management

  • generate_api_key — Create a named API key
  • list_api_keys — List active keys (prefix only)
  • regenerate_api_key — Revoke and replace a key
  • delete_api_key — Permanently revoke a key

👥 Team Management

  • list_team_members — List all members of your organization with roles, status, and booster flags
  • invite_team_member — Invite a user by email (managers can invite contributors and managers; only owners can invite admins). 5-day expiry link

🎯 Integrations

  • sync_to_jira — Sync a report to Jira using team's shared connection
  • push_to_claude — Send a bug report to Claude for root cause analysis. Returns detailed analysis including probable cause, suggested fix, verification steps, and risk assessment. Requires Claude connection configured in Settings → Integrations.
  • upgrade_plan — Upgrade subscription via Stripe

Performance Testing

  • create_performance_test — Create a performance test config with URL, device, virtual users, duration, score threshold, and auto-bug creation toggle. Pro/Team/Enterprise plans only
  • run_performance_test — Trigger a page audit and load test, or a mobile app profiling session on real Android/iOS devices via BrowserStack. Returns a run ID to poll for results
  • get_performance_results — Get full results including Lighthouse scores (Performance, Accessibility, Best Practices, SEO), Core Web Vitals (LCP, FID, CLS, FCP, TTFB, INP, TBT, SI), and load test metrics (VUs, requests, RPS, p50/p90/p95/p99 latencies)
  • list_performance_tests — List all performance test configurations for the current team
  • get_performance_usage — Check monthly performance test usage against plan limits. Free=0, Pro=1/mo, Team=2/mo, Enterprise=10/mo. Top-up: 5 tests for $499

Example Workflow

  1. get_performance_usage → check remaining quota
  2. create_performance_test → configure a test for your URL
  3. run_performance_test → trigger the audit + load test
  4. get_performance_results → review scores and vitals
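
run_performance_test returns a run ID to poll. A minimal polling loop might look like the following; fetch_status is a stand-in for whatever MCP tool call or HTTP request your client makes, and the terminal status names are assumptions:

```python
import time

# Generic polling helper for run IDs returned by run-style tools such
# as run_performance_test. `fetch_status` is a placeholder for your
# client's actual status lookup; the terminal states are assumed.
def poll_run(run_id, fetch_status, interval_s=5.0, timeout_s=600.0, sleep=time.sleep):
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = fetch_status(run_id)
        if status in ("passed", "failed", "error", "completed"):
            return status
        sleep(interval_s)
    raise TimeoutError(f"run {run_id} did not finish in {timeout_s}s")

# Example with a stubbed status source (no real network calls):
statuses = iter(["queued", "running", "completed"])
print(poll_run("run_123", lambda _id: next(statuses), sleep=lambda _s: None))  # completed
```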

🛡️ Security Scanning

  • create_security_scan — Create a security scan configuration. Web scans use Quick Scanner + Nuclei (4,000+ templates) with three depth levels and optional authenticated scanning. Mobile scans use MobSF for APK/IPA binary analysis. Configurable auto-bug creation with severity thresholds. Pro/Team/Enterprise plans only
  • run_security_scan — Trigger a vulnerability scan. Web scans require DNS domain verification. Mobile scans require an uploaded app. Returns a run ID to poll for results
  • get_security_results — Get full results including security score (0-100), findings categorized by severity (Critical, High, Medium, Low, Info) with CWE references, OWASP mappings, evidence, and remediation guidance
  • list_security_scans — List all security scan configurations for the current team with last score and auth/depth badges
  • get_security_usage — Check monthly security scan usage against plan limits. Pro=2/mo, Team=5/mo, Enterprise=20/mo. Top-up: 10 scans for $299

Example Workflow

  1. get_security_usage → check remaining quota
  2. create_security_scan → configure a scan for your URL or repo
  3. run_security_scan → trigger the vulnerability scan
  4. get_security_results → review findings and remediation
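
Auto-bug creation in security scans is gated by a severity threshold. A sketch of that filter, assuming the documented ordering Critical > High > Medium > Low > Info:

```python
# Severity ranks from the documented ordering, lowest to highest.
SEVERITY_ORDER = ["Info", "Low", "Medium", "High", "Critical"]

def findings_at_or_above(findings, threshold):
    # Keep only findings whose severity meets the auto-bug threshold.
    min_rank = SEVERITY_ORDER.index(threshold)
    return [f for f in findings if SEVERITY_ORDER.index(f["severity"]) >= min_rank]

findings = [
    {"id": 1, "severity": "Critical"},
    {"id": 2, "severity": "Medium"},
    {"id": 3, "severity": "Info"},
]
print([f["id"] for f in findings_at_or_above(findings, "High")])  # [1]
```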

📖 Code Review

  • list_code_reviews — List recent AI code reviews for the team. Returns quality scores, severity counts, PR info, and timestamps. Team/Enterprise only
  • get_code_review — Get a code review with all findings. Each finding includes severity, category (bug/security/performance/style/logic/maintainability), title, description, code suggestion, file path, and line numbers
  • get_code_review_usage — Check code review usage. Unlimited on Pro, Team, and Enterprise plans
  • get_code_review_analytics — Get review analytics: trends, finding categories/sources, severity breakdown, velocity metrics, top repos/authors. Supports 7/30/90-day lookback
  • list_explorations — List Exploratory AI configs for the team
  • create_exploration — Create a new autonomous exploration targeting a URL
  • get_exploration — Get exploration config with recent runs
  • get_exploration_run — Get run results with all phase data and findings
  • get_exploration_usage — Check monthly Exploratory AI usage (Pro: 3/mo, Team: 10/mo, Enterprise: 50/mo)
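
get_code_review findings can be summarized locally by the documented categories. A sketch using a Counter, with field names following the finding shape described above:

```python
from collections import Counter

# The six finding categories documented for get_code_review.
CATEGORIES = {"bug", "security", "performance", "style", "logic", "maintainability"}

def category_counts(findings):
    # Tally findings per category, ignoring anything outside the known set.
    return Counter(f["category"] for f in findings if f["category"] in CATEGORIES)

findings = [
    {"category": "security", "severity": "high"},
    {"category": "bug", "severity": "medium"},
    {"category": "security", "severity": "low"},
]
print(category_counts(findings))  # Counter({'security': 2, 'bug': 1})
```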

Example Workflow

  1. get_code_review_usage → check remaining reviews
  2. Review a PR in the dashboard at /dashboard/code-review
  3. list_code_reviews → see recent reviews
  4. get_code_review → get findings and suggestions

📝 Notes

  • list_notes — List notes with optional keyword search, project filter, author filter, and date range. Returns notes the user owns or shared notes within the team.
  • create_note — Create a note in one of 5 formats: markdown, plain_text, rich_text, checklist, outline. Set visibility to private or shared. Auto-title from first 30 characters if no title provided. Pass time_spent_seconds to track QA effort.
  • get_note — Get full note details including content and attachments. Requires id.
  • update_note — Update title, content, format, visibility, project, or time_spent_seconds. Only the author can update. Requires id.
  • delete_note — Permanently delete a note and its attachments. Only the author can delete. Requires id.
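
create_note derives a title from the first 30 characters of the content when none is supplied. A minimal sketch of that fallback (any word-boundary trimming the server might do is not modeled here):

```python
def derive_title(title, content):
    # Fall back to the first 30 characters of content when no title is
    # given, mirroring create_note's documented auto-title behavior.
    if title and title.strip():
        return title.strip()
    return content.strip()[:30]

print(derive_title(None, "Regression pass on checkout flow, build 2024-06-01"))
# Regression pass on checkout fl
```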

Example Workflow

  1. create_note → start a testing session note
  2. update_note → append observations as you test
  3. list_notes → search past notes by keyword or project
  4. get_note → retrieve full note with attachments

🤖 Automation

  • create_automation — Create a new automation with a custom Playwright script (no FAB recording required). Requires name. Optional: target_url, script (Playwright test content; defaults to a placeholder), status (draft or active, default: draft), project_id. Returns the automation id. Pro/Team plan required. Tip — Duplicate an automation: use get_automation to fetch the original script, then call create_automation with name set to "[Copy] Original Name" and pass the original script, target_url, and project_id. The duplicate starts in draft status with no version history.
  • list_automations — List Playwright automation scripts. Filter by project_id or status (draft, active, paused). Returns array of automations with name, target_url, last_run_status, and run_count.
  • get_automation — Get full automation details including Playwright script and recent runs. Requires id. Returns automation with script code and recent_runs array.
  • run_automation — Trigger an immediate run of a Playwright test. Requires automation_id. Virtual mode (default): optional device for viewport emulation (e.g. desktop, iphone-15). Live mode: set browserstack: true with bs_browser (chrome, firefox, safari, edge), bs_os (Windows, OS X), and bs_os_version to run on a real browser. Video, console logs, and network logs captured automatically.
  • list_automation_runs — List recent runs for an automation. Requires automation_id. Returns runs with status, duration_ms, and error_message.
  • list_schedules — List all scheduled automation runs for the current team with cron expression, timezone, and notification settings.
  • create_schedule — Create a scheduled automation run. Requires automation_id and cron_expression. Optional device parameter selects the device to emulate — e.g. desktop, iphone-15, galaxy-s23, ipad. Default: desktop. Also supports timezone, notify_on_fail, notify_email, and Slack notification options.
  • delete_schedule — Delete a scheduled automation run. Requires id.
  • optimize_automation_script — Send a Playwright script to Sonnet 4 for AI-powered optimization. Applies a 12-point checklist that fixes selectors, wait strategies, assertions, error handling, auth patterns, mobile compatibility, and strict mode. Requires automation_id. The current script version is saved before optimization. Returns the optimized script and a changes summary.
  • undo_automation_script — Revert an automation script to its previous version. Up to 10 previous versions are retained. Requires automation_id. Returns the restored script and the number of versions remaining.
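
The duplication tip under create_automation can be wrapped in a small helper. Here get_automation and create_automation are stand-ins for your client's tool-call mechanism, not real library functions:

```python
def duplicate_automation(get_automation, create_automation, automation_id):
    # Fetch the original, then re-create it under a "[Copy] ..." name,
    # per the duplication tip: the copy starts in draft with no history.
    original = get_automation(id=automation_id)
    return create_automation(
        name=f"[Copy] {original['name']}",
        script=original.get("script"),
        target_url=original.get("target_url"),
        project_id=original.get("project_id"),
        status="draft",
    )

# Stubbed example with an in-memory store instead of real tool calls:
store = {"a1": {"name": "Login smoke", "script": "// test",
                "target_url": "https://example.com", "project_id": "p1"}}
created = duplicate_automation(lambda id: store[id], lambda **kw: kw, "a1")
print(created["name"])  # [Copy] Login smoke
```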

Example Workflow

  1. create_automation → create a test with a custom script
  2. list_automations → browse available tests
  3. get_automation → inspect the Playwright script
  4. run_automation → trigger the test
  5. list_automation_runs → check results and duration

⏱️ Time Tracking

  • list_time_entries — List time entries for the team. Filter by period (today, week, month, all), project_id, category, and sort (newest, oldest, most_time, least_time). Team plan only.
  • create_time_entry — Log time spent on QA tasks. Requires description, category, and duration_minutes. Optionally set project_id and entry_date (defaults to today). Team plan only.
  • update_time_entry — Update an existing time entry. Requires id. Can update description, category, duration_minutes, project_id, or entry_date. Team plan only.
  • delete_time_entry — Permanently delete a time entry. Requires id. Team plan only.

Example Workflow

  1. create_time_entry → log 45 minutes of regression testing
  2. list_time_entries → view this week's time entries
  3. update_time_entry → adjust duration or category
  4. delete_time_entry → remove an incorrect entry
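
list_time_entries output can be rolled up locally, e.g. total minutes per category for a weekly summary. A sketch, assuming each entry carries the documented category and duration_minutes fields:

```python
from collections import defaultdict

def minutes_by_category(entries):
    # Aggregate duration_minutes per category, e.g. for a weekly summary
    # built from list_time_entries output.
    totals = defaultdict(int)
    for e in entries:
        totals[e["category"]] += e["duration_minutes"]
    return dict(totals)

entries = [
    {"category": "regression", "duration_minutes": 45},
    {"category": "exploratory", "duration_minutes": 30},
    {"category": "regression", "duration_minutes": 15},
]
print(minutes_by_category(entries))  # {'regression': 60, 'exploratory': 30}
```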

☑️ Test Cases

  • list_test_cases — List test cases with optional search, priority (critical, high, medium, low), type (functional, regression, smoke, integration, e2e, performance, security, usability, accessibility), status (active, draft, deprecated), and sort (newest, oldest, name, priority). Returns test cases with steps count, tags, and priority.
  • create_test_case — Create a test case with detailed steps. Requires name. Optional: description, preconditions, steps (array of { action, expected }), priority, type, tags, estimated_time (seconds). Returns the created test case with all steps.
  • get_test_case — Get full test case details including steps and execution history. Requires id.
  • list_test_suites — List test suites with case count and last run status. Optional search filter.
  • create_test_suite — Create a test suite to group related test cases. Requires name. Optional: description.
  • list_test_runs — List test runs with suite name, assignee, and pass/fail summary. Filter by search, status (in_progress, completed), or suite_id.
  • create_test_run — Create a test run from a suite. Requires suite_id and name. Optional: assigned_to (user UUID). Snapshots all cases from the suite at creation time.
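
A create_test_case call built from the documented fields might look like this; the dict illustrates the parameter shape (steps as an array of { action, expected } objects), not a wire format:

```python
# A create_test_case payload sketch using only the documented fields.
test_case = {
    "name": "Checkout applies discount code",
    "description": "Verify SAVE20 reduces the order total by 20%.",
    "preconditions": "Cart contains at least one item.",
    "priority": "high",
    "type": "functional",
    "tags": ["checkout", "discounts"],
    "estimated_time": 300,  # seconds
    "steps": [
        {"action": "Add any item to the cart", "expected": "Cart shows 1 item"},
        {"action": "Apply code SAVE20", "expected": "Total drops by 20%"},
        {"action": "Complete checkout", "expected": "Order confirmation shown"},
    ],
}
print(len(test_case["steps"]))  # 3
```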

Example Workflow

  1. create_test_case → define a test case with steps
  2. create_test_suite → group related test cases
  3. create_test_run → start a run from the suite
  4. list_test_runs → check run progress and pass rate

Team Booster

  • scale_team — Instantly scale your QA team with booster testers. Accounts are provisioned automatically with tester access. Specify team_size (1–10), location, duration, budget, and optionally product_url, product_types, and tech_levels. Pro and Team plans only. You will not be charged until approval has been given.

Example Workflow

  1. scale_team → provision 5 senior testers in the US for 1 month
  2. list_team_members → verify new testers appear in your team
  3. list_bug_reports → review reports filed by booster testers

🌍 Geo-Snap

  • create_geo_snap — Capture screenshots of a URL from multiple countries simultaneously. Requires url (string) and countries (array of country codes, e.g. ["US", "DE", "JP"]). Free plan: 1 country per capture, 10 saved screenshots. Pro/Team: up to 5 countries, unlimited saved screenshots. Returns an array of snap objects with id, url, country, screenshot_url, status, and created_at.
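
The per-plan country limits can be checked before calling create_geo_snap. A sketch using only the limits stated above (the plan identifiers are assumptions):

```python
# Per-capture country limits from the docs: Free allows 1, Pro/Team 5.
MAX_COUNTRIES = {"free": 1, "pro": 5, "team": 5}

def validate_geo_snap(plan, countries):
    # Raise before making the call if the request would exceed the plan limit.
    limit = MAX_COUNTRIES[plan]
    if not countries:
        raise ValueError("at least one country code is required")
    if len(countries) > limit:
        raise ValueError(f"{plan} plan allows at most {limit} countries per capture")
    return True

print(validate_geo_snap("pro", ["US", "DE", "JP"]))  # True
```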

Example Workflow

  1. create_geo_snap → capture https://example.com from US, DE, and JP
  2. Compare screenshots to verify localization, geo-redirects, and compliance

📱 Mobile Testing

  • upload_mobile_app — Upload an APK (Android) or IPA (iOS) app for testing on real devices. Requires name, platform (android/ios), and file_url. For iOS: upload the IPA for real-device runs, then upload a simulator .app build on the app detail page to enable recording.
  • update_mobile_app — Replace an app binary with a new version. Clears cached URLs and simulator builds so all automations use the new version on next run. Requires app_id and file_url. Optional: version.
  • create_mobile_automation — Create a test script. Requires name, app_id, script_type (maestro for YAML, appium for Appium Python, appium_js for Appium JavaScript), and script (the test script content).
  • run_mobile_test — Trigger a test run on a real BrowserStack device. All script types run via the Appium 2.x W3C WebDriver protocol. Requires automation_id and device (e.g. "Google Pixel 8", "iPhone 15 Pro"). Returns run ID for tracking. Video, Appium logs, device logs, and network logs are collected automatically.
  • list_mobile_runs — Get results for mobile test runs. Optional filters: automation_id, status (queued, running, passed, failed, error, archived), limit. Archived runs excluded from default listing.

Example Workflow — Android

  1. upload_mobile_app → upload your APK
  2. Record test in browser → actions captured automatically
  3. run_mobile_test → run on Google Pixel 8 (real device)
  4. list_mobile_runs → check results with video and logs
  5. Failures auto-create bug reports with failure snapshot and step breakdown

Example Workflow — iOS

  1. upload_mobile_app → upload your IPA (for real-device runs)
  2. Upload simulator .app build on app detail page (for recording)
  3. Record test in browser → actions captured from simulator
  4. run_mobile_test → run on iPhone 15 Pro (real device, uses IPA)
  5. update_mobile_app → replace IPA with new version when ready

Compatible Clients

bugAgent works with any client that supports the Model Context Protocol. Here are setup guides for popular clients:

🤖 Claude Desktop

Open Settings → Developer → Edit Config, then add:

claude_desktop_config.json
{
  "mcpServers": {
    "bugagent": {
      "command": "npx",
      "args": ["-y", "@bugagent/mcp-server"],
      "env": {
        "BUGAGENT_API_KEY": "ba_live_your_key_here"
      }
    }
  }
}

Restart Claude Desktop after saving.

✳️ Cursor

Open Settings → MCP Servers → Add Server, or edit .cursor/mcp.json in your project root:

.cursor/mcp.json
{
  "mcpServers": {
    "bugagent": {
      "command": "npx",
      "args": ["-y", "@bugagent/mcp-server"],
      "env": {
        "BUGAGENT_API_KEY": "ba_live_your_key_here"
      }
    }
  }
}

🌊 Windsurf

Open Settings → MCP → Add Server, or edit your MCP config file:

mcp_config.json
{
  "mcpServers": {
    "bugagent": {
      "command": "npx",
      "args": ["-y", "@bugagent/mcp-server"],
      "env": {
        "BUGAGENT_API_KEY": "ba_live_your_key_here"
      }
    }
  }
}

💻 Claude Code (CLI)

Add bugAgent directly from the terminal:

claude mcp add bugagent -- npx -y @bugagent/mcp-server

Set your API key with export BUGAGENT_API_KEY=ba_live_... before launching.

🔧 Other MCP Clients

Any client supporting MCP stdio transport works with bugAgent. Use the standard configuration:

  • Command: npx
  • Args: ["-y", "@bugagent/mcp-server"]
  • Env: BUGAGENT_API_KEY
CLI

Getting Started with CLI

The bugAgent CLI gives you full control over bug reports, feature requests, projects, and integrations from your terminal. Use it to:

  • Automate workflows — Integrate bug reporting into CI/CD pipelines, scripts, and cron jobs
  • Bulk operations — List, filter, and manage reports without leaving your terminal
  • Pipe-friendly output — JSON, YAML, and raw formats for composing with jq, yq, and other tools
  • Fast iteration — No browser needed — create and update reports in seconds

Installation

npm install -g @bugagent/cli

Verify the installation:

bugagent --version

Authentication

Set your API key as an environment variable:

export BUGAGENT_API_KEY=ba_live_your_key_here

Or pass it directly with the --api-key flag:

bugagent reports list --api-key ba_live_your_key_here

🔑 Get your API key from the bugAgent console. Keys start with ba_live_.

For persistent auth, add the export to your shell profile (~/.bashrc, ~/.zshrc, etc.).

Usage

Commands follow the pattern:

bugagent <resource> <action> [flags]

Subresources are addressed as nested subcommands:

bugagent reports comments add --report-id <id> --body "Reproduced on v2.1"

Use --help on any command for details:

bugagent reports --help
bugagent reports create --help

Example Session

Terminal
# List your projects
bugagent projects list

# Create a bug report in your default project
bugagent reports create \
  --title "Checkout 500 on discount code" \
  --description "Applying SAVE20 returns HTTP 500" \
  --severity critical \
  --type logic

# View recent reports
bugagent reports list --limit 5 --format pretty

# Get full details on a report
bugagent reports get rpt_abc123

# Sync a report to Jira
bugagent jira sync --report-id rpt_abc123

# Check your usage
bugagent usage get --format json

CLI Features

The CLI provides commands for:

  • reports — Create, list, get, update, and flush bug reports
  • projects — Create, list, update, and delete projects
  • keys — Generate, list, regenerate, and revoke API keys
  • jira — Connect, sync reports, and configure Jira settings
  • usage — Check current usage against plan limits
  • stats — View analytics and breakdowns
  • profile — View and update your profile and settings
  • auth — Login, register, and manage credentials

Global Flags

  • --api-key <key> — Override the API key for this command
  • --format <fmt> — Output format: json, yaml, pretty, raw
  • --debug — Show request/response details for troubleshooting
  • --help — Show help for any command
  • --version — Print the CLI version

Output Formats

The CLI supports multiple output formats for different use cases:

  • json — Machine-readable JSON. Ideal for piping to jq or other tools.
  • yaml — Human-friendly YAML output for config files and readability.
  • pretty — Default. Colorized, formatted output designed for the terminal.
  • raw — Unformatted output. Useful for scripting and automation.

Filtering with --transform

Use --transform with GJSON syntax to query and filter output data:

# Default pretty output
bugagent reports list

# JSON for piping to other tools
bugagent reports list --format json

# YAML
bugagent reports list --format yaml

# Raw (no formatting)
bugagent reports get rpt_abc123 --format raw

# Filter with GJSON syntax
bugagent reports list --format json \
  --transform "items.#(severity==critical).title"
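
Note that the GJSON query items.#(severity==critical).title returns the title of the first matching item. For comparison, the same filter in plain Python over the --format json output (the items envelope follows the example above):

```python
import json

# Equivalent of the GJSON query `items.#(severity==critical).title`,
# which yields the title of the FIRST matching item. The "items"
# envelope follows the transform example; the payload is illustrative.
payload = json.loads("""
{"items": [
  {"title": "Slow search", "severity": "medium"},
  {"title": "Checkout 500 on discount code", "severity": "critical"}
]}
""")
title = next(i["title"] for i in payload["items"] if i["severity"] == "critical")
print(title)  # Checkout 500 on discount code
```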

AI Skill

The CLI is also available as an AgentSkill, allowing AI coding assistants to use bugAgent on your behalf.

What is an AgentSkill?

AgentSkills let AI coding assistants (Claude Code, Cursor, etc.) invoke CLI tools contextually. The bugAgent skill gives your AI assistant the ability to file bugs, check project status, and sync to Jira — all without you typing a command.

Install the Skill

claude skills install bugagent --from @bugagent/mcp-server

Once installed, the context-aware AI Assistant can use bugAgent commands naturally — with full knowledge of your product, testing guidelines, and uploaded documentation:

AI Assistant Prompt
"File a critical bug: the payment webhook is returning
a 403 after the latest deploy. It affects all Stripe
events. Assign it to the payments project."

The skill translates the natural language into the appropriate CLI commands and executes them.

🎬 Session Replay + AI Assistant: When Session Replay is enabled (Pro/Team plans), the AI Assistant can reference the captured user session — clicks, navigation, errors, and network failures from the last 60 seconds — to auto-draft richer, more accurate bug reports with full reproduction context.

Get Help

Need assistance? We're here to help.