Advisor Skill

Add LaunchLabs pre-flight testing to your AI coding assistant. Assessment → Fix → Repeat until green.

📥 Download SKILL.md (OpenClaw/Claude) 📥 Download .cursorrules (Cursor)
🔄 The Feedback Loop

Your AI calls the LaunchLabs API, gets back issues plus fix code, applies the fixes, then re-runs the assessment. It keeps going until the app earns an A grade. One prompt, hands-off.

How It Works

🤖 Your AI 📡 LaunchLabs API 🔍 We Test 🔧 Fix Code 🔁 Repeat

Your AI orchestrates. We do the testing and analysis. Credits are deducted per advisor run.

The Advisor Skill is just your AI calling our API. No special packages to install — just teach your AI the API endpoints and let it run the loop.
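Since the skill is "just your AI calling our API," the whole integration fits in a few lines. A minimal sketch of that call using only the Python standard library (the endpoint and header format come from this page; the exact response fields are whatever the API returns):

```python
import json
import urllib.request

API_BASE = "https://getlaunchlabs.com/api"  # from this page

def build_request(site_url: str, api_key: str, agents: list[str]) -> urllib.request.Request:
    """Build the POST /api/assessment request described on this page."""
    body = json.dumps({"url": site_url, "agents": agents}).encode()
    return urllib.request.Request(
        f"{API_BASE}/assessment",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def run_assessment(site_url: str, api_key: str, agents: list[str]) -> dict:
    """Send the request and decode the JSON result (grade, issues, fixes)."""
    with urllib.request.urlopen(build_request(site_url, api_key, agents)) as resp:
        return json.load(resp)
```

Any HTTP client works equally well; the only moving parts are the bearer token and the JSON body.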

Cursor Setup

Add this to your .cursorrules or project instructions:

# LaunchLabs Pre-Flight Testing

When asked to test, audit, or pre-flight check a website:

1. Call the LaunchLabs API to run agents:
   POST https://getlaunchlabs.com/api/assessment
   Headers: 
     Authorization: Bearer ll_YOUR_API_KEY
     Content-Type: application/json
   Body: {"url": "https://the-site.com", "agents": ["security", "seo", "mobile"]}

2. Parse the response for issues and fixes
3. Apply the fixes to the codebase
4. Re-run the assessment to verify
5. Repeat until grade is A or B

Available agents: security, performance, mobile, accessibility, seo, 
copy, api, trust, legal, auth, billing, onboarding, journey, account, admin

Bundles: "quick", "prelaunch"
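Steps 2-3 of those rules ("parse the response for issues and fixes," "apply the fixes") can be sketched as a small helper. The field names `issues`, `file`, and `fix` are assumptions about the response shape, not a documented contract; check an actual response before relying on them:

```python
from pathlib import Path

def apply_fixes(result: dict, root: str = ".") -> list[str]:
    """Write fix snippets from an assessment response into the codebase.

    Assumed response shape: {"issues": [{"file": ..., "fix": ...}, ...]} -
    adjust the keys to match what the API actually returns.
    """
    applied = []
    for issue in result.get("issues", []):
        target = Path(root) / issue["file"]   # hypothetical: file the issue points at
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(issue["fix"])       # hypothetical: corrected file contents
        applied.append(issue["file"])
    return applied
```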

Example Cursor Prompts

"Pre-flight check my app. Use LaunchLabs API to test and fix until A grade."

"Run the quick assessment bundle on localhost:3000, fix everything you find."

"Security audit with LaunchLabs. Fix issues, re-test, repeat."

Claude Setup

In Claude Projects, add this as project knowledge:

# LaunchLabs API Integration

API Base: https://getlaunchlabs.com/api
Auth: Bearer token (API key from LaunchLabs account)

## Run Assessment
POST /api/assessment
{
  "url": "https://example.com",
  "agents": ["security", "seo"]  // or "bundle": "quick"
}

Response includes:
- grade: A/B/C/D/F
- issues: array of problems found  
- fixes: exact code to fix each issue

## Feedback Loop
1. POST /api/assessment with URL and agents
2. Apply fixes from response
3. POST /api/assessment again
4. Repeat until the grade is B or better
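The response shape above can be pinned down as a type, which makes the loop's exit condition explicit. A sketch (the typing is an illustration; the API defines the real schema):

```python
from typing import List, TypedDict

class AssessmentResult(TypedDict):
    grade: str          # "A" through "F"
    issues: List[dict]  # problems found
    fixes: List[dict]   # exact code to fix each issue

def passed(result: AssessmentResult) -> bool:
    """Exit condition for the feedback loop: stop once the grade is A or B."""
    return result["grade"] in ("A", "B")
```

Note the membership test: a naive string comparison like `grade >= "B"` would wrongly accept C, D, and F, since those sort after "B" alphabetically.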
⚠️ Claude Limitations

Claude can analyze results but can't make HTTP calls directly without MCP or computer use. You may need to manually run the API calls and paste results, or use Claude with an MCP server that provides HTTP.

OpenClaw Setup

OpenClaw has full HTTP capability via exec. Add to your TOOLS.md:

## LaunchLabs Pre-Flight

API Key: ll_YOUR_KEY_HERE

# Run an assessment:
curl -X POST https://getlaunchlabs.com/api/assessment \
  -H "Authorization: Bearer ll_YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://myapp.com", "agents": ["security", "seo"]}'

# Or use bundles:
-d '{"url": "...", "bundle": "quick"}'
-d '{"url": "...", "bundle": "prelaunch"}'

OpenClaw Magic Prompt

"Run LaunchLabs pre-flight on my app at localhost:3000.
Use the quick bundle. Fix every issue you find.
Keep re-running the assessment and fixing until we hit A grade."
✅ Best Experience

OpenClaw can run the full loop autonomously: API call → parse response → edit files → re-assess. Zero manual steps.

Lovable / Replit / v0

For vibe coding tools, just ask them to use the API:

"Before we deploy, call the LaunchLabs API at 
https://getlaunchlabs.com/api/assessment with our preview URL.
My API key is ll_xxx. Run the quick bundle.
Fix any issues, then assess again until we pass."

Most vibe coding tools can make HTTP requests. If not, they can generate the curl command for you to run.

Magic Prompts

These prompts trigger the automated feedback loop:

# Full pre-launch check
"Run LaunchLabs prelaunch bundle on my app. Fix every issue.
Keep testing and fixing until we're A grade across all agents."
# Quick sanity check
"Quick LaunchLabs assessment. Fix critical issues, re-test until clean."
# Specific agents
"Run LaunchLabs security and accessibility agents.
Fix all issues recursively until both pass."
# Pre-deploy gate
"Before deploy: LaunchLabs pre-flight. Fix what you can,
tell me what needs manual attention. Don't stop until green or blocked."

Advisor Selection

Advisor               ID              Best For
Noah (Security)       security        XSS, headers, vulns
Ethan (Performance)   performance     Core Web Vitals
Lina (Mobile)         mobile          Responsive, touch
Maya (Accessibility)  accessibility   WCAG compliance
Sofia (SEO)           seo             Meta, structure
Grace (Copy)          copy            Messaging, CTAs
Arjun (API)           api             Endpoints, errors
Hana (Trust)          trust           Social proof
Marcus (Legal)        legal           Privacy, ToS
Diego (Auth)          auth            Login flows
Amy (Billing)         billing         Checkout, pricing
Priya (Onboarding)    onboarding      Signup → first-use
Zoe (Journey)         journey         Full user lifecycle
Omar (Account)        account         Settings, profile
Kara (Admin)          admin           Admin panels

Bundles

Bundle      Advisors
quick       seo, mobile, performance, copy, security
prelaunch   All 15 advisors
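In the request body, a bundle takes the place of the explicit agent list. A small sketch of building either variant (the key names `url`, `agents`, and `bundle` come from the examples on this page):

```python
def assessment_body(site_url: str, agents=None, bundle=None) -> dict:
    """Body for POST /api/assessment: pass explicit agents OR a bundle name."""
    if bundle is not None:
        return {"url": site_url, "bundle": bundle}       # "quick" or "prelaunch"
    return {"url": site_url, "agents": list(agents or [])}
```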

The Feedback Loop

The real power is continuous iteration. Your AI runs this loop:

// Pseudocode - what your AI does
while (true) {
  result = POST /api/assessment { url, agents }
  
  if (result.grade >= "B") {
    print("✅ Pre-flight passed!")
    break
  }
  
  for (issue in result.issues) {
    apply(issue.fix)  // Edit files with the fix code
  }
  
  rebuild()  // If needed
  // Loop continues - re-assess with fixes applied
}
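The pseudocode above translates directly into a runnable loop. In this sketch `assess` and `apply_fix` are injected callables (in practice, `assess` would POST to /api/assessment and `apply_fix` would edit files); the `max_iters` cap is an assumption to keep a misbehaving run from looping forever:

```python
def preflight_loop(assess, apply_fix, max_iters: int = 10):
    """Assess → fix → re-assess until the grade is A or B.

    assess: () -> {"grade": str, "issues": [...]}  (e.g. POST /api/assessment)
    apply_fix: (issue) -> None                     (e.g. edit files with the fix code)
    Returns (final_grade, rounds_completed).
    """
    result = {"grade": "F", "issues": []}
    for rounds in range(max_iters):
        result = assess()
        if result["grade"] in ("A", "B"):
            return result["grade"], rounds      # ✅ pre-flight passed
        for issue in result["issues"]:
            apply_fix(issue)
        # rebuild here if needed, then loop re-assesses with fixes applied
    return result["grade"], max_iters           # gave up: needs manual attention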
⏱️ How Long?

A typical app goes from C → A in 3-8 iterations (15-45 minutes). Your AI runs it unattended.

Compatible Tools

Tool         Notes                      Support
Cursor       Full HTTP + file editing   Full Support
🦞 OpenClaw   Best automation            Full Support
💜 Lovable    API integration            Full Support
🔵 Replit     API integration            Full Support
🌊 Windsurf   Full workspace             Full Support
🔧 Cline      MCP compatible             Full Support
🚀 Ready?

Get your API key from Account → API Keys and start testing.