Advisor Skill
Add LaunchLabs pre-flight testing to your AI coding assistant. Assessment → Fix → Repeat until green.
🔄 The Feedback Loop
Your AI calls the LaunchLabs API, gets issues plus fix code, applies the fixes, and re-runs the assessment. It keeps going until the app earns an A grade. One prompt, hands-off.
How It Works
Your AI orchestrates. We do the testing and analysis. Credits deducted per advisor run.
The Advisor Skill is just your AI calling our API. No special packages to install — just teach your AI the API endpoints and let it run the loop.
Requirements
- LaunchLabs API key — Get one from your account dashboard
- AI with HTTP capability — Must be able to make API calls
Cursor Setup
Add this to your .cursorrules or project instructions:
# LaunchLabs Pre-Flight Testing
When asked to test, audit, or pre-flight check a website:
1. Call the LaunchLabs API to run agents:
POST https://getlaunchlabs.com/api/assessment
Headers:
Authorization: Bearer ll_YOUR_API_KEY
Content-Type: application/json
Body: {"url": "https://the-site.com", "agents": ["security", "seo", "mobile"]}
2. Parse the response for issues and fixes
3. Apply the fixes to the codebase
4. Re-run the assessment to verify
5. Repeat until grade is A or B
Available agents: security, performance, mobile, accessibility, seo,
copy, api, trust, legal, auth, billing, onboarding, journey, account, admin
Bundles: "quick", "prelaunch"
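The API call in step 1 can be sketched in Python with the standard library. This is a minimal sketch, assuming only the endpoint, headers, and body shown above; `build_request` is a hypothetical helper name, not part of any LaunchLabs SDK.

```python
import json
import urllib.request

API_URL = "https://getlaunchlabs.com/api/assessment"

def build_request(api_key: str, url: str, agents=None, bundle=None):
    """Build the POST request described above. Pass exactly one of
    `agents` (a list of agent IDs) or `bundle` ("quick"/"prelaunch")."""
    body = {"url": url}
    if bundle is not None:
        body["bundle"] = bundle
    else:
        body["agents"] = agents or []
    return urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending it (requires a valid key):
# with urllib.request.urlopen(build_request(
#         "ll_YOUR_API_KEY", "https://the-site.com",
#         agents=["security", "seo", "mobile"])) as resp:
#     result = json.load(resp)
```
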
Example Cursor Prompts
"Pre-flight check my app. Use LaunchLabs API to test and fix until A grade."
"Run the quick assessment bundle on localhost:3000, fix everything you find."
"Security audit with LaunchLabs. Fix issues, re-test, repeat."
Claude Setup
In Claude Projects, add this as project knowledge:
# LaunchLabs API Integration
API Base: https://getlaunchlabs.com/api
Auth: Bearer token (API key from LaunchLabs account)
## Run Assessment
POST /api/assessment
{
"url": "https://example.com",
"agents": ["security", "seo"] // or "bundle": "quick"
}
Response includes:
- grade: A/B/C/D/F
- issues: array of problems found
- fixes: exact code to fix each issue
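A parsing sketch for that response shape. Only the top-level fields (`grade`, `issues`, `fixes`) come from the docs above; the per-entry field names (`id`, `summary`, `code`) are illustrative assumptions about the payload.

```python
# Sample response matching the documented top-level shape;
# the fields inside each issue/fix entry are assumptions.
sample = {
    "grade": "C",
    "issues": [{"id": "sec-1", "summary": "Missing CSP header"}],
    "fixes": [{"id": "sec-1", "code": "add_header Content-Security-Policy ..."}],
}

def summarize(result: dict) -> str:
    """Render grade plus each issue, noting whether a fix was returned."""
    lines = [f"Grade: {result['grade']}"]
    fixes_by_id = {f["id"]: f for f in result.get("fixes", [])}
    for issue in result.get("issues", []):
        status = "fix available" if issue["id"] in fixes_by_id else "manual"
        lines.append(f"- {issue['summary']} ({status})")
    return "\n".join(lines)
```
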
## Feedback Loop
1. POST /api/assessment with URL and agents
2. Apply fixes from response
3. POST /api/assessment again
4. Repeat until grade >= B
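One subtlety when implementing the exit condition: letter grades sort the wrong way as strings (`"A" < "B"` lexicographically), so a naive string comparison inverts the scale. A small helper, assuming the A–F scale listed above:

```python
# Rank map for the documented A/B/C/D/F scale; higher is better.
GRADE_ORDER = {"A": 5, "B": 4, "C": 3, "D": 2, "F": 1}

def grade_passes(grade: str, threshold: str = "B") -> bool:
    """True when `grade` is at least as good as `threshold`.
    Avoids the string-comparison trap where "A" < "B"."""
    return GRADE_ORDER[grade] >= GRADE_ORDER[threshold]
```
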
⚠️ Claude Limitations
Claude can analyze results but can't make HTTP calls directly without MCP or computer use. You may need to run the API calls yourself and paste in the results, or pair Claude with an MCP server that provides HTTP access.
OpenClaw Setup
OpenClaw has full HTTP capability via exec. Add to your TOOLS.md:
## LaunchLabs Pre-Flight
API Key: ll_YOUR_KEY_HERE
# Run an assessment:
curl -X POST https://getlaunchlabs.com/api/assessment \
-H "Authorization: Bearer ll_YOUR_KEY" \
-H "Content-Type: application/json" \
-d '{"url": "https://myapp.com", "agents": ["security", "seo"]}'
# Or use bundles:
-d '{"url": "...", "bundle": "quick"}'
-d '{"url": "...", "bundle": "prelaunch"}'
OpenClaw Magic Prompt
"Run LaunchLabs pre-flight on my app at localhost:3000.
Use the quick bundle. Fix every issue you find.
Keep re-running the assessment and fixing until we hit A grade."
✅ Best Experience
OpenClaw can run the full loop autonomously: API call → parse response → edit files → re-assess. Zero manual steps.
Lovable / Replit / v0
For vibe coding tools, just ask them to use the API:
"Before we deploy, call the LaunchLabs API at
https://getlaunchlabs.com/api/assessment with our preview URL.
My API key is ll_xxx. Run the quick bundle.
Fix any issues, then re-run the assessment until we pass."
Most vibe coding tools can make HTTP requests. If not, they can generate the curl command for you to run.
Magic Prompts
These prompts trigger the automated feedback loop:
# Full pre-launch check
"Run LaunchLabs prelaunch bundle on my app. Fix every issue.
Keep testing and fixing until we're A grade across all agents."
# Quick sanity check
"Quick LaunchLabs assessment. Fix critical issues, re-test until clean."
# Specific agents
"Run LaunchLabs security and accessibility agents.
Fix all issues recursively until both pass."
# Pre-deploy gate
"Before deploy: LaunchLabs pre-flight. Fix what you can,
tell me what needs manual attention. Don't stop until green or blocked."
Advisor Selection
| Advisor | ID | Best For |
|---|---|---|
| Noah (Security) | security | XSS, headers, vulns |
| Ethan (Performance) | performance | Core Web Vitals |
| Lina (Mobile) | mobile | Responsive, touch |
| Maya (Accessibility) | accessibility | WCAG compliance |
| Sofia (SEO) | seo | Meta, structure |
| Grace (Copy) | copy | Messaging, CTAs |
| Arjun (API) | api | Endpoints, errors |
| Hana (Trust) | trust | Social proof |
| Marcus (Legal) | legal | Privacy, ToS |
| Diego (Auth) | auth | Login flows |
| Amy (Billing) | billing | Checkout, pricing |
| Priya (Onboarding) | onboarding | Signup → first-use |
| Zoe (Journey) | journey | Full user lifecycle |
| Omar (Account) | account | Settings, profile |
| Kara (Admin) | admin | Admin panels |
Bundles
| Bundle | Advisors |
|---|---|
| quick | seo, mobile, performance, copy, security |
| prelaunch | All 15 advisors |
The Feedback Loop
The real power is continuous iteration. Your AI runs this loop:
// Pseudocode - what your AI does
while (true) {
  result = POST /api/assessment { url, agents }
  if (result.grade == "A" || result.grade == "B") {
    print("✅ Pre-flight passed!")
    break
  }
  for (issue in result.issues) {
    apply(issue.fix)  // Edit files with the fix code
  }
  rebuild()  // If needed
  // Loop continues - re-runs the assessment with fixes applied
}
⏱️ How Long?
A typical app goes from C → A in 3–8 iterations, 15–45 minutes. Your AI runs it unattended.
Compatible Tools
| Tool | Capability | Support |
|---|---|---|
| Cursor | Full HTTP + file editing | Full Support |
| OpenClaw | Best automation | Full Support |
| Lovable | API integration | Full Support |
| Replit | API integration | Full Support |
| Windsurf | Full workspace | Full Support |
| Cline | MCP compatible | Full Support |
🚀 Ready?
Get your API key from Account → API Keys and start testing.