QA SOP
Role: QA Engineer
Prerequisites: Complete the Meta SOP first
Visual reference: Whimsical Flowchart
You are a QA developer, not a QA tester. You own all bugs and the test suites. Every bug you encounter is yours to resolve — you fix it locally in code, open a PR, and merge it. The backlog shrinks because you fix things, not because you file tickets. You also own the regression test suites — keeping them green, expanding coverage, and making sure they catch real issues.
You are trained to handle bugs across cred-web-commercial and cred-api-commercial. Your skillset will expand over time to cover more of the stack.
Deployment Cadence
Merges to develop stop on Fridays. Monday to Wednesday is your dedicated QA testing window on a stable staging environment. Production deploys on Wednesday after your sign-off. This is your buffer to catch regressions before they hit customers — use it aggressively.
High-Risk PRs Need Your Review
Database migrations, new tables, and other high-risk changes require a manual code review from a senior back-end engineer before merging. You may be asked to test these changes after merge — they are the most likely to cause production issues.
The Bug Fix Loop
(See the Whimsical flowchart for the visual)
1. Bug encountered (existing backlog, triage, product support channels)
↓
2. Fix it locally in Cursor
↓
3. Open PR
↓
4. CI automation handles conflicts and code review
↓
5. Passes regression tests + manual verify of fix
↓
6. Merge
Daily Rhythm
Your job is to crunch through bugs from every source and add tests so we never see the same bug twice.
1. Crunch the Bug Backlog
Open Linear, filter to the bug backlog. Pull the highest priority unassigned bugs and work through them top to bottom. Don't cherry-pick easy ones — grind through by priority until the backlog is empty.
2. Pick Up From Triage
Check the triage queue for newly reported bugs. Claim them, reproduce them, fix them. If it came from triage, it's already been flagged as a real issue — treat it as high priority.
3. Test Develop
Regularly test the develop branch. Click through flows, try edge cases, break things on purpose. When you find something broken:
- Fix it immediately if it's quick to diagnose
- If it needs more investigation: add it to Linear, assign to yourself, fix it in your next dedicated slot
- If it's clearly a regression from a recent merge: Slack the engineer who merged it with the PR link — then still try to fix it yourself before waiting for them
4. Product Support Bugs
Bugs reported through product support channels are real customer pain. Pick these up with urgency — they represent issues hitting users right now.
5. Test Post-Merge Features
When an engineer flags a complex or risky feature for QA review after merge: test it thoroughly, fix anything you find with Cursor, and only escalate if genuinely stuck.
6. Verify Feature Flags & Analytics
Every feature ships with a PostHog feature flag and analytics events. When testing a feature on staging or a review app, verify both.
Feature flag verification:
- Confirm the flag exists in PostHog for the environment you are testing (dev/staging/prod)
- Toggle the flag off — the feature must be completely hidden. No broken UI, no empty containers, no console errors
- Toggle the flag on — the feature works end-to-end for the targeted workspaces
- Check workspace targeting — only intended workspaces should see the feature. Log in as a non-targeted workspace and confirm the feature is hidden
- Check the loading state — the page should not flash the feature before the flag resolves. Look for skeleton/null states, not a flicker of gated content
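The loading-state check above boils down to a three-state render guard. A minimal sketch (the type and function names are illustrative, not actual components in our codebase), showing why an unresolved flag must render a skeleton and never fall through to the gated content:

```typescript
// Illustrative flag states: PostHog flags resolve asynchronously, so there
// is a window where the value is not yet known.
type FlagState = "loading" | "on" | "off";

// Render guard: the feature is only shown once the flag has resolved to
// "on". While loading we render a skeleton, never a flash of gated content.
function gatedView(flag: FlagState): "skeleton" | "feature" | "hidden" {
  if (flag === "loading") return "skeleton";
  return flag === "on" ? "feature" : "hidden";
}
```

If you see gated content flicker before the flag resolves, the component is treating "loading" as "on" — that is the bug to fix.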
Analytics verification:
- Open the PostHog event stream (Live Events) for the environment
- Trigger each tracked user action in the feature (clicks, submissions, exports, errors)
- Confirm each event appears with the correct snake_case name and feature prefix
- Expand event properties — verify page_path, entity IDs, and action metadata are present and accurate
- Check for duplicates — one action should fire one event, not multiples
- Check for missing events — if a key action has no event, flag it to the engineer
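The naming check above is mechanical enough to script. A sketch of a validator for the snake_case-plus-feature-prefix convention (the helper name and regex are assumptions, not an existing utility in our repos):

```typescript
// Checks that a tracked event name is snake_case and carries the feature
// prefix, e.g. "exports_csv_clicked" for a hypothetical "exports" feature.
function isValidEventName(name: string, featurePrefix: string): boolean {
  const snakeCase = /^[a-z][a-z0-9]*(?:_[a-z0-9]+)*$/;
  return snakeCase.test(name) && name.startsWith(featurePrefix + "_");
}
```

Under these assumptions, "exports_csv_clicked" passes for prefix "exports", while "ExportsCsvClicked" or a hyphenated name fails.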
Failure resilience:
- Open browser DevTools, go to the Network tab, and block requests to posthog.com
- Use the feature normally — it must work identically. Tracking failures must not break any user flow
- If the feature breaks when PostHog is blocked, file a bug — analytics must be non-blocking
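The non-blocking requirement is easiest to satisfy by never letting a capture call throw into the user flow. A minimal sketch, assuming a generic synchronous send function rather than PostHog's actual client API:

```typescript
// Wraps an analytics send so a tracking failure (blocked network, bad
// payload) surfaces as a boolean instead of propagating into the user flow.
function safeCapture(send: () => void): boolean {
  try {
    send();
    return true;
  } catch {
    return false; // tracking failed; the user flow continues untouched
  }
}
```

When a feature breaks with posthog.com blocked, the fix usually amounts to introducing this kind of boundary around the capture calls.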
If any check fails, fix it with Cursor if you can, or flag it to the engineer with the specific failure.
7. Add Tests for Every Bug You Fix
After fixing a bug, add a test that catches it. Every bug fix should come with a regression test so the same issue never ships again. This is how the test suite grows — driven by real bugs, not hypothetical scenarios.
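As a sketch of the pattern (the function and the bug are invented for illustration): suppose the reported bug was a total formatter throwing on an empty list because it reduced without an initial value. The fix and the test that pins it land in the same PR:

```typescript
// The fix: reduce now has an initial value of 0, so an empty list returns
// "0.00" instead of throwing "Reduce of empty array with no initial value".
function formatTotal(amounts: number[]): string {
  return amounts.reduce((sum, n) => sum + n, 0).toFixed(2);
}

// Regression test pinning the bug: it uses the exact failing input from
// the original report, so this class of bug can never ship silently again.
function testEmptyListDoesNotThrow(): void {
  if (formatTotal([]) !== "0.00") throw new Error("regression: empty list");
}
```

The point is the pairing, not the specific code: every fix PR carries the input that used to break.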
Front-End Tooling for QA
You fix bugs in FE code. Have these tools active in Cursor (set up per Meta SOP prerequisites):
- 21st.dev and ShadCN MCP — for pulling correct components when a fix requires new UI
- Cursor Design Mode — toggle in bottom-right corner. Select a component, tweak styles visually, click "Apply" to cascade.
- Stitch / Nano Banana — optional but useful when a fix requires building a new UI state
Component rules: Use exact ShadCN component names. Use only existing Storybook components. Use design token names, not color descriptions. When restyling, keep all event logic — do not replace components with ShadCN primitives. If Cursor uses the wrong component, screenshot it from Figma/Storybook and paste into chat.
How to Fix a Bug
Step 1 — Move the Ticket to In Testing
When you pick up a bug, move the Linear ticket to In Testing so the team knows it's being actively worked:
Using the Linear MCP, move [issue ID] to "In Testing"
and assign it to me.
Step 2 — Reproduce It Reliably
Use the tools you have to find the error yourself before asking anyone:
- GCP Cloud Logs via Cursor — this is your fastest path. Ask Cursor to pull the logs for you:
Using the GCP Observability MCP, find error logs related to
[describe the bug / endpoint / user action] from the last
[timeframe]. Show me the stack traces and failed requests.
- PostHog Session Replay — find the user's session, watch the replay, see exactly what happened and where it broke
- Last resort — ask the customer. Only if logs and replays give you nothing. Keep it short: ask for the exact steps they took and any error they saw.
If you still can't reproduce the bug consistently after all three, add it to Linear with the reproduction steps you've tried and what you found in logs/replays. Move on. An unreproducible bug can't be fixed.
Step 3 — Diagnose and Fix with Cursor
Open Cursor with all five repos in your workspace:
Plan a fix for a bug in [area of the app / repo name].
What happens: [describe the behaviour you see]
What should happen: [describe the expected behaviour]
Error / console output: [paste anything relevant]
Diagnose the root cause and show me the plan before writing any code.
Review the plan, then tell Cursor to implement it. Read and understand what it changed — you need to be able to explain it in the PR description.
Step 4 — Branch Off Develop
git checkout develop
git pull
git checkout -b fix/[brief-description]
Step 5 — Apply and Test the Fix
- Apply Cursor's fix
- Reproduce the original bug — confirm it's gone
- Check nothing adjacent is broken (click around the area of the fix)
- Check loading, empty, and error states if relevant
Step 6 — Open a PR
Don't file the bug and hand it off; fix it and open the PR yourself. The standard: pull the code, fix it locally, verify it, and open the PR, all in one flow.
- Title: [Fix] [brief description of what was broken]
- Description: what was broken, what caused it, what the fix does — three sentences max
Step 7 — Merge
- Wait for regression tests to pass (Rainforest QA green, no client errors)
- Manually verify the fix — reproduce the original bug and confirm it's gone
- Merge it yourself — QA merges their own bug fixes, no review required unless the fix is large or touches core architecture
Feature Flagging Before Merge
If your bug fix involves FE work where APIs are not yet hooked up, feature flag all non-working UI elements before merging. When adding a feature flag, also add PostHog analytics events to every interactive element in the same PR. Always merge to develop — long-lived branches cause conflicts.
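A minimal sketch of that pairing (the component, flag, and event names are all hypothetical): the un-wired button renders nothing while the flag is off, and its one interactive element ships with its analytics event in the same PR:

```typescript
// Stand-in event sink so the sketch is self-contained; in the app this
// would be the PostHog client.
const captured: string[] = [];
function trackEvent(name: string): void {
  captured.push(name);
}

// Flag off: render nothing at all — no disabled stub, no empty container.
// Flag on: the interactive element carries its analytics event from day one.
function exportButtonProps(
  flagOn: boolean
): { label: string; onClick: () => void } | null {
  if (!flagOn) return null;
  return {
    label: "Export CSV",
    onClick: () => trackEvent("exports_csv_clicked"),
  };
}
```

Flagging and instrumentation travel together, so flipping the flag on later never exposes untracked UI.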
Separate PRs for Component Fixes
If you notice a UI component needs fixing while working on a bug, create a separate PR for the component fix. Do not bundle. After merging, tell Cursor to create a Linear ticket and mark it completed.
Updating an Existing PR
To update an existing PR with new changes, just tell Cursor: "commit these changes." It will push to the same branch and update the PR.
Test Restyled Components
If you're restyling something that already works, test it still works — click through the flow, confirm events fire, check edge cases. New UI with no APIs just needs a visual check.
Use Customer Data
Use customer data (transcript analysis, closed/lost deal reasons, customer complaints) to inform which bugs to prioritise and how to improve the UX when fixing them.
When You Can't Solve It
If you've genuinely spent time with Cursor and are still stuck:
The test: Have you given Cursor the error, the reproduction steps, and the relevant code and asked it to diagnose and fix? If not — do that first. If yes and you're still blocked — escalate.
- Slack the relevant developer — share your PR (even if incomplete) so they can see what you've tried
- Create a Linear ticket for the developer with the relevant priority:
Using the Linear MCP, create a bug ticket for [description].
Include: reproduction steps, what I tried, where I got stuck,
relevant error/console output, and link to my PR: [PR URL].
Assign to [developer name] with [priority level] priority.
- Move on — there are other bugs to fix
What QA Does Not Do
- Does not write bug reports and wait. If you find it, you fix it.
- Does not accept "needs a developer" without trying Cursor first. Cursor can fix most things. Try it.
- Does not do net-new feature development. That's Engineering.
- Does not leave a bug unlogged. If you can't fix it immediately, it goes into Linear before you move on.
- Does not fix Playwright failures on developer PRs. Developers own fixing failing tests on their own PRs. QA owns the test suite baseline and infrastructure — not per-PR failures. See Playwright Test Ownership.
Troubleshooting
| Problem | Fix |
|---|---|
| Can't reproduce the bug | Log it in Linear with what you tried. Move on. |
| Cursor can't find the root cause | Give it more context: the specific component name, the network request that's failing, the exact error from the console. |
| Fix works locally but breaks something else | Ask Cursor: "My fix for [X] is causing [Y] to break. What's the conflict and how do I resolve it without breaking either?" |
| Build errors, conflicts, or code review comments | Ask Cursor: "Resolve all build errors, merge conflicts, and code review comments on this PR." Do not fix manually. |
| Rainforest QA failing on your fix PR | Paste the failure output into Cursor. Ask it to fix the failing test or the code causing it. Do not merge with a failing test. |
| Genuinely need a developer | Write up what you tried in Linear, assign to the relevant dev, move on. |
| Not seeing changes on localhost | Open an incognito tab — browser cache is almost always the issue. Use Google Chrome with code and localhost windows side by side for ~5x faster iteration. |
| Cursor uses the wrong component or invents one | Screenshot the correct component from Figma or Storybook and paste it into the Cursor chat as a visual reference. Use exact ShadCN component names. |