How Spexus helps your role

Every member of the team gets an AI copilot at the stage where they work.

Product Manager

Turn product vision into concrete specifications

Problem

You carry the product vision in your head and retell it in calls and chats. By the time implementation starts, half of the context is gone and the result no longer matches expectations.

How Spexus helps
  • Describe a feature idea in your own words and the AI analyst will ask follow-up questions and help shape it into an epic with user stories
  • Capture priorities, goals, and business context so they become part of the specification
  • Track delivery progress: which tasks are completed and which requirements are covered
Example

"We need report export" -> after a 30-minute dialogue with the AI analyst you have an epic, 5 user stories, 12 requirements, and acceptance criteria for every scenario.

Tech Lead

Architectural decisions are recorded and enforced

Problem

You make architectural decisions and document them in Confluence or README files, but AI agents never see them and generate code however they see fit. Every PR still needs review for standards compliance.

How Spexus helps
  • Record architecture, code style, and technical standards in steering documents
  • AI agents receive those documents automatically through MCP and code against your rules
  • The AI analyst helps you choose an architecture based on the requirements rather than out of habit
Example

You document: "we use Clean Architecture, Go, PostgreSQL, no ORM." Every AI agent working through Spexus sees those constraints and follows them.
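As an illustration, a steering document for that stack might contain something like the sketch below; the file name and layout are an assumption for the example, not a fixed Spexus format:

```text
# Steering: backend standards (illustrative)

Architecture: Clean Architecture (handlers -> use cases -> repositories)
Language:     Go
Database:     PostgreSQL
Data access:  plain SQL via database/sql; no ORM
Code style:   gofmt-formatted, errors wrapped with context, no panics in handlers
```

Because the document travels with every task, an agent generating a new repository layer knows to write plain SQL instead of reaching for an ORM.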

Developer

Build features instead of reconstructing context

Problem

Before you can write code, you spend time figuring out what exactly to build, which constraints matter, and what the dependencies are. You also have to rewrite prompts for AI over and over again.

How Spexus helps
  • Each task already contains everything: what to implement, which requirements to respect, and which standards to follow
  • Your AI agent, whether Claude Code, Cursor, or Codex, gets the context through MCP automatically
  • Tasks are ordered correctly so you can just pick the next one and work
Example

You open Claude Code and say "take the next task from Spexus" - the agent reads the task, requirements, and steering documents, then writes code. You review the result.
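For Claude Code specifically, wiring an agent to an MCP server is a small project-level config. A sketch of what that could look like in a `.mcp.json` file is below; the server name, command, and flags are assumptions for the example, not the actual Spexus CLI:

```json
{
  "mcpServers": {
    "spexus": {
      "command": "spexus-mcp",
      "args": ["--project", "my-project"]
    }
  }
}
```

Once the server is registered, the agent can pull tasks, requirements, and steering documents without you pasting them into the prompt.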

QA Engineer

Acceptance criteria become a working tool, not a formality

Problem

You write test cases separately from requirements. Developers do not always read them. AI agents do not know about them. Acceptance turns into a manual comparison exercise.

How Spexus helps
  • Add acceptance criteria in Given/When/Then format directly to user stories
  • The acceptance agent automatically compares code with your criteria
  • You get a report showing what passed, what failed, and where the gaps are
  • Edge cases and negative scenarios are captured at specification time before coding starts
Example

You add 8 acceptance criteria to a user story. After the execution agent writes the code, the acceptance agent checks all 8 scenarios. 7 pass, 1 needs rework.
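One of those criteria might read like the Gherkin scenario below; it is an illustrative example for a report-export story, not taken from a real project:

```gherkin
Scenario: Export fails when the report exceeds the size limit
  Given a report larger than the configured export size limit
  When the user requests a CSV export
  Then the export is rejected with a clear error message
  And no partial file is produced
```

Written this way, the same scenario serves the developer as a spec, the acceptance agent as a check, and you as the acceptance report line item.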

Systems Analyst

Requirements that both humans and AI can understand

Problem

You write requirements, but they disappear inside documentation. Developers interpret them in different ways, and AI agents do not see them. Traceability from requirement to code is manual work.

How Spexus helps
  • The AI analyst helps phrase requirements in EARS style so they are unambiguous and testable
  • The hierarchy epic -> story -> requirement -> acceptance criterion gives you full traceability
  • Requirements automatically become context for AI agents during both coding and acceptance
Example

Instead of a fuzzy statement, the AI analyst helps you produce a precise EARS requirement with a measurable SLA and business context.
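As an illustration, here is a fuzzy statement rewritten in the common EARS event-driven pattern (WHEN trigger, the system SHALL response); the numbers are invented for the example:

```text
Fuzzy:  "Export should be fast."

EARS:   WHEN a user requests a report export of up to 10,000 rows,
        the system SHALL deliver the generated file within 30 seconds.
```

The EARS form names the trigger, the responsible system, and a measurable response, which is what makes the requirement testable by both humans and the acceptance agent.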

Continuous improvement

Tune AI agents for your team and improve them over time

Every agent is backed by a prompt that the team can edit and refine to match its own delivery practice.

  • Improve existing agents: if the analyst asks the wrong questions or the acceptor misses edge cases, update the prompt
  • Create new agents such as a critic, researcher, strict QA, or onboarding guide
  • Share agents across projects so a strong prompt can be reused elsewhere
Example

A team added mandatory negative-path checks to the acceptor prompt, and every acceptance review started including error handling.
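The change in that example can be as small as a few lines appended to the acceptor's prompt. An illustrative fragment, not the actual Spexus prompt:

```text
For every acceptance review, in addition to the listed criteria:
- exercise at least one negative path per scenario (invalid input,
  missing permissions, dependency failure);
- flag any error that is swallowed or returned without context.
```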

Protection from conflicting requirements

A new feature does not break what already works

When you add a new feature, the AI analyst sees the full requirements landscape and can check consistency, completeness, dependencies, and relationships between them.

  • Consistency - a new requirement does not conflict with existing ones
  • Completeness - there are no gaps in the specification
  • Dependencies - which existing components will be affected by the change
  • Relationships - requirements can be linked through depends_on, blocks, or conflicts_with
Example

A conflict between bulk export and an API response size limit is detected before coding starts, not during acceptance testing.
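Expressed through those link types, the conflict from the example might be recorded like this; the YAML layout and identifiers are assumptions for illustration, while the link names (`depends_on`, `conflicts_with`) come from Spexus:

```yaml
# Illustrative requirement records; only the link fields are Spexus terms.
- id: REQ-041
  title: Bulk export of all reports in one request
  conflicts_with: [REQ-017]   # API response size limit
  depends_on: [REQ-032]       # report generation service
```

With the link in place, the AI analyst can surface the REQ-017 conflict the moment REQ-041 is drafted, instead of during acceptance testing.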