Engineering

I Let an AI Agent Read My Jira Tickets Before Reviewing PRs

How GitHub Agentic Workflows + MCP turned my pull request reviews from context-switching nightmares into fully-loaded briefings

There’s a tax every engineer pays when reviewing a pull request.

Before you even look at the diff, you’re doing archaeology: digging through Jira to find the ticket, clicking into Confluence to find the spec, and mentally stitching together why this change exists before you can say anything useful about how it was implemented.

On a good day this takes three minutes.

On a bad day you realize the Confluence page was last updated six months ago and the ticket has seventeen comment threads with crucial context buried in the middle.

I’ve been experimenting with GitHub Agentic Workflows, a new feature from GitHub Next that lets you define AI-powered automation in plain markdown files, and I built a workflow that solves exactly this problem.

It automatically enriches every pull request with context pulled straight from the related Jira issue and its linked Confluence page.

Here’s how it works, and why I think this kind of “Continuous AI” is going to become standard practice.


What GitHub Agentic Workflows Actually Are

Before we get into the specifics, a quick primer.

GitHub Agentic Workflows (gh-aw) let you define autonomous agents as markdown files in your repository under .github/workflows/.

The framework takes care of sandboxed execution, permissions, and safe outputs: a model where the agent can read freely but write operations have to go through pre-approved, sanitized channels (things like creating PR comments, opening issues, or updating labels).

The agent runs inside a GitHub Actions container. You wire it to triggers (a PR opened, a schedule, a label applied), and the markdown file describes in natural language what the agent should do. Under the hood, the gh aw CLI generates a .lock.yml that runs the actual Actions workflow. You write the what; the framework handles the how.

You can plug in different AI engines: GitHub Copilot, Claude, or OpenAI Codex. The markdown definition is model-agnostic.

This is the “Continuous AI” concept GitHub Next has been incubating: not AI you invoke manually on demand, but AI that runs systematically as part of your collaboration infrastructure, the same way CI/CD does.


The MCP Glue: workflows/shared/mcp/atlassian.md

The first thing I built was a shared MCP (Model Context Protocol) configuration for Atlassian. MCP is the protocol that lets AI agents talk to external tools through a standardized interface. Think of it as a plugin system that any agent can use regardless of which underlying model is running.

My workflows/shared/mcp/atlassian.md file declares the Jira and Confluence MCP server configuration that other workflows in the repo can reference. It looks roughly like this:

---
mcp-servers:
  atlassian:
    container: "ghcr.io/sooperset/mcp-atlassian"
    version: "latest"
    env:
      CONFLUENCE_URL: "https://${{ vars.ATLASSIAN_INSTANCE }}.atlassian.net/wiki"
      CONFLUENCE_USERNAME: "${{ vars.ATLASSIAN_EMAIL }}"
      CONFLUENCE_API_TOKEN: "${{ secrets.ATLASSIAN_TOKEN }}"
      JIRA_URL: "https://${{ vars.ATLASSIAN_INSTANCE }}.atlassian.net"
      JIRA_USERNAME: "${{ vars.ATLASSIAN_EMAIL }}"
      JIRA_API_TOKEN: "${{ secrets.ATLASSIAN_TOKEN }}"
      JIRA_PROJECT: "${{ vars.ATLASSIAN_PROJECT_KEY }}"
      CONFLUENCE_SPACE: "${{ vars.ATLASSIAN_PROJECT_KEY }}"
    allowed: ["*"]
---

## Atlassian MCP Server

Shared configuration for Jira and Confluence access.
Provides tools for reading issues, fetching linked pages,
and retrieving acceptance criteria and design specs.

The key insight here is shared. Rather than copy-pasting credentials and server config into every workflow that needs Atlassian access, you define it once and reference it. This is infrastructure thinking applied to AI agents: DRY principles don’t stop at your application code.


The Enricher: workflows/pr_atlassian_enricher.md

The main workflow file is where it gets interesting. Here’s a simplified version of what .github/workflows/pr_atlassian_enricher.md does:

---
on:
  pull_request:
    types: [opened, ready_for_review]

permissions:
  contents: read
  pull-requests: read

network:
  allowed:
    - defaults
    - "${{ vars.ATLASSIAN_INSTANCE }}.atlassian.net"

safe-outputs:
  add-pr-comment:
    header-prefix: "🔍 Atlassian Context"

mcp:
  include: workflows/shared/mcp/atlassian.md
---

## PR Atlassian Enricher

You are a helpful engineering assistant enriching pull requests with context
from related Jira issues and Confluence documentation.

### Steps

1. Read the PR title, description, and branch name.
2. Extract any Jira issue keys (format: `ABC-1234`) from the PR title,
   branch name, description, or Git commits.
3. For each issue found, use the atlassian MCP tool to fetch:
   - Issue summary and description
   - Acceptance criteria
   - Linked Confluence pages
4. For each linked Confluence page, fetch the relevant sections:
   - Design decisions
   - Technical specifications
   - Known constraints or out-of-scope items
5. Compose a concise context comment on the PR that includes:
   - A summary of what the issue is asking for and why
   - Key acceptance criteria the reviewer should check
   - Relevant technical context from the Confluence page
   - Any explicit out-of-scope items (to avoid scope creep reviews)
6. Post a comment with three sections:
   - **Jira Issues**: For each issue, show `[KEY](full-https-url)` - Title - Status
   - **Confluence Pages**: For each page, show `[Page Title](full-https-url)`
   - **Gap Analysis**: List missing requirements or discrepancies

That’s it.

Markdown, natural language, no custom Python scripts, no webhooks you have to host yourself.

The agent triggers on PR open or when a draft is marked ready for review, extracts Jira keys from the branch name or description (most teams already use `feat/ABC-1234-short-description` naming conventions), fetches all the relevant context, and posts a structured comment.
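The key-extraction step is plain pattern matching. Here's a minimal sketch of what the agent effectively does; the regex and helper function are my own illustration, not gh-aw internals:

```python
import re

# Jira issue keys: two or more uppercase letters/digits, a hyphen, then digits
# (e.g. ABC-1234). Word boundaries avoid matching inside longer identifiers.
JIRA_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def extract_jira_keys(*sources: str) -> list[str]:
    """Collect unique Jira keys from PR title, branch name, description, etc."""
    seen: dict[str, None] = {}  # dict preserves insertion order, dedupes keys
    for text in sources:
        for key in JIRA_KEY.findall(text.upper()):
            seen.setdefault(key, None)
    return list(seen)

print(extract_jira_keys(
    "feat/abc-1234-rate-limiting",          # branch name (case-insensitive)
    "ABC-1234: add token-bucket limiter",   # PR title
    "Relates to ABC-1301 and the v2 spec",  # description
))
# → ['ABC-1234', 'ABC-1301']
```

Note the uppercasing: branch names are often lowercase, so a case-insensitive match catches `feat/abc-1234-…` as well.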


What the Output Looks Like

When a PR lands, reviewers see something like this posted automatically within a minute or two:

🔍 Atlassian Context

Jira: ABC-1234 - Add rate limiting to the ingestion API

Why this exists: The ingestion endpoint has been hitting downstream service limits during traffic spikes, causing silent data loss. This issue implements token-bucket rate limiting at the API gateway layer.

Acceptance Criteria to verify:
- [ ] Rate limit is configurable per tenant via environment variable
- [ ] Requests exceeding the limit return HTTP 429 with a Retry-After header
- [ ] Rate limit metrics are emitted to Datadog

Technical context (from Confluence: “Ingestion API Design - v2”): The design doc specifies that rate limiting should be applied after auth but before deserialization to avoid wasting CPU on invalid payloads. The chosen algorithm is token bucket (not leaky bucket) because burst tolerance is a stated requirement.

Out of scope (per design doc): Per-endpoint rate limits, user-level quotas, and adaptive rate limiting based on downstream health are explicitly deferred to a future iteration.

A reviewer seeing this has everything they need before they open a single file.

They know the business context, the specific criteria to check, the architectural decisions already made, and what not to comment on.

Review quality goes up, review time goes down, and the endless “can you add a link to the ticket?” comments disappear.


Why This Architecture Is Worth Understanding

A few things make this pattern interesting beyond the immediate convenience.

The agent is a first-class citizen in the repo. The workflow definition lives in version control alongside the code. You can PR it, review it, roll it back. The agent’s behavior is auditable and collaborative in exactly the same way your application code is.

Safe outputs as an architectural constraint. The agent can only post comments and add labels. It can’t merge PRs, push commits, or modify settings. This is a hard constraint enforced by the framework, not a trust-based honor system. It maps well to the principle of least privilege: the agent has exactly the permissions it needs to do its job and nothing else.

MCP as a composable tooling layer. The shared MCP config means you can build a whole ecosystem of workflows that all speak to Jira and Confluence through the same interface. A daily standup report, a sprint retrospective generator, a ticket-to-PR traceability checker: all can reuse the same Atlassian integration with zero duplication.
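To make that reuse concrete, here's what a hypothetical daily standup workflow might look like, borrowing the same frontmatter conventions as the enricher above. The schedule, safe-output name, and prompt are illustrative assumptions, not a tested workflow:

```
---
on:
  schedule:
    - cron: "0 8 * * 1-5"   # weekday mornings

permissions:
  contents: read

safe-outputs:
  create-issue:
    title-prefix: "Daily Standup"

mcp:
  include: workflows/shared/mcp/atlassian.md
---

## Daily Standup Reporter

Fetch all Jira issues updated in the last 24 hours for the configured
project and summarize them as a standup report in a new issue.
```

The only Atlassian-specific line is the `include`; everything else is the workflow's own concern.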

Natural language definitions lower the barrier dramatically. The workflow reads like a spec, not like code. Someone who understands the problem domain but isn’t a GitHub Actions expert can read it, suggest improvements, or write their own. That’s a meaningful shift in who owns automation.


Rough Edges and Caveats

It's early days: gh-aw is explicitly in beta, and the documentation warns that it may change significantly. Take that seriously before building anything mission-critical on it.

A few things I’ve noticed along the way.

The Jira key extraction is only as good as your branch naming discipline. If your team uses `fix/typo-in-readme` instead of `fix/ABC-1234-typo-in-readme`, the agent finds nothing to work with. Worth enforcing a branch naming convention if you haven't already; this workflow is a good forcing function.
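If you want to enforce that convention, a tiny CI check (or pre-push hook) is enough. This sketch is my own; the allowed type prefixes and slug format are assumptions to adjust to your team's rules:

```python
import re

# Assumed convention: <type>/<PROJECT-123>-<slug>, e.g. feat/ABC-1234-rate-limiting.
# The type prefixes here are a guess; swap in whatever your team actually uses.
BRANCH_PATTERN = re.compile(r"^(feat|fix|chore|docs)/[A-Z][A-Z0-9]+-\d+(-[a-z0-9]+)*$")

def branch_has_jira_key(name: str) -> bool:
    """True if the branch name carries a Jira key the enricher can find."""
    return bool(BRANCH_PATTERN.match(name))

print(branch_has_jira_key("feat/ABC-1234-rate-limiting"))  # → True
print(branch_has_jira_key("fix/typo-in-readme"))           # → False
```

Run it against the branch name in CI and fail the build with a friendly message; developers learn the convention after one rejected push.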

Confluence page quality varies wildly across most orgs. If your design docs are outdated or sparse, the enrichment comment reflects that. Garbage in, garbage out. In a strange way this makes the problem visible: when the enricher posts thin context, it’s a signal that the documentation needs attention.

Latency on PR open is a few minutes for a cold workflow run. Not instant, but fast enough that by the time a reviewer opens the PR link from Slack, the comment is usually there.


What’s Next

I’m planning a few extensions to this.

A bidirectional enricher that also updates the Jira ticket with a link back to the PR and posts a comment on the ticket when the PR is merged. Keeping Jira and GitHub in sync manually is another tax no one should be paying.

A spec drift detector that compares the actual diff against the Confluence design doc and flags if the implementation appears to deviate from documented decisions. Higher ambition, more hallucination risk, but interesting to prototype.

A review checklist generator that goes beyond summarizing acceptance criteria and generates a structured checklist tailored to the type of change (security-sensitive, data migration, API contract change, etc.) based on the ticket labels.


Try It

The gh-aw CLI is installable as a GitHub CLI extension:

gh extension install github/gh-aw

From there, gh aw add walks you through adding a workflow to your repo. The quick start guide is genuinely quick.

If you build something interesting with it, especially anything that bridges GitHub with external tools over MCP, I’d love to hear about it.

The intersection of agentic workflows and developer tooling is moving fast, and the patterns that emerge in the next year are going to shape how we think about human-AI collaboration in software development for a long time.

Thanks for reading. If you found this useful, share it with someone who spends too much time doing PR archaeology.

About the editor

Niels Freier

Selected advisory work for organisations facing consequential architecture, platform, transformation, and AI decisions.
