
Every team needs this workflow
From Slack threads to detailed, structured GitHub issues, and sometimes a pull request too.
The most expensive bugs are usually the cheap ones. Not because of what they cost to fix, but because of what it costs to get someone to fix them. Imagine this scene:
A customer support rep is on a chat with a user. The user has just pasted a screenshot: the pricing page still says $29 a month, but checkout charged them $49. The rep already knows the pricing changed three weeks ago. The page is stale. The fix is two minutes of work for whoever owns that page.
What happens next, in most companies, is not two minutes of work.
The rep switches to Slack and pings someone on the product team. They drop it into the sprint tracker, queued for the next sprint. The rep goes back to the chat and tells the customer the team will look at it. The customer waits. Two weeks later, the page is fixed.
The fix was two minutes. The process around it was two weeks.
Most teams know this scene. Most have decided to live with it.
The interesting question is what changes when you do not have to.
The surface
We have deployed this workflow inside several client teams. It has been one of the more successful pieces of plumbing we've shipped, and it has become one of the most used surfaces in their workspaces. Support, product, and engineering all use it daily to hand off work.
Drafting an issue from a Slack thread with /issue.
In Slack, you type a slash command like /issue with a short description. The bot pulls in the surrounding thread as context, then hands the request to Claude with two options: draft a structured issue, or, if the input lacks context, ask one to three short clarifying questions.
Whether it asks or not, what you get is a context-rich draft and input fields that mirror your project board columns, ready to submit.
You submit. The issue lands in GitHub, attached to the board with every field set. One checkbox before submit lets Claude pick the issue up immediately and open a pull request against it using a GitHub Actions runner.
That's the whole surface.
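Under the hood, the submit step reduces to assembling one GitHub payload from the approved draft. A minimal sketch of that assembly; the field names (`priority`, `createAndFix`) and the label the Actions runner listens for are placeholders for whatever your own board uses:

```typescript
// Assemble the GitHub issue payload from the approved draft.
// Field names here are illustrative; mirror your own project board columns.
interface IssueDraft {
  title: string;
  body: string;
  priority: "p0" | "p1" | "p2";
  threadTs: string;      // originating Slack thread, kept for traceability
  createAndFix: boolean; // the pre-submit checkbox: let Claude open a PR
}

function toGitHubIssue(draft: IssueDraft) {
  return {
    title: draft.title,
    body: `${draft.body}\n\n---\nSource: Slack thread ${draft.threadTs}`,
    labels: [
      `priority:${draft.priority}`,
      // A label the GitHub Actions runner watches for to pick the issue up.
      ...(draft.createAndFix ? ["claude-fix"] : []),
    ],
  };
}
```

The checkbox becomes nothing more exotic than a label; the runner does the rest.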
Letting the model ask back
The customer support rep types /issue Pricing page still shows old $29 — stale and hits send. The model pauses and asks: "Which pricing page?" "What should it say instead?" The rep answers in a sentence or two, and the model drafts the final issue description.
Giving the agent the right to say "I do not have enough context" is the design choice the rest of the workflow rests on. We give Claude two tools and force it to choose one:
```typescript
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

const tools = [
  { name: "create_issue_draft", input_schema: { type: "object", properties: { /* title, body */ } } },
  { name: "ask_clarifying_questions", input_schema: { type: "object", properties: { /* questions */ } } },
];

const response = await anthropic.messages.create({
  model, // a small model is enough here; ours is Claude Haiku
  max_tokens: 1024,
  tools,
  tool_choice: { type: "any" }, // must pick one of the two
  system: SYSTEM_PROMPT,
  messages: [{ role: "user", content: userText }],
});
```

What this gets the team is concrete: a path from a Slack thread, with its context intact, to a structured issue draft they can review at a glance. The quality of what lands in GitHub improves because the source is context-rich without requiring manual effort.
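Handling the reply is then a branch on which tool the model picked. A sketch of that dispatch; the type shapes are ours, simplified from the Messages API response, and the two `NextStep` outcomes stand in for the Slack-side actions:

```typescript
// tool_choice "any" forces the model to pick exactly one tool,
// so the handler is a two-way branch on the tool's name.
type ToolUse =
  | { name: "create_issue_draft"; input: { title: string; body: string } }
  | { name: "ask_clarifying_questions"; input: { questions: string[] } };

type NextStep =
  | { kind: "show_draft"; title: string; body: string } // open the pre-filled Slack modal
  | { kind: "ask"; questions: string[] };               // post questions back into the thread

function dispatch(tool: ToolUse): NextStep {
  switch (tool.name) {
    case "create_issue_draft":
      return { kind: "show_draft", ...tool.input };
    case "ask_clarifying_questions":
      return { kind: "ask", questions: tool.input.questions };
  }
}
```

Because the choice is a closed enum rather than free text, there is no parsing step that can half-succeed.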
A few ways teams could use this
The variant we have been describing is a software engineering one. The same pattern fits a lot of other places. Slack is where the conversation already lives; the destination just changes. Three shapes we have been turning over:
Customer support: Slack → Zendesk. A support lead in #cs-escalations sees the same edge case mentioned across three different customer chats. /ticket from the thread, and a Zendesk record lands with the repro steps, affected accounts, and a priority that matches the queue's schema.
Sales: Slack → Salesforce. An account executive drops a quick note in #deals: Northwind, $120k ARR, stuck in procurement for three weeks. /deal drafts a Salesforce opportunity with stage, amount, contacts, and a logged activity that captures the conversation as it was, not as it gets remembered on Sunday night.
Incident response: Slack → GitHub issue → PagerDuty. During a war-room thread, someone runs /incident. The bot files a structured GitHub issue with the timeline assembled from the thread, then opens a PagerDuty incident linked back to it. The retro writes itself because the conversation is already the record.
None of these are the final destination. They are starting positions. The variant that fits your team is the one your team should build toward.
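One way to keep those variants cheap is to make the destination a pluggable seam while the Slack-and-model half stays fixed. A sketch of that seam; the interface and the record shapes are our illustration, not a prescribed design or any vendor's actual API:

```typescript
// The Slack + model half of the pipeline is shared; only the
// destination differs per team. Each destination is one small adapter.
interface Destination {
  name: string;
  // Turn the approved draft into the destination's record format.
  toRecord(draft: { title: string; body: string }): Record<string, unknown>;
}

const github: Destination = {
  name: "github",
  toRecord: (d) => ({ title: d.title, body: d.body }),
};

const zendesk: Destination = {
  name: "zendesk",
  toRecord: (d) => ({ subject: d.title, comment: { body: d.body } }),
};
```

Swapping GitHub for Zendesk or Salesforce then touches one adapter, not the drafting logic.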
What makes this workflow safe to run in production
The workflow is small, but a few deliberate choices keep it safe enough to leave running inside a real business. The six below are the load-bearing ones:
Data and integration. The bot reads a Slack thread only when someone runs the command, then hands the thread to a small model (Claude Haiku in our version) for drafting and writes the result back to GitHub through a scoped App. The model is the one piece that reaches outside the building, and it is swappable. When channel content cannot leave, a small language model on the team's own hardware handles the two-tool design comfortably. The people who can see the issue are the people who could already see the thread.
Access and controls. The agent takes in untrusted Slack input and changes external state, so the only safe lever is locking down what it can touch. The GitHub App holds Issues, Projects, and Metadata write permissions, nothing else. The worst a hostile Slack message can do is file a bogus issue, because filing issues is the only verb the App holds. The blast radius lives at the action boundary, not the model boundary.
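The same idea in code: every write the bot makes passes through a small allowlist that mirrors the App's permission grant, so adding a new verb requires a code review, not a prompt change. The action names below are our own labels, not GitHub API identifiers:

```typescript
// The only actions the integration may perform, mirroring the
// GitHub App's permission grant. Anything else throws before any call is made.
type Action = "issues.create" | "projects.update_item";

const ALLOWED_ACTIONS: ReadonlySet<Action> = new Set<Action>([
  "issues.create",
  "projects.update_item",
]);

function assertAllowed(action: string): asserts action is Action {
  if (!ALLOWED_ACTIONS.has(action as Action)) {
    throw new Error(`Action not in App grant: ${action}`);
  }
}
```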
Evaluation. The two-tool design (draft or ask) gives us deterministic checks before we need an LLM judging another LLM. We log which tool the model picked against a labeled set of ambiguous vs. unambiguous threads; on every model change, we replay historical traces and watch ask rate, draft rate, and how heavily drafts are edited before submit. A regression in any of those surfaces before our clients complain.
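The replay itself is small enough to sketch. Given logged traces labeled ambiguous or clear, the three numbers we watch are plain ratios; the `Trace` shape below is our assumption about what the logs carry:

```typescript
interface Trace {
  label: "ambiguous" | "clear";        // human label on the source thread
  toolPicked: "ask_clarifying_questions" | "create_issue_draft";
  editedChars: number;                 // how much the draft changed before submit
  draftChars: number;                  // length of the model's draft
}

function replayMetrics(traces: Trace[]) {
  const asks = traces.filter((t) => t.toolPicked === "ask_clarifying_questions");
  const drafts = traces.filter((t) => t.toolPicked === "create_issue_draft");
  // Asking on a clear thread is friction; drafting on an ambiguous one is risk.
  const askRate = asks.length / traces.length;
  const draftRate = drafts.length / traces.length;
  const meanEditRatio =
    drafts.reduce((s, t) => s + t.editedChars / t.draftChars, 0) /
    Math.max(drafts.length, 1);
  return { askRate, draftRate, meanEditRatio };
}
```

A model swap that moves any of the three outside its historical band blocks the rollout.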
Observability. Every Claude call is a span, and every issue body carries the originating Slack thread_ts and the trace ID that produced it. Following OpenTelemetry's GenAI conventions makes "why did this issue get filed?" a query rather than an investigation. Walk the trace from the issue back to the model's reasoning back to the message that started it. The artifacts of the workflow are the audit trail, with no separate telemetry project to fund.
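In practice that just means every issue body carries its own pointers. A sketch of the footer we append; the field names are ours, and the trace ID would come from whatever OpenTelemetry context wraps the Claude call:

```typescript
// Append the provenance footer that makes an issue self-auditing:
// the Slack thread it came from and the trace that produced it.
function withAuditFooter(body: string, threadTs: string, traceId: string): string {
  return [
    body,
    "",
    "---",
    `slack_thread_ts: ${threadTs}`,
    `trace_id: ${traceId}`, // taken from the active span's context, if any
  ].join("\n");
}
```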
Teams and workflow. The most-used AI in any organization is the one people do not know they are using. A /issue command inside the channel they already work in feels like a Slack feature, not an AI product. Support, sales, and engineering use the same surface, no migration, no training, no fresh tab.
Iteration after launch. We do not turn this on across every codebase at once. We start with the one client application where customer success teams feel the most pain, then lean on the evaluation and observability pipeline to tell us when the workflow is solid enough to extend. Each new surface is added when the numbers say so.
Closing
There is a customer rep in every company, and there is usually a multi-week process between them and the thing they need done. This workflow is one way to bring those two closer together. The engineering variant we walked through is one shape; support, sales, and incident response are others. The right one for your team is the one shaped to your version of that scene.
If you have your own version of that gap and would like to compare notes, we would be glad to hear it. Right now, the problem we find most interesting is turning fleeting conversations into structured, trackable work without losing the context that started them. Happy to think about your version of that alongside ours.
Oye Collective builds production AI agents inside real businesses. Reach us at oyecollective.com.
Putting agents into your business?
We help enterprise teams move from agent demos to production. Free assessment call, no commitment.
Book an Assessment