Getting a Claude-powered workflow into production behind an Athenahealth Marketplace integration is two projects glued together: a normal EHR integration, and an AI compliance review. The Marketplace team is good at both — but if you don't sequence the work correctly, you'll burn two cycles of submission rejections before you figure out what they actually want.
This is the field guide I wish I'd had at the start of the Athena Marketplace approval process at Altitude.
## The three audiences for the submission
A Marketplace submission has three implicit reviewers:
- Technical integration review. Does the integration use the API correctly? Are scopes minimum-necessary? Does it handle errors gracefully? Is the auth model sane?
- Clinical safety review. What does the integration do to a patient's care? What's the worst-case failure mode? Where's the human in the loop?
- Privacy and security review. Where does the data go? Who has access? What logs exist? Is there a BAA on file with every system in the path?
When an LLM is in the picture, audiences #2 and #3 sharpen considerably. Audience #2 wants to know whether the model can hallucinate something into a clinical record. Audience #3 wants to know exactly which entities see PHI.
## Sequence the work in this order
Most teams start with the integration code. That's exactly backwards.
Start with audience #3. Map every system in the data path before you write code. Confirm BAAs are in place. Classify every field that crosses an API boundary. Decide which fields will be redacted before they reach the model and which won't.
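That field-by-field classification is worth encoding directly, so the redaction policy the privacy review approved is the same one the code enforces. A minimal sketch, with hypothetical field names and a policy that fails closed on anything unclassified (nothing here is Athena's actual schema):

```python
# Illustrative minimum-necessary policy, decided during the privacy review.
REDACT = "redact"
PASS = "pass"

FIELD_POLICY = {
    "patient_name": REDACT,     # not needed for the workflow
    "mrn": REDACT,              # identifier, never sent to the model
    "dob": REDACT,
    "encounter_note": PASS,     # the clinical content the model works on
    "medication_list": PASS,
}

def redact_for_model(record: dict) -> dict:
    """Drop every field the policy marks as redacted; fail closed on unknowns."""
    out = {}
    for field, value in record.items():
        policy = FIELD_POLICY.get(field, REDACT)  # unknown fields never pass
        if policy == PASS:
            out[field] = value
    return out
```

Failing closed matters: when Athena adds a field to a resource, it stays out of the prompt until someone classifies it.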
Then audience #2. Write the workflow document. Be specific: "the system drafts a note that a clinician reviews and approves before it lands in the chart." Or: "the system reads the chart and surfaces structured suggestions; nothing writes back without a clinician action." The Marketplace reviewers are not hostile to AI — they're hostile to ambiguity about what the AI does.
Then audience #1. Build the integration to fit. The technical review is the easy part once you know exactly which scopes you need.
## Specific things the Marketplace will ask about LLM workflows
In rough order:
- What model? What version? Pin the model. Document the rationale. Have a plan for deprecation.
- Is the model under a BAA? Anthropic's enterprise tier supports one. Be ready with the paperwork.
- What's in the prompt? Show the prompt template. Show what's redacted. Show why the remaining fields are minimum-necessary.
- What does the model write back to Athena? If anything, prove a human action gates it.
- What happens if the model is wrong? This is the question. The answer is "a clinician sees a draft, not a fact." Or it's "this output never touches the chart." Or it's "the workflow is non-clinical and the failure mode is bounded to inconvenience." Pick one. Defend it.
- What's the test plan? Golden cases. Evaluation harness. Drift monitoring. Versioned and shown.
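On the write-back question, the strongest answer is structural: make it impossible for code to reach the chart without a recorded clinician action. A sketch of that shape, where `ClinicianApproval` and the `client` interface are hypothetical names, not a real SDK:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class ClinicianApproval:
    clinician_id: str
    draft_id: str
    approved_at: datetime

def write_note(draft_id: str, note_text: str,
               approval: ClinicianApproval, client) -> None:
    # The approval is a required argument: there is no code path that writes
    # to the chart without one, which is exactly what the reviewer wants.
    if approval.draft_id != draft_id:
        raise ValueError("approval does not match this draft")
    client.create_note(
        draft_id=draft_id,
        text=note_text,
        approved_by=approval.clinician_id,
        approved_at=approval.approved_at.isoformat(),
    )
```

Showing a type signature like this in the submission narrative answers "prove a human action gates it" more convincingly than a paragraph of policy.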
## The integration shape that survives review
Use FHIR R4 endpoints where possible. Athena's FHIR API is good and the Marketplace team is comfortable with it.
Scope your OAuth grants to the smallest set that does the job. The reviewer reads scopes carefully — "patient.read clinical.write" with no further detail will get questions. Justify each scope in the submission narrative.
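One way to keep the justification from drifting apart from the grant is to store them together and generate both the OAuth scope string and the submission narrative from the same source. A sketch, using illustrative SMART-on-FHIR-style scope strings rather than Athena's exact catalog:

```python
# Each scope carries its minimum-necessary justification. If a scope has no
# justification, it doesn't get requested.
SCOPES = {
    "patient/DocumentReference.read": "read existing notes for draft context",
    "patient/MedicationRequest.read": "surface active medications in the draft",
    "patient/DocumentReference.write": "file the clinician-approved note",
}

def oauth_scope_string() -> str:
    """The scope parameter sent in the OAuth request."""
    return " ".join(SCOPES)

def submission_narrative() -> str:
    """The per-scope justification section of the Marketplace submission."""
    return "\n".join(f"- `{scope}`: {why}" for scope, why in SCOPES.items())
```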
Build a clean abstraction between your code and the Athena API. The Marketplace will require occasional changes (deprecations, scope reorganizations, version bumps) and you don't want those to ripple through your business logic.
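One common shape for that abstraction is a narrow interface that names only what the workflow needs, with a single adapter that knows Athena's details. The method names below are assumptions about this workflow, not Athena's API surface:

```python
from typing import Protocol

class ChartGateway(Protocol):
    """Everything the workflow needs from the EHR; nothing Athena-specific."""
    def get_encounter_note(self, encounter_id: str) -> str: ...
    def file_note(self, encounter_id: str, text: str, clinician_id: str) -> str: ...

def draft_context(gateway: ChartGateway, encounter_id: str) -> str:
    # Business logic depends only on the interface, so an Athena deprecation
    # or version bump is absorbed by the adapter without touching this code.
    return gateway.get_encounter_note(encounter_id)

class AthenaFhirGateway:
    """The only module that knows Athena base URLs, auth, and resource shapes."""
    def __init__(self, base_url: str, token: str):
        self.base_url, self.token = base_url, token

    def get_encounter_note(self, encounter_id: str) -> str:
        raise NotImplementedError  # GET DocumentReference for the encounter

    def file_note(self, encounter_id: str, text: str, clinician_id: str) -> str:
        raise NotImplementedError  # POST DocumentReference, return the new id
```

A side benefit: a fake implementation of `ChartGateway` makes the golden-case test plan from the previous section cheap to run without touching a live sandbox.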
Log every API call with a request ID, a user ID (or system actor), the resource accessed, and the outcome. The Marketplace reviewer wants to see this exists; auditors will want to see specific entries.
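A minimal sketch of that audit record as structured JSON logging; the field set is a reasonable starting point, though your compliance team may require more:

```python
import json
import logging
import uuid
from datetime import datetime, timezone

audit_log = logging.getLogger("athena.audit")

def log_api_call(actor: str, resource: str, outcome: str) -> str:
    """Emit one structured audit record per API call; returns the request ID."""
    request_id = str(uuid.uuid4())
    audit_log.info(json.dumps({
        "request_id": request_id,
        "actor": actor,          # user ID or system-actor name
        "resource": resource,    # e.g. "DocumentReference/abc123"
        "outcome": outcome,      # "ok", "denied", "error"
        "at": datetime.now(timezone.utc).isoformat(),
    }))
    return request_id
```

Returning the request ID lets you attach it to the downstream model call, so an auditor can trace a single record from API fetch through prompt to output.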
## What I'd do differently in hindsight
Submit a pre-review. The Marketplace team will look at a partial submission informally. Use this: it's faster than submitting fully, getting rejected, and resubmitting.
Bring the workflow document to the kickoff. Not the integration spec — the workflow document. The conversation goes faster when the reviewer can see "this is what the user does, this is what the system does, this is where the human action is."
Have an AI policy doc on hand. Even a one-pager describing your model selection, BAA chain, evaluation, drift, and incident response will preempt 80% of the safety questions. It signals you've thought about this; the absence of one signals you haven't.
The Marketplace process is designed to keep bad integrations out of clinicians' workflows. Once you understand it as a clinical safety gate that happens to use the language of API integration, the work to pass it becomes much clearer. If your team is approaching submission and wants a sanity check, book a call — I'll tell you in 30 minutes whether you're a week away or a quarter away.