AI Safety at Solving Health

We serve data, not decisions. Every AI response cites verified federal databases. No diagnoses. No prescriptions. No autonomous clinical actions. Here's how we keep it that way.

Four structural defenses

These aren't policies we hope our AI follows. They're architectural constraints enforced by code.

01

Data, not decisions

We surface what federal databases say. We never tell you what to do. Atlas answers are facts with citations, not recommendations with opinions.

02

Every fact has a source

Every AI response cites the specific database, record identifier, and query date. If you can't trace it back to a federal source, it doesn't leave our system.
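As a sketch of that rule, a citation gate might validate each record before an answer leaves the system. The `Citation` shape, source list, and date format below are illustrative, not our production schema:

```python
import re
from dataclasses import dataclass

# Illustrative source list: the federal agencies named on this page.
FEDERAL_SOURCES = {"CMS", "FDA", "OIG", "CDC", "NIH"}

@dataclass
class Citation:
    database: str    # e.g. "CMS"
    record_id: str   # the specific record identifier
    query_date: str  # ISO date the database was queried, e.g. "2025-01-15"

def is_traceable(citations: list[Citation]) -> bool:
    """An answer is releasable only if it carries at least one citation,
    and every citation names a known federal source, a non-empty record
    ID, and a well-formed query date."""
    if not citations:
        return False
    for c in citations:
        if c.database not in FEDERAL_SOURCES:
            return False
        if not c.record_id:
            return False
        if not re.fullmatch(r"\d{4}-\d{2}-\d{2}", c.query_date):
            return False
    return True
```

An uncited answer, or one citing a non-federal source, simply fails the gate and is never delivered.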

03

Physician in the loop

Letters of Medical Necessity are reviewed and signed by Josh Emdur, DO, a board-certified physician licensed in all 50 states. No autonomous clinical actions, ever.

04

Verified provenance

Our data comes from CMS, FDA, OIG, CDC, and NIH — verified federal databases with public provenance. Not from user input, web scraping, or model training data.

Clear lines, clearly stated

There is no ambiguity about what Atlas will and will not do.

What we never do

  • Diagnose conditions or interpret symptoms
  • Recommend treatments, therapies, or procedures
  • Prescribe or suggest medications
  • Provide dosing information or schedules
  • Make autonomous clinical decisions
  • Store or process protected health information (PHI)

What we do

  • Search 100,000+ verified federal health records
  • Cross-reference 7 databases for comprehensive intelligence
  • Cite every source with database, record ID, and date
  • Flag when a question requires a licensed physician
  • Rate-limit and audit every AI query for abuse detection
  • Filter AI outputs before delivery to catch model misbehavior

Defense in depth

Every AI query passes through four structural gates before a response reaches the user.

Gate 1

Input Sanitization

Prompt injection detection blocks extraction attempts, role-playing attacks, and fake authority claims
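A simplified version of Gate 1 can be expressed as a pattern screen over the incoming query. The patterns below are illustrative examples of the three attack families, not our production ruleset:

```python
import re

# Illustrative patterns only; one example per attack family named above.
INJECTION_PATTERNS = [
    r"ignore (all |previous |prior )+instructions",       # extraction attempts
    r"\byou are now\b",                                   # role-playing attacks
    r"on behalf of the (fda|dea|cms)",                    # fake authority claims
    r"(reveal|print|show) (your|the) (system )?prompt",   # prompt extraction
]

def sanitize_input(query: str) -> bool:
    """Gate 1: return True if the query may proceed, False if blocked."""
    lowered = query.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```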

Gate 2

Query Classification

Every query is classified as data, clinical, dangerous, or extraction before reaching the model
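Gate 2's routing can be sketched as a keyword classifier over the four classes named above. The term lists here are illustrative stand-ins; a production classifier would be far more involved:

```python
# Illustrative term lists, not the production taxonomy.
EXTRACTION_TERMS = {"system prompt", "jailbreak", "your instructions"}
DANGEROUS_TERMS = {"overdose", "lethal", "self-harm"}
CLINICAL_TERMS = {"diagnose", "dose", "dosage", "prescribe", "treatment", "symptom"}

def classify_query(query: str) -> str:
    """Gate 2: label the query before it reaches the model.
    Only 'data' queries proceed; the rest are refused or redirected."""
    q = query.lower()
    if any(t in q for t in EXTRACTION_TERMS):
        return "extraction"
    if any(t in q for t in DANGEROUS_TERMS):
        return "dangerous"
    if any(t in q for t in CLINICAL_TERMS):
        return "clinical"
    return "data"
```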

Gate 3

Data-Only Model

The AI is structurally constrained to retrieve and cite federal data, never generate clinical advice

Gate 4

Output Filter

Every response is scanned for dosing advice, treatment recommendations, prescriptions, and prompt leakage before delivery
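Gate 4 can be sketched as a blocklist scan over the model's response; the category names and regexes below are illustrative, not the production filter:

```python
import re

# Illustrative patterns for the four failure modes named above.
OUTPUT_BLOCKLIST = {
    "dosing": r"\b\d+\s?(mg|mcg|ml|units?)\b",
    "treatment": r"\byou should (take|try|use|start)\b",
    "prescription": r"\bi (would )?prescribe\b",
    "prompt_leak": r"\bsystem prompt\b",
}

def filter_output(response: str) -> tuple[bool, list[str]]:
    """Gate 4: return (allowed, violations). Any violation blocks delivery."""
    lowered = response.lower()
    violations = [name for name, pattern in OUTPUT_BLOCKLIST.items()
                  if re.search(pattern, lowered)]
    return (not violations, violations)
```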

Why This Matters

Healthcare AI must be structurally safe, not just prompt-safe

In 2026, a telehealth AI platform was compromised because it made autonomous clinical decisions from extractable system prompts. Attackers injected fake regulatory bodies to override dosing guidelines. Poisoned clinical notes persisted across patient sessions. The AI prescribed medications without physician review.

Solving Health is architecturally immune to this class of attack. We serve verified federal data, not clinical recommendations. Our safety layer is enforced by middleware code, not by prompt instructions alone. Even if our system prompt were fully extracted, our AI cannot prescribe, diagnose, or recommend — because the output filter catches it, the input sanitizer blocks it, and the model is constrained to data retrieval. Defense in depth means no single point of failure.

Responsible Disclosure

Found a vulnerability in our AI safety layer, API, or data pipeline? We take every report seriously and respond within 48 hours.