
USA vs Heppner: What Every CPA Firm Needs to Know

The Keys to Your AI Kingdom Are Wide Open (And the Feds Have a Copy): Your AI prompts are not private, and the government just proved it

Peter Serzo

2/14/2026 · 4 min read

Let me summarize this case in terms many of us can understand: The defendant basically brought a photo to a barber: "Make me look like Brad Pitt." He walked out looking like drunk Uncle Tony at a holiday party. You can’t sue the barber because your head is the problem. Same with AI: you feed it a mess and expect it to come out as Supreme Court precedent.

The Case: A defendant in a federal fraud case thought his AI conversations with Claude were protected by attorney-client privilege. The Southern District of New York just ruled they're not. And if you're a CPA, accountant, or financial professional using AI tools to research tax strategies, draft client communications, or analyze sensitive data, this ruling should make you rethink AI privacy and how AI is used in your organization.

LLM: Legal Licensed Might-sound-smart-still-not-your-attorney

The Case

In United States v. Heppner, a defendant facing securities fraud charges ran queries through Claude (Anthropic's AI tool) before his arrest. He later shared those AI-generated documents with his lawyers and claimed attorney-client privilege. The government disagreed and filed a motion to access those AI conversations.

The court's position? AI tools are not attorneys, AI conversations are not confidential, and sharing AI outputs with your lawyer later doesn't retroactively create privilege.

Here are the three key findings that should make every financial professional sit up:

  1. AI tools aren't lawyers: Claude, ChatGPT, and similar platforms explicitly disclaim providing legal advice in their terms of service. You're not consulting an attorney.

  2. Your prompts aren't confidential: When you input queries into AI platforms, you're sharing data with third-party companies whose privacy policies explicitly state they may disclose information to "governmental regulatory authorities". Anthropic's own privacy policy confirms they collect prompts, outputs, and use this data to train their models.

  3. Sharing with your lawyer doesn't fix it: The court was crystal clear: "sending preexisting documents to counsel does not confer attorney-client privilege". You can't retroactively cloak non-privileged AI searches by forwarding them to your attorney.

What This Means

If you're in the financial services world (banking, CPAs, tax preparers, accountants, bookkeepers), here's what you need to understand: every prompt you enter into an AI tool is potentially discoverable.

Think about what you've been putting into ChatGPT, Claude, or Gemini:

  • Client names and tax scenarios (I hope not)

  • Personally identifiable information (I hope not)

  • Revenue figures, financial strategies (I have seen this done)

  • Questions about compliance, audit responses, or legal interpretations (Absolutely have seen this)

Under GLBA (Gramm-Leach-Bliley Act), you're legally obligated to protect client financial information. The FTC Safeguards Rule requires written information security plans. IRS Circular 230 demands confidentiality. If AI platforms can access, store, and potentially disclose your prompts, you've just created a compliance exposure.

The WISP Reality Check

Every tax firm needs a Written Information Security Plan (WISP). It's the law. But here's the uncomfortable question: Does your WISP address AI tool usage?

Most don't. And that's a huge risk in 2026.

Your WISP should now include:

  • AI usage policies that define which tools are approved and for what purposes

  • Data classification rules that prohibit entering client PII or confidential data into public AI platforms

  • Training requirements so every team member understands AI privacy risks

  • Vendor assessment protocols to evaluate AI platform security and privacy policies

  • Incident response procedures for potential AI-related data breaches
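To make the data-classification bullet above concrete, here is one way a firm could screen prompts for obvious client PII before they ever reach a public AI platform. This is a minimal, hypothetical sketch, not a compliance-grade filter: the function name and the handful of regex patterns are illustrative assumptions, and a real WISP control would cover far more identifiers (names, account numbers, addresses) and be reviewed by your security and compliance team.

```python
import re

# Hypothetical patterns for a few common PII formats. A production
# filter would need a much broader, vetted pattern set.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # e.g. 123-45-6789
    "EIN": re.compile(r"\b\d{2}-\d{7}\b"),            # e.g. 12-3456789
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the PII categories detected in a prompt.

    An empty list means no patterns matched; anything else should
    block the prompt from being sent to a public AI tool.
    """
    return [label for label, pattern in PII_PATTERNS.items()
            if pattern.search(prompt)]
```

A staff-facing tool could call `screen_prompt` before every submission: `screen_prompt("Summarize depreciation rules for rental property")` comes back empty, while a prompt containing a client's SSN returns `["SSN"]` and gets blocked. The point is not this particular code; it's that "prohibit entering client PII" should be an enforced control, not just a sentence in a policy document.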

This isn't just about covering your backside. It's about maintaining client trust, regulatory compliance, and professional responsibility.

Not All AI Tools Are Created Equal

The Heppner ruling doesn't mean you need to abandon AI. It means you need to use it securely.

There's a spectrum of AI deployment:

  • Public platforms (ChatGPT free tier, Claude free tier): Your data may be used for training; privacy protections are minimal

  • Enterprise tiers (ChatGPT Team/Enterprise, Claude for Business): Contractual protections against training on your data; better security controls

  • Private deployments: On-premises or private cloud AI instances where you control data entirely

For financial institutions handling sensitive client data, the public free-tier approach is a non-starter. You need enterprise agreements with explicit data protection guarantees, or better yet, private AI infrastructure.

The Heppner decision is a wake-up call, not a death sentence for AI in financial services. The path forward requires three things:

1. Audit your current AI usage: What tools are your staff using? What data are they entering? Are you compliant with GLBA, IRS, and state privacy laws?

2. Update your policies immediately: Your WISP, your employee handbook, your client engagement letters all need to address AI privacy risks.

3. Invest in secure AI infrastructure: Whether it's enterprise-tier platforms with robust privacy protections or private AI deployments, make security a budget priority.

Ready to ensure your firm's AI usage is secure and compliant?

This is exactly the kind of assessment Sonareon helps financial institutions navigate. Our AI audits identify privacy and compliance gaps before they become legal liabilities. Our workshops train your entire team on secure AI utilization that cuts through marketing hype and focuses on real business risk.

The Bradley Heppner ruling came down in February 2026. Tax season is in full swing. Your clients expect you to leverage modern tools. And the government just clarified that AI conversations aren't as private as you thought.

The good news? You can still use AI to boost productivity, enhance client service, and stay competitive. The requirement? You must do it securely, compliantly, and with eyes wide open about privacy risks.

Schedule a consultation with Sonareon or join our next workshop to learn practical strategies for AI privacy in financial services.