Secure Practice AI

The LLM your privileged data never has to leave for.

An on-premises Business Unit that drafts, summarizes, and searches across your matter files. Runs locally on a Mac Studio in your server room. Nothing crosses your firewall — including the prompt.

Outcome KPI: 6–12 hrs saved / fee earner / week
Time to live: 5–6 weeks, diagnostic to first draft
From: €16,800 one-time, hardware included
Deployment: Local only, Mac Studio M4 Ultra
The diagnosis

Why is German law the hardest market for cloud AI?

It's not a technology problem. It's that the productivity gain and the regulatory risk arrive at the same address.

Lawyers want what AI offers: faster drafting, instant summaries of 200-page bundles, the ability to ask a question across every matter the firm has ever touched. They cannot get it from cloud LLMs without straining §43a BRAO, client privilege, and PI insurer expectations. Secure Practice AI runs entirely on a Mac Studio inside the firm — open-weight 70B-class models, zero data egress, full audit log. Same productivity. The BRAO problem disappears.

Privilege risk

Sending matter content to OpenAI or Anthropic puts §43a, §50 BRAO, and client confidentiality on contestable ground in any post-mandate dispute.

Mandate scope

Most engagement letters don't anticipate third-party AI processing. Renegotiating each one is impractical and signals risk to the client.

Insurance & audit

PI insurers and the Rechtsanwaltskammer increasingly ask where matter data goes. 'On a Mac in our server room, never leaves' is the cleanest answer.
What actually happens in the firm

The 18-month internal argument every German firm is having.

A first-party view from running this diagnostic in mid-sized German practices in 2025–26.

Partners want the productivity. The compliance lead says no to cloud LLMs. The conversation stalls for 18 months while juniors quietly use ChatGPT on their personal phones — which is the worst possible outcome on every dimension: privilege, audit, and partner control.

The honest answer in 2026 is that a Mac Studio with 192 GB unified memory can run open-weight models that are good enough for the 80% of legal work that's drafting, summarizing, comparing, and searching. Not yet good enough to write a §823 BGB brief unsupervised — but more than good enough to give the associate a 70% draft to refine.

Once the hardware sits in the firm's server room, the BRAO problem disappears. So does the per-seat AI subscription. So does the partner argument about who owns the prompts and the outputs. The compliance lead becomes the demo-runner for prospective clients.

Diagnostic principle

The hardware location is the regulatory answer. The model quality is the productivity answer. In 2026, both arrive in the same Mac.

Before vs after

Two versions of the same Tuesday morning.

Same fee earner, same matter, two operating models.

Before

Cloud LLM ban + shadow ChatGPT

  • Compliance bans cloud AI by partner resolution.
  • Junior pastes anonymized clauses into ChatGPT on phone.
  • Output is reviewed by no one; no audit trail exists.
  • Partner doesn't see the productivity gain — but absorbs the risk.
  • Result: maximum risk, minimum oversight, half the productivity.
After

Mac Studio in the server room

  • Fee earner queries the local model from their browser, no install.
  • Every prompt + output logged to firm-controlled audit DB.
  • 70B open-weight model delivers usable first drafts for routine drafting, summarizing, and search.
  • BRAO conversation closed; PI insurer satisfied with architecture doc.
  • Result: 6–12 hours / fee earner / week back, fully governed.
How it works

Four capabilities, one local engine, one audit log.

Built around the four operations that consume 60% of a fee earner's non-billable time.

01

Index

Reads your DMS (DATEV, RA-MICRO, Advoware, AnNoText, NetDocuments, file shares) into a local searchable vector store. Re-indexed nightly.

02

Draft

Generates first-draft letters, briefs, NDAs, and client emails in your firm's voice. Trained on your sample documents during configuration.

03

Summarize

Compresses 200-page bundles into structured matter summaries with paragraph-level citations back to the source file.

04

Search

Ask a question across every matter the firm has ever touched. Answers cite the source file and paragraph. Permission-aware.
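The Index and Search capabilities can be made concrete with a minimal sketch. Everything below is illustrative, not the shipped implementation: the hashing "embedding" stands in for a real local embedding model, and the chunk fields (`file`, `para`, `teams`) are assumed names showing how answers stay citable and permission-aware.

```python
import hashlib
import math

# Toy embedding: hash each word into a fixed-size vector. A real
# deployment would use a proper local embedding model; this only
# makes the retrieval flow concrete.
DIM = 256

def embed(text: str) -> list[float]:
    vec = [0.0] * DIM
    for word in text.lower().split():
        h = int(hashlib.sha256(word.encode()).hexdigest(), 16)
        vec[h % DIM] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# Each chunk carries its source file, paragraph number, and the matter
# teams allowed to see it -- that is what makes answers citable and
# permission-aware.
index = [
    {"file": "spa_draft_v3.docx", "para": 12, "teams": {"m_and_a"},
     "text": "The purchase price shall be adjusted for net working capital."},
    {"file": "nda_acme.docx", "para": 4, "teams": {"m_and_a", "litigation"},
     "text": "Confidential information excludes publicly available data."},
]
for chunk in index:
    chunk["vec"] = embed(chunk["text"])

def search(query: str, user_teams: set[str], top_k: int = 3):
    qv = embed(query)
    # Permission filter runs before ranking: chunks the user's teams
    # cannot see are never scored, let alone returned.
    visible = [c for c in index if c["teams"] & user_teams]
    ranked = sorted(visible, key=lambda c: cosine(qv, c["vec"]), reverse=True)
    return [(c["file"], c["para"], c["text"]) for c in ranked[:top_k]]

hits = search("purchase price adjustment", {"m_and_a"})
print(hits)
```

The same pattern scales to the nightly re-index: re-embed changed chunks, keep the permission metadata alongside the vectors, and cite `file` plus `para` in every answer.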

Deployment

Local only — by design.

We don't offer a cloud version of this Unit. Here's the honest comparison so you can see why we made that choice.

                         Local · Mac Studio                        Cloud LLMs
§43a BRAO / privilege    Aligned — data never leaves controller    Contested — depends on contract & jurisdiction
Hardware                 Mac Studio M4 Ultra (192 GB), included    None on your side
Model class              Open-weight, ~70B parameters              Frontier (GPT-4 / Claude class)
Data residency           Your office, your jurisdiction            Vendor cloud (US / EU mix)
Per-seat fee             €0                                        €20–€60 / user / month
Audit log location       Your server, queryable                    Vendor cloud, exportable
Internet outage          Keeps working                             Stops working
Prompt confidentiality   Never leaves the LAN                      Vendor sees the prompt
Model update cadence     Twice yearly, you control timing          Vendor pushes at will
ROI estimator

What's recovered fee-earner time worth?

Sliders default to median values from our pilot firms (mid-sized German corporate boutiques). Adjust to your practice.

Fee earners: 12
Avg. billable rate: €220 / h
Hours saved / wk / earner: 8 h
Pilot firms reported 6–12 hours saved per fee earner per week, concentrated in drafting, summarization, and matter search. We default to 8h. The recovered hours are not necessarily billed back — many firms reinvest them in lower workload at the senior level.
Estimated annual value: €971,520
4,416 fee-earner hours / year
Unit price (one-time): €16,800
Estimated payback: < 1 month
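The estimator's arithmetic is simple enough to check by hand. A sketch, assuming 46 working weeks per year (an assumption, chosen because it reproduces the 4,416-hour figure above):

```python
fee_earners = 12
rate_eur_per_h = 220
hours_saved_per_week = 8
working_weeks = 46          # assumption: reproduces the 4,416 h figure
unit_price_eur = 16_800

hours_per_year = fee_earners * hours_saved_per_week * working_weeks
annual_value_eur = hours_per_year * rate_eur_per_h
payback_months = unit_price_eur / (annual_value_eur / 12)

print(hours_per_year)       # 4416
print(annual_value_eur)     # 971520
print(payback_months)       # well under one month
```

Adjust the three slider values at the top to match your own practice; the payback stays under a month for any mid-sized firm in the pilot range.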
Live demo

Talk to Secure Practice AI.

A sandbox version of the Matter Assistant. The production Unit runs on hardware in your office; nothing in this demo leaves your browser session.

Secure Practice AI · Matter Assistant Sandbox · 0 egress
I've indexed an anonymized M&A bundle (47 documents, ~340 pages). Ask me anything — drafting, summarization, search across files. All processing in this sandbox stays in your browser; production deployment runs on the Mac Studio in your office.
Deploy specifics

What lands in your server room.

The local-deployment package, itemized for your IT lead.

Hardware: Mac Studio M4 Ultra (192 GB)

Sized for 70B-class open-weight inference at usable speed for 5–60 concurrent fee earners. Sits in your existing server rack or under a partner's desk. Needs 1× ethernet, 1× outlet (~120W idle). Fully air-cooled, near-silent.

  • Apple M4 Ultra · 32-core CPU, 80-core GPU
  • 192 GB unified memory
  • 2 TB SSD (encrypted, FileVault)
  • 70B open-weight model preloaded
  • Browser-based UI, no client install
  • Audit log: queryable Postgres on-device
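What "queryable audit log on-device" means in practice can be sketched in a few lines. This is illustrative only: the production Unit logs to Postgres on the Mac Studio, SQLite stands in here so the sketch is self-contained, and the table and column names are assumptions, not the shipped schema.

```python
import sqlite3
from datetime import datetime, timezone

# Illustrative schema only -- the production audit log lives in Postgres
# on the Mac Studio; SQLite stands in so the sketch runs anywhere.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE audit_log (
        id         INTEGER PRIMARY KEY AUTOINCREMENT,
        ts_utc     TEXT NOT NULL,
        user_id    TEXT NOT NULL,
        matter_id  TEXT NOT NULL,
        prompt     TEXT NOT NULL,
        output     TEXT NOT NULL
    )
""")

def log_interaction(user_id: str, matter_id: str,
                    prompt: str, output: str) -> None:
    # Every prompt and output is written before the response is shown,
    # so the log is complete by construction.
    db.execute(
        "INSERT INTO audit_log (ts_utc, user_id, matter_id, prompt, output) "
        "VALUES (?, ?, ?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(),
         user_id, matter_id, prompt, output),
    )
    db.commit()

log_interaction("jmueller", "M-2026-014",
                "Summarize the SPA draft", "Draft summary ...")

# Because the log is a plain table on firm hardware, the compliance
# lead can query it directly -- no vendor export step.
rows = db.execute(
    "SELECT user_id, matter_id FROM audit_log ORDER BY ts_utc"
).fetchall()
print(rows)
```

The point of the design is the query at the end: audit questions from the Rechtsanwaltskammer or a PI insurer become ordinary SQL against a table the firm controls.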

Week 5–6 schedule

  1. W5 Mon: Mac Studio delivered & racked
  2. W5 Wed: DMS connector live, indexing starts
  3. W5 Fri: First fee-earner pilot session
  4. W6 Tue: Firmwide onboarding workshop
  5. W6 Fri: BRAO + audit doc handover

Documentation handed to your compliance lead

  • Deployment architecture diagram
  • Data flow document (annotated)
  • Standard DPA (GDPR Art. 28)
  • Audit log schema + retention policy
  • Open-weight model provenance
  • BRAO §43a alignment memo
Pricing

Three ways to acquire the Unit.

One Unit. Same scope, support, and outcome target across all three.

One-time

€16,800
Owned outright
  • Mac Studio M4 Ultra (192 GB)
  • Open-weight 70B model
  • DMS integration
  • Lawyer onboarding workshop
  • 12 months support
Choose One-time
Most chosen

12-month installments

€1,540 / mo
0% over 12 months
  • Same scope as one-time
  • No interest, no penalties
  • Yours after month 12
  • 12 months support
Choose 12-month installments

Rental

€890 / mo
Min. 12 months
  • Hardware loan
  • Full support included
  • Hardware refresh every 24 mo
  • Convert to ownership anytime
Choose Rental
FAQ

Frequently asked questions

Free, no slide deck

Claim the Secure Practice AI Unit.

A 15-minute call confirms regulatory and IT fit. We've answered the BRAO question dozens of times.

About 1 in 4 calls ends with us recommending you don't buy anything.