Oral argument intelligence
From the founder
Oral argument preparation has not changed in fifty years. Attorneys read cases, run moots, and hope for the best. The problem is that judges are not abstractions — they are specific people with patterns, habits, and tells. Those patterns are in the transcripts.
Bartolus reads every transcript, tags every question by purpose, technique, and tone, and turns that record into a prep report. Not a chatbot answer. Not a dashboard. A document you can mark up at your desk the night before argument.
The case for Bartolus
Generic AI predicts what judges ask about. Bartolus predicts how they ask it — and can show you when it was wrong.
We fed the same briefs to both. Here is what each predicted Judge Higginson would ask in FBCC CityPoint v. City of Austin — and what he actually said.
We don't know what other tools show clients when they're wrong. We know what we show ours.
Wrong metric. Early versions graded on how often our predictions fired, which could produce a passing grade even when we missed questions judges actually asked. We rebuilt the backtest to measure the inverse: how many of the judge's actual questions did we cover?
Missed the record. A judge questioned an attorney about who had represented the client at a critical hearing. Neither brief named that person. We now run a Record Vulnerability Audit on every set of briefs, flagging facts and third parties that could become hot-button questions regardless of what either side argued.
Thin data. For judges with fewer than five cases in the corpus, every report says so explicitly. Directional, not predictive. You deserve to know when we are guessing.
On Harvey
Neal Katyal's demonstration showed what a well-resourced custom build can do for one argument. Bartolus is the productized version — available to any appellate attorney, not only those with enterprise AI contracts.
The methodology was designed by a working appellate attorney and law professor who has read every transcript in the corpus and quality-checked every tag in the taxonomy.
Free to explore
An interactive behavioral profile of all nine justices across the current term's oral arguments. Explore the methodology before committing to a report.
Oct. Term 2025 · All nine justices · 58 cases
Purpose, technique, and tone profiled across every argument of the current term. Seniority-sorted. Filter by subject matter. Built on official transcripts only.
The team
Bartolus is a product of Greil Analytics LLC, an Austin, Texas company.
John Greil
Founder · Greil Analytics LLC · Bartolus
John Greil is a law professor and practicing appellate attorney. He has argued before the Fifth Circuit, clerked at the federal appellate level, and holds a J.D. from Harvard Law School. He built Bartolus because he wanted a prep tool that would survive cross-examination by the attorneys who use it — and found that nothing on the market did.
Every transcript in the corpus has been read by a human. Every category in the taxonomy has been tested against real argument. The methodology is transparent by design: if a judge has thin data, the report says so. If a prediction fired or missed in backtesting, the record is in the file.
Get started
Available for arguments in any federal circuit and the Supreme Court. Turnaround is typically 48–72 hours once briefs are received.
If you have an argument coming up and want to know whether we have data on your judges, reach out. If we don't have corpus coverage yet, we'll tell you honestly rather than generate a report we can't stand behind.
For firms interested in ongoing access, we are building a subscription product. Mention it in your message.