Powered by Indian Legal Llama · Llama 3.2 · RAG Pipeline

Insurance Defense
Reimagined by AI

The only B2B SaaS platform purpose-built for US insurance defense law firms. Generate coverage opinions in 15 minutes, cut LEDES rejections to under 2%, and automate matter management — all with attorney-in-the-loop review.

15 min Coverage Opinion
<2% LEDES Rejection Rate
75% Open Source Stack
14,543 Legal Training Pairs

The Daily Pain Points We Eliminate

📋 Coverage Opinion: 4–6 hrs → 15 min
💸 LEDES Rejection Rate: 15–25% → <2%
ROR Letter: 2–4 hrs → 10 min
📉 Billable Time Leakage: 8–12% → ~0%

Everything a Defense Firm Needs

Three modules. Two of them carry minimal legal-liability risk. Maximum ROI from day one.

🧠
AI-Powered · RAG

Coverage Opinion Engine

Upload a claim packet — policy PDF, complaint, supporting docs. Our RAG pipeline extracts, chunks, embeds, and retrieves relevant case law from CourtListener's 6.9M cases. Claude generates a structured opinion draft in under 15 minutes. Attorney reviews and exports as DOCX or PDF.

  • ✓ Drag-drop multi-PDF upload
  • ✓ Real-time generation status streaming
  • ✓ TipTap inline editor for attorney review
  • ✓ Attorney-Client Privilege badge on all outputs
  • ✓ CourtListener case law citations included
MEDIUM RISK · HIGHEST VALUE
📊
LEDES 1998B · XML 2.0

LEDES Billing Engine

AI-suggested UTBMS codes as you type. Real-time carrier rule validation with color-coded entries. Generate LEDES files with under 2% rejection rate.

  • ✓ AI UTBMS code suggestion (<2s)
  • ✓ Green / Yellow / Red validation
  • ✓ LEDES 1998B, 1998BI, XML 2.0 & 2.1
  • ✓ Pre-bill review workflow
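A minimal sketch of how the Green / Yellow / Red validation pass can work. The rule shapes and thresholds below are illustrative, not any carrier's actual guidelines:

```typescript
// Green/Yellow/Red validation sketch. Rules and thresholds are illustrative.
type Severity = "green" | "yellow" | "red";

interface TimeEntry {
  taskCode: string;      // UTBMS task code, e.g. "L240"
  activityCode?: string; // UTBMS activity code, e.g. "A103"
  hours: number;
  description: string;
}

interface CarrierRule {
  check: (e: TimeEntry) => boolean; // true = rule violated
  severity: Severity;
  message: string;
}

// Two illustrative rules: a max-hours block and a block-billing warning.
const exampleRules: CarrierRule[] = [
  { check: (e) => e.hours > 8, severity: "red", message: "Single entry exceeds 8.0h" },
  { check: (e) => e.description.split(";").length > 1, severity: "yellow", message: "Possible block billing" },
];

function validateEntry(entry: TimeEntry, rules: CarrierRule[]): { severity: Severity; messages: string[] } {
  const hits = rules.filter((r) => r.check(entry));
  const severity: Severity = hits.some((h) => h.severity === "red")
    ? "red"
    : hits.some((h) => h.severity === "yellow")
    ? "yellow"
    : "green";
  return { severity, messages: hits.map((h) => h.message) };
}
```

A clean entry comes back green; an entry that trips any red rule is blocked from the pre-bill regardless of other hits.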
LOW RISK · IMMEDIATE ROI
Multi-Tenant · RBAC

Matter Management

Full case lifecycle from intake to close. Conflict checks, deadline tracking, document storage, and team assignment — all isolated per firm with row-level security.

  • ✓ Automated conflict check on intake
  • ✓ Deadline tracking + SES notifications
  • ✓ S3 document storage per matter
  • ✓ Role-based views (attorney / billing / manager)
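The intake conflict check can be sketched as a party-name match against every existing matter in the firm. The normalization here is deliberately simple; a production check would add aliases and fuzzy matching:

```typescript
// Conflict-check sketch: compare new-matter parties against existing matters.
interface Matter {
  id: string;
  parties: string[]; // plaintiffs, defendants, carriers, insureds
}

// Naive normalization: lowercase, strip punctuation, collapse whitespace.
const normalize = (name: string) =>
  name.toLowerCase().replace(/[.,]/g, "").replace(/\s+/g, " ").trim();

function conflictCheck(newParties: string[], existing: Matter[]): { matterId: string; party: string }[] {
  const incoming = new Set(newParties.map(normalize));
  const hits: { matterId: string; party: string }[] = [];
  for (const m of existing) {
    for (const p of m.parties) {
      if (incoming.has(normalize(p))) hits.push({ matterId: m.id, party: p });
    }
  }
  return hits;
}
```

Any hit flags the matter for attorney review before intake completes, matching the "⚠ Review Required" state in the dashboard.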
LOW RISK · CORE BACKBONE

Indian Legal Llama
GGUF · Llama 3.2

Built on Ambuj Kumar Tripathi's fine-tuned Llama 3.2 1B model — trained on 14,543 Indian Legal QA pairs covering the Constitution, IPC, and CrPC. GGUF Q4_K_M quantization means it runs on CPU with no GPU required.

For the US insurance defense market, this model serves as the domain-locked legal reasoning backbone, augmented by our RAG pipeline with CourtListener case law and carrier-specific billing rules.

Base Model Llama 3.2 1B Instruct
Training Method qLoRA (4-bit + LoRA adapters)
Quantization GGUF Q4_K_M · ~808 MB
Training Data 14,543 Legal QA Pairs
Hardware CPU-only · No GPU Required
License Llama 3.2 Community License
View on HuggingFace
Indian Legal Llama · Live Demo
system: You are an Indian legal expert. Only answer legal questions.
user: IPC 302 kya hai? ("What is IPC 302?")
assistant: (response streams here)
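Because the GGUF model runs locally under Ollama, the demo above can be driven through Ollama's REST chat endpoint. A sketch, assuming the model was imported under the tag `indian-legal-llama` (the tag is whatever you chose at import time):

```typescript
// Builds the request body for Ollama's /api/chat endpoint. The model tag
// "indian-legal-llama" is an assumption, not a published Ollama model name.
function buildChatRequest(question: string) {
  return {
    model: "indian-legal-llama",
    stream: false,
    messages: [
      { role: "system", content: "You are an Indian legal expert. Only answer legal questions." },
      { role: "user", content: question },
    ],
  };
}

// Not invoked here: POSTs the request to a local Ollama server (default port 11434).
async function askModel(question: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    body: JSON.stringify(buildChatRequest(question)),
  });
  const data = await res.json();
  return data.message.content;
}
```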

Training Data Coverage

Indian Constitution QA
IPC (Indian Penal Code)
CrPC (Criminal Procedure)

⚠ Version 0.1 Alpha — 100 training steps. Full epoch training planned. Not for production legal advice.

Three Modules. Six Months. Two Developers.

Deliberately scoped so the MVP is in the hands of 2–3 US pilot firms within 6 months.

LOW RISK

Matter Management

The core backbone of the platform. Case intake, conflict check, deadline tracking, document storage, and team assignment — all with multi-tenant isolation.

Phase 1 Weeks 1–6 Foundation build
  • Multi-tenant Next.js setup with Cognito auth
  • Aurora PostgreSQL with Row-Level Security
  • Matter intake + automated conflict check
  • S3 document storage per matter
  • Deadline tracking + SES email notifications
  • Role-based access: Attorney / Billing / Manager
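The per-firm isolation can be sketched as a standard PostgreSQL Row-Level Security policy. Table, column, and setting names here are illustrative:

```sql
-- Illustrative RLS policy: each firm sees only its own matters.
-- "app.current_firm_id" is a session setting the API layer sets
-- after authenticating the tenant.
ALTER TABLE matters ENABLE ROW LEVEL SECURITY;

CREATE POLICY firm_isolation ON matters
  USING (firm_id = current_setting('app.current_firm_id')::uuid);
```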
Active Matters 247 cases
Smith v. Allstate — GL Claim · Deadline: May 15 · L310 Discovery · Active
Johnson v. State Farm — Auto · Deadline: May 3 · ⚠ Approaching · Due Soon
Williams v. GEICO — D&O · Conflict Check: ⚠ Review Required · Review
Davis v. Progressive — Cyber · Deadline: Jun 1 · L210 Pleadings · Active
LOW RISK

LEDES Billing Engine

The biggest daily pain point. AI-suggested UTBMS codes, real-time carrier rule validation, and LEDES file generation — cutting rejection rates from 15–25% to under 2%.

Phase 2 Weeks 7–12 Billing engine build
  • Time entry UI with TanStack Table
  • AI UTBMS code suggestion in <2 seconds
  • Carrier guideline rule engine (per-carrier rules)
  • LEDES 1998B, 1998BI, XML 2.0 & 2.1 export
  • Pre-bill review workflow for billing managers
  • Color-coded validation: Green / Yellow / Red
Time Entries — May 2026
05/01 · Drafted motion for summary judgment · L240 · A103 · 2.5h
05/02 · Travel to deposition — Chicago · E110 · 3.0h
05/03 · Outside printing services · E102 · 0.5h
05/04 · Reviewed opposing counsel brief · L250 · A104 · 1.5h
3 Valid · 1 Warning · 1 Blocked
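The export step can be sketched as pipe-delimited serialization. This is heavily simplified: a real LEDES 1998B record has 24 fixed fields; only a few representative ones appear here, but the `|` delimiters and `[]` record terminator match the format.

```typescript
// Simplified LEDES 1998B fee-line serialization. Field list is abbreviated
// for illustration; the actual spec defines 24 fields per record.
interface FeeLine {
  date: string;         // YYYYMMDD
  taskCode: string;     // e.g. "L240"
  activityCode: string; // e.g. "A103"
  hours: number;
  rate: number;
  description: string;
}

function toLedesLine(lineNo: number, f: FeeLine): string {
  const total = (f.hours * f.rate).toFixed(2);
  // "F" marks a fee line (vs "E" for expense); "[]" terminates the record.
  return [lineNo, "F", f.hours, total, f.date, f.taskCode, f.activityCode, f.description].join("|") + "[]";
}
```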
MEDIUM RISK

AI Coverage Opinion

RAG-based coverage opinion generation. No fine-tuning required — the RAG approach works with the actual policy documents the attorney uploads, augmented by CourtListener case law.

Phase 3 Weeks 13–20 AI pipeline build
  • Docling + Tesseract PDF/OCR parsing
  • pgvector embedding pipeline in Aurora
  • CourtListener API — 6.9M cases free
  • LlamaIndex + AWS Bedrock (Claude) RAG
  • Async SQS job queue — no HTTP timeout
  • TipTap editor for attorney inline review
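The retrieval step of the pipeline can be sketched as a pgvector similarity query. Table and column names are illustrative; `<=>` is pgvector's cosine-distance operator, and the query runner is injected so the sketch stays independent of any particular Postgres client:

```typescript
// pgvector retrieval sketch: top-5 chunks for this matter, nearest first.
// "chunks" table and "embedding vector(...)" column are illustrative names.
const SIMILARITY_QUERY = `
  SELECT id, content
  FROM chunks
  WHERE matter_id = $1
  ORDER BY embedding <=> $2::vector
  LIMIT 5`;

type QueryRunner = (sql: string, params: unknown[]) => Promise<{ id: string; content: string }[]>;

async function topChunks(run: QueryRunner, matterId: string, queryEmbedding: number[]) {
  // pgvector accepts the vector as a bracketed literal, e.g. "[0.1,0.2,...]"
  return run(SIMILARITY_QUERY, [matterId, `[${queryEmbedding.join(",")}]`]);
}
```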
Coverage Opinion Generator 🔒 Attorney-Client Privileged
Extracting policy documents...
Chunking & embedding (pgvector)...
Fetching CourtListener case law...
Drafting coverage opinion...
Loading into TipTap editor...
Generating... 12 min remaining

How the AI Pipeline Works

Retrieval Augmented Generation — no expensive fine-tuning. The AI reads the actual policy document before answering.

01
📤
Document Upload
Attorney uploads policy PDF, complaint, claim docs via react-dropzone → S3
02
🔍
Text Extraction
Docling + Tesseract OCR extracts raw text from PDF/DOCX/scanned images
03
Chunking
LlamaIndex splits text into overlapping chunks for semantic search
04
🧮
Embedding
AWS Bedrock Titan converts chunks to vectors stored in pgvector (Aurora)
05
Case Law Fetch
CourtListener API queries 6.9M US court cases for relevant precedents
06
🤖
LLM Generation
Relevant chunks + case law + prompt → Claude (AWS Bedrock) generates opinion
07
📝
Attorney Review
Draft loaded into TipTap editor. Attorney edits inline, exports DOCX/PDF
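Step 03 above can be sketched as character-based chunking with overlap. LlamaIndex's splitters actually work on tokens and sentence boundaries, so treat this as a minimal illustration of the overlap idea:

```typescript
// Overlapping-chunk sketch: each chunk shares `overlap` characters with its
// predecessor so semantic search doesn't lose context at chunk boundaries.
function chunkText(text: string, chunkSize = 1000, overlap = 200): string[] {
  if (chunkSize <= overlap) throw new Error("chunkSize must exceed overlap");
  const chunks: string[] = [];
  const step = chunkSize - overlap;
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```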
Critical Architecture: Async Queue Pattern

AI jobs run via AWS SQS + BullMQ — never synchronously in an API route. A 15-minute generation job would time out an HTTP connection. The browser polls /api/jobs/{jobId}/status every 5 seconds for live progress updates.
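The polling loop can be sketched as follows. The status payload shape is illustrative, and the fetcher is injected so the loop stays testable without a server:

```typescript
// Client-side polling sketch for the async queue pattern.
interface JobStatus {
  state: "queued" | "running" | "done" | "failed";
  progress?: string; // e.g. "Chunking & embedding (pgvector)..."
}

const isTerminal = (s: JobStatus) => s.state === "done" || s.state === "failed";

async function pollJob(
  fetchStatus: () => Promise<JobStatus>,
  onProgress: (s: JobStatus) => void,
  intervalMs = 5000,
): Promise<JobStatus> {
  for (;;) {
    const status = await fetchStatus();
    onProgress(status); // drive the step-by-step progress UI
    if (isTerminal(status)) return status;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
}

// In the browser, fetchStatus would wrap:
//   fetch(`/api/jobs/${jobId}/status`).then((r) => r.json())
```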

Enterprise Stack, Open Source Cost

75% of the MVP is built on open source tools at near-zero licensing cost.

Frontend

Next.js 14 · TypeScript · Tailwind CSS · shadcn/ui · TanStack Query · TipTap Editor · Recharts · Zustand

Backend

Next.js API Routes · AWS Aurora PostgreSQL · pgvector · Redis ElastiCache · AWS SQS + BullMQ · AWS S3 · AWS SES

AI / ML

LlamaIndex ✓ Free · LangChain ✓ Free · Ollama ✓ Free · Docling (IBM) ✓ Free · Tesseract OCR ✓ Free · AWS Bedrock (Claude) · CourtListener ✓ Free

Security

Amazon Cognito · NextAuth.js · TLS 1.3 · AES-256 at rest · Wazuh + Falco ✓ Free · AWS KMS · RBAC + RLS

Build → Pilot → Partner → Scale

2 developers. 24 weeks. 4 phases. In that order. No shortcuts.

01
Weeks 1–6
Foundation
  • Multi-tenant Next.js
  • Cognito auth + RBAC
  • Aurora PostgreSQL + RLS
  • Matter intake + conflict check
  • S3 document storage
✓ 10 matters created
02
Weeks 7–12
LEDES Billing
  • Time entry UI
  • UTBMS code library
  • AI code suggestion
  • Carrier rule engine
  • LEDES 1998B generator
✓ LEDES file generated correctly
03
Weeks 13–20
AI Coverage Opinion
  • Docling PDF parsing
  • pgvector pipeline
  • CourtListener API
  • RAG + LlamaIndex
  • TipTap editor
✓ Opinion in <20 minutes
04
Weeks 21–24
Polish + Pilot
  • End-to-end testing
  • Security hardening
  • Onboarding flow
  • Performance optimization
  • Bug fixes + UI polish
✓ 2 pilot firms onboarded

Transparent Investment

6-month MVP build. 2 developers. US-first market.

Minimum
₹4.7L
6-month total
  • Developer (₹60K/mo) · ₹3.6L
  • AWS Aurora PostgreSQL · ₹24K
  • AWS ECS Fargate · ₹18K
  • AWS Bedrock (Claude) · ₹48K
  • Redis + SQS + SES · ₹15K
  • S3 + CloudFront + Misc · ₹6K
Get Started
Maximum
₹6.5L
6-month total
  • Developer (₹80K/mo) · ₹4.8L
  • AWS Aurora PostgreSQL · ₹24K
  • AWS ECS Fargate · ₹18K
  • AWS Bedrock (Claude) · ₹90K
  • Redis + SQS + SES · ₹15K
  • Figma + GitHub + Misc · ₹19K
Get Started

Known Risks. Clear Mitigations.

HIGH

No US Attorney Advisor

Law firms won't trust AI legal outputs without expert sign-off.

→ Find one US insurance defense attorney as paid advisor before launch. Offer equity or revenue share.
HIGH

TyMetrix/CounselLink API

Auto-submission feature cannot launch without enterprise partnership.

→ MVP uses manual LEDES file download. Apply for partnership at Month 3.
HIGH

SOC 2 Not Obtained

Large law firms (50+ attorneys) will not buy without SOC 2.

→ Target small firms (<10 attorneys) for MVP. Use their success to fund audit at Month 12.
MEDIUM

AI Output Quality

Attorneys won't trust coverage opinion drafts if quality is insufficient.

→ Position AI as 'draft assistant' not 'replacement'. Always require attorney review.
MEDIUM

Competitor Copies Idea

CaseMark or Harvey could add a billing module.

→ Speed to market. Get 5 paying customers before competitors notice. Deep carrier integrations as moat.
LOW

AWS Cost Overrun

Bedrock inference costs spike with heavy usage.

→ Usage limits per tenant. Cache common AI results. Claude Haiku for simple tasks, Sonnet for complex.
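Both mitigations can be sketched in a few lines: a per-tenant usage cap plus task-based model routing. Task names, the cap value, and the routing rule are illustrative:

```typescript
// Cost-control sketch: cap per-tenant usage and route simple tasks to the
// cheaper Claude tier. All values here are illustrative placeholders.
type Task = "utbms-suggestion" | "citation-check" | "coverage-opinion";

const MONTHLY_TOKEN_CAP = 2_000_000; // illustrative per-tenant cap

function allowRequest(tenantTokensUsed: number, estimatedTokens: number): boolean {
  return tenantTokensUsed + estimatedTokens <= MONTHLY_TOKEN_CAP;
}

function pickModel(task: Task): "claude-haiku" | "claude-sonnet" {
  // Only the long-form coverage opinion needs the stronger (pricier) model.
  return task === "coverage-opinion" ? "claude-sonnet" : "claude-haiku";
}
```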

The Question Isn't "Can We Build This?"
It's "Can We Sell This?"

Build → Pilot → Partner → Scale. In that order. No shortcuts. We're looking for 2–3 US insurance defense law firms for our pilot program.

Prepared by Neural Arc Inc. — Product & Engineering Division · hello@neuralarc.ai · Confidential · April 2026