Document Automation for a Legal Firm
90% reduction in case processing time through on-premises AI inference
A Valencia-based legal firm with 45 employees was manually processing hundreds of case files weekly, spending 40+ hours of qualified staff time on repetitive classification, data extraction, and summarisation tasks.
The Challenge
A well-established legal firm in Valencia was managing over 300 case files per week. Each file required classification by type, extraction of key data points, deadline verification, and generation of summaries for senior partners. The team was spending more than 40 hours per week on these repetitive tasks.
The firm had evaluated cloud-based AI solutions, but two factors were holding them back:
- Confidentiality: Sending client files to external servers was unacceptable for a firm handling privileged information
- Recurring costs: Cloud API estimates projected EUR 1,500–2,000 per month for their processing volume
The Solution
We deployed a local inference architecture with a dedicated compute node at the firm’s premises:
- Hardware: A local inference device running language models optimised for legal document processing
- Models: We selected and fine-tuned models specialised in Spanish legal text comprehension
- Integration: We connected the system to their existing document management software via an internal API
- Automated workflow: Classification → Extraction → Deadline verification → Summary → Alert
All processing happens within the firm’s local network. No document leaves their premises.
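The five-stage workflow can be pictured as a simple orchestration pipeline. The sketch below is illustrative only: the function names, the keyword-based classifier, and the regex extraction are stand-ins for the fine-tuned model calls described above, not the deployed system.

```python
import re
from dataclasses import dataclass, field

@dataclass
class CaseResult:
    doc_id: str
    category: str
    deadlines: list = field(default_factory=list)
    alerts: list = field(default_factory=list)
    summary: str = ""

def classify(text: str) -> str:
    # Stand-in for the model's classification step (keyword heuristic)
    return "contract" if "contrato" in text.lower() else "general"

def extract_deadlines(text: str) -> list:
    # Stand-in extraction: pick up ISO-style dates in the document text
    return re.findall(r"\d{4}-\d{2}-\d{2}", text)

def verify_deadlines(deadlines: list, reference_date: str = "2025-01-15") -> list:
    # Flag any deadline that falls on or before the reference date
    return [d for d in deadlines if d <= reference_date]

def summarise(text: str, limit: int = 120) -> str:
    # Stand-in summary: first sentence, truncated
    return text.split(".")[0][:limit]

def process_case(doc_id: str, text: str) -> CaseResult:
    # Classification -> Extraction -> Deadline verification -> Summary -> Alert
    result = CaseResult(doc_id=doc_id, category=classify(text))
    result.deadlines = extract_deadlines(text)
    result.alerts = verify_deadlines(result.deadlines)
    result.summary = summarise(text)
    return result
```

Keeping each stage as an independent function means any one of them can be swapped for a model-backed implementation without touching the rest of the pipeline.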
The Results
Within the first four weeks of operation:
- Processing time per case file dropped from 25 minutes to under 3 minutes
- The team recovered over 35 hours per week for high-value work
- Data extraction accuracy reached 96%, exceeding the 91% manual benchmark
- Senior partners now receive automated summaries every morning before 8:00 am
The total deployment cost was recovered in under four months through savings on qualified staff hours.
Technology Used
- Hardware: Local inference node (Mac Mini M4 with 32 GB)
- Models: Small language model (SLM) optimised for Spanish legal text (a fine-tuned Qwen2.5-7B)
- Integration: Internal REST API connected to existing document management system
- Timeline: 3 weeks deployment, 1 week validation
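For the integration step, the document management system posts each file to the local inference node over the internal network. The endpoint address, field names, and task list below are assumptions for illustration, not the actual API contract:

```python
import json
from urllib import request

# Hypothetical internal address of the inference node (LAN only)
INFERENCE_URL = "http://192.168.1.50:8080/v1/process"

def build_payload(doc_id: str, text: str,
                  tasks=("classify", "extract", "summarise")) -> bytes:
    # Shape of the request body; field names are illustrative
    return json.dumps({"doc_id": doc_id, "text": text,
                       "tasks": list(tasks)}).encode("utf-8")

def submit(doc_id: str, text: str) -> dict:
    # POST the document to the local node and return its JSON response
    req = request.Request(
        INFERENCE_URL,
        data=build_payload(doc_id, text),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```

Because both endpoints sit on the same internal network, no document content traverses the public internet at any point in this flow.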
VORLUX AI Perspective
This case demonstrates that enterprise AI does not require compromising data security or accepting unpredictable recurring costs. An on-premises architecture allows organisations with strict confidentiality requirements to access cutting-edge AI capabilities while retaining full control over their information.