Legal AI Beyond the Cloud: What We Learned Bringing AI In-House

09 Apr 2025
Track 2

As legal teams explore AI adoption, many assume that cloud-based, proprietary solutions are the only option. But what happens when you take AI in-house?

In this session, we’ll share our journey towards using open-source Large Language Models (LLMs) in our existing environment—what worked, what didn’t, and the key insights we gained from our clients along the way. We’ll address common questions, from “Do open-source LLMs work for legal tasks?” to “What are the environmental impacts?”—giving you practical, experience-backed answers to inform your own AI strategy.

3 Key Learnings

  1. Can open-source LLMs perform as well as proprietary models? – Discover how right-sized, self-hosted models like DeepSeek and Llama can match or even outperform cloud-based solutions in legal-specific tasks, response quality, and efficiency.
  2. What does it take to bring AI in-house? – Learn what’s involved in deploying open-source LLMs within a secure local environment, including infrastructure, costs, and operational considerations.
  3. Should you build, buy or partner? – Explore whether deploying your own AI makes sense for your organisation, or if partnering with vendors who offer self-hosted AI solutions can provide the right balance of control, security, and efficiency. We’ll discuss key factors like technical capacity, compliance needs, and strategic goals to help you determine the best approach.
Speakers
Mitchell Scott, Principal Data Scientist, Consilio