Detailed Session Descriptions
Opening Keynote Fireside Chat: Built for This Moment - From Stacks to Strategy: Library Roots, C-Suite Impact
What if the most future-ready leaders in legal innovation trace their professional DNA back to the law firm library?
This keynote discussion brings together a powerhouse pair of law firm leaders whose careers began in Research, Information, and Library Services and have since evolved into transformative roles that are reshaping how their firms operate and compete in an AI-driven legal ecosystem.
Now in the C-suite, these two professionals have helped redefine the trajectory of the function and the perception of its value.
In this candid fireside chat, Katherine Lowry (Chief Information Officer at BakerHostetler) and
Greg Lambert (Chief Innovation Officer at Jackson Walker LLP) will share the human stories behind their journeys: the zigs and zags, inflection points, risks taken, and mindsets that transformed foundational library and research skills into strategic influence at the highest levels of their firms.
Far from a retrospective, this discussion also looks forward, exploring how the core competencies of library and research professionals (curation, navigation, interpretation, and judgment) are proving uniquely powerful in today’s increasingly AI-native environment.
As firms grapple with AI adoption, data governance, and technology complexity, these leaders illustrate why research professionals are not observers of change, but architects of how AI is responsibly used in legal work.
This conversation reframes the library not as a legacy function, but as a launchpad for leadership: a place where information architecture, context-setting, and human judgment were practiced long before AI made them mission-critical.
SPEAKERS:
Katherine Lowry, Chief Information Officer, BakerHostetler
Greg Lambert, Chief Innovation Officer, Jackson Walker LLP
Moderator: TBD
From Library to Intelligence Engine: The Research Function Reimagined for the AI Era
How AI is changing the research mandate—from information retrieval to interpretation, validation, and strategic enablement
As generative AI accelerates the pace of legal research, the traditional “answer delivery” model is evolving into something far more strategic: turning knowledge into intelligence that informs firm decision-making and strategy. This session brings together research leaders redefining their function as an insight engine—where researchers operate as analysts, interpreters, and trusted partners in AI-enabled workflows, helping the firm validate information, synthesize context, and deliver reusable insight across KM, BD, CI, innovation, and practice teams.
Rather than a theoretical conversation, this session is built around interactive, practical components designed to help attendees benchmark where they are today and identify concrete next steps to elevate their impact tomorrow. Attendees will explore:
How the research function is shifting from retrieval to interpretation to intelligence enablement
What “analyst-level” research looks like in practice—and the skills that define it
Where research teams fit in the human-in-the-loop model for AI verification and trust
How structured insight becomes fuel for client intelligence, pricing, knowledge models, and AI initiatives
Practical next steps for elevating research impact—regardless of AI maturity level
Interactive components will include:
Research Maturity Snapshot
A simple, live assessment to help attendees identify where their function sits on the evolution curve—from answer delivery → insight architecture → intelligence engine.
Before/After Transformation (Case Examples)
Panelists will break down real research scenarios—what a typical request looked like in the past, how it’s structured and captured now, and what changes unlocked reusable firm intelligence. These examples offer a blueprint for reimagining existing workflows without massive reorganization.
The New Skills Portfolio (Competency Framework)
A practical look at the competencies defining high-value research in the AI era: analytical thinking, narrative synthesis, context weighting, model verification, and structuring knowledge for systems. Attendees will learn what to cultivate, how to grow these skills within teams, and why these capabilities map directly to leadership opportunities.
SPEAKERS:
Justine Morgan, Director of Research & Knowledge, Venable LLP
Leanna Simon, Director of Research and Intelligence, Honigman LLP
Scott Bailey, Director of Research and Knowledge Services, Eversheds Sutherland
Moderator: TBD
Case Study: Collaboration and Convergence - Sidley’s Blueprint for Cross-Functional Intelligence in the Age of AI
Redefining roles, responsibilities, and collaboration models in the AI era
The accelerating impact of AI is reshaping how law firms organize knowledge, technology, and intelligence functions—and few firms illustrate this transformation more vividly than Sidley Austin. As new capabilities emerge and traditional boundaries blur, Research, KM, Data & AI, Client Intelligence, and IT are being pushed into unprecedented levels of interdependence.
This candid discussion brings together leaders from across Sidley’s evolving information and intelligence ecosystem to explore how the firm is redefining roles, responsibilities, and collaboration models in the AI era. Panelists will discuss how the creation of Sidley’s new Data & AI function has shifted organizational structures; how traditional library, research, and KM functions are navigating changing ownership lines; and how cross-functional coordination now drives tool vetting, technology adoption, and strategic alignment.
Through real examples—ranging from AI-powered legal research tools to shared workflow initiatives—the panel will unpack the opportunities and tensions of convergence: where functions complement each other, where they collide, and how Sidley is ensuring that innovation, insight, and ROI stay connected across departments.
SPEAKERS:
John DiGilio, Firmwide Director of Library Services, Sidley Austin LLP
Terry Kim, Sr. Director, Head of Product & AI Enablement, Sidley Austin LLP
Decoding Markets: Cross-Industry Playbooks and Analyst Frameworks for Modern Law-Firm Intelligence
How world-class analysts turn markets, companies, and competitors into decision-grade intelligence
Law firms are under pressure to provide sharper market insight, anticipate client needs, and support partners with decision-grade intelligence. But many teams are still operating without the analytical frameworks used by world-class analysts outside the legal sector. This session brings together leaders from equity research, Fortune 500 competitive intelligence, and enterprise market analysis to explore how top analysts decode markets, assess companies, and separate signal from noise. In this candid discussion, attendees will learn:
How analysts in different industries model markets, size competitors, and estimate financial impact
Practical frameworks for evaluating private companies and emerging market spaces
How to build fast, defensible insight (even with limited data access)
What translates directly to law-firm research, BD, and CI — and where legal requires a different approach
This is a rare look into the thinking and toolkits used by analysts who operate at the highest level — with ready-to-use concepts that law-firm researchers can integrate into their own work immediately.
SPEAKERS:
Jay Nakagawa, Director of Competitive Intelligence, Dell Technologies
Manav Patnaik, Equity Research, Barclays | Business, Information & Professional Services (BIPS)
Moderator: Ken Crutchfield, Strategic Advisor, Author - Legal Technology, Spring Forward Consulting
Case Study: Benchmarking AI Tools - The AI Discipline Gap
Why Research Is Becoming the Front Line of AI Verification, Accuracy, and Trust
As law firms accelerate investment in generative AI, many are now confronting a harder and more consequential question: which tools actually perform in real legal research workflows—and which do not?
This case-based session examines how research leaders at Paul Weiss are moving beyond demos and vendor claims to systematically evaluate AI tools using real firm research queries. Drawing on a live benchmarking initiative with Vals AI, this discussion explores a multi-dimensional evaluation framework designed to reflect how lawyers and researchers actually work. Rather than focusing solely on “correct” answers, the framework assesses AI output across structure, style, relevance and accuracy, hallucination and citation risk, and overall usability—capturing what truly matters in high-stakes legal research: whether output is defensible, reliable, and fit for purpose.
The session highlights why human-in-the-loop verification is not a temporary bridge, but a permanent requirement of responsible AI integration.
Importantly, this is not a race to crown a single “winning” tool. Instead, the case study offers a practical, replicable approach to understanding where different AI solutions add value, where they introduce risk or inefficiency, and where consolidation or restraint may be warranted as firms reassess AI investments made under competitive pressure. Attendees will leave with a clear framework for bringing discipline, rigor, and trust to AI evaluation—grounded in real workflows, cross-functional collaboration, and the evolving strategic role of research in the AI-enabled law firm.
SPEAKERS:
Amy Dietrich, Director of Research & Competitive Intelligence, Paul, Weiss, Rifkind, Wharton & Garrison LLP
Skills-Building Session: Context Engineering as a Core Research Skill in the AI Era
Research professionals as context architects and trust stewards in AI-enabled workflows
As AI becomes embedded in legal research and intelligence workflows, one of the most critical—and least clearly articulated—skills is context engineering.
Context engineering describes the next evolution of the research role: not simply retrieving answers or prompting tools, but designing, shaping, and curating the context that allows AI systems (and lawyers) to produce accurate, defensible, and usable work product. It is the practice of defining the problem before an answer is generated—translating lawyer intent, domain knowledge, assumptions, constraints, and risk into structured context that governs when, how, and whether AI should be used at all.
Unlike prompt engineering, which focuses on optimizing how a system generates an answer, context engineering operates at the pre-generative layer of the work. It makes implicit assumptions explicit, determines trust thresholds, and establishes where human judgment and verification are non-negotiable. In short: prompt engineering optimizes output; context engineering governs responsibility, reliability, and trust.
In low-risk domains, prompt engineering is often “good enough.” But in legal research, ambiguity is normal, the stakes are high, and accuracy does not equal usability.
Context engineering names work that research professionals are already doing—but that firms have not yet clearly articulated, operationalized, or rewarded. This session validates that work, gives it language and structure, and positions research professionals as context architects and trust stewards in AI-enabled workflows.
Session Format: Facilitated, Hands-On Working Session
This session is designed as a guided, skills-building workshop that mirrors how high-stakes research and intelligence work actually unfolds inside firms.
The session begins with a brief framing:
What “context engineering” means in practice
Why it is emerging as a core research competency
How it differs from prompt engineering and tool training
Why human-in-the-loop verification is a permanent feature of responsible AI use—not a temporary safeguard
Participants then move into a facilitated working exercise.
Guided Working Exercise
Working in groups, participants will:
Work from a Realistic Scenario: Tackle a high-stakes research or intelligence request (e.g., a competitive pitch, regulatory exposure analysis, or opaque private-company assessment) where goals are ambiguous, information is incomplete, and risk matters.
Deconstruct the Request (surface what is usually implicit):
What decision is actually being supported?
Who is the output for, and how will it be used?
Key assumptions, constraints, and risk tolerances
Where errors, hallucinations, or overconfidence would cause real harm
Design the Context (not the prompt). Collaboratively define the contextual scaffolding an AI system would need to operate responsibly:
What must be made explicit for accuracy and nuance
What should be excluded to avoid distortion
Where judgment, domain expertise, and narrative synthesis matter
Where AI assistance is appropriate—and where it is not
Define the Verification Layer. Explicitly design the human-in-the-loop role:
What must be reviewed, validated, or stress-tested
What signals would indicate a problem
What cannot be delegated to AI
Why trust cannot be automated
What Participants Take Away
A repeatable way to frame problems, engineer context, and define human-in-the-loop roles, with skills that can be applied immediately
Clear language to articulate and defend research judgment in AI-enabled workflows
This interactive session bridges theory and practice, slows participants down before they speed up with AI, and reinforces why research professionals are uniquely equipped to guide AI systems responsibly. In doing so, it elevates research from support function to architect of context, steward of trust, and enabler of decision-grade intelligence in the AI era.
SPEAKERS:
Courtney Toiaivao, Director of Research Services, Holland & Knight
TBD
From Metrics to Mandate: Strategic Storytelling for Driving Influence and Buy-In
Turning data into executive-level narratives that secure investment, credibility, and impact
Law firm research, knowledge, and information leaders are collecting more data than ever — usage statistics, efficiency gains, cost savings, and AI-related metrics. Yet translating those metrics into influence with firm leadership remains a challenge. The issue isn’t a lack of value. It’s a lack of shared language.
As firms invest heavily in AI, data platforms, and new operating models, law firm research, KM, and information professionals are increasingly asked to justify headcount, budget, and strategic relevance in executive terms. This session explores how to build compelling cases for investment by combining hard metrics with strategic storytelling — aligning impact with firm priorities such as revenue, client service, risk management, efficiency, and talent.
At its core, this is a candid conversation about professional identity, influence, and impact—how to build investment buy-in by pairing hard metrics with narratives that resonate with the C-suite and shift perceptions from cost center to strategic force multiplier.
SPEAKERS:
Emily Florio, Director of Knowledge Research & Resources, DLA Piper
June Liebert, Director of Information Services, O'Melveny & Myers LLP