Generative AI is no longer a hypothetical future for law firms; it is already reshaping daily practice and forcing hard choices about strategy, policy, and investment. During the ILTACON 2025 panel discussion "Actionable AI Strategy and Policy," three legal tech leaders (Christian Lang of Lega Inc., Sukesh Kamra of Torys, and Anna Corbett of Akin Gump Strauss Hauer & Feld LLP) debated whether firms should begin with a formal AI strategy or prioritize experimentation, how to turn policy into operational safety, where to place big technology bets, and whether AI is merely an innovation play or a fundamental transformation of legal service delivery.
The conversation, and the practical takeaways that emerged from it, provide a compact playbook for firms that want to move quickly but responsibly. The discussion opened on a familiar split: start with a clear, firm-level strategy, or let strategy emerge from experimentation? Sukesh Kamra argued that strategy must come first: it creates alignment, helps prioritize use cases by value, enables scalability, and sets a firm's risk tolerance so experimentation does not fragment into inconsistent, risky practices. As he put it, firms should ask the "why" and the "how" before the "what."
At the other end of the spectrum, Christian Lang urged that because models and use cases are still evolving rapidly, firms need an engine of exploration: diverse, empowered "explorers" who surface the real user behaviors and the unpredictable "aha moments" that reveal what will actually stick. Anna Corbett struck the pragmatic middle ground: firms should establish a light strategic north star and baseline governance, but must remain iterative so policy and tooling can evolve as models and vendor offerings change. Together, these perspectives suggest a two-track approach: a concise strategic orientation that directs activity, paired with disciplined, time-boxed experiments mapped to that orientation, so discovery is purposeful rather than ad hoc.

Policy was the next flashpoint. Everyone agreed that a baseline policy is necessary: clients are asking about it, and lawyers operate in regulated contexts where confidentiality and privilege matter. Sukesh emphasized that firms need upfront guardrails to protect privileged and confidential information; without a policy, firms are left exposed. Christian pushed back on "policy-only" thinking by pointing out that policies are only as good as their operationalization. In practice, he said, firms often discover data leaks and policy violations only when audit logs and compliance scans reveal them.
Tooling such as approved, firm-controlled connectors, data loss prevention, and auditing channels behavior so that users can experiment without exposing sensitive data. Anna reinforced that policy should be flexible and light on bureaucracy, enabling practice-group nuance while preserving a firmwide baseline. The combined message: publish a short, clear firm policy that defines permitted data and tools, then invest early in technical controls and monitoring so policy is enforced by design rather than solely by policing.
When the panel turned to tooling and investment, the speakers layered the market into three strata: enterprise foundations (document management, identity, core systems), point solutions that solve specific workflow problems, and an R&D/experimentation layer that lets firms test models, prompts, and small prototypes.
Christian warned that massive platform bets are risky in a world where base models and vendor landscapes shift quickly; an R&D sandbox helps prevent overpaying for point solutions that new capabilities may eclipse. Anna argued that firms should nonetheless prioritize enterprise productivity tools that touch many users, since those projects build culture, normalize AI use, and create visible advocates for further investment. Sukesh recommended a readiness assessment before deciding on scope. Taken together, the practical path is clear: assess readiness, secure some enterprise productivity wins to build momentum, pilot targeted point solutions, and maintain a small, well-governed R&D sandbox for rapid learning.
Underlying all these choices was an insistence that culture and leadership determine whether AI becomes incremental or transformative. Christian offered a blunt prediction that many technical drafting tasks will be automated within a short horizon, forcing firms to rethink what work commands a premium. Sukesh and Anna emphasized that transformation depends on visible sponsorship, tolerance for experimentation, and cultural DNA: nimble, process-oriented firms with committed leaders will realize bigger shifts faster than conservative firms that delegate AI to a committee.
The practical implication is that leaders must set a straightforward narrative and then sponsor explorers, measure adoption, and reward beneficial shifts in behavior.
For firms ready to act, the panel's conversation suggests a compact set of first moves: publish a one-page firm AI policy that sets baseline guardrails; run a 4–6 week readiness assessment to inventory data, systems, and people; deploy core technical controls (DLP, identity and connector approvals, logging); stand up a small R&D sandbox (with safe or synthetic data) and an experiment cadence; and designate a cross-functional AI owner with clear authority to run pilots and report to leadership. Titles like "Chief AI Officer" can be helpful but are no substitute for authority, resources, and executive buy-in.

The panel did not offer a single magic answer, but it did provide a coherent, implementable approach: set a light strategic north star, protect clients by design through baseline policy plus technical controls, fund disciplined experimentation to surface high-value use cases, and let leadership and culture convert pilots into firm-level change. Firms that adopt that blend will be better positioned to protect clients, unlock productivity, and shape the next phase of legal service delivery.