
How Law Librarians Are Taming the AI Landscape

By Leanna Simon posted 2 hours ago


Please enjoy this blog post co-authored by Ramon Barajas, Director of Research Services, Alston & Bird; Marcelo Rodríguez Escribano, Associate Librarian for Comparative and International Law and Professor, University of Arizona, James E. Rogers College of Law; and Leanna R. Simon, MLIS, CKM, PMQ, Director, Research and Intelligence, Honigman LLP.

Every generation of law librarians and legal information professionals can point to a moment when the ground shifted—when a new tool arrived not as a minor upgrade, but as a change in posture. For many, the first jolt was the move from print digests and reporters to online databases. Then came the public Internet, exhilarating and unruly. Today, Generative AI (GenAI) has landed with similar force, compressing time, re-ordering workflows, and challenging assumptions about what it means to “do research.” Yet if this feels abrupt, it is also familiar. Platform names change; the profession’s core work—judgment, context, verification, and teaching others how to think with legal authority—keeps returning to center stage.

One of the profession’s earliest lessons is that legal research is rarely linear. New law students often expect a straight line: identify the issue, locate the governing rule, apply it, and move on. Practicing attorneys can feel the pull of that story, too, when the clock is running and a client wants an answer. But real research loops—it doubles back, refines terms, tests assumptions, and sometimes reveals that the first question was the wrong one. GenAI heightens that dynamic. It can produce an immediate, fluent response that feels like the end of the path when it is often only the start of the loop, prompting the next, better question.

That is why context is non-negotiable. Legal sources are products of jurisdiction, time, procedural posture, and institutional perspective. A case is more than a holding; a statute is a living text shaped by amendments, interpretive history, and implementation. Secondary sources can be indispensable while still reflecting editorial choices and bias. GenAI, built to compress and synthesize, can blur these layers—whether a case is binding or persuasive, whether an authority is current, whether a regulation has shifted, or whether a summary has quietly traded nuance for generality. In an era of confident-sounding outputs, the habit of asking “under what circumstances, in what jurisdiction, at what time, and according to whom?” only grows more valuable.

Late-1990s law library literature described this dynamic with striking clarity. Early Internet research was thrilling and risky: access was inconsistent, sites failed to respond, and seemingly authoritative content could be incomplete, outdated, or wrong. The warning was simple: speed does not equal reliability. That message reads like a memo for today. GenAI can deliver an answer in seconds while obscuring provenance, collapsing authority, or fabricating details. The familiar librarian’s move—validate, trace, confirm currency, and ensure the source actually supports the proposition—has returned as a central skill, not a nostalgic one.

Kay Todd captured that early-web reality in her AALL Spectrum piece, “The Emperor’s New Clothes Are on the Internet,” where the Internet’s apparent abundance could conceal gaps in authority, completeness, and currency. Her point was not to retreat from new tools, but to use calibrated skepticism and routine verification—exactly the posture GenAI demands when it produces plausible-sounding responses that still must be traced to citable, current sources.

If the Internet was one inflection point, AI feels like another—call it Research 2.0 (or 3.0 if you count the web revolution as its own wave). Whatever the label, AI is not a quiet feature release; it is a category shift. Research teams inside firms and institutions are being asked to shape how it enters practice: how it is evaluated, governed, deployed, and explained. That work requires a stance that is neither defensive nor starry-eyed—holding two truths at once: these tools are developing quickly and can help, and they remain imperfect in ways that carry real consequences.

Those consequences are already here. Hallucinated citations and near-miss case names slip into drafts; training data and model behavior can be opaque; confidentiality and security concerns arise as soon as client facts meet a prompt box. The line between assistance and unauthorized practice can blur in client-facing contexts. Meanwhile, traditional methods are dismissed as “in the past,” as if rigor and judgment expire. They do not. The task now is to redesign workflow without surrendering the standards that make legal analysis dependable.

Another long-taught lesson still holds: efficiency is learned, not handed over in one session. Researchers want the “best database” or the “fastest way,” but what separates speed from quality is strategy—when to start broad, narrow, switch sources, triangulate, and stop because further digging will not change the answer. GenAI can accelerate early steps, especially retrieval and synthesis, but it cannot replace judgment about which path is worth taking. If anything, by making it easy to get an answer, AI raises the premium on deciding whether the answer is good.

Then there is persistence—the most human part of research and often the least visible. Complex research hits stretches where nothing comes together: searches fail, sources conflict, or the trail goes cold. In those moments, the librarian’s role is to normalize the process and help rebuild a path forward. AI will not remove that friction; it can sharpen it when a polished response collapses under basic checks. Helping people reconstruct the question and strategy is still part of the craft.

The ethical dimension follows the same recurring pattern. The 1990s treated the Internet as an ethical disruption in part because everyone could publish; attorneys needed help separating credible legal information from noise, and professional frameworks lagged behind the technology. Anne K. Abate makes this point directly in her AALL Spectrum article, “Computers and the Internet: Ethical Concerns,” showing how research technologies can outpace the norms meant to keep legal work reliable and client-focused. GenAI revives those concerns in a new form: when outputs are generated rather than published, provenance is harder to see, and a seamless narrative can masquerade as “authority.” Confidentiality, security, competence, candor, and the duty to promote accurate legal information do not become less important because a tool is novel; they become more operational. Research teams and librarians are often the ones turning those abstract duties into daily guardrails.

Inside firms and institutions, that governance work is tangible. Librarians and research professionals test tools before they reach attorney desktops, join procurement and vendor evaluations, and press for answers about data handling, coverage, auditability, and model behavior. We help draft policies that set clear limits around confidential information and acceptable use. We design training that teaches more than buttons—how to prompt, verify, cite, and document work—and we build human-in-the-loop checkpoints so organizations can move quickly without moving recklessly.

We have seen workflow reshaping before. Email replaced much of the in-person reference desk without ending reference work; it changed its form and expanded its reach. AI agents may similarly handle routine questions while escalating nuanced issues, leaving librarians more time for complex problems, quality control, and coaching.

A second analogy is the long shift from print to digital, accelerated by the pandemic but underway for decades. Print did not vanish overnight; it migrated into platforms, became more searchable, and updated more frequently. AI raises a sharper version of the same question: will it displace the traditional treatise? Perhaps partially, in a task-driven culture that rewards quick summaries. But the deeper evolution is that treatises become more structured, continuously updated, and machine-actionable—resources that inform AI systems while remaining the curated, citable backbone behind them. Librarians will still evaluate secondary sources and teach when synthesis is insufficient and doctrine requires a deeper dive.

Working with everyone from first-year students to seasoned litigators reinforces another truth: no one ever fully “masters” legal research. The law evolves, tools change, and each matter brings its own constraints. What researchers can develop—and what librarians can cultivate—is confidence in uncertainty: how to proceed when the path is unclear, how to test assumptions, and how to treat an output as a hypothesis that must be confirmed.

If legal research is learning how to think with the law, the present moment asks us to do that thinking out loud—and fast. GenAI can draft, summarize, and suggest; it can surface terminology and map an issue landscape in minutes. Used well, it speeds orientation and frees time for analysis; used poorly, it creates a mirage of certainty. The opportunity is to integrate AI into a discipline that keeps sources visible, logic checkable, and authority traceable.
That is why the profession’s “traditional” competencies are not vestiges; they are the foundation this moment demands. Someone has to look at a confident AI output and know what to test first. Someone has to catch a case cited out of jurisdiction—or without the procedural posture that changes its meaning. Someone has to notice when a quotation cannot be found, or when a rule statement is too clean to be true. Someone has to insist that a good-sounding answer is not the same as a well-sourced one—and teach others to keep that distinction under pressure. Research teams do this daily. The shift now is that organizations need this expertise not only at the point of need, but upstream, as part of how GenAI enters the institution.

For that reason, the most plausible future is role transformation rather than displacement. GenAI will automate parts of retrieval and drafting, just as online research once automated what required days in print. But automation does not remove the need for judgment; it raises the stakes for it. The disposition that fits the moment is both excited and cautious—those are not opposing attitudes; they are the job. When librarians help institutions test tools, innovate with what is at hand, and expand capability responsibly, the payoff is collective: stronger work product, better answers for clients, and smarter investments in technology that is fit for purpose.

In the end, “Research 2.0” is less a product release than a professional posture: bridging what we know with what is still taking shape. Law librarianship has been here before—meeting technological change with resilience, refining method, and reasserting the value of context, strategy, persistence, and ethics. The call is straightforward: test the tools, name the risks, build accountable workflows, and teach researchers to interrogate outputs rather than admire them. The most durable outcomes will rest on the standards law librarians have always carried—so that, as practice evolves, all ships rise on a foundation we helped design.

Please note that only HeinOnline subscribers can access the articles linked in this blog post.
 




#GenerativeAI
#InnovativeLegalResearch
#KnowledgeManagementandSearch
#KnowledgeManagement
#ArtificialIntelligence
