AI at ILTCON 2017 - Part 3

By Joe Davis posted 02-19-2018 18:15

  

Attendees packed the room for the third and final session on artificial intelligence at ILTACON 2017, eager to hear about real-world experiences. The session’s panelists did not disappoint, sharing success stories and cautionary tales about implementing AI solutions in their law firms and legal departments.


Moderator:

Andrew Arruda, CEO and co-founder of ROSS Intelligence


Panelists:

Anna Mosa - Senior Manager of Strategic Projects at White & Case LLP

Amy Monaghan - Practice Innovations Manager at Perkins Coie LLP

Jonathan Talbot - Former Director at DLA Piper, now Director of IS Applications & Desktop at Cooley LLP

Julian Tsisin - Legal Technologies and Systems at Google



Anna Mosa on Getting Started with AI:

"Getting started has been more about an initial kind of education and communication plan throughout the firm.  We wanted to make sure that the lawyers weren't just getting all their information from the media, but also from us in knowledge and other business services teams helping them handle the deluge of information that's out there."


Julian Tsisin on the client’s perspective:

"From a client perspective, I don't think anybody cares if law firms use AI, ML or anything else.  Clients are looking for efficiency gains, and if you can do it through AI which is now a hot topic, that's the request you're going to get.  But from my perspective, if you can increase efficiencies through an add-on in Microsoft Office, great, let's do that.  But I think right now there's so much hype in the industry that everyone's just asking about it."


Julian Tsisin on firms using AI:

"If you see a firm that is making that investment in AI and machine learning, it also gives you an intangible feeling about how good the firm is with their use of technology."


Amy Monaghan on the negative aspects of AI:

"I'm constantly having to educate and correct misperceptions from attorneys, other users, and clients.  AI/Machine learning has been propped up as this magic bullet, and I think we heard in the other two sessions that it's not.  It's a very powerful tool to be used on very narrow, targeted, specific use cases.  However, the pros are that you can get creative with these tools."

Amy Monaghan on the positive aspects of AI:

“A lot of these tools have a lot of flexibility, you can get really creative and find that actually this may not get us to 100%, but it can get you to 75%, 80%, maybe to 90% of solving your problem.  So the pros with both being an early adopter and working with like-minded individuals within the firm who are on board with trying new things out, you can start to understand how we can leverage these tools to create innovative solutions, sometimes client-facing solutions.  And so you start to hear different perspectives and get ideas on how we can really craft these tools.  And another pro to being an early adopter, if you create a good relationship with your vendor, you can start to have an ongoing dialog with them, ask them where they're going, what's their roadmap looking like, and let them know you would like to have input on that roadmap.  Let them know what your use cases are - it could be they've thought of this but there's not an overwhelming demand, and then at that point you get on the phone to all your friends at other firms.”

Anna Mosa on the current state of AI:

“I wouldn’t say we have the silver bullet on the platform side.  I think things are changing really fast, but luckily the firm is pretty patient, and as long as we feed them the point solutions they can try and use and get their hands on and understand the technology, when we do have more fully integrated, enterprise-wide AI solutions across the firm, they will understand what those are about.”

Anna Mosa on the challenge of AI:

“The hardest part for us on the business case side is that we're often changing the way people may be working.”

Julian Tsisin on solving business problems with AI:

“This is not about ML or AI or any other kind of tool - this is about a business problem and how you're going to solve it.  It doesn't matter what you call the technology, at the end of the day it's software.”

Julian Tsisin on the importance of education about AI:

“Even at Google, a company that knows AI inside and out, lawyers don't.  So education is a very important part of building a business case.”

Jonathan Talbot on selecting an AI solution:

"I think the important things you should be looking at are how it fits into the lawyer's workflow.  Does it fit how they work?  That's a very important aspect, and ease of use goes along with it.  What does the UI look like?  Lawyers aren't going to spend a whole lot of time training and learning this thing, so you have to have something that's quick and easy for them to learn."

Amy Monaghan on law firm data:

“Even though everyone says law firms sit on a mountain of data, it's not good data most of the time.  It's a lot of data, yes, but most of it you can't actually use because it makes no sense.”

Amy Monaghan on the practical challenges of AI:

“One of the biggest challenges I've faced in the past with building my own models is that we didn't have enough examples of the language we wanted Kira to learn. You eventually start to build up a stockpile and you're able to do the training necessary.  In other instances we have a lot of data, the problem is sometimes the variance between the language is too great, or it's the exact same, which doesn't make for an exceptional model because it's not going to pick up those variances.  So you really need to do your own due diligence when you get into the stage where you're going to look to train your own models to use on documents.  You need the help of subject matter experts to map out those documents and map out the language and see what's going to give you the most accurate results, and also the most tailored results to the problem.  And then, education and adoption - that's always going to be a challenge and a pain point.”

Andrew Arruda on the importance of process mapping:

“We can't approach [AI] as something completely new.  We can't just bring this technology in and expect it to do everything.  We do have to know what it's going to do.”

Anna Mosa on maintaining AI:

“Often you're introducing not just new technology but a whole new function in a way, and that's been a challenge for us. ...  This is not a piece of software you can implement and just let run.  It needs to be cared for and watched and developed.  You can't really go to LinkedIn and find here's the 1500 people in New York City that do this - it's kind of new, and that's a little bit hard.  And how does that integrate with the teams we already have, and the work you already have.  That's been a challenge for us, but an exciting one, because you're bringing in new ideas.”

Jonathan Talbot on change management:

“For me, this project has been more about change management than AI - getting the lawyers to agree and move with us to get this all done.”

Julian Tsisin on the importance of training data:

“On the build side, we built our own classifier, and it was a relatively straightforward project, and we quickly realized we didn't have enough training data.  We actually had to outsource the classification of a couple thousand patents to a law firm, and then once we had that clean data we could classify our full portfolio - we could classify all the data in the USPTO if we wanted to.  You do need to expend a little bit of time - and sometimes a lot of time - on getting that clean data, because it is a true supervised learning ML project, and you can't move forward without that training data.”
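The workflow Tsisin describes - hand-label a small sample, train a supervised model, then apply it to the rest of the portfolio - can be sketched with a toy word-count Naive Bayes classifier. This is only an illustration: the patent snippets and the "software"/"hardware" categories below are invented, and a real project would need thousands of labeled documents, not four.

```python
# Sketch of the supervised-classification workflow: a small set of
# hand-labeled examples (the "outsourced" classifications) trains a model,
# which then labels the unlabeled portfolio.  All data here is invented.
from collections import Counter, defaultdict
import math

# Step 1: clean, hand-labeled training data -- the expensive part.
labeled = [
    ("neural network image recognition model", "software"),
    ("machine learning data pipeline method", "software"),
    ("battery cell electrode chemical compound", "hardware"),
    ("circuit board thermal cooling assembly", "hardware"),
]

# Step 2: train a word-count Naive Bayes model from the labeled sample.
word_counts = defaultdict(Counter)   # per-class word frequencies
class_counts = Counter()             # per-class document counts
vocab = set()
for text, label in labeled:
    class_counts[label] += 1
    for word in text.split():
        word_counts[label][word] += 1
        vocab.add(word)

def classify(text):
    """Return the most probable class under a Laplace-smoothed model."""
    scores = {}
    for label in class_counts:
        total = sum(word_counts[label].values())
        score = math.log(class_counts[label] / len(labeled))
        for word in text.split():
            p = (word_counts[label][word] + 1) / (total + len(vocab))
            score += math.log(p)
        scores[label] = score
    return max(scores, key=scores.get)

# Step 3: apply the trained model to unlabeled documents.
print(classify("image model training method"))   # -> software
print(classify("electrode assembly cooling"))    # -> hardware
```

With only four training documents the model is fragile, which is exactly Tsisin's point: the bottleneck is not the algorithm but assembling enough clean, labeled data before step 2 can work.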

Andrew Arruda on data quality:

“Next year at ILTACON, there's going to be discussions about how to train using this data, how to clean the data.  A lot of the time you go to an organization, and it's like "but Andrew, we have all this data, we're good.  You said we needed a lot of data, we have tons of data."  But not all data is created equal, and the way that we had been storing this data before was kind of spotty.  We all know how hard it is to get lawyers onto knowledge management systems and to task things properly.  If I can make a prediction (next year or the year after, we'll see how fast this all moves), I think there's going to be whole workshops on that in terms of training these systems.”

Amy Monaghan on benchmarking AI:

“The challenges of training data, costs, and measuring success are going to go hand in hand.  The costs in terms of resources and people time are high, especially when you're training your own models, and that's because you need to gather and vet the data, you need to spend time training the models and then you need to understand how you're going to measure success.  Are you going to measure success based on dollars, are we doing a benchmark exercise - our due diligence took us this amount of time prior to our tool and now it takes us this amount of time and we saved X amount of money for the client? Or are you going to measure success on the actual success of the model - is it doing its job?  It might be that you do both.  At this point, understanding your return on investment is going to be a little difficult - it's anecdotal.  We’re hearing from our pilot group that ‘yeah, it works great, it's saving us time, it's getting us to information faster, but going down the road we're going to need hard data in order to quantify the actual cost of these tools.’  Those are going to be your trifecta of challenges that you're definitely going to experience.”

Amy Monaghan on finding the right internal groups for AI projects:

“We've had several discussions with different groups in our firm and you might want to look for people and understand where their motivations and their perceptions are coming from.  Litigation may not be a good pilot group because they're more risk averse, whereas IP might be better because they understand the iterative process of software.  And also find those people who are champions in your firm, they understand that this is the trajectory of legal work and they understand that clients are specifically asking for these tools, and work with them to start harnessing where you can implement these tools.  Sort of a grass-roots, ground-up approach.  [There are] a lot of firms, ours included, that have strategic plans that include these initiatives, but in order for them to be rolled out successfully and actually used, you’re going to have to ingrain yourself throughout the firm and find those people who are willing to believe and support you and get on board.”

Jonathan Talbot on the current and future state of AI:

“We've trained over 800 lawyers, over 600 users in the system, hundreds of deals loaded, hundreds of thousands of documents, but there's a lot more I want to do.  And the future is now for a lot of these things.”

Julian Tsisin on outcome prediction and settlement analysis:

“Conceptually from a machine learning perspective, it's not a very complicated problem.  The issue is data again.  I actually tried to do this internally a couple of years ago, and I very quickly realized that even though Google has a lot of data from our litigations, it's not statistically significant enough to build very accurate models.  I'm interested to see how the industry will come together and start sharing more data.  There are some vendors that are doing this already, but I would like to see if companies can come together, share more of that data in an anonymized fashion so we can all benefit from it going forward.  It's a little more 'out there' - not because it's not possible, but because there’s no data.”


Listen to the audio recording, download the slides and view the cartoons created during the session here.

Read part 1 of this blog post here, and part 2 here.
