In an age of digital transformation, the legal industry is increasingly exploring the use of AI and Large Language Models (LLMs) like GPT for document review, legal research, and even drafting legal briefs. Yet, in our discussions, legal professionals regularly express concern about LLM security. Are we risking a waiver of attorney-client or work-product privilege by sending our data to OpenAI? What if that data includes confidential client information?
If these questions resonate with you, you cannot afford to miss our upcoming webinar: "Are Large Language Models Like GPT Secure? A Look at the Technology and the Law."
We’ll delve into the key issues that every legal professional should consider.
Our experts will unpack these questions and help you better understand how these new LLMs work, how commercial providers offer a “reasonable expectation of privacy” for your communications, and what you should expect from your LLM vendor to protect against waiver of privilege.
Join us as we tackle the elephant in the room: Are LLMs like GPT secure, and are we risking confidentiality and privilege when we send client data to these AI platforms for analysis?