Continuing Conversations from the Colloquium (Part 2): Legal Liability of Lawyers Using AI


AI AND LAWYERS (PART 2)

In the “Continuing Conversations from the Colloquium” series, we look at some of the key themes arising from the Law Society’s Colloquium on ‘The Role of Lawyers in the Age of Disruption: Emerging Regulatory Challenges’, held as a live webinar on 19 May 2020. The ‘Robots vs Lawyers’ debate was explored in our first article, published in July 2020. In this second article of our two-part series on “AI & Lawyers”, we examine another major theme – Legal Liability of Lawyers Using AI – drawing on the questions participants submitted during the Panel 2 discussions on “Legal Ethics and Technology”.

This brief note is written by Alvin Chen, Director of the Legal Research & Development department at the Law Society of Singapore.

For most lawyers, risk management is unlikely to be one of the more appealing conversation topics in a law practice (unless you are the risk partner). Just imagine asking a colleague, “Hey, how is the KYC form coming along today?”! But throw in artificial intelligence (AI) and legal liability, and you get a game-changing mix – suddenly, managing the legal risks of AI becomes a hot topic. This was evident from the barrage of questions received from participants during the discussions for Panel 2 on “Legal Ethics and Technology” at the Colloquium:

  • Should the client bear the risk of the use of AI tools (and any resulting negligence by the lawyer), especially if the client demands a quick and cost-effective solution?
  • Should lawyers be held liable for the negligent design (as opposed to the negligent use) of an AI tool?
  • Would it be fair for a law practice to include an assumption in its legal opinion that a particular task performed by an AI tool is error-free?
  • Would clients accept such an assumption?
  • Are lawyers obligated to give a cost-benefit analysis to clients on the pros and cons of using an AI tool?
  • How should lawyers respond if clients are hesitant to consent to the use of an AI tool in rendering legal services?
  • Should lawyers be held responsible if clients do not agree to use an AI tool or are not prepared to pay for its use?

Although the panel was unable to address all these questions due to the tight schedule of the webinar, the panellists’ presentations and discussions touched on possible answers to some aspects of these questions, for example, on how the law of negligence could be applied to address AI liability issues involving lawyers. But it is clear that these complex questions merit deeper research and reflection. Some preliminary thoughts (and further questions) are set out below.

1. Unreasonableness or unfairness of AI risk allocation: From a contractual perspective, the allocation of liability and responsibility for AI risks between lawyers and clients will depend primarily on the terms of engagement and the dynamics of the lawyer-client relationship. One issue that may arise is whether such contractual allocation is fair and reasonable. In this regard, would the Singapore courts adopt the same approach taken in construing lawyers’ fee agreements, namely, that clients require more protection because the nature of the lawyer-client relationship places lawyers in a superior position to their clients? Moreover, much uncertainty surrounds AI risks, which are still emerging and may not be completely known at the time the AI tool is used. Under what circumstances would the lawyer’s contractual allocation of AI liability risks (and thus, costs) to the client be considered unreasonable or unfair by the courts?

2. Adequacy of explanation of AI risks: Some of the participants’ questions appear to assume that lawyers are well-equipped to explain AI risks, but is it possible to explain AI risks without a working knowledge of AI? It is unclear whether lawyers need a fair degree of working knowledge of machine learning and deep learning, and possibly even of the specific AI algorithms underlying the AI tool. To illustrate, can a lawyer adequately explain the limitations of an AI-produced draft contract to the client without a basic appreciation of the AI techniques involved (e.g. natural language processing)?

3. Explaining AI risks to different types of clients: A related point is whether a more comprehensive explanation of AI risks would be required for clients who are not familiar with AI. In the context of giving legal advice, the Singapore courts have observed that lawyers are held to a higher standard when explaining legal documentation to laypersons, as compared to sophisticated businessmen. Would the same principle apply to lawyers explaining AI risks to non-AI-savvy clients when AI is used for their legal matters? If so, would lawyers conversely be held to a lower standard of care vis-à-vis AI-savvy clients?

In the wider context, a recent Law.com article (“The Liabilities of Artificial Intelligence Are Increasing”) suggests that AI liability issues are beginning to work their way through the US justice system. It will, however, take time before insights emerge on how these issues should be analysed. Meanwhile, managing AI risks will become increasingly important as law practices harness AI for the benefit of their clients. In this regard, the Law.com article offers a few general pointers for managing AI risks. It is timely for lawyers to begin exploring AI risk management to meet the novel challenges of the algorithmic age.

You are welcome to contribute further thoughts on these issues by writing to the Legal Research and Development department at lrd@lawsoc.org.sg.