By Sinead Machin, Senior Associate at Complete Clarity Solicitors and Simplicity Legal
EVERY advance in the dissemination of human knowledge – from the printing press to newspapers, television and the internet – has initially been seen as much a threat as an opportunity. But few new systems have been greeted with such suspicion as AI.
Largely because of fears of machine superiority and loss of human jobs and functions to Artificial Intelligence, debate about its impact on current and future society has verged on the dramatic and, in some cases, the hysterical.
But one thing is beyond dispute – AI is here, and it is here to stay. The only rational response is to learn to live with it, understand its capabilities and limitations, and think very clearly about checks and balances to ensure a net benefit rather than irreversible harm.
The impressive power of the technology, and particularly tools such as ChatGPT, has been exercising the minds of the legal profession around the world as it gets to grips with the practical, economic and ethical implications of AI.
There is no doubt that AI will become, if it has not already, an indispensable tool for coping with the immense volumes of data which lawyers have to handle in complex cases, and with some of the mundane processes which underpin the legal infrastructure.
Certainly, in high-volume practices, machine learning and data analytics can be hugely beneficial in identifying and increasing leads and prospects, and SEO teams are seeing significant opportunities for business growth.
AI comes into its own in the field of case management, with its limitless capacity for examining massive volumes of data, finding patterns, and making predictions or choices using algorithms and statistical models.
This is creating much quicker and more streamlined case management, which clients are already coming to expect. In fact, it may soon become a recognised basis for complaint if the speed and efficiencies which are now possible are not achieved.
More troubling is the discussion around whether AI could carry out some of the tasks traditionally performed by lawyers, such as researching, preparing and presenting cases.
The pitfalls of this line of thinking were amply illustrated recently by the story of New York attorney Steven Schwartz, who used ChatGPT to write a legal brief. The chatbot not only completely fabricated the case law which he cited in court but reassured him repeatedly that the information was accurate. The judge in the case was singularly unimpressed.
Lawyers must also be aware of the risks that AI bots pose to client confidentiality. Client-specific information fed into a bot such as ChatGPT may be retained by OpenAI, the bot's developer, and could potentially be disclosed elsewhere.
Scots law, of course, has its own unique characteristics, of which AI bots – at this stage – would likely be unaware, leading them to rely on English and Welsh cases and precedents which would have limited relevance.
However, the technology is learning fast. GPT-3.5, the model behind the original ChatGPT, scored in the lowest 10% on the US Bar exam, but its successor, GPT-4, scored in the top 10%. It is conceivable that law-specific bots will be developed to concentrate solely on particular areas of expertise.
Sir Geoffrey Vos, Master of the Rolls and Head of Civil Justice in England and Wales, said recently (June 2023) that public trust may limit the use of AI in legal decisions, pointing to the emotional and human elements involved in areas such as family and criminal law.
He warned that while AI has the potential to be a valuable tool for predicting case outcomes and making informed decisions, it was not infallible and should be used in conjunction with human judgement and expertise.
He pointed out that ChatGPT itself said: “Ultimately, legal decision-making involves a range of factors beyond just predicting the outcome of a case, including strategic and ethical considerations and client goals.”