The shout went out for a "non-geek team member" to write about their experimentation with AI from a non-technical perspective. I am a "non-geek". I only recently discovered ChatGPT isn't a Gen Z social media app and I'd had only one experience with it.
In my non-lawyer life, I am a trustee for a charitable trust, and our operations manager presented me with an RFP to review. It's standard practice for me to clutch my red pen when reading a document (or any words, frankly...), ready to "improve" anything with my own words. Impressed by our non-lawyer operations manager's drafting talent, I returned the RFP with very few comments. Then she confessed that ChatGPT had written the document. I was amazed. It was well constructed and contained everything it should have.
This excited and worried me in equal measure. How fantastic to have a means of producing standard documentation quickly and easily. Does this mean I'm out of a job?
It was time to meet the competition.
Once logged in, there is a series of disclaimers regarding the protection of sensitive information ("content may be reviewed by our AI trainers") and the accuracy of information ("the system may occasionally generate incorrect or misleading information... it is not intended to give advice").
Just to check that the AI was on message, my first search:
"Are there data privacy concerns using OpenAI?"
"As of my last update in September 2021, using OpenAI's services, including the GPT-3 model, can potentially raise data privacy concerns, as is the case with any AI platform or service."
"Additionally, consulting legal or privacy experts can help you navigate data privacy considerations specific to your use case and jurisdiction."
My second search:
"Are there plagiarism concerns using OpenAI?"
"Yes, there are plagiarism concerns when using OpenAI's AI models, just as with any other text generation tool. OpenAI's models, including GPT-3, can generate human-like text, which means that there's a possibility of generating content that closely resembles existing content, including copyrighted material."
Phew! I am comfortable that, for now, ChatGPT doesn't want my job or claim to be able to do it.
Before I even started thinking about the qualities that set humans and AI apart (such as emotional intelligence, application of judgment and ethics, linguistic interpretation, creative thinking and social relationships), I felt a bit superior. I know who my clients are, can keep their information confidential, give them advice, and even tailor my advice to my client's specific set of circumstances.
But I am concerned that people will use ChatGPT instead of seeking professional legal advice, and that AI-generated content that "closely resembles existing content" but cannot distinguish fiction from fact will slowly become a source of truth.
So, can it help me? To quote Rachael Newsome, General Counsel and Chief Risk Officer for FNZ, NZ, "It's like having a smart grad - useful but you still have to check their work".
Revolutionising Legal Research? Not today
I asked my smart grad, "What are the main regulations in NZ for a retail online equities investment platform?". This produces seven headline areas, not all referenced against legislative or regulatory origins. But it misses a few obvious ones, such as privacy and data protection, cyber security, and handling of client money. When I ask the exact same question a day later, it produces a longer list picking up a couple of these. Is it learning?
Investor protections were one of the areas identified, so I asked, "What are the investor protections?". This produces a good list of "best practice" items but, again, not linked to legislative or regulatory origins.
My smart grad has given me some useful bullets... and with some further commands it can create some training slides but I'm still missing quite a lot of necessary detail. I have a decent outline of where I need to go, but I don't have enough to give any useful legal advice, and I'm starting to overthink my search questions. My next step is not going to be taken in ChatGPT, I'm going to the source.
Elevating Legal Drafting? Needs more practice
AI can suffer from too much information! My request to simplify a lengthy and overly complex set of terms and conditions is met with "the message you submitted is too long..." I did not disagree. I give it the first six pages, which returns a decent summary I can work with.
Complex drafting is too complex. I ask for a triparty agreement meeting very specific requirements for each party. It tells me that it's too complex and that I should see a lawyer. I try to break it down, but one of the parties is an NZX Participant, and ChatGPT fails to include clauses required under the NZX Participant Rules, so I've resigned myself to drafting that one by brain.
It doesn't shortcut the need to think about the hard stuff, but by refining my requests using the iterative functionality, I was pretty happy with the output when I asked for:
- a clause requiring the use of sustainable practices and local materials in producing merchandise for a merchandising agreement
- a clause for protection of taonga (same agreement) - a sensible starting point, but it did not reference tikanga Māori
- a confidentiality agreement
- a letter to an employee detailing the outcome of an end-of-year appraisal and go-forward objectives
- a No Third Party Rights clause (both English law and NZ law)
- a text message to my husband - but it misinterprets the modern understanding of the word "text" and produces a gushing love letter sharing sentiments I'd only ever read in Hallmark cards.
Am I a convert? Not yet
I can see it becoming an invaluable tool for the smart practitioner with limited human resources but a good understanding of the legal framework in which they operate. With its ability to produce credible-sounding content, I can also see it becoming a potentially dangerous tool for someone without that good understanding.
I love the ease of interaction and the ability to build on the chain of commands, and I'm sure with more practice even the non-geeks will be able to access useful output without having to overthink the input.
As a tool for pulling together simple contracts, or for giving you a starting point for more complex drafting, it's great, and I will be putting it to use. But it omits contractual requirements based on regulatory or statutory obligations, cultural context and standard boilerplate, unless you know how to ask for them.
I found it much less pleasing to use as a research tool. I'm certain it has the information, but it felt like a lot of work to get to it: I needed the information to extract the information.
Just like a smart grad, ChatGPT can help, but the final check still rests with us non-geeks.
And now that I have my perfect love missive drafted using ChatGPT, who owns the content?
Ngā mihi nunui to Rachel Plieger for sharing her relatable ChatGPT experimentation with the in-house community.
To learn more about Generative AI, key applications and how to use it safely, register for Juno Learning Online Series: AI for in-house lawyers on 22 August 2023.