
Microsoft built an AI agent for lawyers in Word. Let's hope it doesn't go berserk.

Microsoft Word is getting an AI legal agent, which sounds useful until you remember how badly this has gone before. The new Legal Agent can review contracts, suggest edits, compare versions, and flag risky clauses inside Word. On paper, these features sound helpful and convenient. However, generative AI tools have hallucinated entire cases, citations, and quotes out of thin air before, dragging real people into real court trouble.

What can Microsoft's Legal Agent do?

Microsoft says Legal Agent is available through Copilot in Word for users in its Frontier program in the U.S. It currently works in Word for Windows desktop. There is no separate app or installation required, though some users may need to restart Word before the agent appears.

Legal Agent is meant for contract and document review. Microsoft says it can check a contract clause by clause against a legal playbook, review a full agreement, compare different versions, flag risks and obligations, and suggest edits with tracked changes. It also keeps the original formatting, tables, lists, and negotiation history intact.

The company is also trying to avoid the obvious nightmare scenario for its users and itself. The feature has built-in safeguards, such as citations linked to the source language so reviewers can check suggestions before using them, along with clear disclaimers that it does not provide legal advice, may produce inaccurate content, and still requires review by a qualified legal professional before anything is relied on.

Why should lawyers still be nervous?

There is already precedent for AI going rogue in legal settings: two New York lawyers were sanctioned in 2023 and ordered to pay a $5,000 fine after submitting a court filing that included fake cases generated by ChatGPT. Michael Cohen, Donald Trump's former lawyer, also admitted that he unknowingly gave his attorney fake case citations generated by Google Bard. While Cohen was not sanctioned, the judge still called the episode embarrassing and stressed the need for skepticism when using AI in legal work.

These are not isolated cases. Judges have questioned or disciplined attorneys in multiple instances involving AI-assisted filings, and one French data scientist and lawyer identified hundreds of court documents containing fake citations and nonexistent references over the past year.

The bigger problem is that hallucinations remain unresolved. AI chatbots can still produce answers that sound confident while being partly or completely wrong. In legal work, that is especially dangerous, because a made-up citation or invented case can end up in a filing and create serious consequences.

Microsoft has put many safeguards on Legal Agent to prevent these issues, but the lesson is already written in court records. AI can speed up legal work, yet the responsibility for fact-checking still falls on the lawyer.
