Hello and welcome to Eye on AI. In today’s edition…the U.S. Senate rejects a moratorium on state-level AI laws…Meta unveils its new AI organization…Microsoft says AI can out-diagnose doctors…and Anthropic shows why you shouldn’t let an AI agent run your business just yet.
AI is rapidly changing work for many of those in professional services: lawyers, accountants, auditors, compliance officers, consultants, and tax advisors. In many ways, the experience of these professionals, and of the businesses they work for, is a harbinger of what’s likely to happen to other kinds of knowledge workers in the near future.
Here are a few tidbits worth highlighting from a recent conference on AI and professional services:
Mari Sako, the Oxford professor of management studies who helped organize the conference, talked about three gaps that professionals need to watch out for when managing AI implementation. The first is the responsibility gap between model developers, application builders, and end users of AI models: who bears responsibility for a model’s accuracy and possible harms?
The second is the principles-to-practice gap. Businesses enact high-minded “Responsible AI” principles, but the teams building or deploying AI products struggle to operationalize them. One reason this happens is the first gap: teams building AI applications may not have visibility into the data used to train a model they are deploying, or detailed information about how it may perform. That can make it hard to apply principles about transparency and mitigating bias, among other things.
Finally, she said, there is a goals gap. Is everyone in the business aligned about why AI is being used in the first place? Is it for human augmentation or automation? Is it operational efficiency or revenue growth? Is the goal to be more accurate than a human, or simply to come close to human performance at a lower cost? What role should environmental sustainability play in these decisions? All good questions.
Ian Freeman, a partner at KPMG UK, talked about his firm’s increasing use of AI tools to help auditors. In the past, auditors had to rely on sampling transactions, trying to apply more scrutiny to those that presented a bigger business risk. Now, with AI, it is possible to run a screen on every single transaction. The riskiest transactions should still get the most scrutiny, and AI can help identify those. Freeman said AI could also help more junior auditors understand the rationale for probing certain transactions, and that AI models could help with a lot of routine financial analysis.
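To make the shift from sampling to full-population screening concrete, here is a minimal sketch of what a transaction risk screen might look like. The toy ledger, the chosen features (amount, timing, manual-entry flag), and the use of scikit-learn’s IsolationForest are all illustrative assumptions on my part, not a description of KPMG’s actual tooling.

```python
# Illustrative sketch: score every transaction for risk instead of sampling.
# The features and detector are hypothetical, not any firm's real method.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy ledger of 10,000 transactions with three illustrative features:
# amount, days before period close, and whether it was a manual journal entry.
n = 10_000
transactions = np.column_stack([
    rng.lognormal(mean=7.0, sigma=1.5, size=n),  # transaction amount
    rng.integers(0, 90, size=n),                 # days before period close
    rng.integers(0, 2, size=n),                  # 1 = manual journal entry
])

# Fit an unsupervised outlier detector on the full population, not a sample.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(transactions)

# Lower scores are more anomalous; flag the bottom 1% for human review.
scores = model.score_samples(transactions)
flagged = np.argsort(scores)[: n // 100]
print(f"Flagged {len(flagged)} of {n:,} transactions for auditor review")
```

In practice the features, the detector, and the flagging threshold would all be audit-specific; the point is simply that scoring the entire population, rather than a sample, is computationally cheap, leaving the judgment calls to the auditors.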
But he said KPMG has a policy of not deploying AI in situations that call for human judgment. Auditing is full of such cases: deciding on materiality thresholds, making a call about whether a client has submitted enough evidence to justify a particular accounting treatment, or setting appropriate warranty reserves for a new product. That sounds good, but I also wonder about the ability of AI models to act as tutors or digital mentors to junior auditors, helping them develop their professional judgment. That seems like it might be a good use case for AI too.
A senior partner at a large law firm (parts of the conference were conducted under the Chatham House Rule, so I can’t name them) noted that many corporate legal departments are embracing AI faster than law firms, something the Thomson Reuters survey also showed, and that this disparity is putting pressure on the firms. Corporate counsel are demanding that outside lawyers be more transparent about their AI usage and, critically, are pushing back on legal bills on the theory that many legal tasks can now be done in far fewer billable hours.
AI may also change how professional services firms think about career paths within their business, and even who leads these firms, several lawyers at the conference said. AI expertise is increasingly important to how these firms operate. Yet it is difficult to attract the technical talent these businesses need if “non-qualified” experts know they will always be treated as second-class compared with the client-facing lawyers and will remain ineligible for promotion to the highest ranks of the firm’s management. (The term “non-qualified” simply denotes an employee who has not been admitted to the bar, but its pejorative connotations are hard to escape.)
Michael Buenger, executive vice president and chief operating officer at the National Center for State Courts in the U.S., said that if large law firms have trouble attracting and retaining AI expertise, the situation is far worse for governments. He also pointed out that judges and juries are increasingly being asked to rule on evidence, particularly video but also other kinds of documentary evidence, that may have been manipulated with AI, without access to independent expertise to help them determine what has been altered and how. If not addressed, he said, this could seriously undermine faith in the courts to deliver justice.
There were lots more insights from the conference, but that’s all we have space for today. Here’s more AI news.
Note: The essay above was written and edited by Fortune staff. The news items below were selected by the newsletter author, created using AI, and then edited and fact-checked.