I'd say it can offer something of an improvement, depending entirely on how it's used. I'm loosely experimenting with it in my audit reviews for supplier approvals, and just using Copilot through our existing company licenses is showing it can identify the key items we review farm audits for (verified by a human afterward, of course). You can give it a set of prompts (check for x, check for y, check for z), and it can spit out either quick bullet points or a full Word doc report with supporting details. For example, I asked it to review a packinghouse audit for whether wash steps are applied to blueberries; it confirmed yes in a bullet point, then offered a detailed explanation of the steps, including the ppm of the sanitizers and how often they're monitored.
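If anyone's curious what that prompt set looks like, here's a rough Python sketch of the structure. In practice I just paste this sort of thing into the Copilot chat window, no code involved, and every check and name below is made up for illustration, but the shape is the same:

```python
# Hypothetical sketch of a "check for x, check for y, check for z" prompt set.
# These checks are illustrative examples, not our actual audit criteria.

CHECKS = [
    "Confirm whether wash steps are applied to incoming blueberries.",
    "Report the sanitizer type and ppm concentration at each wash step.",
    "State how often sanitizer levels are monitored and by whom.",
]

def build_review_prompt(audit_text: str, detailed: bool = False) -> str:
    """Assemble one review prompt from the checklist.

    detailed=False asks for quick yes/no bullets; True asks for a full
    write-up with supporting evidence, suitable for a Word doc report.
    """
    style = (
        "For each item, write a detailed paragraph citing the supporting "
        "evidence from the audit."
        if detailed
        else "Answer each item as a short yes/no bullet point."
    )
    checklist = "\n".join(f"{i}. {c}" for i, c in enumerate(CHECKS, 1))
    return (
        "Review the packinghouse audit below against these checks:\n"
        f"{checklist}\n{style}\n\n--- AUDIT ---\n{audit_text}"
    )

if __name__ == "__main__":
    print(build_review_prompt("(paste audit text here)", detailed=True))
```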
We're toying with it to see if it can review an entire audit and make some cause/effect correlations. So far it's not great, but we're also experimenting to see what trends it can find in our raw EMP data or our pest control findings. The pest control findings were interesting, because you can ask it to review trends by date and have it search for weather patterns in the area. Right now, using it as a tool to support the human is showing some promise, but I'm not eager to have it take over any tech-level roles anytime soon.
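For the weather angle, here's roughly what the underlying analysis amounts to if you did it by hand instead of asking Copilot. A minimal pandas sketch, with all dates, counts, and column names invented for the example:

```python
import pandas as pd

# Hypothetical sketch: correlate weekly pest control findings with local
# rainfall. All data below is made up for illustration.

# Log of dates when pest activity was noted at the site
findings = pd.DataFrame({
    "date": pd.to_datetime([
        "2024-03-04", "2024-03-06", "2024-03-18",
        "2024-04-02", "2024-04-03", "2024-04-05", "2024-04-22",
    ]),
})

# Weekly rainfall totals (inches) for the same area and period,
# labeled by week-ending Sunday to match the resample below
weather = pd.DataFrame({
    "week": pd.date_range("2024-03-10", periods=8, freq="W"),
    "rain_in": [1.1, 0.2, 0.8, 0.1, 2.3, 0.4, 0.2, 0.9],
})

# Count findings per week (week-ending Sunday buckets)
weekly_findings = (
    findings.set_index("date")
    .resample("W")
    .size()
    .rename("findings")
)

# Align the two series on the weekly index; weeks with no findings get 0
merged = weather.set_index("week").join(weekly_findings).fillna(0)

# Simple Pearson correlation between rainfall and pest activity
print(merged)
print("correlation:", merged["rain_in"].corr(merged["findings"]))
```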
Over-reliance would absolutely be a problem going forward. We already struggle to find people willing to learn this field, and if they start using plug-and-play AI tools to do a job a human should know how to do, we lose the ability to train humans who can see where the AI makes errors. It has the potential to mirror the problem in colleges where students write essays with AI, educators grade those essays with AI, and no one is actually learning or teaching anymore...