The fact remains that good food safety and quality depend on actions by competent humans.
You can't code competence.
I disagree. I think competence is exactly what can be coded. Give an AI agent the right data to learn from and specific instructions, and it can absolutely be competent. The problem is the open-source data being used to train it: a world full of lies, biases and bigotry. As a result, IT people now talk about "hallucinations", where an AI presents things like conspiracy theories as fact. That's obviously dangerous.
But that doesn't mean that, given an unpolluted environment, AI won't be useful or competent. Far from it. It's already being used in medical settings to support radiology outputs, for example. They're not using Copilot or ChatGPT trained on X and Facebook for that; they're using agents developed in clean environments.
Imagine if a company like IFSQN or BRCGS trained an AI agent on thousands of audit reports: think of the powerful findings they could extract and how those could help them and their customers. Or imagine feeding all of your own data into an AI agent (swabs, tests, internal audits, external audits, complaints and so on) and having it pick out where to focus, or where you're not paying enough attention. That's absolutely possible, and it wouldn't be a competency issue. But I agree: do that now on ChatGPT and you couldn't trust the results, nor the security of your data.
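To make the idea concrete: even before you get to a full AI agent, pooled QA data can be scanned mechanically for areas drifting out of their normal range. The sketch below is purely illustrative (the site areas, counts and threshold are all invented for the example); it flags any area whose latest monthly non-conformance count sits well above that area's own historical baseline.

```python
from statistics import mean, stdev

# Hypothetical monthly non-conformance counts per site area, pooled from
# swab results, internal audits and complaints (illustrative data only).
history = {
    "Packing line": [2, 1, 3, 2, 2, 9],
    "Goods-in": [1, 1, 0, 2, 1, 1],
    "High-care prep": [4, 3, 5, 4, 12, 14],
}

def flag_hotspots(records, threshold=2.0):
    """Flag areas whose latest count is far above their own baseline.

    Uses a simple z-score against each area's earlier months; a real
    system would use far richer models, but the principle is the same.
    """
    hotspots = []
    for area, counts in records.items():
        baseline, latest = counts[:-1], counts[-1]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            # Flat baseline: any rise at all is a deviation worth a look.
            z = float("inf") if latest > mu else 0.0
        else:
            z = (latest - mu) / sigma
        if z >= threshold:
            hotspots.append(area)
    return hotspots

print(flag_hotspots(history))
```

An AI agent trained on this kind of data would go much further, spotting cross-source patterns a simple statistic misses, but the point stands: the value comes from clean, structured inputs, not from the cleverness of the tool alone.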
I think we shouldn't conflate "all AI" with "commercially available AI trained on open data". The latter isn't great for competent outputs, I agree, but that's not the fault of AI itself. It's partly crap in = crap out, and partly how most AI tools interface with other apps, especially for generating visuals (which I'm sure will improve over time).