
Will AI improve or damage food safety in the next 5 years?

Started by , Sep 02 2025 06:31 PM
19 Replies

New month, new poll.  
 

Everyone’s talking about AI right now, but what about its impact on food safety?
 
Some see it as a game-changer for traceability, audits, and training. Others think it could create new risks and over-reliance.
 
What do you think?

I'm going to say it can offer somewhat of an improvement, all depending on how it gets utilized.  I'm loosely experimenting with it in my audit reviews for supplier approvals, and just using Co-Pilot through our existing company licenses shows it can identify the key items we review farm audits for (verified by a human afterwards, of course).  You can give it a set of prompts (check for x, check for y, check for z), and it can spit out either quick bullet points or a detailed Word doc report with full supporting details.  For example, I asked it to review a packinghouse audit for whether wash steps are applied to blueberries; it confirmed yes in a bullet point, then offered a detailed explanation of the steps, including the ppm of the sanitizers and how often they're monitored.
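For anyone curious what that kind of checklist-prompt workflow looks like in practice, here's a minimal sketch. Everything in it is hypothetical (the checklist items, the report text, the function name), and the actual call to whatever LLM you license is deliberately left out; the point is only that a fixed checklist can be assembled into one structured prompt, with a human still verifying every answer.

```python
# Sketch: turn a review checklist into a single structured prompt for an
# LLM-assisted audit review. All checklist items and field names here are
# invented for illustration; the call to your licensed LLM is omitted.

CHECKLIST = [
    "Are wash steps applied to the product (e.g. blueberries)?",
    "What sanitizer is used, at what ppm, and how often is it monitored?",
    "Is there a documented supplier approval for each packinghouse?",
]

def build_review_prompt(audit_text: str, checklist: list[str]) -> str:
    """Assemble one prompt asking for a bullet-point answer per checklist item."""
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(checklist, 1))
    return (
        "Review the audit report below and answer each question as a short "
        "bullet point, quoting the supporting passage:\n\n"
        f"{numbered}\n\n--- AUDIT REPORT ---\n{audit_text}"
    )

prompt = build_review_prompt("(audit report text would go here)", CHECKLIST)
print(prompt)
```

The same prompt scaffold can then be reused across every supplier audit, which also makes the human verification step more consistent.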

 

We're toying with it to see if it can review an entire audit and make some cause/effect correlations.  So far it's not great, but we're also experimenting to see what trends it can find in our raw EMP data or our pest control findings.  The PC findings were interesting, because you can ask it to review the trends by date and have it search for weather patterns in the area.  Right now, using it as a tool to support the human is showing some promise, but I'm not eager to have it take over any tech-level roles anytime soon.
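A crude version of that trend check doesn't even need AI, and doubles as a sanity check on whatever the AI claims. A minimal sketch with invented findings: group pest-control records by month and flag months well above the average count.

```python
from collections import Counter
from statistics import mean, pstdev

# Hypothetical pest-control findings as (date, finding) pairs. All invented.
findings = [
    ("2025-03-04", "rodent activity"), ("2025-03-18", "rodent activity"),
    ("2025-04-02", "flying insect"),   ("2025-07-09", "rodent activity"),
    ("2025-07-11", "rodent activity"), ("2025-07-25", "flying insect"),
    ("2025-07-30", "rodent activity"),
]

def flag_trend_months(findings, z=1.0):
    """Flag months whose finding count exceeds mean + z * population std dev."""
    by_month = Counter(date[:7] for date, _ in findings)
    counts = list(by_month.values())
    cutoff = mean(counts) + z * pstdev(counts)
    return sorted(m for m, c in by_month.items() if c > cutoff)

print(flag_trend_months(findings))  # flags 2025-07 for this toy data
```

Flagged months could then be cross-checked against weather records by a person, rather than trusting an AI's claimed correlation outright.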

 

Over-reliance would absolutely be a problem in the future.  We already struggle to find individuals willing to learn about this field, and if they start using plug-and-play AI tools to do the job a human should know how to do, then we lose the ability to train humans who can see where the AI makes errors.  It has the potential to look like the problem in colleges right now, where students write essays with AI, educators grade the essays with AI, and no one is actually learning or teaching anymore...

That is scary - writing and grading being done by AI. 

I had a customer use AI to interpret different standards requirements the other day and it was just wrong to the point of actually having the opposite meaning.  

 

I've also used it for an academic summary with references.  When I checked the references they were either BS or not stating what was claimed.

 

So, as ever, beware.  Also be aware that some AI tools capture your data: if you upload a report, they then have that report to use as source data for anyone else.  Same with audit results, etc.  Be logged in and understand the data-security terms of whatever tool you're using.
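One practical mitigation if you must paste anything into a third-party tool: strip identifying details first. A rough sketch, assuming a couple of simple patterns (the supplier name and email are invented; real redaction needs far more care than two substitutions):

```python
import re

def redact(text: str, supplier_names: list[str]) -> str:
    """Mask email addresses and known supplier names before sharing text externally."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w-]{2,}", "[EMAIL]", text)
    for name in supplier_names:
        text = re.sub(re.escape(name), "[SUPPLIER]", text, flags=re.IGNORECASE)
    return text

# Illustrative only: both the supplier and the contact are made up.
sample = "Audit of Acme Berries Ltd, contact jane.doe@example.com for findings."
print(redact(sample, ["Acme Berries Ltd"]))
```

Even with redaction, the safest assumption is that anything uploaded may be retained, so the least sensitive version of a document is the only one worth sharing.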

 

But that all said, it has potential.  I think there are some "big data" options with it.  

 


 

 

When it comes to food safety, AI might really help with things like tracking where food’s been, catching problems quicker, and even helping train folks on proper handling. Big companies with deep pockets and tech teams might see big gains. But for smaller processors, like family-run operations or mom-and-pop shops, it’s a different story. The cost of the equipment, not having the tech know-how, and just trying to get good data in the first place... that’s a tall order. There's also the risk of trusting these systems too much without understanding how they’re making decisions. You don’t want to put all your eggs in one basket, especially when lives are on the line. That said, if we can make these tools simpler, more affordable, and something folks can actually understand and use, AI could help raise the bar for food safety across the board.

You know what AI reminds me of?  Those big ERP projects.  Where you have a team who really understand food safety and are listening to and educating the wider team, you get the right results, and the team end up with a strong understanding of what's going on in the system infrastructure.  With a weaker execution of that process, the system architecture is designed without thought for food safety, leading to issues later with the ability to customise and change, and misunderstandings occur because the teams on site don't understand how the data is being treated by the system.  A typical example is how trace and hold work: when you probe, or when incidents occur, it becomes clear that the team didn't really understand what was happening.

 

The same happens with AI.  When you understand the fallibility, especially for academic sources, you are less likely to accept the outputs of it without question.  It has its strengths but there are also weaknesses.

 

If you use an AI tool to summarise a document you've provided, or to look for trends in data you've provided, it will probably do a good job.  If you ask it to generate something from sources you've not provided, it won't do so well, and you cannot rely on the interpretation without checking.  But even on your own data there is a level of QC which needs to be deployed, and my worry is that it looks like a task without value: "optional" or "extra" work.  You only have to look at some of the data or quotes referencing academic papers presented by RFK Jnr to see that it's not.  People will come a cropper on this, in a big and embarrassing way.  If you stop sense-checking AI, it will bite you in the butt.
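That QC step can be made routine rather than "optional". A trivial sketch (all reference strings invented): keep a list of sources you have personally verified, and route any AI-cited reference that isn't on it to a human before it goes anywhere near a report.

```python
# Minimal QC gate for AI-cited references. Everything here is hypothetical;
# the point is that unverified citations get queued for a human check
# instead of being passed along as fact.

verified_sources = {
    "Codex Alimentarius CXC 1-1969",
    "21 CFR Part 117",
}

def qc_references(ai_cited: list[str]) -> dict:
    """Split AI-cited references into verified vs needs-human-check."""
    cited = set(ai_cited)
    return {
        "verified": sorted(cited & verified_sources),
        "needs_check": sorted(cited - verified_sources),
    }

result = qc_references(["21 CFR Part 117", "Journal of Imaginary Food Sci 12(3)"])
print(result)
```

It's deliberately dumb: the value is the workflow (nothing unverified passes silently), not the lookup.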

With the caveat of "in the next 5 years", I definitely think it will make things worse.  Most applications of AI, like a lot of other recent tech, are massively overhyped.  The theory or hypothetical application might be great, but the reality of what they are capable of right now is fairly unimpressive.

 

Like others have mentioned, AI can make some connections, but it frequently fails to understand context or the subtleties that are obvious to a person, and will arrive at the opposite conclusion or just make **** up.  There have been several legal cases that made headlines for this kind of fabrication and hallucination.

BOTH. 

There will be some good and some bad. Being the shining optimist that I am (  :lol: ), I think the bad (misinformation) will outweigh the good.

My gut feeling is that it will be a struggle for quite some time. I remember back to working in a dealership parts department and they had recently got computers to look up parts. The old timers were against it; they thought it was 'crap' and a 'phase'. They still liked to use the microfiche, cards, etc. I thought they were nuts. While they were FAST on the microfiche, when I was up to speed I could blow them away (computer looked up parts, selected, showed inventory, right into invoicing.)

 

Now I'm the old timer thinking AI is crap and a phase, so I guess it all comes full circle.

I agree short term AI is a mess.   Long term, it'll probably improve to the point it is good for many things.   As far as ERP checkups and all that tho, AI only knows what you tell it to know, and ERP numbers are only as good as the employees scanning/inputting data.   So there will still be bs issues caused by human error, etc.   

 

I plan to be sitting on a beach with a cocktail in my hand and throwing my dog a frisbee before it matters much... 

I just worry about people trying to use it as a catch all - especially Large Language Models.

I've already had to discourage my boss from using the Google AI summary when looking up anything regulation-related, because it'll pull information from wherever it wants and won't actually provide the information you need. It can reproduce; it can't interpret well, and I still think there's too much nuance and human impact in this field for it to be used very effectively.  

That also doesn't even begin to talk about the environmental impacts it has etc. etc. 


 

Yeah, ERP has a similar BS-in, BS-out problem, except with AI people sometimes don't even realise that BS is going in.

One other strong shortcoming I'm seeing in AI at the moment is the inability, or perhaps unwillingness, to weight reliable sources more favourably than niche studies or unreliable ones.  The correction for bias is absolutely appalling; in fact, it's almost an anti-bias approach.

 

What it will do is lead to even more online BS which is claimed to be factual because of some niche publication which "proves" it when 99% of, say, the Journal of the American Chemical Society disproves the same thing.

 

Whether that impacts food safety more than any other topic?  I suppose we're going to have some people who are equally likely to fall down those rabbit holes, especially if they have no "true" research experience (and no I don't mean "do your research" as people write on social media meaning "google it".)


GMO,

 

It does not help that there are "paper mills" out there shooting out research papers for the sake of papers (with most of them generated from AI or just fake). 

 

https://www.proquest...accountid=14745


 

That took me to a GoDaddy website, so I'm automatically filtering on accuracy (sorry!).  But I do know it's true that some fields have a reproducibility crisis, which strongly suggests falsified or at least "heavily selected" results.  It has always, to a degree, been thus.  That's where the credibility of the researcher, the institution, the journal, all of it, should be scrutinised, rather than just accepting Dr BullSheet from the University of Bums on Seats...  But also, dare I point it out, culture in institutions must be robust but also supportive.  I've been a researcher.  Getting negative results when your grants or PhD may rely on something positive?  That's tough.

I'm cautiously optimistic. I'm with a large manufacturing company that has started its first foray into food, with another couple of prospects. I oversee FSQ and also site wide calibration.

 

I've used Gemini to summarize similarities and differences between FDA, SQF, ISO 9001, ISO 13485 requirements. It's helped me in writing a unified calibration policy. I've also used it to bounce some of my draft SOP's against.

 

I use it as a sounding board. I haven't taken any advice as gospel, and always gone back and checked its sources.


 

I agree. I think where it's useful is rewriting something where you know what you want to say but can't quite get the words, and it's also great for summarising things. I've used it to develop an audit checklist from a standard before, and it did a cracking job. I've also used it to summarise documents and found mistakes. You absolutely cannot treat it as gospel, especially the tools based on open data, as they will treat lies on websites as fact.

I've seen multiple instances of AI spitting out information that was completely wrong and then repeating that wrong information even after I pointed out it was wrong and requested a correction. 

 

I've uploaded a file to an AI LLM and had it confidently return incorrect information after I asked for a summary.

 

The fact remains that good food safety and quality depend on actions by competent humans.

You can't code competence.

 

You might, in the long term, be able to automate certain basic checkbox ticking, but anything that truly matters should be kept far away from AI.

 


 

I disagree. I think competence is exactly what can be coded. If you give an AI agent the right data to learn from and specific instructions, it can absolutely be competent. The problem is the open data most tools are trained on, which comes from a web full of lies, biases and bigotry. That's one reason you get what the IT world calls "hallucinations", where AI confidently presents false or fabricated information as fact. That's obviously dangerous.

 

But that doesn't mean that given an unpolluted environment, that AI will not be useful, nor competent. Far from it. It's already being used in medical settings to support radiology outputs for example. They're not using CoPilot or Chat GPT trained on X and Facebook for that, they're using agents developed in clean environments.

 

Imagine, for example, if a company like IFSQN or BRCGS trained an AI agent on loads of audit reports: the kind of powerful findings they could get out, and how much that could help them and their customers. Or imagine if you could put all of your data into an AI agent (swabs, tests, internal audits, external audits, complaints, etc.) and it could pick out where to focus, or where you're not putting enough attention. That's absolutely possible and wouldn't be a competency issue, but I agree, do that now on ChatGPT and you wouldn't be able to trust the results or the security of your data.
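You don't need an AI agent to prototype that "where should we focus" idea. A toy sketch, with every record and weight invented, that pools findings from several data streams and ranks areas by a weighted issue score:

```python
from collections import defaultdict

# Hypothetical records as (data_stream, area, issue_count). All values invented.
records = [
    ("swabs", "packing line 2", 5),
    ("internal_audit", "packing line 2", 2),
    ("complaints", "warehouse", 1),
    ("external_audit", "packing line 1", 1),
    ("swabs", "warehouse", 3),
]

# How seriously to take a finding from each stream: a judgment call, not data.
WEIGHTS = {"swabs": 1.0, "internal_audit": 2.0, "external_audit": 3.0, "complaints": 2.5}

def rank_focus_areas(records):
    """Return (area, weighted score) pairs sorted highest-risk first."""
    scores = defaultdict(float)
    for stream, area, count in records:
        scores[area] += WEIGHTS[stream] * count
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(rank_focus_areas(records))
```

An actual AI agent would presumably learn those weights and spot subtler patterns, but even this crude scoring keeps the data, and the reasoning, entirely in-house.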

I think we shouldn't conflate "all AI" with "commercially available AI based on open data", as the latter isn't great for competent outputs, I agree. But that's not the fault of AI; it's a combination of crap in = crap out, plus how most AI tools interface with other apps, especially for generating visuals (the latter, I'm sure, will improve over time).

I disagree. 

Well, at least we agree that commercial AI trained on freely available data is not going to cut it. 

Yes, AI may well be able to tell you what to focus on or be able to evaluate whether a COA is valid, but I believe that it will still hurt food safety in the long term.

We're already at a point where some people treat audits like checkboxes and don't remember how to actually set up a food safety system for their product(s). The more you allow AI to take over, the less people keep practicing their skills and the less you practice your skills, the easier you forget them.

 

That is the harm I'm worried about.

