An AI chatbot called “FungiFriend” was added to a popular mushroom identification Facebook group Tuesday. It then told users how to “sauté in butter” a potentially dangerous mushroom, once again signaling the high level of risk that AI chatbots and tools pose to people who forage for mushrooms.
404 Media has previously reported on the prevalence and risk of AI tools intersecting with the mushroom foraging hobby. We reported on AI-generated mushroom foraging books on Amazon and the fact that Google image search has shown AI-generated images of mushrooms as top search results. On Tuesday, the FungiFriend AI chatbot was added to the Northeast Mushroom Identification & Discussion Facebook group, which has 13,500 members and is a place where beginner mushroom foragers often ask others for help identifying the mushrooms they have found in the wild. A moderator for the group said that the bot was automatically added by Meta and that “we are most certainly removing it from here.” Meta did not immediately respond to a request for comment.
The bot is personified as a bearded, psychedelic wizard. Meta recently began adding AI chatbots into specific groups, and has also created different character AIs.
Rick Claypool, research director for the consumer safety group Public Citizen’s president’s office, told 404 Media about FungiFriend. Claypool has done important work on corporate capture of local and state governments, but he is also an avid mushroom forager and has been documenting the risks of AI tools in mushroom foraging over the last few months. Over the summer, he wrote a lengthy article in Fungi Magazine that noted “emerging AI technologies are being deployed to help beginner foragers identify edible wild mushrooms. Distinguishing edible mushrooms from toxic mushrooms in the wild is a high-risk activity that requires real-world skills that current AI systems cannot reliably emulate.”
One member of the Facebook group said that they asked the AI bot “how do you cook Sarcosphaera coronaria,” a type of mushroom that was once thought edible but is now known to hyperaccumulate arsenic and has caused a documented death. FungiFriend told the member that it is “edible but rare,” and said “cooking methods mentioned by some enthusiasts include sautéing in butter, adding to soups or stews, and pickling.” The situation is reminiscent of Google's AI telling people to add glue to pizza or eat rocks on the advice of a Redditor named Fucksmith.
Claypool told 404 Media in a phone call that it is “really risky and reckless” for Meta to add AI chatbots to groups like this. He said he has tested various AI tools to identify mushrooms and found “there is just no way these things have reached a point of being good enough at providing true and factual and verifiable information, especially if we’re talking about distinguishing between toxic and edible varieties.”
Having a bot like this automatically added to a Facebook group full of humans who are dedicated in part to helping new foragers avoid accidentally poisoning themselves is particularly insidious. It also highlights the fact that Meta has very few guardrails on how it is injecting AI into its platforms.
Claypool said people use the group to post photos of mushrooms from their phones while they’re in the field. Facebook added the “FungiFriend” chat as the first option in the group’s mobile interface, meaning Facebook is pushing people to interact with AI, not other humans, on its supposed “social network.”
“These groups like this are wonderful for connecting people who are going out there in the field or have an emerging interest with connecting people who have a lot of experience who can help guide people away from dangerous stuff. If you are super new to this, there’s a good chance you are asking questions about edibility, which is not where you should start,” Claypool said. “But if you’re interacting with people in a group in public and someone gives not so great info, there’s a chance there’s multiple voices there [correcting it].”
Conversations with an AI bot, meanwhile, take place in private and may be the first place that new foragers turn if they want to avoid asking what could be seen as a newbie or embarrassing question.
“Somebody who has any amount of knowledge is not going to be relying on the AI. It’s going to be someone who feels nervous about putting themselves out there,” he said. “Maybe they’re worried to seem like they don’t know very much or they’re afraid of asking a stupid question. So you ask the AI so it won’t judge you like a normal person might. You might then not feel like it judges you, but it might kill you.”