Facebook Pushes Its Llama 4 AI Model to the Right, Wants to Present “Both Sides”

Meta is worried that Llama 4's training data leans left, and wants the model to be more like Elon Musk's Grok.

Bias in artificial intelligence systems, meaning the fact that large language models, facial recognition, and AI image generators can only remix and regurgitate the information in the data those technologies are trained on, is well established, and researchers and academics have been warning about it since the technologies' inception.

In a blog post about the release of Llama 4, Meta's open-weights AI model, the company clearly states that bias is a problem it's trying to address. But unlike the mountains of research establishing that AI systems are more likely to discriminate against minorities based on race, gender, and nationality, Meta is specifically concerned with Llama 4 having a left-leaning political bias.

“It’s well-known that all leading LLMs have had issues with bias—specifically, they historically have leaned left when it comes to debated political and social topics,” Meta said in its blog. “This is due to the types of training data available on the internet.”
