Bias in artificial intelligence systems is well established: because large language models, facial recognition tools, and AI image generators can only remix and regurgitate the information in the data they are trained on, they reproduce whatever biases that data contains, a problem researchers and academics have warned about since these technologies' inception.
In a blog post announcing the release of Llama 4, Meta's open-weights AI model, the company states plainly that bias is a problem it's trying to address. But unlike the mountain of research showing that AI systems are more likely to discriminate against minorities based on race, gender, and nationality, Meta is specifically concerned with Llama 4 having a left-leaning political bias.
“It’s well-known that all leading LLMs have had issues with bias—specifically, they historically have leaned left when it comes to debated political and social topics,” Meta said in the blog post. “This is due to the types of training data available on the internet.”