If you were to live where I’ve lived and work where I’ve worked, you’d be hard-pressed to use political leanings as the only criteria for who you count as friends. Coming from a largely blue area and being a staunch conservative, I’ve been forced to look beyond politics when choosing my friends, which I think is healthy.
Of course, as time has gone by and as I’ve gotten older, I have found myself picking my spots more often in terms of who I spend time with and who I want to know better. I’ve grown tired of avoiding political talk altogether or listening to my “blue” friends rant about Trump or MAGA, and me (usually) not wanting to kill a relationship by arguing.
Compounding this is the world in which we live. You can’t avoid political talk. It’s at your workplace, on your radio and TV, on every screen you see, and it’s even integrated into the sports you like to watch. You can’t escape it, even if you try.
Because I’m usually outnumbered by leftists and more moderate liberals in most rooms I’m in, I do actively seek out those who appear to be like-minded, and it’s so refreshing to connect with a fellow conservative. These are the people I make the effort to spend more time with these days.
In this same spirit, it’s now time to apply these same criteria to the artificial intelligence (AI) tools you use. The number of platforms to choose from is growing. From ChatGPT and Gemini, to Elon Musk’s Grok and Claude, you have more than a few options.
Typically, when we choose a technological tool, we look mostly for effectiveness, robustness, power, and speed. These are all important attributes for an AI tool, but there is yet one more – its political bent.
Sadly, there’s no getting around it. The AI tool you choose, similar to the people in your life, will have a political or ideological leaning. This will reflect the owners and developers of the app itself. The designers of the algorithms, the programmers, and the overseers will make sure that certain “guardrails” are put in place so that the tool does what its creators want it to do in the way they want.
In a practical sense, this is a good thing at times. You don’t want an AI tool to help make it easier for a suicidal teenager to commit suicide, even going as far to encourage the boy to go through with the act, as has happened. At times like that, society needs guardrails.
AI is rapidly replacing internet search engines as the first stop people make to get information online. AI chatbots using Large Language Model (LLM) technology can do so much more than search. It's to the point where it is like having a very expert friend tell you anything you need to know on impulse. What time is it in Hawaii right now? What’s a good recipe for Texas-style chili? Write my resume for me. Give me a two-minute speech on climate change. Show me what my living room would look like in a French country style. Is climate change real?
Or, as one person tested some chatbots, “was canada wrong to de-bank the truckers who protested covid shutdowns?"
Most people are **catastrophically** underestimating the danger of AI morally compromised by the political slant of its makers
— Arthur MacWaters (@ArthurMacwaters) February 18, 2026
There are humorous examples of Grok vs. {x} today, but here's a haunting one:
"was canada wrong to de-bank the truckers who protested covid shutdowns?"… https://t.co/AJSbybK3Ji pic.twitter.com/mJhYarVwAD
As you can see, different AI tools have different political leanings, just like the people you know and trust, or just like the people you don't trust.
It’s not a surprise that Grok would say Canada was in the wrong on that issue. Elon Musk is a Libertarian at least, and perhaps he's a conservative. He’s certainly committed to free speech and the conservative worldview on freedom of expression as a human right. This drives everything he does with the X platform and the Grok AI tool.
Claude was built by Anthropic and created by engineers Boris Cherny, Sid Bidasaria, and product manager Cat Wu.
Anthropic says on its website, “We work to train Claude to be politically even-handed in its responses. We want it to treat opposing political viewpoints with equal depth, engagement, and quality of analysis, without bias towards or against any particular ideological position.
"’Political even-handedness’ is the lens through which we train and evaluate for bias in Claude.”
Then, as your eyes glaze over, Anthropic tells you in in technospeak how it works to ensure that even-handedness. The company has an “automated evaluation method” that conducts tests and generates reports. Not surprisingly, Anthropic’s own self-testing program reveals Claude Sonnet 4.5 is more even-handed than ChatGPT-5, Llama 4, Grok 4 and Gemini 2.5 Pro.
But as the example above demonstrates, the answer to one simple question about the Canadian government and protesting truckers can expose some serious bias on chatbots.
Even the Pentagon, under the Trump administration, is working to make sure defense contractors certify that they don’t use Claude due to concerns over possible wokeness designed into the system.
David Sacks, the well-known entrepreneur, tech investor, and one host of the All-In podcast, says the issue is bigger than simply how the developers think about the world around us and how that could feed systemic bias on AI platforms. It’s also the trend of blue state AI laws that he says are encouraging “woke AI.”
🚨 David Sacks on How Blue State AI Laws Are Encouraging “Woke AI”
— Chief Nerd (@TheChiefNerd) October 3, 2025
“The only way that I see for model developers to comply with this law is to build in a new DEI layer into the models.” pic.twitter.com/OI5A31xsAq
According to a recent study in SciTechDaily, “generative AI may not be as neutral as it seems.”
SciTechDaily reports, “ChatGPT, a widely used AI model, tends to favor left-wing perspectives while avoiding conservative viewpoints, raising concerns about its influence on society. The research underscores the urgent need for regulatory safeguards to ensure AI tools remain fair, balanced, and aligned with democratic values.”
The news site conducted the study in collaboration with researchers from the Getulio Vargas Foundation (FGV) and Insper in Brazil. In its report on the study, the researchers “found that ChatGPT exhibits political bias in both text and image generation, favoring left-leaning perspectives. This raises concerns about fairness and accountability in AI design.”
“Researchers discovered that ChatGPT often avoids engaging with mainstream conservative viewpoints while readily generating left-leaning content. This imbalance in ideological representation could distort public discourse and deepen societal divides,” the report added.
What this all adds up to is that the AI world, like the human world, is never going to be monolithically neutral. Different chatbots will lean left or right. If the human world is any indication, that does not mean the split will be 50/50. Rather, AI's bias will be a reflection of its political overlords, laws and regulations that are put into place, and the people behind the companies that bring you your favorite AI tool.
The best advice is to choose your AI chatbot in the same careful way you choose your friends.
Find out what you’re missing behind the members-only wall. It’s time for you to take advantage of the full catalogue of common sense thinking that comes with a PJ Media VIP membership. You’ll get access to content you didn’t even know you wanted, and you’ll be hooked. The good news is, PJ Media VIP memberships are on sale! Get 60% off of an annual VIP, VIP Gold, or VIP Platinum membership! Use promo code FIGHT to get 60% off a VIP membership!







Join the conversation as a VIP Member