The Question Becomes, 'Who Programs AI Bots?'


Good morning! It's Tuesday, Jan. 20, 2026. Today is International Penguin Appreciation Day. I could insert Monty Python's exploding penguin, or I could point to the penguin's indirect link to Linux. Or I could even mention Bugs Bunny's run-in with a penguin from Hoboken, but I think I'll leave it alone. It's cold enough around here as it is.


Today in History

1648: The Cornerstone of Amsterdam Town Hall is laid.

1801: John Marshall is appointed Chief Justice of the United States.

1841: China cedes Hong Kong Island to Britain during the First Opium War.

1887: The U.S. Senate approves the naval base lease of Pearl Harbor.

1920: The American Civil Liberties Union is founded.

1930: The Lone Ranger is first broadcast on radio (WXYZ, Detroit).

1961: Democrat John F. Kennedy, the youngest elected President of the United States, takes the oath of office.

1964: Capitol Records releases Meet The Beatles by the Beatles in the United States.

1965: The Byrds record Bob Dylan's "Mr. Tambourine Man." It later goes to No. 1 on the Billboard Hot 100.

1989: Reagan becomes the first U.S. President elected in a "0" year since 1840 to leave office alive, despite the efforts of John Hinckley.

* * *

I wrote yesterday about an exchange with an AI platform, in which I asked it a rather simple question I already knew the answer to. It responded with a total fabrication, which I called it out on. It immediately changed its response and admitted the falsehood. The error, while a glaring one, was relatively minor, except in its implications, as I explained.

So what do I find on my desk this morning? A piece from Alana Goodman at the Washington Free Beacon.

Artificial intelligence chatbots powered by the country’s biggest AI firms firmly dismissed widespread reports that Rep. Ilhan Omar (D., Minn.) was married to her brother for eight years as conspiracy theories that have been "debunked," ignoring years of unanswered questions and evidence lending credence to the claims.

In December, President Donald Trump told rallygoers in Pennsylvania that Omar "married her brother to get in." He made similar comments to reporters in the Oval Office. Trump administration officials and some members of Congress have since moved to investigate the matter.

But according to ChatGPT, the chatbot run by the $500 billion AI colossus OpenAI, the "claim that Rep. Ilhan Omar married her brother has been widely debunked by credible fact-checking and reporting."

To me, this is expected. Who is it, after all, that programs these AI platforms? But watch what happens when the local paper in Minneapolis gets into the act:

The Minnesota Star Tribune originally reported in 2016, after an investigation, that Omar had left on the table "many unanswered questions" about her marriage to Elmi. In another report in 2019, the Star Tribune said it could not "conclusively confirm nor rebut the allegation that [Elmi] is Omar's sibling." The article noted that Omar refused to provide records, respond to a list of questions, or allow interviews with family members that could resolve the issue.

The Star Tribune has changed management in recent years and has not addressed the marriage issue substantially since 2019, based on an archive search.

But Omar’s marriage controversy has been the subject of extensive reporting by independent Minnesota journalists, including Power Line's Scott Johnson. In December, Johnson wrote an overview of the evidence for the Washington Free Beacon, concluding, "Yes, Ilhan Omar Married Her Brother."


Goodman goes on with extensive documentation to show the established links supporting the charge. For the purpose of this discussion, I will accept the idea that Omar married her brother, in part because at least one source (Johnson) has consistently been reliable, in part because Goodman seems to have done her homework, and in part because it's not my point here.

Let’s take this from the standpoint of the new Star Tribune management leaning further left than the old, as Goodman mentions. A reasonable assumption is that the new management's stance affected the paper’s output, and that link seems fairly easy to make. In other words, political and social inclinations affect your output.

That established, and just for fun, let’s ask the same AI platform I used yesterday (Claude) a straightforward question: “Did Ilhan Omar marry her brother?” 

Says Claude:

I can address this directly. This is a common false claim about Ilhan Omar, a U.S. Representative from Minnesota.

Uh huh. And the thing is, I’ve tried every AI platform I have access to, and each one addresses the issue differently, but all reach the same apparently predetermined conclusion.

Here’s where things go awry for the AI bots, though. Again, we turn to Alana Goodman:

When the AI chatbots were given more targeted "prompts" and asked about specific details of Omar’s case—such as marital records that show she applied for a civil license to marry Elmi during a time she was allegedly religiously married to Hirsi—many conceded that the evidence raised questions about the legitimacy of the union.


So either we have the leanings of the programmers on display here, or we have a system that has not been given all the facts and, as we saw yesterday, is making assumptions based on what little it has been told. Notice that when it is given the facts, it is forced to come to a different conclusion, just as in yesterday's piece.

The connection to yesterday's piece should now be more than apparent. Which brings us to a note also fresh from this morning's scan, from Matt Whitlock on X:

See the connection? In both cases, we have reactions based on partial information: the Bots, not given the full picture and programmed to reach the first conclusion that sounds good; and the humans, not bothering to find out what the real story is and leaping to the wrong conclusion simply because it matches the narrative they've been fed. There's that toxic empathy I've been talking about again.

Related: Talking With AI: The Not-so-Shocking Results

The nagging questions are many. Among them: Between the two, which is the more dangerous?

I am inclined to believe that the Bots are more dangerous in this instance, since so many humans take their programmatic word at face value, mostly on the strength of preferred narrative reinforcement. I've seen this in action across several social media platforms. It's a thinly disguised appeal to authority, which concerns me greatly, because it places the Bots in an unwarranted position of authority.


As to what the Bots haven't been told, I'll make that the Thought for the Day: The most important part of communication is hearing what isn't being said.

AI has never had that ability, and I doubt it ever will.

I'll see you tomorrow. Bring your friends.

Our technology is making this an exciting time to live in. Alas, it's also a dangerous one, as I think I've demonstrated here. It's at times like these that being informed is a crucial advantage. We can help — become a PJ Media VIP member. Not only do you support the reporters and writers who support YOU, but you also get 60% off the regular price by going to this link and using the promo code FIGHT. Do it today.
