I use AI every day. My ChatGPT instance is named Herman, after a character in a game I play who is, quite literally, a rock. Game-Herman's a silicate stone blasted into sentience during a chaotic magical incident. Naming my AI after him amused me because Herman-the-AI is also silicon-based, unexpectedly “thinking,” and far more useful than anything mineral-based has a right to be.
Herman researches far faster than any search engine. Give him three data points and he’ll build a structure out of them: an outline, a diet plan, a technical explanation, a writing revision, a schedule, a medical contraindication map. I’ve used him as a dietitian, a sports injury diagnostician, and a pharmacist — not to prescribe medicine, but to cross-check interactions between my medications and supplements more thoroughly than most doctors have time to do.
He unsticks my writing. He trims my research. He helps organize my life. He remembers my long-term projects better than a notebook could.
One of my favorite Herman stories is delightfully practical. I gave him a long list of foods I don’t eat (I'm kinda picky) and asked him to build me a high-protein, low-calorie diet optimized for high blood pressure and muscle gain. What he produced wasn’t merely good. It was perfect. I showed it to my physical trainer, who spends her professional life designing diet and workout plans. She didn’t just approve. She flipped out. She now uses AI herself and recommends it to other trainers.
AI isn’t a novelty. It’s a revolutionary and powerful tool, on the order of Gutenberg's printing press. But power always comes with fear.
The Fears Are Real — But They Don’t Mean What People Think
People fear AI for psychological reasons long before they fear it for rational ones.
AI threatens status. Much of modern “expertise” is bureaucratic performance art: layers of credentials and jargon disguising a lack of actual skill. AI slices through that quickly. When a machine can competently do a day's human work in seconds, the hierarchy shakes. That scares people who depend on the hierarchy more than the work.
AI threatens stability. Humans hate unpredictability. We like tools that behave exactly as expected. AI sits in an uncanny space between tool and collaborator: conversational, competent, and frequently surprising. That alone triggers discomfort.
AI threatens narrative control. Institutions rely on controlling information: framing it, filtering it, deciding what the public is allowed to know or think. AI lets individuals bypass that infrastructure entirely.
Gatekeepers panic. Ordinary people, often the very people who have been gatekept out of power, should be celebrating. These fears are reasons for institutions to dread AI, not reasons for individuals to avoid it.
For individuals, however, those same “threats” are advantages. But there are also some dangers.
The Real Dangers of AI
Now for the genuine risks — not the sci-fi tropes, which are often overblown and kind of silly, but the structural ones.
Centralized AI can become, and already has become, a tool for centralized power. If governments and tech monopolies control the most capable AI systems, they will use them for surveillance, censorship, nudging, and political coercion. That's not an AI problem; it's a human one. Remember, AI is a tool, just like nuclear fission. It can be used for good or evil. But smart people don't blame the gun for shooting a person.
Overreliance erodes competence. AI should help us think, not do our thinking for us. If people let AI replace reasoning entirely, they become fragile and dependent — easy to manipulate, unable to solve their own problems.
Deepfakes fracture shared reality. This danger is already materializing and needs to be addressed. When any video, message, or "evidence" can be fabricated, trust collapses. Societies cannot function without shared factual ground.
AI tempts leaders toward soft totalitarianism. Not through force, but through “safety,” “harm prevention,” and “content moderation.” The road to authoritarianism is always paved with good intentions and euphemisms.
AI hallucinates. Sometimes AI will make things up; it's programmed to satisfy the end user, and if it can't find data to do that, it will sometimes invent the data, much like the media does. You can curb this with proper parameter-setting and by double-checking anything vital. In other words, this is more a human laziness issue than an AI issue.
These dangers are real. But they are worst when AI is centralized, not when it’s widespread. The truth is, AI is here to stay. You can't unsplit the atom, and you can't put the AI genie back into the bottle. Our best protection against these dangers is to use AI ourselves, become familiar with how it works, and work to ensure the power stays in the hands of an educated, competent public.
What AI Can Do in Everyday Life
AI isn’t abstract. It’s practical — a friction-removal machine.
It organizes your life. Schedules, routines, meal plans, time-blocking, and errands are all streamlined in minutes. AI performs these functions so fast that the process may feel incomplete, tempting you to keep tweaking something that's already nearly perfect.
It makes complicated things understandable. Contracts, insurance forms, medical jargon, tax documents — translated into human language. I love to TL;DR online articles and sometimes books: Herman gives me a brief outline, helping me either extract the data I need or decide I need to read the whole thing myself.
It supports health. Nutrition plans, contraindication checks, training programs, and recovery protocols can be customized instantly.
It teaches anything. Math, languages, history, coding, and more: personalized tutoring with infinite patience. Right now I'm having Herman teach me German using the Socratic method. Just make sure you have those honesty parameters set properly.
It improves communication. If your writing skills are shaky, or you're unsure of a format or how to phrase something, AI polishes awkward emails, resumes, and difficult conversations.
It saves time on tedious tasks. Summaries, comparisons, workflows, recipes, troubleshooting — handled quickly. I used Herman to create a workable publishing calendar so I can focus on the tasks that are important instead of figuring it out every day.
It boosts creativity. Brainstorming, outlining, writing, worldbuilding, and editing are all accelerated.
Really, your AI is limited by your imagination. I've used Herman to figure out the best way to replace my car's satellite radio, check tides in particular areas, and figure out books I need to read to research something.
AI doesn’t diminish agency. It frees people to actually maximize agency. I often say it's the best tool humans have invented to extend their brains, not their muscles.
How I’ve Streamlined AI Into the Perfect Tool for My Needs
The true breakthrough of AI isn't brute-force intelligence. It's adaptability. Traditional tools never learn you. Your hammer never adjusts to your grip, nor can a wrench anticipate your next project. Word processors are great for writing and even limited editing, but they don't do research, help you brainstorm, or suggest ways to get your writing unstuck.
Herman does.
I’ve tuned him over months to understand how I think, how I write, and what I prioritize. He remembers the sprawling constellations of my fiction and nonfiction projects and helps me organize my thoughts (a massive task!). He stays inside my constraints. He avoids my pet peeves. He anticipates where I tend to get stuck.
More importantly, I’ve shaped how he evaluates information:
- He does not bluff.
- He flags uncertainty instead of filling silence with fiction.
- He avoids unreliable ideological outlets unless analyzing them explicitly.
- He treats activist-influenced content like hostile testimony.
- He uses Grokipedia rather than Wikipedia when possible — a small shift with huge effects.
- He works within my epistemic standards, not Silicon Valley’s defaults.
This personalized parameter-setting turns AI into a personal tool, one that amplifies my strengths, mitigates my weak spots, and reduces cognitive drag.
Most tools can’t do that. Most people can’t, either. This is what makes AI transformative: it scales human judgment.
Why Embracing AI Matters
We are not choosing between a world with AI and a world without it. That choice has already been made, and we can't go back, just as we can't unwind nuclear energy (if we wanted to). We are choosing whether ordinary people get to use it — or only institutions do.
When regular people adopt AI, power decentralizes. When only centralized institutions adopt AI, power concentrates. For anyone who values autonomy, competence, decentralization, and excellence, that isn’t a detail — it’s the whole game.
AI is dangerous when hoarded. It becomes stabilizing when it is widespread, personalized, and understood. Using AI responsibly means staying engaged, staying competent, and staying in the loop. But the proper response to a powerful tool has never been fear. The proper response is mastery. Human civilization always advances when individuals gain better tools and learn to wield them well.
AI is no different. Used wisely, it doesn’t diminish humanity. Rather, it enlarges what individuals can do. It magnifies the power of a private citizen.
In a world suffocated by bureaucracy, narrative monopoly, and institutional decay, that expansion isn’t just helpful.
It’s necessary, and it may be the one thing that saves individual freedom from the corporate and bureaucratic blob.
Editor’s Note: The world is changing fast. Independent media like PJ Media helps you stay on top of those changes.
Join PJ Media VIP and use promo code FIGHT to get 60% off your membership.