Pitfalls and Praxis for AI Usage
(Sorry for such a long delay, friends. I’m currently hiking the Camino de Santiago and I won’t be online much for the rest of the week.)

I feel like I’m becoming soap-box-y about my AI discussions, but it’s such an important and timely issue, and I feel somewhat discouraged by the way much of the conversation is going. Which is to say…full steam ahead.
It feels very typically American. Remember a few years ago, when Sprinkles Cupcakes opened up, and soon, across the country, everyone just went crazy over cupcakes and there were cupcake shops everywhere? (Well, I’m in Dallas. We love a trend.) Or juice bars? Or cryotherapy? Any time something seems new or promising, it feels like the American way to jump on the bandwagon as quickly as possible and monetize it.
And here we are, just trying to use a washing machine or order a bus ticket and we’re asked if we want to use AI. My God, American capitalism has no chill.
I’ve already shared about the environmental devastation that AI is bringing, and to the poorest areas first, of course. But I also worry about the inherent colonialism in this information-gathering process.
A good friend of mine feels very put out by my aversion to AI, and recently he wanted to prove to me how smart and useful it is. So he opened ChatGPT and asked me to give it any hard question. I said, “Ask it who Pelagius was.” It spit out the very answer I expected: Pelagius was a 4th-century Celtic teacher who was deemed a heretic. Which is true. But I’m more interested in what it didn’t say. Which is that Pelagius was a Celtic theologian who was maligned by Augustine and found to be orthodox in three different church councils until the vote in the fourth one finally went the way Augustine wanted it to go, and Pelagius was declared a heretic. It didn’t say there was a debate. It didn’t say there was any story other than that very “case closed” summary.
I told my friend this and he then attempted to ask a number of non-leading questions to get this information. But ChatGPT doesn’t have this information, because the aggregate of information it has is from the majority opinion, which is Augustine’s, even though a lot of us think he was dead wrong.
The point I’m trying to make is that the information is by very definition going to be colonialist. Which is to say: it will toe the party line, every time. It will side with the winners and write it as if the losers are hardly in the story at all as their own characters. And that can be dangerous, not just overly simplistic. We’ve seen what happens when oppressed people have their stories co-opted by a dominant narrative.
But that’s not the only problem. When you add in the current predicament of fake news, how can AI’s aggregating algorithms take these unreliable sources into account? If we could program AI afresh this year with current website information about the Civil War, say, or the efficacy and safety of vaccines, what would it say? How does it know that RFK Jr. is not the same kind of Secretary of HHS as all the ones before him? Because while AI can aggregate information, it cannot sort it based on credible and non-credible sources, because it all depends on who is feeding the machine.
Friends, multinational corporations based on greed and not the well-being of all people are feeding the machine. I do not trust this.
If we believe that AI is here to stay (which I think is true), the question for us is: what role will it play in the world we are co-creating? And while we cannot fully shut down the multinational machine hell-bent on shoving AI down our throats even in ridiculous and unnecessary places, we can make choices about how we use it.
AI has some real promise. The way it can aggregate data that would take humans significantly longer is a real and tangible benefit, especially when used in settings like laboratory science. My husband used AI to calculate whether it was cheaper to buy a train pass or individual tickets on a recent international trip. That would have taken him forever to figure out, and AI did it in mere seconds. How awesome is that?!
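For what it’s worth, that train-ticket question is really just simple arithmetic, the kind of task a few lines of code (or a spreadsheet) can also settle. A minimal sketch, where every price and every leg of the trip is invented for illustration:

```python
# Hypothetical fares -- not real prices for any actual trip.
pass_price = 280.00  # cost of an unlimited rail pass
ticket_prices = [42.50, 35.00, 58.25, 29.90, 61.00, 44.75]  # individual legs

# Total up the individual tickets and compare against the pass.
individual_total = sum(ticket_prices)
cheaper = "pass" if pass_price < individual_total else "individual tickets"

print(f"Individual tickets: {individual_total:.2f}")
print(f"Rail pass:          {pass_price:.2f}")
print(f"Cheaper option:     {cheaper}")
```

The point isn’t that everyone should write code; it’s that this is exactly the kind of mechanical, no-creativity-required task where automation (AI included) genuinely shines.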
When AI is used to collect information that does not require human creativity or intuition, rather than as a flat summary of complex human history, I think it could be fabulous. Seen this way, it is like the washer and dryer, which took a long and laborious task and made it so much more efficient. This is good technology.
So what if we asked: what do humans gain by using this? And what do we lose? What efficiency is brought, and what potential matrix of meaning might be lost? Seen that way, using AI for calculating train tickets is a no-brainer. Using it to write your English paper is just a plain no.
And of course, it feels wise to ask the bigger questions such as: who is benefitting from this? And who or what is being harmed by it? (Always with the maxim my theology professor taught us, which is: if it’s not good news for the poor, it’s not good news for anybody.)
Whatever we do, we have the best hope if we do it together. So what ideas do you have about practices around AI?