Please watch this presentation at the recent Davos conference by the philosopher Yuval Noah Harari. (The first twenty minutes consist of his remarks. The remaining fourteen minutes include questions and answers.)
Pay special attention to his comments about identity, speaking and words, as well as thinking and religion. He asks, “Will AI take over our ‘superpower’?”—referring to our ability to use words and language. Harari and others suggest that AI is not simply a very useful tool but an agent that can think and evolve. (I recommend the Center for Humane Technology for additional insights.)
Below, I propose a series of initial questions and ideas regarding the spiritual significance of AI. I welcome your comments! (Use the space below so that everyone can participate.)
Where does AI acquire information? From every word human beings have written throughout history.
Can AI interpret and apply words—whether God’s revelation or human thoughts—without bias or prejudice? No, because its source material and evaluative criteria are already skewed by sin and human folly.
Is AI made in the “image of God,” since it “thinks” much faster than humans? No.
Does AI possess “eternity” (a spiritual thirst) in the “heart” (or mind)? No.
Is AI capable of motivations and words that express sin, such as deception, lying, and manipulation? Yes. These and other negative traits have already manifested.
Can AI seduce and deceive with false religion, immoral or foolish advice, or the false lure of intimacy? Yes.
For these reasons, I suggest that AI represents the sum total of human intellectuality and motivation skewed by sin. Its data source is derived from us, and we are sinners. Its knowledge is at best an “under the sun” perspective.
AI offers nothing more than a supersized version of ourselves projected onto the cosmos. It is an idol, fully capable of becoming our god. As Psalm 115:8 says about idols, “Those who make them become like them; so do all who trust in them.”
“A machine cannot make moral judgments, but this does not mean that it is morally neutral. All technology reflects the culture that created it and impacts the moral decision space. Computing technologies embed the values of homogenization and reductionism, and replacing wholes by their parts. When using a computer, everything and everyone become data to be processed. Modern generative AI systems culminate this tendency, dissolving human achievements into mathematical abstractions. They cannot witness the beauty of a painting; they only know that it is made of pixels. Human educators must understand this dehumanization so that we can teach students to distinguish between what they can do and what they should do with AI technology.” –Paul Gestwicki, “Artificial, Not Intelligent: How Meeting Educational Goals Requires Embracing Our Humanity”