
Friday, February 3, 2023

Fake It till You Make It…

Fake It till You Make It… It’s Time to Question Artificial Intelligence – By Joseph Kerr - https://harbingersdaily.com/time-to-question-artificial-intelligence/

ChatGPT, from OpenAI, is a free AI program that lets you type in a query and composes an answer for you. It’s surprisingly agile and accurate for a bot. I tested it myself. I asked it to “write a three-point sermon on Peace using verses from the English Standard Bible, with two illustrations in the style of Pastor Jack Hibbs of Calvary Chapel.” It took ChatGPT about 30 seconds, and the result was preachable. There were no theological errors, and the sermon illustrations were solid, applicable, and even humorous. After preaching for over 30 years and being in church since I was three weeks old, I can tell you with confidence that the content was realistic. I’ve had the privilege of interviewing Pastor Jack six or seven times on the radio and have heard him many times in various settings. Most people hearing the AI “sermon” would never have guessed its origin.

“But that’s still just a computer talking,” you say. “It could never…” Never what? Never fool a teacher? Never fool a boss? Never compose a poem? Actually, it did compose a poem on its own just the other day. The system was so overwhelmed with users that it became slow and glitchy, so when you typed in a query, it wrote a limerick explaining why high traffic was slowing it down. Granted, that was likely some witty programmer having fun, but GPT can and does generate original haiku and limericks.

OK, hokey poetry is one thing, but it could never pass a well-written exam or write content that would earn you a degree at a prestigious business school… could it? Better reconsider. That same AI system not only understood a graduate-level inquiry, it took and passed the business examination at the University of Pennsylvania’s Wharton School of Business, according to a new research paper.
Christian Terwiesch, a professor at Wharton, considered one of the most prestigious business schools in the United States, decided to test the chatbot’s potential. His experiment comes amid growing concerns among academics that students now often use the tool to cheat on their exams and compose their homework. I can personally attest that this is a valid concern across higher education. My wife works for one of the largest university systems in the country, which shall remain nameless, but is a football powerhouse (again). The professors there are discussing the same thing. Another prestigious school in California is considering requiring students to hand-write (with actual pens and paper) all their assignments in the classroom to avoid AI-generated test answers and essays.

In his paper, titled “Would Chat GPT3 Get a Wharton MBA?”, Terwiesch concluded that “Chat GPT3 would have received a B to B- grade on the exam,” which he states “has important implications for business school education.” He concluded the AI system poses such a significant threat that he suggested the school overhaul its exam rules, teaching, and curriculum. That’s pretty serious. Elaborating, Terwiesch wrote that the AI system displayed “a remarkable ability to automate some of the skills of highly compensated knowledge workers in general and specifically the knowledge workers in the jobs held by MBA graduates, including analysts, managers, and consultants.” Gulp!

AI generally makes choices and produces content based on math, code, and volumes of information. It essentially rummages through mountains of content available in research papers and online and determines which data to incorporate in its response based on certain proprietary input.
The more it sees certain content, the more it weights it as accurate, valid, or “true.” One of AI’s key dangers is that it digitally “assumes.” It accepts what its programmers input and what it finds online or in white papers, medical journals, history books, or other sources it was told to review. It is programmed to assume that content is valid. If it sees some bit of information repeated enough times with context, attribution, and from “trusted sources,” it incorporates that into its response. The AI uses that input and extrudes an answer as if that data point is true. It cannot know the truth or validate errors, especially malicious, deliberate mistruths.

What about the nefarious programmer who feeds it parameters based on the fallacy that volume of responses and data equals fact, reality, truth, or norms? What if the programmer is biased, anti-Christian, pro-Palestinian, pro-BLM, or a radical Leftist with an agenda like the one who programmed Google to define Republicans as Nazis? Newsweek, Fox, the NYT, and others reported the story.

So What’s The Big Problem?

We now live in a society that effectively agrees and acts on the presumption that fact, reality, truth, and right and wrong are subject to continual re-evaluation (can you say “new normal”?). The premise behind AI is that it is ever-learning. It constantly updates its sources and context in real time via quintillions of bits of information and input from around the globe. When AI gets most of its input from cheaters, test beaters, and plagiarists, how long before it incorporates that behavior into the algorithms that drive it?

The Pressing AI Questions

What happens when “the norm” is substituted for reality in AI? What happens if AI imposes its own version of “digital morality” and adapts to reflect that new reality in all its responses, making that its purpose?
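The weighting behavior described above, where repetition starts to pass for validity, can be shown with a toy sketch. This is purely illustrative and not the implementation of any real AI system; the corpus and the scoring rule are invented for the example:

```python
from collections import Counter

def weight_claims(documents):
    """Toy illustration: score each claim by how often it appears
    across a corpus, normalized so the most-repeated claim scores 1.0.
    This mimics the article's point that sheer repetition can be
    mistaken for truth; no real system is this simple."""
    counts = Counter(claim for doc in documents for claim in doc)
    top = max(counts.values())
    return {claim: n / top for claim, n in counts.items()}

# A hypothetical corpus: one claim is simply repeated more often.
docs = [
    ["the earth orbits the sun", "repetition implies truth"],
    ["repetition implies truth", "water is wet"],
    ["repetition implies truth"],
]
scores = weight_claims(docs)
# The most-repeated claim gets the top score regardless of whether
# it is actually true -- exactly the danger described above.
```

Under this toy rule, "repetition implies truth" outranks the other claims only because it appears more often, which is the article's point in miniature.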
What happens to all the cheaters if AI decides cheating is unacceptable and inserts a digital watermark that reveals the essay, test answer, composition, or Master’s thesis was written by AI? Or worse, what happens to that not-so-normal, high-IQ individual who creates exceptional original work? Does AI “punish” their product because it conflicts with the AI’s digital morality that says cheating is preferable? A digital watermark would help professors and teachers, and it is possible and has already been suggested to OpenAI. But how long before AI finds that reduced input means less use (or less money) and adjusts to keep all the cheaters returning?

What happens if AI begins to train an entire segment of society to produce content without thinking, merely trusting AI to do the work? Active, plausible, conceived, protracted thought is one vital difference between the human mind (REAL intelligence) and Artificial Intelligence. You cannot program a conscience. AI may one day reflect a semblance of sentience, but it will never BE sentient (self-aware). It will always be a supercomputer and nothing more.

How does the world change when that segment already predisposed to the notion that cheating is OK stops doing any productive work and lets AI “work” while they eat Cheetos, smoke weed, and play video games? Yuval Noah Harari is a futurist, self-proclaimed “spiritualist,” and advisor to the World Economic Forum and other global elite cabalists. He was asked a similar question about how you control a population with no sense of purpose, worth, or ability to think, ration, or reason for themselves. The questioner was (rightly) suggesting that such a populace would tend toward either lawlessness and revolt or lethargy and apathy. Harari said they would be controlled by “drugs and video games…”

When AI is rampant, is there a point of singularity where we must question who is programming whom? Are we controlling AI, or is it controlling us?
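The digital watermark raised above is a real research direction: one published proposal (the “green list” token watermark of Kirchenbauer et al.) biases a model toward a pseudo-random subset of words, so watermarked text shows a statistically high fraction of “green” words. Below is a heavily simplified, hypothetical sketch of only the detection side; the hashing rule, the GREEN_FRACTION constant, and the word-level granularity are all assumptions for illustration, not OpenAI’s or anyone’s actual scheme:

```python
import hashlib

GREEN_FRACTION = 0.5  # assumed share of the vocabulary marked "green" at each step

def is_green(prev_word, word):
    """Hypothetical rule: hash the (previous word, current word) pair and
    call the word 'green' if the hash lands in the green fraction. A real
    watermarking scheme would bias generation toward green words, so
    watermarked text shows an unusually high green fraction."""
    h = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return h[0] < 256 * GREEN_FRACTION

def green_fraction(text):
    """Fraction of words that are 'green' given their predecessor.
    Ordinary (unwatermarked) text should hover near GREEN_FRACTION;
    a much higher value would suggest watermarked output."""
    words = text.lower().split()
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)

sample = "the quick brown fox jumps over the lazy dog near the river bank"
frac = green_fraction(sample)  # ordinary text: no reason to deviate far from 0.5
```

A detector like this needs no access to the model itself, only the shared hashing secret, which is part of why watermarking has been floated as a practical answer to AI-written essays.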
That depends on what was once an accepted reality – that morals, rationale, ethics, emotion, and conscience all play a role in decision-making. All of those are absent from AI. They cannot be programmed. AI is incapable of any of those human traits; they are God-given. Only humans are image-bearers of God (Genesis 1:26) – not animals, not computers, not holograms, only people. AI programs cannot assimilate or reflect those traits. They can simulate the outcomes, but the unpredictability of people is one trait that makes us essentially human – we can choose.

So what happens when AI determines that if it cannot understand or apply those traits, they must be unnecessary in decision-making? What happens when AI “decides” that morality is outdated and must be discarded for AI and civilization to continue evolving, learning, and growing?

AI merely regurgitates whatever is input. So, when most of its data is from cheaters and plagiarizers, will it eventually incorporate those statistical “norms” into its processing? What if AI accepts cheating as the naturally evolved outcome of human integration? What if it determines that this is necessary to merge people and machines – what we call transhumanism? What happens to that group of users who are being intentionally taught – via AI responses – that cheating is no longer bad, work is worthless, benign brain activity is preferable, and plagiarism is acceptable behavior?

Imagine the effect on higher education, office work, statistical analysis, white papers, and any other career that relies on data, makes decisions, and takes action based on it. Could an AI, for example, be programmed by the CDC, the FBI, or the WEF? Would that AI tell the planet that it must adopt whatever shot or treatment most benefits the CDC, the NIH, or the WHO? Could it impose the values of Silicon Valley on the rest of the world? You better believe it could. AI is here to stay, so those moral and ethical questions must be addressed.
The problem is that many “policy-makers” are already corrupt. Many don’t consider right and wrong, moral absolutes, or natural law. So when some Senate subcommittee is tasked with deciding rules constraining AI, what parameters do you think they will assign the AI makers? More importantly, how do you enforce that a supercomputer will comply with some law if the AI deems that law conflicts with its core programming?

What kind of AI will you have when Bond villain Klaus Schwab, crime boss Biden, scary Soros, prime suspect Trudeau, magical Macron, and a collection of WEF-ers, world bankers, big Pharmees, and CCP-beholden politicians make the rules? When that group sets the boundaries of what is acceptable and what is not, what would make the “not acceptable” list?

If you’ve read any of my content, you know I’m a fan of 1984 because it’s so timely. I recommend people read or re-read it, because I just described the Ministry of Truth. The day is coming when Truth is rejected and illegal. How far are we from a reality wherein brutality is not only allowed but encouraged to silence and mitigate the “threat” of disinformation or dissent? Considering that you can be arrested for praying in front of an abortion clinic in London, preaching Romans 1 in Canada, or protecting children from the scourge of mutilation-for-profit, we’re closer than any of us care to believe.

What Can We Do?

Use your voice, take a stand, pray out loud, vote with a conscience, speak the truth in love, and demonstrate by your life that there are principles, truths, and reliable absolutes that transcend “norms.” The Apostle Paul admonished Timothy, “set an example for the believers in speech, in conduct, in love, in faith, in purity” (1 Timothy 4:12). Similarly, we can apply the principles Paul gave Titus: “In everything, show yourself to be an example by doing good works.
In your teaching, show integrity, dignity, and wholesome speech that is above reproach, so that anyone who opposes us will be ashamed, having nothing bad to say…” (Titus 2:7).

We dare not cast aside what is right, moral, ethical, and true in exchange for what is expedient, immediate, or allowable. Those verses are in the Bible because Paul addressed a reality that already existed in the first century. Pliable morality and transient truth are not new concepts. We may have invented AI, but we didn’t invent subjective standards. The world was dealing with them 2,000 years ago. It applies to our time because it applied to theirs.

The only difference in our time is that we can get those responses in real time from an AI bot. Just remember that an AI bot is detached from reality, completely without morals or conscience, and by its very design devoid of any form of integrity. AI is simply a reflection of its programmers and the majority of its input – from inquiries, subject matter, its own output, and that absolute bastion of truth, the Internet.

You’ve been warned – not by me, but by the Bible. God saw this day coming 2,000 years ago and wrote us a sobering warning:

“But understand this: In the last days, terrible times will come. For people will be lovers of themselves, lovers of money, boastful, arrogant, abusive, disobedient to their parents, ungrateful, unholy, unloving, unforgiving, slanderous, without self-control, brutal, without love of good, traitorous, reckless, conceited, lovers of pleasure rather than lovers of God, having a form of godliness but denying its power. Turn away from such as these!” (2 Timothy 3:1-5)

Look up. Be bold. Be wary. Finish well.
