
Do the Ethical Dangers of AI Outweigh the Benefits?

Mark Twain said it best:

Always do right; you’ll gratify some and astonish the rest.

Of course, it’s rarely that simple.  Despite our best intentions, the decisions we face often set our moral compass spinning in all directions.  At every crossroads, we find self-interest and equivocation clouding our judgment and weakening our resolve to choose good over evil.

From Socrates to Immanuel Kant to Stephen L. Carter, moral philosophers have grappled with finding an absolute definition for good to guide us in making better choices.  Perhaps the most widely debated is Utilitarianism:

The doctrine that an action is right if it provides the greatest benefit to the greatest number of people.

Superficially, this sounds like an ideal definition.  But it presupposes that we have a universal metric for defining benefit.  It also fails to address the morality of imposing harm on the few in order to provide felicity for the many.  If we could calculate that humiliating or torturing an individual would yield more amusement for the group than pain to the victim, classic utilitarianism not only allows the many to afflict the individual but morally obligates them to do so.

Indeed, superficiality may be the most relevant factor as we attempt to evaluate whether the rapid advances in artificial intelligence are good, neutral, or truly diabolical.

AND I LOOKED INTO THE FUTURE…

When used with discipline and discernment, technology enhances our productivity, our creativity, and the quality of our lives.  When used irresponsibly, however, it can steal our souls.  Caught in a battle between costs and benefits, what is the ethical approach for evaluating and regulating the ways in which AI infiltrates our life and our work?

The lucid prose of ChatGPT leaves no doubt that we have entered a brave new world. The composition software performs better research, writes more articulately, and reasons more clearly than many of us do.  If you’ve ever suffered through the cringe-worthy lines of amateur poetry, you might feel grateful for the smoothly flowing stanzas of computer-generated verse.

It is encouraging that classroom teachers have already begun developing creative techniques for using the new tool.  Some grade students on their collected research and their own notes before permitting them to feed the results into their computers to produce a final product.  Others let students begin with AI-generated essays, then instruct them to validate the data and improve the prose.

Writers of articles and scholarly papers claim they can triple or quadruple their output by letting ChatGPT produce first drafts.  Used thus, it’s hard to argue against the measurable benefits of AI.

LOOK INTO THE ABYSS

Ethics lives at the brink of the slippery slope.  You might remember from high school the bold warning at the front of every copy of Cliffs Notes: These notes are not a substitute for the text itself . . . students who attempt to use them in this way are denying themselves the very education that they are presumably giving their most vital years to achieve.

Did you pay attention?  I know I didn’t.  And I went on to become an English major.

In the 2008 Pixar film WALL-E, the Earth has become an abandoned garbage dump, and the remnant of humankind cruises through the galaxy on a space-ark.  The survivors are all obese, carried about on anti-gravity chairs, fed nutrition through a tube, and distracted by video screens.  They are cared for by robots whose artificial intelligence has determined that humans are incapable of caring for themselves or making their own decisions.

Life imitates art.  The Netflix documentary The Social Dilemma frighteningly illustrates how social media algorithms are designed to make users insecure, undisciplined, and literally addicted.  Studies indicate that cognitive capacity is declining at an alarming rate, and that the battle to generate advertising income by keeping viewers glued to their screens is accelerating what already looms as a cultural crisis.

LIMPING TOWARD GOMORRAH

The problem with technology is that it easily becomes a crutch.  And the problem with crutches is that once we start using them, we become dependent on them.  Short-term gains for the sake of efficiency may lead to a long-term decline into learned helplessness.

Another ethical challenge is programming bias.  The more politics becomes our new religion, the more programmers will insert their political leanings into the software they produce—whether intentionally or subconsciously.  Columnist Rob Henderson recently reported how ChatGPT refused to compose a defense of fascism but unhesitatingly produced a sympathetic critique of communism.  Is it not likely the same biases will infect AI-generated writing about business practices, philosophy, and culture?

Concerns about plagiarism and attribution are also well-founded.  Beyond that, the software has been known to produce phantom excerpts that are summaries rather than actual quotations; neither can it differentiate between information and misinformation.

Even if these concerns are ultimately addressed, human writing may already be headed toward extinction.  The online education company Chegg recently saw its stock value plummet as students abandoned the platform in favor of ChatGPT for homework assistance.

That might be fine if students were better served by AI.  But research clearly shows the correlation between invested effort and successful outcomes. Reliance on AI might therefore carry a devastating cost in proportion to the modest utilitarian benefit of convenience. As intellectual discipline wanes, human performance declines, and jobs disappear, we may indeed end up climbing into our anti-gravity couches and slurping liquid protein through a straw.

If there’s one lesson we should take to heart, it’s the need to stop reactive thinking and begin developing a proactive approach to technology and culture.  Does anyone remember the shoe bomber who, post-9/11, attempted to board a passenger plane with explosives in his work boots?  Thanks to him, we’re still required to remove our shoes going through airport security two decades later, despite little real safety concern.  We need to look forward, not just back.

History is a great teacher, but if we want to confront the challenges of the future we have to build on the past, not merely react to it.

Artificial intelligence can help us only if we make sure that it serves us, and that we don’t start serving it.

Yonason Goldson
https://www.ethicsninja.com/
Yonason Goldson works with business leaders to build a culture of ethics, setting higher standards to earn loyalty and trust. He’s a rabbinic scholar, repentant hitchhiker, and co-host of the weekly podcast “The Rabbi and the Shrink.” He has published hundreds of articles applying ancient wisdom to the challenges of the modern world, and six books, most recently “Grappling with the Gray: an ethical handbook for personal success and business prosperity.” The ninja were covert agents in feudal Japan who practiced espionage, deception, and surprise attacks. Doesn't that make Ethics Ninja a contradiction in terms? Not at all. Just as the master of martial arts turns an opponent’s strength against himself, the Ethics Ninja turns attacks against moral values back against the adversaries of ethics, exposing groupthink and double-standards through rational argument in asymmetrical battle to vanquish the enemies of moral clarity.
