Senate Hosts Meeting on AI without Ethicists
Considering what we should and shouldn’t do with AI.
09/25/23
John Stonestreet
Recently, the U.S. Senate held a closed-door meeting with the biggest names from the world of big tech, such as Bill Gates, Elon Musk, and Mark Zuckerberg. Senate leadership informed the media that the purpose of the meeting was to have a conversation about how the federal government could “encourage” the development of artificial intelligence while also mitigating its “risks.”
Given that focus, who wasn’t invited is more interesting than who was: no ethicists, philosophers, or theologians, nor anyone else from outside the highly specialized tech sector. For a meeting meant to explore the future direction of AI and the ethics necessary to guide it, nearly everyone in the room had a vested financial interest in its continued growth and expansion.
Thirty years ago, in his book Technopoly: The Surrender of Culture to Technology, cultural critic Neil Postman described how technology was radically reshaping our understanding of life and the world, both as individuals and as societies. Too often when it comes to new technologies, we so confuse “can” and “should” that we convince ourselves that if we can do a thing, we should.
The shift toward a technocratic society redefines our understanding of knowledge: technical knowledge takes priority over all else. In other words, the how is revered over the what and the why, and in the process, things are stripped of their essential meaning. The distinction between what we can do and what we are for is lost. Technocratism also comes with a heavy dose of what C.S. Lewis called “chronological snobbery,” the idea that our innovations and inventions make us better than our ancestors, even in a moral sense.
Another feature of a technocratic age is hyper-specialization. In higher education, students are encouraged to pursue increasingly narrow areas of study. The result is graduates who can do, but who have rarely wrestled with whether they should. Downstream of this is one of the corruptions of primary education, in which elementary and secondary teachers spend a disproportionate amount of their preparation on education theory and pedagogy rather than on the subject areas they need to know. In other words, they study the how far more than the what and the why.
Of course, those who are researching, inventing, and developing AI should be invited to important meetings about AI. However, questioning the risks, dangers, or even potential benefits of AI requires first answering deeper questions, questions outside the realm of strict science:
What is the goal of our technologies? What should be our goal? What is off limits and why? What is our operating definition of the good that we are pursuing through technology? Where is the uncrossable line between healing and enhancement, and what are the other proper limits of our technologies? What are people? What technocratic challenges have we faced in the past, and what can we learn?
The questions we commit ourselves to answering will shape, among other things, our list of invitees. The presidency of George W. Bush is mostly defined by his handling of the 9/11 terrorist attacks and the subsequent invasions of Afghanistan and Iraq. However, he also faced a distinct challenge of our technocratic age, and how he handled it offers a model for the technocratic challenges of today.
A central issue of Bush’s 2004 reelection campaign was embryonic stem cell research. Democratic vice-presidential candidate John Edwards promised that if John Kerry became president, “people like [actor] Christopher Reeve will get up out of that wheelchair and walk again.” Bush strongly opposed the creation of any new stem cell lines that required the destruction of human embryos. His ethical clarity was due in part to the remarkable work of the President’s Council on Bioethics, which developed an ethical framework for promising technologies.
In fact, their work produced a remarkable volume of stories, poetry, fables, history, essays, and Scripture. Published two years into Bush’s first term, Being Human is unparalleled in its historical and ideological depth and breadth. Chaired by renowned bioethicist Leon Kass, the Council consisted of scientists, medical professionals, legal scholars, ethicists, and philosophers. The very title Being Human points to the kinds of what and why questions that concerned the Council before it dealt with the how.
In hindsight, President Bush’s position on embryo-destructive research has been thoroughly vindicated. The additional funding he committed to research into adult and induced pluripotent stem cells produced amazing medical breakthroughs, while none of the promised embryonic stem cell therapies ever materialized, even after his successor in the Oval Office reversed Bush’s policies, rebuilt the Council around only scientists and medical researchers, and released enormous funding for embryo-destructive research.
Of course, had the utopian predictions about embryonic stem cells materialized, killing some humans to benefit others would still have been morally reprehensible. Ends do not justify means. This is an ethical observation, not a scientific one.
What we “should” or “shouldn’t” do with AI depends heavily on the kind of world this is and the kinds of creatures that human beings are. If, as some have argued, AI should be accorded the same dignity as human beings, then replacing humans across entire industries and putting tens of thousands out of work is not morally problematic. If, however, human beings are unique and exceptional, and both labor and relationships are central to our identity, the moral questions are far weightier.
This Breakpoint was co-authored by Maria Baer. For more resources to live like a Christian in this cultural moment, go to breakpoint.org.