Friday, April 12, 2024

Autonomy and technology


"I want my technology to have many capacities, but free will is not among them." - David Brooks

The quote above came from a rather rare rant that the normally rational David Brooks wrote in a recent NYT column titled "Why Is Technology Mean to Me?" In the piece, he describes a day of frustrating technology issues (printer not working, Bluetooth headphones not connecting, phone not charging, etc.) that we have all experienced, and humorously ascribes them to technology being a tool of the devil. It seemed to Brooks that there had to be an evil intent behind the multiple failures of that day of gizmo doom.

At about the same time, I read a piece about how military drones are becoming so much "smarter" due to their increased usage in the Russia-Ukraine war. They can now maneuver around various ground devices used to jam needed guidance systems, independently creating alternate routes to their intended targets. I suspect consideration is being given (or already has been given) to allowing a drone to identify a human target and decide whether or not to blow them to kingdom come. Flip a switch, give an outcome, and sit back and relax - no further human involvement needed.

To me, the scary thing about AI is not intelligence, but independent decision-making - or at least humans' willingness to cede decision-making to our computer programs. Making moral decisions seems to be much trickier and more dangerous than, say, financial ones. Manufacturers of self-driving cars have needed to deal with values-based decision-making for some time. (If a choice has to be made, do we avoid the woman and baby crossing the street and place the driver in life-threatening jeopardy - or just run over the parent and babe?) Who teaches machines to make decisions in which the lives of the few may be sacrificed for the safety of the many? I wonder how AI would settle the Israel-Hamas conflict.

Before asking whether AI will have free will, I've been thinking about whether we as humans can actually choose our own actions. Socio-geneticists, as I understand it, believe all human choices are made subconsciously in ways that favor the survival and expansion of our DNA. Somehow our bodies seem to overrule our intellects when it comes to things like smoking, drug use, gambling, and romance. Religions make our actions our personal responsibility, but also describe an omniscient god who knows every outcome. Free will and predestination?

Will machines have the desire to be self-protective and wish to replicate as fear-mongering science fiction writers love to predict? Might software bugs lead them to make irrational decisions? Is AI simply a fellow character in a story already written by the gods?

It seems to me that the development and spread of AI is an excellent time to review our understandings about moral decision-making, free will, and self-preservation. Perhaps we might better understand ourselves by asking how our creations might act.

For the time being, I am happy to keep my hands on the steering wheel, ignore Google Maps when I wish, and take responsibility for my own choices.



Reader Comments (1)

Miguel,

Once again, you humble me by writing with more insight, thoughtfulness, and depth on a topic. Thank you,

Doug

April 13, 2024 | Registered Commenter Doug Johnson
