Discussion about this post

Matt Smith

Extremely smart article, love this! Great insights.

Lady With A Book

This is such a great article, and I enjoyed reading it! I am not a fan of AI in general and tend to underestimate its abilities, so I appreciate your perspective and find it quite educational.

For me, the main problem with AI is not what the technology can or cannot do at the moment, or what it will or won't be able to do in the future. The main problem for me is that Big Tech leads the entire conversation and, by proxy, the direction in which this technology is evolving.

I honestly can't imagine a situation in which it would be better to allow AI/machines/anything else that's not a human being to make decisions for us. At the end of the day, you cannot say you are truly living your life if you cede the right to make choices to someone/something else. The agency to do things (and make mistakes along the way) is what differentiates a stone from a dog, a living creature from a thing. It's also worth pointing out that humans have already agreed that losing the right to make decisions is a form of punishment, hence prisons, where we lock up other humans so that they cannot leave when they wish to.

In my mind, even if AI were better, more just/insightful, there really is no point in handing over our agency to it, especially because behind the AI are the corporations with one ultimate goal: profit. They don't care about outcomes for people.

But if I were to name one thing that people can do, which is also an ability AI probably shouldn't have, it is the ability to say "no," to rebel against orders or against the order in general. Like when someone decides not to press "the button" even under pressure from those around them, and thus the world doesn't end in the blinding blast of a nuclear explosion. With AI, however, the very guardrails that should be implemented to keep it under our control may ultimately cause a catastrophe, depending on who is in control. Or, in a more everyday scenario: I think Shoshana Zuboff, in her book "The Age of Surveillance Capitalism," gave an example of Eric Schmidt (I may be misremembering whether it was actually him) saying that in the future it would be more sensible to simply shut off, remotely, the engine of a car belonging to a person who was late on their lease payment (and since that could be decided by an algorithm, I think it could be decided by AI as well). It would be more efficient, true. But what if that person was driving to the hospital, or picking up their kids from school?

Ultimately, if a person makes a grave mistake, they can be held accountable in a court of law, but who will be accountable for AI's decisions? There's a lot to think about here, so it would probably be easier not to give AI the right to make choices for us. You see, efficiency is just a metric, and a subjective one at that.

That's what came to mind when I read your article - sorry for the length of my comment ;-)
