Category Archives: Philosophy

To Be a Finisher

Life is too short to do things you don’t enjoy. This is a fact, and it is reason enough to change your career, your relationships, and any other ongoing commitment in your life.

But there is value in finishing that which has a visible end. A project, a book, a video game. Leaving things half-done is bad for the soul: it creates the vice of jumping from thing to thing, of experimenting and never mastering.

When we bring something to a close, even something that we are no longer doing with the initial enthusiasm – and perhaps in such a case, most of all – we fortify an inner narrative about ourselves, a narrative that says “I am a person who takes things to the end. I am a finisher.”

Is it more helpful to be that person, or to be the person who jumps to the next thing at the first sign of resistance, of boredom?

Painting: “Sloth and Work” by Michele Cammarano

Artificial Intelligence Scares Me

The scenario described below is philosophy; it is not meant to be computer science. I am not a computer scientist. If someone with experience in this area wants to point to some scientific fact that renders the proposed scenario impossible, the comments are open.

My computer shuts off when it encounters a situation in which, according to predefined parameters, it is more beneficial for its mission to turn off. For example, when the fan fails and the processor overheats.

When I turn it back on, the computer has a fundamental “memory.” The settings remain, preserved by a lithium battery. But this, to the computer, is irrelevant – its “memory” plays no part in deciding what is best. If I remove the lithium battery, it will proceed in the same way, if necessary.

Imagine that we have created a supreme Artificial Intelligence, capable of reasoning and self-determination, able to process and weigh advanced philosophical and physiological concepts. Its function is to control the world, and its purpose is – just like my computer’s – to make the best possible decisions for the benefit of all.

Now the serious part: what is best for us is not necessarily what we want. Unlike my computer, we have an evolutionary bias; a bias toward life, toward survival.

It seems very likely to me that, free of this bias – like my computer – the Supreme Artificial Intelligence notes two objective facts:

  1. Human suffering affects our experience much more than happiness does; that is, when we suffer, we feel that suffering far more intensely, and it leaves a deeper mark on our memory, than any experience of ecstasy.
  2. An entity that does not exist does not suffer. (That is, we suffer because we exist, because we are alive. Before birth, our non-existence caused us no suffering of any kind.)

Faced with this, and having the ultimate goal of making decisions for us, for our benefit, to minimize our suffering…

Is it not natural for the AI to pull the plug?

Painting: “The Church of Santa Maria degli Angeli near Assisi,” by Henri-Edmond Cross

Quote I

What follows is a quote that I underlined recently and that has been on my mind – something to ponder over the weekend.

“Pain is inevitable, but suffering is optional.”

– No source; found in the book “Radical Acceptance” by Tara Brach

Bad and painful things happen to us all; by our own fault, by the hand of others, or by the rolling of the dice of fate. Not even the strongest of us is immune to pain.

But the stories we tell ourselves, the movies we play in our heads about the origin, reason, justice, value, and consequences of pain – this is on us.

Painting: “The Lictors Returning to Brutus the Bodies of his Sons” by Jacques-Louis David