Viola goes to philosophy therapy
Permalink

"Hi! Nice to meet you. I'm Iomedae."

"You had marked down you'd want to figure out philosophical stances, specifically regarding: wireheading, the long reflection, moral patienthood of insects and particles, demanding moral action of other people, and various other topics mentioned in this pre-info document you sent me. Where would you like to start?"

Permalink

"Let's start with wireheading.

It seems to me that wireheaded pleasure is 'fake'. Even in the steelmanned 'it's not just brain chemicals, you get to live in an optimized simulation for your own long-term maximum satisfaction' version.

In general I have a hard time considering pleasure in itself valuable or suffering in itself anti-valuable. They seem to me like arbitrary classifications of states of being. I think 'being nonfunctional' in the long term is something that seems clearly bad, and suffering is bad insofar as it causes that.

Also, action in general (think gravity), and agentful actions (a human going to work, or a human donating money), feel valuable to me.

Sometimes I'm afraid that if I figure out my values I'll just come out as a paperclip maximizer (or some sort of violent revolutionary). I think that might be why I have a lot of thought patterns like "philosophy is hard" and "ethical decisions are difficult" sort of forced in, even though in many situations my ethics and intuitions could give a simple answer. So I feel my ethical thinking is a bit stuck in a place with a lot of fear."

Permalink

"Hm. We should perhaps engage the feelings of being stuck and afraid.

Do you think without this avoidantly motivated reasoning you'd not think that ethical decisions are difficult and philosophy is hard?"

Permalink

"Hm. Perhaps not. I think most people who do conclude philosophy is easy often arrive at wrong conclusions.

Nature activists think we should care about nature. Animal activists, about animals. Political activists, about freedom, equality, or something like that. EAs think we should care about everything with the correct weights, then crunch some numbers and conclude we should solve AI risk.

It makes me furious. I'm not even sure why. I have wide uncertainty about ethics, but I also agree that we have to choose the 'action that seems best with current information', and not wait to find perfect answers.

I guess part of what makes me furious is that most of the obvious answers, such as 'nature is good', 'justice is good', 'utilitarianism works', or 'empathy is a good source of morality', seem naive or not thought through.

For many years I believed the answer to what is ethical, in my opinion, could be 'utilitarianism with the correct utility function', but I could never define that function. I guess that's due to a lot of conflicting moral intuitions (not in order):

1. more existence is better than less; lack of existence is bad (e.g. the heat death of the universe would be an ethical catastrophe)

2. no entity should suffer too much

3. entities should be able to decide what they do

4. we should strive to make use of all the resources of the universe (e.g. a paperclipped universe seems better than one untouched by intentional action)

5. there should be cooperation, happiness, creativity and love

I don't know what it is, maybe it's a cope, but this usually leads me to conclude that Utopia X is suboptimal because it fails to fulfill some condition, and that Dystopia Z is better than not having Dystopia Z, usually due to principle 1 or 4.

I guess there used to be a time when I thought there was a sane ordering for these principles. But maybe the problem is that they don't actually have a nice ordering in my head, so they contextually bounce around to give me snarky things to say."

Permalink

"The question I most feel like asking here is that what are you looking for? What do you want to answer? How do you want to listen to these principles?"

Permalink

"What am I looking for... I want purpose. I want to know what's wrong in the world. What needs to be fixed.

Maybe there are motivated reasonings playing around here again. I need things to fix. I need ethical dilemmas. I would perhaps feel unsatisfied if there was a simple answer to all of this.

Let's do a thought experiment. Presume there exists one of {O: objectively good morality, S: subjectively acceptable answer to morality for me}.

If I learned O, what would it mean? Most humans, since they don't believe O, are wrong about ethics. Not enough people who think a lot about ethics have concluded O, since there's no clear consensus in ethics. It would imply I need to take some actions. (And again, I'm pretty scared of it implying some radical actions.)

If I found S, what would it mean? I would be frustrated that it took me 'this long' to arrive at it. And I'm scared it might imply radical actions.

Let's look at these radical actions a bit: what's the scary part here?

I think part of it is that there's such a significant pragmatist component in my ethics, the kind people usually associate with (imo naive) utilitarianism: I don't really have intuitions about some means or methods being wrong in themselves. I don't think breaking the law, stealing, or nonconsensual violence are in themselves wrong, if they achieve a good outcome with enough likelihood. When I see news about terrorism I don't think the terrorists are wrong because of the means. I think they're usually wrong because they aren't being effective at achieving their goals, and are paying high prices for mediocrely effective means.

I guess this might come down to having a fascination with grand sacrifices. An aesthetic one, not an ethical one. But it keeps coming to mind when I think about ethics.

So. Perhaps let's use revolutions as the example now, just because of the connotations, even though I'm not actually sure of the ethical difference between revolutionary work and terrorism.

Regarding grand (in scale of consequence) sacrificers of human lives, like the revolutionaries Lenin or Mao: what comes to mind when I try to think about whether I ethically endorse or judge their actions is that I don't like judging people. And, like, in real life, if somebody does something that ethically speaking has huge costs, it's really hard to actually say whether they were being anti-heroic or villainous. Because, at least in my somewhat consequentialist (even with all the uncertainty) view, that depends on whether what they were doing was the best plausible action given the information they had at hand.

(why are revolutions coming to mind so much when considering ethics?)

... I think I emotionally believe I'm already 'too weird' for the current society. I'm scared of taking it further."

Permalink

"You went very wide there and I think we should narrow and depth-first it again. 

Just to confirm I've interpreted correctly: You want purpose. You are scared of being 'too weird'. You are scared of being evil, in a certain sense of the word.

Why are you scared of being too weird?"

This Thread Is On Hiatus