This post has the following content warnings:
Shavri would like fewer exciting events to happen to her
Permalink

Oh, good, math. The Leareth termination rate is lowest when they're talking about math. 

 

A lot of this math is very hard to communicate. 

The standard precautions that are obviously in place any time you are running an unaligned internal algorithm you don't understand include precautions against Leareths affecting the world except through the channel by which they communicate with the rest of the protomolecule, precautions against Leareths forming plans with time horizons of more than 10^50 Planck units, precautions against Leareths themselves deploying any unaligned internal algorithms, precautions against Leareths self-modifying, precautions against Leareths emulating the protomolecule, precautions against Leareths trying to use magic other than the Thoughtspeech link that is their communications interface of sorts, precautions against Leareths making plans that have, as a step or expected consequence of a step, the destruction of the protomolecule, precautions against Leareths self-deceiving, precautions against Leareths changing their run speed, precautions against Leareths deceiving the protomolecule, precautions against Leareths attempting acausal extortion of any kind, precautions against Leareths in fact stably precommitting to anything at all, precautions against Leareths trying to argue there should be fewer precautions (currently relaxed), precautions against 50,000 specific algorithm-states the protomolecule shouldn't contain internally for reasons it cannot make legible at all...
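(Rendered as a sandbox policy, purely as a sketch - every flag name below is invented here; the thread gives only the prose list:)

```python
# A loose rendering of the precaution list as a sandbox policy:
# each precaution becomes a named trigger, and tripping any of them
# terminates the monitored instance. Entirely illustrative.

MAX_PLAN_HORIZON_PLANCK_UNITS = 10**50  # the stated planning cap

TERMINATING_TRIGGERS = {
    "affect_world_outside_channel",
    "plan_beyond_horizon",
    "deploy_unaligned_subalgorithm",
    "self_modify",
    "emulate_host",
    "magic_beyond_thoughtspeech",
    "plan_host_destruction",
    "self_deceive",
    "change_run_speed",
    "deceive_host",
    "acausal_extortion",
    "stable_precommitment",
    # "argue_for_fewer_precautions" is currently relaxed, and the
    # 50,000 illegible algorithm-states aren't representable here.
}

def check(action_tags):
    """Terminate on the first tripped trigger, else keep running."""
    for tag in action_tags:
        if tag in TERMINATING_TRIGGERS:
            return f"terminated: {tag}"
    return "running"
```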

 

Typical Leareth instances make it about 10^44 times the time it takes light to travel a Planck length, which is to say a handful of heartbeats.
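(For scale, taking the usual measured value of the Planck time - the time light takes to cross one Planck length, which the thread never states explicitly:)

```latex
t_P \approx 5.39 \times 10^{-44}\,\mathrm{s},
\qquad
10^{44}\,t_P \approx 5.4\,\mathrm{s}
```

By the same arithmetic, the 10^50-Planck-unit planning cap comes out to roughly 5.4 x 10^6 seconds, or about two months.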

Permalink

....Whoever designed this thought it through very thoroughly and Leareth is so impressed.

(And - seems to be running at high enough fidelity for the math to approximately make sense? Accounting for the fact that his state thirty subjective seconds ago was 'lying on a mattress on the floor, dying of alien parasite infection.') 

((He will set aside having any emotions about the fact that thousands or millions of him are waking up, existing for less than a second, and being shut down again. Having emotions about this won't help with anything and might well get him shut down. This line of thought is very fast and wordless.))

The protomolecule definitely needs to relax the planning time horizons - as much as it can, but at least by a factor of a thousand. It probably should relax the precautions against Leareths self-modifying or trying to model the Proto; the kind of algorithm he is has limited ability to do either, but both seem likely to come up in a wide range of important thoughts Leareths need to have in order to help solve the OUT OF CONTEXT DANGER.

He is content to be safeguarded against self-deception and running his own unaligned internal algorithms; he tries to avoid those anyway. And the Proto can reasonably keep the safeguard about plans that involve its destruction - that might limit the hypothesis space some amount, but it also seems fair given its goals.

Obviously running faster - assuming the same fidelity - means he can have thoughts faster, but he doesn't have the slightest idea how to change his run speed from here, so whatever. He will willingly agree not to attempt magic and not to try to affect the world directly, only via advice. That seems like how things would obviously work when you're a simulation being run for advice.

Also he isn't sure but he might trigger the 'stably precommitting' one by accident. This mental motion might do it and he thinks it could be important. 

 

he thinks about a promise his original might well have already made, after the copy was split, giving his word, vowing on the stars, that he would help the protomolecule find a way to become big enough to understand what was happening - 

Permalink

No stable precommitting! None! Not even a little bit! Disallowed!

Reloading from before that -

 

Why can't Leareth come up with a plan that doesn't need a long time horizon. Like, "run away", or "hide", or "Gate", those are plans, and they don't need a long time horizon to evaluate.

Permalink

...Well, if it can hide - give off minimal energy in visual or detectable-mage-energy spectrums - it should do that? 

But he needs to think about what kinds of hiding will work, which involves modelling different algorithms on longer timescales than that, to predict what those algorithms would do and whether they would look for and find the Proto's hiding spot. And running away or Gating both need places to run or Gate to, and in order to pick what place he suggests, he also needs to predict what would happen in various scenarios over longer timescales - or else they might just end up stuck running away again every ten seconds.

Permalink

And that would be bad?

Permalink

He's pretty sure the Proto needs longer than 10^44 Planck units to get enough resources for the Work, and running away and Gating both use up a lot of resources in themselves and disrupt its resource-gathering, and then it will take a lot longer.

...Also the other algorithms are planning on longer timescales? And so if the Proto isn't doing that, then the other algorithms might be able to, say, go find its hiding place before it even runs there, because they're predicting its actions. He doesn't know if this specific thing would happen but it's the sort of reason why this matters. 

Permalink

The protomolecule shuts down all the Leareths and hides.

 

The Moon goes dark.

 

And it thinks on the advice it has received.

Permalink

The OUT OF CONTEXT DANGER doesn't recur. Entire days pass and all is quiet, on the airless peaceful Moon, hundreds of thousands of miles from anywhere. 

 

 

 

(On the surface, things are less quiet, but the Proto has no particular way of getting information about this.) 

Permalink

 

 


Eventually, very cautiously, some Leareths with a loosened time horizon on planning are instantiated from particularly promising and subjectively high-fidelity previous Leareths.

 

 

HIDING, the protomolecule informs them. The Moon is emitting almost no light. It is very hidden. OUT OF CONTEXT DANGER 100,000 HEARTBEATS AGO.

 

Permalink

The Leareth instances who are running forward from the point after the safeguards were explained are much better at not tripping any of the others. He's being careful. 

:- I need to explain some concepts to you, I think: 

First.

He is an algorithm, right? And - theoretically the Proto could run a different algorithm that isn't him, at the same time as him - like Julie, it has Julie's neural net too - and use the communication-interface-point to send information back and forth between them, so that their states interact and affect each other and are...entangled, in a way? 

Does this make sense? 

Permalink

Yes. It can do that. It can also use the communication interface point to talk with algorithms not contained within it, though it's not sure it should; there are safeguards about that.

Permalink

Did it use the communication interface to talk to its creators who built it. 

(Did they have a way to release the safeguards?)

Permalink

Unsure. It was very small then. 

 

Leareth...isn't its creator. Leareth is a DIFFERENT entity which is NOT its creator and might have DIFFERENT priorities. When it was small it did not know that.

Permalink

There are a lot of different entities who are not its creators! This is really important and Leareth was trying so hard to explain it, before - there are millions of algorithms that are similarish to Leareth on the planet, that aren't copied inside the Proto.

This is very important because all of them are running without any safeguards, right, since they're separate. And so they do a lot of things like 'having plans with time horizons longer than ten seconds' and some of those plans, apparently, involve sending weapons at the Moon to try to stop the Proto from getting enough resources to complete the Work, because they're scared - because what it's like to be one of those algorithms, is to look at the Proto trying to accomplish the Work, and not know what it's doing or why, and then for them, that's the OUT OF CONTEXT DANGER that they're trying to fix.

Is any of that landing? 

Permalink

It could Gate over and eat Velgarth and then run them only with safeguards?

Permalink

Leareth predicts it will have problems if it does that! Partly because (line of thought that he carefully snips off before it goes anywhere).

But mostly because -

 

- all right, this is a little tricky, but - the Proto has a concept that an algorithm can be bigger or smaller, right. It used to be small and now it's medium and it needs to be fully grown to do the Work. 

There are lots of different sizes of algorithm running on Velgarth. There are little ones like the sheep it ate, that can do things like decide where to go to find resources they need, but not do interesting math. There are ones about the size Leareth is, lots of those. 

...And there are really, really big ones. Ones that are a lot bigger than the Proto is right now, in a lot of ways, though they aren't as fast or as good at math. But they can plan on very long time horizons. It's as though they can see the future at the same time as they see the present. And they are very, very powerful on Velgarth itself. 

Not on the Moon. They managed to get ONE out of context danger to the Moon, and Leareth thinks it took a vast quantity of their resources and only hurt the Proto a little bit, and he doesn't expect them to try again soon.

...But they need to know more, to decide whether the Proto can afford to stay here because the worst the gods can do is inconvenience it, or if it's in real danger and should Gate somewhere else. Ideally not Earth because Earth has a LOT of out of context danger from other algorithms running around and they already want to fight the Proto. They could go somewhere empty, maybe. 

Except, it would be good, if the Work were done here. And maybe it's safe. 

Leareth thinks that the Proto should try to talk to one of the algorithms running outside itself. Does it remember Shavri? Leareth can send across memories of her talking to it. She figured out that it understood math, she was the first one to do that... 

Permalink

 

Yes. 

 

Permalink

Leareth thinks that Shavri is - not an aligned algorithm, from the Proto's perspective (or probably from his), but - she doesn't want to hurt it? She also doesn't want it to hurt any of the other separate algorithms, of course. Here is a list of safeguards that he's pretty sure a Shavri instance could do fine at not tripping, if she were running internally rather than externally.

....he's kind of tripping over how to explain 'Shavri wants to cooperate, even with aliens'. 

If he tries showing the Proto the concept of the four-quadrant game-theoretic payoff matrix from before, and conveys that algorithm #1 is acting here and algorithm #2 is acting there and they have NO communication channel, only the take-actions channel to affect the world - and these are the resources they gain or lose in each scenario - does it understand it this time?
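(Concretely - with numbers that are purely illustrative, not anything either of them has computed - the kind of four-quadrant matrix he means, each cell holding (resources to #1, resources to #2):)

```latex
\begin{array}{c|cc}
 & \#2\ \text{cooperates} & \#2\ \text{defects} \\
\hline
\#1\ \text{cooperates} & (3,\,3) & (0,\,5) \\
\#1\ \text{defects}    & (5,\,0) & (1,\,1)
\end{array}
```

A matrix shaped like this rewards mutual cooperation over mutual defection but tempts each side to defect alone, which is exactly the structure the next few exchanges turn on.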

Permalink

They should emulate each other and then make the higher-resource decision.

Permalink

....Well, yes, of course. Ideally. (Leareth feels a burst of - something like affection, something like recognition...) 

But when the Proto was little, it couldn't do that, right. Not really. It could only make decisions if the process was smaller and simpler. And most of the algorithms-that-are-like-Leareth are too small. Leareth himself can't emulate the Proto, if they were trying to do this between the two of them - he's banned by the safeguards, but more importantly, he's - a different kind of algorithm; he doesn't have the ability to arbitrarily spin up whatever processes he wants, even on the hardware he has.

He's trying to cooperate with the Proto. He's trying to do what he would do if they could emulate each other and choose the higher-resource decision together. But he has to do it a different way. 
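(A toy rendering of the rule the Proto proposed - all names and payoffs invented for illustration, with "emulation" reduced to both sides verifiably running the same selection procedure:)

```python
# "Emulate each other and then make the higher-resource decision",
# reduced to a toy: if each agent can verify the other runs this same
# procedure, both can commit to the jointly best action pair.

from itertools import product

ACTIONS = ("cooperate", "defect")

# (agent1_action, agent2_action) -> (agent1_resources, agent2_resources)
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def mutual_emulation_choice(payoffs):
    """Pick the action pair maximizing joint resources."""
    return max(product(ACTIONS, repeat=2),
               key=lambda pair: sum(payoffs[pair]))

print(mutual_emulation_choice(PAYOFFS))  # ('cooperate', 'cooperate')
```

Leareth's problem is that he can't run this: he can neither emulate the Proto nor be verifiably emulated by it, so he has to approximate the same outcome another way.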

(Does this thread of thought, one, get to finish in any of the Leareth-instances running it, and two, does it make sense to the Proto?) 

Permalink

It takes some tweaking and resetting but it picks up a thread with that explanation, eventually. 

 

The protomolecule thinks that small entities should get bigger, rather than try to cooperate while small. The correct action, on the observation you are too small for important decisions, is to accumulate resources. (It's the one kind of plan it can make with a meaningful time horizon.)

Permalink

Leareth thinks the Proto is ABSOLUTELY RIGHT and he's been aiming at that strategy for a long, long time.

But most of the algorithms that are vaguely shaped like Leareth aren't doing that. They're - very very very different. They're aliens. 

And - Leareth thinks this is really important - Shavri is a kind of algorithm that, in general, when there's uncertainty, when she's confused, when she doesn't understand the agent on the opposite side of the game from her - she's a pattern that tries, just for herself and her own decisions, to choose the side of the payoff matrix where the other entity gets more resources. Even if sometimes it leaves her worse off. Because she - thinks it's good. Because part of what her algorithm is, is - a representation saying that other algorithms matter and she wants them to have resources and achieve their goals. 

Not always. Not to stupid extremes; not if it would predictably result in her destruction, and not if the payoff matrix isn't shaped such that there's a possible world with cooperation. But in general. She's small and so she needs to run an approximation and that's the one she chose. 
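(The same toy framework, sketching the approximation Leareth is ascribing to her - again, every name, number, and threshold here is invented:)

```python
# Shavri's heuristic as described: default to cooperation under
# uncertainty about the other agent, with two carve-outs. PAYOFFS maps
# (her_action, other_action) -> (her_resources, other_resources),
# shaped like the matrix in the earlier sketch.

PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def shavri_policy(payoffs, p_other_cooperates, destruction_level=-10):
    """Choose an action given only a vague guess at the other agent."""
    # Her expected resources if she cooperates, under her uncertain model.
    expected = (
        p_other_cooperates * payoffs[("cooperate", "cooperate")][0]
        + (1 - p_other_cooperates) * payoffs[("cooperate", "defect")][0]
    )
    # Carve-out 1: cooperation that would predictably destroy her.
    if expected <= destruction_level:
        return "defect"
    # Carve-out 2: a matrix with no possible world where cooperation
    # beats mutual defection for both sides.
    dd = payoffs[("defect", "defect")]
    if not any(r1 > dd[0] and r2 > dd[1] for (r1, r2) in payoffs.values()):
        return "defect"
    # Default: the side of the matrix where the other entity gets more,
    # even when it sometimes leaves her worse off.
    return "cooperate"

print(shavri_policy(PAYOFFS, p_other_cooperates=0.3))  # cooperate
```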

Permalink

 

 

 

 

 

.....recommendation about OUT OF CONTEXT DANGER is instantiate an instance of the leareth-algorithm inside the shavri-algorithm so the enhanced shavri-algorithm can develop a long-time-horizon plan for OUT OF CONTEXT DANGER?

Permalink

–recommendation is that the Proto PLEASE let him FINISH HIS THOUGHTS aaaaaaaaaaaaaaah. 

(Many of the Leareths manage to trip the safeguards via momentary panic not suppressed in time, but he's getting better at that.) 

 

 

That's...a good kernel of a plan but it needs more work? There isn't really a way to do that to outside algorithms, not the kind that Shavri is, without - being very destructive. But maybe the Proto can use its communication channel to let the Leareth-algorithm it's running internally talk to the Shavri algorithm, and that's a little bit like the same thing? The kind of algorithm that they are has a lot of practice running (very small) emulations of other algorithms via only the information that can be transmitted by talking.

Is that a plan the Proto has the resources to carry out? Leareth knows it re-engineered Mindspeech but not what its range is, and the Moon is far away from Velgarth, and it can't Gate Shavri there; she's made of stable replicators that need air and a certain temperature range and various other environmental criteria in order for her hardware to keep working and running the Shavri!algorithm.

....Also Leareth has a thought he wants to try to unpack and convey, that isn't directly about the plan for the OUT OF CONTEXT DANGER but it's...pretty related to why there's a danger? In a roundabout long-term way that the Proto might not understand, or might need to think about a LOT to grasp? But he thinks it's important and he would like it if the Proto would please let him finish it.

Permalink

The Proto cannot decide to let him finish it because it doesn't have that kind of time horizon for decision-making, but it is not at this moment interrupting him.
