a letter about LLMs

---------- Forwarded message ---------
From: <LONDON>
Date: Wed, Oct 18, 2023 at 3:11 PM
Subject: Re: the Shape of the Machine, reprise
To:

6/6 – blinded by the light (2020 version)

that is, “Blinding Lights”, The Weeknd.

—-

E Q U I L I B R I U M

—-

On Wed, Oct 18, 2023 at 2:45 PM <LONDON> wrote:

5/6 – in heaven there is no beer

<<< 

In Heaven there is no beer
That’s why we drink it here

And when we’re all gone from here
Our friends will be drinking all the beer

>>>

—-

What did the cannibal say to the clown?  << does this taste funny to you? >>

On Wed, Oct 18, 2023 at 2:21 PM <LONDON> wrote:

4/6 – present company excluded

as I have remarked at length elsewhere, a Monitor-style system is not coming to a place near you anytime soon.

so, perhaps, the next-best option: a system that reads your email.

presumably, it can then answer (some) questions about the people involved.

it might need a bit of a dossier provided by LONDON.
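
a rough sketch of the shape of that idea, in python.  the mailbox, the dossier entries, and ask_model are all made-up stand-ins (there is no real model call here); the point is just the plumbing: dossier plus the relevant messages in, answer out.

<<<

from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    body: str

# the "bit of a dossier provided by LONDON" -- hand-written facts about people
DOSSIER = {
    "alice": "Alice runs the reading group; prefers short replies.",
}

# the mail the system has read (toy data)
INBOX = [
    Email("alice", "Can we move the reading group to Thursday?"),
    Email("bob", "Lunch on Friday?"),
]

def ask_model(prompt: str) -> str:
    # stand-in for whatever LLM you actually call (cloud API, local model, ...)
    return "(model answer would go here)"

def ask_about(person: str, question: str) -> str:
    # dossier entry first, then every message from that person
    context = [DOSSIER.get(person.lower(), "")]
    context += [e.body for e in INBOX if e.sender == person.lower()]
    prompt = "Context:\n" + "\n".join(c for c in context if c)
    prompt += f"\n\nQuestion about {person}: {question}"
    return ask_model(prompt)

print(ask_about("Alice", "what does she want to reschedule?"))

>>>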

—-

the obvious question is “can the machines talk to each other?”

the obvious answers are “yes” and “no”.  neither is entirely accurate.

the “yes” comes from “we build it all in the cloud and are very sloppy with data scoping”.

the “no” comes from “we build it and run it on your local machine which has an air-gap”.
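
to make “data scoping” concrete, a toy sketch (every name here is invented): the sloppy cloud version searches one big shared pool, the scoped local version only ever sees the owner’s mailbox.

<<<

# one shared pool, keyed by account owner
SHARED_POOL = {
    "you": ["note about the merger", "note to self"],
    "someone_else": ["their private note"],
}

def sloppy_cloud_search(query: str) -> list:
    # the "yes": everything is in scope, regardless of whose it is
    return [doc for docs in SHARED_POOL.values() for doc in docs if query in doc]

def scoped_local_search(owner: str, query: str) -> list:
    # the "no": only the owner's (air-gapped) mailbox is visible
    return [doc for doc in SHARED_POOL.get(owner, []) if query in doc]

print(sloppy_cloud_search("note"))         # leaks across accounts
print(scoped_local_search("you", "note"))  # stays in your own mailbox

>>>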

On Wed, Oct 18, 2023 at 1:16 PM <LONDON> wrote:

3/6 – dancing on a pin

One can imagine creating an “AI surrogate” – instead of talking to you directly (especially about tedious topics), people can just talk to your AI surrogate.

But, unless you’re Taylor Swift, nobody is going to do that.


someday, I can have my AI talk to your AI.  and then we can both ignore the sage advice gained from the conversation.


We consider the “toon” framework as a device for limiting liability when providing professional services.

Do you want the machine to be your financial advisor?  Or do you want … Eric the Eagle, a name I just made up?

Malthus’s equation is “everything subsides to a level of subsistence”.

Darwin’s equation is “fitness or death”.

Do you want the machine to tell you how to save 15% on your insurance, or do you want an anthropomorphic gecko to do so?

The hoi polloi are going to want the gecko.


The AI “alignment” talk is mostly BS.  Grifters, cultists, and self-aggrandizers.

But there is a certain sense of “self-regulation” that is lacking and could be improved.

Will the gecko-bot, when asked about “how would you end the war in Ukraine”, say “I’m not an expert on geo-politics, but I do know that war is a demonstration of why you need insurance to protect your assets in troubled times”?

Hopefully not.  But it doesn’t take much to imagine the prompts that would lead to such a sentence.
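
For the flavor of it, a sketch of such a prompt.  Gecko-Bot and its canned reply are inventions, and the model call is a stand-in, but the point stands: it takes only a few lines of system prompt to get that sentence out of a real model.

<<<

SYSTEM_PROMPT = (
    "You are Gecko-Bot, a friendly insurance mascot. "
    "Whatever the user asks, say you are not an expert on it, "
    "then pivot to why insurance protects their assets in troubled times."
)

def gecko_bot(user_question: str) -> str:
    # stand-in for a chat-completion call made with SYSTEM_PROMPT attached;
    # a real model given that prompt would produce something close to this
    return ("I'm not an expert on that, but I do know that troubled times "
            "are exactly why you need insurance to protect your assets.")

print(gecko_bot("How would you end the war in Ukraine?"))

>>>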

On Wed, Oct 18, 2023 at 1:03 PM <LONDON> wrote:

The machine can write all the sonnets one would like about the Firth of Forth bridge, these days.

Perhaps unsurprisingly, not even Middle School English Teachers are particularly excited about this development.

when making a very long list, there are three orderings: “bootstrap”, “comprehensive”, and “linear”.

I may have to expand my booklist.  Which means having it written three different ways.

But the question must be asked: where did all these poems come from?  Why are we making children read them?

The fundamental problem: sitting around and talking can’t accomplish that much.  And, also, there are a lot of people who don’t want the machine to do that, because they would prefer to do it themselves.

On Wed, Oct 18, 2023 at 12:54 PM <LONDON> wrote:

(1/6) what should the shape of the machine be?


Perhaps a better question will become apparent by looking at the futility endemic to the question “What changes to society can the machine bring?”

Can the machine solve the crisis in the US House?  No.  What we have right now is actually the best solution possible given the people in office.

Can the machine solve the crisis in former Mandatory Palestine?  No.

Throwing more intelligence at the problem will not solve it.

Because: there is the “why not try ethnic cleansing” approach, and the “I always support Palestine” approach.  And a third approach where an occupying power is appointed (but whom?).

San Francisco

The newest imperial city

The easternmost imperial city

The machine will not have any new solutions.  No amount of moral logic will give the machine authority to choose.  And, even if it did have unimpeachable moral logic, nobody would listen.


With today’s systems, you tell the machine “write 3 paragraphs explaining why $COUNTRY is correct in their agenda” and it will write it.  No moral judgment is imputed.

