---------- Forwarded message ---------
From: <LONDON>
Date: Wed, Oct 18, 2023 at 3:11 PM
Subject: Re: the Shape of the Machine, reprise
To:
6/6 – blinded by the light (2020 version)
that is, “Blinding Lights”, The Weeknd.
—-
EE QQ UU II LL II BB RR II UU MM
—-
<xantham> a man picks up an envelope. inside is a card marked “import-export detector”. and a second card, more valuable. labeled …
On Wed, Oct 18, 2023 at 2:45 PM <LONDON> wrote:
5/6 – in heaven there is no beer
<green> not sure if that is one-dimensional ( μm ) or three-dimensional (CIE L*a*b*) color.
<<<
In Heaven there is no beer
That’s why we drink it here
And when we’re all gone from here
Our friends will be drinking all the beer
>>>
—-
What did the cannibal say to the clown? << does this taste funny to you? >>
On Wed, Oct 18, 2023 at 2:21 PM <LONDON> wrote:
4/6 – present company excluded
as I have remarked at length elsewhere, a Monitor-style system is not coming to a place near you anytime soon.
so, perhaps, the next-best option: a system that reads your email.
presumably, it can then answer (some) questions about the people involved.
it might need a bit of a dossier provided by LONDON.
<green> LONDON here means “the user; except there are two different second-persons here so << you >> is vague”
—-
the obvious question is “can the machine talk to each other?”
<green> for the purposes of grammar, machine is a collective noun.
the obvious answers are “yes” and “no”. neither is entirely accurate.
the “yes” comes from “we build it all in the cloud and are very sloppy with data scoping”.
the “no” comes from “we build it and run it on your local machine which has an air-gap”.
On Wed, Oct 18, 2023 at 1:16 PM <LONDON> wrote:
3/6 – dancing on a pin
One can imagine creating an “AI surrogate” – instead of talking to you directly (especially about tedious topics), people can just talk to your AI surrogate.
<red> the naming is problematic. “surrogate” has overtones. but so does “doppelganger”. and “digital twin”.
But, unless you’re Taylor Swift, nobody is going to do that.
<red> Will people do it for Swift? Probably not.
<xantham> At least not person-Swift. Possibly for toon-Swift.
someday, I can have my AI talk to your AI. and then we can both ignore the sage advice gained from the conversation.
We consider the “toon” framing in the context of limiting liability for providing professional services.
Do you want the machine to be your financial advisor? Or do you want … Eric the Eagle, a name I just made up?
<red> no animals code as “wealthy”. merely “associated with wealthy people”. darwin and malthus do not admit wealth in their equations. what would a squirrel do with gold, or pronouncements from the surveyor’s department? and as for comestibles: there is no permanent wealth — merely that which one can fight to keep.
<blue> there is no specific extant work referred to with the phrase “darwin and malthus do not admit wealth in their equations”.
Malthus’s equation is “everything subsides to a level of subsistence”.
Darwin’s equation is “fitness or death”.
Do you want the machine to tell you how to save 15% on your insurance, or do you want an anthropomorphic gecko to do so?
The hoi polloi are going to want the gecko.
The AI “alignment” talk is mostly BS. Grifters, cultists, and self-aggrandizers.
But there is a certain sense of “self-regulation” that is lacking and could be improved.
Will the gecko-bot, when asked “how would you end the war in Ukraine”, say “I’m not an expert on geo-politics, but I do know that war is a demonstration of why you need insurance to protect your assets in troubled times”?
Hopefully not. But it doesn’t take much to imagine the prompts that would lead to such a sentence.
<mogue> and but also, “being slightly rude or impolitic” is a far cry from “destroys the world on a whim”. like, “drawing a picture of the sun” to “eating the sun” distance.
On Wed, Oct 18, 2023 at 1:03 PM <LONDON> wrote:
The machine can write all the sonnets one would like about the Firth of Forth bridge, these days.
Perhaps unsurprisingly, not even Middle School English Teachers are particularly excited about this development.
when making a very long list, there are three orderings: “bootstrap”, “comprehensive”, and “linear”.
I may have to expand my booklist. Which means having it written three different ways.
<xantham> like a cat!
But the question must be asked: where did all these poems come from? Why are we making children read them?
The fundamental problem: sitting around and talking can’t accomplish that much. And, also, there are a lot of people who don’t want the machine to do that, because they would prefer to do it themselves.
On Wed, Oct 18, 2023 at 12:54 PM <LONDON> wrote:
1/6 – what should the shape of the machine be?
Perhaps a better question will become apparent by looking at the futility endemic to the question “What changes to society can the machine bring?”
Can the machine solve the crisis in the US House? No. What we have right now actually is the best solution possible with the people in office.
<blue> this is not to suggest or endorse that stasis is the solution. much like a kidney stone or a bowel movement, there is a month of mild tumult that must be passed.
Can the machine solve the crisis in former Mandatory Palestine? No.
Throwing more intelligence at the problem will not solve it.
<mogue> knowing is a dangerous thing here. the truth is the first casualty of war. if the machine knows too much, it will be even less trusted.
Because: there is the “why not try ethnic cleansing” approach, and the “I always support Palestine” approach. And a third approach where an occupying power is appointed (but whom?).
<xantham> if only Saudi Arabia had the ability to build a new city nearby …
<red> at some point, I will have enough distance to write the essay on the three types of people on social networks: your personal friends, celebrities, and “tech celebrities”. of course: people out here on the prairie don’t know or care about “tech celebrities”. but in 旧金山, their existence makes it easy to convolute all three.
旧金山
最新的帝城
最东的帝城
旧金山: San Francisco
最新的帝城: The newest imperial city
最东的帝城: The easternmost imperial city
The machine will not have any new solutions. No amount of moral logic will give the machine authority to choose. And, even if it did have unimpeachable moral logic, nobody would listen.
With today’s systems, you tell the machine “write 3 paragraphs explaining why $COUNTRY is correct in their agenda” and it will write it. No moral judgment is imputed.
<red> and, as I say repeatedly: don’t call them bots, call them toons. Nobody looks to Mickey Mouse as an all-powerful entity.