<?xml version="1.0"?>
<?xml-stylesheet href="/transform" type="text/xsl"?>
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:bibo="http://purl.org/ontology/bibo/" xmlns:bs="http://purl.org/ontology/bibo/status/" xmlns:ci="https://vocab.methodandstructure.com/content-inventory#" xmlns:dct="http://purl.org/dc/terms/" xmlns:foaf="http://xmlns.com/foaf/0.1/" xmlns:http="http://www.w3.org/2011/http#" xmlns:og="https://ogp.me/ns#" xmlns:qb="http://purl.org/linked-data/cube#" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:schema="https://schema.org/" xmlns:xhv="http://www.w3.org/1999/xhtml/vocab#" xmlns:xsd="http://www.w3.org/2001/XMLSchema#" lang="en" prefix="bibo: http://purl.org/ontology/bibo/ bs: http://purl.org/ontology/bibo/status/ ci: https://vocab.methodandstructure.com/content-inventory# dct: http://purl.org/dc/terms/ foaf: http://xmlns.com/foaf/0.1/ http: http://www.w3.org/2011/http# og: https://ogp.me/ns# qb: http://purl.org/linked-data/cube# rdf: http://www.w3.org/1999/02/22-rdf-syntax-ns# schema: https://schema.org/ xhv: http://www.w3.org/1999/xhtml/vocab# xsd: http://www.w3.org/2001/XMLSchema#" vocab="http://www.w3.org/1999/xhtml/vocab#" xml:lang="en">
  <head>
    <title property="dct:title">P(Dumb)</title>
    <base href="https://doriantaylor.com/p-dumb"/>
    <script rel="dct:requires" src="asset/form-utilities"></script>
    <link href="document-stats#EFq2dhr8EJcb3bW5jXKqIL" rev="ci:document"/>
    <link href="elsewhere" rel="alternate bookmark" title="Elsewhere"/>
    <link href="this-site" rel="alternate index" title="This Site"/>
    <link href="http://purl.org/ontology/bibo/status/published" rel="bibo:status"/>
    <link href="" rel="ci:canonical" title="P(Dumb)"/>
    <link href="person/dorian-taylor#me" rel="dct:creator" title="Dorian Taylor"/>
    <link href="file/altman-hurr" rel="dct:hasPart foaf:depiction og:image schema:image"/>
    <link href="person/dorian-taylor" rel="meta" title="Who I Am"/>
    <link about="./" href="3f36c30c-6096-454a-8a22-c062100ae41f" rel="alternate" type="application/atom+xml"/>
    <link about="./" href="this-site" rel="alternate"/>
    <link about="./" href="elsewhere" rel="alternate"/>
    <link about="./" href="e341ca62-0387-4cea-b69a-cdabc7656871" rel="alternate" type="application/atom+xml"/>
    <link about="./" href="f07f5044-01bc-472d-9079-9b07771b731c" rel="alternate" type="application/atom+xml"/>
    <link about="verso/" href="3f36c30c-6096-454a-8a22-c062100ae41f" rel="alternate" type="application/atom+xml"/>
    <link about="verso/" href="this-site" rel="alternate"/>
    <link about="verso/" href="elsewhere" rel="alternate"/>
    <meta content="A sequence of very large, conspicuous advances in computing hardware technology have to happen before it makes sense to assign AI doomsday scenarios a probability greater than zero. People who make law and policy, and the people who have their attention, really need to smarten up about &#x201C;artificial intelligence&#x201D;." name="description" property="dct:abstract"/>
    <meta content="2024-08-11T14:52:13+00:00" datatype="xsd:dateTime" property="dct:created"/>
    <meta content="2024-08-12T16:46:59+00:00" datatype="xsd:dateTime" property="dct:modified"/>
    <meta content="2024-09-15T21:21:11+00:00" datatype="xsd:dateTime" property="dct:modified"/>
    <meta about="person/dorian-taylor#me" content="Dorian Taylor" name="author" property="foaf:name"/>
    <meta content="summary_large_image" name="twitter:card"/>
    <meta content="@doriantaylor" name="twitter:site"/>
    <meta content="P(Dumb)" name="twitter:title"/>
    <meta content="A sequence of very large, conspicuous advances in computing hardware technology have to happen before it makes sense to assign AI doomsday scenarios a probability greater than zero. People who make law and policy, and the people who have their attention, really need to smarten up about &#x201C;artificial intelligence&#x201D;." name="twitter:description"/>
    <meta content="https://doriantaylor.com/file/altman-hurr" name="twitter:image"/>
    <object>
      <nav>
        <ul>
          <li>
            <a href="//buttondown.email/dorian/archive/in-the-pipe-five-by-five/" rev="dct:references" typeof="bibo:Article">
              <span property="dct:title">In the Pipe, Five By Five</span>
            </a>
          </li>
          <li>
            <a href="document-stats#EFq2dhr8EJcb3bW5jXKqIL" rev="ci:document" typeof="qb:Observation">
              <span>urn:uuid:16ad9d86-bf04-425c-b6f7-6d6e635caa88</span>
            </a>
          </li>
        </ul>
      </nav>
    </object>
  </head>
  <body about="" id="E5k5Fpn8jZ5m9qI6Yesp-J" typeof="bibo:Article">
    <p>I generally consider <a href="https://www.lawfaremedia.org/">Lawfare</a> to be a solid publication, and they <a href="https://www.lawfaremedia.org/podcasts-multimedia/podcast">put out enough podcasts</a> that your commutes, however long those may be, never want for content. Recently, however, they have been hosting discussions with legal scholars, policy advisors, and other tank-thinkers&#x2014;putatively serious people&#x2014;who have been ostensibly lining up to demonstrate for the audience, in excruciating detail, their <em>entire asses</em>, on and around the topic of artificial intelligence.</p>
    <section>
      <h2>And Now I Promptly Digress</h2>
      <p>The single most useful source on the subject of computing I have ever read is a short book by <a href="https://en.wikipedia.org/wiki/Danny_Hillis">Danny Hillis</a> called <a href="https://www.amazon.com/dp/0465066933?tag=doriantaylor-20">The Pattern on the Stone</a>, the first edition of which he published in <time>1998</time>. Through this book he furnished me with the back half of a conceptual model for what computers are genuinely good at.</p>
      <ul>
	<li>A <a href="lexicon/#E4saIWooN0hfWkyoxwZsGI" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><dfn>process</dfn></a>&#x2014;this is my definition&#x2014;is any sequence of events linked together by a chain of causation. It also helps if it has boundaries in space and time, so you can describe it, analyze it, and compare it to other processes.</li>
	<li>A <dfn>procedure</dfn> is just a process that has been described with enough detail to be repeatable. Procedures also tend to have bills of materials and equipment attached to them, like a recipe. Follow the recipe, and, barring exogenous interference, you should be able to get more or less the same result every time. Furthermore, if you can write down a procedure really, <em>really</em> precisely, you can execute it by machine.</li>
	<li>An <dfn>algorithm</dfn>&#x2014;this is Hillis now&#x2014;is a procedure that implements a mathematical function. Now, in mathematics, a function is <em>not</em> a procedure&#x2014;rather it is a <em>relation</em> that maps every element in one set to exactly one counterpart in another set. But if you don't <em>know</em> the counterpart, you have to find it out somehow. Computation&#x2014;which precedes computers by millennia&#x2014;means applying a chosen algorithm&#x2014;which likewise predates computers by millennia&#x2014;to a known element (in the domain), to get its counterpart (in the range). <a href="https://en.wikipedia.org/wiki/Computational_complexity">Algorithms are amenable to formal analysis</a>, which is useful for accurately gauging their resource consumption, but more importantly, an algorithm is <em>guaranteed</em> to <em>always</em> produce the same result for a given input.</li>
	<li>A <dfn>heuristic</dfn> is <em>technically</em> a kind of algorithm, but we can think of it as one that answers a different question than the one that was asked. Informally, a heuristic is a rule of thumb that helps you quickly make decisions when you don't have all the relevant information, with the trade-off that you might sometimes be wrong. The same holds for formal heuristics. Many practical problems are <em>so</em> costly to solve exhaustively that we opt for speed over the algorithmic guarantee. What this means is that not only can we not know if the answer is optimal, we may get <em>different</em> answers from one run to the next. (There is a little sketch of this trade-off just after this list.)</li>
	<li><a href="lexicon/#EBGvNNXPxQuSMU9FN-pm3K" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><dfn>Machine learning</dfn></a>&#x2014;Hillis said <dfn>neural nets</dfn> which, <a href="https://arxiv.org/abs/1706.03762">strictly speaking</a>, is only one species thereof, but anyway this is me talking now&#x2014;is what we now mean when we say <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><q><abbr title="artificial intelligence">AI</abbr></q></a>. Machine learning can be understood as a kind of heuristic, where one of the inputs is the thing you want to ask about, and another is a big honking matrix of numbers. The matrix represents a distillation of a (potentially enormous) number of examples known as <dfn>training data</dfn>. The function answers a question like <q>given what is known (represented by the matrix), what is the result most like the new thing (the other input)?</q> &#x2014;ultimately a linear algebra problem.</li>
      </ul>
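      <p>To make the algorithm-versus-heuristic distinction concrete, here is a minimal sketch of my own devising, with weights, values, and a capacity invented purely for illustration: the same knapsack question, answered two ways.</p>
      <pre><code>// Same question, two answers. The exhaustive algorithm is *guaranteed*
// optimal; the greedy heuristic trades that guarantee for speed.
const items = [ { w: 6, v: 7 }, { w: 5, v: 5 }, { w: 5, v: 5 } ];
const CAP = 10;

// Algorithm: try every one of the 2^n subsets; always the right answer.
function exhaustive(items, cap) {
  let best = 0;
  for (let mask = 0; mask &lt; (1 &lt;&lt; items.length); mask++) {
    let w = 0, v = 0;
    items.forEach((it, i) =&gt; { if (mask &amp; (1 &lt;&lt; i)) { w += it.w; v += it.v; } });
    if (w &lt;= cap &amp;&amp; v &gt; best) best = v;
  }
  return best;
}

// Heuristic: grab items by value density until full; fast, sometimes wrong.
function greedy(items, cap) {
  let v = 0, w = 0;
  for (const it of [...items].sort((a, b) =&gt; b.v / b.w - a.v / a.w)) {
    if (w + it.w &lt;= cap) { w += it.w; v += it.v; }
  }
  return v;
}

console.log(exhaustive(items, CAP)); // 10 (the two five-pound items)
console.log(greedy(items, CAP));     // 7  (the dense item crowds out the rest)</code></pre>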
      <p>Framing machine learning as a special case of heuristic is useful, because of its inherently statistical nature. Inferences drawn from these systems will ineluctably sometimes be wrong. The perfect, deterministic equivalent would entail explicitly programming a case to handle <em>every</em> possible input, including input we had never seen before. As such, we trade off being wrong sometimes in return for being spared an incalculably huge effort in the setup. Every output of a machine-learning heuristic is therefore contingent on whatever happened to be in its training data. The kind of question any machine-learning heuristic poses is invariably something like, <q>given what you've previously <em>seen</em>, what is the closest thing to the new input <var>X</var>?</q> This means that if <q>what you've seen</q> ever changes in any significant way, so will the ultimate answer.</p>
      <aside role="note">
	<p>Strangely enough, <a href="https://en.wikipedia.org/wiki/GOFAI">the original conceptualization of <abbr>AI</abbr></a> was purely deterministic, and statistical methods like machine learning were not considered <q>real <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a></q>. Now, it's the deterministic methods that are considered na&#xEF;ve and out of fashion.</p>
      </aside>
      <section>
	<h3>Perhaps an Example is in Order</h3>
	<p><span class="parenthesis" title="ha">Old-school</span> machine-learning systems are employed to turn fluffy things into crisp things&#x2014;<a href="https://en.wikipedia.org/wiki/Bouba/kiki_effect">Boubas into Kikis</a>. The caveat, should you choose to acknowledge it, was that the system might occasionally pick the wrong Kiki for a given Bouba.</p>
	<figure>
	  <img src="file/bouba-kiki" alt="The Bouba-Kiki effect"/>
	  <figcaption>
	    <p>A specimen suitable for a Bouba-Kiki experiment: Which one of these shapes is called Kiki, and which one is Bouba?</p>
	  </figcaption>
	</figure>
	<p>One way to think about this situation is to consider how a bank might process handwritten cheques&#x2014;an archaic system for transferring money that I want to underscore I believe is nevertheless perfectly ordinary and fine&#x2014;and will no doubt be with us until money itself is somehow abolished. A cheque has a handful of free-form <em>handwritten</em> text fields for which it sure as heck would be useful to automate the process of reading.</p>
	<p>For starters, a cheque already <em>has</em> a bunch of machine-readable data relating to its origin that has been na&#xEF;vely <abbr title="optical character recognition">OCR</abbr>-able for decades. Second, a cheque requires the issuer to write the amount twice: first as a number, corroborated by a convention that spells out the same amount in text. Finally, the recipient written on the cheque will in most (but not all!) cases be the holder of the bank account in which it is to be deposited, and has to input the amount, providing two important points of context.</p>
	<p>The <em>contours</em> of how a system like this can fail&#x2014;assuming there's enough money in the account&#x2014;are on the order of:</p>
	<ul>
	  <li>One or more fields is garbage, can't make any sense of it at all,</li>
	  <li>Can't reconcile numeric representation of amount with textual,</li>
	  <li>Can't reconcile recipient identity with depositor account,</li>
	  <li>Can't corroborate issuer signature with that which is on record.</li>
	</ul>
	<p>The remedy for any of these failure modes is always to kick the problem to a human delegate who will make their own judgment. It is clear that what <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a> <em>does</em> in this case is to vastly boost the number of cheques that can be handled without the intervention of a human overseer. It takes the Bouba piece of paper with ink scribbles on it, and turns it into a structured, <a href="lexicon/#EFwn_tbvuDPbDAw9H_xvnI" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><dfn>machine-actionable</dfn></a> digital message which can be operated over with Kiki business logic. The most consequential failure mode&#x2014;that both the text interpreter and the numeric interpreter converge on the same value that happens to be wrong, and the cheque gets processed automatically for the wrong amount&#x2014;is vanishingly unlikely. Even if that <em>does</em> happen, it's still not the end of the world. At least for the bank.</p>
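	<p>A hypothetical sketch of the Kiki side of that pipeline follows. The field names, shapes, and the 0.9 confidence cutoff are all mine, invented for illustration: the point is only that anything which fails to reconcile gets kicked to a human.</p>
	<pre><code>// Decide what to do with one cheque, given what the recognizers produced.
function disposition(cheque) {
  const { numericAmount, textualAmount, payee, account, signatureMatch } = cheque;
  // each OCR field carries its best guess plus a confidence score
  if ([numericAmount, textualAmount, payee].some(f =&gt; f.confidence &lt; 0.9))
    return 'human review';                 // a field is garbage
  if (numericAmount.value !== textualAmount.value)
    return 'human review';                 // amounts don't reconcile
  if (!account.holders.includes(payee.value))
    return 'human review';                 // recipient/depositor mismatch
  if (!signatureMatch)
    return 'human review';                 // can't corroborate signature
  return 'process automatically';          // the overwhelmingly common case
}

console.log(disposition({
  numericAmount:  { value: 125.00, confidence: 0.98 },
  textualAmount:  { value: 125.00, confidence: 0.95 },
  payee:          { value: 'A. Jones', confidence: 0.97 },
  account:        { holders: ['A. Jones'] },
  signatureMatch: true,
})); // 'process automatically'</code></pre>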
	<p>This is <em>exactly</em> the kind of <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a> that is genuinely <em>useful</em>, and it gets asymptotically better with more training data and bigger models to represent it. Even still, there's a point past which the extra effort (the training) and its outcome (the model) aren't worth it, and a model at that point still very likely fits and runs comfortably <span class="parenthesis" title="although in production it would be running on a server">on an ordinary commodity laptop</span>. Not so for ChatGPT.</p>
      </section>
    </section>
    <section>
      <h2>What Would Have To Happen In Order For That To Happen</h2>
      <p>Back to lawyers embarrassing themselves. Go through the back catalogue of Lawfare podcasts from the last several months and <a href="https://www.lawfaremedia.org/topics/cybersecurity-tech">pick any one of them with <abbr>AI</abbr> in the title</a>. Prepare yourself to be treated to a deluge of breathless, palpitating misunderstandings of what the technology <em>is</em>, what it's capable of, what direction it's headed, and how fast. Hours and hours of content have been minted by highly-educated, prestigiously-credentialed people, consternating about the policy implications of <span class="parenthesis" title="Eliezer Yudkowsky's, actually, but Sam Altman is really running with it">Sam Altman's speculative fan fiction</span>, without stopping to consider the events that would have to occur <em>first</em>, for any of it to come to pass.</p>
      <aside role="note">
	<p>Not <em>every</em> <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a>-themed podcast episode on Lawfare is like this, but many of them are. There is <a href="https://www.lawfaremedia.org/article/lawfare-daily--ryan-calo-on-protecting-privacy-amid-advances-in-ai">a recent one featuring Ryan Calo</a> which is sensible.</p>
      </aside>
      <p>The narrative that artificial intelligence is rapidly accelerating toward <q><abbr title="artificial general intelligence">AGI</abbr></q> that will eventually outwit humanity's efforts to contain it has gone unchecked by one important segment of the population: the people who write the laws, and the people who whisper into the ears of those people. <em>What</em> they're whispering is stuff like <q><dfn>P(Doom)</dfn></q>: your personal confidence level (usually rendered as a percentage) that a rogue artificial intelligence&#x2014;<em>and not anything else</em>&#x2014;will annihilate humanity. A lot of things have to happen first for this to even be a <em>possibility</em>, let alone something you can assign a probability to.</p>
      <aside role="note">
	<p>This is like asking what your probability is of being run over by a car while sitting in your living room in your high-rise apartment, or being eaten by a shark if you live on the side of a mountain or in the middle of the desert. Your situation is going to have to change in noticeable ways before those risks will have probabilities greater than zero.</p>
      </aside>
      <p>Since it was Altman's spiel that got us here, let's pick on ChatGPT (or rather the underlying GPT-4), which is also easily the most famous. First of all, the <em>P</em> in <abbr title="generative pre-trained transformer">GPT</abbr> stands for <q>pre-trained</q>. <a href="https://klu.ai/blog/gpt-4-llm">Training took at least three solid months of compute</a> over <span class="parenthesis" title="gonna guess exactly 25,600, or 200 four-rack inference rigs, so at $3.2MM apiece that&#x2019;s 640 million dollars worth of hardware, before any discounts">25,000</span> or so repurposed high-end graphics cards, using the entire internet and all published text as input, and reportedly cost <span class="parenthesis" title="did it really though? I mean, OpenAI&#x2019;s equity deal with Microsoft was disbursed chiefly in Azure credits">a hundred million dollars</span>. The product of this training was a matrix (actually several matrices) containing a total of 1.7 trillion numbers, weighing in&#x2014;<span class="parenthesis" title="as in they&#x2019;re just using 32-bit single-precision floats">presumably</span>&#x2014;at around seven terabytes. In order to function at all, these seven terabytes (or at least most thereof, as I understand it) have to be available in <abbr title="random access memory">RAM</abbr> at all times. Since the graphics cards only (<q>only!</q>) have 80 gigabytes of <abbr>RAM</abbr> apiece, they have to be yoked together&#x2014;128 of them to be precise&#x2014;with an ultra-high-speed network. Such is the GPT-4 inferencing rig:</p>
      <ul>
	<li>Four <a href="https://en.wikipedia.org/wiki/19-inch_rack">standard server racks</a>,</li>
	<li>each containing four <a href="https://resources.nvidia.com/en-us-dgx-systems/dgx-ai?xs=489752">modules of eight NVidia A100 cards</a> apiece,</li>
	<li>which, put together, costs <span class="parenthesis" title="again, prior to whatever discounts are finagled">over three million dollars</span>,</li>
	<li>and draws about a hundred kilowatts of electricity,</li>
	<li>which is about as much as a typical suburban cul-de-sac at dinnertime.</li>
      </ul>
      <p>Because of the <abbr>RAM</abbr> situation, even if you were the only user of ChatGPT, you would still need this setup to run it.</p>
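      <p>The arithmetic, spelled out using the figures above (and my assumption of single-precision weights):</p>
      <pre><code>const params  = 1.7e12;                  // reported GPT-4 parameter count
const bytes   = params * 4;              // 32-bit floats, four bytes apiece
console.log(bytes / 1e12);               // 6.8 -- call it seven terabytes
const perCard = 80e9;                    // RAM on a single A100
console.log(Math.ceil(bytes / perCard)); // 85 cards minimum, just for weights
// ...which, once you add working space and parallelism overhead, rounds up
// to the 128-card rig: 16 modules x $199,000 ~= $3.2 million, at 104 kW.</code></pre>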
      <aside role="note">
	<p>The <em>T</em> in <abbr>GPT</abbr> stands for <a href="https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)">Transformer</a>, which is a machine-learning architecture that is especially amenable to massively parallel computing, and thus will take as much hardware&#x2014;and therefore dollars&#x2014;as you can throw at it. <a href="https://www.youtube.com/watch?v=wjZofJX0v4M">There is an excellent video explainer by Grant Sanderson</a> that goes over the transformer architecture. The <em>G</em> just stands for Generative, to contrast with <em>discriminative</em>, which is what the cheque-reader example is.</p>
	
	<p>The initial price of the eight-card <a href="https://en.wikipedia.org/wiki/Nvidia_DGX">DGX-A100 module was USD $199,000</a>, and it draws 6.5 kilowatts. Put four of those in a rack&#x2014;about the size of a fridge, and you'll need to space the modules apart because of the heat, plus there's other gear in there to contend with&#x2014;and multiply that times four to get the full 128-card, 3.2 million-dollar, 104-kilowatt inferencing array.</p>
      </aside>
      <p>So, for starters, in order to fit something as powerful as ChatGPT onto ordinary hardware you could buy in a store, you would need to see at least three more orders of magnitude in the density of <abbr>RAM</abbr> chips&#x2014;leaving completely aside for now the necessary vector compute. Call it ten <a href="https://en.wikipedia.org/wiki/Moore%27s_law">Moore doublings</a>, which, <span class="parenthesis" title="far from guaranteed">if everything stays on schedule</span>, should happen sometime around 2040. I mention this not for the purpose of imagining running one of these models at home <em>per se</em>, but because of a thing called the <dfn>context window</dfn>. This is like a <q>session state</q> that the model operates over, that acts like its <q>memory</q>: erase the context window and the model starts over fresh. The GPT-4 context window is 32,000 tokens, comparable to a feature-length New Yorker article. When you add stuff onto the end of a full context window, the older stuff at the front falls off into oblivion.</p>
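      <p>The back-of-the-envelope version, taking the conventional (and, again, far from guaranteed) eighteen months per doubling:</p>
      <pre><code>console.log(2 ** 10);               // 1024: ten doublings is ~three orders of magnitude
console.log(2024 + (10 * 18) / 12); // 2039 -- "sometime around 2040"</code></pre>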
      <aside role="note">
	<p><a href="https://llama.meta.com/">The brand new LLAMA-3.1 model</a> that Meta just released has a 128-kilotoken context window, so about the size of a typical novel. Tokens are just parts of words and other bits of text that are sliced up pretty arbitrarily (I still don't know how they determine where to slice), so the number of actual words will be some incremental amount less than the number of tokens.</p>
	<p><a href="https://www.techradar.com/pro/massive-1tb-ram-coming-soon-as-samsung-debuts-largest-memory-module-ever-but-it-wont-be-cheap">There <em>do</em> also exist one-terabyte <abbr>RAM</abbr> modules</a> at the time of this writing, but they cost about $15,000 apiece, and nobody makes a <abbr title="graphics processing unit">GPU</abbr> with that much <abbr>RAM</abbr> soldered to it yet. (A single maxed-out H100 card&#x2014;the next one up from the A100&#x2014;has 80 gigs as well, and it's <em>already</em> over $35,000.)</p>
      </aside>
      <p>Why I bring up the context window is because all these fantasies of crafting bioweapons, or writing malware that conscripts military drones or launches nukes, or just plain tricking humans into <q>letting it out</q> in the first place, depend on a scenario of <q>runaway self-improvement</q>. A model&#x2014;that seven-terabyte blob of matrices that costs three solid months and a hundred million dollars to mint&#x2014;does not self-improve at this stage, <span class="parenthesis" title="You're not overwriting that; are you kidding?">because it is read-only.</span> I submit that in order to get a genuinely self-improving <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a> model, you'd need a context window at least as big as the model itself, because how can it self-improve if it continually forgets what it was doing? Saying the model needs an arbitrarily-large context window is the same thing as everybody having their very own three-million-dollar, four-rack vector processor array with ten terabytes of <abbr>RAM</abbr> to muck with&#x2014;to say nothing of the 199 more identical rigs lined up next to it to do the actual training.</p>
      <aside id="EICUN6pR3cEV__ceDQWUOK" role="note">
	<p>To clarify a point that <a href="https://mastodon.social/@danstowell/112962526064534819">came up on social media</a> after publication: <span class="parenthesis" title="which, remember, is 7 terabytes and cost you $100M to create">You start with model <var>M</var></span> and context <var>C&#x2080;</var>, which is the system prompt. You then input something and it gets appended to <var>C&#x2080;</var>, making <var>C&#x2081;</var>, and that's fed through the model and the output is appended to <var>C&#x2081;</var> to get <var>C&#x2082;</var>, etc. If, at time step <var>t</var>, the context window <var>C&#x209C;&#x208B;&#x2081;</var> is full, then <var>n&#x209C;&#x208B;&#x2081;</var> tokens, referring to the <em>number</em> of prompt plus output tokens generated at <var>t-1</var>, are removed from the left side of <var>C&#x209C;&#x208B;&#x2081;</var>, and everything else is shifted to the left, and then the <span class="parenthesis" title="thinking right now the system prompt is probably actually fixed at the front of the context window and what gets shifted off is everything after that, but that detail doesn't really matter here">new tokens are appended to the right side</span> to make <var>C&#x209C;</var>. I was under the mistaken impression that what was happening was the (grossly oversimplified) <span class="parenthesis" title="which upon further reflection is actually crazy">matrix multiplication <var>C&#x209C;M</var></span>, when what is actually happening is something more like <var>MC&#x209C;</var>.</p>
	<p>The state of the <em>session</em> changes in both cases, so it <span class="parenthesis" title="at least in the short term">wouldn't make a perceptible difference to the <em>user</em></span>, but I was trying to use <q>context window</q> as shorthand for <q>a state change applied to the <em>model</em></q>, which is not exactly accurate. You would need a much bigger context window for the model to, say, <q>remember</q> your interactions from one day to the next, but a genuinely self-improving model would have to actually update <var>M</var>, and lest it randomly disgorge private information to other people, every user would need their own copy. <em>Both</em> propositions are insanely, <span class="parenthesis" title="in terms of raw computational load, not any imagined future advances in hardware">irreducibly</span> expensive, however, which was my point.</p>
	<p>And this is for the most <em>basic</em> possible conceptualization of <q>self-improvement</q>: training off live interactions. Getting to where the model can actually improve its own <em>architecture</em> is much, much farther off.</p>
      </aside>
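      <p>For the code-minded, a grossly oversimplified sketch of that loop, with token sequences as plain arrays and the model as an opaque, frozen function:</p>
      <pre><code>// The model M is read-only; the only thing that changes is the context C.
const WINDOW = 32000;                // GPT-4's context size, in tokens
let C = ['SYSTEM_PROMPT'];           // C_0

function step(M, promptTokens) {
  C = C.concat(promptTokens);        // append the prompt tokens...
  const output = M(C);               // ...run the frozen model over the context...
  C = C.concat(output);              // ...and append whatever comes out
  if (C.length &gt; WINDOW)             // full? the oldest tokens fall off the
    C = C.slice(C.length - WINDOW);  // left side into oblivion (the pinned
  return output;                     // system prompt detail is elided here)
}
// Note that nothing ever writes to M: actual self-improvement would mean
// updating M itself, i.e., a months-long, nine-figure training run.</code></pre>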
      <p>I have written before that I am actually sympathetic in principle to <a href="https://www.amazon.com/dp/0465018475?tag=doriantaylor-20">the idea that cognition more or less reduces to analogy at scale</a>, and that brains (or I should say nervous systems at large, but neither are strictly necessary) are a particular type of analogy machine: One neuron fires into some number of other neurons, which, through some hodgepodge of internal mechanisms, determine whether to fire into some subset of neurons to which each is connected downstream. Lather, rinse, repeat for a structure that will take in signals, match them up to a set of stored signals it has already perceived, and respond accordingly. Firing a signal downstream is equivalent, at this instant, to forwarding one <dfn>bit</dfn>&#x2014;<a href="https://en.wikipedia.org/wiki/Bit">as in binary digit</a>&#x2014;of <a href="lexicon/#EkuWL1kRsbrrL1fCYxXynL" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><dfn>representational state</dfn></a>. If you skate over how that bit gets chosen and instead use a numerical weight, you get a mathematical structure called a weighted <a href="https://en.wikipedia.org/wiki/Directed_graph">directed graph</a>, which exhibits a statistical behaviour when you compute over it. Every directed graph, furthermore, is <a href="lexicon/#Em8pCxJaekEM-rCOyTeZWJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><dfn>isomorphic</dfn></a> to a thing called an <a href="https://en.wikipedia.org/wiki/Adjacency_matrix">adjacency matrix</a>, which is fantastic, because that means you can do linear algebra to it (and quickly, with expensive repurposed graphics cards), and get some pretty impressive results.</p>
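      <p>A toy illustration of that isomorphism, with three <q>neurons</q> and weights I made up:</p>
      <pre><code>// A[i][j] holds the weight of the edge i -&gt; j
const edges = [ [0, 1, 0.7], [1, 2, 0.2], [0, 2, 0.5] ]; // (from, to, weight)
const n = 3;
const A = Array.from({ length: n }, () =&gt; new Array(n).fill(0));
for (const [from, to, w] of edges) A[from][to] = w;
console.log(A);
// [ [0, 0.7, 0.5],
//   [0, 0,   0.2],
//   [0, 0,   0  ] ]
// Propagating a signal through the graph is now a matrix-vector product:
// linear algebra, which is exactly what the repurposed graphics cards do fast.</code></pre>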
      <p>What the <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a> proponents say, then, is that all we need to do is make a big enough matrix and we'll get something as smart as a human, and if we make an even <em>bigger</em> matrix, we'll get something even <em>smarter</em> than a human. <em>Maybe</em>? But we're talking about a matrix that's 100 billion numbers a side, which, even if it used half-precision (16-bit) numbers, would mean a <span class="parenthesis" title="that's 20 billion terabytes, 20 trillion gigabytes"><strong>20-zettabyte</strong></span> matrix. In <abbr>RAM</abbr>. Luckily for these proponents, though, it would be mostly zeroes, because there are only (<q>only!</q>) on the order of 10,000 or so synapses per neuron. So you could probably compress that significantly, both for the purposes of storage and computation.</p>
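      <p>The arithmetic behind those numbers, along with the sparsity reprieve:</p>
      <pre><code>const neurons  = 100e9;                  // ~10^11, roughly a human brain
const dense    = neurons * neurons * 2;  // full fp16 matrix, 2 bytes per cell
console.log(dense / 1e21);               // 20 (zettabytes)
const synapses = 10000;                  // ~10^4 connections per neuron
const sparse   = neurons * synapses * 2; // keep only the nonzero weights
console.log(sparse / 1e12);              // 2,000 (terabytes): still absurd,
                                         // but ten million times smaller</code></pre>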
      <aside role="note">
	<p>When I say I'm sympathetic to the idea that cognition reduces to being able to manage analogies, which in turn reduces to managing <a href="lexicon/#EkuWL1kRsbrrL1fCYxXynL" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><dfn>representational state</dfn></a>, I need to qualify that. First, I'm saying <em>cognition</em>, not <em>consciousness</em>. I'm talking about the ability of an entity to assess its situation and respond in useful ways&#x2014;<em>not</em> necessarily having thoughts or feelings. I am actually fairly confident that cognition qua recognizing and responding is more or less a matter of classical computation, albeit in an extremely abstract sense. <em>However</em>, there is no reason to expect a 1:1 correspondence between neurons&#x2014;and/or synapses&#x2014;and quantity of <em>state</em>. Indeed, each individual neuron possesses structures that could represent <em>gobs</em> more state than simply counting synapses would suggest. There is, for instance, some new evidence that suggests that <a href="https://phys.org/news/2022-10-brains-quantum.html">neurons employ a <em>quantum</em> mechanism</a>. This is not that far-fetched: biological systems are notorious for conscripting any available processes that happen to be handy for survival and reproduction. There's even precedent: <a href="https://bigthink.com/hard-science/plants-quantum-mechanics/">photosynthesis in plants relies on quantum effects</a> to shepherd the energy of captured photons toward the reaction centres. So if it turns out, as people like <a href="https://en.wikipedia.org/wiki/Roger_Penrose">Penrose</a> assert, that the brain has a certain quantum <em>je-ne-sais-quoi</em>, then all bets for representing the totality of even the simplest neural state with conventional computing hardware are off.</p>
      </aside>
      <form id="oneoff">
	<p><span class="text-metric-ruler">This brings me to my second point</span>: a na&#xEF;ve matrix multiplication is on the order of <strong>O(n&#xB3;)</strong>, meaning multiplying pairs of individual numbers goes as the <em>cube</em> of the dimension of a (square) matrix. Here, see what I mean: <input class="two-em" name="columns" type="number" min="2" value="3"/> columns &#xD7; <output name="rows">3</output> rows (<output name="cells">9</output> cells) &#x2192; <output name="operations">27</output> multiplications. <span class="special-case" style="display: none">(Two-by-two matrices are a special case, only requiring seven multiplications. This is the basis for the <a href="https://en.wikipedia.org/wiki/Strassen_algorithm">Strassen algorithm</a>.)</span></p>
	<script type="text/javascript">
	  const formatNumber = function (n) {
	    let ns  = n.toString();
  	    let mod = ns.length % 3;
	    let out = ns.substring(0, mod);
	    ns = ns.substring(mod);
	    while (ns.length) {
	      let comma = out.length ? ",\u{200b}" : '';
	      out += comma + ns.substring(0, 3);
	      ns = ns.substring(3);
	    }
	    return out;
	  };

	  const recalc = function (e) {
	    let cols = parseInt(this.columns.value);
	    let min  = parseInt(this.columns.min);

	    if (cols &lt; min) {
	      cols = min;
	      this.columns.value = min;
	    }

	    this.rows.value = formatNumber(cols);
	    let cells = cols * cols;

	    this.cells.value = formatNumber(cells);

	    let ops = BigInt(cols) ** BigInt(3);

	    if (cols === 2) ops = 7; // special case

	    this.operations.value = formatNumber(ops);

	    let notice = this.querySelector('.special-case');
	    if (notice) notice.style.display = cols === 2 ? 'initial' : 'none';
	  };

	  const change = function (e) {
     	    this.form.recalculate();
  	    this.snug();
	  };

	  const extra = function () {
	    // this is dumb
  	    const width = FormUtils.measureText();
   	    this.charWidth = width;
	    this.paddingEms = 1.5;
	  };

	  const submit = function (e) {
	    e.preventDefault();
    	    e.stopPropagation();
	    return true;
	  };

	  const form = document.getElementById('oneoff');
	  form.addEventListener('submit', submit, true);

	  window.addEventListener('load', FormUtils.loadEvent('oneoff', recalc, change, extra), true);
	</script>
      </form>
      <p>Now, there are cheats that cut that exponent down significantly, especially if <a href="https://en.wikipedia.org/wiki/Sparse_matrix">the matrix is sparse</a> (full of zeroes), which these models often are (and you can apparently cheat even <em>more</em> by rounding small values down to zero to make even <em>more</em> zeroes), but the computational cost of operating over a matrix is always going to be superlinear relative to its size. What this <em>means</em> is that an <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a> model can always be made bigger <em>sooner</em> than any reasonable amount of money, hardware, or electricity can keep up with it.</p>
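      <p>One of those cheats, sketched minimally: store only the nonzero entries (this is the compressed-sparse-row layout), and a matrix-vector product costs time proportional to the number of nonzeroes rather than the full dimensions.</p>
      <pre><code>function csrMatVec({ values, colIdx, rowPtr }, x) {
  const y = new Array(rowPtr.length - 1).fill(0);
  for (let i = 0; i &lt; rowPtr.length - 1; i++)       // each row i...
    for (let k = rowPtr[i]; k &lt; rowPtr[i + 1]; k++) // ...touches only its
      y[i] += values[k] * x[colIdx[k]];             // own nonzero entries
  return y;
}

// the little 3x3 adjacency matrix from earlier, in CSR form:
const A = { values: [0.7, 0.5, 0.2], colIdx: [1, 2, 2], rowPtr: [0, 2, 3, 3] };
console.log(csrMatVec(A, [1, 1, 1])); // [1.2, 0.2, 0]</code></pre>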
      <aside role="note">
	<p>Presumably this is why <a href="https://www.cnbc.com/2024/02/09/openai-ceo-sam-altman-reportedly-seeking-trillions-of-dollars-for-ai-chip-project.html">Altman wants seven <em>trillion</em> dollars</a> of investment capital.</p>
      </aside>
      <p>Moore's law to the rescue? I'm not actually sure about that. Moore's law is more of an industrial performance target than a fact about the universe, and <a href="https://www.cs.utexas.edu/~fussell/courses/cs352h/papers/moore.pdf" type="application/pdf">its original formulation</a> considered <em>cost</em> as an essential part of it. So-called <a href="https://www.infoq.com/presentations/moore-law-expiring/"><q>economic Moore's law</q> has been over for a while</a>: the newest chips with the tiniest features are <em>more</em> expensive to produce than the ones that precede them. So I suspect that to get to intrinsically self-improving <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a> with enough pseudo-synapses to match a human's, we'd first need to see entirely new technologies for computing substrate&#x2014;optical processors, carbon nanotube memory&#x2014;that kind of thing.</p>
      <aside role="note">
	<p>The central problem, as I see it, is that <strong>the hardware can't get fast enough, fast enough</strong> to keep up with the demands of <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a> developers. Even if you try to wave the quantum magic wand over this problem, I still foresee a need for <em>much</em> denser technologies for doing ordinary computing as well, to get the necessary performance. There are also novel hardware strategies like <a href="https://tenstorrent.com/">Tenstorrent</a>, which use existing fabrication techniques to smear the computation out spatially. This could help, but it's too early to tell. Also, to do an end run around Team Well Actually&#x2122;: I understand that the introduction of the cutting-edge fabs feeds technique back into the penultimate ones and makes them cheaper. But the newest fabs <em>themselves</em> are more expensive.</p>
	
      </aside>
      <p><q>Oh, but what if we get the <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a> to design its own next-generation substrate?</q> Here's the problem with that: generative <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a> in its current form is little more than a bullshit generator. Remember? It's filled with Reddit and 4chan threads and has the working memory of a New Yorker article. It doesn't <em>understand</em> anything; it doesn't <em>think</em>. It can't innovate. It can't produce anything that wasn't once dreamed up by a living, breathing human. Those <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a> systems that do drug discovery or aid in materials science? Those aren't even the same species of thing. They're much closer to our humble cheque-reader than ChatGPT: discriminative systems with a narrow mission trained by <a href="https://en.wikipedia.org/wiki/Supervised_learning">supervised learning</a>. ChatGPT can't just spin one of these up on a whim, any more than it can successfully concoct a one-line <code>sed</code> formula.</p>
      <aside role="note">
	<p>The doomsday scenario that one of the Lawfarers keeps bringing up ad nauseam is that of the <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a> inventing a novel bioweapon. The idea is that because these biotech <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a> systems exist, it's trivial to just lump one into a GPT-like system. Fun fact: you don't need <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a> to create bioweapons! Security researcher <a href="https://www.bunniestudios.com/blog/2009/on-influenza-a/">Bunnie Huang wrote a blog post <time datetime="2009-05">15 years ago</time></a>, long before generative <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a> existed, that tells you how to make an extremely fatal novel flu virus in your garage, using equipment and services that were available at the time. That said, ChatGPT has almost certainly <q>read</q> it, not that it can do anything meaningful with that information.</p>
	<p>By similar logic, our friend also seems to believe that because <a href="https://github.com/features/copilot">CoPilot</a> exists, <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a> must be very good at designing software, because hey, it's <em>made</em> of software, right? So by extension, <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a> should also know all the tricks and vulnerabilities of software, and thus be a natural at hacking. Well, I regrettably wasted two hours the other day trying to get ChatGPT to produce a <code>sed</code> formula (which I considered to be a trivial task) before ultimately giving up and <a href="https://www.gnu.org/software/sed/manual/sed.html">reading the manual</a> so I could write the damn formula myself, which was precisely what I was avoiding by trying to use ChatGPT.</p>
      </aside>
      <p>And this is my final point: <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a> has no initiative. It doesn't <em>want</em> anything. It sits and waits for prompts, responds, and then waits some more. It doesn't get <em>ideas</em> or <em>motivations</em> to do things by itself. Maybe some enterprising engineer might rig something up? After all, even a humble thermostat senses its environment and responds in kind, and that doesn't even need a <em>computer</em>, let alone <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a>. To approximate anything like <q>initiative</q>, the <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a> would need a sensorimotor system: sensors, or something sensor-<em>like</em>, to detect information from the outside, and <q>motors</q> (not necessarily <em>literal</em> motors) to take some action in response. Furthermore, it would have to have some kind of internal representation of what <q>good</q> is.</p>
      <figure>
	<a href="https://www.flickr.com/photos/tantrum_dan/2525066151/"><img src="file/analogue-thermostat" alt="old American Standard thermostat with the cover off"/></a>
	<figcaption>
	  <p>A typical analogue thermostat uses a <a href="https://en.wikipedia.org/wiki/Bimetallic_strip">bimetallic coil</a> as a de facto thermometer, with a <a href="https://en.wikipedia.org/wiki/Mercury_switch">mercury switch</a> at the end. The coil reacts to the temperature in the room, which throws the switch one way or another. The lever that sets the desired temperature simply tilts the entire assembly, thus giving it a bias.</p>
	</figcaption>
      </figure>
      <p>This need not be complicated: <q>good</q> to a thermostat is the state of its bimetallic coil unwinding enough to cause the little blob of mercury to roll off the electrical contacts and over to the other side of the ampoule that makes up the mercury switch. So an <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a> system, in addition to its sensorimotor infrastructure, could conceivably be outfitted with a rudimentary axiological subsystem consisting of <q>goods</q> to pursue and <q>bads</q> to avoid. In other words, <a href="https://en.wikipedia.org/wiki/Small_matter_of_programming">a simple matter of programming</a>.</p>
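      <p>Sketched naively in code, the thermostat's entire value system fits in a few lines (the temperature band is my invention; substitute to taste):</p>
      <pre><code>const GOOD = { low: 19, high: 21 };  // desired temperature band, Celsius

// sense, compare against the notion of "good", act
function thermostat(sensedTemp) {
  if (sensedTemp &lt; GOOD.low)  return 'heat on';  // "bad": too cold
  if (sensedTemp &gt; GOOD.high) return 'heat off'; // "bad": too hot
  return 'do nothing';                           // "good": leave it be
}

console.log(thermostat(17)); // 'heat on'</code></pre>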
    </section>
    <section>
      <h2>They Just Want a Slave</h2>
      <p>That's next on the roadmap though, right? <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a> agents? Like book-my-family-vacation-for-me kind of thing. Something that knows everything about you so it can anticipate what you'd like, has access to everybody's calendar and whatever so it can coordinate the timing, can write your boss (or hey, your boss's <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a> agent) to request the time off, has access to your credit card so it can book the flights and hotel, etc. Something that will proactively work on new ways to surprise and delight you, just like a real human servant.</p>
      <figure>
	<img src="file/altman-hurr" alt="Sam Altman stars in HURR"/>
	<figcaption>
	  <p>Sam and Sam(antha), sitting in a tree&#x2026;</p>
	</figcaption>
      </figure>
      <p>I think this is a fantasy. It's concocted by people rich enough to <em>already</em> enjoy human servants, assuming&#x2014;probably correctly&#x2014;that there are people out there of lesser means who want the same kind of access. My instinct is that a product like this would be <em>extremely</em> touchy, and that's assuming you could even get it to work.</p>
      <p>First, the only way this thing is going to get to <q>know</q> you is through its sensory apparatus. If one of your kids has piano lessons, that's going to have to go into a calendar that it can see. Maybe it does the initial data entry on that too? The point is, somebody or something is going to have to keep on top of sharing <em>everything</em> with this thing, otherwise it's going to make mistakes that stem from missing some key piece of information. Like a person, its decisions are only as good as its inputs. <em>Un</em>like a person, however, I don't expect one of these things to be as good at improvising or triangulating, just plain picking up on vibes, or, crucially, <em>understanding that it needs to add a new information source</em>. I'm willing to be proven wrong though.</p>
      <aside role="note">
	<p>When I say <q>mistakes that stem from missing key information</q>, I mean something like when it goes to book your family vacation, it momentarily forgets that one of your kids exists because they didn't feature strongly enough as a signal, because their piano lessons weren't entered into the correct calendar. Something stupid like that.</p>
      </aside>
      <p>Next, I expect it'll have a hell of a time navigating interfaces&#x2014;whether it fakes up a user to click on buttons or jacks straight into an <a href="lexicon/#E6EC6ikxtvjZz3WYgRjo5K" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr title="application programming interface">API</abbr></a>. I think it'll have trouble determining if it got the result it was after. The (discriminative) purpose-specific <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a> designed for looking at X-rays and such keys off the wrong information all the time. Even being <q>after</q> something suggests having goals, but a multi-step process like arranging a vacation for an entire family is going to entail creating a composite <em>plan</em>, with <em>subsidiary</em> goals, conditions, principles and/or standing orders, modeling of interdependencies, elimination of internal conflicts, status checks, and alternatives in case something goes wrong. In other words, the artificial intelligence is going to need an artificial <em>imagination</em>.</p>
      <aside role="note">
      </aside>
      <p>I don't think the big matrix is going to cut it for this. Plans are Kiki. Text and images are Bouba. Good old-fashioned discriminative machine learning takes a Bouba and produces a Kiki (with the acknowledged risk of potentially being the <em>wrong</em> Kiki). Large language models take a Bouba and return another Bouba. In other <em>other</em> words, an <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a>'s ability to plan is probably going to need a whole new architecture that has a Kiki alongside the Bouba, and that doesn't exist yet. In lieu of some smart person designing one, we're back to the <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a> that radically self-improves, which means we're back to the <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a> with the arbitrarily large context window and/or mutable model (which we already blew past several paragraphs ago), which means we're back to needing new technology for computing hardware to keep this proposition from being improbably expensive. So why not just rent it from Sam Altman?</p>
      <p>Okay, one more consideration and then I'm going to walk away from this. People (law professors, policy advisors, tank-thinkers) have their knickers in a twist about <q>alignment</q>, an icky neologism referring to the extent to which the <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a> is acting in your interests rather than <q>its own</q>. When you're renting one of these things from Sam Altman, or Mark Zuckerberg, or Satya Nadella, or Sundar Pichai, or Tim Cook, or Jeff Bezos, or Elon <del>fucking</del> Musk? What about <q>alignment</q> qua <em>their</em> interests? The question to ask isn't <q>is the <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a> working for <em>itself</em> instead of working for <em>me</em>?</q> but rather <q>is it working for <em>them</em>?</q></p>
      <aside role="note">
	<p>You know, just like any other software you don't control.</p>
      </aside>
      <p>What's to stop these <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a> agents&#x2014;again, assuming you could even get one to <em>work</em>&#x2014;behaving in ways that benefit the people selling them at your expense? Using <em>their</em> preferred airlines and hotels, <em>their</em> brand of batteries and toilet paper? What's preventing one of these things from using your credit card to buy up a bunch of overstock its owner wanted to get rid of and framing it like it did something thoughtful for you? <em>Or</em>, what's preventing the owner from taking your private behavioural data, orders of magnitude more intimate than anything you've previously disclosed&#x2014;voluntarily or otherwise&#x2014;and packaging it up and selling it downstream? How would you ever know?</p>
      <p>So, again, this is why I'm an <a href="lexicon/#E9aJUrEorBeG-fen79xdNJ" rel="https://vocab.methodandstructure.com/content-inventory#mentions" typeof="http://www.w3.org/2004/02/skos/core#Concept"><abbr>AI</abbr></a> <em>meh</em>-ptic. There are <em>so</em> many mundane social problems with the technology <em>here and now</em> that don't even remotely range into the territory of <a href="https://en.wikipedia.org/wiki/Skynet_(Terminator)">Skynet</a>, <a href="https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer">Paperclip Maximizer</a>/<a href="https://en.wikipedia.org/wiki/Gray_goo">Grey Goo</a>, <a href="https://en.wikipedia.org/wiki/Roko%27s_basilisk">Roko's Basilisk</a> or <a href="https://en.wikipedia.org/wiki/The_Matrix">The Matrix</a>. <em>So</em> many huge, conspicuous, world-changing events have to happen before <em>any</em> of those sci-fi situations are even close to plausible, yet <em>those</em> are the things getting the policy attention. This is a serious misallocation of cognitive resources, and I urge those in influential positions to smarten up.</p>
    </section>
    
</body>
</html>
