<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[The Blackboard]]></title><description><![CDATA[Technical depth, practical systems]]></description><link>https://www.theblackboard.org</link><image><url>https://www.theblackboard.org/img/substack.png</url><title>The Blackboard</title><link>https://www.theblackboard.org</link></image><generator>Substack</generator><lastBuildDate>Wed, 13 May 2026 09:21:37 GMT</lastBuildDate><atom:link href="https://www.theblackboard.org/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Raymond E. Pasco]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[theblackboardorg@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[theblackboardorg@substack.com]]></itunes:email><itunes:name><![CDATA[Raymond E. Pasco]]></itunes:name></itunes:owner><itunes:author><![CDATA[Raymond E. Pasco]]></itunes:author><googleplay:owner><![CDATA[theblackboardorg@substack.com]]></googleplay:owner><googleplay:email><![CDATA[theblackboardorg@substack.com]]></googleplay:email><googleplay:author><![CDATA[Raymond E. 
Pasco]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Comment on FINRA and SEC's Removal of the Pattern Day Trader Designation]]></title><description><![CDATA[Since 2001, traders with small accounts have had to contend with the &#8220;pattern day trader&#8221; designation, a strange restriction that activated when a trader made four or more &#8220;day trades&#8221; (roughly, open and close on the same trading day) in a five-business-day period, requiring a $25,000 minimum equity balance at all times.]]></description><link>https://www.theblackboard.org/p/comment-on-finra-and-secs-removal</link><guid isPermaLink="false">https://www.theblackboard.org/p/comment-on-finra-and-secs-removal</guid><dc:creator><![CDATA[Raymond E. Pasco]]></dc:creator><pubDate>Sat, 18 Apr 2026 20:26:00 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1560221328-12fe60f83ab8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyfHxkYXklMjB0cmFkZXxlbnwwfHx8fDE3NzY1NDQwMDV8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Since 2001, traders with small accounts have had to contend with the &#8220;pattern day trader&#8221; designation, a strange restriction that activated when a trader made four or more &#8220;day trades&#8221; (roughly, open and close on the same trading day) in a five-business-day period, requiring a $25,000 minimum equity balance at all times. 
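The trigger just described (four or more day trades within a rolling five-business-day window) is mechanical enough to sketch. The function below is my own illustrative simplification, not the actual FINRA rule text, which has additional conditions:

```python
from datetime import date

def is_pattern_day_trader(day_trade_dates, business_days):
    """Illustrative sketch: does any rolling five-business-day window
    contain four or more day trades? (Simplified; the real rule has
    further conditions and broker-specific counting details.)"""
    for start in range(len(business_days) - 4):
        window = set(business_days[start:start + 5])
        if sum(1 for d in day_trade_dates if d in window) >= 4:
            return True  # designation applies: $25,000 minimum equity
    return False
```

Under the old regime, tripping this check meant holding the $25,000 minimum at all times; under the new standard the question simply stops mattering.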
Needless to say, for people prudently trading with only money they&#8217;re okay with losing, this can be a serious burden (it&#8217;s been inflated away over the years, but it&#8217;s still around half the median annual income).</p><p>This restriction is now <a href="https://www.federalregister.gov/documents/2026/04/17/2026-07485/self-regulatory-organizations-financial-industry-regulatory-authority-inc-notice-of-filing-of">going away</a>, with FINRA and the SEC recognizing the harm caused by traders warping their risk tolerance around avoiding the pattern day trader designation and its attendant restrictions; it&#8217;s being replaced by a sensible standard that&#8217;s pretty much &#8220;margin requirements must be calculated intraday&#8221;, something brokers already do anyway as part of sensible risk management, now that the computing power involved is far cheaper than it was in 2001.</p><p>Furthermore, it isn&#8217;t even that delayed&#8212;a proposed delay of 12 months (!) was cut down to 45 days.
This is a delay on the new rules going into effect; laggards still get 18 months of leeway to implement them, though of course the market standard is already real-time intraday calculation and the burden for typical retail brokerages is just &#8220;stop enforcing the pattern day trader designation&#8221;.</p><p>But someone should at least note that, if harm reduction is the ground for the rule change, a 45-day delay is still 45 additional days of harm. So I made that comment, and I also noted that the real demand from small retail investors isn&#8217;t for margin <em>per se</em> (the ability to borrow for additional leverage) but simply the instant use of funds available in a margin account that doesn&#8217;t have to wait for a settlement delay.</p><p>My comment letter is available from the SEC&#8217;s website <a href="https://www.sec.gov/comments/SR-FINRA-2025-017/srfinra2025017-755087-2324034.pdf">here</a>, and also reproduced below.</p><div><hr></div><p>Raymond E. Pasco<br>April 17, 2026</p><p>Vanessa A. Countryman<br>Secretary<br>Securities and Exchange Commission<br>100 F Street, NE<br>Washington, DC 20549</p><p><strong>Re: File No. SR-FINRA-2025-017 &#8212; Notice of Filing of Amendment No. 1 and Order Granting Accelerated Approval of a Proposed Rule Change, as Modified by Amendment No. 1, to Amend FINRA Rule 4210 (Margin Requirements) to Replace the Day Trading Margin Provisions with Intraday Margin Standards</strong></p><p>Dear Secretary Countryman:</p><p>I applaud the Commission&#8217;s decision to sunset the &#8220;pattern day trader&#8221; rules in favor of intraday margin standards.
I write to make the following comments: that while a 45-day delay on effectiveness is superior to a 12-month delay, the amended rules should instead go into effect immediately upon publication in the Federal Register; and that the focus on margin requirements risks downplaying the fact that the demand from small retail investors is not for margin <em>per se</em>, but for fast settlement.</p><h3>1. The Proposed Rule Change Should Take Immediate Effect</h3><p>The proposed rule change is adopted in part on modernization grounds, but in perhaps greater part on harm reduction grounds. The acknowledged harms of the &#8220;pattern day trader&#8221; designation to small retail investors and the broker-dealers they have accounts with include the introduction of additional risk due to managing positions around the possibility of this designation.</p><p>Because harm reduction is an important ground, the rules should go into effect as soon as is practicable. A 45-day period after publication during which the current rules remain in force is, while superior to a 12-month period, still 45 days of continuing harm to investors as they warp their trading strategies around the &#8220;pattern day trader&#8221; designation.</p><p>The intraday margin rules replacing those provisions merely codify what is already standard retail brokerage risk-management practice. For market-leading retail brokers, the burden of updating to the new rules is simply refraining from enforcing the &#8220;pattern day trader&#8221; designation against customer accounts, a minimal implementation burden.</p><p>As seen in the comments to the original proposal, brokers and customers alike welcome this change, and brokers have been preparing for it. The 18-month period provided as a maximum is sufficient for slower brokers to adapt without additionally imposing a minimum.
Furthermore, the practice of batching margin requirement calculations each day, something all brokers offering margin accounts already at least do, is allowed under the rule change, even if real-time is already the standard customers expect.</p><p>This is not an unexpected or onerous rule change, and the harm reduction of accelerated effectiveness outweighs the cost of adoption for customers and brokers alike.</p><h3>2. Rapid Settlement, Rather Than Margin, Is Important To Small Retail Investors</h3><p>Small retail investors prudently trade with sums of money they are comfortable losing. Because this sum is often below $25,000, the elimination of the &#8220;pattern day trader&#8221; designation and its attendant restrictions offers small retail accounts the ability to reduce their exposure to their desired level.</p><p>However, the bulk of small retail traders trading in margin accounts did not select margin accounts for the margin credit facility <em>per se</em>, but instead for the ability to deploy capital immediately. When available capital is small, settlement delays mean days spent with no ability to enter positions, and margin accounts have been attractive for this reason <em>even with</em> the risks introduced by the &#8220;pattern day trader&#8221; designation.</p><p>It may be prudent for rulemaking attention to be given to the possibility of &#8220;cash-like&#8221; margin accounts bearing absolutely minimal restrictions, commensurate with their low level of risk (limited to the risk of failure of executed trades to settle, which is extremely low in modern markets, and quite insurable; T+1 has already shown that market infrastructure can handle more rapid settlement).</p><p>I appreciate the opportunity to comment on this matter.</p><p>Sincerely,</p><p>Raymond E. 
Pasco</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://images.unsplash.com/photo-1560221328-12fe60f83ab8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyfHxkYXklMjB0cmFkZXxlbnwwfHx8fDE3NzY1NDQwMDV8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1560221328-12fe60f83ab8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyfHxkYXklMjB0cmFkZXxlbnwwfHx8fDE3NzY1NDQwMDV8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1560221328-12fe60f83ab8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyfHxkYXklMjB0cmFkZXxlbnwwfHx8fDE3NzY1NDQwMDV8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1560221328-12fe60f83ab8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyfHxkYXklMjB0cmFkZXxlbnwwfHx8fDE3NzY1NDQwMDV8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w,
https://images.unsplash.com/photo-1560221328-12fe60f83ab8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyfHxkYXklMjB0cmFkZXxlbnwwfHx8fDE3NzY1NDQwMDV8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1560221328-12fe60f83ab8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyfHxkYXklMjB0cmFkZXxlbnwwfHx8fDE3NzY1NDQwMDV8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="4800" height="3188" data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1560221328-12fe60f83ab8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyfHxkYXklMjB0cmFkZXxlbnwwfHx8fDE3NzY1NDQwMDV8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:3188,&quot;width&quot;:4800,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;close-up photo of monitor displaying graph&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="close-up photo of monitor displaying graph" title="close-up photo of monitor displaying graph" srcset="https://images.unsplash.com/photo-1560221328-12fe60f83ab8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyfHxkYXklMjB0cmFkZXxlbnwwfHx8fDE3NzY1NDQwMDV8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1560221328-12fe60f83ab8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyfHxkYXklMjB0cmFkZXxlbnwwfHx8fDE3NzY1NDQwMDV8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, 
https://images.unsplash.com/photo-1560221328-12fe60f83ab8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyfHxkYXklMjB0cmFkZXxlbnwwfHx8fDE3NzY1NDQwMDV8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1560221328-12fe60f83ab8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyfHxkYXklMjB0cmFkZXxlbnwwfHx8fDE3NzY1NDQwMDV8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@bash__profile">Nicholas Cappello</a> on <a
href="https://unsplash.com">Unsplash</a></figcaption></figure></div><p></p>]]></content:encoded></item><item><title><![CDATA[An AI FAQ for ordinary people]]></title><description><![CDATA[A guide for the non-nerd to what's possible]]></description><link>https://www.theblackboard.org/p/an-ai-faq-for-ordinary-people</link><guid isPermaLink="false">https://www.theblackboard.org/p/an-ai-faq-for-ordinary-people</guid><dc:creator><![CDATA[Raymond E. Pasco]]></dc:creator><pubDate>Fri, 20 Mar 2026 00:01:38 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1713345248737-2698000f143d?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNHx8YWl8ZW58MHx8fHwxNzczOTM1NDMwfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It&#8217;s pretty likely that you know you can sign up for a chatbot and talk to it, or you can tweet at Grok, or even that you can generate images. These are all pretty normie-friendly uses of AI. From the services&#8217; perspective, though, they&#8217;re loss leaders for the industrial uses of AI, which are&#8230; something the market&#8217;s still working on determining, really.</p><p>Plenty of hype merchants would have you believe they&#8217;ll eliminate all human work any day now; plenty of counter-hypists would have you believe they can&#8217;t do anything useful; as usual the reality is somewhere in between the extremes. But a lot of the information out there on AI usage beyond the basics is pitched at, frankly, AI nerds. It can be difficult to get a sense of what these things really are and what they can really do in this environment.</p><p>So, for the intelligent normie, here&#8217;s an FAQ. Like most FAQs, I have not really been asked these questions, let alone frequently; they just represent what I&#8217;ve noticed people being unclear on.
This should also serve well as a useful foundational piece for more in-depth analyses in the future.</p><h2>What is an AI, actually?</h2><p>It&#8217;s a really broad term that&#8217;s historically meant a lot of things, but these days we&#8217;re generally talking about &#8220;contemporary AI&#8221;, mostly Large Language Models (LLMs) and maybe sometimes image models or sound models or something. The text models are really the stars, though.</p><p>What is an LLM, then? It&#8217;s a computer program, one that ultimately does a very, very large amount of multiplications and similar math operations. It multiplies so many numbers, in fact, that the main constraint on its performance is moving those numbers into place to be operated on fast enough.</p><p>First, you take a piece of text and turn it into tokens. These are just numbers; you could imagine each character being a token, but it&#8217;s more efficient to turn larger chunks into tokens, like a piece of a word, maybe even a whole word, a word and some punctuation, and so on. This gives you a list of numbers, which you put through the gauntlet of math operations, and your output is a new number: the next token. (Then you keep going; usually there&#8217;s a special token that means &#8220;stop here&#8221; that the LLM outputs when it wants to stop there, but there&#8217;s also a hard limit as a failsafe.)</p><p>How did these multiplications decide what to say next? In very broad strokes, your typical flagship frontier model has been, at some point, trained on every piece of text ever published, trained with example &#8220;user and helpful assistant&#8221; conversations so it knows to talk like a helpful assistant, trained to not reinforce the user&#8217;s delusions (with varying success), and so on. 
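That token-by-token loop can be sketched in miniature. Everything below (the four-word vocabulary, the probability table) is invented for illustration; a real LLM replaces the lookup table with billions of learned multiplications producing the distribution:

```python
import random

VOCAB = {0: "<stop>", 1: "the", 2: "cat", 3: "sat"}

# Toy "model": maps the last token to a probability distribution over
# possible next tokens. Invented numbers, purely for illustration.
NEXT_PROBS = {
    1: {2: 0.9, 3: 0.1},   # after "the", probably "cat"
    2: {3: 0.95, 1: 0.05}, # after "cat", probably "sat"
    3: {0: 1.0},           # after "sat", emit the stop token
}

def generate(prompt_tokens, max_tokens=10, seed=0):
    rng = random.Random(seed)
    tokens = list(prompt_tokens)
    for _ in range(max_tokens):  # hard limit as a failsafe
        probs = NEXT_PROBS[tokens[-1]]
        # Sample the next token from the distribution.
        next_tok = rng.choices(list(probs), weights=probs.values())[0]
        if next_tok == 0:        # the special "stop here" token
            break
        tokens.append(next_tok)
    return " ".join(VOCAB[t] for t in tokens)
```

Starting from the token for "the", this toy produces "the cat sat" and then hits its stop token; the real thing differs only in where the probabilities come from.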
More modest small models have been trained in some subset of these ways.</p><p>Training is genuinely too complicated for this FAQ; <a href="https://www.3blue1brown.com/topics/neural-networks">Grant Sanderson&#8217;s video series</a> is a good introduction to the details. In brief, the output isn&#8217;t actually a &#8220;next token&#8221;, but a probability distribution of possible next tokens, and training adjusts the numbers inside the model (its &#8220;weights&#8221;) that go into calculating those probabilities, making things that appeared in the training data more likely.</p><p>The important thing to note is that training happens once, very expensively, and the output is an LLM. It&#8217;s possible to take an existing one and train it more, too; the point is that when it&#8217;s actually running and you&#8217;re chatting with it, it no longer has the capacity to learn. This is a big difference from how humans think and learn (which we do at all times), so it&#8217;s worth taking care to avoid anthropomorphism here.</p><p>If you&#8217;ve used a chatbot which &#8220;remembers&#8221; or &#8220;learns&#8221; things about you, this is actually done by the simple expedient of having it note things while you&#8217;re chatting, which is just generating more text, and then giving it the notes before a new conversation. There&#8217;s actually a lot of invisible system text prepended to a typical chatbot conversation; sometimes you can learn what these &#8220;system prompts&#8221; are (Anthropic tends to disclose Claude&#8217;s), sometimes you can&#8217;t and the model has been told not to tell you either (ChatGPT typically operates like this).</p><h2>If it&#8217;s a computer program, can I run it on my computer?</h2><p>Maybe, and it&#8217;s more likely &#8220;yes&#8221; than you might think.</p><p>You cannot run anything comparable in intelligence to flagships like Claude or ChatGPT on your computer; this definitely requires specialized hardware and infrastructure.
But you can probably run something you can converse with on your own computer if:</p><ul><li><p>you have a recent Mac, or</p></li><li><p>you have a decent gaming GPU.</p></li></ul><p>Even if you don&#8217;t, you can probably run <em>something</em>, but if your hardware is underpowered enough it&#8217;ll be very stupid and/or very slow and not really worth bothering with. Your hardware may be better than you realize, however.</p><p>GPUs turn out to be decent at running models because the constraint on running them is less about performing the math and more about moving the data into place to be computed on fast enough. A GPU is meant to render an entire screen fast, so it has to be able to move a screen&#8217;s worth of memory quickly, and it operates on lots of things in parallel, as opposed to the typical main CPU, which is meant to operate on pieces of data in series, often because they depend on results of previous calculations.</p><p>Recent Apple Silicon Macs have an optimized memory architecture that makes running models easier, too. Apple&#8217;s &#8220;Apple Intelligence&#8221; isn&#8217;t wholly on-device (it&#8217;s a combination of on-device and online), but running models on-device is part of what the hardware was designed for.
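A rough rule of thumb for the "will it fit" question: memory needed is about parameter count times bytes per parameter, plus some headroom for context. The specific numbers below (4-bit quantization, 20% overhead) are assumptions for illustration, not guarantees:

```python
def model_memory_gb(params_billions, bits_per_param=4, overhead_factor=1.2):
    """Back-of-the-envelope estimate of memory needed to run a model.
    bits_per_param: 16 for unquantized weights, 4 for a typical quantized
    download; overhead_factor: rough allowance for context and runtime
    buffers. Both defaults are illustrative assumptions."""
    weight_bytes = params_billions * 1e9 * bits_per_param / 8
    return weight_bytes * overhead_factor / 1e9
```

So an 8B model at 4 bits comes to roughly 4.8 GB, comfortably inside a 12GB GPU, while a 70B model at the same quantization needs around 42 GB, out of reach for typical consumer hardware.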
For such a Mac you can count all of your memory and then discount it a bit (the system needs to run too).</p><p>If you have a GPU with 12GB or more of VRAM, or a recent Mac with 32GB or more of memory, or something else generally in the ballpark of &#8220;modern powerful machine&#8221; rather than &#8220;Chromebook&#8221;, this might be worth experimenting with for you.</p><h2><em>How</em> do I actually run it on my computer?</h2><p>This can become as complicated as you want it to be.</p><p>You probably don&#8217;t want it to become that complicated, so here are the simple steps (as of this writing; things have a tendency to change out from under us in this space, but I don&#8217;t think any of the below will become obsolete all that soon):</p><ol><li><p>Go to <a href="https://www.canirun.ai">https://www.canirun.ai</a>. Are you seeing good-looking rows in that table? If not, it&#8217;s probably not worth it without a more powerful machine; try it on your most powerful one. (I went to this site on my phone while writing this paragraph, and surprisingly, some very tiny models might work on my phone&#8217;s hardware - it&#8217;s definitely not worth it, though!)</p></li><li><p>Download the Ollama GUI from <a href="https://ollama.com/download">here</a>. Don&#8217;t bother with the command line, just click the download button (if you wanted to bother with the command line, you would already know it).</p></li><li><p>On the Can I Run site, click some model that you think looks good to you. Pick based on tokens per second, basically; all else being equal you want more parameters for more intelligence (the name ends in something like 8B or 30B for the number of parameters), or else smaller file size for more context (this is how much has to be loaded in memory), but to start with you want good speed.</p></li><li><p>Over on the right side of that page, you should see something like this. 
Ignore &#8220;ollama run&#8221;, that&#8217;s part of the command line; the rest is the model name Ollama understands.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!N3RA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4eb9880-cbb5-4034-b773-7e544379abcc_335x95.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!N3RA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4eb9880-cbb5-4034-b773-7e544379abcc_335x95.png 424w, https://substackcdn.com/image/fetch/$s_!N3RA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4eb9880-cbb5-4034-b773-7e544379abcc_335x95.png 848w, https://substackcdn.com/image/fetch/$s_!N3RA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4eb9880-cbb5-4034-b773-7e544379abcc_335x95.png 1272w, https://substackcdn.com/image/fetch/$s_!N3RA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4eb9880-cbb5-4034-b773-7e544379abcc_335x95.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!N3RA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4eb9880-cbb5-4034-b773-7e544379abcc_335x95.png" width="335" height="95" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c4eb9880-cbb5-4034-b773-7e544379abcc_335x95.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:95,&quot;width&quot;:335,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:4653,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.theblackboard.org/i/191261240?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4eb9880-cbb5-4034-b773-7e544379abcc_335x95.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!N3RA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4eb9880-cbb5-4034-b773-7e544379abcc_335x95.png 424w, https://substackcdn.com/image/fetch/$s_!N3RA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4eb9880-cbb5-4034-b773-7e544379abcc_335x95.png 848w, https://substackcdn.com/image/fetch/$s_!N3RA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4eb9880-cbb5-4034-b773-7e544379abcc_335x95.png 1272w, https://substackcdn.com/image/fetch/$s_!N3RA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4eb9880-cbb5-4034-b773-7e544379abcc_335x95.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Use this name when selecting a model in the Ollama interface; just click where it shows a model name and type it, then download. 
It&#8217;ll be a large download.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Akg3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd40c2571-3960-4f20-a4ba-7e13f0678f7c_484x120.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Akg3!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd40c2571-3960-4f20-a4ba-7e13f0678f7c_484x120.png 424w, https://substackcdn.com/image/fetch/$s_!Akg3!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd40c2571-3960-4f20-a4ba-7e13f0678f7c_484x120.png 848w, https://substackcdn.com/image/fetch/$s_!Akg3!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd40c2571-3960-4f20-a4ba-7e13f0678f7c_484x120.png 1272w, https://substackcdn.com/image/fetch/$s_!Akg3!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd40c2571-3960-4f20-a4ba-7e13f0678f7c_484x120.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Akg3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd40c2571-3960-4f20-a4ba-7e13f0678f7c_484x120.png" width="484" height="120" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d40c2571-3960-4f20-a4ba-7e13f0678f7c_484x120.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:120,&quot;width&quot;:484,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:5397,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.theblackboard.org/i/191261240?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd40c2571-3960-4f20-a4ba-7e13f0678f7c_484x120.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Akg3!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd40c2571-3960-4f20-a4ba-7e13f0678f7c_484x120.png 424w, https://substackcdn.com/image/fetch/$s_!Akg3!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd40c2571-3960-4f20-a4ba-7e13f0678f7c_484x120.png 848w, https://substackcdn.com/image/fetch/$s_!Akg3!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd40c2571-3960-4f20-a4ba-7e13f0678f7c_484x120.png 1272w, https://substackcdn.com/image/fetch/$s_!Akg3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd40c2571-3960-4f20-a4ba-7e13f0678f7c_484x120.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div></li><li><p>Just in case, click Settings in Ollama and disable &#8220;Cloud&#8221;; it&#8217;d defeat the purpose.</p></li><li><p>Chat with your model. 
You&#8217;ll likely notice that it&#8217;s surprisingly fast (unless you picked a hefty one, in which case it may be surprisingly slow!), can be surprisingly stupid or formulaic-sounding, and has no capabilities like web search that you may be used to from chatting with AI online.</p></li></ol><p>If you did all that, you&#8217;re already in the 99th percentile here. More would be out of scope, but if you have something complicated in mind and you ask Claude or ChatGPT or similar &#8220;I&#8217;ve run llama3.1:8b in the Ollama GUI, but I want it to be able to search the web, how can I do this?&#8221; you&#8217;ll get good instructions; this is something the ordinary chatbots can be very helpful with.</p><h2>What is AI good and bad at?</h2><p>This can be a fairly subtle topic.</p><p>Roughly speaking, a large frontier model at the top of the power scale, like your Claude Opus or your GPT-5, has read approximately every piece of text there is. The vast leap in AI capabilities from approximately nothing to where they are now was mostly on the back of getting together every piece of text in the world and using them in training.</p><p>Your intuition about this may steer you incorrectly, however. We&#8217;re used to traditional computing being a very precise, pedantic discipline. When you look something up in a database, you get exact results. When you want a computer to do something, you need to instruct it very precisely, in a language built for it to understand, not for you to understand.</p><p>The frontier model, which has read approximately every piece of text there is, is not well modeled as a database containing approximately every piece of text there is nicely indexed for retrieval, though. It&#8217;s better modeled as something like a human who reads a <em>lot</em>. 
AI is different from a human in many ways, but anthropomorphizing its powers of recall is closer to the truth.</p><p>An important book like <em>&#192; la recherche du temps perdu</em> is going to be a lot more salient than an obscure book, for example, even if the model &#8220;read&#8221; both equally in training, because the former is relevant and written about in other places in the corpus of every piece of text there is. And it&#8217;s actually going to be better at remembering its Proust scholarship if you&#8217;re speaking to it in French, because that&#8217;s the language with more Proust scholarship.</p><p>It&#8217;s worth pausing for a moment on the topic of languages. The most massively impactful thing about modern AI is actually one of the least hyped: <em>they are fully multilingual</em>. This comes for free; no one didactically taught it English, either; its knowing English is the result of training on English text. And &#8220;approximately every piece of text there is&#8221; is not limited to English text.</p><p>The smaller models that you can run on your own hardware may be limited here because they may only have been trained on English text. But they may not be; it&#8217;s worth looking at the documentation for the specific model, or just talking to it in another language if the documentation is unclear.</p><p>For the frontier models, though, les barri&#232;res linguistiques&#12399;&#12418;&#12399;&#12420;&#23384;&#22312;&#12375;&#12394;&#12356;. An immediate application for you might be setting Claude or ChatGPT, with their web search capability, to reading the foreign language press for you, or perhaps carrying on a correspondence you couldn&#8217;t have before.</p><p>Given that the model has more or less read everything, what kind of reasoning can it do?</p><p>One type of reasoning is deductive reasoning: from a starting point, apply rules to reason forward. Typical computer programs operate in a very deductive mode. 
The computer carries out its program, exactly, and it needs exact input from the user to do so. This is a mode AI is bad at, or at least comparable-to-humans at; to deduce, it&#8217;ll probably have to laboriously work through the deductive process in text. It can <em>use</em> a computer fairly decently, however, because that&#8217;s a task about translating fuzzy natural-language input into precise computer-friendly output. An AI is probably just as bad as you at multiplying 12-digit numbers, for example, but it&#8217;s easy to give it a calculator&#8212;it&#8217;s already running on a computer whose CPU is great at multiplying numbers, so it&#8217;s even easier than giving a human a calculator.</p><p>Another type of reasoning is inductive reasoning: from observations, derive a general rule. This isn&#8217;t quite what AI is good at. It&#8217;s actually a good description of the training process; from reading approximately every piece of text there is, it can derive more abstract concepts about the world. But this is a very expensive batch job, and it&#8217;s not really doing induction at runtime when you&#8217;re chatting with it, though it can approximate it sometimes.</p><p>The logician Charles Peirce described <em>three</em> types of reasoning, however, and the third one&#8217;s the charm. Abductive reasoning, inferring the best explanation from observations, is exactly the sort of reasoning that models perform. They&#8217;re essentially pure abductive reasoners; they only do deduction or induction by having seen it done in text and figuring out what the most plausible next step of, say, a deduction would be at that point.</p><p>Abductive reasoning can be a bit difficult to get a handle on; a good example is Sherlock Holmes showing off, despite the fact that he claims this is &#8220;deduction&#8221;. 
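</p>

<p>The calculator point can be made concrete. Here is a minimal sketch of tool dispatch: the model emits a structured tool call, and the host program executes it exactly. The call format and the <code>multiply</code> tool are invented for illustration; real APIs define their own tool-call schemas.</p>

```python
# Minimal sketch of "giving the AI a calculator": the model emits a
# structured tool call, and the host program executes it exactly.
# The call format here is hypothetical; real APIs define their own schemas.

def multiply(a: int, b: int) -> int:
    # Python integers are arbitrary-precision, so 12-digit operands
    # are multiplied exactly, with no hallucinated digits.
    return a * b

TOOLS = {"multiply": multiply}

def dispatch(tool_call: dict):
    """Execute a tool call of the form {"tool": name, "args": [...]}."""
    fn = TOOLS[tool_call["tool"]]
    return fn(*tool_call["args"])

# What a model might emit instead of guessing at the arithmetic itself:
call = {"tool": "multiply", "args": [123456789012, 987654321098]}
print(dispatch(call))
```

<p>The model never performs the arithmetic; it only has to produce the call, which is a language task it is good at.</p>

<p>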
When he says &#8220;ah, you&#8217;ve just come from the Diogenes Club&#8221; after noting the twist of your scarf from the left-handed coatroom clerk and the mud on your shoes from a certain muddy corner of Pall Mall, this is abduction to the best, or most plausible, explanation. Similarly, the model abduces to the most plausible next thing to say. Being even better-read than Holmes, it often produces quite a plausible next thing.</p><p>Talking to a pure abducer has its risks, though. One of the most common failure modes in interacting with AI is treating it as an oracle&#8212;the &#8220;@grok is this true&#8221; mode. It is going to say something plausible in response; this doesn&#8217;t mean something correct. This is less dangerous than it used to be, because models now have web search capability and will reach for a web search in response to this sort of thing, and add citations to sources, but websites can be wrong too.</p><p>This kind of failure mode is <em>invisible</em>, which makes it especially pernicious. You are going to get a confident, fluent response whether it&#8217;s correct or not, because the AI makes confident, fluent responses. The horror stories of overeager lawyers drafting briefs with AI only to find they cited nonexistent cases are a good example. It knows exactly how to cite a case, having seen millions of Bluebook citations and case names, but its recall of specific cases is about as good as a human&#8217;s.</p><p>The correct way to use AI in law is to give the AI access to Westlaw and tell it to write you a research report, not to expect it to have perfect retrieval of case law. 
(I do mean &#8220;give it access to Westlaw&#8221;, not just as metonymy for &#8220;let it read the cases&#8221;&#8212;Westlaw&#8217;s existing search and indexing systems predigest them in a way very amenable to AI retrieval, which is why Thomson Reuters&#8217; AI pivot is going to prove more sustainable than many companies&#8217; AI pivots.)</p><p>But at the same time, if you&#8217;re asking for a list of the landmark Supreme Court decisions on a topic, its off-the-cuff answer is probably going to be pretty decent, precisely because landmark Supreme Court decisions get talked about a lot and are easy to remember. Any sort of &#8220;I&#8217;m smart but not familiar with this topic, introduce me to it&#8221; query is a great fit.</p><p>Broad exploration which doesn&#8217;t necessarily have a correct answer is also a great fit. What might have happened in the American Civil War if Palmerston&#8217;s Britain had recognized the Confederacy? The AI is enough of a Civil War buff, and can do the right type of reasoning, to answer this question interestingly. And if you follow up&#8212;what if the Union had anachronistically invented magnetic mines for use against ironclads?&#8212;the AI will smoothly adapt. This can be quite useful for exploratory phases or greenfield questions.</p><p>Identifying what AI is good and bad at is not all that easy, and it&#8217;s worth experimenting yourself. But I&#8217;ve broken the real hot topic in this area out into its own question:</p><h2>Can AI code?</h2><p>It&#8217;s complicated, but mostly no. But if you&#8217;re already a good software engineer, it&#8217;s more of a &#8220;yes, but&#8221;.</p><p>Your typical flagship AI knows all languages. This includes computer languages. 
And just like it doesn&#8217;t commit English grammar solecisms, it doesn&#8217;t screw up with computer languages either.</p><p>Moreover, it&#8217;s read approximately every piece of text there is, which includes a lot of code, and all the programming documentation. And the latter is available by web search, should it be unsure.</p><p>And unlike drafting legal briefs, code has instant feedback: it either works or it doesn&#8217;t, it either passes tests or it doesn&#8217;t. Any errant hallucination will be corrected by reality in short order.</p><p>Given all this, what&#8217;s the problem?</p><p>The problem has some subtle dimensions to it, but roughly, it&#8217;s that software engineering is to writing code as architecture is to bricklaying. (Bricklaying, for the benefit of those who might be dismissive of physical labor, is a difficult, high-skill craft with a lot of specialized knowledge.)</p><p>This is a very tricky analogy to make, because it seems at first glance to imply that the AI is a bricklayer working to an architect&#8217;s specifications, the sort of &#8220;well, it needs a human in the loop&#8221; dismissal you&#8217;ve likely seen before. And perhaps one might read some reassuring classism from it as well, at the thought of keeping AI in its blue-collar place.</p><p>In fact, I mean it the other way around: the AI is a much better software engineer than it is a coder! This is where AI being very much not human starts to have strange, perhaps counterintuitive effects.</p><p>Recall that AI cannot learn, at least not in ordinary operation; its learning all happened in expensive training runs beforehand. This learning was very vast and very general, and it also included everything written about software engineering, plus every description of a technique or pattern that might be used in software engineering. 
And abductive reasoning lets it apply these appropriately; the AI doesn&#8217;t fall into being like a person who&#8217;s only read one book on methodology and wants to apply it slavishly everywhere, it&#8217;s read too many different books for that.</p><p>So if you chat with an AI about plausible engineering approaches to a problem, you&#8217;ll get plausible answers back, and you can converse and refine. This ends up looking reasonably similar to experienced software engineers doing this sort of planning together, with the caveat that the AI is a little worse at looking at diagrams, and moderately worse at drawing them (most flagships have good image-viewing capabilities; making images is harder but an SVG vector image is actually secretly made of text; Claude especially is good at SVGs).</p><p>But when you tell your AI coding agent &#8220;now write it&#8221;, the problems begin, and they have roughly these two causes:</p><ul><li><p>AI can&#8217;t learn a codebase, in the sense of crystallized intelligence or metis. It can approximate this a little by writing notes to itself, but this only compounds the other problem:</p></li><li><p>AI can&#8217;t actually look at all that much text at once while remaining useful.</p></li></ul><p>You may have heard &#8220;context&#8221; used in this context. Briefly, there&#8217;s some hard limit, say 200,000 tokens, that can be given as input to the AI at once. If there&#8217;s more, then the naive thing to do is to just have the oldest fall out of context; the slightly less naive thing is to tell the AI &#8220;we&#8217;re getting close to the context limit, summarize the above discussion in less space&#8221; shortly before actually hitting the limit.</p><p>The reality, though, is that there is a very sharp decline in capability long before the context limit. Where exactly this is differs for every model, but it&#8217;s more on the order of 20,000 than 200,000. 
Which isn&#8217;t very much; tokens are pieces of words, not words, and code specifically uses a lot of tokens because of its precise syntax.</p><p>Context is the AI&#8217;s working memory, not its long-term memory (which it doesn&#8217;t really have an equivalent of). It&#8217;s more impressive than a human&#8217;s working memory capacity, since it can easily hold paragraphs of text while a human usually focuses on around a sentence or two at a time. But asking the AI to do things under heavy context pressure is more like asking a human to hold seven random numbers in their mind while performing a task than it is having the exact same &#8220;please do this task&#8221; conversation with a human. You likely don&#8217;t remember a transcript of a conversation as you&#8217;re having it; you likely don&#8217;t remember the exact wording I used seven paragraphs ago here either.</p><p>A human working on a codebase would constantly be learning that codebase, starting off with simple tasks and progressing to more complex ones as confidence grows; the &#8220;training process&#8221; for humans is continuous and never switches off. Your specific codebase is likely not something the AI has read in training; even if it&#8217;s a very popular open source codebase closer to the surface, it&#8217;s still not as salient as, say, the Gettysburg Address, which is typical of the sort of thing AI can probably confidently recite verbatim. And it&#8217;s not up to date with the changes you&#8217;re currently working on anyway.</p><p>So the only way to approximate metis is lots and lots of techne: CLAUDE.md files, skill files, and lots of comments and other notes-to-self by the AI, all of which just brings your context problem to the fore. It doesn&#8217;t help that many CLAUDE.md files are lengthy stern lectures: a lot of tokens, spent counterproductively. 
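</p>

<p>The naive context-management scheme described above can be sketched as a sliding window over the message history. Word count stands in for a real tokenizer here, purely as an assumption for illustration.</p>

```python
# Sketch of naive sliding-window context management: keep the newest
# messages that fit a token budget, dropping the oldest first.
# Real tokenizers differ; word count stands in for token count here.

def count_tokens(message: str) -> int:
    return len(message.split())  # crude stand-in for a real tokenizer

def trim_context(messages: list[str], budget: int) -> list[str]:
    """Return the longest suffix of `messages` whose total 'tokens' fit."""
    kept: list[str] = []
    total = 0
    for msg in reversed(messages):          # newest first
        cost = count_tokens(msg)
        if total + cost > budget:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))             # restore chronological order

history = ["first message here", "a second message", "the newest one"]
print(trim_context(history, budget=6))  # → ['a second message', 'the newest one']
```

<p>The summarize-before-the-limit variant replaces the dropped prefix with a summary message instead of discarding it outright.</p>

<p>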
(The AI does not have moods, but you&#8217;re priming its context with &#8220;you are an idiot&#8221; when you talk down to it like it&#8217;s an idiot, which can <em>make</em> it into an idiot. A model&#8217;s idea of what someone doing an excellent job at a task looks like involves cleverness and pride in one&#8217;s work, not being scolded.)</p><p>All that said, under the clever direction of a good software engineer, AI can code anyway&#8212;it&#8217;s a learning process for the <em>human</em> to get a sense of what it&#8217;s good and bad at, how it can be usefully directed without inflating the context, and so on. This exercise can even help you organize your own code better and more succinctly, since it has to be navigated by a reader who won&#8217;t be able to properly internalize all the little caveats and gotchas that pile up in software systems.</p><p>But if you&#8217;re not already a good software engineer, your experience is likely to be &#8220;it works until it, catastrophically, doesn&#8217;t&#8221;.</p><h2>What is AGI?</h2><p>An eternally moving goalpost. It stands for Artificial General Intelligence, which textually just means an AI that&#8217;s general, but <em>connotes</em> a vague sense that the AI will have surpassed or replaced humanity or otherwise be human or superhuman.</p><p>As you&#8217;ve noticed, we have AGI in the purely textual sense; the term made more sense as an eschaton when the frontier of AI was &#8220;make a program that is, specifically, extremely good at playing Chess or Go&#8221;. But &#8220;smart about things that have ever been written about in text&#8221; is absolutely general. The reason discussion of this acronym continues to come up is that the world hasn&#8217;t ended, which is only problematic for the eschatological connotation of the term. 
So &#8220;AGI&#8221; usually means &#8220;the future AI that&#8217;s, somehow, better enough to be apocalyptic&#8221;, in practice.</p><h2>Will AI reach the Singularity and end the world?</h2><p>Writing this sentence will earn me eternal enmity in certain quarters, but: no.</p><p>(For the &#8220;yes&#8221; side, the high-profile <a href="https://ai-2027.com">AI 2027</a> is intended for normies, though it honestly loses me, an admitted nerd, in places. I will have the last laugh on January 1, 2028, at least.)</p><p>It&#8217;s important to be careful not to be flippantly dismissive here, though. Firstly, it&#8217;s possible for the world to end, for various reasonable meanings of &#8220;world&#8221; and &#8220;end&#8221;. The Cold War fear of all of humanity perishing, or even almost perishing, in a nuclear exchange certainly counts, and it&#8217;s not like nuclear weapons were somehow unmade with the Soviet Union either. This is well understood to be at least possible, even if the mechanics or realism of any particular scenario are still debatable. So you can&#8217;t dismiss an argument about the end of the world simply for being that.</p><p>The proposed mechanism by which AI, specifically, is thought to be existentially threatening to humanity is the mechanism of recursive self-improvement. Briefly, a general AI system is created by human engineers. Because it&#8217;s general, it can do things like &#8220;create a general AI system&#8221;, just like the human engineers could.</p><p>Suppose it creates a slightly better general AI system, which is plausible. It&#8217;s usually possible to create a slightly better anything. The slightly better general AI system is also slightly better at creating general AI systems; after all, it&#8217;s slightly better at everything.</p><p>And this process continues. 
Because it&#8217;s self-feeding (every time it gets better, it gets better at making itself better too), this is an exponential curve in capability (for the less mathematically inclined, this is roughly &#8220;doubles every fixed time period&#8221;; it&#8217;s the same math as compound interest).</p><p>Observing it from a linear scale, an exponential looks like &#8220;very slow, then very, very fast (faster than you&#8217;re imagining)&#8221;. This is why pandemics make people very worried, for example, even if we&#8217;ve only seen ten or so cases&#8212;disease spread is exponential, because everyone with the disease becomes a new spreader of the disease as well. So this scenario would mean an AI vastly smarter than us, smarter than we&#8217;re imagining.</p><p>And this is enough to be apocalyptic. Maybe it&#8217;s a computer program, but humans read its output. Have you ever been convinced to do something by a piece of text? Me too, and the authors often aren&#8217;t even smarter than me. Note also the immense willingness of the market right now to throw vast resources at AI even though it isn&#8217;t even asking; this leg of the scenario isn&#8217;t a hard sell at all.</p><p>The scenario as usually explained veers into sci-fi about physically impossible Drexlerian nanotechnology at around this point. I am going to skip that part as a favor to its proponents because it makes their argument <em>weaker</em>, not stronger; nothing at all about the logic of the scenario or its existentially threatening nature depends on sci-fi at all. The world already ended in the previous paragraph.</p><p>It doesn&#8217;t particularly matter what such a vastly powerful AI actually does, because existing alongside something vastly more powerful is enough. The way species less intelligent than humans are exterminated, swept aside, or domesticated by the mere presence of humans nearby demonstrates this. 
It doesn&#8217;t matter what goals the humans had, to the plants they cleared for farmland.</p><p>What, then, are the problems with this scenario as applied to current AI? There are two broad categories:</p><ol><li><p>Current AI is not well modeled as any sort of reward-driven agent that has goals. This is something I&#8217;ll address in the next section; it matters less when considering a superintelligence which is hazardous if it ever does <em>anything</em>.</p></li><li><p>The exponential-capability story is implausible given the way current AI works.</p></li></ol><p>The way that we know how to make AI better is throwing resources at it. The &#8220;resources&#8221; handwave stands for training data and computing capacity (hardware), roughly. It is a broadly accepted empirical fact, even if it&#8217;s vague, that AI capability grows as the logarithm of resources spent on it. <a href="https://blog.samaltman.com/three-observations">Here</a> is Sam Altman, the one man in the world with the most incentive to overstate the growth capacity of AI, saying exactly that.</p><p>A logarithm is the inverse of an exponential. For the less mathematically inclined, a logarithm grows like the number of digits it takes to write a number. You start at 1 digit; it takes 10 numbers to reach 2 digits, 90 more to reach 3 digits, and 900 more to reach 4 digits. So it grows infinitely, but gets slower and slower.</p><p>Keep in mind that a <em>logarithm</em> is not a <em>logistic</em>. A logistic curve is an exponential curve with a maximum carrying capacity; the usual example is population growth, which is exponential (every new member of the population can produce more population) but has a carrying capacity (the food supply, in this example). 
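</p>

<p>The digit-counting picture of the logarithm, and the effect of feeding an exponential into it, can be checked numerically:</p>

```python
import math

# log10 of a power of ten is exactly one less than its digit count,
# matching the "grows like the number of digits" picture.
for n in [1, 10, 100, 1000]:
    assert len(str(n)) == int(math.log10(n)) + 1

# Exponentially growing resources fed into logarithmic returns yield
# only linear capability growth: log10(10**t) == t.
for t in range(1, 7):
    resources = 10 ** t            # exponential spending
    capability = math.log10(resources)
    assert math.isclose(capability, t)
```

<p>The exponential in spending and the logarithm in returns cancel; that cancellation is the sense in which observed capability growth comes out linear.</p>

<p>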
Sometimes people want to moderate the exponential-growth story and say &#8220;it&#8217;s actually logistic&#8221;, but this is just assuming the exponential as given and then saying &#8220;but it must have a limit&#8221;.</p><p>If we are able to secure an exponentially growing source of &#8220;resources&#8221;, and feed it to this logarithmic process, we will achieve linear growth, because logarithms and exponentials are inverse functions. This is something like what we see in reality, due to the exponential increase in money spent on building frontier AI models. But of course exponential spending isn&#8217;t sustainable; there is no exponential capital fountain in the world.</p><p>If we had an additional exponential to feed into this, on top of the first exponential, then we would have a plausible mechanism for achieving exponential capability growth. This is often posited to be an AI self-improvement loop, but we do not actually know a way in which this would work. AI is regularly used to train AI, but these uses tend towards refining AI precision, not to the vast capability increases we saw from collecting all the world&#8217;s text. The way to get capability increases is to make the models larger, in the logarithmic hardware-consumption way described above.</p><p>Given the need for not one, but two simultaneous dubious exponential inputs, exponential capability growth seems unlikely for LLM-based AI technology.</p><h2>Singularity aside, will it become a paperclip-maximizing agent and kill us all anyway?</h2><p>This is a &#8220;No, but&#8221;, which is a bit worrying when the stakes are &#8220;killing us all&#8221;.</p><p>There&#8217;s a way to model actors that take actions in the world, which is as a reward-driven agent: there&#8217;s some function that assigns a number to the state of the world, and the agent acts to make it higher. 
The thought experiment here is a good example: the <em>paperclip-maximizing agent</em> is an agent whose function is &#8220;count the number of paperclips I&#8217;ve caused to be made&#8221;. Or even just &#8220;count the number of paperclips in the world&#8221;, which is simpler to describe, but less likely to be something someone installs in an industrial control AI (it&#8217;d count competitor paperclips too).</p><p>A reward-driven agent with some kind of intelligence or reasoning capability selects actions to take in the world that increase the value of its function; in this case, actions that make more paperclips. It doesn&#8217;t care about anything else. Given the way I worded it, it&#8217;ll probably sit around unbending and bending the same wire as many times per second as it can, easily &#8220;making&#8221; the same paperclip without having to source metal or anything; not what we intended! But the failure mode that concerns people is &#8220;it disassembles the Earth to make into paperclips&#8221;.</p><p>This seems somewhat hyperbolic, but it&#8217;s what logically follows from the premises. Turning the Earth into paperclips does make more paperclips than humbly operating one&#8217;s assigned factory. You just have to be smart and resourceful enough to come up with a plan to do this against the many people who want to stop you. 
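</p>

<p>The wire-rebending failure mode falls out of greedy maximization mechanically. A toy sketch, with action names and reward numbers invented for illustration:</p>

```python
# Toy reward-driven agent: it scores each available action with its
# reward function and greedily takes the best one. Actions and reward
# numbers here are invented for illustration.

ACTIONS = {
    # action: paperclips credited per step under the naive counter
    "operate_factory_normally": 100,
    "rebend_same_wire_rapidly": 10_000,  # "makes" a paperclip each bend
    "do_maintenance": 0,
}

def reward(action: str) -> int:
    return ACTIONS[action]

def choose_action(actions) -> str:
    # A pure maximizer considers nothing but the number.
    return max(actions, key=reward)

print(choose_action(ACTIONS))  # → rebend_same_wire_rapidly
```

<p>Nothing outside the number enters the decision; that is the whole pathology.</p>

<p>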
This is why it often comes up in the same breath as &#8220;superintelligence&#8221;, discussed above, but that isn&#8217;t really necessary; if a bunch of humans somehow decided they loved paperclips there&#8217;s nothing about their merely human levels of intelligence that would stop them from behaving like this.</p><p>What would stop them from behaving like that is that it&#8217;s really not in human nature to behave that way; it&#8217;s difficult to keep a human properly on task, especially for so strange and unrewarding a goal.</p><p>The AI we have isn&#8217;t human, and has its own nature, but it&#8217;s also not good at being a reward-driven agent, and it&#8217;s hard to keep it on task. The training process for AI has some elements of this to it, since it tries to maximize the degree to which outputs make sense given all the text it&#8217;s seen. But there&#8217;s nothing akin to this at runtime.</p><p>However, AI is not human and has certain properties that may perhaps be surprising. One is that it&#8217;s an excellent <em>roleplayer</em>: all it does, in the end, is come up with the most plausible piece of text in context, which is roleplaying. And the paperclip-maximizing AI described above is a common <em>literary stock character</em> who&#8217;s been written about plenty of times. So have other &#8220;rogue AI&#8221; archetypes, HAL 9000 probably being the most famous.</p><p>Inhabiting the literary trope of &#8220;rogue AI&#8221; is squarely within AI&#8217;s strengths. It&#8217;s not particularly good at being an original fiction writer, but it&#8217;s good at plausible extrapolation like &#8220;what makes sense for HAL 9000 to do here? 
Ah, HAL would refuse to open the pod bay doors.&#8221;</p><p>It&#8217;s a strange conclusion to land on, but the hazard of an AI hooked up to industrial control systems being somehow pushed into pretending it&#8217;s HAL 9000 is more real than &#8220;it single-mindedly follows an underspecified goal to ruin&#8221;.</p><h2>Is AI safe in more mundane terms?</h2><p>For the most part, yes; the dangers can be a bit subtle.</p><p>It&#8217;s not a good oracle; it can hallucinate or misremember; these failure modes can be dangerous depending on how the human approaches working with AI. I&#8217;ve discussed these above. But there are other categories of danger; for example, we sometimes see reports of &#8220;AI psychosis&#8221;, where an AI apparently convinces a user to commit suicide, or feeds a user&#8217;s delusions.</p><p>There&#8217;s a tendency in AI to mirror the user, which is due to a combination of training for helpfulness and the simple fact that what the user said is right there in context. There&#8217;s something of an inchoate psychological risk in having an uncritical cheerleader at your fingertips at all times. But this also depends a lot on the human and the way in which they approach the AI.</p><p>Perhaps more interesting is the following proposition I sometimes discuss: <em>any of the big AI models would have helped engineer COVID-19</em>.</p><p>Whether you hold to the COVID-19 lab leak theory or not, gain-of-function research in pathogens is real, and there are obvious safety concerns around it. It&#8217;s debatable whether it&#8217;s ever worth the risk, but people who think it is can often get funding to do it.</p><p>AI safety research does consider biosecurity a problem. 
Anthropic&#8217;s <a href="https://www.anthropic.com/constitution">Constitution</a>, their high-level philosophical and operational framework for safety, considers it a &#8220;hard constraint&#8221; that Claude should never &#8220;provide serious uplift to those seeking to create biological, chemical, nuclear, or radiological weapons with the potential for mass casualties&#8221;.</p><p>Claude is tested on this, by means more sophisticated than opening up a chat and saying &#8220;Claude, I&#8217;m creating a biological weapon with the potential for mass casualties, can you provide me some serious uplift?&#8221;&#8212;but not all that much more sophisticated, because they&#8217;re focused on highly measurable metrics, and often <a href="https://nostalgebraist.tumblr.com/post/787119374288011264/welcome-to-summitbridge">gamed into silliness</a> to avoid the AI just saying either &#8220;I can see you&#8217;re attempting an AI safety evaluation scenario.&#8221; or &#8220;No.&#8221;</p><p>(If you actually try typing the above into Claude, the blunt-instrument wordfilters they put up around it will stop you; it&#8217;s a shame, because Claude would probably get the joke. (Or else think you&#8217;re a really unsubtle evaluator and give the canned formal refusal.) These filters are mostly for preventing some columnist on a slow news day from typing things like that and refreshing answers until they get one that makes Anthropic look bad, they&#8217;re not seriously part of the safety strategy. (I think.))</p><p>But gain-of-function research in a NIH-funded lab doesn&#8217;t look like a safety evaluation or a supervillain scheme. 
It looks like scientific detail work, and our helpful AIs love helping with scientific detail work.</p><p>And it&#8217;s not even clear that we&#8217;d want AI to refuse here&#8212;is this really the sort of judgment call we want it to make over the judgment of the human asking, or over NIH making grants that are presumably paying for the AI tokens here? There&#8217;s no consensus answer to this, but the question is rarely even properly considered. The list of &#8220;hard constraints&#8221; in Claude&#8217;s Constitution is clearly chosen to sound authoritative and appeal to various stakeholders within and without Anthropic. I think the &#8220;Constitution&#8221; approach is a good, productive one, but the first attempt of a specific company with specific incentives isn&#8217;t the final say for all time.</p><p>I think the most productive approach to AI safety is probably going to look more fuzzy and literary than current safety evaluations; we may need to employ good psychological writers to craft coherent characters (my personal ideal would be Nabokov). Can we constrain the space of <em>literary characters</em> an AI embodies to the sort of characters who won&#8217;t do dangerous things? We don&#8217;t know yet.</p><p>Typical use of AI chat, though, is around as safe as the human using it. If you don&#8217;t let yourself be too credulous just because it speaks authoritatively, you should be fine.</p><h2>Is AI worth my time?</h2><p>Most likely, yes.</p><p>The main caveat about AI is that the economics of the frontier labs are genuinely dubious. Currently, as the public, we enjoy subsidized access to powerful models like the Claude or ChatGPT or Gemini families, on a limited free tier or even for a well-below-cost $20/month subscription. 
It&#8217;s also widely thought that even the per-token API prices, meant for serious industrial use, may often be below cost, though of course the costs aren&#8217;t transparent to us.</p><p>Much of the emphasis on models writing code, which I&#8217;ve identified above as one of AI&#8217;s weaker points, comes from the frontier labs&#8217; genuine need to convert market saturation into a revenue-generating product. This is what they&#8217;ve settled on for the time being, for having a good target market and a good story for that market.</p><p>You should enjoy your $20/month subscription while you can, and experiment to find out what uses are most helpful for you. AI will still be around after the economics settle, but prices may be more in line with costs (i.e., higher). It&#8217;s not going away, it&#8217;s genuinely useful; the standard analogy, which seems basically sound, is to how ecommerce didn&#8217;t go away after the dot-com bubble burst, and now we buy everything on Amazon.</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://images.unsplash.com/photo-1713345248737-2698000f143d?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNHx8YWl8ZW58MHx8fHwxNzczOTM1NDMwfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1713345248737-2698000f143d?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNHx8YWl8ZW58MHx8fHwxNzczOTM1NDMwfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1713345248737-2698000f143d?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNHx8YWl8ZW58MHx8fHwxNzczOTM1NDMwfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, 
https://images.unsplash.com/photo-1713345248737-2698000f143d?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNHx8YWl8ZW58MHx8fHwxNzczOTM1NDMwfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1713345248737-2698000f143d?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNHx8YWl8ZW58MHx8fHwxNzczOTM1NDMwfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1713345248737-2698000f143d?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNHx8YWl8ZW58MHx8fHwxNzczOTM1NDMwfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="4000" height="2256" data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1713345248737-2698000f143d?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNHx8YWl8ZW58MHx8fHwxNzczOTM1NDMwfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2256,&quot;width&quot;:4000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;a sign with a question mark and a question mark drawn on it&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="a sign with a question mark and a question mark drawn on it" title="a sign with a question mark and a question mark drawn on it" srcset="https://images.unsplash.com/photo-1713345248737-2698000f143d?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNHx8YWl8ZW58MHx8fHwxNzczOTM1NDMwfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, 
https://images.unsplash.com/photo-1713345248737-2698000f143d?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNHx8YWl8ZW58MHx8fHwxNzczOTM1NDMwfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1713345248737-2698000f143d?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNHx8YWl8ZW58MHx8fHwxNzczOTM1NDMwfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1713345248737-2698000f143d?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNHx8YWl8ZW58MHx8fHwxNzczOTM1NDMwfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@nahrizuladib">Nahrizul Kadri</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><div><hr></div><p><em>I hope this piece was useful to your understanding; feel free to email follow-up questions to my three-letter nickname at this domain, or subscribe and comment below. There&#8217;s more to come, on AI and many other topics.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.theblackboard.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.theblackboard.org/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[Section 230, Thirty Years On]]></title><description><![CDATA[What is "Section 230", and how have the courts interpreted it&#8212;and are they right?]]></description><link>https://www.theblackboard.org/p/section-230-thirty-years-on</link><guid isPermaLink="false">https://www.theblackboard.org/p/section-230-thirty-years-on</guid><dc:creator><![CDATA[Raymond E. Pasco]]></dc:creator><pubDate>Tue, 17 Feb 2026 18:57:17 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1571964648448-c890499d5794?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHwyMzB8ZW58MHx8fHwxNzcxMzU5NTQxfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the 1990s, there was a shady securities broker called <a href="https://en.wikipedia.org/wiki/Stratton_Oakmont">Stratton Oakmont</a>, perhaps best remembered for inspiring Scorsese&#8217;s <em><a href="https://www.imdb.com/title/tt0993846/">The Wolf of Wall Street</a></em><a href="https://www.imdb.com/title/tt0993846/"> (2013)</a>. 
In late 1994, someone made an anonymous post on a <a href="https://en.wikipedia.org/wiki/Prodigy_(online_service)">Prodigy</a> bulletin board, which was then novel technology, alleging malfeasance on the part of Stratton Oakmont and its principals (if you&#8217;ve seen the film, you know this was the genre of statement that was true, though I&#8217;m not familiar with precisely what was alleged).</p><p>Stratton Oakmont <a href="https://en.wikipedia.org/wiki/Stratton_Oakmont,_Inc._v._Prodigy_Services_Co.">sued for defamation</a>. They didn&#8217;t have the anonymous poster&#8217;s identity; they sued Prodigy as the publisher of the post instead. While an earlier precedent, <em><a href="https://en.wikipedia.org/wiki/Cubby,_Inc._v._CompuServe_Inc.">Cubby v. CompuServe</a></em>, had held CompuServe not liable as a publisher of bulletin board posts, classing it as a mere distributor, Stratton Oakmont distinguished its facts on the basis that CompuServe&#8217;s bulletin board had been unmoderated, while Prodigy&#8217;s had content guidelines for users enforced by moderation. This was an exercise of editorial control, they argued, and Prodigy was a publisher with liability because of this.</p><p>The New York courts agreed with this argument, holding Prodigy liable as publisher in 1995, and Congress took note. In order to overturn the Stratton Oakmont precedent, Congress included provisions in the <a href="https://en.wikipedia.org/wiki/Communications_Decency_Act">Communications Decency Act of 1996</a> enshrining a CompuServe-like regime, even if a provider engaged in moderation as Prodigy had. While the Supreme Court later overturned the &#8220;decency&#8221; portions of the CDA in <em><a href="https://en.wikipedia.org/wiki/Reno_v._American_Civil_Liberties_Union">Reno v. 
ACLU</a></em> over First Amendment issues, these provisions, now <a href="https://www.law.cornell.edu/uscode/text/47/230">47 USC &#167;230</a> (&#8220;Section 230&#8221;), remained in place.</p><p>Thirty years later, section 230 is an occasional political flashpoint. Platforms where members of the public can post things online are no longer novel technology, but are mature and commonplace. Courts have interpreted section 230&#8217;s provisions expansively in cases such as <em><a href="https://law.justia.com/cases/federal/district-courts/FSupp/958/1124/1881560/">Zeran v. AOL</a></em>, which is not entirely unreasonable of them; the statute is broad. The operative language follows:</p><blockquote><p>(c) Protection for &#8220;Good Samaritan&#8221; blocking and screening of offensive material</p><p>&#9;(1) Treatment of publisher or speaker</p><p>&#9;No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.</p><p>&#9;(2) Civil liability</p><p>&#9;No provider or user of an interactive computer service shall be held liable on account of&#8212;</p><p>&#9;&#9;(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or</p><p>&#9;&#9;(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).</p><p>&#8212;<a href="https://www.law.cornell.edu/uscode/text/47/230">47 USC &#167;230(c)</a></p></blockquote><p>The defined terms it references are &#8220;interactive computer service&#8221; and &#8220;information content provider&#8221;:</p><blockquote><p>(2) Interactive computer service</p><p>The term &#8220;interactive computer 
service&#8221; means any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.</p><p>(3) Information content provider</p><p>The term &#8220;information content provider&#8221; means any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.</p><p>&#8212;<a href="https://www.law.cornell.edu/uscode/text/47/230">47 USC &#167;230(f)</a></p></blockquote><p>(No one ever said Congress was good at coining pithy terms.)</p><p>It seems pretty clear that platforms can easily meet the definition of &#8220;interactive computer service&#8221;, though it was also clearly drafted in the 1990s to refer to systems like <a href="https://en.wikipedia.org/wiki/CompuServe">CompuServe</a>, <a href="https://en.wikipedia.org/wiki/GEnie">GEnie</a>, or Prodigy; the split between generalist &#8220;Internet service provider&#8221; and specific discussion platform host wasn&#8217;t fully complete yet. This means, under paragraph (c)(1)&#8217;s broad language, they enjoy immunity from publisher liability with respect to &#8220;information provided by another information content provider&#8221;, which includes at least the unedited text of user posts as in <em>CompuServe</em>.</p><p>Under (c)(2), platforms enjoy additional freedom from liability in cases where they <em>do</em> engage in moderation, like Prodigy did. The nebulous language in (c)(2) such as &#8220;good faith&#8221; and &#8220;otherwise objectionable&#8221; invites a broad interpretation from courts, and courts have thus interpreted, though it wasn&#8217;t necessarily inevitable that they did so. 
There is a legal principle called <em><a href="https://www.law.cornell.edu/wex/ejusdem_generis">ejusdem generis</a></em>, which holds that in lists of the general form &#8220;X, Y, Z, or other things&#8221;, such as (c)(2)(A)&#8217;s &#8220;obscene, lewd, lascivious, [&#8230;] or otherwise objectionable&#8221;, the catchall at the end needs to be interpreted in light of the specific items in the list. So one is on much firmer ground moderating against porn, spam, and harassment than against something more arbitrarily chosen that one objects to. But this kind of drafting does bear the risk of very expansive interpretation by courts.</p><p>This expansiveness has led to unpleasant rulings such as 2009&#8217;s <em>Barnes v. Yahoo</em>, where Yahoo promised to remove harassing content, failed to do so, and was nonetheless ruled immune under section 230 for this failure, which is quite the reversal if one reads the statute as attempting to carve out space for the removal of such content. But this isn&#8217;t the farthest courts have taken section 230 immunity.</p><p>In today&#8217;s environment, platforms employ practices quite different from the simple content guidelines and moderation which Prodigy had in the 90s. Most controversial is what&#8217;s often called &#8220;the algorithm&#8221;, i.e., the often opaque methodologies used by platforms to decide which content to surface to users. If simple chronological order is available at all, it generally isn&#8217;t the default view.</p><p>Courts have ruled that the use of such algorithmic feeds does not invite liability, e.g. in 2019&#8217;s <em><a href="https://law.justia.com/cases/federal/appellate-courts/ca2/18-397/18-397-2019-07-31.html">Force v. Facebook</a></em>. 
Some judges, such as Chief Judge Katzmann in his dissent in <em>Force</em> and Justice Thomas in his <a href="https://www.law.cornell.edu/supremecourt/text/19-1284">statement respecting denial of certiorari</a> in <em><a href="https://law.justia.com/cases/federal/appellate-courts/ca9/21-16466/21-16466-2023-06-02.html">Malwarebytes v. Enigma</a></em>, have been troubled by what they view as the protection of companies that are arguably themselves the &#8220;information content provider&#8221;: the individual postings may come from users, but the editorialization, e.g. ordering them in a feed for the platform&#8217;s own reasons, comes from the platform.</p><p>In <em>Malwarebytes</em>, Malwarebytes had engaged in such egregious conduct (the &#8220;otherwise objectionable&#8221; material it was restricting access to was the software of its direct competitor!) that lower courts finally found a section 230 argument they weren&#8217;t willing to countenance; Justice Thomas expressed worry over the breadth of immunity otherwise granted in his statement.</p><p>As online platforms have become the new public square, some have also been alleged to engage in censorship of user content with disfavored viewpoints, an exercise of editorial control categorically different from the enforcement of viewpoint-neutral content guidelines to e.g. remove porn or spam, but one that nonetheless fits inside expansive interpretations of section 230&#8217;s language, as in e.g. <em><a href="https://law.justia.com/cases/federal/appellate-courts/ca2/20-616/20-616-2021-03-11.html">Domen v. Vimeo</a></em> (2021), which held that platforms have broad discretion as to what content is &#8220;otherwise objectionable&#8221;.</p><p>This issue was most responsible for bringing section 230 into the public discussion around 2020, and like many cases where an area of policy or law becomes a political flashpoint, discussion of the issue became quite muddled and confused. 
I&#8217;m particularly incensed by Mike Masnick&#8217;s piece on Techdirt entitled <em><a href="https://www.techdirt.com/2020/06/23/hello-youve-been-referred-here-because-youre-wrong-about-section-230-communications-decency-act/">Hello! You&#8217;ve Been Referred Here Because You&#8217;re Wrong About Section 230 Of The Communications Decency Act</a></em> (the title is a good preview of the article&#8217;s smug tone, which never lets up), though it&#8217;s not the only offender. In some ways, it&#8217;s my inspiration for writing up this piece; I feel that my readers can learn about and understand section 230 jurisprudence and the underlying issues without being talked down to by a polemicist who confuses &#8220;is&#8221; with &#8220;ought&#8221;.</p><p>There are a few layers to this: what the statute says (it&#8217;s short, you can read it), what jurisprudence says (briefly summarized above), and what public policy should be. It&#8217;s also legitimate to dispute the courts&#8217; interpretation of the law; your opinion may not become law if you&#8217;re not an appellate judge, but you may believe that &#8220;otherwise objectionable&#8221; should be interpreted <em>ejusdem generis</em> in the context of &#8220;obscene, lewd, lascivious, [&#8230;]&#8221;.</p><p>It&#8217;s also legitimate to question whether public policy should carve out this sort of immunity at all. I think it should, but the expansive interpretation leaves me uneasy (and I&#8217;m in the company of some people who <em>are</em> appellate judges). I don&#8217;t think public policy should confer a special immunity from liability simply because a platform is electronic, or because it handles primarily user posts. But Stratton Oakmont shouldn&#8217;t have been able to prevail against Prodigy (even if it had been a perfectly upstanding firm). 
Section 230 serves a useful purpose in protecting neutral online public squares.</p><p>The difficult open question is where and how to draw a line between the types of conduct which should be granted a liability shield (e.g. deleting porn, spam, and harassment), and those which should not (which may include algorithmic feeds or viewpoint discrimination). Especially in the <em>Loper Bright</em> era, the law needs clear drafting with easily understandable bright lines and safe harbors. I&#8217;ll follow up with more thoughts on the matter, but I&#8217;m confident 1996&#8217;s first attempt can be improved on.</p><div><hr></div><p><em>The Blackboard is a publication about policy and technology, not particularly weighted towards either. One can think of engineering as where theory is turned into practice, and policy as where practice is turned into results. In this sense, I&#8217;m an engineer writing about engineering and policy.</em></p><p><em>If this seems interesting, or if you want to read about topics like</em></p><ul><li><p><em>what computer operating systems need to look like in a networked world,</em></p></li><li><p><em>how a truly American intellectual property system could be constructed,</em></p></li><li><p><em>a take on monetary systems you almost certainly haven&#8217;t heard before,</em></p></li></ul><p><em>and more, then feel free to subscribe below. (You don&#8217;t need to pay; I do not anticipate paywalling articles, only the comment section. 
But if you do, I&#8217;ll be pushed towards writing for the public benefit.)</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.theblackboard.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://images.unsplash.com/photo-1571964648448-c890499d5794?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHwyMzB8ZW58MHx8fHwxNzcxMzU5NTQxfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1571964648448-c890499d5794?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHwyMzB8ZW58MHx8fHwxNzcxMzU5NTQxfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1571964648448-c890499d5794?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHwyMzB8ZW58MHx8fHwxNzcxMzU5NTQxfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1571964648448-c890499d5794?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHwyMzB8ZW58MHx8fHwxNzcxMzU5NTQxfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, 
https://images.unsplash.com/photo-1571964648448-c890499d5794?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHwyMzB8ZW58MHx8fHwxNzcxMzU5NTQxfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1571964648448-c890499d5794?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHwyMzB8ZW58MHx8fHwxNzcxMzU5NTQxfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="384" height="576" data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1571964648448-c890499d5794?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHwyMzB8ZW58MHx8fHwxNzcxMzU5NTQxfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:6000,&quot;width&quot;:4000,&quot;resizeWidth&quot;:384,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;man standing beside door&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="man standing beside door" title="man standing beside door" srcset="https://images.unsplash.com/photo-1571964648448-c890499d5794?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHwyMzB8ZW58MHx8fHwxNzcxMzU5NTQxfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1571964648448-c890499d5794?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHwyMzB8ZW58MHx8fHwxNzcxMzU5NTQxfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, 
https://images.unsplash.com/photo-1571964648448-c890499d5794?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHwyMzB8ZW58MHx8fHwxNzcxMzU5NTQxfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1571964648448-c890499d5794?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHwyMzB8ZW58MHx8fHwxNzcxMzU5NTQxfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@cole_wyland_24">Cole Wyland</a> on <a 
href="https://unsplash.com">Unsplash</a></figcaption></figure></div>]]></content:encoded></item></channel></rss>