<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[The Computist Journal: 💡 Philosophy]]></title><description><![CDATA[Philosophical discussions around topics that are relevant for Computer Scientists.]]></description><link>https://blog.apiad.net/s/philosophy</link><image><url>https://substackcdn.com/image/fetch/$s_!qNGT!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F582c72c0-c120-4ea8-ae6b-376a025250bb_1024x1024.png</url><title>The Computist Journal: 💡 Philosophy</title><link>https://blog.apiad.net/s/philosophy</link></image><generator>Substack</generator><lastBuildDate>Wed, 29 Apr 2026 19:52:11 GMT</lastBuildDate><atom:link href="https://blog.apiad.net/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Alejandro Piad Morffis]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[apiad@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[apiad@substack.com]]></itunes:email><itunes:name><![CDATA[Alejandro Piad Morffis]]></itunes:name></itunes:owner><itunes:author><![CDATA[Alejandro Piad Morffis]]></itunes:author><googleplay:owner><![CDATA[apiad@substack.com]]></googleplay:owner><googleplay:email><![CDATA[apiad@substack.com]]></googleplay:email><googleplay:author><![CDATA[Alejandro Piad Morffis]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Is the Universe a Computer? 
- Part I]]></title><description><![CDATA[Unravelling the true nature of the Cosmos, one bit at a time.]]></description><link>https://blog.apiad.net/p/is-the-universe-a-computer-part-i</link><guid isPermaLink="false">https://blog.apiad.net/p/is-the-universe-a-computer-part-i</guid><dc:creator><![CDATA[Alejandro Piad Morffis]]></dc:creator><pubDate>Fri, 02 Jan 2026 11:38:50 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1444703686981-a3abbc4d4fe3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHx1bml2ZXJzZXxlbnwwfHx8fDE3NjczMzc0OTd8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://images.unsplash.com/photo-1444703686981-a3abbc4d4fe3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHx1bml2ZXJzZXxlbnwwfHx8fDE3NjczMzc0OTd8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1444703686981-a3abbc4d4fe3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHx1bml2ZXJzZXxlbnwwfHx8fDE3NjczMzc0OTd8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1444703686981-a3abbc4d4fe3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHx1bml2ZXJzZXxlbnwwfHx8fDE3NjczMzc0OTd8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1444703686981-a3abbc4d4fe3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHx1bml2ZXJzZXxlbnwwfHx8fDE3NjczMzc0OTd8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, 
https://images.unsplash.com/photo-1444703686981-a3abbc4d4fe3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHx1bml2ZXJzZXxlbnwwfHx8fDE3NjczMzc0OTd8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1444703686981-a3abbc4d4fe3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHx1bml2ZXJzZXxlbnwwfHx8fDE3NjczMzc0OTd8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="5616" height="3744" data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1444703686981-a3abbc4d4fe3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHx1bml2ZXJzZXxlbnwwfHx8fDE3NjczMzc0OTd8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:3744,&quot;width&quot;:5616,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;silhouette photography of person&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="silhouette photography of person" title="silhouette photography of person" srcset="https://images.unsplash.com/photo-1444703686981-a3abbc4d4fe3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHx1bml2ZXJzZXxlbnwwfHx8fDE3NjczMzc0OTd8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1444703686981-a3abbc4d4fe3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHx1bml2ZXJzZXxlbnwwfHx8fDE3NjczMzc0OTd8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, 
https://images.unsplash.com/photo-1444703686981-a3abbc4d4fe3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHx1bml2ZXJzZXxlbnwwfHx8fDE3NjczMzc0OTd8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1444703686981-a3abbc4d4fe3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHx1bml2ZXJzZXxlbnwwfHx8fDE3NjczMzc0OTd8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@grakozy">Greg Rakozy</a> on <a 
href="https://unsplash.com">Unsplash</a></figcaption></figure></div><p>Let&#8217;s kick off 2026 with a thought experiment.</p><p>You&#8217;ve all heard the <a href="https://en.wikipedia.org/wiki/Simulation_hypothesis">Simulation Hypothesis</a> by now, right? The idea that we might just characters in a high-res version of <em>The Sims</em> running on a supercomputer in some teenager&#8217;s basement in another dimension.</p><p>Well, this is <strong>not</strong> that.</p><p>The simulation hypothesis is a fun thought experiment, and it&#8217;s certainly great for selling movie tips, but I think it misses the point entirely. It presumes an <em>outside</em> that we can never reach. But we don&#8217;t need to posit any external observer, a higher-dimensional teenager in a basement. What if the universe isn&#8217;t <em>inside</em> a computer; but it is <em>itself</em> a computer.</p><p>This is a serious metaphysical position held by several prominent thinkers known as <strong>pancomputationalism</strong>, which basically means everything there is, is just computation. In this and a follow up article, I want to run with this argument and stretch it to its ultimate implications. Not necessarily because <em>I</em> believe it (I will tell you <em>exactly</em> what I believe at the end) or because I want <em>you</em> to believe it.</p><p>No, I just honestly think this is a very intellectually engaging argument to discuss, and even if it ends up not being true in its ultimate form (which we shall see), it presents some pretty damn cool ideas&#8211;and some pretty horrific ones. 
So, even if just as a fun thought experiment, I invite you to ponder on it for a while.</p><p>You may be surprised by what you discover.</p><h2>The Universe as a Universal Computer</h2><p>To understand pancomputationalism, we have to go back to the 1980s, to a physicist-turned-polymath named Stephen Wolfram staring at a computer screen.</p><p>Wolfram was playing with something called <a href="https://en.wikipedia.org/wiki/cellular_automata">Cellular Automata</a>&#8212;incredibly simple programs that live on a grid of pixels. You start with a line of black and white squares and give them a rule: &#8220;If your neighbor is black and you are white, turn black in the next step.&#8221; It sounds like a toy, or perhaps something you&#8217;d learn in a middle-school coding class.</p><p>CAs were not Wolfram&#8217;s invention. We can trace them back to Stanislaw Ulam and the polymath John von Neumann in the 1940s (because of course anything you find interesting in the late 20th century was actually invented by von Neumann!)
but it wasn&#8217;t until the 70s that John Conway brought them to the public&#8217;s attention with a series of puzzles about a specific form of 2-dimensional CA called <a href="https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life">Game of Life</a>.</p><p>The following picture shows a &#8220;Glider Gun&#8221;, one of the most interesting patterns, which answers one of Conway&#8217;s original questions: whether there exist configurations that run forever without looping (you can see how the small thingies that are thrown diagonally&#8212;called gliders&#8212;keep moving forever).</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!F6SM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0c3f3cf-af39-4d49-b66f-a78bfd068e3e_250x180.gif" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!F6SM!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0c3f3cf-af39-4d49-b66f-a78bfd068e3e_250x180.gif 424w, https://substackcdn.com/image/fetch/$s_!F6SM!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0c3f3cf-af39-4d49-b66f-a78bfd068e3e_250x180.gif 848w, https://substackcdn.com/image/fetch/$s_!F6SM!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0c3f3cf-af39-4d49-b66f-a78bfd068e3e_250x180.gif 1272w, https://substackcdn.com/image/fetch/$s_!F6SM!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0c3f3cf-af39-4d49-b66f-a78bfd068e3e_250x180.gif 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!F6SM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0c3f3cf-af39-4d49-b66f-a78bfd068e3e_250x180.gif" width="320" height="230.4" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d0c3f3cf-af39-4d49-b66f-a78bfd068e3e_250x180.gif&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:180,&quot;width&quot;:250,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:21272,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/gif&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.apiad.net/i/183230309?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0c3f3cf-af39-4d49-b66f-a78bfd068e3e_250x180.gif&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!F6SM!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0c3f3cf-af39-4d49-b66f-a78bfd068e3e_250x180.gif 424w, https://substackcdn.com/image/fetch/$s_!F6SM!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0c3f3cf-af39-4d49-b66f-a78bfd068e3e_250x180.gif 848w, https://substackcdn.com/image/fetch/$s_!F6SM!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0c3f3cf-af39-4d49-b66f-a78bfd068e3e_250x180.gif 1272w, https://substackcdn.com/image/fetch/$s_!F6SM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0c3f3cf-af39-4d49-b66f-a78bfd068e3e_250x180.gif 1456w" sizes="100vw" 
loading="lazy"></picture><div></div></div></a><figcaption class="image-caption">Taken from Wikipedia. By Lucas Vieira - Own work, CC BY-SA 3.0, <a href="https://commons.wikimedia.org/w/index.php?curid=101736">https://commons.wikimedia.org/w/index.php?curid=101736</a></figcaption></figure></div><p>The reason these are interesting, among other things is because depending on the initial configuration, there are several, qualitatively distinct developments a cellular automaton can follow. Wolfram classified them into 4 more-or-less well-defined classes.</p><p>Classes 1 and 2 corresponding to different ways in which the initial configuration converges to a stable configuration (like the Glider Gun) that may or not keep evolving, but in a very controlled, predictable way. Some of these eventually stop, others keep going forever, but always in a very structured form.</p><p>Class 3, in contrast, is chaotic, random, unpredictable. In fact, several configurations in this category are thought to be cryptographically-secure random generators&#8212;the strongest kind of randomness you can get from a deterministic set of rules.</p><p>And then there is Class 4, a category of CA configurations that exhibit such a degree of complexity in their evolution that Wolfram conjectured most if not all configurations in this category where capable of general-purpose computation! Eventually, it was shown that extremely simple CAs do possess this trait, including one of Wolfram&#8217;s original CAs, <a href="https://en.wikipedia.org/wiki/Rule_110">Rule 110</a>, and several configurations in Conway&#8217;s Game of Life.</p><p>The reason this is interesting for our discussion, is because it led some thinkers to ponder whether this kind of complexity born out of simplicity could be the explanation behind all of physics. 
The first was perhaps Konrad Zuse, a German engineer who built the first programmable computer (the Z3) during World War II.</p><p>In his 1969 book <em>Rechnender Raum</em> (Calculating Space), he proposed that the universe is a discrete lattice. He argued that the infinities that plague quantum physics could be solved if we just admitted that space has a minimum pixel size. In Zuse&#8217;s vision, space isn&#8217;t an empty stage where things move around. Space <em>is the computer</em> itself. An electron isn&#8217;t a particle moving <em>through</em> the vacuum; it&#8217;s a propagating pattern <em>on</em> the spatial grid. Think of it like a glider in Conway&#8217;s Game of Life: the pixel doesn&#8217;t move, the <em>information</em> does.</p><p>This is the heart of <strong>Pancomputationalism</strong>: the idea that, deep down, the Universe is not a metaphorical clockwork machine but an actual digital computer, though not one made of transistors, of course. Wolfram takes this to an extreme by positing that all of physics can be derived from a simple computational model, a hypergraph of nodes and edges that change according to some specific rules. In this theory&#8212;which is perhaps the most advanced form of pancomputationalism&#8212;the underlying computational lattice is so intricate that extremely simple processes, like one electron jumping from one energy state to another, would require several trillions of operations.</p><p>But the details matter less than the general idea: if some extremely simple mechanisms (like the Rule 110 CA) can support general computation, then universal computers must be abundant in the Universe. As an example, the DNA molecule must be capable of universal computation!</p><p>So why not take this argument to its logical end? Perhaps this is all there is! Perhaps universal computation is the ultimate level of complexity.
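</p>

<p>Zuse&#8217;s glider analogy is easy to check in code. The following sketch of Conway&#8217;s Game of Life (my own minimal version, representing live cells as a set of coordinates) shows that after four steps the glider pattern reappears shifted one cell diagonally, even though no individual cell ever moves.</p>

```python
from collections import Counter

def life_step(live):
    """One Game of Life generation. `live` is a set of (x, y) live cells."""
    # Count how many live neighbors every cell has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = life_step(g)
print(g == {(x + 1, y + 1) for (x, y) in glider})  # True: same shape, shifted
```

<p>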
If that&#8217;s the case, then the whole Universe is ultimately just a very complex computer.</p><h2>Time for some counter-arguments</h2><p>Of course, the idea that the universe is just a computer&#8212;or more specifically, <a href="https://en.wikipedia.org/wiki/Turing_machine">a Turing machine</a>&#8212;is a hard pill to swallow for many. If the cosmos is indeed a digital lattice, then everything within it should be effectively computable. But nature has a way of throwing us curve balls that look suspiciously like hacks of the system: processes that seem to require more than universal computation to work out.</p><p>Let&#8217;s check two of the most famous counter-arguments to pancomputationalism.</p><h3>Black Holes and Hypercomputers</h3><p>One of the most persistent challenges to pancomputationalism is the idea of <strong>Hypercomputation</strong>. The argument goes like this: one of the simplest ways to show something is more complex than a Turing machine is to show it can solve problems known to be computationally undecidable. The most famous of these is the Halting problem&#8211;the question of whether a given arbitrary computation will eventually halt.</p><p>This was proven by Alan Turing himself to be generally undecidable. That is, you <em>cannot</em> predict whether a computer program will eventually halt or run forever just by looking at its source code. You can do it for some programs (because the code has no loops, for example, or because of some contrived mathematical property), but you cannot devise a single, universal rule&#8212;a computer program&#8212;that tells you without doubt whether another arbitrary program stops.</p><p>But you can simulate it! If you want to know whether a program ends, just run it. If it ends, then eventually you will know. The problem is, if it doesn&#8217;t end, you will never know!
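</p>

<p>This &#8220;just run it and watch&#8221; strategy is what logicians call <em>semi-decidability</em>: a &#8220;yes&#8221; answer always arrives eventually, a &#8220;no&#8221; never does. A toy sketch (using Python generators as stand-ins for arbitrary programs, purely for illustration):</p>

```python
def observe(program, max_steps):
    """Run `program` (a generator function) for at most `max_steps` steps.
    We can report "halted" with certainty, but never "runs forever"."""
    execution = program()
    for _ in range(max_steps):
        try:
            next(execution)        # advance the program by one step
        except StopIteration:
            return "halted"
    return "no verdict yet"        # it *might* still halt later on

def halts_quickly():
    for i in range(10):
        yield                      # ten steps of work, then stop

def runs_forever():
    while True:
        yield                      # an infinite loop

print(observe(halts_quickly, 1_000))  # halted
print(observe(runs_forever, 1_000))   # no verdict yet
```

<p>No matter how large we make <code>max_steps</code>, the second answer never improves; that is exactly the asymmetry Turing&#8217;s proof exploits.</p><p>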
You will just wait indefinitely, because you can never say &#8220;Ok, I&#8217;ve seen enough&#8221;; the program might just end on the very next iteration.</p><p>Ok, so here&#8217;s the twist. This presumes each iteration takes a <em>finite</em> amount of time, so an infinite number of iterations must take an infinite time, and you cannot outwait it. But what if we could make each iteration take less and less time? Say the first iteration takes 0.5 seconds, the next one 0.25 seconds, and so on, each iteration taking half the time of the previous one. If we had this magical computer that accelerates as it computes, we could finish an infinite number of iterations in a finite time!</p><p>This is called hypercomputation, and it is an extension of something called a <a href="https://en.wikipedia.org/wiki/Supertask">hypertask</a> to the realm of computational complexity. Hypertasks are like taking limits in real life: you perform each step slightly faster, in such a way that the infinite sum of smaller and smaller times converges to something finite. Much like Zeno&#8217;s paradox, except each step is not smaller, just faster.</p><p>Can we have hypercomputers in the real world? Well, some physicists suggest that the extreme gravity of a special, theoretical kind of black hole could allow for supertasks. Imagine a computer orbiting a black hole while an observer falls into it. Because of the way gravity warps time, the orbiting computer, in its own timeframe, can spend an eternity counting towards infinity, while the falling observer sees that entire infinite lifetime compressed into a finite amount of their own time.</p><p>This would be crazy! Imagine we want to know whether, e.g., the <a href="https://en.wikipedia.org/wiki/Riemann_hypothesis">Riemann hypothesis</a>&#8212;the most important unsolved problem in pure mathematics&#8212;is true. We would just write a program that checks the infinitely many zeros of the Zeta function one by one, leave it running in orbit, and dive into the black hole ourselves.
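</p>

<p>The arithmetic behind the accelerating computer is just a convergent geometric series, easy to check numerically (a quick sketch, not from the article):</p>

```python
# Step k of the supertask takes (1/2)**k seconds: 0.5, 0.25, 0.125, ...
# The total time for *infinitely* many steps converges to 1 second.
total = 0.0
for k in range(1, 60):
    total += 0.5 ** k
print(total)  # already indistinguishable from 1.0 in floating point
```

<p>An infinite number of steps, a finite total time: that is the loophole a supertask exploits.</p><p>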
As the computation stretches to infinity, we would know definitively whether any zero has real part different from 1/2, because the program would check them all in what is, for us, a finite time!</p><p>We could solve almost any open mathematical problem this way!</p><p>Now, this does require some stretch of the imagination, some bizarre physics that many don&#8217;t agree actually exists. These are not regular black holes, but a special kind of black hole that has never been observed. And of course, there is the issue of actually building a computer that can last forever! But nevertheless, just proving the physical possibility of hypercomputation&#8211;even if impossible in engineering terms&#8211;would undermine the whole idea of pancomputationalism.</p><h3>The Mind vs. The Machine</h3><p>Then there is the challenge from the human side of the fence. Roger Penrose, in his seminal <em>The Emperor&#8217;s New Mind</em>, famously argued that human consciousness is fundamentally non-algorithmic.</p><p>His argument leans heavily on <a href="https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems">G&#246;del&#8217;s Incompleteness Theorems</a>. Penrose claims that because we can &#8220;see&#8221; that a G&#246;delian statement is true&#8212;even though it&#8217;s unprovable within its own formal system&#8212;our brains must be doing something that transcends simple computation. He actually suggests that this &#8220;something&#8221; is a quantum gravity effect happening in tiny structures of our brain called microtubules.</p><p>Critics like Scott Aaronson are quick to point out that even if our brains use quantum effects, that doesn&#8217;t necessarily make them hypercomputational. (A quantum computer, after all, is still just a computer; it just belongs to a different complexity class.) But Penrose&#8217;s argument hits on a deep-seated intuition: that phenomena such as love, consciousness, or creativity aren&#8217;t just very long strings of 1s and 0s.
If this is true, then not only is the Universe itself uncomputable, but the most interesting human traits, perhaps even the root of intelligence itself, would be forever outside the reach of regular computers&#8212;no superintelligent AI for you, Sam Altman.</p><h2>The Universe&#8217;s Hardware Spec</h2><p>But for now, let&#8217;s run with the original argument. If we do assume the Universe is doing something akin to general computation, we can ask the next logical question: How long has this been going on? Not in years, of course&#8211;we kinda know that already&#8211;but in <em>computational steps</em>. How big is the Universal Hardware?</p><p>The question might seem bogus at first, but underneath this seemingly crazy assumption we will find some profound truths about the nature of computation. To even start to answer it, we must link two seemingly separate realms: the digital and the physical.</p><p>We often think of information as something ethereal&#8212;software is just math, right? But in the 1960s, Rolf Landauer proved us all wrong. He realized that information is fundamentally tied to thermodynamics. Specifically, <a href="https://en.wikipedia.org/wiki/Landauer%27s_principle">Landauer&#8217;s Principle</a> states that any logically irreversible operation&#8212;like erasing a bit of information&#8212;must dissipate a minimum amount of heat.</p><p>This isn&#8217;t the regular heat you feel when your computer works for too long, of course. That is just a byproduct of imperfect engineering&#8211;moving pieces with friction, electrical wiring with imperfect insulation, etc. No, this is much more fundamental.
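</p>

<p>How much heat, exactly? Landauer&#8217;s bound is k<sub>B</sub>T ln 2 joules per erased bit. A quick back-of-the-envelope calculation (the constants are standard; the gigabyte example is my own):</p>

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, in joules per kelvin
T = 300.0            # roughly room temperature, in kelvin

# Landauer's principle: erasing one bit dissipates at least k_B * T * ln(2).
e_bit = k_B * T * math.log(2)
print(f"Minimum heat per erased bit: {e_bit:.2e} J")

# Even erasing a full gigabyte (8e9 bits) costs a vanishingly small amount:
print(f"Minimum heat per erased GB:  {e_bit * 8e9:.2e} J")
```

<p>Tiny as it is, the bound is nonzero, and that is all the argument needs.</p><p>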
Even if you remove all sources of energy loss, the fact that two bits of information become one (when you <em>AND</em> two boolean variables, for example) means that some amount of energy is released, increasing the entropy of the system.</p><p><em>Thinking</em> has a literal energy cost.</p><p>This gives us a maximum performance for any given physical computer: <a href="https://en.wikipedia.org/wiki/Bremermann%27s_limit">Bremermann&#8217;s Limit</a>. Take the total energy you input into the system, assume there are no extra losses, and you will still deplete that energy just by <em>doing computation</em>. So, if the Universe is a computer, it must have this physical limit. After all, there is only so much energy in the Universe. How fast is the cosmic CPU?</p><p>If you take 1 kg of matter and turn it into the Ultimate Laptop, it can process at most 1.36&#8197;&#215;&#8197;10<sup>50</sup> bits per second. That&#8217;s a staggering number&#8212;far beyond anything we can currently build&#8212;but it&#8217;s not infinite. It is a hard, physical ceiling. There is no &#8220;overclocking&#8221; the universe beyond the speed of light and the Planck constant.</p><p>When you scale this logic up to the entire observable universe, you get a spec sheet for reality that is both terrifying and humbling.</p><p>Physicist Seth Lloyd calculated that since the Big Bang, the entire universe has performed at most 10<sup>120</sup> operations on roughly 10<sup>90</sup> bits. To put that in perspective, there are some 10<sup>80</sup> elementary particles in the observable universe, so each particle must be represented by something like 10<sup>10</sup> bits. Huge, yes, but <em>finite</em>.</p><h2>Where does this leave us?</h2><p>If the Universe is indeed a computer, one that has been running for a finite amount of time and has thus performed a finite amount of computation, what can we say about the laws of physics?
Are there things forever beyond the realm of science?</p><p>This is the question I want to answer next, and it will take us on a tour through some of the most important theoretical concepts in computer science. In the end, we will see that a Digital Universe has some damn horrifying constraints, but it also possesses some beautiful freedoms we can only dream of in an otherwise uncomputable Universe.</p><p>But that is a story for another Friday!</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.apiad.net/p/is-the-universe-a-computer-part-i/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.apiad.net/p/is-the-universe-a-computer-part-i/comments"><span>Leave a comment</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Is Science the Answer to Life, the Universe, and Everything? - Part I]]></title><description><![CDATA[Exploring the intricate world of scientific theory and practice.]]></description><link>https://blog.apiad.net/p/is-science-the-answer-to-life-the</link><guid isPermaLink="false">https://blog.apiad.net/p/is-science-the-answer-to-life-the</guid><dc:creator><![CDATA[Alejandro Piad Morffis]]></dc:creator><pubDate>Wed, 14 Aug 2024 14:14:50 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1444703686981-a3abbc4d4fe3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw2fHxhc3Ryb25vbXl8ZW58MHx8fHwxNzIzNjQ0NzUwfDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://images.unsplash.com/photo-1444703686981-a3abbc4d4fe3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw2fHxhc3Ryb25vbXl8ZW58MHx8fHwxNzIzNjQ0NzUwfDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1444703686981-a3abbc4d4fe3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw2fHxhc3Ryb25vbXl8ZW58MHx8fHwxNzIzNjQ0NzUwfDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1444703686981-a3abbc4d4fe3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw2fHxhc3Ryb25vbXl8ZW58MHx8fHwxNzIzNjQ0NzUwfDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1444703686981-a3abbc4d4fe3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw2fHxhc3Ryb25vbXl8ZW58MHx8fHwxNzIzNjQ0NzUwfDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1444703686981-a3abbc4d4fe3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw2fHxhc3Ryb25vbXl8ZW58MHx8fHwxNzIzNjQ0NzUwfDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1444703686981-a3abbc4d4fe3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw2fHxhc3Ryb25vbXl8ZW58MHx8fHwxNzIzNjQ0NzUwfDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080" width="5616" height="3744" 
data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1444703686981-a3abbc4d4fe3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw2fHxhc3Ryb25vbXl8ZW58MHx8fHwxNzIzNjQ0NzUwfDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:3744,&quot;width&quot;:5616,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;silhouette photography of person&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="silhouette photography of person" title="silhouette photography of person" srcset="https://images.unsplash.com/photo-1444703686981-a3abbc4d4fe3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw2fHxhc3Ryb25vbXl8ZW58MHx8fHwxNzIzNjQ0NzUwfDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1444703686981-a3abbc4d4fe3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw2fHxhc3Ryb25vbXl8ZW58MHx8fHwxNzIzNjQ0NzUwfDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1444703686981-a3abbc4d4fe3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw2fHxhc3Ryb25vbXl8ZW58MHx8fHwxNzIzNjQ0NzUwfDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1444703686981-a3abbc4d4fe3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw2fHxhc3Ryb25vbXl8ZW58MHx8fHwxNzIzNjQ0NzUwfDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" 
type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Photo by <a href="true">Greg Rakozy</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><p>When my second daughter was 45 days old, she got a common respiratory virus. Within hours, she got admitted to the UCI, and we spent the scariest fifteen days of our lives watching her fight for hers. Fortunately, she got the best treatment modern medicine can give and the most incredible team of caring, tireless doctors and nurses. She was saved by the doctors, but she is only alive because of modern science.</p><p>This is not an isolated story. Most of us alive today would be dead without science. Before the scientific revolution, the odds of living beyond 40 years were abysmal. 
The leading causes of death in the Middle Ages were now-curable diseases like the flu, diarrhea, and mild infections, which ravaged populations that had little understanding of cause or prevention. Raising one of your children beyond age 10 was a coin toss, and one heavily biased against you. Let's just say the odds weren't on your side.</p><p>The scientific method has enabled many of the most impactful discoveries of the last few centuries&#8212;in terms of improving human life&#8212;including antibiotics, vaccines, electricity, surgery, brain scans, cars, trains, airplanes, the internet, &#8230; you name it. These advancements have transformed humanity, extending longevity and improving quality of life, both physically and intellectually. </p><p>Science has allowed us to explore the mysteries of the universe, from the smallest particles to the vast expanse of space. It has expanded our understanding of the natural world to the point where we can now summon and control the primal forces that make and break atoms. Science has raised us talking monkeys to heights our hairy ancestors would consider godlike.</p><p>Many, including myself, believe that the scientific method is the most important discovery in mankind's history, and the collective body of knowledge it has allowed us to build is our most valuable treasure. Still, some of the most important questions we have are forever outside the realm of what science can answer. And that's a feature, not a bug.</p><p>In this and subsequent articles, I will explore how modern science works, why it is so effective, and its intrinsic limitations, both conceptual and pragmatic. In the end, I aim to convince you that science is our best tool for understanding the natural world, but we still need other tools to have the broadest possible view of the human condition.</p><p>For this, we will analyze the nature of knowledge to understand the types of questions we may want to ask about the universe. 
Then, we'll review how science works at different levels of organization, from running experiments and making new discoveries to the complexities of publishing research and the biases involved in the industrialization of science. Finally, we'll critique the scientific method and argue there are some fundamental questions it can't answer, and we'll briefly look at alternative epistemic systems, which we may also need for a fuller understanding of reality.</p><h2>What is knowledge?</h2><p>Knowledge is a tricky thing. We all intuitively know what we mean when we say we &#8220;know&#8221; something, right? But we know all sorts of stuff for different reasons. For example, I know 2+2=4 because that&#8217;s basically the definition of those symbols. On the other hand, I know water boils at around 100 &#176;C because I&#8217;ve seen it happen over and over. 
I also know that electrons are negatively charged, that the speed of light is constant in a vacuum, and more or less how DNA works, but I have no direct experience with any of those things.</p><p>At the most abstract level, the study of knowledge is the job of a field of philosophy called epistemology. A complete introduction to epistemology is far beyond the scope of this article and well above my pay grade, but we can try a very brief overview.</p><p>The most commonly accepted definition of knowledge&#8212;not without its own problems&#8212;is that <em>knowledge is justified true belief</em>. Let&#8217;s unpack that.</p><p>Obviously, to truly claim you <em>know</em> something, you have to <em>believe</em> it. I mean, you can definitely claim you know something is true even if you believe it is false&#8212;you can give false testimony in a trial, but you would be lying. On the other hand, if you believe in something that is objectively false&#8212;like the Earth being flat&#8212;then you are simply mistaken, no matter how strongly you believe it. You can believe that you know, but you don&#8217;t truly know, because you&#8217;re wrong. So far so good? Now comes the tricky part.</p><p>Say you buy a lottery ticket, and you believe with all your heart that you will win. Then, it so happens that you win the lottery. So you had a true belief. Would you say you <em>knew</em> you were going to win? Most rational people would claim you didn&#8217;t <em>actually know</em>. You just got lucky. Whatever reason you had to support that belief&#8212;maybe the ticket had your preferred numbers in some specific pattern, or maybe that day you were wearing your favourite underwear&#8212;that reason is unjustified. 
The lottery is absolutely random, so you couldn&#8217;t <em>know</em> you would win.</p><p>Similarly, you might believe you know the solution to a complex math problem, but if you made subtle errors that happened to cancel each other out, you would arrive at the correct answer for the wrong reasons. In this scenario, most rational people would argue that you don't truly know the answer, even if you happen to stumble upon it.</p><p>So, justified true belief. The question then becomes, how do we attain knowledge? That is, how can we ensure we are justified in believing that something is indeed true? This is what <em>epistemic systems</em> are for.</p><h3>Epistemic systems</h3><p>In short, an epistemic system is a set of rules that determines when a given claim is true. Different epistemic systems assert the truth value of claims with different strategies. For example, you can think of mathematics as an epistemic system that asserts all claims that can be logically deduced&#8212;using a very narrow set of inference rules&#8212;from a given set of non-contradictory axioms. </p><p>This is an example of a <em>formal epistemic system</em>, where the truth or falsity of a given claim is decided only by symbolic manipulation. There is no need to look at the real world. Triangles are abstract things, and we can know everything there is to know about them by thinking really hard. There are no triangles out there in the woods.</p><p>Another possible epistemic system is what I call <em>truth-by-authority</em>. This system asserts all claims that some predefined entity&#8212;e.g., your mother&#8212;decides, regardless of&#8230; well, anything. Jokes aside, many religions fall into this broad category, choosing as authorities either ancient scriptures of dubious origins or self-appointed messiahs. </p><p>Not all epistemic systems are equally trustworthy in every domain. 
For instance, I don't rely on my mother's authority for most issues&#8212;though there are some cases where she holds the absolute truth&#8212;but I trust mathematics in all relevant situations. The critical question is this:&nbsp;<em>How do we determine the applicability of an epistemic system to specific types of claims? </em>What claims can we justifiably believe based on a given system?</p><p>You can probably see where I&#8217;m going with this. To understand when science is justifiable as an epistemic system, we need to understand what types of claims can be answered with the scientific method. For the purpose of this article, I want to divide claims into three categories, one of which will be the prime domain of scientific inquiry.</p><p>First, let&#8217;s consider the questions that can be answered via pure reasoning. Questions like &#8220;how many prime numbers are there?&#8221; or &#8220;what is the fastest way to sort these numbers?&#8221; fall into this category. These questions can be resolved through mathematical processes and logical deductions without empirical observation. These aren't necessarily easy questions, though. Rather, some of the hardest questions in math and computer science are in this set. </p><p>Next, we have claims that rely on empirical evidence, such as &#8220;what is the boiling point of water?&#8221; or &#8220;is the light in the next room turned on?&#8221; These require observational methods and experimentation. You simply cannot reason your way into knowing the precise temperature at which water molecules have enough kinetic energy to escape from each other&#8217;s electromagnetic attraction. You have to go out there and measure it, just as you have to open that door to see if the light is turned on. This is the domain of science.</p><p>Finally, the third category encompasses questions that can&#8217;t be answered either way. 
These include subjective or normative claims, such as &#8220;what is the best ice cream flavor?&#8221; and &#8220;is there an objective moral system?&#8221;, which may not be effectively addressed by either logical reasoning or observation alone. This is the domain of philosophy, the arts, and spirituality. Some of the most important questions in life, including the meaning of all this absurd existence, are very likely in this realm.</p><h2>How does science work?</h2><p>Now that we intuitively understand what types of questions we want to answer with science, let&#8217;s try to explain how science works from the ground up. </p><p>In short, science is a systematic approach to investigating the natural world. It relies on formulating hypotheses based on observed phenomena and testing them through experiments. The results of these experiments either support or refute the initial ideas. This process is iterative, meaning each outcome may lead to new questions and hypotheses, continually refining our understanding of reality.</p><p>Crucially, science can almost never give a definitive answer. All we can do is get an increasingly accurate approximation of the true nature of reality. How close we can get is unclear. Some think the universe is fully understandable, and others claim there is some fundamental level of incomprehensibility we may never surpass. But, more importantly, this very question&#8212;the limits of science&#8212;is, for the most part, outside the realm of the types of questions that science can answer.</p><p>But let&#8217;s see how it works.</p><h3>Science as an Epistemic System</h3><p>At the lowest level, the scientific method is an epistemic system that attempts to assert the truth or falsity of objective claims about the natural world. 
By objective, I mean claims whose truth value is independent of the observer. Whether chocolate is the best flavor of ice cream, despite being a claim about a very natural thing, is thus outside the interest and reach of science, because its truth value depends on the observer. Most people would say yes, but some sociopaths might prefer vanilla.</p><p>How do we justify beliefs about the natural world? The most obvious answer is to verify them through direct experience. If I claim the light in the next room is on, you just need to open the door and check, right? This is the paradigm of <em>verifiability</em>, formally put forward in the early 20th century by a school of thought called Logical Positivism. </p><p>However, this approach has two problems. The easier one is with claims that cannot be directly observed or measured but still have measurable indirect effects. For example, suppose I claim the Moon exerts a small gravitational force on you. In that case, it is hard to conceive of an easily verifiable experiment that can readily convince you this is true. Almost everything we want science to talk about is outside of our direct, shared, macroscopic experience.</p><p>The harder problem, though, is much more fundamental: it is about <em>universal claims</em>. For example, if I say <em>all</em> electrons are negatively charged, there is no way to verify that claim. Even if you could directly measure the electric charge of any particular electron, you would still not be able to prove that all electrons in the universe, past, present, and future, have the same properties.</p><p>So, instead of verifiability, which seems to impose an absurdly high constraint on the types of claims&#8212;about the natural world&#8212;we can study, we want a more flexible paradigm that gives us, if not absolute certainty, then something as close to it as pragmatically possible.</p><p>This is what Karl Popper proposed with his concept of <em>falsifiability</em>. 
Rather than requiring claims to be provable, he suggested that scientific theories must be testable and, importantly, <em>falsifiable</em>. This means that in order for a statement to be considered scientific, there must be a possibility of it being shown false through observation or experiment.</p><p>According to this view, a theory is scientific if it can, in principle, be tested and potentially refuted by evidence. While no amount of positive evidence can convince us a theory is definitely true, if we test it long and hard enough and never find falsifying evidence, then we have to grant that, as far as evidence goes, this theory is the current best explanation for a given phenomenon. It becomes an accepted scientific consensus.</p><p>Shifting from verifiability to falsifiability also shifts the burden of proof from the person who makes the claim to the person trying to falsify it. At face value, this seems like a bad move, right? Now, whenever I make a claim, I don&#8217;t have to give you a way to verify it. Instead, it becomes your responsibility to demonstrate that my claim is false. </p><p>So what happens if my claim is impossible to falsify? I claim there is an invisible purple unicorn in my garage, undetectable by all instruments and producing no perceivable effect in the environment, yet it is there, watching over you and judging every decision you make. This claim is impossible to prove false, by definition. What can we do about it?</p><p>This is the problem of demarcation in the philosophy of science, which basically determines which claims are considered &#8220;scientific&#8221; to begin with. Simply put, if your claim is untestable or unfalsifiable, the scientific community won&#8217;t even pay attention to it. We simply ignore it. 
It is not in the interest of science to talk about magical, undetectable unicorns, right?</p><p>But this is easier said than done, because sometimes determining what counts as a possible falsification procedure becomes a heated debate. For example, some serious scientists consider string theory unscientific because, so far, it hasn&#8217;t produced any falsifiable predictions. Other prominent scientists instead claim we simply need more sophisticated epistemic principles.</p><p>However, barring these extreme cases, for the most part, it is often evident when a simple enough hypothesis is a scientific claim. Your hypothesis needs to make some testable predictions that can be verified. As long as you keep making predictions that are proven right, the trust in that hypothesis grows until it eventually becomes a mainstream confirmed theory. But the minute a prediction fails, your hypothesis is immediately rendered false, right? </p><p>Well, not necessarily.</p><h3>Science as an Iterative Approximation of Truth</h3><p>One of the most famous scientific theories of all time is classical (i.e., Newton&#8217;s) mechanics. It was the dominant explanation for most of the physical world for a long time. It is also one of the most beautiful theories in the mathematical sense, having but a few simple and coherent formulas that unify everything from how apples fall from trees to how the Moon orbits the Earth. </p><p>We lived in the dark ages, and Newton gave us the light. Then Einstein came and made everything dark again. </p><p>Around the beginning of the 20th century, a growing body of evidence was piling up showing that something fishy was going on with Newton. The most important: the orbit of Mercury wasn&#8217;t precisely as predicted by classical mechanics. However, this had happened before. </p><p>The discovery of Uranus in 1781, the first new planet to be discovered since ancient times, was initially celebrated as a triumph of Newtonian mechanics. 
However, as astronomers observed Uranus's orbit, they noted irregularities that could not be reconciled with Newton's predictions. What is more likely&#8212;astronomers of the time asked themselves&#8212;that almighty Newton, who has worked perfectly so far, is actually wrong, or that we are missing something here? </p><p>This led to the hypothesis that another massive body was exerting gravitational influence on Uranus. Mathematicians Urbain Le Verrier and John Couch Adams calculated where such a planet should be based on the observed perturbations in Uranus's orbit. Their predictions were remarkably accurate, leading to the discovery of Neptune in 1846, which was found within one degree of the predicted position. So, Newton was right all along! The theory predicted that something was missing, and we found it.</p><p>Mercury's orbit was a different story, though. The planet's perihelion&#8212;the point in its orbit closest to the Sun&#8212;was observed to shift over time, a phenomenon known as perihelion precession. According to Newton's laws, this precession should be minimal and consistent; however, observations indicated a tiny discrepancy that no other explanation&#8212;like a missing planet&#8212;could account for. Something was seemingly wrong with Newton.</p><p>Then Einstein came along and presented a much more complex, precise theory that accounted perfectly for all the discrepancies. So, Newton was wrong all along! Right?</p><p>Well, the story doesn&#8217;t end there, either. In the early 20th century, more precise observations of Neptune revealed that its orbit also exhibited deviations from Newtonian predictions. Again, astronomers said there must be another planet out there, and indeed, they pointed their telescopes and found Pluto in 1930. However, subsequent studies revealed that Pluto's mass was insufficient to account for the observed perturbations in Neptune's orbit. So astronomers thought it must be yet another planet. 
They called it &#8220;Planet X&#8221;.</p><p>However, the search for Planet X was stagnant. All observations of the night sky where Planet X should have been failed to show any sign of the missing planet. Then, astronomers went back to the original data that showed Neptune&#8217;s irregularities. They discovered one of the key observations was performed after some telescope maintenance, which might have introduced calibration errors. After removing those few noisy data points, the resulting data perfectly matched Newton&#8217;s predictions again&#8212;no new planet or fancy new theory was necessary.</p><p>So, is classical mechanics right or wrong? If we are being absolute, it is indeed wrong. It fails to predict how matter moves close to the speed of light and near supermassive objects like stars and black holes, yes. But, for most practical matters&#8212;hell, even for sending astronauts to the Moon&#8212;we can still rely on old Papa Newton. We still teach classical mechanics as scientific fact in high school, and that&#8217;s because, for 99% of the problems we&#8217;ll ever have, it really works!</p><p>How can this be? Aren&#8217;t falsified theories wrong? Shouldn&#8217;t we be done with Newton once and for all, especially since we have a more powerful theory that does all the previous one did, and more?</p><p>Well, the answer is that it depends. Some hypotheses are falsified with evidence showing they are way off, totally false, and useless. However, other hypotheses might just be approximations of the true nature of reality, and pretty good approximations if you are cognizant of their limitations. This is the case with classical physics. It isn&#8217;t right or wrong. It&#8217;s an approximate model of actual physics, and if you use it within reasonable&#8212;and well-known&#8212;constraints, it really works.</p><p>For all we know, general relativity is just another, more precise approximation of reality, but it might also break at some point. 
Right now, for example, no one knows how to unify general relativity&#8212;the physics of the large&#8212;with quantum mechanics&#8212;the physics of the very small&#8212;and it might be the case that these two theories are simply two approximations that break when you stretch them far enough.</p><p>So, truth in science is always approximate. We seek not necessarily the ultimate truth but a better approximation of it, one which accounts for all the phenomena our current instruments, math, and intelligence can comprehend.</p><h2>Moving on</h2><p>Science is still much more complex than what the previous story shows. Beyond the pure, rational search for truth and knowledge, other forces at play make science a complicated social phenomenon, where the winning theory isn&#8217;t always the one that most closely matches reality. And then there are economic incentives, careers at stake, institutions, governments&#8230; It&#8217;s a huge mess. But this article is already long enough, so I will leave the rest of this story for future entries. 
</p><p>Until then, stay curious.</p>]]></content:encoded></item><item><title><![CDATA[Are brains and computers alike?]]></title><description><![CDATA[Exploring the boldest of all theories of mind.]]></description><link>https://blog.apiad.net/p/are-brains-and-computers-alike</link><guid isPermaLink="false">https://blog.apiad.net/p/are-brains-and-computers-alike</guid><dc:creator><![CDATA[Alejandro Piad Morffis]]></dc:creator><pubDate>Mon, 29 Jul 2024 14:08:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!fa60!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8f35206-596f-4329-af4b-8e1e4c0eb68f_1152x640.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!fa60!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8f35206-596f-4329-af4b-8e1e4c0eb68f_1152x640.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!fa60!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8f35206-596f-4329-af4b-8e1e4c0eb68f_1152x640.jpeg 424w, https://substackcdn.com/image/fetch/$s_!fa60!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8f35206-596f-4329-af4b-8e1e4c0eb68f_1152x640.jpeg 848w, https://substackcdn.com/image/fetch/$s_!fa60!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8f35206-596f-4329-af4b-8e1e4c0eb68f_1152x640.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!fa60!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8f35206-596f-4329-af4b-8e1e4c0eb68f_1152x640.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!fa60!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8f35206-596f-4329-af4b-8e1e4c0eb68f_1152x640.jpeg" width="1152" height="640" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f8f35206-596f-4329-af4b-8e1e4c0eb68f_1152x640.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:640,&quot;width&quot;:1152,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!fa60!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8f35206-596f-4329-af4b-8e1e4c0eb68f_1152x640.jpeg 424w, https://substackcdn.com/image/fetch/$s_!fa60!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8f35206-596f-4329-af4b-8e1e4c0eb68f_1152x640.jpeg 848w, https://substackcdn.com/image/fetch/$s_!fa60!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8f35206-596f-4329-af4b-8e1e4c0eb68f_1152x640.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!fa60!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8f35206-596f-4329-af4b-8e1e4c0eb68f_1152x640.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">I hope this isn&#8217;t what you mean!</figcaption></figure></div><p>Is the brain a computer? At first glance, the answer might seem obvious: no. Brains and computers are very different. 
Brains are biological, made from organic material&#8212;gooey stuff&#8212;while computers are electronic devices made from non-organic materials&#8212;clicky stuff&#8212;practically the opposite of brains. Settled, right?</p><p>Is this a meaningful difference, though? When we ask if two things are equivalent, the answer may be wildly different depending on how we compare them. We can focus on what things are made of or how things look, but these are often the most superficial&#8212;and least interesting&#8212;comparison criteria.</p><p>Let&#8217;s take a different approach and ask whether two things are alike regarding <em>what they can do</em>, that is, comparing their <em>functionality</em>. Is this a meaningful comparison? Here&#8217;s an example of why there may be some meat to this. Think of the concept of <em>a chair</em>. How many shapes, materials, colors, sizes, and styles can you imagine a chair could have?</p><p>A chair made of wood and a chair made of steel are very different in composition, fabrication process, durability, etc., but they are still equivalent in terms of what chairs are used for&#8212;sitting down. Is a stool a chair? Is a rocking chair a chair? What about a gaming chair, the place you sit in your car, or the thing astronauts sit on inside a rocket ship? Heck, if we are talking about <em>stuff you can sit on</em>, even a flat enough boulder is a chair!</p><p>But wait, why are we talking about chairs? What does this have to do with brains and computers? That is a fair question, and here&#8217;s the point I want to make. When comparing things, one significant point of comparison is <em>what things can do</em>. That is, comparing things in terms of their <em>function</em>. If two things perform the same function, they are, in a sense, equivalent. 
Let&#8217;s call this a <em>functionalist</em> paradigm and get back to whether brains and computers are equivalent.</p><p>In this article, I want to tackle this question from the point of view of <em>computational functionalism</em>&#8212;a specific form of functionalism that I will lay out in more detail later. What I want to do in this article is, first, convince you that this question is much more profound than it seems at first sight, and second, give you some arguments on both sides of the discussion. As usual with these philosophical articles, I won&#8217;t (can&#8217;t) give you a definite answer, but I hope you come back on the other side more informed and perhaps have a bit of fun in the process.</p><div><hr></div><p><em>This article is heavily inspired by <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Suzi Travis&quot;,&quot;id&quot;:189532146,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8ff38f7f-2b7e-40ae-8d1f-3b708802ea9e_749x748.jpeg&quot;,&quot;uuid&quot;:&quot;04fbf52b-14d9-438b-b3dd-8fdb93181571&quot;}" data-component-name="MentionToDOM"></span>&#8217;s incredible series on the philosophy of mind, and especially the article </em><a href="https://suzitravis.substack.com/p/will-ai-ever-be-conscious">Will AI Ever Be Conscious</a><em>, published in <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;When Life Gives You a Brain&quot;,&quot;id&quot;:2198411,&quot;type&quot;:&quot;pub&quot;,&quot;url&quot;:&quot;https://open.substack.com/pub/suzitravis&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5c14a3d4-7ce3-40d4-a360-25cf064f9377_700x700.png&quot;,&quot;uuid&quot;:&quot;de2a8dd4-8ce6-49a0-b43d-d7495e336653&quot;}" data-component-name="MentionToDOM"></span>.</em> <em>While I tried to make this article self-contained, if you really want to get the full context of this discussion, and learn a hell 
of a lot more from an actual expert in neuroscience, please check <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Suzi Travis&quot;,&quot;id&quot;:189532146,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8ff38f7f-2b7e-40ae-8d1f-3b708802ea9e_749x748.jpeg&quot;,&quot;uuid&quot;:&quot;01ed16ae-d7be-4650-bd8d-199f54553b74&quot;}" data-component-name="MentionToDOM"></span>&#8217;s work and subscribe to her substack. You're welcome.</em></p><div><hr></div><h2><strong>Computational functionalism in a nutshell</strong></h2><p>We will begin by dissecting the question &#8220;are brains and computers equivalent&#8221; and define precisely what we mean by &#8220;brain,&#8221; &#8220;computer,&#8221; and, well, &#8220;equivalent&#8221;, for that matter. Then, we will go over some of the most common counter-arguments to the computationalist hypothesis and, finally, present some arguments for why one could believe brains are indeed &#8220;just&#8221; computers.</p><h3><strong>What is a brain?</strong></h3><p>At first glance, this seems like an obvious question. A brain is that gooey stuff inside your skull where all cognition happens. It&#8217;s where, in some way we still don&#8217;t quite understand, you think your thoughts and feel your feelings. It&#8217;s also where&#8212;so we are told&#8212;something as elusive as consciousness resides.</p><p>However, let&#8217;s go back to our chair example. Many things out there don&#8217;t look like human brains but still have a similar&nbsp;<em>function</em>. Other animals have brains&#8212;granted, very similar to ours in many cases. But then you have octopuses (or octopi, whatever floats your boat). They have something like 2/3rds of their neurons in their arms! But almost no one would claim they don&#8217;t have at least some limited form of cognition. And what about aliens?
Do we think any living, sentient, self-aware being out there will have something strikingly similar to this gooey, wrinkled piece of meat we call the brain?</p><p>As we&#8217;ve seen, for a functionalist, these differences are unimportant. What we care about when talking about brains is the <em>function</em> they perform&#8212;that&#8217;s the whole deal with functionalism! The functionalist will claim that whatever a brain is, is what a brain does. Cognition, sentience, consciousness, and all subjective experiences are entirely defined in terms of their function. So, when functionalists summon the concept of a brain, they think about what a brain does: in short, <em>hosting a mind</em>.</p><p>So, let&#8217;s define the&nbsp;<em>brain</em>&nbsp;as the&nbsp;<em>kind of physical substrate that can host a mind</em>, with all the complex cognitive processes, subjective experiences, emotions, qualia, and everything else you want to claim a mind does.</p><h3><strong>What is a computer?</strong></h3><p>Like before, this seems like an obvious question, but at this point, we know better than to rush to a conclusion. Intuitively, a computer is something that does some advanced forms of calculation. You can have mainframe computers, desktops, laptops, smartphones, smartwatches, mini-computers like a Raspberry Pi, and very weird computers like what goes inside a self-driving car or a spaceship.</p><p>Furthermore, even though most actual computers we have today are made of silicon-based transistors laid out in tightly-knit circuit boards, this is hardly the only way to implement a computer. Before fully electronic computers, we had electromechanical computers with valves and moving parts that sounded like trains. Crazy dudes have proposed more than one design of a fully working hydraulic computer.
We even have bioelectrical prototypes that actually mix gooey stuff with traditional circuits.</p><p>So, just as before, let&#8217;s consider the&nbsp;<em>function</em>&nbsp;of a computer. This is actually way easier than with brains because unlike brains&#8212;which are things out there in nature&#8212;computers are something we made up. And we have a whole field in computer science called&nbsp;<em>computability theory</em>&nbsp;dedicated precisely to studying what a computer can do.</p><p>In short, a computer is an abstract mathematical construct capable of computing any effectively computable function. An effectively computable function is any mathematical function that can be calculated with a finite series of mechanical steps without resorting to guessing or magical inspiration&#8212;in other words, an algorithm. Modern electronic computers are one possible physical embodiment of such an abstraction, but they are hardly the only possibility.</p><p>So, let&#8217;s define a <em>computer</em> as <em>any device capable of computing all computable functions</em>.</p><h3><strong>What is computationalism?</strong></h3><p>Now, we are ready to reframe our original question in more precise terms.</p><p>Let&#8217;s start with functionalism and build our way up. When talking about theories of mind, functionalism is the theory that all cognitive processes are completely characterized by their function. This means that, for example, whatever consciousness is, is just about what consciousness does. In other words, if you have some way to reproduce what consciousness, or any other cognitive process, does, it doesn&#8217;t matter which substrate you use: what you will have is genuinely the same thing.</p><p>Put more boldly, <em>any system that performs the same functions as a mind is a mind,</em>&nbsp;period.</p><p>Computational functionalism goes one step further and claims that all cognitive processes are actually computable functions.
That means a sufficiently complex and suitably programmed computer could perform these functions to the same extent a brain does, and so it would presumably host a mind. More formally, it claims that <em>all cognitive processes are computational in nature</em> and thus can, in principle, be implemented in a substrate other than biological brains, as long as it supports general-purpose computation.</p><p>In other words, <strong>computationalism is precisely the claim that brains are computers</strong>, understanding both these terms with all the nuances we have already discussed.</p><p>To be clear, computationalism doesn&#8217;t claim modern computers are conscious or even that our current most advanced AI systems are on the right path to ever becoming truly intelligent or conscious. It just claims there is some way to build a computer, at least in principle, that is indeed self-aware, fully intelligent, and conscious, even if we have no clue what it takes to build one.</p><p>Now, there are at least two forms of computationalism; let's call them <em>weak</em> and <em>strong</em> (these are my definitions, not standard). Weak computationalism is just the claim that cognition is computation. This means that all forms of intelligence&#8212;understood as problem-solving, reasoning, etc., irrespective of whether there is a sentient being in there&#8212;are just advanced forms of computation. In other words, there is nothing super-computational in humans, animals, aliens, or any other form of intelligence. A sufficiently powerful computer can be as intelligent as anything else.</p><p>Strong computationalism goes further and claims that consciousness, sentience, self-awareness, and qualia&#8212;that is, all subjective experiences&#8212;are also computational in nature.
Thus, a sufficiently powerful computer with the correct software would indeed be conscious and experience an inner world, just as we presume all humans and many animals do.</p><p>The roots of computationalism can be traced back to McCulloch and Pitts, the fathers of connectionism, a competing theory of mind. They were the first to seriously suggest that neural activity is computational and to propose a mathematical model for it, a precursor of modern artificial neural networks. However, it wasn&#8217;t until well into the 1960s that computationalism started to be developed as a philosophical theory of mind.</p><p>Perhaps the most famous instance of a computationalist perspective in popular culture is the Turing Test. In a seminal paper in 1950, Alan Turing, recognized by everyone as the forefather of computer science (Turing machines are named after him!), proposed what he called &#8220;the imitation game&#8221;, a thought experiment to determine whether a computer was thinking.</p><p>In this thought experiment, a computer and a human are placed behind closed doors, able to communicate with a second human&#8212;the judge&#8212;only through a chat interface. The judge can ask both participants anything and must ultimately decide which is the computer and which is the human. If the computer manages to confuse the human judge more often than not, Turing claims we must acknowledge the computer is performing something indistinguishable from what humans call thinking. According to functionalism, then, the computer <em>is thinking</em>.</p><p>Now that we've sorted our concepts, let's tackle the question.</p><h2><strong>Is computational functionalism true?</strong></h2><p>We've already dismissed the most obvious counterargument to computationalism&#8212;that brains are gooey while computers are not.
But even if we disregard this superficial difference in composition, there are still important structural differences between existing brains and computers that we cannot just gloss over. And then, once we clear those more direct counterarguments, we will turn to more nuanced and profound differences between brains and computers.</p><h3><strong>Mind independence of substrate</strong></h3><p>Computers have a distinctively hierarchical structure, from hardware to software, starting at transistors and building up to registers, microprocessors, kernels, operating systems, and applications&#8212;in a very simplistic description. If any part that performs any specific function breaks, it all falls apart.</p><p>In contrast, the brain seems to have some structure, but it is much more fluid and flexible. You can remove entire brain regions, and it will often rewire itself and learn to perform the affected functions almost to perfection.</p><p>On the other hand, there seems to be a clear distinction between software and hardware in computers, to the point where you can move software around independently of the hardware. There is no such thing in the brain&#8212;as far as we know, we can't simply paste your thoughts into some portable device, load them into a freshly minted meat suit, and get a second copy of you.</p><p>Maybe this is it. Perhaps the interconnected, seemingly intrinsically inseparable nature of thoughts and substrate in the brain is fundamental to consciousness and self-awareness. Maybe as long as you can move the mind out of the brain, it cannot be a true mind.</p><p>Well, if this were the case, and the mind is inseparable from the brain, this realization would obliterate most known religions. Forget about any form of transcendental survival of the soul. As soon as your brain dies, your mind is dead. No uploading to the cloud, metaphorically or literally.
However, while this might certainly upset a bunch of people, I'm not a religious person, so I'm not bothered by this particular argument.</p><p>And even if this were the case&#8212;a true mind must be inseparable from the substrate&#8212;this seemingly explicit separation between hardware and software is, first, just an implementation detail, and second, kind of a useful lie as well.</p><p>If you look inside a modern microprocessor, it is not trivial to distinguish where hardware ends and software begins. Not only is there programmable code running at the microprocessor level but, even more importantly, the simplest logical circuits inside a computer are both hardware and software at the same time. Just the clever disposition of transistors in some specific layout is enough to make a circuit that adds, or multiplies, or does basically any other computable thing. The hardware <em>is</em> the software in these cases.</p><p>So, while some things&#8212;like cables&#8212;are clearly hardware and other things&#8212;like the browser you&#8217;re using right now&#8212;are clearly software, it is simply not true that there is a clear-cut point of separation between these two concepts. It&#8217;s just a useful abstraction.</p><p>And then we have artificial neural networks, like the ones powering ChatGPT. These are universal approximators, meaning they can approximate any continuous function to an arbitrary degree of precision. And they are much closer to this idea of a flexible architecture where software and hardware intertwine in ways that are hard to clearly differentiate.
While modern artificial neural networks are far from a precise simulation of brains&#8212;and that&#8217;s by design, they are not even trying to simulate brains&#8212;there is no a priori reason why we can&#8217;t build a silicon computer that perfectly mimics the physical processes inside a real, biological brain.</p><p>This means any attack on computationalism based on structural differences between actual instances of brains and computers is futile. We cannot generalize a negative claim from a finite set of negative examples. No matter how many computers you find that are not brains, you can never be sure a computer cannot, by definition, ever be equivalent to a brain. This is just the typical problem of induction, so we must find another angle of attack, something that targets the fundamental definition of computer rather than any concrete implementation.</p><h3><strong>Syntax versus semantics</strong></h3><p>The most famous attack on computationalism is John Searle's Chinese room experiment. It is meant as a counterargument to Turing's imitation game, showing that even if a system can flawlessly simulate understanding, it may not be understanding at all. Here is a quick recap.</p><p>Suppose a man is placed inside a room without communication with the outer world except via an "input" and "output" window. Through the input window, the man receives, from time to time, sheets of paper with strange symbols on them. Using a presumably very big book, the man's only job is to follow a set of dead simple instructions that determine, for any combination of input symbols, what other strange symbols he must write on a new sheet and send it through the output window.</p><p>Now, here is the plot twist. The input symbols are well-written questions in Chinese, and the output symbols are the corresponding, correct answers. The huge book is designed such that an appropriate answer will be computed for every possible input question. 
So, when seen from the outside, it seems this system understands Chinese and can answer any plausible question in this language. However, Searle argues that neither the man, the book, nor any part of the system actually understands the questions. (If this sounds eerily similar to ChatGPT, give the man a round of applause; he came up with this thought experiment more than four decades before.)</p><p>Searle's point is that syntax and semantics are two qualitatively different layers of understanding, such that no degree of syntax manipulation&#8212;which is, according to Searle, all a computer can do&#8212;can amount to actually understanding the semantics of a given message. </p><p>For all experts today, it is evident that ChatGPT is only manipulating vectors in a way that happens to produce mostly coherent answers, but there is no real "mind" inside ChatGPT doing any sort of "understanding". In a similar sense, Searle believes&nbsp;<em>any computer is fundamentally incapable of true understanding</em>&nbsp;in any meaningful sense of the word simply because computation acts at the syntactic level and is incapable of bridging the gap to semantics.</p><p>Searle's attack is pretty strong. There is not much a computationalist can show as a defense other than trying to point to some place in the Chinese room where understanding happens. The best such defense is to claim that even if no single part of the system&#8212;neither the man, nor the book, nor the room itself&#8212;can be said to understand anything, <em>the system as a whole</em> does understand. In the same way, no single part of your brain is conscious, but you are as a whole, maybe? Still, not a very strong defense if you ask me.</p><h3><strong>Qualia and knowledge</strong></h3><p>Another interesting challenge to computationalism is the thought experiment known as "Mary's Room", conceived by philosopher Frank Jackson.
It deals with the nature of subjective experiences&#8212;or qualia&#8212;and how important they are for grounding knowledge about reality. The story goes like this.</p><p>Mary is a neuroscientist who has lived her entire life in a black-and-white room, learning everything there is to know about color vision through books and scientific literature. She knows all the physical facts about color and how the human brain processes visual stimuli. However, she has never experienced true color. </p><p>One day, she finally exits the room and sees color for the first time. The question is whether Mary learns something qualitatively new about color, something that can only be learned through direct experience rather than by reading or indirectly studying the phenomenon of color perception.</p><p>For many, Mary's experience is indeed qualitatively new. If you think so, this raises critical questions for computationalism. If computationalism asserts that all cognitive processes can be reduced to computational functions, then one might wonder whether knowing all the physical facts about a phenomenon is equivalent to experiencing it. </p><p>Mary's Room suggests a gap between knowledge and experience&#8212;one that cannot be bridged by computation alone. This challenges the idea that a computer, no matter how advanced, could genuinely "know" or "experience" qualia in the same way a human does. At least, it tells us ChatGPT cannot truly understand what seeing red, feeling warmth, or being in love feels like.</p><p>The implications of this thought experiment are profound for the computationalist perspective. It suggests that even if a computer could simulate all the functions of a brain, it might still lack the intrinsic, subjective experiences that characterize human consciousness. 
In other words, while a computer might process information about colors and respond appropriately, it would not "know" what it is like to see red or feel the warmth of sunlight&#8212;experiences that are inherently qualitative and personal.</p><p>This distinction between knowledge and experience underscores a fundamental limitation of computationalism: it may account for the mechanics of cognition but fails to address the richness of conscious experience. As such, the Mary's Room thought experiment serves as a poignant reminder that understanding the brain as a computer may overlook the essential nature of qualia.</p><p>On the other hand&#8212;you may argue&#8212;if Mary indeed knew&nbsp;<em>everything</em>&nbsp;that could be known about color perception other than "how it feels," how is that any different from actual perception? In a sense, the brain doesn't truly "experience" color. It just perceives some electrical impulses correlated with the frequency of the light waves coming through our eyes. Can't color perception be explained as just another layer of simulation on top of an actually blind, purely computational brain?</p><p>In any case, Mary's room doesn't preclude us from positing extremely advanced mechanical entities that emulate all the physical aspects of color perception. Nothing is inherently unmechanical in our current understanding of how light frequency stimulates some sensors that produce an electric signal in the brain. It is the subjective experience of <em>what it feels like to see red</em> that we cannot reduce to that mechanistic explanation. And this is precisely what functionalism attempts to capture: if two mechanisms perform the exact same functions, they are the same.</p><h3><strong>Other Attacks</strong></h3><p>In addition to the prominent arguments against computationalism, several other critiques have emerged that challenge the notion of equating brains with computers.
Two notable lines of reasoning come from Roger Penrose's insights on mathematical understanding and the connectionist perspective on reasoning.</p><h4><strong>Non-computability of human creativity</strong></h4><p>Roger Penrose, a renowned physicist and mathematician, argues that certain mathematical insights are inherently non-computable. He posits that human mathematicians can grasp concepts and solve problems that transcend algorithmic computation, suggesting that there are aspects of human thought that cannot be replicated by any computer, regardless of its complexity.</p><p>As an example, Penrose believes that while we have a formal proof that some mathematical problems are indeed unsolvable using any method of effective computation, human mathematicians can summon some hyper-computational abilities to gain insights into these problems. In the same sense, many propose that creativity and artistic expression in humans are clear examples of non-computable thought processes.</p><h4><strong>Connectionist arguments against symbol manipulation</strong></h4><p>Connectionism is an alternative theory of mind that posits mental processes can be understood through networks of simple units (like neurons) rather than through traditional symbolic manipulation. Proponents of connectionism argue that reasoning does not necessarily require the explicit representation of symbols or the manipulation of those symbols according to formal rules.
Instead, they suggest that cognitive processes can emerge from the interactions within a network, where knowledge is distributed rather than localized.</p><p>This perspective challenges the computationalist assumption that reasoning must rely on structured representations and symbol manipulation, proposing instead that cognition may arise from more fluid and dynamic processes akin to those found in neural networks, and that symbolic manipulation of the kind that happens in a traditional algorithmic procedure is, at best, an emergent phenomenon in the brain, not a necessary feature of an actual mind.</p><h2><strong>Moving forward</strong></h2><p>Is the brain a computer? We honestly don't know. It is a tough question, perhaps the hardest question we can ever conceive, because it challenges our most valuable and effective tool for discovering truths about the world: science. Stay with me for a second.</p><p>Science&#8212;understood as the process of generating hypotheses, producing predictions from those hypotheses, and testing the results of those predictions via experiments to falsify or validate the hypotheses&#8212;is fundamentally a computational process. Any procedure for experimental verification must be laid out in such a simple, unambiguous set of steps as to be replicable by other scientists all around the world even if they don't speak our language.</p><p>Furthermore, <em>the language of science is mathematics</em>, the most strict and computable of all human-made-up languages. And computers are everywhere in science too. It is inconceivable today to perform any mildly complex experiment without the help of computers to run simulations, find patterns, and, well, compute stuff. Science is deeply computational, and it has always been.
The forefathers of science, Galileo, Newton, Leibniz, and the rest, explicitly focused on quantifiable, measurable phenomena; the only things we could all agree were certain.</p><p>But the inner workings of the human mind and the nature of consciousness may be neither measurable nor quantifiable. Even when we finish mapping all neuronal pathways in the brain and discovering everything that happens at all physical levels in the gooey stuff, and load that into a computer, we might end up creating just another Mary, knowing everything about how a mind works, but incapable of truly experiencing what having a mind feels like.</p><p>And the sad part is that, if that's the case&#8212;if the mind is indeed non-computable&#8212;then we might actually never know for sure. After all, the best tool we have to understand how stuff works, Science, may be nothing more than a fancy algorithm.</p><p>Or maybe, that's the fun part.</p>]]></content:encoded></item><item><title><![CDATA[What is Truth?]]></title><description><![CDATA[Can linguistics, mathematics, and logic tell us anything about it?]]></description><link>https://blog.apiad.net/p/what-is-truth</link><guid isPermaLink="false">https://blog.apiad.net/p/what-is-truth</guid><dc:creator><![CDATA[Alejandro Piad Morffis]]></dc:creator><pubDate>Fri, 01 Sep 2023 10:01:51 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!2TA1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa981c451-2298-4b21-82a0-3b169bccc001_1024x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>In the late 1840s, the people of Ireland had it really bad.</em></p><p><em>Famine reared its ugly face on the landscape of the emerald isle. Starving families and crumbling communities became ever more desperate for anything to eat.</em></p><p><em>In the end, a million Irish people starved to death. 
The direct cause was a devastating potato blight, but lurking behind that biological calamity was another sinister force: a notion of "Truth" rigidly held by British leadership.</em></p><p><em>In London, Prime Minister Lord John Russell was caught between mounting calls for intervention and the sacred text of Adam Smith's "The Wealth of Nations." To him, the doctrine of laissez-faire economics was more than an economic theory; it was an incontrovertible Truth, a moral code. He reasoned that assisting the Irish would corrupt the market and, worse, the souls of the suffering.</em></p><p><em>While farmers watched their children slowly waste away, British dogma stood firm. </em></p><p><em>A million died, and millions more suffered because a prevailing "Truth" overshadowed the tangible, horrifying reality before their eyes.</em></p><p><em>While other factors were indeed at play&#8212;longstanding British prejudice against the Irish (not universal, but widespread) and complicated laws surrounding trade&#8212;devout faith in this Truth was undeniably a part of what went wrong.</em></p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!2TA1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa981c451-2298-4b21-82a0-3b169bccc001_1024x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!2TA1!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa981c451-2298-4b21-82a0-3b169bccc001_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!2TA1!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa981c451-2298-4b21-82a0-3b169bccc001_1024x768.png 848w, 
https://substackcdn.com/image/fetch/$s_!2TA1!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa981c451-2298-4b21-82a0-3b169bccc001_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!2TA1!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa981c451-2298-4b21-82a0-3b169bccc001_1024x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!2TA1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa981c451-2298-4b21-82a0-3b169bccc001_1024x768.png" width="1024" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a981c451-2298-4b21-82a0-3b169bccc001_1024x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1294761,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!2TA1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa981c451-2298-4b21-82a0-3b169bccc001_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!2TA1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa981c451-2298-4b21-82a0-3b169bccc001_1024x768.png 848w, 
https://substackcdn.com/image/fetch/$s_!2TA1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa981c451-2298-4b21-82a0-3b169bccc001_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!2TA1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa981c451-2298-4b21-82a0-3b169bccc001_1024x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Truth is one of, if not the most critical concept in human affairs. 
As individuals, we strive to understand the reality around us and shape it to our benefit. As a collective, we must agree on the most essential matters, from who&#8217;s who to what is or isn&#8217;t cool to do. All of our major institutions, from science to governments to religions, are predicated on the search for truth.</p><p>If you are reading this, you probably value reasoning, critical thinking, and truth. You probably also believe there is an objective reality that is at least approximately knowable using a combination of reasoning and observation. By objective reality, I mean the existence of a world outside our minds whose laws are independent of what we believe. And maybe you also believe this objective world is the primary cause of everything that happens inside our minds.</p><p>However, Truth with capital T is something really hard to define. Intuitively, we can say that Truth is the correspondence between our beliefs and reality. But are there different flavors of truth? Is it the same kind of thing to say &#8220;<em>two plus two equals four,&#8221;</em> &#8220;<em>electrons have a negative charge,&#8221;</em> &#8220;<em>murder is bad,&#8221; </em>or <em>&#8220;coffee tastes better than chocolate&#8221;</em>? Can the same thing be true in some cases and false in others? Can it be something other than true or false? Does your perspective matter, and if so, to what degree?</p><p>This article is an attempt to explore these questions. We will adopt a semantic approach grounded in formal logic, which is neither the only nor always the most adequate point of view on truth, but it is an interesting one in this case.</p><p>To begin our discussion, we must first agree on what we mean when we say something <em>is true. </em>Then, we will look at different types of truths and the systems that allow us to assert the truth of something.
Finally, we&#8217;ll go back to formal logic and explore the implications of a theorem severely limiting our capacity to define Truth, with capital T, once and for all. This whole endeavor will take us on a tour around epistemology, logic, science, and the nature of reality.</p><p>As usual, I don&#8217;t intend for you to take my conclusions at face value, but rather to help you develop your own. Buckle up.</p><h3>What types of things can be true?</h3><p>Let&#8217;s start our discussion by analyzing the things that can be <em>bearers of truth</em>. I will suggest a few examples:</p><ol><li><p><em>All prime numbers greater than 2 are odd.</em></p></li><li><p><em>The speed of light in a vacuum is approximately 300,000 kilometers per second.</em></p></li><li><p><em>All humans should be free to choose their goals in life.</em></p></li><li><p><em>Paper money has value.</em></p></li><li><p><em>Coffee tastes better with brown sugar.</em></p></li></ol><p>All of the above are things any sensible person could believe are true. But they are not all the same. If you don&#8217;t believe #1, you&#8217;re definitely wrong. However, you can disagree with #5; I won&#8217;t hold it against you. Then there are cases like #2, with which most people would agree, but there is a bit of a &#8220;depends&#8221; there. How approximate is <em>approximately 300,000 km/s</em>? In contrast, #4 seems true, but only because a sufficiently high number of people believe it to be so. 
And then we have #3, which is different because it&#8217;s telling us <em>how things</em> <em>should be, </em>not necessarily <em>how things are</em>.</p><p>But all of these are examples of things we can say are <em>true</em> or <em>false</em>. One is about numbers, an abstract construction; another about a physical phenomenon, something really out there; yet another is about the human condition; and another about something as mundane as coffee. These worlds are as different as any two things can be. So the question is, what do they have in common?</p><p>The answer is that these are all claims formulated in natural language; these are <em>linguistic objects</em>. We are talking about many different things, from numbers to particles to human rights, but the things we evaluate, whether true or not, are not the actual numbers, particles, or people but <em>claims about those things</em>.</p><p>And so we arrive at our first conclusion about the nature of Truth.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><div class="pullquote"><p><em>Truth is not a property of the things themselves but of claims about them.</em></p></div><h3>Are there different types of claims?</h3><p>To understand the nature of Truth, we must then study the nature of claims. What types of claims can we formulate? Let&#8217;s look at three fundamental differences between claims.</p><h4>Subjective and objective claims</h4><p>The first obvious distinction is that of subjective vs. objective claims. 
When we say something is <em>objective</em>, we mean it depends only on the reality of an object and not on the subjects that may be thinking about it and their beliefs.</p><p>More specifically, let&#8217;s call an <strong>objective claim</strong> one whose truth value is independent of the person evaluating it, such as &#8220;<em>The Earth is round.</em>&#8221; In contrast, a <strong>subjective claim</strong> is a claim whose truth value depends on the subject(s) evaluating it, such as &#8220;<em>Chocolate ice cream tastes better than vanilla.</em>&#8221; Not all subjective claims are mere opinions, though. When many believe something is true, it can become true, such as &#8220;<em>This piece of paper is worth something.</em>&#8221;</p><p>The nice thing about objective claims is that, in principle, they do not allow for meaningful disagreement. The shape of the Earth, whatever that is, should definitely be an objective property of the Earth, irrespective of the observer. If two different observers at different moments disagree on something about the shape of the Earth, then either one or both of them must be wrong, right?</p><p>Well, <em>not exactly</em>. Even if we all agree that some properties are objective, and thus, we cannot, in principle, disagree on their objective<em><strong> </strong></em>traits, how we <em>talk</em> about these properties is highly subjective. Thus, if we both interpret an objective claim the same way and disagree on its truth value, then at least one of us must be wrong. <em>But we can still disagree on how we interpret the claim.</em> Keep this thought for now.</p><h4>Descriptive and normative claims</h4><p>The second important distinction is between descriptive (is-a) claims and normative (should-be) claims.</p><p>A <strong>descriptive claim</strong> states the way things are. 
It can be an objective claim, such as &#8220;<em>the sky is blue</em>&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> or a subjective claim, such as &#8220;<em>this is painful.</em>&#8221; In contrast, a <strong>normative claim</strong> states how things should be. An example is &#8220;<em>You should not steal.</em>&#8221; Descriptive claims are factual, meaning they always refer to some state of reality &#8212;even if a subjective state. In contrast, normative claims are always judgemental; they can never be justified solely on the grounds of how things are.</p><p>This fundamental distinction is called <a href="https://en.wikipedia.org/wiki/Is%E2%80%93ought_problem">Hume's guillotine</a>, a philosophical argument stating that no normative claim can be derived from a descriptive claim. This means there is no way to judge how things should be without introducing some normative axiom. This is an issue we will cover in a future article.</p><h4>Specified and quantified claims</h4><p>The third interesting distinction is between specified and quantified claims.</p><p>A claim is <strong>specified</strong> when it refers to specific objects unambiguously, such as &#8220;<em>The Earth is round,&#8221; </em>or<em> &#8220;Two is a prime number.&#8221; </em>In contrast, a claim is <strong>quantified</strong> when it refers to some quantity of objects without specifying exactly which objects those are, such as &#8220;<em>Seven dwarfs live in the mountains.</em>&#8221;</p><p>The two most important quantified claims are <em>universal</em> and <em>existential claims</em>.</p><p>A <strong>universal</strong> <strong>claim</strong> is of the form &#8220;<em>For all X (such that Y), it is true that Z.</em>&#8221; For example, &#8220;<em>all electrons are negatively charged,&#8221;</em> or &#8220;<em>all prime numbers greater than two are odd,&#8221; </em>or &#8220;<em>all humans have equal rights.&#8221;</em> 
These claims apply to all the elements of a given class; thus, to assert their truth, we must commit to it for every element of that class, even if they are infinitely many or unknown. This is usually a big leap requiring some theoretical framework to base the claim on.</p><p>In contrast, an <strong>existential claim </strong>is of the form &#8220;<em>There exists (at least one) X such that Z.</em>&#8221; For example, &#8220;<em>There is one planet with life in the Solar System.&#8221; </em>The easiest way to convince oneself that an existential claim is true is to find a specific object that satisfies the claim. However, sometimes, we can ensure an existential claim must be true even if we don&#8217;t know the exact object that satisfies it. Examples abound in math, but a real-life scenario could be something like, &#8220;<em>This person died of a gunshot, so there must have been a gun.</em>&#8221;</p><h3>The position of claims toward knowledge </h3><p>Our desire to have robust world knowledge usually drives our interest in truth. So, comprehending the nature of the knowledge we obtain if a claim is true gives us clues on how to evaluate it and the nuances of its interpretation. Briefly explaining this intricate distinction is just the starting point.</p><p>A claim that depends only on the intrinsic characteristics of an object is an <strong>ontological claim</strong>. Ontology focuses on the core characteristics, origin, and properties. What is it? Where is it? What is it made of? How does it fit into a category of objects? These are examples of questions we answer with ontological claims.</p><p><strong>Teleological claims</strong>, on the other hand, describe causality. For example, <em>"It's raining because there is a storm"</em> relates a phenomenon to an object and the properties we associate with it. 
This category is tricky since the relationship can adopt the guise of an ontological claim&#8212;<em>"The existence of X is self-evident."</em> Also, many fallacies follow this structure. </p><p>Claims regarding truth itself are a bit different from those about objects and reality. <strong>Epistemological claims</strong> focus on explaining knowledge, how we can obtain facts, and how we should reason about nature and other claims. Most of the rest of this article will further clarify this idea.</p><h2>What makes a claim true?</h2><p>If Truth is a property of claims, then what makes a claim true? How do we decide whether a given utterance such as &#8220;<em>The Earth is approximately round,</em>&#8221; &#8220;<em>All even numbers can be decomposed into</em> <em>the sum of two primes,</em>&#8221; or &#8220;<em>You shall not commit adultery&#8221; </em>is true or false<em>?</em></p><p>To do that, we define an <strong>epistemic system</strong>, a set of rules determining which claims are true. For this discussion, consider an epistemic system &#8212;or an epistemology<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>&#8212; as a function that takes a claim in a given language as input and outputs either <em>true,</em>&nbsp;<em>false, </em>or <em>undecidable</em>.</p><p>An epistemic system is the very definition of truth for the set of claims it can talk about &#8212;that is, for the set of claims that can be expressed in the language of that system. We call this whole set of claims the <em>domain</em> of the epistemic system. Trying to determine the truth value of a claim outside the domain of an epistemic system is an example of a <em>category mistake</em>.</p><p>So now, let's look at different epistemic systems.</p><h3>A zoo of epistemic systems</h3><p>First, we have <em>formal</em> or <em>linguistic epistemologies</em>. 
In these systems, the notion of truth is defined via a set of allowed syntactic transformations upon a set of initial axioms or <em>a priori</em> truths &#8212;which is why I call it "linguistic" as an alternative to "formal." The most famous formal epistemic system is mathematics, but we can see specific branches of logic and math as distinct epistemic systems. Formal systems are our strongest epistemologies. No ambiguity or nuance is left when we can say something is formally true or false.</p><p>Second, we have <em>natural</em> or <em>empirical epistemologies</em>. These are systems in which the notion of truth is defined (oversimplifying) in terms of how much a given claim is supported by empirical evidence, but there are a lot of nuances here. The scientific method &#8212;interpreted mostly as <em>falsificationism</em>&#8212; is the most famous empirical epistemology. Empirical claims like &#8220;<em>Photons have zero mass</em>&#8221; are conferred truth status once there is an insurmountable pile of evidence. However, we never get an absolute formal proof, like in math. Also, empirical claims are often approximate, as newer theories refine previously held truths to reveal special cases where they don&#8217;t hold, as general relativity did with Newtonian mechanics.</p><p>Third, we have <em>cultural</em> or <em>social epistemologies</em>, in which truth is defined as that which most individuals accept in that culture or society. A given society's moral values can be considered a cultural epistemology.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> For example, the claim &#8220;<em>It is rude to ask for someone&#8217;s age</em>&#8221; is only true if people believe so. 
This has the unpleasant effect of making morality relative, which not everyone agrees with, but that discussion is beyond our scope.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> Religions are cultural epistemologies that often attempt to grant truth status to claims that fall outside their domain, thus committing a category mistake.</p><p>Finally, we have <em>personal epistemologies</em>, in which the notion of truth is defined by an individual's <em>a priori</em> stance on each claim. For example, the meaning of life or the best way to enjoy it is perhaps only expressible in personal epistemologies. It doesn&#8217;t matter what everyone else believes is the meaning of life, only what you believe.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></p><h3>Are some epistemic systems better?</h3><p>At this point, it should be obvious that there is a thick line separating formal and empirical from social and personal epistemic systems. The former only formulate objective claims, while the latter only deal with subjective (or intersubjective) claims. </p><p>This does not mean that objective claims are &#8220;better.&#8221; We must agree on the truth value of subjective claims if we are to live as a society. For example, whether our system of government is desirable or not is an intersubjective claim of utmost importance, more than almost anything physics or math has to say.</p><p>However, there are some traits we may want all epistemic systems to have, regardless of whether they talk about things in reality or not. Ultimately, an epistemic system is a collection of claims and a method to assign a truth value to each claim. Two basic properties we can ask of such a system are <strong>consistency </strong>and <strong>completeness</strong>.</p><p><strong>Consistency</strong> is simple to define. 
An epistemic system is consistent if it never produces a contradiction. We say there is a contradiction when two contradictory claims are assigned the same truth value (true or false). Contradictions are easy to formulate in formal epistemic systems &#8212;e.g., <em>all prime numbers are odd,</em> and <em>there is at least one prime number that is even</em>&#8212; but they can happen in any epistemic system &#8212;e.g., <em>the economy is improving,</em> and <em>the quality of life for the average citizen is worsening</em>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> </p><p>An inconsistent epistemic system is not merely useless but actively dangerous for several reasons. Since two contradictory claims cannot both be true &#8212;if you subscribe to logical reasoning&#8212; that epistemic system clearly asserts some false claims. It is harmful on those grounds because you could believe false things.</p><p>However, it gets worse because clever people can use a contradiction to <em>convince you of anything </em>using perfectly standard logic. This is called the <em>principle of explosion, </em>and it works like this.</p><ol><li><p>Take any pair of contradictory claims <em>P</em> and <em>!P</em>, such as <em>the Earth is round </em>and <em>the Earth is flat</em>. Assume both are true. </p></li><li><p>Now take any claim <em>Q</em> you want to prove, such as <em>Aliens have visited us</em>. 
</p></li><li><p>Because <em>P</em> is true, logic tells us that <em>P</em> or <em>Q</em> is true.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a></p></li><li><p>Thus, the claim R = <em>the Earth is round</em> <em>or</em> <em>Aliens have visited us </em>must be true.</p></li><li><p>But since !P is true, we can reject the first part of the argument, so <em>the Earth is not round</em>; thus, the only way to make R true is that the second part is true.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a></p></li><li><p>Therefore, we can know for sure aliens have visited us!</p></li></ol><p>If this sounds contrived, it is because it is! But the deduction is logically valid. It is just based on the flawed premise that some claim and its opposite are both true. Now, laid out like this, you can see the obvious mistake at the beginning: we assume a contradiction. But smart people can squeeze a contradiction inside seemingly innocuous claims and wrap the deduction in enough linguistic clutter to hide the faulty logic.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a></p><p><strong>Completeness </strong>is a whole different monster. For an epistemic system to be complete, it must assert <em>true</em> or <em>false</em> for all the claims in its domain. That is, it must never assert that a claim is <em>undecidable</em>. We won&#8217;t talk too much about completeness now because that discussion deserves a whole article on its own, but let it suffice to say that we know <em>sufficiently strong formal systems cannot be both consistent and complete simultaneously</em>. </p><p>Thus, if you aim to have a consistent epistemic system strong enough to cover our existing math, at least, it cannot be complete. 
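The explosion argument laid out a few paragraphs above can be checked mechanically with a brute-force truth table. Here is a minimal sketch (the `entails` helper is made up for illustration): no assignment of truth values makes both P and !P true at once, so the contradictory premises vacuously entail any conclusion whatsoever.

```python
# Brute-force semantic check of the principle of explosion:
# from P and !P, any claim Q (and also !Q) follows.
from itertools import product

def entails(premises, conclusion):
    """True if every truth assignment satisfying all premises satisfies the conclusion."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

contradiction = [lambda p, q: p,        # step 1: assume P ...
                 lambda p, q: not p]    # ... and !P at the same time

print(entails(contradiction, lambda p, q: q))      # Q follows
print(entails(contradiction, lambda p, q: not q))  # and so does !Q
```

Since no row of the truth table satisfies both premises, the `all(...)` ranges over an empty set of rows and both checks succeed, which is exactly why a single contradiction lets you "prove" anything.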
There are some undecidable claims in there. Otherwise, you must deal with bizarre concoctions like &#8220;<em>This sentence is false</em>.&#8221;</p><p>In any case, your epistemic system can be quite arbitrary as long as it is consistent. But you'll be dead pretty soon if it doesn't closely match objective reality &#8212;or at least the part of objective reality upon which your survival is contingent.</p><p>Hence, on pragmatic grounds, some epistemic systems can still be considered &#8220;better&#8221; than others because they confer truth status to claims that closely resemble objective reality, in the sense that when acting upon reality guided by those claims, you are more likely to get the desired outcome.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a> Ugh, that was a mouthful.</p><h2>Defining Truth</h2><p>If we agree that Truth is a property of claims always defined in a specific epistemic system, we must ask exactly how these epistemic systems define Truth. Can a given epistemic system objectively define Truth with capital T?</p><p>Take, for example, the Scientific Method, our most cherished epistemic system, which defines truth mostly in terms of <a href="https://en.wikipedia.org/wiki/Falsifiability">falsifiability</a>. A scientific claim is considered true if it makes sufficiently strong falsifiable predictions and doesn't get falsified.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a> Thus, the more we fail to falsify a scientific hypothesis, the more truth status we confer on it.</p><p>But is this criterion of falsifiability something we can objectively claim as true? The criterion of falsifiability itself is a claim, and thus, we can ask if this claim is true and in what epistemic system we can evaluate it. 
But here's the kicker: We cannot evaluate falsifiability in the scientific epistemic system. The claim that specifies which scientific claims are true is itself not a scientific claim. Falsifiability is not falsifiable!</p><p>Thus, science needs some meta-epistemic system to evaluate the meta-claims about the claims of science. This is unfortunate because once we go that way, we can find meta-meta-claims that cannot be evaluated in that meta-system. Does this make sense so far?</p><p>What we want is a self-contained epistemic system. One in which the rules that determine which claims are true are also expressible within that same system and also evaluated to be true. If we had this, we might attempt to claim we have found an objective definition of Truth.</p><p>However, this seems impossible to achieve with any sufficiently strong epistemic system. We can prove it in formal systems: <em>no sufficiently strong formal system can define its own criteria for truth</em>.</p><p>This result is called <a href="https://en.wikipedia.org/wiki/Tarski%27s_undefinability_theorem">Tarski&#8217;s undefinability theorem</a>. It is hard to say whether it applies outside formal logic, since it relies on specific formal semantics. If we want epistemic systems that at least cover science &#8212;and, by extension, math&#8212; we are severely limited in self-representation.</p><p>However, natural language <em>is</em> self-contained. All claims about natural language, and claims about those claims, and so on, are expressible in natural language. Can we find a good semantic definition of Truth in natural language that works for any claim?</p><p>It seems difficult because natural language contains claims like &#8220;<em>This sentence is false</em>&#8221; that cannot be evaluated consistently. This is a classic example of self-referential negation, the same kind of self-reference that leads to <a href="https://en.wikipedia.org/wiki/Russell%27s_paradox">Russell's paradox</a>. 
Any sufficiently self-contained system must be able to produce these types of claims.</p><p>To get rid of those, we can instead attempt a layered approach, in which self-referential claims can never exist, such as the following:</p><ol><li><p>Take all claims that do not contain the phrase &#8220;is true&#8221; or &#8220;is false&#8221; and define that as layer zero. </p></li><li><p>Define Truth for claims in layer zero, using whatever epistemic system you prefer.</p></li><li><p>Take all claims X in layer zero, build the claims &#8220;<em>X is true</em>&#8221; and &#8220;<em>X is false</em>,&#8221; and define that as layer one.</p></li><li><p>Define Truth in layer one, saying, &#8220;<em>The claim &#8216;</em>X is true<em>&#8217; is true if and only if the claim &#8216;</em>X<em>&#8217; is true according to the epistemic system used in layer zero</em>.&#8221;</p></li><li><p>Carry on for layers two and beyond.</p></li></ol><p>Layer zero will contain all non-meta claims. Things like regular logical, scientific, social, and ethical claims. We still haven't completely solved how to assign truth to those, but any epistemic system you choose will work for our purposes.</p><p>Then, layers one and upward contain increasingly meta claims. But, crucially, no layer contains self-referential claims. That is because, by construction, layers one and above only refer to lower-layer claims, and layer zero doesn&#8217;t contain any self-referential claim. Claims like &#8220;<em>This sentence is false</em>&#8221; do not exist anywhere in this infinite ladder of layers because &#8220;<em>This sentence</em>&#8221; is not even a valid claim, so there&#8217;s no X from which to form &#8220;<em>X is false.</em>&#8221;</p><p>Have we solved Truth, though? Well, only partially, it seems. For starters, we know there are many claims this layered system cannot represent, including all those self-referential ones. How do we know we didn&#8217;t throw away anything meaningful? 
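The five-step layered construction above can be sketched as a toy evaluator. In this sketch (all names, and the two sample layer-zero claims, are hypothetical), a claim is either a plain layer-zero statement or a meta-claim wrapping a claim one layer below, so self-reference is unrepresentable by construction.

```python
# Toy sketch of the layered truth construction. A claim is either a plain
# layer-zero statement (a string) or a meta-claim ("is_true", X) /
# ("is_false", X) about a claim X from a lower layer.

base_system = {                        # step 2: some layer-zero epistemology
    "the Earth is round": True,
    "all prime numbers are odd": False,
}

def evaluate(claim):
    """Evaluate a claim; meta-claims defer to the layer below (step 4)."""
    if isinstance(claim, tuple):
        op, inner = claim
        return evaluate(inner) == (op == "is_true")
    return base_system[claim]          # layer zero: the base system decides

print(evaluate(("is_true", "the Earth is round")))                       # layer one
print(evaluate(("is_false", ("is_true", "all prime numbers are odd"))))  # layer two
# "This sentence is false" has no encoding here: a meta-claim can only wrap
# an already-built lower-layer claim, so self-reference never arises.
```

Because every meta-claim must wrap a claim that already exists one layer down, the recursion always bottoms out at layer zero, which is the point of the whole construction.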
If logic says anything about the real world, it seems to imply we can&#8217;t have our cake and eat it, too. We either have a weak system, an incomplete system, or an inconsistent one.</p><div><hr></div><h2>Conclusions?</h2><p>In summary, a claim is only true or false in a given epistemic system. There is no notion of truth outside an epistemic system and no universal epistemology. There are only epistemic systems that are useful in some contexts.</p><p>When we talk about &#8220;the truth&#8221; of something, that something is never directly an entity in objective or subjective reality but <em>a claim</em> about those entities. Claims are formulated in a given language, and all language requires at least a sender and a receiver, two subjects. Thus, meaning in that language is always <em>intersubjective</em>, contingent on the interpretation that those subjects attach to each utterance.</p><p>To be clear, there can be objective truths (with lowercase t). Those are claims about objective reality that are (approximately) true in an agreed epistemology. Claims about math, natural laws, physics, and chemistry would be objective, as well as claims about how social entities behave under certain conditions, like humans or companies.</p><p>But there can&#8217;t be an objective definition of Truth with capital T.</p><p>That would imply there is a single, universal, self-contained epistemology on which to evaluate all possible claims. Tarski's undefinability shows that, at least for formal systems &#8212;the ones where we would expect demonstrations to be easiest&#8212; the definition of truth always needs a bigger epistemic system, which in turn cannot contain itself.</p><p>This doesn't mean all truth must be relative, though. There are grounds for evaluating different epistemic systems and deciding which ones are most sensible. Pragmatism is one of them. 
If your epistemic system fails to produce useful predictions, it's at best useless and, at worst, dangerous to you.</p><p>But pragmatism is just one possible meta-system. There is no a priori way to demonstrate that pragmatism is any more true than, say, revelation. It is only more useful.</p><h3>Reality and Truth</h3><p>As we peel back the layers of "Truth," we must confront a humbling yet inescapable limitation: the observation problem. Simply put, we are all unreliable witnesses. Our senses are limited tools, and they&#8217;re not only reporting information to our minds, but they&#8217;re also interpreting it. </p><p>Even when we employ instruments to extend our senses, we're still bound by frameworks and constructs we have devised.</p><p>&#8220;<em>Inherent murkiness</em>&#8221; might be a great way to frame how we see things. We're not just interpreting data; we're interpreting our interpretations of that data. All this interpretation takes place within a mind that is a product of subjective experiences, hardly a blank slate.</p><p>This brings us to a final unsettling epiphany: <em>all truths are, to varying degrees, subjective.</em> Gravity, our perception of time, and even concepts like existence are all filtered through our all-too-human lenses. In Truth, like in all matters, we are constrained by our empirical experiences and the words and symbols we've devised to describe them.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>This is a <em>semantic theory of truth</em>, in which truth is analyzed from the linguistic point of view as a property of sentences in a given language. 
There are other theories of truth, not necessarily contradicting the semantic ones.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Well, you can make the pedantic argument that <em>the sky is blue</em> is subjective because <em>blueness</em> is not a quality of the objects but an emergent quale of the way our perceptual systems work. And yes, you would be right. But you could interpret that claim as &#8220;<em>the sky scatters light in the frequencies that most people associate with blueness</em>.&#8221; And in that sense, it is an objective claim. Again, <em>interpretation matters</em>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p><em>Epistemology</em> is actually a branch of philosophy that studies, among other things, the nature of knowledge and how to attain it. It is way bigger than we can discuss here, so I apologize to my fellow philosophers for borrowing and oversimplifying the concept.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Note that social sciences, insofar as they make claims that can be falsified experimentally, are <em>not</em> social but empirical epistemologies.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Perhaps a more evident social truth is that &#8220;<em>Paper money is worth something</em>.&#8221; Money is only valuable as long as people believe it is. 
Once people believe money is worthless, it immediately loses its value. The same happens with stocks, bonds, and any other currency without material value, including &#8212;yes, you know&#8212; cryptocurrencies.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Maybe you believe that your definition of &#8220;the meaning of life&#8221; is universal, but I don&#8217;t, and that fact already makes it a personal epistemology.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>If you don&#8217;t think these two claims are contradictory, then that shows we have a different interpretation of what <em>the economy is improving</em> means, underscoring this essay's whole point.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>This is a crucial step that should be self-evident; if not, think about it for a second. The usual (and logical) meaning of &#8220;or&#8221; is <em>literally</em> that at least one of two claims is true. Thus, if I &#8220;or&#8221; a true claim with anything else, the composite claim is, by definition, also true.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>This is the other crucial step of the proof. If &#8220;A or B&#8221; is true, then either A, B, or both must be true. Otherwise, the whole claim would be false. 
Thus, if I know that A is false, that automatically tells me that B has to be true.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>And this can happen by accident, too. It gets even more concerning when you realize that P and Q don&#8217;t need any semantic relation.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>This is called <em>a</em> <em>pragmatic theory of truth:</em> true claims are those that, when acting upon them, you obtain the best possible results.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>Yes, there is a lot of nuance here. 
We&#8217;ll discuss science and falsifiability in greater depth in a future post.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Can Machines Think?]]></title><description><![CDATA[What Alan Turing's seminal paper Computer Machinery and Intelligence tells us about the nature of thinking.]]></description><link>https://blog.apiad.net/p/can-machines-think</link><guid isPermaLink="false">https://blog.apiad.net/p/can-machines-think</guid><dc:creator><![CDATA[Alejandro Piad Morffis]]></dc:creator><pubDate>Mon, 10 Jul 2023 09:14:50 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f14f666-2f07-4c78-8c85-5213e12e3770_1024x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!lgvp!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f14f666-2f07-4c78-8c85-5213e12e3770_1024x1024.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!lgvp!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f14f666-2f07-4c78-8c85-5213e12e3770_1024x1024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!lgvp!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f14f666-2f07-4c78-8c85-5213e12e3770_1024x1024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!lgvp!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f14f666-2f07-4c78-8c85-5213e12e3770_1024x1024.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!lgvp!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f14f666-2f07-4c78-8c85-5213e12e3770_1024x1024.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!lgvp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f14f666-2f07-4c78-8c85-5213e12e3770_1024x1024.jpeg" width="1024" height="1024" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1f14f666-2f07-4c78-8c85-5213e12e3770_1024x1024.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:250564,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!lgvp!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f14f666-2f07-4c78-8c85-5213e12e3770_1024x1024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!lgvp!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f14f666-2f07-4c78-8c85-5213e12e3770_1024x1024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!lgvp!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f14f666-2f07-4c78-8c85-5213e12e3770_1024x1024.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!lgvp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f14f666-2f07-4c78-8c85-5213e12e3770_1024x1024.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Is this your notion of an intelligent machine, Mr. Turing?
&#8212; Generated with SDXL via <a href="https://t.me/lovelaice_bot">Lovelaice</a>.</figcaption></figure></div><p>In 1950, Alan Turing, then a well-established mathematician and logician recognized by many as one of the forefathers of the nascent Computer Science discipline, published one of his seminal papers, <em><a href="https://academic.oup.com/mind/article-pdf/LIX/236/433/9866119/433.pdf">Computing Machinery and Intelligence</a></em>. In it, Turing formulated his most famous thought experiment, <em>the Imitation Game</em>. It has since been known as the <em>Turing Test</em>, widely regarded as the quintessential artificial intelligence test.</p><p>But Turing never intended his imitation game to be taken as an actual, operationalizable test. He was going for a deeper philosophical discussion, looking for a definition of &#8220;thinking.&#8221; And by designing his test as he did and claiming what a positive result would imply, he told us his philosophical stance on this question.</p><blockquote><p><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Andrew Smith&quot;,&quot;id&quot;:97521723,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0218f38-1d3c-4789-b1d5-1cc0ca039c45_938x909.jpeg&quot;,&quot;uuid&quot;:&quot;58daefb7-d102-48e4-a4a8-d46b673f777e&quot;}" data-component-name="MentionToDOM"></span> published a <a href="https://goatfury.substack.com/p/turing-2023">deep dive into the Turing Test</a> at <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Goatfury 
Writes&quot;,&quot;id&quot;:1583177,&quot;type&quot;:&quot;pub&quot;,&quot;url&quot;:&quot;https://open.substack.com/pub/goatfury&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bf801887-f48f-43cc-a3c6-1548a7c4fe9c_972x972.png&quot;,&quot;uuid&quot;:&quot;3dddd06f-16d5-4f76-80b0-c9f374b86833&quot;}" data-component-name="MentionToDOM"></span>. I recommend you read it before moving on. It is a nice complement to this post and goes deeper into the history of the test and the many attempts to solve it.</p></blockquote><p>In this post, I invite you to analyze Turing&#8217;s imitation game from a philosophical perspective. First, we&#8217;ll go over the original test as Turing devised it. Then, we&#8217;ll tackle what &#8220;passing&#8221; the test implies and look at some criticism. Finally, we&#8217;ll briefly discuss how close we are to claiming something like &#8220;We&#8217;ve cracked the Turing Test.&#8221;</p><div><hr></div><h2>The Imitation Game</h2><p>Turing was a very good logician. So, it is only natural that he tackles this problem with the utmost formality.
He opens the paper with the following statement:</p><blockquote><p><em>I propose to consider the question, &#8220;Can machines think?&#8221; This should begin with definitions of the meaning of the terms &#8216;machine&#8217; and &#8216;think&#8217;. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. [&#8230;] Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.</em> - Alan Turing, 1950.</p></blockquote><p>Thus, instead of trying to answer this abstract question, Turing proposes a different question, which he considers equivalent in the sense that matters to us. He calls it &#8220;the imitation game,&#8221; which works as follows.</p><p>Consider two humans and a computer. One of the humans and the computer are located behind a closed door and act as guests, while the second human acts as host or judge. The role of this judge is to determine which of the two guests is human and which is a computer. To achieve this, the judge can ask whatever question they want via a chat interface &#8212;a technical question like &#8220;Factorize this huge number&#8221;, a subjective question like &#8220;What is your favorite movie?,&#8221; or anything in between. If the computer consistently succeeds in tricking the judge, we say it &#8220;won&#8221; the imitation game.</p><p>Notice that winning the game for the computer is not simply a matter of being more intelligent, knowledgeable, or even more &#8220;human-like&#8221; than the human. The objective of the computer is to be <em>indistinguishable</em> from the human, which means the judge should, on average, be no better at detecting who is who than chance. In contrast, for humans to win, they must be consistently detected as such.
So the game is asymmetric in this sense.</p><p>Turing claims that when taken to its extreme, this imitation game will require the computer to display the full range of cognitive capabilities we consider to make up the human thinking process. Note that this includes purposefully pretending to be less capable on some tasks to avoid being detected &#8212;e.g., deliberately failing to solve a 9x9 sudoku, a task that is trivial for any modern computer but that no human could complete in a few seconds.</p><p>If we agree with Turing, we must concede the following statement: </p><div class="pullquote"><p><em>Any machine that can consistently win the imitation game is doing something indistinguishable from what humans call &#8220;thinking.&#8221;</em></p></div><p>An important note is that this says nothing about whether that machine is sentient or has any motivations or subjective experience. Turing is not claiming that such a machine would &#8220;feel&#8221; something. He claims that whatever this process we call &#8220;thinking&#8221; is about, it can&#8217;t be anything beyond what this machine is doing.</p><h2>What does &#8220;thinking&#8221; mean?</h2><p>Assuming Turing&#8217;s claim is true implies subscribing to a specific definition of &#8220;thinking.&#8221; To discover that definition, let&#8217;s try to falsify the consequent.</p><p>Suppose we accept that a computer can consistently beat the imitation game but still claim it is not thinking. But how do we know? By the definition of the imitation game, whatever the computer does is indistinguishable from what a human would do in the same situation.</p><p>The only way to argue that this machine is not thinking is to claim that <em>thinking</em> <em>is not just what can be observed</em>. A perfect imitation of thinking is not equivalent to thinking.
There&#8217;s got to be something fundamental to the definition of &#8220;thinking&#8221; that cannot be distinguished from the outside.</p><p>Turing is thus arguing for a <em>functionalist</em> definition of &#8220;thinking&#8221;: what we mean when we say &#8220;someone is thinking&#8221; can be captured entirely in the <em>function</em> of thinking. What thinking <em>is</em>, is nothing more than what thinking <em>does</em>, according to Turing, at least.</p><p>There are many places where a functionalist definition seems accurate. For example, what is the definition of &#8220;to move?&#8221; Regardless of how something moves &#8212;whether using wheels, legs, combustion, or air; self-propelled or by external action; with purpose or randomly; etc.&#8212; if it moves, it moves. There is nothing more to &#8220;moving&#8221; than its function &#8212;<em>to change the location of something</em>. In other words, you cannot perfectly imitate motion while not actually moving.</p><p>Turing argues that the same happens with &#8220;thinking.&#8221; There is nothing more to <em>thinking</em> than its function. </p><p>However, even if you agree that thinking is purely a functional concept, it is still unclear if the imitation game completely captures that functionality. Turing claims that even if we can&#8217;t define thinking as a concrete set of enumerable qualities, we can undoubtedly discover what <em>not thinking</em> is. Whenever we accurately distinguish the computer from the human, we detect an instance of <em>not thinking</em>. If we can&#8217;t find any, we must agree that what has happened is an instance of <em>thinking.</em></p><p>In summary, to agree with Turing means we agree with the following two claims:</p><ol><li><p><strong>Thinking is a functional concept. 
</strong>This means there is nothing to the definition of &#8220;thinking&#8221; beyond its function, whatever that function is. Equivalently, any system &#8212;an information processing machine or a biological brain&#8212; that performs the function that characterizes <em>thinking</em> must be considered to be <em>thinking</em>.</p></li><li><p><strong>The only way to consistently beat the imitation game is by performing this function. </strong>This means that any clever trick you could come up with to build a computer program that imitates thinking without fully thinking can be detected, <em>in principle,</em> by a sufficiently competent judge (or judges).</p></li></ol><p>Thus, to falsify Turing&#8217;s claim, we have to attack either of those two arguments. Let&#8217;s look at some of the main criticisms.</p><h2>Criticism</h2><p>Turing considers nine different counterarguments in his paper, from theological to philosophical to practical. He focuses on what he saw, at his time, as the main opposing viewpoints. Instead of reviewing them in detail, I want to focus on two broad attack strategies, one for each claim. My purpose is not to convince you either side is correct but to give you as much ammunition as possible to think it through yourself.</p><p>Attacking the first claim &#8212;that thinking is a functional concept&#8212; requires us to pose an alternative definition for thinking. We could claim that machines cannot think because the essence of thinking somehow eludes machines or because it is special to biological entities, animals, or even humans.
However indistinguishable from human thinking the computer&#8217;s behavior may seem, it will never amount to actual thinking because it lacks the <em>essence </em>of thinking.</p><p>This essentialist definition translates to: &#8220;Machines cannot think because thinking is not something that machines can do.&#8221; This is straight circular reasoning, but you&#8217;d be surprised at how many seemingly good rebuttals are just this in disguise.</p><p>The most famous argument against the functionalist definition of thinking is John Searle&#8217;s <em><a href="https://plato.stanford.edu/entries/chinese-room/">Chinese room</a> </em>thought experiment. Searle posits a setup in which an information processing system &#8212;a man who speaks no Chinese following a book of rules to produce sensible Chinese replies to Chinese messages&#8212; exhibits a behavior indistinguishable from &#8220;understanding.&#8221; Still, he claims that no part of that system &#8212;neither the man nor the book&#8212; understands Chinese, and crucially, <em>neither does the system as a whole</em>. Searle goes much deeper, and I want to give his argument the respect it deserves, so I&#8217;ll leave this discussion for a future post.</p><p>In any case, attacking the functionalist definition of thinking is a very difficult position to adopt because if there&#8217;s something essential to thinking that can&#8217;t be distinguished from the outside &#8212;in the sense that no experiment one could perform could differentiate a sufficiently good &#8220;imitation of thinking&#8221; from &#8220;actual thinking&#8221;&#8212; then we can&#8217;t understand what thinking is with the tools of science. All science can do is design experiments and observe behaviors from the outside.</p><p>Attacking the second claim is somewhat easier. Even if we accept the functional definition of thinking, who&#8217;s to say the imitation game captures all the complexities necessary to ensure there&#8217;s no way to win it but by thinking?
Here we can point out that humans are gullible &#8212;as the many recent stories of people claiming some AI program is sentient show us.</p><p>However, this is an attack on concrete implementations of the experiment, not its essence. Most people are gullible in certain circumstances, but even if several people have claimed over the years that some computer programs are conscious or sentient, humanity as a whole remains convinced we aren&#8217;t there yet. So far, in every attempt, we have unmasked the computer.</p><p>Another attack focuses on the anthropomorphic bias of the imitation game. Surely there are ways a thinking entity &#8212;an extraterrestrial being, for example&#8212; could be detected in the imitation game, not because of a lack of cognitive abilities but because its background knowledge and reasoning modes would be so different from ours that any judge would immediately recognize the human. And while this is a strong argument, it doesn&#8217;t undermine Turing&#8217;s original claim.</p><p>Turing claims that anything that can pass the imitation game can think, but he doesn&#8217;t claim that anything that can think should be able to pass the imitation game. In formal terms, the imitation game is a <em>sufficient but not necessary</em> condition to claim someone (or something) is capable of thinking, at least at the cognitive level humans are capable of. So, while the anthropomorphic bias argument does highlight a relevant limitation of Turing&#8217;s imitation game, it doesn&#8217;t technically falsify the claims.
It doesn&#8217;t help that Turing himself predicted that, by the year 2000, computers would be able to fool an average judge at least 30% of the time after five minutes of questioning. Many took this as an objective milestone for claiming we had reached strong AI.</p><p>Any concrete implementation of the imitation game runs into the practical issues of human gullibility and biases, which makes it almost impossible to select judges who are guaranteed not to fall for cheap tricks. These issues alone explain all the occasions before 2022 when someone claimed they beat the Turing Test.</p><p>However, starting in 2023, for the first time, we had technology that is eerily close to what many people would consider a worthy contender for the imitation game: large language models (LLMs). Some of the wildest claims about modern LLMs like GPT-5 seem to imply that the most powerful of these models are capable of human-level reasoning, at least in some domains.</p><p>But can GPT-5 pass the Turing Test? Again, this is hard to evaluate objectively because there are so many implementation details to get right. But I, and most other experts, don&#8217;t believe we are there yet. For all their incredible skills, LLMs fail in predictable ways, allowing any sufficiently careful and well-informed judge to detect them. So no, my bet is that current artificial intelligence can&#8217;t <em>yet</em> beat the imitation game, at least in the spirit originally proposed by Turing.</p><p>In any case, there&#8217;s no GPT-5 out there consistently tricking humans into believing it is one of us.</p><p>Or is it?
And how would we know?</p><div><hr></div><p><em>I'm deeply thankful to <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Eldar Sarajlic&quot;,&quot;id&quot;:12024051,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F2d38f0b8-d8c4-41d1-8ac7-74bf1ae3eadb_435x435.jpeg&quot;,&quot;uuid&quot;:&quot;fb9d02cb-2c6a-4072-a1c9-75e09616b010&quot;}" data-component-name="MentionToDOM"></span> and <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Andrew Smith&quot;,&quot;id&quot;:97521723,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0218f38-1d3c-4789-b1d5-1cc0ca039c45_938x909.jpeg&quot;,&quot;uuid&quot;:&quot;47ea0718-4086-41c6-badc-93142f18825f&quot;}" data-component-name="MentionToDOM"></span> for their helpful feedback.</em></p>]]></content:encoded></item></channel></rss>