<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>jolek78&#39;s blog</title>
    <link>https://jolek78.writeas.com/</link>
    <description>thoughts from a friendly human being</description>
    <pubDate>Sun, 05 Apr 2026 19:50:33 +0000</pubDate>
    <image>
      <url>https://i.snap.as/DEj7yFm4.png</url>
      <title>jolek78&#39;s blog</title>
      <link>https://jolek78.writeas.com/</link>
    </image>
    <item>
      <title>Reflections on an (impossible) escape from capitalism</title>
      <link>https://jolek78.writeas.com/reflections-on-an-impossible-escape-from-capitalism?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[It was an ordinary Tuesday evening. The package had arrived by courier that morning, but I&#39;d only opened it after dinner, with that silent ceremony I perform every time new hardware arrives - as if opening a box quickly were a form of disrespect toward the object. Inside was a MINISFORUM UM690L. Small, almost ridiculously small. A Ryzen 9 6900HX in a form factor that fit in the palm of a hand. I put it on the desk and looked at it. Looked at it again. And then something uncomfortable occurred to me. I had ordered it from a Chinese retailer, with a credit card, through a completely traceable payment infrastructure, from one of the most centralised and surveilled commercial ecosystems in existence. To build a homelab that would let me escape centralised, surveilled ecosystems.&#xA;&#xA;&lt;!--more--&gt;&#xA;&#xA;The funny thing - funny in the sense that it makes you laugh, but badly - is that I&#39;m not alone. Every day, somewhere in the world, someone orders a mini-PC, a Raspberry Pi, a Mikrotik managed switch, with the declared goal of taking back control of their digital life. They order it on Alibaba, pay with PayPal, wait for the courier. And they see nothing strange in any of this, because the contradiction has become so structural it&#39;s turned invisible. This article is an attempt to make it visible again. Without easy solutions, because I don&#39;t have any. When did I ever?&#xA;&#xA;The homelab promise&#xA;&#xA;When, in 2019, I began self-hosting practically everything - Nextcloud, Jellyfin, Navidrome, FreshRSS, Open WebUI and about twenty-five other services across roughly twenty Docker containers on Proxmox LXC - I did it with a precise motivation: I wanted to know where my data lived, who could read it, and to have the option of switching it off myself if I ever felt like it. Not when a company decides to cancel a service, not when someone else changes the licensing terms. Me. 
This came after a long period of reflection on myself, on the work I was doing and still do, and on the technological society I live in. It&#39;s an ideological choice before it&#39;s a technical one. Technology as a tool of autonomy rather than control; infrastructure as something you own rather than something that owns you. I hope no one is alarmed when I say that some of these reflections began, in part, with reading Theodore Kaczynski&#39;s Manifesto, before naturally moving on to more authoritative sources. Yes, I&#39;m eccentric, but not quite that much.&#xA;&#xA;When you pay a subscription to a cloud service, the transaction doesn&#39;t end the moment you authorise the payment. Shoshana Zuboff, in The Age of Surveillance Capitalism, calls this mechanism behavioral surplus: the behavioural data extracted beyond what&#39;s needed to provide the service, then resold as predictive raw material.&#xA;&#xA;  &#34;Under the regime of surveillance capitalism, however, the first text does not stand alone; it trails a shadow close behind. The first text, full of promise, actually functions as the supply operation for the second text: the shadow text. Everything that we contribute to the first text, no matter how trivial or fleeting, becomes a target for surplus extraction. That surplus fills the pages of the second text. This one is hidden from our view: &#39;read only&#39; for surveillance capitalists. In this text our experience is dragooned as raw material to be accumulated and analyzed as means to others&#39; market ends. The shadow text is a burgeoning accumulation of behavioral surplus and its analyses, and it says more about us than we can know about ourselves.&#34;&#xA;&#xA;You&#39;re not the customer of the system - you&#39;re its product. Your habits, your schedules, your preferences, your hesitations before clicking on something: all of it is collected, modelled, sold. 
The transaction isn&#39;t monthly; it&#39;s continuous, invisible, and never ends as long as you use the service. With hardware, in principle, the transaction is one-off: you buy, you pay, it&#39;s done, it&#39;s yours. The drive is in your room, not on a server subject to government requests, security breaches, or business decisions that have nothing to do with you but affect your access to those services. This distinction - between a tool you use and a system that uses you - is the real stake of the homelab. It&#39;s not about saving money, it&#39;s not about performance. It&#39;s about who controls what.&#xA;&#xA;The problem is that building this infrastructure requires hardware, time, knowledge, and resources. The hardware comes from somewhere; the time, knowledge, and energy come from a privilege not granted to everyone.&#xA;&#xA;The market I hadn&#39;t seen&#xA;&#xA;Search &#34;mini PC homelab&#34; on any marketplace. What you find is a productive ecosystem that has exploded over the last five years in ways I honestly didn&#39;t expect.&#xA;&#xA;MINISFORUM, Beelink, Trigkey, Geekom, GMKtec. Zimaboard, with its single-board aesthetic designed explicitly for people who want home racks. Raspberry Pi and the galaxy of clones - Orange Pi, Rock Pi, Banana Pi. Mikrotik managed switches at accessible prices. 1U rack cases to mount under a desk. M.2 NVMe SSDs with TBW calculated for small server workloads. Silent PSUs designed to run 24/7. A market built from scratch that exists precisely because there&#39;s a community of people who want to run servers at home. r/homelab and r/selfhosted on Reddit have approximately 2.8 and 1.7 million subscribers respectively - publicly verifiable numbers, and growing. YouTube is full of dedicated channels. There&#39;s an entire attention economy built around &#34;escaping&#34; the attention economy.&#xA;&#xA;But it&#39;s worth asking: who built this market, and why. 
MINISFORUM and Beelink don&#39;t exist out of ideological sympathy toward the homelab movement. They exist because they identified a profitable segment and served it with industrial precision. Kate Crawford, in Atlas of AI, documents how technological supply chains follow niche demand with the same efficiency they follow mass demand: factories in Guangdong optimise production lines not for a worldview, but for a margin. The fact that the resulting product also satisfies an ideological need is, from the producer&#39;s perspective, irrelevant.&#xA;&#xA;  &#34;The Victorian environmental disaster at the dawn of the global information society, shows how the relations between technology and its materials, environments, and labor practices are interwoven. Just as Victorians precipitated ecological disaster for their early cables, so do contemporary mining and global supply chains further imperil the delicate ecological balance of our era.&#34;&#xA;&#xA;The mechanism had already been described with theoretical precision in 1999 by Luc Boltanski and Ève Chiapello in The New Spirit of Capitalism. Their thesis: capitalism is never defeated by critique - it&#39;s incorporated. When a critique becomes widespread enough, the system absorbs it and transforms it into a market segment. The artistic critique of the Sixties - autonomy, authenticity, rejection of standardisation - became the marketing of the creative economy. The critique of digital centralisation - sovereignty, privacy, control - has become an online catalogue to browse.&#xA;&#xA;Resistance has become a market segment. Every time someone buys a UM690L to stop paying subscriptions to services they don&#39;t control, a factory in Guangdong sells a UM690L. 
Capitalism hasn&#39;t been defeated - it has shifted (at least for a small slice of the population: nerds, hackers) the extraction point from subscriptions to hardware.&#xA;&#xA;The accumulation syndrome&#xA;&#xA;There&#39;s a further level, more ridiculous and more personal, that homelab communities never openly discuss but that anyone with a homelab recognises immediately. The Raspberry Pi 4 bought &#34;for a project.&#34; The old ThinkPad kept because &#34;you never know.&#34; The 4TB drive recovered from a decommissioned NAS - &#34;it might come in handy.&#34; The second-hand switch bought on eBay for eighteen quid because it was cheap and might be useful. The cables, the cables, the cables.&#xA;&#xA;r/homelab has a term for this: just in case hardware. It&#39;s the hardware of the imaginary future, of projects that exist only in your head, of configurations you&#39;ll finally test one day - one day. In the meantime it occupies a shelf, draws power on standby, and generates a diffuse sense of possibility that&#39;s indistinguishable from the most classic consumerism. The underlying psychological mechanism has a precise name: compensatory consumption - purchasing as a response to a perceived loss of autonomy or control. You buy hardware because buying hardware gives you the feeling of recovering agency over something. The aesthetic differs from traditional consumerism - no luxury logos, no recognisable status symbols - but the mechanism is identical.&#xA;&#xA;That said, there&#39;s a partially honest answer to all of this: the second-hand and refurbished market. The ThinkPad X230 on eBay, the Dell R720 server decommissioned from a data centre, the drive from someone who upgraded their NAS. Hardware that would otherwise go to landfill, with its lifespan extended, without generating new production demand. It&#39;s closer to repair ethics than compulsive purchasing. 
But it has its own internal contradiction: it requires even more technical competence than buying new - knowing how to evaluate wear, diagnose an unknown component, deal with ten-year-old drivers. The barrier to entry rises further. And the refurbished market is itself now an organised commercial sector, with its own margins, platforms, and pricing logic. It&#39;s not a clean way out. It&#39;s a less dirty one.&#xA;&#xA;And then there&#39;s the energy question, which is usually ignored in homelab discussions but is actually the most uncomfortable of all - uncomfortable enough to deserve a fuller treatment later. For now let&#39;s just say: every machine on your shelf that &#34;draws power on standby&#34; is a line item in the energy bill that the homelab movement rarely budgets for.&#xA;&#xA;It&#39;s not for everyone. And it shouldn&#39;t be that way.&#xA;&#xA;There&#39;s a second level of the paradox that is even more uncomfortable than the first. Building a homelab requires money - relatively little, but it requires it. It requires physical space. It requires a decent internet connection. And it requires time. A lot of time. Not installation time - that&#39;s measurable, finite. The learning time that precedes everything else. To reach the point where you can set up a working infrastructure with Proxmox, LXC containers, centralised authentication, reverse proxy, automated backups - you already need to have spent years understanding how Linux works, how to reason about networks and permissions, how to read a log. I&#39;ve been at this since Red Hat in 1997, and it took me nearly thirty years to get where I am. I should know this by now. And yet it still catches me off guard.&#xA;&#xA;That time didn&#39;t fall from the sky. It&#39;s time I was able to dedicate because I had a certain kind of job, a certain stability, a certain amount of mental energy left at the end of the day. 
It&#39;s time belonging to the comfortable middle class with a stable, or near-stable, position - not someone working three warehouse shifts a week. Passion isn&#39;t enough.&#xA;&#xA;Johan Söderberg documents this in Hacking Capitalism: the FOSS movement was born as resistance to capitalism, but reproduces within itself hierarchies of skill and merit that make it structurally exclusive. Freedom is technically available to anyone, but effective access requires resources distributed in anything but a democratic fashion. Söderberg goes further than simply observing exclusivity: voluntary open-source labour produces use value - working software, documentation, community support - which capital then extracts as exchange value without compensating those who produced it. Red Hat builds a billion-dollar company on a kernel written largely by volunteers. It&#39;s not just that not everyone can enter: it&#39;s that those who do often work for someone without knowing it. The homelab inherits this problem and amplifies it.&#xA;&#xA;  &#34;The narrative of orthodox historical materialism corresponds with some very popular ideas in the computer underground. It is widely held that the infinite reproducibility of information made possible by computers (forces of production) has rendered intellectual property (relations of production, superstructure) obsolete. The storyline of post-industrial ideology is endorsed but with a different ending. Rather than culminating in global markets, technocracy and liberalism, as Daniel Bell and the futurists would have it; hackers are looking forward to a digital gift economy and high-tech anarchism.&#34;&#xA;&#xA;This isn&#39;t a peculiarity of the homelab movement: it&#39;s a recurring structure across every technological wave. Langdon Winner, in his influential essay Do Artifacts Have Politics?, argued that technological choices are never neutral - they embed power structures, distribute access non-randomly. 
Amateur radio in the 1920s, the personal computer in the 1980s, the internet in the 1990s: every time the promise was democratising, every time the actual distribution followed pre-existing lines of privilege. Not through malice, but through structure.&#xA;&#xA;The irony is this: those who would most need digital autonomy - those who can&#39;t afford subscriptions, who live under governments that surveil communications, who are most exposed to data collection - are exactly those least likely to be able to build a homelab. Not for lack of interest or intelligence. For lack of time, money, and years of privileged exposure to technology.&#xA;&#xA;Homelab communities don&#39;t usually talk about this. They talk about which mini-PC to buy, how to optimise power consumption, which distro to use as a base. The conversation about structural exclusivity exists, but at the margins - in Jacobin, in Logic Magazine, in EFF activism - while the centre of the discourse remains impermeable. It&#39;s not that no one talks about it: it&#39;s that the peripheries talk about it, and peripheries don&#39;t set the agenda. All this conversation takes place in a room to which not everyone has a ticket. And nobody inside seems to find that particularly problematic.&#xA;&#xA;A technological cosplay?&#xA;&#xA;So is the whole thing a joke? Is the homelab just anti-capitalist cosplay while you continue to fund the same supply chains? In part, yes.&#xA;&#xA;The UM690L was designed in China, assembled in China, shipped via container on ships burning bunker fuel. Global maritime transport accounts for roughly 2.5% of global CO₂ emissions - a share the IMO has been trying to reduce for years with slow progress and continuously deferred targets. Then: distributed via Alibaba, paid by credit card. 
Every piece of technological hardware carries an extractive chain that begins in lithium mines in Bolivia and cobalt mines in the Democratic Republic of Congo, passes through factories in Guangdong, and ends in electronic waste processing centres in Ghana. The hardware travels that supply chain exactly like any other consumer device. And hardware has a lifecycle. In five years the UM690L will be too slow, or it&#39;ll break, or something will come out with energy efficiency too much better to ignore. And I&#39;ll buy again. The mini-PC market for homelabs depends on the obsolescence of previous purchases - exactly like any other consumer market.&#xA;&#xA;The critique of capitalism, when widespread enough, isn&#39;t suppressed - it gets incorporated. The system absorbs the values of resistance and transforms them into a market segment. Autonomy becomes a selling point. Decentralisation becomes a brand. The rebel who wanted to exit the system finds themselves funding a new vertical of the same system, convinced they&#39;re making an ethical choice.&#xA;&#xA;The other side&#xA;&#xA;But there&#39;s a structural difference that it would be dishonest to ignore.&#xA;&#xA;When you pay a subscription to a cloud service, the cost isn&#39;t just the monthly fee. It&#39;s the ongoing cession of data, behaviours, habits. It&#39;s Zuboff&#39;s behavioral surplus: you&#39;re not using a service - you&#39;re being used as raw material to train models, build profiles, sell advertising. The transaction never ends, in ways you often can&#39;t see and can&#39;t opt out of as long as you use the service.&#xA;&#xA;With hardware, the transaction ends. Your data stays on a physical drive in your room, not on a server subject to government requests, breaches, or business decisions that have nothing to do with you but impact your life. The software running on it - Proxmox, Debian, Nextcloud, Jellyfin - is open source and yours: if something changes in a way you don&#39;t accept, you can leave. 
This resilience has real value - but it&#39;s worth noting it&#39;s asymmetric resilience. It works for those who have the skills to exercise it. For those who don&#39;t, the theoretical portability of their own data from Nextcloud to something else requires exactly the same skills already identified as a barrier to entry. The freedom to leave is real. Access to that freedom, much less so.&#xA;&#xA;And then there&#39;s the energy question I&#39;ve been putting off long enough. The major hyperscalers - AWS, Google, Azure - operate with a PUE (Power Usage Effectiveness) between 1.1 and 1.2. For every watt of useful computation, they dissipate barely 0.1-0.2 watts in heat and infrastructure. They have enormous economies of scale, optimised industrial cooling, significant renewable energy investment, and above all: their servers run at very high utilisation rates. Almost always busy.&#xA;&#xA;A homelab at home works radically differently. The machine runs 24/7 even when it&#39;s doing nothing - and for most of the time, it&#39;s doing nothing. Navidrome serving three requests a day, FreshRSS fetching every hour, an LDAP container listening without receiving connections. You&#39;re paying the energy cost of the infrastructure regardless of usage. The implicit PUE of a homelab, honestly calculated against the ratio of total consumption to actual workload, is far worse than a data centre&#39;s. IEA data (Data Centres and Data Transmission Networks, updated annually) shows that major cloud providers progressively improve energy efficiency through economies of scale that no individual homelab can replicate.&#xA;&#xA;This doesn&#39;t automatically mean cloud is the ethically correct choice - the problem doesn&#39;t reduce to PUE, and surveillance has costs that aren&#39;t measured in kilowatts. 
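&#xA;&#xA;As a back-of-envelope illustration of the implicit-PUE arithmetic above - a sketch with purely hypothetical numbers, not measurements of any real machine - in Python:&#xA;&#xA;

```python
# Hypothetical comparison: a data centre's PUE versus a homelab's implicit
# PUE, i.e. average energy drawn divided by the share spent on actual work.

def implicit_pue(idle_watts: float, load_watts: float, busy_fraction: float) -> float:
    """Average draw divided by the useful-work share of that draw."""
    avg_draw = idle_watts * (1 - busy_fraction) + load_watts * busy_fraction
    useful_draw = load_watts * busy_fraction  # only busy time counts as workload
    return avg_draw / useful_draw

# A mini-PC idling at 10 W, peaking at 35 W, busy about 2% of the day:
print(implicit_pue(10.0, 35.0, 0.02))  # → 15.0
```

&#xA;&#xA;Even with generous assumptions, the ratio lands an order of magnitude above the hyperscaler figure of 1.1-1.2 - which is the whole point of the comparison.&#xA;&#xA;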
It means that anyone with SolarPunk values who chooses the homelab must reckon with a real contradiction: the choice of sovereignty may be, watt for watt, energetically more expensive than the system they&#39;re trying to exit. I don&#39;t have a clean answer. But ignoring the question would be dishonest.&#xA;&#xA;Söderberg acknowledges that the FOSS movement has produced concrete, undeniable gains - they&#39;re simply not enough, on their own, to subvert the dynamics of informational capitalism. It&#39;s not a critique of the homelab. It&#39;s a critique of the homelab presented as a sufficient revolutionary act.&#xA;&#xA;What happens at eleven at night - and beyond&#xA;&#xA;That night, with the mini-PC on the desk, I kept going. I installed Proxmox. I configured the network. I started bringing up containers one by one. And at some point - three hours had passed, I had three terminals open and was debugging nslcd to centralise LDAP authentication across all the containers - I realised something: I was doing all this because I enjoyed doing it. Not to resist something. Not to advance an ideological agenda. Because there was a problem to solve and solving it gave me satisfaction. Mihaly Csikszentmihalyi describes this state in Flow as total absorption in a task calibrated to your skill level: time expands, attention narrows, awareness of context dissolves. It&#39;s not motivation - it&#39;s something more immediate. Debugging an authentication problem at eleven at night on a system I didn&#39;t have to build is, neuropsychologically, indistinguishable from pleasure. Not from the satisfaction of finishing: from the process itself. And for someone AuDHD like me, hyperfocus lets you lose track of time, and literally escape a world you viscerally despise.&#xA;&#xA;Hadn&#39;t you worked that out yet?&#xA;&#xA;When I finished and closed everything, the satisfaction was still there. 
Along with a slightly uncomfortable awareness: I probably could have used a hosted service, lived just as well, and not lost three hours of a weeknight. But in the meantime I&#39;d understood how PAM works, I&#39;d read documentation I&#39;d never opened before, I&#39;d implemented it on my homelab, I&#39;d learned something I hadn&#39;t known I wanted to know.&#xA;&#xA;And here the circle closes in a slightly unsettling way. Söderberg talks about voluntary open-source work as the production of pure use value - the intrinsic pleasure of making, understanding, building something that works. But it&#39;s exactly this use value that capital then extracts as exchange value: the competence I accumulate debugging LDAP at eleven at night is the same I bring to work the next day, that I put into articles like this one, that I share in communities where others use it to build their own homelabs. Technical pleasure isn&#39;t neutral. It has a production chain. Not always visible, but real.&#xA;&#xA;This is what the homelab is, at least for me: a way of learning that produces, as a side effect, an infrastructure I control. The ideology is there, but it comes after. First comes the pleasure of understanding how something works. And this resolves none of the contradictions I&#39;ve described above - it leaves them all standing, makes them stranger. Am I resisting capitalism, or just cultivating an expensive hobby with a political aesthetic?&#xA;&#xA;The hacker ethic&#xA;&#xA;The word &#34;hacker&#34; has had a bad press for decades. In Nineties news bulletins it meant hooded criminal; in the security industry&#39;s jargon it became a marketing term to prepend to anything. Neither has much to do with the word&#39;s historical meaning. 
Steven Levy, in Hackers: Heroes of the Computer Revolution, reconstructs the culture that formed around MIT and Stanford laboratories in the Sixties: a community of programmers for whom code was an aesthetic object, access to information a moral principle, and technical competence the only legitimate hierarchy. The principles Levy identifies as the &#34;hacker ethic&#34; are precise: access to computers - and to anything that can teach you how the world works - should be unlimited and total. All information should be free. Decentralised systems are preferable to centralised ones. Hackers should be judged by what they produce, not by credentials, age, race, or position. You can create art and beauty with a computer.&#xA;&#xA;It&#39;s not a political manifesto in the traditional sense. It&#39;s something more visceral - a disposition toward the world, a way of standing before a system you don&#39;t yet understand: the correct response is to dismantle it, understand how it works, and put it back together better than before.&#xA;&#xA;Pekka Himanen, in The Hacker Ethic and the Spirit of the Information Age - with a preface by Linus Torvalds and an afterword by Manuel Castells, which already says something about the project&#39;s ambition - performs a more explicit theoretical operation. He constructs the hacker ethic in direct opposition to the Protestant work ethic described by Max Weber: where Weber saw work as duty, discipline as virtue, and leisure as the absence of production, Himanen identifies in the hacker a figure who works for passion, considers play an integral part of work, and rejects the sharp separation between productive time and free time. The hacker doesn&#39;t work for money - money is a side effect, when it arrives. They work because the problem is interesting. Because the elegant solution has value in itself. Because understanding how something works is, in itself, sufficient.&#xA;&#xA;  &#34;Hacker activity is also joyful. 
It often has its roots in playful explorations. Torvalds has described, in messages on the Net, how Linux began to expand from small experiments with the computer he had just acquired. In the same messages, he has explained his motivation for developing Linux by simply stating that &#39;it was/is fun working on it.&#39; Tim Berners-Lee, the man behind the Web, also describes how this creation began with experiments in linking what he called &#39;play programs.&#39; Wozniak relates how many characteristics of the Apple computer &#39;came from a game, and the fun features that were built in were only to do one pet project, which was to program … [a game called] Breakout and show it off at the club.&#39;&#34;&#xA;&#xA;Recognise something? I do. Those three hours debugging nslcd at eleven at night weren&#39;t work in the Weberian sense - nobody was paying me, nobody had asked me to do it, there was no corporate objective to meet. They were hacking in the precise sense Levy and Himanen describe: exploration motivated by curiosity, with the infrastructure as an object of study as well as utility. The homelab is, culturally, a direct expression of the hacker ethic. It&#39;s no coincidence that homelab communities and open source communities overlap almost perfectly, sharing the same language, the same platforms, the same values.&#xA;&#xA;But here, as elsewhere in this article, the story gets complicated.&#xA;&#xA;The hacker ethic promises a pure meritocracy: you&#39;re judged by what you can do, not by who you are. It&#39;s an attractive idea. It&#39;s also, in practice, a partial fiction. Technical meritocracy presupposes that everyone starts from the same point - that skills are accessible to anyone who truly wants to acquire them, that the time to acquire them is equitably distributed, that mentorship networks and learning resources are available regardless of context. 
The homelab as hacker practice inherits both things: the genuine quality of curiosity as a driver, and structural exclusivity as an undeclared side effect. The pleasure of dismantling a system to understand how it works is real and shouldn&#39;t be devalued. But that pleasure is available, in practice, to those who already have the ticket to get in.&#xA;&#xA;Conclusions&#xA;&#xA;The MINISFORUM runs, alongside the other &#34;electronic gizmos,&#34; on a rack next to my armchair - the one where, at the end of the day, I indulge my guilty pleasure of reading a book in the company of my cats. Proxmox, the Tor relay, the Nextcloud server, the ZFS NAS, the small server running the LLM models I experiment with, and the services that let me have something resembling digital sovereignty within the limits of what&#39;s possible. The contradictions I&#39;ve described don&#39;t get resolved. They&#39;re held together, with difficulty, the way any intellectually complex position on a complex system is held together.&#xA;&#xA;The first: the market that made the accessible homelab possible is the same market from which the homelab is supposed to emancipate us. If this explosion of cheap, efficient mini-PCs hadn&#39;t happened - if capitalism hadn&#39;t decided to build exactly what we wanted - how many of us would have taken the same path? How much of our &#34;ethical choice&#34; depends on the existence of products designed and sold precisely for us?&#xA;&#xA;The second: does incorporated resistance really get defused, or does it remain resistance even when someone profits from it? Boltanski and Chiapello describe the incorporation mechanism, but they don&#39;t argue that critique loses all efficacy in the process. Perhaps the homelab is simultaneously a product of the system and a real, if partial, form of withdrawal from it. 
The two things aren&#39;t mutually exclusive.&#xA;&#xA;The third: if digital autonomy requires decades of accumulated competences, enough spare time to use them, and enough money to buy the hardware, are we building a democratic alternative? Or are we building an exclusive club with a rebellious aesthetic, reproducing the same hierarchies of privilege it claims to be fighting?&#xA;&#xA;The fourth: if your homelab, watt for watt, consumes more than the cloud you reject, are you building digital sovereignty - or are you just externalising the problem, shifting it from data surveillance to energy impact?&#xA;&#xA;I don&#39;t know. But at least I know where my data is.&#xA;&#xA;Fun Fact&#xA;&#xA;This article was written in Markdown using a Flatnotes instance running as a CT container on Proxmox, while listening to a symphonic metal playlist served by Navidrome - another CT container - pulling ogg files from a ZFS NAS over an NFS share. The cited books were in epub on Calibre Web. In the background, Nextcloud on a Raspberry Pi 4 was syncing and backing everything up. Spelling errors were corrected by Qwen2.5, a locally-run LLM model. All of this from a laptop running Linux.&#xA;&#xA;Coincidence? I think not.&#xA;&#xA;&lt;a href=&#34;https://remark.as/p/jolek78/reflections-on-an-impossible-escape-from-capitalism&#34;&gt;Discuss...&lt;/a&gt;&#xA;&#xA;#Homelab #SelfHosted #SurveillanceCapitalism #Privacy #OpenSource #HackerEthic #SolarPunk #DigitalSovereignty #FOSS #Linux&#xA;&#xA;&lt;div class=&#34;center&#34;&gt;&lt;a href=&#34;https://fosstodon.org/@jolek78&#34;&gt;Mastodon&lt;/a&gt; :: &lt;a href=&#34;https://pixelfed.social/jolek78&#34;&gt;Pixelfed&lt;/a&gt; :: &lt;a href=&#34;mailto:jolek78@posteo.net&#34;&gt;Email&lt;/a&gt; :: &lt;a href=&#34;https://app.cinny.in/login/envs.net/#/user/@jolek78:envs.net&#34;&gt;Element&lt;/a&gt;&lt;/div&gt;]]&gt;</description>
      <content:encoded><![CDATA[<p>It was an ordinary Tuesday evening. The package had arrived by courier that morning, but I&#39;d only opened it after dinner, with that silent ceremony I perform every time new hardware arrives – as if opening a box quickly were a form of disrespect toward the object. Inside was a MINISFORUM UM690L. Small, almost ridiculously small. A Ryzen 9 6900HX in a form factor that fit in the palm of a hand. I put it on the desk and looked at it. Looked at it again. And then something uncomfortable occurred to me. I had ordered it from a Chinese retailer, with a credit card, through a completely traceable payment infrastructure, from one of the most centralised and surveilled commercial ecosystems in existence. To build a homelab that would let me escape centralised, surveilled ecosystems.</p>



<p>The funny thing – funny in the sense that it makes you laugh, but badly – is that I&#39;m not alone. Every day, somewhere in the world, someone orders a mini-PC, a Raspberry Pi, a Mikrotik managed switch, with the declared goal of taking back control of their digital life. They order it on Alibaba, pay with PayPal, wait for the courier. And they see nothing strange in any of this, because the contradiction has become so structural it&#39;s turned invisible. This article is an attempt to make it visible again. Without easy solutions, because I don&#39;t have any. When did I ever?</p>

<h2 id="the-homelab-promise">The homelab promise</h2>

<p>When, in 2019, I began self-hosting practically everything – Nextcloud, Jellyfin, Navidrome, FreshRSS, Open WebUI and about twenty-five other services across roughly twenty Docker containers on Proxmox LXC – I did it with a precise motivation: I wanted to know where my data lived, who could read it, and to have the option of switching it off myself if I ever felt like it. Not when a company decides to cancel a service, not when someone else changes the licensing terms. Me. This came after a long period of reflection on myself, on the work I was doing and still do, and on the technological society I live in. It&#39;s an ideological choice before it&#39;s a technical one. Technology as a tool of autonomy rather than control; infrastructure as something you own rather than something that owns you. I hope no one is alarmed when I say that some of these reflections began, in part, with reading Theodore Kaczynski&#39;s Manifesto, before naturally moving on to more authoritative sources. Yes, I&#39;m eccentric, but not quite that much.</p>

<p>When you pay a subscription to a cloud service, the transaction doesn&#39;t end the moment you authorise the payment. Shoshana Zuboff, in <em>The Age of Surveillance Capitalism</em>, calls this mechanism <strong>behavioral surplus</strong>: the behavioural data extracted beyond what&#39;s needed to provide the service, then resold as predictive raw material.</p>

<blockquote><p>“Under the regime of surveillance capitalism, however, the first text does not stand alone; it trails a shadow close behind. The first text, full of promise, actually functions as the supply operation for the second text: the shadow text. Everything that we contribute to the first text, no matter how trivial or fleeting, becomes a target for surplus extraction. That surplus fills the pages of the second text. This one is hidden from our view: &#39;read only&#39; for surveillance capitalists. In this text our experience is dragooned as raw material to be accumulated and analyzed as means to others&#39; market ends. The shadow text is a burgeoning accumulation of behavioral surplus and its analyses, and it says more about us than we can know about ourselves.”</p></blockquote>

<p>You&#39;re not the customer of the system – you&#39;re its product. Your habits, your schedules, your preferences, your hesitations before clicking on something: all of it is collected, modelled, sold. The transaction isn&#39;t monthly; it&#39;s continuous, invisible, and never ends as long as you use the service. With hardware, in principle, the transaction is one-off: you buy, you pay, it&#39;s done, it&#39;s yours. The drive is in your room, not on a server subject to government requests, security breaches, or business decisions that have nothing to do with you but affect your access to those services. This distinction – between a tool you use and a system that uses you – is the real stake of the homelab. It&#39;s not about saving money, it&#39;s not about performance. It&#39;s about who controls what.</p>

<p>The problem is that building this infrastructure requires hardware, time, knowledge, and resources. The hardware comes from somewhere; the time, knowledge, and energy come from a privilege not granted to everyone.</p>

<h2 id="the-market-i-hadn-t-seen">The market I hadn&#39;t seen</h2>

<p>Search “mini PC homelab” on any marketplace. What you find is a manufacturing ecosystem that has exploded over the last five years in ways I honestly didn&#39;t expect.</p>

<p>MINISFORUM, Beelink, Trigkey, Geekom, GMKtec. Zimaboard, with its single-board aesthetic designed explicitly for people who want home racks. Raspberry Pi and the galaxy of clones – Orange Pi, Rock Pi, Banana Pi. Mikrotik managed switches at accessible prices. 1U rack cases to mount under a desk. M.2 NVMe SSDs with TBW calculated for small server workloads. Silent PSUs designed to run 24/7. A market built from scratch that exists precisely because there&#39;s a community of people who want to run servers at home. r/homelab and r/selfhosted on Reddit have approximately 2.8 and 1.7 million subscribers respectively – publicly verifiable numbers, and growing. YouTube is full of dedicated channels. There&#39;s an entire attention economy built around “escaping” the attention economy.</p>

<p>But it&#39;s worth asking: who built this market, and why. MINISFORUM and Beelink don&#39;t exist out of ideological sympathy toward the homelab movement. They exist because they identified a profitable segment and served it with industrial precision. Kate Crawford, in <em>Atlas of AI</em>, documents how technological supply chains follow niche demand with the same efficiency they follow mass demand: factories in Guangdong optimise production lines not for a worldview, but for a margin. The fact that the resulting product also satisfies an ideological need is, from the producer&#39;s perspective, irrelevant.</p>

<blockquote><p>“The Victorian environmental disaster at the dawn of the global information society, shows how the relations between technology and its materials, environments, and labor practices are interwoven. Just as Victorians precipitated ecological disaster for their early cables, so do contemporary mining and global supply chains further imperil the delicate ecological balance of our era.”</p></blockquote>

<p>The mechanism had already been described with theoretical precision in 1999 by Luc Boltanski and Ève Chiapello in <em>The New Spirit of Capitalism</em>. Their thesis: capitalism is never defeated by critique – it&#39;s incorporated. When a critique becomes widespread enough, the system absorbs it and transforms it into a market segment. The artistic critique of the Sixties – autonomy, authenticity, rejection of standardisation – became the marketing of the creative economy. The critique of digital centralisation – sovereignty, privacy, control – has become an online catalogue to browse.</p>

<p>Resistance has become a market segment. Every time someone buys a UM690L to stop paying subscriptions to services they don&#39;t control, a factory in Guangdong sells a UM690L. Capitalism hasn&#39;t been defeated – it has shifted (at least for a small slice of the population: nerds, hackers) the extraction point from subscriptions to hardware.</p>

<h2 id="the-accumulation-syndrome">The accumulation syndrome</h2>

<p>There&#39;s a further level, more ridiculous and more personal, that homelab communities never openly discuss but that anyone with a homelab recognises immediately. The Raspberry Pi 4 bought “for a project.” The old ThinkPad kept because “you never know.” The 4TB drive recovered from a decommissioned NAS – “it might come in handy.” The second-hand switch bought on eBay for eighteen quid because it was cheap and might be useful. The cables, the cables, the cables.</p>

<p>r/homelab has a term for this: <strong>just-in-case hardware</strong>. It&#39;s the hardware of the imaginary future, of projects that exist only in your head, of configurations you&#39;ll finally test one day – one day. In the meantime it occupies a shelf, draws power on standby, and generates a diffuse sense of possibility that&#39;s indistinguishable from the most classic consumerism. The underlying psychological mechanism has a precise name: <strong>compensatory consumption</strong> – purchasing as a response to a perceived loss of autonomy or control. You buy hardware because buying hardware gives you the feeling of recovering agency over something. The aesthetic differs from traditional consumerism – no luxury logos, no recognisable status symbols – but the mechanism is identical.</p>

<p>That said, there&#39;s a partially honest answer to all of this: the second-hand and refurbished market. The ThinkPad X230 on eBay, the Dell R720 server decommissioned from a data centre, the drive from someone who upgraded their NAS. Hardware that would otherwise go to landfill, with its lifespan extended, without generating new production demand. It&#39;s closer to repair ethics than compulsive purchasing. But it has its own internal contradiction: it requires even more technical competence than buying new – knowing how to evaluate wear, diagnose an unknown component, deal with ten-year-old drivers. The barrier to entry rises further. And the refurbished market is itself now an organised commercial sector, with its own margins, platforms, and pricing logic. It&#39;s not a clean way out. It&#39;s a less dirty one.</p>

<p>And then there&#39;s the energy question, which is usually ignored in homelab discussions but is actually the most uncomfortable of all – uncomfortable enough to deserve a fuller treatment later. For now let&#39;s just say: every machine on your shelf that “draws power on standby” is a line item in the energy bill that the homelab movement rarely budgets for.</p>

<h2 id="it-s-not-for-everyone-and-it-shouldn-t-be-that-way">It&#39;s not for everyone. And it shouldn&#39;t be that way.</h2>

<p>There&#39;s a second level of the paradox that is even more uncomfortable than the first. Building a homelab requires money – relatively little, but it requires it. It requires physical space. It requires a decent internet connection. And it requires time. A lot of time. Not installation time – that&#39;s measurable, finite. The learning time that precedes everything else. To reach the point where you can set up a working infrastructure with Proxmox, LXC containers, centralised authentication, reverse proxy, automated backups – you already need to have spent years understanding how Linux works, how to reason about networks and permissions, how to read a log. I&#39;ve been at this since Red Hat in 1997, and it took me nearly thirty years to get where I am. I should know this by now. And yet it still catches me off guard.</p>

<p>That time didn&#39;t fall from the sky. It&#39;s time I was able to dedicate because I had a certain kind of job, a certain stability, a certain amount of mental energy left at the end of the day. It&#39;s time belonging to the comfortable middle class with a stable, or near-stable, position – not someone working three warehouse shifts a week. Passion isn&#39;t enough.</p>

<p>Johan Söderberg documents this in <em>Hacking Capitalism</em>: the FOSS movement was born as resistance to capitalism, but reproduces within itself hierarchies of skill and merit that make it structurally exclusive. Freedom is technically available to anyone, but effective access requires resources distributed in anything but a democratic fashion. Söderberg goes further than simply observing exclusivity: voluntary open-source labour produces use value – working software, documentation, community support – which capital then extracts as exchange value without compensating those who produced it. Red Hat builds a billion-dollar company on a kernel written largely by volunteers. It&#39;s not just that not everyone can enter: it&#39;s that those who do often work for someone without knowing it. The homelab inherits this problem and amplifies it.</p>

<blockquote><p>“The narrative of orthodox historical materialism corresponds with some very popular ideas in the computer underground. It is widely held that the infinite reproducibility of information made possible by computers (forces of production) has rendered intellectual property (relations of production, superstructure) obsolete. The storyline of post-industrial ideology is endorsed but with a different ending. Rather than culminating in global markets, technocracy and liberalism, as Daniel Bell and the futurists would have it; hackers are looking forward to a digital gift economy and high-tech anarchism.”</p></blockquote>

<p>This isn&#39;t a peculiarity of the homelab movement: it&#39;s a recurring structure across every technological wave. Langdon Winner, in his influential essay <em>Do Artifacts Have Politics?</em>, argued that technological choices are never neutral – they embed power structures, distribute access non-randomly. Amateur radio in the 1920s, the personal computer in the 1980s, the internet in the 1990s: every time the promise was democratising, every time the actual distribution followed pre-existing lines of privilege. Not through malice, but through structure.</p>

<p>The irony is this: those who would most need digital autonomy – those who can&#39;t afford subscriptions, who live under governments that surveil communications, who are most exposed to data collection – are exactly those least likely to be able to build a homelab. Not for lack of interest or intelligence. For lack of time, money, and years of privileged exposure to technology.</p>

<p>Homelab communities don&#39;t usually talk about this. They talk about which mini-PC to buy, how to optimise power consumption, which distro to use as a base. The conversation about structural exclusivity exists, but at the margins – in Jacobin, in Logic Magazine, in EFF activism – while the centre of the discourse remains impermeable. It&#39;s not that no one talks about it: it&#39;s that the peripheries talk about it, and peripheries don&#39;t set the agenda. All this conversation takes place in a room to which not everyone has a ticket. And nobody inside seems to find that particularly problematic.</p>

<h2 id="a-technological-cosplay">A technological cosplay?</h2>

<p>So is the whole thing a joke? Is the homelab just anti-capitalist cosplay while you continue to fund the same supply chains? In part, yes.</p>

<p>The UM690L was designed in China, assembled in China, shipped via container on ships burning bunker fuel. Global maritime transport accounts for roughly 2.5% of global CO₂ emissions – a share the IMO has been trying to reduce for years with slow progress and continuously deferred targets. Then: distributed via Alibaba, paid by credit card. Every piece of technological hardware carries an extractive chain that begins in lithium mines in Bolivia and cobalt mines in the Democratic Republic of Congo, passes through factories in Guangdong, and ends in electronic waste processing centres in Ghana. The hardware travels that supply chain exactly like any other consumer device. And hardware has a lifecycle. In five years the UM690L will be too slow, or it&#39;ll break, or something will come out with energy efficiency far too good to ignore. And I&#39;ll buy again. The mini-PC market for homelabs depends on the obsolescence of previous purchases – exactly like any other consumer market.</p>

<p>The critique of capitalism, when widespread enough, isn&#39;t suppressed – it gets incorporated. The system absorbs the values of resistance and transforms them into a market segment. Autonomy becomes a selling point. Decentralisation becomes a brand. The rebel who wanted to exit the system finds themselves funding a new vertical of the same system, convinced they&#39;re making an ethical choice.</p>

<h2 id="the-other-side">The other side</h2>

<p>But there&#39;s a structural difference that would be dishonest to ignore.</p>

<p>When you pay a subscription to a cloud service, the cost isn&#39;t just the monthly fee. It&#39;s the ongoing cession of data, behaviours, habits. It&#39;s Zuboff&#39;s behavioral surplus: you&#39;re not using a service – you&#39;re being used as raw material to train models, build profiles, sell advertising. The transaction never ends, in ways you often can&#39;t see and can&#39;t opt out of as long as you use the service.</p>

<p>With hardware, the transaction ends. Your data stays on a physical drive in your room, not on a server subject to government requests, breaches, or business decisions that have nothing to do with you but impact your life. The software running on it – Proxmox, Debian, Nextcloud, Jellyfin – is open source and yours: if something changes in a way you don&#39;t accept, you can leave. This resilience has real value – but it&#39;s worth noting it&#39;s asymmetric resilience. It works for those who have the skills to exercise it. For those who don&#39;t, the theoretical portability of your own data from Nextcloud to something else requires exactly the same skills already identified as a barrier to entry. The freedom to leave is real. Access to that freedom, much less so.</p>

<p>And then there&#39;s the energy question I&#39;ve been putting off long enough. The major hyperscalers – AWS, Google, Azure – operate with a <strong>PUE</strong> (Power Usage Effectiveness) between 1.1 and 1.2. For every watt of useful computation, they dissipate barely 0.1-0.2 watts in heat and infrastructure. They have enormous economies of scale, optimised industrial cooling, significant renewable energy investment, and above all: their servers run at very high utilisation rates. Almost always busy.</p>

<p>A homelab works radically differently. The machine runs 24/7 even when it&#39;s doing nothing – and for most of the time, it&#39;s doing nothing. Navidrome serving three requests a day, FreshRSS fetching every hour, an LDAP container listening without receiving connections. You&#39;re paying the energy cost of the infrastructure regardless of usage. The implicit PUE of a homelab, honestly calculated as the ratio of total consumption to actual workload, is far worse than a data centre&#39;s. IEA data (<em>Data Centres and Data Transmission Networks</em>, updated annually) shows that major cloud providers progressively improve energy efficiency through economies of scale that no individual homelab can replicate.</p>
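<p>The arithmetic behind this comparison can be made concrete with a short sketch. Every number below is an illustrative assumption, not a measurement – the point is the shape of the ratio, not the values.</p>

```python
# Compare a data centre's PUE with a homelab's effective
# "watts drawn per watt of useful work" ratio.
# All figures are illustrative assumptions, not measurements.

def pue(total_facility_watts: float, it_load_watts: float) -> float:
    """Power Usage Effectiveness: total power drawn / power spent on computation."""
    return total_facility_watts / it_load_watts

# Hyperscaler: PUE around 1.1-1.2, servers almost always busy.
datacenter_pue = pue(total_facility_watts=1_200_000, it_load_watts=1_000_000)

# Homelab analogue: a mini-PC drawing ~15 W around the clock,
# but actively serving requests for (say) 5% of the time.
idle_and_busy_watts = 15.0
utilisation = 0.05
effective_homelab_ratio = idle_and_busy_watts / (idle_and_busy_watts * utilisation)

print(f"data centre PUE: {datacenter_pue:.2f}")                    # 1.20
print(f"homelab watts per useful watt: {effective_homelab_ratio:.0f}")  # 20
```

<p>Even if you double or triple the assumed utilisation, the homelab&#39;s ratio stays an order of magnitude worse than the hyperscaler&#39;s – which is the uncomfortable point of this section.</p>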

<p>This doesn&#39;t automatically mean cloud is the ethically correct choice – the problem doesn&#39;t reduce to PUE, and surveillance has costs that aren&#39;t measured in kilowatts. It means that anyone with SolarPunk values who chooses the homelab must reckon with a real contradiction: the choice of sovereignty may be, watt for watt, energetically more expensive than the system they&#39;re trying to exit. I don&#39;t have a clean answer. But ignoring the question would be dishonest.</p>

<p>Söderberg acknowledges that the FOSS movement has produced concrete, undeniable gains – they&#39;re simply not enough, on their own, to subvert the dynamics of informational capitalism. It&#39;s not a critique of the homelab. It&#39;s a critique of the homelab presented as a sufficient revolutionary act.</p>

<h2 id="what-happens-at-eleven-at-night-and-beyond">What happens at eleven at night – and beyond</h2>

<p>That night, with the mini-PC on the desk, I kept going. I installed Proxmox. I configured the network. I started bringing up containers one by one. And at some point – three hours had passed, I had three terminals open and was debugging nslcd to centralise LDAP authentication across all the containers – I realised something: I was doing all this because I enjoyed doing it. Not to resist something. Not to advance an ideological agenda. Because there was a problem to solve and solving it gave me satisfaction. Mihaly Csikszentmihalyi describes this state in <em>Flow</em> as total absorption in a task calibrated to your skill level: time expands, attention narrows, awareness of context dissolves. It&#39;s not motivation – it&#39;s something more immediate. Debugging an authentication problem at eleven at night on a system I didn&#39;t have to build is, neuropsychologically, indistinguishable from pleasure. Not from the satisfaction of finishing: from the process itself. And for someone AuDHD like me, hyperfocus lets you lose track of time, and literally escape a world you viscerally despise.</p>
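<p>For readers wondering what “centralising LDAP authentication across all the containers” amounts to in practice: the heart of it is a small nslcd configuration repeated in each container. This is a minimal sketch with placeholder server URI, base DN, and credentials – not the actual setup described above.</p>

```
# /etc/nslcd.conf – minimal sketch; URI, base DN and credentials are placeholders
uid nslcd
gid nslcd
uri ldap://ldap.lan.example
base dc=lan,dc=example
# read-only bind used for user/group lookups (placeholder values)
binddn cn=readonly,dc=lan,dc=example
bindpw changeme
```

<p>Paired with <code>passwd: files ldap</code> (and matching <code>group</code> and <code>shadow</code> lines) in <code>/etc/nsswitch.conf</code>, plus an LDAP module in the PAM stack, every container resolves the same users from a single directory – which is exactly the property worth three hours of late-night debugging.</p>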

<p>Hadn&#39;t you worked that out yet?</p>

<p>When I finished and closed everything, the satisfaction was still there. Along with a slightly uncomfortable awareness: I probably could have used a hosted service, lived just as well, and not lost three hours of a weeknight. But in the meantime I&#39;d understood how PAM works, I&#39;d read documentation I&#39;d never opened before, I&#39;d implemented it on my homelab, I&#39;d learned something I hadn&#39;t known I wanted to know.</p>

<p>And here the circle closes in a slightly unsettling way. Söderberg talks about voluntary open-source work as the production of pure use value – the intrinsic pleasure of making, understanding, building something that works. But it&#39;s exactly this use value that capital then extracts as exchange value: the competence I accumulate debugging LDAP at eleven at night is the same I bring to work the next day, that I put into articles like this one, that I share in communities where others use it to build their own homelabs. Technical pleasure isn&#39;t neutral. It has a production chain. Not always visible, but real.</p>

<p>This is what the homelab is, at least for me: a way of learning that produces, as a side effect, an infrastructure I control. The ideology is there, but it comes after. First comes the pleasure of understanding how something works. And this resolves none of the contradictions I&#39;ve described above – it leaves them all standing, makes them stranger. Am I resisting capitalism, or just cultivating an expensive hobby with a political aesthetic?</p>

<h2 id="the-hacker-ethic">The hacker ethic</h2>

<p>The word “hacker” has had a bad press for decades. In Nineties news bulletins it meant hooded criminal; in the security industry&#39;s jargon it became a marketing term to prepend to anything. Neither has much to do with the word&#39;s historical meaning. Steven Levy, in <em>Hackers: Heroes of the Computer Revolution</em>, reconstructs the culture that formed around MIT and Stanford laboratories in the Sixties: a community of programmers for whom code was an aesthetic object, access to information a moral principle, and technical competence the only legitimate hierarchy. The principles Levy identifies as the “hacker ethic” are precise: access to computers – and to anything that can teach you how the world works – should be unlimited and total. All information should be free. Decentralised systems are preferable to centralised ones. Hackers should be judged by what they produce, not by credentials, age, race, or position. You can create art and beauty with a computer.</p>

<p>It&#39;s not a political manifesto in the traditional sense. It&#39;s something more visceral – a disposition toward the world, a way of standing before a system you don&#39;t yet understand: the correct response is to dismantle it, understand how it works, and put it back together better than before.</p>

<p>Pekka Himanen, in <em>The Hacker Ethic and the Spirit of the Information Age</em> – with a preface by Linus Torvalds and an afterword by Manuel Castells, which already says something about the project&#39;s ambition – performs a more explicit theoretical operation. He constructs the hacker ethic in direct opposition to the Protestant work ethic described by Max Weber: where Weber saw work as duty, discipline as virtue, and leisure as the absence of production, Himanen identifies in the hacker a figure who works for passion, considers play an integral part of work, and rejects the sharp separation between productive time and free time. The hacker doesn&#39;t work for money – money is a side effect, when it arrives. They work because the problem is interesting. Because the elegant solution has value in itself. Because understanding how something works is, in itself, sufficient.</p>

<blockquote><p>“Hacker activity is also joyful. It often has its roots in playful explorations. Torvalds has described, in messages on the Net, how Linux began to expand from small experiments with the computer he had just acquired. In the same messages, he has explained his motivation for developing Linux by simply stating that &#39;it was/is fun working on it.&#39; Tim Berners-Lee, the man behind the Web, also describes how this creation began with experiments in linking what he called &#39;play programs.&#39; Wozniak relates how many characteristics of the Apple computer &#39;came from a game, and the fun features that were built in were only to do one pet project, which was to program … [a game called] Breakout and show it off at the club.&#39;”</p></blockquote>

<p>Recognise something? I do. Those three hours debugging nslcd at eleven at night weren&#39;t work in the Weberian sense – nobody was paying me, nobody had asked me to do it, there was no corporate objective to meet. They were hacking in the precise sense Levy and Himanen describe: exploration motivated by curiosity, with the infrastructure as an object of study as well as utility. The homelab is, culturally, a direct expression of the hacker ethic. It&#39;s no coincidence that homelab communities and open source communities overlap almost perfectly, sharing the same language, the same platforms, the same values.</p>

<p>But here, as elsewhere in this article, the story gets complicated.</p>

<p>The hacker ethic promises a pure meritocracy: you&#39;re judged by what you can do, not by who you are. It&#39;s an attractive idea. It&#39;s also, in practice, a partial fiction. Technical meritocracy presupposes that everyone starts from the same point – that skills are accessible to anyone who truly wants to acquire them, that the time to acquire them is equitably distributed, that mentorship networks and learning resources are available regardless of context. The homelab as hacker practice inherits both things: the genuine quality of curiosity as a driver, and structural exclusivity as an undeclared side effect. The pleasure of dismantling a system to understand how it works is real and shouldn&#39;t be devalued. But that pleasure is available, in practice, to those who already have the ticket to get in.</p>

<h2 id="conclusions">Conclusions</h2>

<p>The MINISFORUM runs, alongside the other “electronic gizmos,” on a rack next to my armchair – the one where, at the end of the day, I indulge my guilty pleasure of reading a book in the company of my cats. Proxmox, the Tor relay, the Nextcloud server, the ZFS NAS, the small server running the LLM models I experiment with, and the services that let me have something resembling digital sovereignty within the limits of what&#39;s possible. The contradictions I&#39;ve described don&#39;t get resolved. They&#39;re held together, with difficulty, the way any intellectually complex position on a complex system is held together.</p>

<p>The first: the market that made the accessible homelab possible is the same market from which the homelab is supposed to emancipate us. If this explosion of cheap, efficient mini-PCs hadn&#39;t happened – if capitalism hadn&#39;t decided to build exactly what we wanted – how many of us would have taken the same path? How much of our “ethical choice” depends on the existence of products designed and sold precisely for us?</p>

<p>The second: does incorporated resistance really get defused, or does it remain resistance even when someone profits from it? Boltanski and Chiapello describe the incorporation mechanism, but they don&#39;t argue that critique loses all efficacy in the process. Perhaps the homelab is simultaneously a product of the system and a real, if partial, form of withdrawal from it. The two things aren&#39;t mutually exclusive.</p>

<p>The third: if digital autonomy requires decades of accumulated competences, enough spare time to use them, and enough money to buy the hardware, are we building a democratic alternative? Or are we building an exclusive club with a rebellious aesthetic, reproducing the same hierarchies of privilege it claims to be fighting?</p>

<p>The fourth: if your homelab, watt for watt, consumes more than the cloud you reject, are you building digital sovereignty – or are you just externalising the problem, shifting it from data surveillance to energy impact?</p>

<p>I don&#39;t know. But at least I know where my data is.</p>

<h2 id="fun-fact">Fun Fact</h2>

<p>This article was written in Markdown using a Flatnotes instance running as a CT container on Proxmox, while listening to a symphonic metal playlist served by Navidrome – another CT container – pulling ogg files from a ZFS NAS over an NFS share. The cited books were in epub on Calibre Web. In the background, Nextcloud on a Raspberry Pi 4 was syncing and backing everything up. Spelling errors were corrected by Qwen2.5, a locally-run LLM model. All of this from a laptop running Linux.</p>

<p>Coincidence? I think not.</p>

<p><a href="https://remark.as/p/jolek78/reflections-on-an-impossible-escape-from-capitalism">Discuss...</a></p>

<p><a href="https://jolek78.writeas.com/tag:Homelab" class="hashtag"><span>#</span><span class="p-category">Homelab</span></a> <a href="https://jolek78.writeas.com/tag:SelfHosted" class="hashtag"><span>#</span><span class="p-category">SelfHosted</span></a> <a href="https://jolek78.writeas.com/tag:SurveillanceCapitalism" class="hashtag"><span>#</span><span class="p-category">SurveillanceCapitalism</span></a> <a href="https://jolek78.writeas.com/tag:Privacy" class="hashtag"><span>#</span><span class="p-category">Privacy</span></a> <a href="https://jolek78.writeas.com/tag:OpenSource" class="hashtag"><span>#</span><span class="p-category">OpenSource</span></a> <a href="https://jolek78.writeas.com/tag:HackerEthic" class="hashtag"><span>#</span><span class="p-category">HackerEthic</span></a> <a href="https://jolek78.writeas.com/tag:SolarPunk" class="hashtag"><span>#</span><span class="p-category">SolarPunk</span></a> <a href="https://jolek78.writeas.com/tag:DigitalSovereignty" class="hashtag"><span>#</span><span class="p-category">DigitalSovereignty</span></a> <a href="https://jolek78.writeas.com/tag:FOSS" class="hashtag"><span>#</span><span class="p-category">FOSS</span></a> <a href="https://jolek78.writeas.com/tag:Linux" class="hashtag"><span>#</span><span class="p-category">Linux</span></a></p>

<div class="center"><a href="https://fosstodon.org/@jolek78">Mastodon</a> :: <a href="https://pixelfed.social/jolek78">Pixelfed</a> :: <a href="mailto:jolek78@posteo.net">Email</a> :: <a href="https://app.cinny.in/login/envs.net/#/user/@jolek78:envs.net">Element</a></div>
]]></content:encoded>
      <guid>https://jolek78.writeas.com/reflections-on-an-impossible-escape-from-capitalism</guid>
      <pubDate>Sun, 05 Apr 2026 15:46:47 +0000</pubDate>
    </item>
    <item>
      <title>&#34;Game of Life&#34;: the game that wasn&#39;t a game</title>
      <link>https://jolek78.writeas.com/game-of-life-the-game-that-wasnt-a-game?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Do you remember Flash games? The ones that ran in the browser before Adobe decided to kill everything in 2020? I do. There were sites - Miniclip, Newgrounds - that were a kind of uncurated digital playground, pages with black backgrounds and popups everywhere, where you could spend hours without really understanding what you were doing. You complain about brainrot? Maybe you don&#39;t remember the nineties web and that girl with the wart singing the polka... Anyway, it was one of those unremarkable afternoons. I don&#39;t remember the exact site - one of those places with incomprehensible URLs like &#34;geocities.com/~someone/games&#34; and graphics that hurt your eyes. I stumbled onto something strange. The Adobe Flash logo hadn&#39;t even finished loading, there were no instructions, no &#34;Play&#34; button. Just a grid of black and white cells changing, generation after generation, apparently at random.&#xA;&#xA;!--more--&#xA;&#xA;I waited. I thought it was still loading. Nothing. The grid kept changing. I tried clicking on the cells. Nothing. I tried pressing keys on the keyboard. Nothing. I watched for a few minutes, waiting for something to happen - a game over, a score, an objective. Nothing. It wasn&#39;t a game. There was nothing to &#34;play.&#34; It was like watching rain fall, but digital. Hundreds and hundreds of pixels kept appearing and disappearing. I got bored, closed the tab. Years later - I don&#39;t remember how many, a lot - I happened to read an article on Wikipedia. The title was &#34;Conway&#39;s Game of Life.&#34; And the penny dropped.&#xA;&#xA;What I had seen that day wasn&#39;t a game, or at least not in the traditional sense. It was a simulation. And that simulation, with four rules that even a child could understand, was doing something that none of those rules explicitly anticipated: producing complexity. Order from chaos. Structures that emerged, grew, interacted. 
Patterns that moved across the grid as if alive. And then - and this is where I had my epiphany - those structures could simulate an electronic circuit. Any electronic circuit. Theoretically, any computation that a Turing machine can perform. Four rules, binary cells. In essence: a universal computing machine.&#xA;&#xA;Welcome to the story of how the English mathematician John Horton Conway, trying to build the simplest possible toy, accidentally built one of the most powerful demonstrations of how complexity can emerge from nothing. Dear creationists - yes, this one&#39;s for you too.&#xA;&#xA;Von Neumann had a question&#xA;&#xA;Before Conway, there was Von Neumann. John von Neumann - Bond, James... okay, I&#39;ll stop - was already asking, back in the 1940s, a question that sounds almost philosophical: can a machine build a copy of itself? It wasn&#39;t an abstract question. Von Neumann had already demonstrated theoretically that it was possible. His model - a two-dimensional &#34;cellular automaton&#34; - proved the principle. It worked like this: a configuration of cells on a grid contains within itself the &#34;instructions&#34; (encoded as the arrangement of active and inactive cells) to replicate itself. The structure reads these instructions, manipulates the surrounding cells, and generates an identical copy of itself in another area of the grid. The copy contains the same instructions, so it can repeat the process indefinitely. It&#39;s every engineer&#39;s dream (or nightmare, depending on your perspective): a machine that reproduces without external intervention.&#xA;&#xA;The problem was the monstrous complexity of the system. Von Neumann&#39;s model required 29 different states per cell - twenty-nine - and a set of rules that filled pages and pages of algebra. It was functional, demonstrably correct, but it was a monster. Nobody could really grasp it at a glance, let alone implement it and study it in practice. 
It was like having the perfect recipe for a dish, but with 300 rare ingredients and 50 steps requiring laboratory equipment.&#xA;&#xA;In 1962, the English mathematician John Horton Conway - professor at Cambridge, specialising in group theory and other things that sound complicated - decided to do something apparently simple. He looked for the most minimal possible version of Von Neumann&#39;s idea. A system of rules poor enough to be understandable by anyone, but rich enough to allow complex behaviour and, eventually, self-reproduction. It took him years. Not weeks, not months. Years. From 1962 to 1970. Eight years of proposals, tests, failures, adjustments. Every ruleset was analysed: too ordered? Everything converges to fixed configurations and the system dies. Too chaotic? Total noise, no structures. Conway was looking for a precise critical point: enough stability to allow persistent forms, enough instability to allow unpredictable and interesting behaviour.&#xA;&#xA;He was obsessed with this balance. He tested it on graph paper (computers weren&#39;t yet fast enough to do it quickly), with groups of students, by hand, generation after generation. Painstaking work. Or the work of a madman, depending on how you look at it.&#xA;&#xA;By 1970 he had found what he was looking for. He called it the &#34;Game of Life.&#34; Martin Gardner, who had a monthly column in Scientific American called &#34;Mathematical Games,&#34; presented it in October of that year. And within weeks it became one of the most famous objects in the entire history of recreational mathematics and computer science.&#xA;&#xA;The four rules (and why each one matters)&#xA;&#xA;The system is embarrassingly simple. You have an infinite two-dimensional grid (in practice: very large). Each cell can be in one of two states: alive (black) or dead (white). Each cell has eight neighbours - the four cardinal directions plus the four diagonals. 
At each generation, all cells simultaneously update their state following four rules:&#xA;&#xA;A live cell with fewer than 2 live neighbours dies - isolation. There isn&#39;t enough interaction to sustain life. It&#39;s loneliness that kills.&#xA;&#xA;A live cell with 2 or 3 live neighbours survives - stability. Local density is just right. There&#39;s enough support, but not too much competition. It&#39;s the point of equilibrium.&#xA;&#xA;A live cell with more than 3 live neighbours dies - overpopulation. Too much competition for resources (it&#39;s a metaphor, but it works). Too much crowding suffocates.&#xA;&#xA;A dead cell with exactly 3 live neighbours comes to life - reproduction. Three live cells create the conditions to generate new life. Not 2, not 4. Exactly 3.&#xA;&#xA;That&#39;s it. Nothing else. No exceptions, no special conditions, no &#34;if this cell is particular then...&#34;. Four rules, applied uniformly to every cell, every generation, forever.&#xA;&#xA;Now stop for a moment and think about this: where is the complexity in these rules? Where does it say that structures must emerge? Where does it say that patterns can exist that move, oscillate, interact in non-trivial ways? Nowhere. The rules only talk about individual cells and their immediate neighbours. Nothing more. And yet complexity emerges. It emerges necessarily, as an inevitable consequence of that subtle balance Conway spent eight years searching for. It isn&#39;t programmed into the rules. It&#39;s an emergent property of the system. And this is the point that made the penny drop for me, years after that grid: complexity doesn&#39;t need to be designed. It can simply happen, if the conditions are right.&#xA;&#xA;The taxonomy&#xA;&#xA;In the first year after publication in Scientific American, readers - programmers, mathematicians, students, enthusiasts - flooded the magazine with discoveries. 
It had become a viral phenomenon, in an era when &#34;viral&#34; still meant photocopies and letters sent by post. And very quickly a natural classification of structures emerged.&#xA;&#xA;Still lifes - completely stable patterns that never change. The simplest is the &#34;block&#34;: a 2×2 square. In the block, every cell has exactly 3 neighbours - the other three cells of the square. Each one survives because it has exactly 3 live neighbours. The pattern doesn&#39;t change, doesn&#39;t move. It&#39;s just there, motionless, forever. Other examples: the &#34;beehive,&#34; the &#34;loaf&#34; - stable forms that once formed remain identical.&#xA;&#xA;Oscillators - patterns that change but return to their initial configuration after a finite number of generations. The simplest is the &#34;blinker&#34;: three cells in a horizontal line. In the next generation they become three cells in a vertical line. Then back to horizontal. Then vertical. Period 2, infinite oscillation. Other more complex examples: the &#34;toad&#34; (period 2), the &#34;beacon&#34; (period 2), the &#34;pulsar&#34; (period 3, one of the most visually beautiful).&#xA;&#xA;And then there&#39;s the one. The glider, my favourite - illustrated in detail on MathWorld.&#xA;&#xA;Five cells, arranged in a specific configuration that looks almost like a wonky little triangle. And this thing - this small five-cell structure - moves. Not in the sense that the cells physically shift around the grid (the cells are fixed, remember). In the sense that the pattern propagates through space, one cell at a time, diagonally downward to the right (or in any direction, depending on the initial orientation). After four generations, the glider has returned to its original configuration, but shifted one position diagonally. And then it continues. Forever. It crosses the grid indefinitely, unless it meets an obstacle.&#xA;&#xA;And here something starts to change in the way people thought about the system. 
Because a glider isn&#39;t just a pretty pattern to watch. It&#39;s a signal. It&#39;s something that carries information from point A to point B. It has a direction, it has a speed (c/4, where c is the maximum possible speed in the Game of Life, which is one cell per generation), it has persistence.&#xA;&#xA;And if you have a glider, the next question is obvious: can you create something that generates more gliders? The answer arrived in 1970, a few months after the original publication. Bill Gosper - an MIT programmer, one of the first hackers in history - found the &#34;glider gun.&#34; A configuration of 36 cells that, every 30 generations, spits out a new glider. A periodic signal generator. A signal. A periodic source. A precise direction. In a 2D grid with binary cells and four elementary rules. This is where the story is going.&#xA;&#xA;The heart: four rules, one Turing machine&#xA;&#xA;TL;DR: The Game of Life is Turing-complete. This means that, in principle, you can perform any computation that a Turing machine can perform, inside a 2D grid with binary cells and four rules. No processor. No integrated circuits. Just cells being born and dying according to Conway&#39;s four rules.&#xA;&#xA;To understand why the Game of Life is Turing-complete, you need to take a step back on what &#34;Turing-complete&#34; means. Alan Turing, in 1936 (at 24 years old - the age at which I was still playing at being a Wikipedia editor), defined an abstract model of computation: a machine that reads an infinite tape of cells, writes on it, and moves forward or backward, following a finite set of deterministic rules. If a system can simulate any Turing machine - that is, if you can configure it to perform any computation that is computable - that system is Turing-complete. Which means, in practice, that it&#39;s universal from a computational standpoint. 
There is nothing a Turing machine can do that this system cannot do (given enough space and time).&#xA;&#xA;Now back to the Game of Life. We have the glider: a signal that moves. We have the glider gun: a periodic source of signals. But is this enough to build a computer? No. To have a logic circuit you need the fundamental logic gates - AND, OR, NOT. All basic boolean operations. Everything else - addition, multiplication, comparisons, conditional jumps, arbitrary algorithms - is built by combining logic gates.&#xA;&#xA;Logic gates in the Game of Life are implemented by exploiting interactions between gliders. When two gliders intersect, the result depends on their relative configuration, the precise timing of the encounter, the direction of approach. Some combinations cause the two gliders to completely annihilate each other (output: no glider). Others produce new gliders in specific directions (output: one or more gliders). By changing the geometry of the encounter - the exact position of the glider guns that generate them, the timing, the distances - you can build configurations that behave like AND, OR, and NOT gates. The incoming gliders represent the input bits (0 or 1, depending on whether the glider is present or not). The outgoing gliders represent the result of the logical operation.&#xA;&#xA;If you have logic gates, you have combinational circuits. If you have combinational circuits and a memory mechanism (implemented with glider loops and oscillating patterns), you have sequential circuits. And if you have arbitrary sequential circuits, you have a Turing machine.&#xA;&#xA;This isn&#39;t theory. It&#39;s been done. In 2000, Paul Rendell built a functioning Turing machine entirely within the Game of Life - with tape, read/write head, states, transitions. In 2010, a group of researchers led by Paul Chapman took the concept further still and built a complete computer - including a display - that runs the Game of Life... 
inside the Game of Life.&#xA;&#xA;These implementations are, obviously, infinitely slower than a real processor. A single clock cycle requires hundreds or thousands of generations. A simple addition takes billions of steps. But they work. The computation happens, correct, deterministic, verifiable.&#xA;&#xA;But back to the main point. What does all this mean? It means that the grid I was staring at all those years ago - that thing I didn&#39;t understand, that looked like organised noise - had more theoretical computational power than any processor I&#39;ve ever used. Not in terms of speed (that would be ridiculous) but in terms of what can be done.&#xA;&#xA;The biology that isn&#39;t biology (but almost)&#xA;&#xA;Conway never claimed that the Game of Life literally simulated biological life. The rules have nothing to do with DNA, cells, metabolism, evolution. There&#39;s no natural selection, no adaptation. It&#39;s a purely deterministic system where the same initial conditions always produce the same result. Zero stochasticity, zero mutations, zero genetics.&#xA;&#xA;And yet the field of &#34;artificial life&#34; owes an enormous debt to the Game of Life. Because the GoL demonstrated experimentally a principle that before 1970 was more philosophical intuition than concrete proof: biological complexity doesn&#39;t require an intelligent designer. It can emerge from simple rules, applied uniformly, with nothing more than local interactions between identical elements. Self-organisation - structures that emerge without central coordination. Competition for space - patterns that survive are those satisfying the conditions of survival (the four rules). Emergence of hierarchical structures - from the single glider (elementary pattern) to the glider gun (generator) to logic circuits (systems of patterns that interact in a coordinated way).&#xA;&#xA;It&#39;s not biological evolution in the Darwinian sense. 
But it&#39;s the same underlying principle: from simplicity, complexity emerges, without that complexity needing to be explicitly encoded in the fundamental rules.&#xA;&#xA;The risk here is always falling into superficial analogies that don&#39;t hold up to analysis. The GoL doesn&#39;t simulate real ecosystems. The &#34;cells&#34; aren&#39;t biological cells. There&#39;s no metabolism, no sexual reproduction, no genetic variability. The parallel should be taken for what it is: an illustrative case of a more general principle, not a replica of real life. But it remains true that when you watch a glider gun fire gliders indefinitely, or when you see complex patterns emerging from random initial configurations, it&#39;s hard not to think: &#34;this looks alive.&#34; It isn&#39;t, of course. But the boundary between &#34;looks alive&#34; and &#34;is alive&#34; is more blurred than we like to admit.&#xA;&#xA;Wolfram and the search for universality&#xA;&#xA;If Conway showed that complexity emerges from simplicity in one specific case, Stephen Wolfram tried to do something more ambitious: systematically map all possible behaviour of simple cellular systems.&#xA;&#xA;Wolfram - physicist, mathematician, creator of Mathematica (yes, that Mathematica) - published in the 1980s a series of papers on one-dimensional &#34;cellular automata,&#34; even simpler versions of the Game of Life. Imagine not a 2D grid, but a single row of cells. Each cell has only two neighbours (left and right) instead of eight. Each cell can be 0 or 1. And a cell&#39;s behaviour in the next generation depends only on its own state and those of its two neighbours.&#xA;&#xA;How many possible rules are there for such a system? 256. Exactly 256, because there are 8 possible configurations of three cells (2³), and for each you must decide whether the central cell will be 0 or 1 in the next generation (2⁸ = 256 total combinations).&#xA;&#xA;Wolfram numbered them all - Rule 0, Rule 1, Rule 2... 
Rule 255 - and tested them systematically, generation after generation, starting from different initial configurations. And he discovered that, despite their apparent diversity, all 256 automata naturally grouped into four categories of behaviour:&#xA;&#xA;Class I - convergence to a uniform state. Everything dies or everything becomes the same. Total order, extremely boring.&#xA;&#xA;Class II - simple periodic behaviour. Oscillators, stable patterns that repeat. Interesting order, but predictable.&#xA;&#xA;Class III - complete chaos. Pseudo-random noise, no persistent structures. Unpredictable but not interesting.&#xA;&#xA;Class IV - the interesting point. Complex non-periodic behaviour. Structures that emerge, interact, produce patterns that are neither ordered nor chaotic. It&#39;s the zone between order and chaos where interesting things happen.&#xA;&#xA;Class IV is the one that matters. It&#39;s the critical point - the same balance Conway spent eight years chasing in the Game of Life. And in 2002, Matthew Cook (working with Wolfram) formally proved that Rule 110 - a single one-dimensional ruleset among the 256 possible - is Turing-complete.&#xA;&#xA;  Rule 110. Three bits of input, one bit of output, eight total rules. Simpler than the Game of Life. And universal.&#xA;&#xA;Wolfram went further. In his controversial book A New Kind of Science (2002, over 1,200 pages that he wrote entirely himself, which already says something about the personality), he launched a much larger thesis: that the universe itself might fundamentally be a cellular automaton. That physical reality - the behaviour of particles, fields, forces, gravity - might be the result of simple rules applied uniformly to a discrete grid of &#34;cells&#34; at a sub-Planck scale.&#xA;&#xA;It&#39;s a bold thesis. 
The scientific community received it with significant scepticism - it isn&#39;t easily falsifiable in the traditional sense, it requires enormous conceptual leaps, and Wolfram doesn&#39;t exactly have a reputation for modesty (understatement). But it hasn&#39;t been disproved. And the fact that systems as simple as Rule 110 are sufficient to produce universal behaviour is proof that the principle works: from simplicity, any level of computational complexity can emerge.&#xA;&#xA;If the universe really is a cellular automaton, then God (or the Flying Spaghetti Monster) is a programmer who wrote very simple rules and then pressed &#34;Enter.&#34; Everything else - stars, galaxies, you reading this - is emergence. All consequence, no explicit design.&#xA;&#xA;The cultural legacy&#xA;&#xA;There&#39;s something strange about the cultural history of the Game of Life. There&#39;s nothing to win, nothing to lose, no objectives. It isn&#39;t a tool - it produces no practically useful results in any sense. It solves no real problems. It&#39;s a pure intellectual object. A puzzle with no solution because it has no question. And yet millions of people have implemented it. In every imaginable programming language. Python, Java, C, Rust, JavaScript, Haskell, Brainfuck (yes, really). On every platform. Arduino, Raspberry Pi, FPGA, GPU with CUDA. In every format. Terminal with ASCII art, graphical interfaces, physical LEDs, E-ink screens. On programmable calculators. On Game Boy. On two-euro microcontrollers.&#xA;&#xA;It has become the &#34;Hello World&#34; of simulation. The first program you write when you want to understand emergence, cellular automata, complexity. And every time someone re-implements it - and they do it purely for pleasure - they repeat an act that Conway performed in 1970: taking an abstract idea and turning it into something concrete, tangible, visible.&#xA;&#xA;I did it myself, years later - I implemented it in bash. There was no practical reason for it. 
I did it because I wanted to truly understand it, build it with my own hands, see how it worked. And this pattern repeats throughout hacker and open source culture. If you want to play the game of life - which sounds like it means something else entirely - you can download the script from here.&#xA;&#xA;The Game of Life has been ported to systems Conway would never have imagined. Someone implemented it in Excel with formulas. Someone built it with real electronic circuits. Someone constructed it with quantum cellular automata. Someone used it to generate music (every live cell is a note). Someone made it three-dimensional. Pure pleasure in building something that works, that does what it should do, that is elegant in its simplicity. It&#39;s the practical demonstration that mathematical beauty exists.&#xA;&#xA;Back to the grid&#xA;&#xA;That Flash grid I was staring at all those years ago is still in my memory with a strange clarity, like the feeling of looking at something that made no sense. Today, though, I know it made more sense than I could have imagined. Four rules, no objective, no designer saying &#34;now do this, now do that.&#34; And the result was - and is - one of the most elegant demonstrations that complexity doesn&#39;t need an author. That it can emerge from nothing.&#xA;&#xA;Von Neumann had asked: can a machine reproduce itself? Conway had searched for the simplest possible system that showed interesting behaviour. And what he found was something much larger: proof that universal computation can emerge from binary cells and four elementary rules. And everything else - the gliders, the guns, the logic circuits, Turing-completeness, the hypnotic beauty of the patterns that emerge - is consequence. Pure, inevitable consequence.&#xA;&#xA;And perhaps this is why that Flash grid stayed with me for years, even without understanding it. Because at some unconscious level I could sense that there was something fundamental inside it. 
Something that spoke to how the universe works - not literally, perhaps, but as a metaphor. As a demonstration that simple rules, applied consistently, produce everything we see around us. That the complexity of the world - ourselves included - might simply be a consequence of rules we don&#39;t yet know how to read.&#xA;&#xA;To quote an old UAAR slogan: &#34;The bad news is that God doesn&#39;t exist. The good news is that you don&#39;t need him.&#34;&#xA;&#xA;Sources and further reading&#xA;&#xA;Foundational papers and books&#xA;– Gardner, M. (1970). &#34;Mathematical Games: The Fantastic Combinations of John Conway&#39;s New Solitaire Game &#39;Life&#39;&#34;. Scientific American, 223(4), 120-123.&#xA;– Von Neumann, J. (1966). Theory of Self-Reproducing Automata. University of Illinois Press.&#xA;– Wolfram, S. (1983). &#34;Statistical Mechanics of Cellular Automata&#34;. Reviews of Modern Physics, 55(3), 601-644.&#xA;– Berlekamp, E. R., Conway, J. H., &amp; Guy, R. K. (1982). Winning Ways for Your Mathematical Plays, Volume 2: Games in Particular. Academic Press.&#xA;&#xA;Turing-completeness and implementations&#xA;– Rendell, P. (2000). &#34;A Turing Machine in Conway&#39;s Game of Life&#34;.&#xA;– Chapman, P., et al. (2006). &#34;OTCA Metapixel – Life in Life&#34;.&#xA;– Cook, M. (2004). &#34;Universality in Elementary Cellular Automata&#34;. 
Complex Systems, 15(1), 1-40.&#xA;&#xA;Online resources&#xA;– LifeWiki: https://conwaylife.com/wiki/ (the definitive resource, cataloguing thousands of patterns)&#xA;– Wikipedia: https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life&#xA;– Gosper&#39;s Glider Gun: https://en.wikipedia.org/wiki/Gun_(cellular_automaton)&#xA;– Rule 110: https://en.wikipedia.org/wiki/Rule_110&#xA;– Turing completeness: https://en.wikipedia.org/wiki/Turing_completeness&#xA;&#xA;Pattern explorers&#xA;– Online simulators: https://playgameoflife.com/&#xA;– Golly (dedicated software): http://golly.sourceforge.net/&#xA;&#xA;Interviews and biographical material&#xA;– Numberphile – John Conway interview series&#xA;&#xA;&lt;a href=&#34;https://remark.as/p/jolek78/game-of-life-the-game-that-wasnt-a-game&#34;&gt;Discuss...&lt;/a&gt;&#xA;&#xA;#GameOfLife #Conway #CellularAutomata #TuringComplete #Complexity #EmergentBehaviour #Mathematics #Wolfram #ComputerScience #Hacker&#xA;&#xA;&lt;div class=&#34;center&#34;&gt;&lt;a href=&#34;https://fosstodon.org/@jolek78&#34;&gt;Mastodon&lt;/a&gt; :: &lt;a href=&#34;https://pixelfed.social/jolek78&#34;&gt;Pixelfed&lt;/a&gt; :: &lt;a href=&#34;mailto:jolek78@posteo.net&#34;&gt;Email&lt;/a&gt; :: &lt;a href=&#34;https://app.cinny.in/login/envs.net/#/user/@jolek78:envs.net&#34;&gt;Element&lt;/a&gt;&lt;/div&gt;]]&gt;</description>
      <content:encoded><![CDATA[<p>Do you remember Flash games? The ones that ran in the browser before Adobe decided to kill everything in 2020? I do. There were sites – Miniclip, Newgrounds – that were a kind of uncurated digital playground, pages with black backgrounds and popups everywhere, where you could spend hours without really understanding what you were doing. You complain about brainrot? Maybe you don&#39;t remember the nineties web and that girl with the wart singing the polka... Anyway, it was one of those unremarkable afternoons. I don&#39;t remember the exact site – one of those places with incomprehensible URLs like “geocities.com/~someone/games” and graphics that hurt your eyes. I stumbled onto something strange. The Adobe Flash logo hadn&#39;t even finished loading, there were no instructions, no “Play” button. Just a grid of black and white cells changing, generation after generation, apparently at random.</p>



<p>I waited. I thought it was still loading. Nothing. The grid kept changing. I tried clicking on the cells. Nothing. I tried pressing keys on the keyboard. Nothing. I watched for a few minutes, waiting for something to happen – a game over, a score, an objective. Nothing. It wasn&#39;t a game. There was nothing to “play.” It was like watching rain fall, but digital. Hundreds and hundreds of pixels kept appearing and disappearing. I got bored, closed the tab. Years later – I don&#39;t remember how many, a lot – I happened to read an article on Wikipedia. The title was “Conway&#39;s Game of Life.” And the penny dropped.</p>

<p>What I had seen that day wasn&#39;t a game, or at least not in the traditional sense. It was a simulation. And that simulation, with four rules that even a child could understand, was doing something that none of those rules explicitly anticipated: producing complexity. Order from chaos. Structures that emerged, grew, interacted. Patterns that moved across the grid as if alive. And then – and this is where I had my epiphany – those structures could simulate an electronic circuit. Any electronic circuit. Theoretically, any computation that a Turing machine can perform. Four rules, binary cells. In essence: a universal computing machine.</p>

<p>Welcome to the story of how the English mathematician John Horton Conway, trying to build the simplest possible toy, accidentally built one of the most powerful demonstrations of how complexity can emerge from nothing. Dear creationists – yes, this one&#39;s for you too.</p>

<h2 id="von-neumann-had-a-question">Von Neumann had a question</h2>

<p>Before Conway, there was Von Neumann. John von Neumann – Bond, James... okay, I&#39;ll stop – was already asking, back in the 1940s, a question that sounds almost philosophical: can a machine build a copy of itself? It wasn&#39;t an abstract question. Von Neumann had already demonstrated theoretically that it was possible. His model – a two-dimensional “cellular automaton” – proved the principle. It worked like this: a configuration of cells on a grid contains within itself the “instructions” (encoded as the arrangement of active and inactive cells) to replicate itself. The structure reads these instructions, manipulates the surrounding cells, and generates an identical copy of itself in another area of the grid. The copy contains the same instructions, so it can repeat the process indefinitely. It&#39;s every engineer&#39;s dream (or nightmare, depending on your perspective): a machine that reproduces without external intervention.</p>

<p>The problem was the monstrous complexity of the system. Von Neumann&#39;s model required 29 different states per cell – twenty-nine – and a set of rules that filled pages and pages of algebra. It was functional, demonstrably correct, but it was a monster. Nobody could really grasp it at a glance, let alone implement it and study it in practice. It was like having the perfect recipe for a dish, but with 300 rare ingredients and 50 steps requiring laboratory equipment.</p>

<p>In 1962, the English mathematician John Horton Conway – professor at Cambridge, specialising in group theory and other things that sound complicated – decided to do something apparently simple. He looked for the most minimal possible version of Von Neumann&#39;s idea. A system of rules poor enough to be understandable by anyone, but rich enough to allow complex behaviour and, eventually, self-reproduction. It took him years. Not weeks, not months. Years. From 1962 to 1970. Eight years of proposals, tests, failures, adjustments. Every ruleset was analysed: too ordered? Everything converges to fixed configurations and the system dies. Too chaotic? Total noise, no structures. Conway was looking for a precise critical point: enough stability to allow persistent forms, enough instability to allow unpredictable and interesting behaviour.</p>

<p>He was obsessed with this balance. He tested it on graph paper (computers weren&#39;t yet fast enough to do it quickly), with groups of students, by hand, generation after generation. Painstaking work. Or the work of a madman, depending on how you look at it.</p>

<p>By 1970 he had found what he was looking for. He called it the “Game of Life.” Martin Gardner, who had a monthly column in Scientific American called “Mathematical Games,” presented it in October of that year. And within weeks it became one of the most famous objects in the entire history of recreational mathematics and computer science.</p>

<h2 id="the-four-rules-and-why-each-one-matters">The four rules (and why each one matters)</h2>

<p>The system is embarrassingly simple. You have an infinite two-dimensional grid (in practice: very large). Each cell can be in one of two states: alive (black) or dead (white). Each cell has eight neighbours – the four cardinal directions plus the four diagonals. At each generation, all cells simultaneously update their state following four rules:</p>
<ol><li><p>A live cell with fewer than 2 live neighbours dies – isolation. There isn&#39;t enough interaction to sustain life. It&#39;s loneliness that kills.</p></li>

<li><p>A live cell with 2 or 3 live neighbours survives – stability. Local density is just right. There&#39;s enough support, but not too much competition. It&#39;s the point of equilibrium.</p></li>

<li><p>A live cell with more than 3 live neighbours dies – overpopulation. Too much competition for resources (it&#39;s a metaphor, but it works). Too much crowding suffocates.</p></li>

<li><p>A dead cell with exactly 3 live neighbours comes to life – reproduction. Three live cells create the conditions to generate new life. Not 2, not 4. Exactly 3.</p></li></ol>

<p>That&#39;s it. Nothing else. No exceptions, no edge cases, no “if this cell is special then...”. Four rules, applied uniformly to every cell, every generation, forever.</p>
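<p>The four rules translate almost line for line into code. Here is a minimal sketch in Python (an illustration of mine, not code from the post, whose own implementation was in bash), representing the grid as a set of live (x, y) coordinates so the "infinite" grid costs nothing:</p>

```python
def neighbours(cell):
    """The eight cells surrounding a given (x, y) cell."""
    x, y = cell
    return {(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def step(live):
    """Apply Conway's four rules simultaneously to every cell."""
    # Only live cells and their immediate neighbours can change state.
    candidates = live | {n for cell in live for n in neighbours(cell)}
    nxt = set()
    for cell in candidates:
        n = len(neighbours(cell) & live)
        if cell in live and n in (2, 3):
            nxt.add(cell)        # rule 2: survival
        elif cell not in live and n == 3:
            nxt.add(cell)        # rule 4: reproduction
        # rules 1 and 3 (isolation, overpopulation): cell simply not kept
    return nxt
```

<p>Note that the update is simultaneous: the next generation is built entirely from the current one, never by modifying the grid in place. A lone cell has zero live neighbours and dies of isolation; in an empty grid nothing can ever be born.</p>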

<p>Now stop for a moment and think about this: where is the complexity in these rules? Where does it say that structures must emerge? Where does it say that patterns can exist that move, oscillate, interact in non-trivial ways? Nowhere. The rules only talk about individual cells and their immediate neighbours. Nothing more. And yet complexity emerges. It emerges necessarily, as an inevitable consequence of that subtle balance Conway spent eight years searching for. It isn&#39;t programmed into the rules. It&#39;s an emergent property of the system. And this is the point that made the penny drop for me, years after that grid: complexity doesn&#39;t need to be designed. It can simply happen, if the conditions are right.</p>

<h2 id="the-taxonomy">The taxonomy</h2>

<p>In the first year after publication in Scientific American, readers – programmers, mathematicians, students, enthusiasts – flooded the magazine with discoveries. It had become a viral phenomenon, in an era when “viral” still meant photocopies and letters sent by post. And very quickly a natural classification of structures emerged.</p>

<p><strong>Still lifes</strong> – completely stable patterns that never change. The simplest is the “block”: a 2×2 square. In the block, every cell has exactly 3 neighbours – the other three cells of the square. Each one survives because it has exactly 3 live neighbours. The pattern doesn&#39;t change, doesn&#39;t move. It&#39;s just there, motionless, forever. Other examples: the “beehive,” the “loaf” – stable forms that once formed remain identical.</p>

<p><strong>Oscillators</strong> – patterns that change but return to their initial configuration after a finite number of generations. The simplest is the “blinker”: three cells in a horizontal line. In the next generation they become three cells in a vertical line. Then back to horizontal. Then vertical. Period 2, infinite oscillation. Other more complex examples: the “toad” (period 2), the “beacon” (period 2), the “pulsar” (period 3, one of the most visually beautiful).</p>

<p>And then there&#39;s the one. The glider, my favourite – illustrated in detail on MathWorld.</p>

<p>Five cells, arranged in a specific configuration that looks almost like a wonky little triangle. And this thing – this small five-cell structure – moves. Not in the sense that the cells physically shift around the grid (the cells are fixed, remember). In the sense that the pattern propagates through space, one cell at a time, diagonally downward to the right (or in any direction, depending on the initial orientation). After four generations, the glider has returned to its original configuration, but shifted one position diagonally. And then it continues. Forever. It crosses the grid indefinitely, unless it meets an obstacle.</p>
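<p>Seen in code, the glider&#39;s drift is easy to verify. A small sketch (assuming the same sparse set-of-cells representation; the starting coordinates are one of the glider&#39;s four phases): after four generations the pattern is identical, shifted one cell diagonally.</p>

```python
from collections import Counter

def step(live):
    """One Game of Life generation over a set of (x, y) live cells."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
# Same five cells, translated one step diagonally: the pattern
# has "moved", even though no individual cell ever did.
assert g == {(x + 1, y + 1) for (x, y) in glider}
```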

<p>And here something starts to change in the way people thought about the system. Because a glider isn&#39;t just a pretty pattern to watch. It&#39;s a signal. It&#39;s something that carries information from point A to point B. It has a direction, it has a speed (c/4, where c is the maximum possible speed in the Game of Life, which is one cell per generation), it has persistence.</p>

<p>And if you have a glider, the next question is obvious: can you create something that generates more gliders? The answer arrived in 1970, a few months after the original publication. Bill Gosper – an MIT programmer, one of the first hackers in history – found the “glider gun.” A configuration of 36 cells that, every 30 generations, spits out a new glider. A periodic signal generator: a signal, a periodic source, a precise direction. In a 2D grid with binary cells and four elementary rules. This is where the story is going.</p>

<h2 id="the-heart-four-rules-one-turing-machine">The heart: four rules, one Turing machine</h2>

<p>TL;DR: The Game of Life is Turing-complete. This means that, in principle, you can perform any computation that a Turing machine can perform, inside a 2D grid with binary cells and four rules. No processor. No integrated circuits. Just cells being born and dying according to Conway&#39;s four rules.</p>

<p>To understand why the Game of Life is Turing-complete, you need to take a step back on what “Turing-complete” means. Alan Turing, in 1936 (at 24 years old – the age at which I was still playing at being a Wikipedia editor), defined an abstract model of computation: a machine that reads an infinite tape of cells, writes on it, and moves forward or backward, following a finite set of deterministic rules. If a system can simulate any Turing machine – that is, if you can configure it to perform any computation that is computable – that system is Turing-complete. Which means, in practice, that it&#39;s universal from a computational standpoint. There is nothing a Turing machine can do that this system cannot do (given enough space and time).</p>

<p>Now back to the Game of Life. We have the glider: a signal that moves. We have the glider gun: a periodic source of signals. But is this enough to build a computer? No. To have a logic circuit you need the fundamental logic gates – AND, OR, NOT. All basic boolean operations. Everything else – addition, multiplication, comparisons, conditional jumps, arbitrary algorithms – is built by combining logic gates.</p>

<p>Logic gates in the Game of Life are implemented by exploiting interactions between gliders. When two gliders intersect, the result depends on their relative configuration, the precise timing of the encounter, the direction of approach. Some combinations cause the two gliders to completely annihilate each other (output: no glider). Others produce new gliders in specific directions (output: one or more gliders). By changing the geometry of the encounter – the exact position of the glider guns that generate them, the timing, the distances – you can build configurations that behave like AND, OR, and NOT gates. The incoming gliders represent the input bits (0 or 1, depending on whether the glider is present or not). The outgoing gliders represent the result of the logical operation.</p>

<p>If you have logic gates, you have combinational circuits. If you have combinational circuits and a memory mechanism (implemented with glider loops and oscillating patterns), you have sequential circuits. And if you have arbitrary sequential circuits, you have a Turing machine.</p>

<p>This isn&#39;t theory. It&#39;s been done. In 2000, Paul Rendell built a functioning Turing machine entirely within the Game of Life – with tape, read/write head, states, transitions. In 2006, a group of researchers led by Paul Chapman took the concept further still and built a complete computer – including a display – that runs the Game of Life... inside the Game of Life.</p>

<p>These implementations are, obviously, infinitely slower than a real processor. A single clock cycle requires hundreds or thousands of generations. A simple addition takes billions of steps. But they work. The computation happens, correct, deterministic, verifiable.</p>

<p>But back to the main point. What does all this mean? It means that the grid I was staring at all those years ago – that thing I didn&#39;t understand, that looked like organised noise – had more theoretical computational power than any processor I&#39;ve ever used. Not in terms of speed (that would be ridiculous) but in terms of what can be done.</p>

<h2 id="the-biology-that-isn-t-biology-but-almost">The biology that isn&#39;t biology (but almost)</h2>

<p>Conway never claimed that the Game of Life literally simulated biological life. The rules have nothing to do with DNA, cells, metabolism, evolution. There&#39;s no natural selection, no adaptation. It&#39;s a purely deterministic system where the same initial conditions always produce the same result. Zero stochasticity, zero mutations, zero genetics.</p>

<p>And yet the field of “artificial life” owes an enormous debt to the Game of Life. Because the GoL demonstrated experimentally a principle that before 1970 was more philosophical intuition than concrete proof: biological complexity doesn&#39;t require an intelligent designer. It can emerge from simple rules, applied uniformly, with nothing more than local interactions between identical elements. Self-organisation – structures that emerge without central coordination. Competition for space – patterns that survive are those satisfying the conditions of survival (the four rules). Emergence of hierarchical structures – from the single glider (elementary pattern) to the glider gun (generator) to logic circuits (systems of patterns that interact in a coordinated way).</p>

<p>It&#39;s not biological evolution in the Darwinian sense. But it&#39;s the same underlying principle: from simplicity, complexity emerges, without that complexity needing to be explicitly encoded in the fundamental rules.</p>

<p>The risk here is always falling into superficial analogies that don&#39;t hold up to analysis. The GoL doesn&#39;t simulate real ecosystems. The “cells” aren&#39;t biological cells. There&#39;s no metabolism, no sexual reproduction, no genetic variability. The parallel should be taken for what it is: an illustrative case of a more general principle, not a replica of real life. But it remains true that when you watch a glider gun fire gliders indefinitely, or when you see complex patterns emerging from random initial configurations, it&#39;s hard not to think: “this looks alive.” It isn&#39;t, of course. But the boundary between “looks alive” and “is alive” is more blurred than we like to admit.</p>

<h2 id="wolfram-and-the-search-for-universality">Wolfram and the search for universality</h2>

<p>If Conway showed that complexity emerges from simplicity in one specific case, Stephen Wolfram tried to do something more ambitious: systematically map all possible behaviour of simple cellular systems.</p>

<p>Wolfram – physicist, mathematician, creator of Mathematica (yes, <em>that</em> Mathematica) – published in the 1980s a series of papers on one-dimensional “cellular automata,” even simpler versions of the Game of Life. Imagine not a 2D grid, but a single row of cells. Each cell has only two neighbours (left and right) instead of eight. Each cell can be 0 or 1. And a cell&#39;s behaviour in the next generation depends only on its own state and those of its two neighbours.</p>

<p>How many possible rules are there for such a system? 256. Exactly 256, because there are 8 possible configurations of three cells (2³), and for each you must decide whether the central cell will be 0 or 1 in the next generation (2⁸ = 256 total combinations).</p>

<p>Wolfram numbered them all – Rule 0, Rule 1, Rule 2... Rule 255 – and tested them systematically, generation after generation, starting from different initial configurations. And he discovered that, despite their apparent diversity, all 256 automata naturally grouped into four categories of behaviour:</p>

<p><strong>Class I</strong> – convergence to a uniform state. Everything dies or everything becomes the same. Total order, extremely boring.</p>

<p><strong>Class II</strong> – simple periodic behaviour. Oscillators, stable patterns that repeat. Interesting order, but predictable.</p>

<p><strong>Class III</strong> – complete chaos. Pseudo-random noise, no persistent structures. Unpredictable but not interesting.</p>

<p><strong>Class IV</strong> – the interesting point. Complex non-periodic behaviour. Structures that emerge, interact, produce patterns that are neither ordered nor chaotic. It&#39;s the zone between order and chaos where interesting things happen.</p>

<p>Class IV is the one that matters. It&#39;s the critical point – the same balance Conway spent eight years chasing in the Game of Life. And in 2002, Matthew Cook (working with Wolfram) formally proved that Rule 110 – a single one-dimensional ruleset among the 256 possible – is Turing-complete.</p>

<blockquote><p>Rule 110. Three bits of input, one bit of output, eight total rules. Simpler than the Game of Life. And universal.</p></blockquote>
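<p>The entire rule table of an elementary automaton is just the binary expansion of its number – which is exactly why there are 256 of them. A minimal sketch in Python (assuming a wrap-around row; this is just the update rule itself, not Cook&#39;s universality construction):</p>

```python
def eca_step(rule, cells):
    """One generation of a 1-D elementary cellular automaton.
    `rule` is the Wolfram rule number (0-255); `cells` is a list
    of 0/1 values, treated as a wrap-around (cyclic) row."""
    n = len(cells)
    out = []
    for i in range(n):
        # Read (left, centre, right) as a 3-bit number 0..7...
        idx = cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n]
        # ...and look the new state up in the rule's binary expansion.
        out.append((rule >> idx) & 1)
    return out

# Rule 110 = 0b01101110: e.g. neighbourhood 110 -> 1, 111 -> 0.
row = [0] * 15 + [1]  # start from a single live cell
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = eca_step(110, row)
```

<p>Run it for a few hundred generations on a wide row and the characteristic Class IV texture appears: persistent structures drifting through a patterned background.</p>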

<p>Wolfram went further. In his controversial book <em>A New Kind of Science</em> (2002, over 1,200 pages that he wrote entirely himself, which already says something about his personality), he launched a much larger thesis: that the universe itself might fundamentally be a cellular automaton. That physical reality – the behaviour of particles, fields, forces, gravity – might be the result of simple rules applied uniformly to a discrete grid of “cells” at a sub-Planck scale.</p>

<p>It&#39;s a bold thesis. The scientific community received it with significant scepticism – it isn&#39;t easily falsifiable in the traditional sense, it requires enormous conceptual leaps, and Wolfram doesn&#39;t exactly have a reputation for modesty (understatement). But it hasn&#39;t been disproved. And the fact that systems as simple as Rule 110 are sufficient to produce universal behaviour is proof that the principle works: from simplicity, any level of computational complexity can emerge.</p>

<p>If the universe really is a cellular automaton, then God (or the Flying Spaghetti Monster) is a programmer who wrote very simple rules and then pressed “Enter.” Everything else – stars, galaxies, you reading this – is emergence. All consequence, no explicit design.</p>

<h2 id="the-cultural-legacy">The cultural legacy</h2>

<p>There&#39;s something strange about the cultural history of the Game of Life. There&#39;s nothing to win, nothing to lose, no objectives. It isn&#39;t a tool – it produces no practically useful results in any sense. It solves no real problems. It&#39;s a pure intellectual object. A puzzle with no solution because it has no question. And yet millions of people have implemented it. In every imaginable programming language. Python, Java, C, Rust, JavaScript, Haskell, Brainfuck (yes, really). On every platform. Arduino, Raspberry Pi, FPGA, GPU with CUDA. In every format. Terminal with ASCII art, graphical interfaces, physical LEDs, E-ink screens. On programmable calculators. On Game Boy. On two-euro microcontrollers.</p>

<p>It has become the “Hello World” of simulation. The first program you write when you want to understand emergence, cellular automata, complexity. And every time someone re-implements it – and they do it purely for pleasure – they repeat an act that Conway performed in 1970: taking an abstract idea and turning it into something concrete, tangible, visible.</p>

<p>I did it myself, years later – I implemented it in bash. There was no practical reason for it. I did it because I wanted to truly understand it, build it with my own hands, see how it worked. And this pattern repeats throughout hacker and open source culture. If you want to play the game of life – which sounds like it means something else entirely – you can download the script from here.</p>

<p>The Game of Life has been ported to systems Conway would never have imagined. Someone implemented it in Excel with formulas. Someone built it with real electronic circuits. Someone constructed it with quantum cellular automata. Someone used it to generate music (every live cell is a note). Someone made it three-dimensional. Pure pleasure in building something that works, that does what it should do, that is elegant in its simplicity. It&#39;s the practical demonstration that mathematical beauty exists.</p>

<h2 id="back-to-the-grid">Back to the grid</h2>

<p>That Flash grid I was staring at all those years ago is still in my memory with a strange clarity, like the feeling of looking at something that made no sense. Today, though, I know it made more sense than I could have imagined. Four rules, no objective, no designer saying “now do this, now do that.” And the result was – and is – one of the most elegant demonstrations that complexity doesn&#39;t need an author. That it can emerge from nothing.</p>

<p>Von Neumann had asked: can a machine reproduce itself? Conway had searched for the simplest possible system that showed interesting behaviour. And what he found was something much larger: proof that universal computation can emerge from binary cells and four elementary rules. And everything else – the gliders, the guns, the logic circuits, Turing-completeness, the hypnotic beauty of the patterns that emerge – is consequence. Pure, inevitable consequence.</p>

<p>And perhaps this is why that Flash grid stayed with me for years, even without understanding it. Because at some unconscious level I could sense that there was something fundamental inside it. Something that spoke to how the universe works – not literally, perhaps, but as a metaphor. As a demonstration that simple rules, applied consistently, produce everything we see around us. That the complexity of the world – ourselves included – might simply be a consequence of rules we don&#39;t yet know how to read.</p>

<p>To quote an old UAAR slogan: “The bad news is that God doesn&#39;t exist. The good news is that you don&#39;t need him.”</p>

<h2 id="sources-and-further-reading">Sources and further reading</h2>

<p><strong>Foundational papers and books</strong>
– Gardner, M. (1970). “Mathematical Games: The Fantastic Combinations of John Conway&#39;s New Solitaire Game &#39;Life&#39;”. <em>Scientific American</em>, 223(4), 120-123.
– Von Neumann, J. (1966). <em>Theory of Self-Reproducing Automata</em>. University of Illinois Press.
– Wolfram, S. (1983). “Statistical Mechanics of Cellular Automata”. <em>Reviews of Modern Physics</em>, 55(3), 601-644.
– Berlekamp, E. R., Conway, J. H., &amp; Guy, R. K. (1982). <em>Winning Ways for Your Mathematical Plays, Volume 2: Games in Particular</em>. Academic Press.</p>

<p><strong>Turing-completeness and implementations</strong>
– Rendell, P. (2000). “A Turing Machine in Conway&#39;s Game of Life”.
– Chapman, P., et al. (2006). “OTCA Metapixel – Life in Life”.
– Cook, M. (2004). “Universality in Elementary Cellular Automata”. <em>Complex Systems</em>, 15(1), 1-40.</p>

<p><strong>Online resources</strong>
– LifeWiki: <a href="https://conwaylife.com/wiki/">https://conwaylife.com/wiki/</a> (the definitive resource, cataloguing thousands of patterns)
– Wikipedia: <a href="https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life">https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life</a>
– Gosper&#39;s Glider Gun: <a href="https://en.wikipedia.org/wiki/Gun_(cellular_automaton)">https://en.wikipedia.org/wiki/Gun_(cellular_automaton)</a>
– Rule 110: <a href="https://en.wikipedia.org/wiki/Rule_110">https://en.wikipedia.org/wiki/Rule_110</a>
– Turing completeness: <a href="https://en.wikipedia.org/wiki/Turing_completeness">https://en.wikipedia.org/wiki/Turing_completeness</a></p>

<p><strong>Pattern explorers</strong>
– Online simulators: <a href="https://playgameoflife.com/">https://playgameoflife.com/</a>
– Golly (dedicated software): <a href="http://golly.sourceforge.net/">http://golly.sourceforge.net/</a></p>

<p><strong>Interviews and biographical material</strong>
– Numberphile – John Conway interview series</p>

<p><a href="https://remark.as/p/jolek78/game-of-life-the-game-that-wasnt-a-game">Discuss...</a></p>

<p><a href="https://jolek78.writeas.com/tag:GameOfLife" class="hashtag"><span>#</span><span class="p-category">GameOfLife</span></a> <a href="https://jolek78.writeas.com/tag:Conway" class="hashtag"><span>#</span><span class="p-category">Conway</span></a> <a href="https://jolek78.writeas.com/tag:CellularAutomata" class="hashtag"><span>#</span><span class="p-category">CellularAutomata</span></a> <a href="https://jolek78.writeas.com/tag:TuringComplete" class="hashtag"><span>#</span><span class="p-category">TuringComplete</span></a> <a href="https://jolek78.writeas.com/tag:Complexity" class="hashtag"><span>#</span><span class="p-category">Complexity</span></a> <a href="https://jolek78.writeas.com/tag:EmergentBehaviour" class="hashtag"><span>#</span><span class="p-category">EmergentBehaviour</span></a> <a href="https://jolek78.writeas.com/tag:Mathematics" class="hashtag"><span>#</span><span class="p-category">Mathematics</span></a> <a href="https://jolek78.writeas.com/tag:Wolfram" class="hashtag"><span>#</span><span class="p-category">Wolfram</span></a> <a href="https://jolek78.writeas.com/tag:ComputerScience" class="hashtag"><span>#</span><span class="p-category">ComputerScience</span></a> <a href="https://jolek78.writeas.com/tag:Hacker" class="hashtag"><span>#</span><span class="p-category">Hacker</span></a></p>

<div class="center"><a href="https://fosstodon.org/@jolek78">Mastodon</a> :: <a href="https://pixelfed.social/jolek78">Pixelfed</a> :: <a href="mailto:jolek78@posteo.net">Email</a>  :: <a href="https://app.cinny.in/login/envs.net/#/user/@jolek78:envs.net"> Element</a></div>
]]></content:encoded>
      <guid>https://jolek78.writeas.com/game-of-life-the-game-that-wasnt-a-game</guid>
      <pubDate>Sat, 21 Feb 2026 09:00:00 +0000</pubDate>
    </item>
    <item>
      <title>Legacy systems: problem or resource?</title>
      <link>https://jolek78.writeas.com/legacy-systems-problem-or-resource?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Tuesday morning, 9 AM. After a routine patching session, a long-standing ZFS storage system running Solaris 11 suddenly stops talking to its Windows 10 clients. The culprit is the usual, maddening SMB dialect dance: Windows pushes for SMB 3 on security grounds, while Solaris&#39;s native service struggles through the negotiation. Two days of banging my head against the wall - hard - and then the discovery: OpenCSW. A community that maintains updated packages for Solaris where the vendor long since threw in the towel. Updated libraries, sorted dependencies, problem solved. There are volunteers out there patching critical systems better than the official vendor ever did. Worth knowing.&#xA;&#xA;!--more--&#xA;&#xA;Same film, next scene.&#xA;&#xA;Friday afternoon - because critical migrations always happen out of hours. I&#39;m migrating a system from Red Hat 7 to Red Hat 9. Why? To support the new version of Charon-SSP, the Stromasys emulator that lets SPARC hardware run on x86. All of this to keep alive a virtual machine running Solaris 9, an operating system from 2002 that went end-of-life in 2014. It&#39;s a layered structure, each level propping up the one below. One of those classic houses of cards you can&#39;t quite understand how it stays balanced.&#xA;&#xA;Welcome to the world of legacy systems. A world where &#34;modernising&#34; often means finding increasingly creative ways to change nothing at all, and where communities and old-school sysadmins are the ones guarding infrastructure that corporations abandoned long ago. Try asking Oracle for Solaris support: they&#39;ll laugh in your face.&#xA;&#xA;The numbers&#xA;&#xA;In January 2025, the UK government published a report that should have rattled a few chairs at Westminster. Twenty-eight percent of central government IT systems are classified as legacy - up from 26% in 2023. Estimated productivity losses? Forty-five billion pounds. 
In 2024, the NHS recorded 123 critical IT system crashes. One hundred and twenty-three.&#xA;&#xA;But wait, because the numbers get even more interesting when you look at the banking sector. COBOL - a programming language dating back to 1959 - still processes 95% of global ATM transactions, 43% of the world&#39;s banking systems, and around 3 trillion dollars of commerce every day. Every day. It&#39;s estimated there are still 220 billion lines of COBOL code in production.&#xA;&#xA;And Windows XP? The one Microsoft stopped supporting in 2014? Today, 1-2% of internet-connected devices still run it. Sounds small until you realise we&#39;re talking about millions of machines. And not your grandad&#39;s PC: we&#39;re talking about MRI scanners in hospitals, industrial control systems, bank ATMs. Critical devices that can&#39;t be updated because the software controlling them only runs on XP, and re-certifying the entire system would cost more than building a new one.&#xA;&#xA;Remember WannaCry in 2017? The ransomware that paralysed 75,000 computers in 99 countries? The NHS was devastated. And do you know how many Windows XP machines the NHS had in 2019 - two years after the attack, five years after end-of-support? 2,300.&#xA;&#xA;At this point in the story one might say &#34;right, the problem is clear: legacy systems are dangerous and need replacing.&#34; And that would be the easy narrative - the one that consultants selling &#34;digital transformation&#34; love, and vendors wanting to sell licences love. 
What if I told you that a Solaris 11 system, properly isolated in a VLAN, is significantly more stable and secure than a shiny new Ubuntu 24.04 LTS?&#xA;&#xA;Reality, as always, is more complicated.&#xA;&#xA;Problems upon problems&#xA;&#xA;Here&#39;s the fundamental issue: we use the word &#34;legacy&#34; as if it meant one thing, when it actually covers at least three completely different situations.&#xA;&#xA;Type 1: Unavoidable legacy&#xA;Solaris 9 on SPARC hardware controlling industrial machinery. Windows XP on MRI scanners. Systems where hardware and software are inseparable, where an upgrade would require replacing equipment worth millions, where re-certification for medical or industrial use would take years and fortunes. These systems are legacy out of necessity, not negligence. There&#39;s no fault here. There&#39;s only the reality of a technological ecosystem where certain devices have 20-30 year lifespans and the software controlling them can&#39;t be changed without changing everything else.&#xA;&#xA;Type 2: Avoidable legacy&#xA;CentOS 7, for instance. End of support: 30 June 2024. Available alternatives: AlmaLinux, Rocky Linux, migration to RHEL. Cost of migration? Economically: it depends. In time, resources, learning: enormous. How many CentOS 7 systems are still in production today? Too many. Why? Because nobody wants to pay RHEL licences, because &#34;we&#39;ll do it next quarter,&#34; because &#34;there are other important things to deal with,&#34; because &#34;if it ain&#39;t broke, don&#39;t fix it.&#34; This is legacy by choice - or rather, by inertia. It&#39;s an organisational decision, not a technical one.&#xA;&#xA;Type 3: Non-legacy perceived as legacy&#xA;Take COBOL on modern IBM mainframes. Today&#39;s mainframes aren&#39;t the ones from the 1970s - they&#39;re immensely powerful machines, with dedicated processors, hardware security, 99.999% uptime. 
The COBOL running on them is the same as ever, but the underlying infrastructure is current. Is the code legacy, or the platform? And if the platform is modern, can we still call it legacy? The distinction is fundamental because it determines the strategy. A Type 1 system needs to be isolated and protected. A Type 2 system needs to be migrated. A Type 3 system needs to be left alone. Try explaining that to a CTO who just finished reading a Gartner report on &#34;legacy modernisation.&#34;&#xA;&#xA;From a thread on TheLayoff:&#xA;&#xA;  &#34;FWIW, there&#39;s a very good chance that your electronic footprint on any given day has passed through a piece of SPARC equipment running Solaris, and that will continue to happen for a good portion of your lifetime.&#34;&#xA;&#xA;Would you believe me if I told you I&#39;ve seen original BSD systems with eleven years of uptime?&#xA;&#xA;The real problem isn&#39;t the machines&#xA;&#xA;Here we get to the heart of the matter. And the answer will surprise you: the real problem with legacy systems isn&#39;t technological. It&#39;s human.&#xA;&#xA;Let&#39;s talk about the &#34;COBOL Cowboys&#34; - retired programmers called back on consulting contracts when something breaks. They&#39;re the last generation that knows how those systems actually work. When they leave, they take decades of undocumented knowledge with them. According to Deloitte, companies have seen a 23% decline in mainframe workforce over the last five years, with 63% of those positions left unfilled. It&#39;s not that there&#39;s no money to hire - it&#39;s that there&#39;s nobody to hire. Young developers don&#39;t want to learn COBOL. It&#39;s &#34;unsexy.&#34; It&#39;s &#34;archaic.&#34; It&#39;s &#34;boomer stuff.&#34;&#xA;&#xA;From ComputerWeekly:&#xA;&#xA;  &#34;The retirement of the generation of experts who possess in-depth knowledge of Cobol systems is leading to a severe knowledge shortage. 
They have knowledge not only of the Cobol programming language, but also of the specific systems they have worked on and built over the years&#34; - Tijs van der Storm, CWI/University of Groningen&#xA;&#xA;And so we find ourselves in a paradoxical situation: systems processing trillions of dollars a day, managed by people who might die of old age before anyone learns to replace them. Knowledge transfer never happened. Documentation - where it exists - is outdated, incomplete, written in a language nobody understands anymore. And every year that passes, the gap widens.&#xA;&#xA;This is the real legacy problem. Not the systems. The people.&#xA;&#xA;When modernisation fails (spoiler: often)&#xA;&#xA;There&#39;s a story that people in the UK know well, but that strangely never comes up when &#34;digital transformation&#34; is being discussed. It&#39;s called the National Programme for IT, or NPfIT.&#xA;&#xA;Launched in 2002, it was the largest public sector IT project in British history. The goal? Modernise the entire NHS IT infrastructure. Initial budget: 6 billion pounds. Planned completion: 2010.&#xA;&#xA;In 2011, after nine years of delays, exploding costs, vendors abandoning the project, and a system that simply didn&#39;t work, the UK government announced the dismantling of NPfIT. Final estimated cost: over 10 billion pounds. For a system that was never completed.&#xA;&#xA;What went wrong? Practically everything. Top-down decisions made by politicians who didn&#39;t understand technology. Rigid contracts with vendors who didn&#39;t understand the NHS. Resistance from medical staff who hadn&#39;t been consulted. Continuously shifting requirements. Impossible integrations with existing systems.&#xA;&#xA;From TechMonitor:&#xA;&#xA;  &#34;A lack of digital and procurement capability within government has led to wasted expenditure and lack of progress on major digital transformation programmes.&#34;&#xA;&#xA;The lesson? 
&#34;Modernising&#34; is not automatically better than &#34;maintaining.&#34; Sometimes, the legacy system that works is preferable to the modern system that never will. But this lesson, apparently, we haven&#39;t learned. Because the dominant narrative remains the same: legacy = bad, modern = good. And consultants keep selling the shiny new thing.&#xA;&#xA;Strategies that actually work&#xA;&#xA;TL;DR: There is no single solution. There&#39;s a matrix of options ranging from virtualisation to isolation, from refactoring to API wrapping. The choice depends on the type of legacy, the budget, and the acceptable level of risk.&#xA;&#xA;The Gartner 7Rs (yes, they have a name for everything):&#xA;&#xA;Retire - Switch it off. Only works if nobody&#39;s actually using it.&#xA;Retain - Keep it as is. Sometimes the best choice.&#xA;Relocate - Move it to new infrastructure without changes.&#xA;Rehost - &#34;Lift and shift&#34; to cloud. Changes the hardware, not the software.&#xA;Replatform - Minimal changes to run on a modern platform.&#xA;Refactor - Rewrite parts of the code while maintaining functionality.&#xA;Rearchitect - Completely redesign. The riskiest and most expensive.&#xA;&#xA;Virtualisation and emulation&#xA;For systems on proprietary architectures (SPARC, VAX, Alpha, PA-RISC), solutions like Stromasys Charon emulate the original hardware on x86-64 platforms. The operating system and software don&#39;t change - only the iron underneath does. For legacy x86 systems (Windows XP, Server 2003, old Linux), standard virtualisation (Proxmox, VMware, KVM) allows you to &#34;freeze&#34; the environment and keep it running indefinitely. I&#39;ve seen Proxmox setups running Windows 3.11. I&#39;m not joking.&#xA;&#xA;Network isolation&#xA;If a system can&#39;t be patched, it can at least be isolated. Dedicated VLANs, restrictive firewalls, air-gap where possible. 
It doesn&#39;t fix the problem, but it limits the impact in case of compromise.&#xA;&#xA;API wrapping&#xA;Put a modern REST layer in front of a legacy system. The mainframe keeps doing what it knows how to do; the outside world talks to the API. This is the strategy many banks use to expose COBOL functionality to mobile applications.&#xA;&#xA;The public sector: a special case&#xA;&#xA;Those who work in the public sector know that the dynamics differ from the private sector in ways that make the legacy problem even more complex.&#xA;&#xA;Multi-year budgets. You can&#39;t decide in January to modernise a system and have the money by March. Funding cycles are long, rigid, subject to political priorities that change with every election.&#xA;&#xA;Procurement. Buying software in the public sector is a bureaucratic nightmare. Tenders, compliance requirements, impact assessments, GDPR, accessibility. A purchase that takes a week in the private sector takes months here.&#xA;&#xA;Compliance. Systems handling health, education, or tax data are subject to stringent regulatory requirements. You can&#39;t simply &#34;migrate to the cloud&#34; - you have to demonstrate that the cloud complies with an endless list of standards.&#xA;&#xA;Service continuity (which in my view is the core problem). If a private company&#39;s system goes down for a day, they lose money. If a system managing national exams, or medical prescriptions, or pension payments goes down, the consequences fall on real people with no alternatives. The risk of downtime during a migration is often simply unacceptable.&#xA;&#xA;And then there&#39;s the political dimension. Every government wants to announce its own &#34;digital revolution.&#34; Nobody wants to inherit the previous government&#39;s problems. And so projects get started, abandoned, restarted, re-abandoned, in an endless cycle of waste.&#xA;&#xA;NPfIT wasn&#39;t an exception. 
It was the rule.&#xA;&#xA;The uncomfortable question&#xA;&#xA;At this point, the question nobody wants to ask is this: what if some legacy systems were simply… better? Not better in an absolute sense, but better for their specific purpose?&#xA;&#xA;Let me tell you something. I worked for years in environments dealing with large-scale Oracle infrastructure - the company that sells &#34;cloud transformation&#34; and &#34;modern infrastructure&#34; to half the world. And among other things, you know what got managed day to day? Old ZFS storage. Stuff that, on paper, should have been &#34;modernised&#34; years ago. Those machines had been running since before Docker existed, before Kubernetes, before &#34;cloud native&#34; became a term. And they worked. Quietly. Without drama. Nobody was in any hurry to replace them. Why would they be? In pursuit of what advantage, exactly?&#xA;&#xA;The COBOL processing bank transactions has been optimised for sixty years. Every bug has been found and fixed. Every edge case has been handled. Every possible scenario has been tested in production billions of times. It&#39;s code that has achieved a kind of perfection through Darwinian evolution. Rewriting it in Python would mean starting from scratch. New bugs. New untested scenarios. Years of instability before reaching the same level of reliability.&#xA;&#xA;And in the meantime? In the meantime, the legacy system keeps working. There&#39;s a reason banks aren&#39;t in a rush to abandon mainframes. It&#39;s not ignorance. It&#39;s not laziness. It&#39;s that they&#39;ve done the maths and understood that the risk of the new outweighs the cost of the old. And the old administrators have retired. But this is an uncomfortable truth. It doesn&#39;t sell well in PowerPoint presentations. It doesn&#39;t generate consulting contracts. It doesn&#39;t make tech headlines.&#xA;&#xA;And so we keep talking about &#34;modernisation&#34; as if it were automatically a good thing. 
As if &#34;new&#34; meant &#34;better.&#34; As if technology had a moral direction.&#xA;&#xA;So what?&#xA;&#xA;Legacy doesn&#39;t mean old - it means abandoned. The problem is never technical - it&#39;s always organisational. And &#34;modernising&#34; is not automatically better than &#34;maintaining.&#34;&#xA;&#xA;If there&#39;s one lesson, it&#39;s this: be suspicious of anyone with simple answers to complex problems.&#xA;&#xA;Every time I hear some manager say &#34;we need to automate everything with AI,&#34; I think about the software pachyderms holding up half of critical infrastructure. I think about the time it would take to train a model on COBOL written in 1987 with no documentation. I think about how long it would take to migrate a Java 1.7 system running on Solaris 9. I think about the hours spent reverse-engineering platforms still running Lotus Notes. I think about the costs. I think about the risks. And then I think that those same managers don&#39;t have the budget to hire juniors willing - and why should they be, when the IT world is moving in a completely different direction - to learn systems that have been decommissioned for at least thirty years. And I laugh. Bitterly, but I laugh. Then I take a few drops of CBD to calm myself down.&#xA;&#xA;Before talking about artificial intelligence - and those who know me know I&#39;m not against AI at all - perhaps we should make sure that human intelligence doesn&#39;t retire, taking years of undocumented knowledge with it. 
But that, evidently, is a less sexy priority to put on the slides.&#xA;&#xA;Sources and further reading&#xA;&#xA;UK government reports&#xA;– NAO: &#34;The sustainability of government IT&#34; (January 2025)&#xA;https://www.nao.org.uk/reports/local-government-financial-sustainability-2025/&#xA;– NHS Digital: Infrastructure assessment reports&#xA;https://www.bma.org.uk/advice-and-support/nhs-delivery-and-workforce/the-future/building-the-future-healthcare-infras&#xA;&#xA;COBOL and mainframes&#xA;– Reuters: &#34;Banks scramble to fix old systems&#34; (Commonwealth Bank Australia cost analysis)&#xA;https://www.reuters.com/article/technology/banks-scramble-to-fix-old-systems-as-it-cowboys-ride-into-sunset-idUSKBN17C0CN/&#xA;– IBM: &#34;COBOL Modernization&#34;&#xA;https://www.ibm.com/think/topics/cobol-modernization&#xA;&#xA;Legacy virtualisation&#xA;– Stromasys: &#34;What are legacy systems&#34;&#xA;https://www.stromasys.com/resources/what-are-legacy-systems-challenges-benefits/&#xA;– Proxmox Forums: discussions on legacy system virtualisation&#xA;https://forum.proxmox.com/tags/legacy/&#xA;&#xA;Sector analysis&#xA;– Gartner: 7Rs of Application Modernization&#xA;https://www.techtarget.com/searchCloudComputing/tip/Use-the-7-Rs-to-develop-an-app-modernization-strategy&#xA;– Deloitte: Mainframe workforce decline study&#xA;https://www.deloitte.com/us/en/insights/topics/technology-management/tech-trends/2023/future-mainframe-technology-latest-trends.html&#xA;– WSJ: How AI Can Rev Up Mainframe Modernization&#xA;https://deloitte.wsj.com/cio/how-ai-can-rev-up-mainframe-modernization-2e3c1c4a&#xA;&#xA;Case studies: failures&#xA;– Computer Weekly: &#34;What went wrong with the National Programme for IT&#34;&#xA;https://www.computerweekly.com/opinion/Six-reasons-why-the-NHS-National-Programme-for-IT-failed&#xA;– NAO: Post-implementation review 
NPfIT&#xA;https://www.nao.org.uk/reports/review-of-the-final-benefits-statement-for-programmes-previously-managed-under-the-national-programme-for-it-in-the-nhs/&#xA;&#xA;Security&#xA;– WannaCry incident reports&#xA;https://any.run/malware-trends/wannacry/&#xA;– NHS Windows XP audit findings (2019)&#xA;https://www.verdict.co.uk/windows-xp-nhs/&#xA;&#xA;a href=&#34;https://remark.as/p/jolek78/legacy-systems-problem-or-resource&#34;Discuss.../a&#xA;&#xA;#LegacySystems #Sysadmin #COBOL #Solaris #Linux #PublicSector #DigitalTransformation #Mainframe #OpenSource #Infrastructure&#xA;&#xA;div class=&#34;center&#34;a href=&#34;https://fosstodon.org/@jolek78&#34;Mastodon/a :: a href=&#34;https://pixelfed.social/jolek78&#34;Pixelfed/a :: a href=&#34;mailto:jolek78@posteo.net&#34;Email/a  :: a href=&#34;https://app.cinny.in/login/envs.net/#/user/@jolek78:envs.net&#34; Element/a/div]]&gt;</description>
      <content:encoded><![CDATA[<p>Tuesday morning, 9 AM. After a routine patching session, a long-standing ZFS storage system running Solaris 11 suddenly stops talking to its Windows 10 clients. The culprit is the usual, maddening SMB dialect dance: Windows pushes for SMB 3 on security grounds, while Solaris&#39;s native service struggles through the negotiation. Two days of banging my head against the wall – hard – and then the discovery: OpenCSW. A community that maintains up-to-date packages for Solaris where the vendor threw in the towel long ago. Updated libraries, sorted dependencies, problem solved. There are volunteers out there patching critical systems better than the official vendor ever did. Worth knowing.</p>
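<p>For anyone fighting the same negotiation dance, the relevant knobs look roughly like this. On Solaris 11, the kernel SMB server exposes its dialect ceiling as a <code>sharectl</code> property; on the Windows side, PowerShell can confirm what a session actually negotiated. This is a sketch, assuming the <code>max_protocol</code> property introduced in Solaris 11.3 – verify names and values on your release before touching anything:</p>

```shell
# --- Solaris 11 side: inspect and raise the SMB dialect ceiling ---
# (property name assumes Solaris 11.3+; check `sharectl get smb` first)
sharectl get smb                      # list current SMB server properties
sharectl set -p max_protocol=3.0 smb  # allow SMB 3.0 to be negotiated
svcadm restart smb/server             # restart the kernel SMB service

# --- Windows 10 side: confirm what actually got negotiated ---
# In PowerShell, after mounting the share:
#   Get-SmbConnection | Select-Object ServerName, Dialect
```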

<p>Same film, next scene.</p>

<p>Friday afternoon – because critical migrations always happen out of hours. I&#39;m migrating a system from Red Hat 7 to Red Hat 9. Why? To support the new version of Charon-SSP, the Stromasys emulator that recreates SPARC hardware on x86. All of this to keep alive a virtual machine running Solaris 9, an operating system released in 2002 that went end-of-life in 2014. It&#39;s a layered structure, each level propping up the one below – one of those houses of cards where you can&#39;t quite work out how it stays standing.</p>

<p>Welcome to the world of legacy systems. A world where “modernising” often means finding increasingly creative ways to change nothing at all, and where communities and old-school sysadmins are the ones guarding infrastructure that corporations abandoned long ago. Try asking Oracle for Solaris support: they&#39;ll laugh in your face.</p>

<h2 id="the-numbers" id="the-numbers">The numbers</h2>

<p>In January 2025, the UK government published a report that should have rattled a few chairs at Westminster. Twenty-eight percent of central government IT systems are classified as legacy – up from 26% in 2023. Estimated productivity losses? Forty-five billion pounds. In 2024, the NHS recorded 123 critical IT system crashes. One hundred and twenty-three.</p>

<p>But wait, because the numbers get even more interesting when you look at the banking sector. COBOL – a programming language dating back to 1959 – still processes 95% of global ATM transactions, 43% of the world&#39;s banking systems, and around 3 trillion dollars of commerce every day. Every day. It&#39;s estimated there are still 220 billion lines of COBOL code in production.</p>

<p>And Windows XP? The one Microsoft stopped supporting in 2014? Today, 1-2% of internet-connected devices still run it. Sounds small until you realise we&#39;re talking about millions of machines. And not your grandad&#39;s PC: we&#39;re talking about MRI scanners in hospitals, industrial control systems, bank ATMs. Critical devices that can&#39;t be updated because the software controlling them only runs on XP, and re-certifying the entire system would cost more than building a new one.</p>

<p>Remember WannaCry in 2017? The ransomware that paralysed 75,000 computers in 99 countries? The NHS was devastated. And do you know how many Windows XP machines the NHS had in 2019 – two years after the attack, five years after end-of-support? 2,300.</p>

<p>At this point in the story one might say “right, the problem is clear: legacy systems are dangerous and need replacing.” And that would be the easy narrative – the one that consultants selling “digital transformation” love, and vendors wanting to sell licences love. What if I told you that a Solaris 11 system, properly isolated in a VLAN, is significantly more stable and secure than a shiny new Ubuntu 24.04 LTS?</p>

<p>Reality, as always, is more complicated.</p>

<h2 id="problems-upon-problems" id="problems-upon-problems">Problems upon problems</h2>

<p>Here&#39;s the fundamental issue: we use the word “legacy” as if it meant one thing, when it actually covers at least three completely different situations.</p>

<p><strong>Type 1: Unavoidable legacy</strong>
Solaris 9 on SPARC hardware controlling industrial machinery. Windows XP on MRI scanners. Systems where hardware and software are inseparable, where an upgrade would require replacing equipment worth millions, where re-certification for medical or industrial use would take years and fortunes. These systems are legacy out of necessity, not negligence. There&#39;s no fault here. There&#39;s only the reality of a technological ecosystem where certain devices have 20-30 year lifespans and the software controlling them can&#39;t be changed without changing everything else.</p>

<p><strong>Type 2: Avoidable legacy</strong>
CentOS 7, for instance. End of support: 30 June 2024. Available alternatives: AlmaLinux, Rocky Linux, migration to RHEL. Cost of migration? Economically: it depends. In time, resources, learning: enormous. How many CentOS 7 systems are still in production today? Too many. Why? Because nobody wants to pay RHEL licences, because “we&#39;ll do it next quarter,” because “there are other important things to deal with,” because “if it ain&#39;t broke, don&#39;t fix it.” This is legacy by choice – or rather, by inertia. It&#39;s an organisational decision, not a technical one.</p>

<p><strong>Type 3: Non-legacy perceived as legacy</strong>
Take COBOL on modern IBM mainframes. Today&#39;s mainframes aren&#39;t the ones from the 1970s – they&#39;re immensely powerful machines, with dedicated processors, hardware security, 99.999% uptime. The COBOL running on them is the same as ever, but the underlying infrastructure is current. Is the code legacy, or the platform? And if the platform is modern, can we still call it legacy? The distinction is fundamental because it determines the strategy. A Type 1 system needs to be isolated and protected. A Type 2 system needs to be migrated. A Type 3 system needs to be left alone. Try explaining that to a CTO who just finished reading a Gartner report on “legacy modernisation.”</p>

<p>From a thread on TheLayoff:</p>

<blockquote><p>“FWIW, there&#39;s a very good chance that your electronic footprint on any given day has passed through a piece of SPARC equipment running Solaris, and that will continue to happen for a good portion of your lifetime.”</p></blockquote>

<p>Would you believe me if I told you I&#39;ve seen original BSD systems with eleven years of uptime?</p>

<h2 id="the-real-problem-isn-t-the-machines" id="the-real-problem-isn-t-the-machines">The real problem isn&#39;t the machines</h2>

<p>Here we get to the heart of the matter. And the answer will surprise you: the real problem with legacy systems isn&#39;t technological. It&#39;s human.</p>

<p>Let&#39;s talk about the “COBOL Cowboys” – retired programmers called back on consulting contracts when something breaks. They&#39;re the last generation that knows how those systems actually work. When they leave, they take decades of undocumented knowledge with them. According to Deloitte, companies have seen a 23% decline in mainframe workforce over the last five years, with 63% of those positions left unfilled. It&#39;s not that there&#39;s no money to hire – it&#39;s that there&#39;s nobody to hire. Young developers don&#39;t want to learn COBOL. It&#39;s “unsexy.” It&#39;s “archaic.” It&#39;s “boomer stuff.”</p>

<p>From ComputerWeekly:</p>

<blockquote><p>“The retirement of the generation of experts who possess in-depth knowledge of Cobol systems is leading to a severe knowledge shortage. They have knowledge not only of the Cobol programming language, but also of the specific systems they have worked on and built over the years” – Tijs van der Storm, CWI/University of Groningen</p></blockquote>

<p>And so we find ourselves in a paradoxical situation: systems processing trillions of dollars a day, managed by people who might die of old age before anyone learns to replace them. Knowledge transfer never happened. Documentation – where it exists – is outdated, incomplete, written in a language nobody understands anymore. And every year that passes, the gap widens.</p>

<p>This is the real legacy problem. Not the systems. The people.</p>

<h2 id="when-modernisation-fails-spoiler-often" id="when-modernisation-fails-spoiler-often">When modernisation fails (spoiler: often)</h2>

<p>There&#39;s a story that people in the UK know well, but that strangely never comes up when “digital transformation” is being discussed. It&#39;s called the National Programme for IT, or NPfIT.</p>

<p>Launched in 2002, it was the largest public sector IT project in British history. The goal? Modernise the entire NHS IT infrastructure. Initial budget: 6 billion pounds. Planned completion: 2010.</p>

<p>In 2011, after nine years of delays, exploding costs, vendors abandoning the project, and a system that simply didn&#39;t work, the UK government announced the dismantling of NPfIT. Final estimated cost: over 10 billion pounds. For a system that was never completed.</p>

<p>What went wrong? Practically everything. Top-down decisions made by politicians who didn&#39;t understand technology. Rigid contracts with vendors who didn&#39;t understand the NHS. Resistance from medical staff who hadn&#39;t been consulted. Continuously shifting requirements. Impossible integrations with existing systems.</p>

<p>From TechMonitor:</p>

<blockquote><p>“A lack of digital and procurement capability within government has led to wasted expenditure and lack of progress on major digital transformation programmes.”</p></blockquote>

<p>The lesson? “Modernising” is not automatically better than “maintaining.” Sometimes the legacy system that works is preferable to the modern system that never will. But apparently we haven&#39;t learned that lesson, because the dominant narrative remains the same: legacy = bad, modern = good. And consultants keep selling the shiny new thing.</p>

<h2 id="strategies-that-actually-work" id="strategies-that-actually-work">Strategies that actually work</h2>

<p>TL;DR: There is no single solution. There&#39;s a matrix of options ranging from virtualisation to isolation, from refactoring to API wrapping. The choice depends on the type of legacy, the budget, and the acceptable level of risk.</p>

<p><strong>The Gartner 7Rs (yes, they have a name for everything):</strong></p>
<ol><li><strong>Retire</strong> – Switch it off. Only works if nobody&#39;s actually using it.</li>
<li><strong>Retain</strong> – Keep it as is. Sometimes the best choice.</li>
<li><strong>Relocate</strong> – Move it to new infrastructure without changes.</li>
<li><strong>Rehost</strong> – “Lift and shift” to cloud. Changes the hardware, not the software.</li>
<li><strong>Replatform</strong> – Minimal changes to run on a modern platform.</li>
<li><strong>Refactor</strong> – Rewrite parts of the code while maintaining functionality.</li>
<li><strong>Rearchitect</strong> – Completely redesign. The riskiest and most expensive.</li></ol>

<p><strong>Virtualisation and emulation</strong>
For systems on proprietary architectures (SPARC, VAX, Alpha, PA-RISC), solutions like Stromasys Charon emulate the original hardware on x86-64 platforms. The operating system and software don&#39;t change – only the iron underneath does. For legacy x86 systems (Windows XP, Server 2003, old Linux), standard virtualisation (Proxmox, VMware, KVM) allows you to “freeze” the environment and keep it running indefinitely. I&#39;ve seen Proxmox setups running Windows 3.11. I&#39;m not joking.</p>
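<p>To make “freeze the environment” concrete: in Proxmox, a guest is just a short config file. The sketch below is purely illustrative – the VM ID, storage name and values are invented rather than taken from a real deployment – but it shows how little it takes to keep an ancient x86 environment bootable:</p>

```
# Hypothetical /etc/pve/qemu-server/311.conf for a frozen Windows 3.11 guest
name: win311-frozen
ostype: other          # no modern guest-OS profile applies to something this old
memory: 64             # 64 MB is already generous for Windows 3.11
sockets: 1
cores: 1
ide0: local-lvm:vm-311-disk-0,size=1G          # small IDE disk; VirtIO has no drivers here
net0: rtl8139=DE:AD:BE:EF:00:01,bridge=vmbr0   # emulated NIC that period drivers recognise
boot: order=ide0
```

<p>The point is not the exact values but the shape: once the guest boots, nothing inside it ever needs to change again.</p>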

<p><strong>Network isolation</strong>
If a system can&#39;t be patched, it can at least be isolated. Dedicated VLANs, restrictive firewalls, air-gap where possible. It doesn&#39;t fix the problem, but it limits the impact in case of compromise.</p>
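<p>As a sketch of what “dedicated VLAN plus restrictive firewall” can look like on a Linux router in front of such a system – every interface name and address below is invented, and a real ruleset would be driven by an inventory of what the legacy box genuinely needs to reach:</p>

```
# Hypothetical nftables policy: the legacy VLAN may answer requests,
# but may initiate almost nothing of its own.
table inet legacy_isolation {
    chain forward {
        type filter hook forward priority filter; policy drop;

        # return traffic for connections that were already allowed
        ct state established,related accept

        # clients may reach the legacy box's one exposed service (SMB here)
        iifname "vlan10" oifname "vlan30" ip daddr 10.0.30.5 tcp dport 445 accept

        # the legacy box itself may reach only NTP on the local time server
        iifname "vlan30" ip daddr 10.0.10.1 udp dport 123 accept

        # everything else leaving the legacy VLAN is logged and dropped
        iifname "vlan30" log prefix "legacy-drop " drop
    }
}
```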

<p><strong>API wrapping</strong>
Put a modern REST layer in front of a legacy system. The mainframe keeps doing what it knows how to do; the outside world talks to the API. This is the strategy many banks use to expose COBOL functionality to mobile applications.</p>
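<p>A toy illustration of the idea, not any bank&#39;s actual interface. Suppose the legacy side only understands fixed-width records, COBOL-copybook style; the wrapper&#39;s whole job is then translating between JSON and that layout. The field names and widths below are invented:</p>

```python
# Invented fixed-width layout, in the spirit of a COBOL copybook:
#   account (10 chars), amount in cents (12 digits), currency (3 chars)
FIELDS = [("account", 10), ("amount_cents", 12), ("currency", 3)]

def to_legacy_record(payload: dict) -> str:
    """Marshal a dict (parsed JSON) into the fixed-width record the legacy side expects."""
    parts = []
    for name, width in FIELDS:
        value = str(payload[name])
        if len(value) > width:
            raise ValueError(f"{name} wider than {width} chars")
        # numeric fields zero-padded on the left, text fields space-padded on the right
        parts.append(value.rjust(width, "0") if value.isdigit() else value.ljust(width))
    return "".join(parts)

def from_legacy_record(record: str) -> dict:
    """Unmarshal a fixed-width record back into a dict for the REST response."""
    out, pos = {}, 0
    for name, width in FIELDS:
        raw = record[pos:pos + width].strip()
        out[name] = int(raw) if name == "amount_cents" else raw
        pos += width
    return out

payload = {"account": "GB29NWBK12", "amount_cents": 1999, "currency": "EUR"}
record = to_legacy_record(payload)
print(record)                                  # "GB29NWBK12000000001999EUR"
print(from_legacy_record(record) == payload)   # True
```

<p>In production the same pair of functions sits behind whatever HTTP framework is already in use; the mainframe never needs to learn that JSON exists.</p>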

<h2 id="the-public-sector-a-special-case" id="the-public-sector-a-special-case">The public sector: a special case</h2>

<p>Those who work in the public sector know that the dynamics differ from the private sector in ways that make the legacy problem even more complex.</p>

<p><strong>Multi-year budgets.</strong> You can&#39;t decide in January to modernise a system and have the money by March. Funding cycles are long, rigid, subject to political priorities that change with every election.</p>

<p><strong>Procurement.</strong> Buying software in the public sector is a bureaucratic nightmare. Tenders, compliance requirements, impact assessments, GDPR, accessibility. A purchase that takes a week in the private sector takes months here.</p>

<p><strong>Compliance.</strong> Systems handling health, education, or tax data are subject to stringent regulatory requirements. You can&#39;t simply “migrate to the cloud” – you have to demonstrate that the cloud complies with an endless list of standards.</p>

<p><strong>Service continuity</strong> (which in my view is the core problem). If a private company&#39;s system goes down for a day, they lose money. If a system managing national exams, or medical prescriptions, or pension payments goes down, the consequences fall on real people with no alternatives. The risk of downtime during a migration is often simply unacceptable.</p>

<p>And then there&#39;s the political dimension. Every government wants to announce its own “digital revolution.” Nobody wants to inherit the previous government&#39;s problems. And so projects get started, abandoned, restarted, re-abandoned, in an endless cycle of waste.</p>

<p>NPfIT wasn&#39;t an exception. It was the rule.</p>

<h2 id="the-uncomfortable-question" id="the-uncomfortable-question">The uncomfortable question</h2>

<p>At this point, the question nobody wants to ask is this: what if some legacy systems were simply… better? Not better in an absolute sense, but better for their specific purpose?</p>

<p>Let me tell you something. I worked for years in environments dealing with large-scale Oracle infrastructure – the company that sells “cloud transformation” and “modern infrastructure” to half the world. And among other things, you know what got managed day to day? Old ZFS storage. Stuff that, on paper, should have been “modernised” years ago. Those machines had been running since before Docker existed, before Kubernetes, before “cloud native” became a term. And they worked. Quietly. Without drama. Nobody was in any hurry to replace them. Why would they be? In pursuit of what advantage, exactly?</p>

<p>The COBOL processing bank transactions has been optimised for sixty years. Every bug has been found and fixed. Every edge case has been handled. Every possible scenario has been tested in production billions of times. It&#39;s code that has achieved a kind of perfection through Darwinian evolution. Rewriting it in Python would mean starting from scratch. New bugs. New untested scenarios. Years of instability before reaching the same level of reliability.</p>

<p>And in the meantime? In the meantime, the legacy system keeps working. There&#39;s a reason banks aren&#39;t in a rush to abandon mainframes. It&#39;s not ignorance. It&#39;s not laziness. It&#39;s that they&#39;ve done the maths and understood that the risk of the new outweighs the cost of the old. And the old administrators have retired. But this is an uncomfortable truth. It doesn&#39;t sell well in PowerPoint presentations. It doesn&#39;t generate consulting contracts. It doesn&#39;t make tech headlines.</p>

<p>And so we keep talking about “modernisation” as if it were automatically a good thing. As if “new” meant “better.” As if technology had a moral direction.</p>

<h2 id="so-what" id="so-what">So what?</h2>

<p>Legacy doesn&#39;t mean old – it means abandoned. The problem is never technical – it&#39;s always organisational. And “modernising” is not automatically better than “maintaining.”</p>

<p>If there&#39;s one lesson, it&#39;s this: be suspicious of anyone with simple answers to complex problems.</p>

<p>Every time I hear some manager say “we need to automate everything with AI,” I think about the software pachyderms holding up half of critical infrastructure. I think about the time it would take to train a model on COBOL written in 1987 with no documentation. I think about how long it would take to migrate a Java 1.7 system running on Solaris 9. I think about the hours spent reverse-engineering platforms still running Lotus Notes. I think about the costs. I think about the risks. And then I think that those same managers don&#39;t have the budget to hire juniors willing – and why should they be, when the IT world is moving in a completely different direction – to learn systems the industry wrote off at least thirty years ago. And I laugh. Bitterly, but I laugh. Then I take a few drops of CBD to calm myself down.</p>

<p>Before talking about artificial intelligence – and those who know me know I&#39;m not against AI at all – perhaps we should make sure that human intelligence doesn&#39;t retire, taking years of undocumented knowledge with it. But that, evidently, is a less sexy priority to put on the slides.</p>

<h2 id="sources-and-further-reading" id="sources-and-further-reading">Sources and further reading</h2>

<p><strong>UK government reports</strong>
– NAO: “The sustainability of government IT” (January 2025)
<a href="https://www.nao.org.uk/reports/local-government-financial-sustainability-2025/">https://www.nao.org.uk/reports/local-government-financial-sustainability-2025/</a>
– NHS Digital: Infrastructure assessment reports
<a href="https://www.bma.org.uk/advice-and-support/nhs-delivery-and-workforce/the-future/building-the-future-healthcare-infras">https://www.bma.org.uk/advice-and-support/nhs-delivery-and-workforce/the-future/building-the-future-healthcare-infras</a></p>

<p><strong>COBOL and mainframes</strong>
– Reuters: “Banks scramble to fix old systems” (Commonwealth Bank Australia cost analysis)
<a href="https://www.reuters.com/article/technology/banks-scramble-to-fix-old-systems-as-it-cowboys-ride-into-sunset-idUSKBN17C0CN/">https://www.reuters.com/article/technology/banks-scramble-to-fix-old-systems-as-it-cowboys-ride-into-sunset-idUSKBN17C0CN/</a>
– IBM: “COBOL Modernization”
<a href="https://www.ibm.com/think/topics/cobol-modernization">https://www.ibm.com/think/topics/cobol-modernization</a></p>

<p><strong>Legacy virtualisation</strong>
– Stromasys: “What are legacy systems”
<a href="https://www.stromasys.com/resources/what-are-legacy-systems-challenges-benefits/">https://www.stromasys.com/resources/what-are-legacy-systems-challenges-benefits/</a>
– Proxmox Forums: discussions on legacy system virtualisation
<a href="https://forum.proxmox.com/tags/legacy/">https://forum.proxmox.com/tags/legacy/</a></p>

<p><strong>Sector analysis</strong>
– Gartner: 7Rs of Application Modernization
<a href="https://www.techtarget.com/searchCloudComputing/tip/Use-the-7-Rs-to-develop-an-app-modernization-strategy">https://www.techtarget.com/searchCloudComputing/tip/Use-the-7-Rs-to-develop-an-app-modernization-strategy</a>
– Deloitte: Mainframe workforce decline study
<a href="https://www.deloitte.com/us/en/insights/topics/technology-management/tech-trends/2023/future-mainframe-technology-latest-trends.html">https://www.deloitte.com/us/en/insights/topics/technology-management/tech-trends/2023/future-mainframe-technology-latest-trends.html</a>
– WSJ: How AI Can Rev Up Mainframe Modernization
<a href="https://deloitte.wsj.com/cio/how-ai-can-rev-up-mainframe-modernization-2e3c1c4a">https://deloitte.wsj.com/cio/how-ai-can-rev-up-mainframe-modernization-2e3c1c4a</a></p>

<p><strong>Case studies: failures</strong>
– Computer Weekly: “What went wrong with the National Programme for IT”
<a href="https://www.computerweekly.com/opinion/Six-reasons-why-the-NHS-National-Programme-for-IT-failed">https://www.computerweekly.com/opinion/Six-reasons-why-the-NHS-National-Programme-for-IT-failed</a>
– NAO: Post-implementation review NPfIT
<a href="https://www.nao.org.uk/reports/review-of-the-final-benefits-statement-for-programmes-previously-managed-under-the-national-programme-for-it-in-the-nhs/">https://www.nao.org.uk/reports/review-of-the-final-benefits-statement-for-programmes-previously-managed-under-the-national-programme-for-it-in-the-nhs/</a></p>

<p><strong>Security</strong>
– WannaCry incident reports
<a href="https://any.run/malware-trends/wannacry/">https://any.run/malware-trends/wannacry/</a>
– NHS Windows XP audit findings (2019)
<a href="https://www.verdict.co.uk/windows-xp-nhs/">https://www.verdict.co.uk/windows-xp-nhs/</a></p>

<p><a href="https://remark.as/p/jolek78/legacy-systems-problem-or-resource">Discuss...</a></p>

<p><a href="https://jolek78.writeas.com/tag:LegacySystems" class="hashtag"><span>#</span><span class="p-category">LegacySystems</span></a> <a href="https://jolek78.writeas.com/tag:Sysadmin" class="hashtag"><span>#</span><span class="p-category">Sysadmin</span></a> <a href="https://jolek78.writeas.com/tag:COBOL" class="hashtag"><span>#</span><span class="p-category">COBOL</span></a> <a href="https://jolek78.writeas.com/tag:Solaris" class="hashtag"><span>#</span><span class="p-category">Solaris</span></a> <a href="https://jolek78.writeas.com/tag:Linux" class="hashtag"><span>#</span><span class="p-category">Linux</span></a> <a href="https://jolek78.writeas.com/tag:PublicSector" class="hashtag"><span>#</span><span class="p-category">PublicSector</span></a> <a href="https://jolek78.writeas.com/tag:DigitalTransformation" class="hashtag"><span>#</span><span class="p-category">DigitalTransformation</span></a> <a href="https://jolek78.writeas.com/tag:Mainframe" class="hashtag"><span>#</span><span class="p-category">Mainframe</span></a> <a href="https://jolek78.writeas.com/tag:OpenSource" class="hashtag"><span>#</span><span class="p-category">OpenSource</span></a> <a href="https://jolek78.writeas.com/tag:Infrastructure" class="hashtag"><span>#</span><span class="p-category">Infrastructure</span></a></p>

<div class="center"><a href="https://fosstodon.org/@jolek78">Mastodon</a> :: <a href="https://pixelfed.social/jolek78">Pixelfed</a> :: <a href="mailto:jolek78@posteo.net">Email</a>  :: <a href="https://app.cinny.in/login/envs.net/#/user/@jolek78:envs.net"> Element</a></div>
]]></content:encoded>
      <guid>https://jolek78.writeas.com/legacy-systems-problem-or-resource</guid>
      <pubDate>Wed, 28 Jan 2026 16:30:00 +0000</pubDate>
    </item>
    <item>
      <title>Iran 2026: 17 years later, same mistake</title>
      <link>https://jolek78.writeas.com/iran-2026-17-years-later-same-mistake?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[It was a Saturday in 2015, perhaps 2016. I was still &#34;normal&#34; back then, still convinced that technology was inherently positive, potentially revolutionary, still naive enough to believe that the internet liberated by definition. I was browsing books at Waterstones on Sauchiehall Street in Glasgow—one of my little guilty pleasures since I landed in Scotland—when I came across &#34;The Net Delusion: The Dark Side of Internet Freedom&#34; by Evgeny Morozov. I picked up the book, went downstairs, sat in the in-house café and started reading. And I went into crisis. His thesis demolished, piece by piece, the narrative of the &#34;Twitter Revolution&#34; of 2009 in Iran. In the book, Morozov cited an analysis by Golnaz Esfandiari, an Iranian journalist for Foreign Policy, who had done something simple but, these days, almost revolutionary: journalism (if you&#39;re laughing at this point, you&#39;re good people...). She had looked at where the tweets with #iranelection actually came from during the 2009 protests. And the answer? From the West. Not from Iran. Wait, what? Yes, exactly. It was theater. Western self-celebration masquerading as solidarity. &#xA;&#xA;!--more--&#xA;&#xA;I remember closing the book with an unpleasant feeling. Morozov doesn&#39;t give you the satisfaction of choosing a side in history. He forces you to see that technology amplifies everything—the good and the bad, freedom and control. And that authoritarian regimes have a very steep learning curve, unfortunately. Fifteen years later, the young people in Tehran are trying again: they&#39;re taking to the streets trying to overthrow the regime. In the West, I thought we had learned our lesson, that we would stop projecting our technological fantasies onto real protest movements. Obviously, I was wrong.&#xA;&#xA;Iran 2009, or when Twitter (didn&#39;t) overthrow a regime&#xA;To understand why Iran 2026 is déjà-vu, we need to go back 17 years. June 2009. 
Mahmoud Ahmadinejad is re-elected president of Iran with 63% of the vote. The opposition—led by Hossein Mousavi—cries fraud. Millions take to the streets. Tehran fills with green. It&#39;s the explosion of the &#34;Green Movement.&#34; And here begins the narrative that would define a decade. CNN headlines: &#34;Iran&#39;s Twitter Revolution.&#34; Time Magazine puts Twitter on the cover with the Iranian flag. Andrew Sullivan—a famous blogger at the time—obsessively tweets using #iranelection and is called &#34;the voice of the Iranian people.&#34; Western media cite tweets as if they were dispatches from a war zone. The story was beautiful: young Iranians, tech-savvy and hungry for democracy, were using Twitter to organize protests, coordinate demonstrations, evade regime censorship. Facebook to plan, Twitter to coordinate, YouTube to document. It was the digital revolution overthrowing a dictatorship. Technology defeating repression. The good guys defeating the bad guys. The US State Department was so convinced of Twitter&#39;s importance that Jared Cohen—an official—sent an official email to Twitter asking them to &#34;delay scheduled maintenance&#34; so as not to interrupt the Iranian protests. Twitter agreed.&#xA;&#xA;Then came Golnaz Esfandiari, an Iranian journalist for Radio Free Europe/Radio Liberty. Where did the tweets actually come from? In June 2010, a year after the protests, Esfandiari published an article in Foreign Policy titled &#34;The Twitter Devolution.&#34; She wrote:&#xA;&#xA;  &#34;Western journalists who couldn&#39;t reach—or didn&#39;t bother reaching?—people on the ground in Iran simply scrolled through the English-language tweets posted with tag #iranelection. Through it all, no one seemed to wonder why people trying to coordinate protests in Iran would be writing in any language other than Farsi.&#34;&#xA;&#xA;Question: Why would Iranians organizing protests in Iran write in English? 
Esfandiari had identified the main Twitter hubs commenting on the Tehran protests and discovered something embarrassing: one was in the United States, one in Turkey, one in Switzerland. The latter&#39;s profile stated they &#34;specialized in urging people to take to the streets.&#34; She interviewed Mehdi Yahyanejad, manager of Balatarin (one of the most popular Farsi-language websites) who said:&#xA;&#xA;  &#34;Twitter&#39;s impact inside Iran is nil [...] Here [in the United States], there is lots of buzz. But once you look, you see most of it are Americans tweeting among themselves.&#34;&#xA;&#xA;Iranians—the real ones, in the streets—used SMS, phone calls, word of mouth. Traditional methods. Twitter was mainly useful for one thing: letting the world know what was happening. Documentation, not organization. But the numbers were even worse. In his 2011 book, Morozov cited data that made everything even clearer: only 19,235 Twitter accounts registered in Iran (0.027% of the population) on the eve of the 2009 elections. And many Green Movement sympathizers had changed their Twitter location to &#34;Tehran&#34; to confuse authorities, making it nearly impossible to distinguish whether people tweeting from Iran were in Tehran or, say, Los Angeles. An Al-Jazeera analysis cited by Morozov clarified that fact-checking during the protests had confirmed only 60 active Twitter accounts in Tehran. Sixty. And when Iranian authorities tightened their grip on online communications, that number dropped to six.&#xA;&#xA;Vahid Online, a prominent Iranian blogger who was in Tehran during the protests, dismantled the Twitter Revolution thesis even more directly:&#xA;&#xA;  &#34;Twitter never became very popular in Iran. 
[But] because the world was watching Iran with such [great interest] during those days, it led many to believe falsely that Iranian people were also getting their news through Twitter.&#34;&#xA;&#xA;Morozov put it with a perfect metaphor:&#xA;&#xA;  &#34;If a tree falls in the forest and everyone tweets about it, it may not be the tweets that moved it.&#34;&#xA;&#xA;At this point in the story one could say &#34;okay, the protests happened in Iran and the West encouraged and celebrated them. What&#39;s wrong with that?&#34; Nothing, except that the ayatollah regime learned the lesson. This was the part of Morozov&#39;s thesis that had really shaken me. In plain terms, while the West was self-celebrating the &#34;Twitter Revolution,&#34; the Iranian government was taking notes. They understood that social media could be more useful to them than to activists. They could track who posted, they could identify protest leaders, they could infiltrate groups, they could use data to arrest, torture, kill. The 2009 protests were brutally suppressed. The Green Movement failed. And the regime emerged stronger, more experienced, more prepared to use technology as a weapon of control. But we were the good guys helping, right? Esfandiari and Morozov tried to tell us we were doing everything wrong, that we were projecting our fantasies and underestimating authoritarian regimes. Did we listen? Evidently not.&#xA;&#xA;Iran 2026: same film, different cast&#xA;December 28, 2025. Protests begin in Iran. Economic crisis, the rial—Iran&#39;s currency—collapsed to 1.4 million per dollar, 40% inflation, UN sanctions reimposed in September, the entire Iranian &#34;Axis of Resistance&#34; in tatters after the 12-day war with Israel in June. The streets fill. First Tehran, then the whole country. 31 provinces. Millions of people. And of course, social media explodes. Twitter/X fills with videos, slogans, messages of solidarity. Western media cite tweets as primary sources. 
Reza Pahlavi—the exiled heir of the Shah deposed in 1979—calls for protests from his social accounts. Persian TV channels in exile (Manoto, Iran International) broadcast 24/7. The US State Department operates a Persian-language Twitter account (@USABehFarsi) constantly posting messages of support. Repetita (non) iuvant. It&#39;s 2009 again. Same narrative, same enthusiasm, same conviction that this time—this time for real—Twitter and social media will overthrow the regime. Then, on January 8, 2026, on the twelfth day of protests, the Iranian regime does something interesting. It shuts off the internet. Completely.&#xA;&#xA;And I, being a good nerd who doesn&#39;t sleep, lives at night, and does things better left unsaid—sorry, the statute of limitations hasn&#39;t expired yet—asked myself: wait. If the internet is off in Iran, where is all this content coming from? Who is telling this story? And above all: are we making the same mistake as 2009 again? So, browsing here and there, I came across a long article by Shahram Akbarzadeh—professor of &#34;Middle East &amp; Central Asian Politics&#34; at Deakin University—titled &#34;The web of Big Lies: state-sponsored disinformation in Iran.&#34; And I started reading.&#xA;&#xA;Before moving forward: stop&#xA;Let&#39;s make one thing clear right away, because I already know someone will misunderstand: I stand in solidarity with those protesting in Iran. Completely. A theocratic regime that kills protesters—estimates range from 44 to 20,000 dead, impossible to know for certain precisely because of the blackout—deserves nothing but condemnation. The reasons for the protests are real, legitimate, understandable. Devastating economic crisis, systematic repression, 47 years of religious dictatorship. Those who take to the streets risk their lives. And they do.&#xA;&#xA;But solidarity doesn&#39;t mean suspending critical thinking. It doesn&#39;t mean uncritically accepting every narrative being sold to us. 
It doesn&#39;t mean ignoring who is constructing this narrative, how and why. On the contrary. If we truly care about the Iranians who are protesting, we have a duty to understand what&#39;s really happening. Because wrong narratives have real consequences. And the consequences are always paid by them, not by us tweeting from the couch. So: solidarity yes, but also questions, if no one minds.&#xA;&#xA;Technical Box: the evolution of digital censorship&#xA;TL;DR: Iran didn&#39;t simply &#34;pull the plug.&#34; It implemented the most sophisticated layered censorship system ever seen, which leaves infrastructure apparently normal while completely isolating the population. It&#39;s precision censorship, not sledgehammer censorship.&#xA;&#xA;8:30 PM IRST (5:00 PM UTC). NetBlocks, the organization that monitors global connectivity, registers a sudden collapse: Iran goes from 100% to ~3% connectivity in a few hours. Not just mobile, also landlines, also phones. Calling into Iran from abroad? Impossible. Journalists trying from Dubai can&#39;t connect. Families abroad can&#39;t reach relatives in Tehran. Total blackout. But there&#39;s something curious. BGP routes—the paths that make the internet work—remain visible. Iranian servers continue responding to pings. From outside, the infrastructure looks normal. Cloudflare, IODA (Georgia Tech), all traditional monitoring systems see Iran still &#34;online.&#34; Yet user traffic has dropped 97%. How is this possible? To understand what happened on January 8, we need a step back. Iran has developed three generations of shutdowns, each more sophisticated than the last:&#xA;&#xA;2019—Brute Force: During the November 2019 protests (which caused ~1,500 deaths), the regime simply removed BGP routes. It&#39;s like ripping out cables: crude, visible, it took 24+ hours to implement because every ISP had to do it manually. 
Economically devastating—banks stopped, the economy collapsed for six days.&#xA;&#xA;2022—&#34;Digital Curfew&#34;: During the Mahsa Amini protests, selective targeting. They shut down cell towers in specific areas, slowed internet during protest hours (4:00-10:00 PM), blocked specific apps (WhatsApp, Instagram). More refined, less expensive.&#xA;&#xA;2025-2026—&#34;Stealth Blackout&#34;: The final form. And here it becomes technically fascinating.&#xA;&#xA;The current system operates at a single national chokepoint—all Iranian ISPs converge at a few state-controlled exit points. There, a layered system filters everything:&#xA;&#xA;Layer 1—DNS Poisoning: Any DNS query for foreign domains gets redirected to 10.10.34.34—a private IP serving a generic block page. You search for google.com? You get an Iranian server saying &#34;domain not found.&#34;&#xA;&#xA;Layer 2—Protocol Whitelisting: Only three protocols pass: DNS (port 53), HTTP (port 80), HTTPS (port 443). Everything else gets silently dropped. SSH? No. OpenVPN? No. WireGuard? No. Any traditional VPN? No. Zero response, zero error, simply... nothing.&#xA;&#xA;Layer 3—Deep Packet Inspection (DPI): The showpiece. System purchased in 2008 from Nokia Siemens Networks, continuously updated. The DPI inspects ALL HTTPS traffic:&#xA;Reads the SNI (Server Name Indication) field in the TLS handshake&#xA;Inspects the commonName field in certificates&#xA;Analyzes HTTP headers (case-sensitive!)&#xA;Injects TCP RST or HTTP 403 block pages on the fly&#xA;Selective throttling of encrypted traffic. Practical example: you try to visit Twitter via HTTPS. Your browser starts the TLS handshake. The DPI reads &#34;twitter.com&#34; in the SNI field—which travels in cleartext—and injects a TCP RST. Connection terminated. Twitter&#39;s server doesn&#39;t even know you tried to connect.&#xA;&#xA;Layer 4—National Information Network (NIN): The national Iranian intranet. Domestic services (banking, some state news sites) work perfectly. 
It&#39;s the internet... but only Iranian.&#xA;&#xA;The result:&#xA;From the perspective of BGP routers: everything normal&#xA;From the perspective of servers: ping responds, infrastructure up&#xA;From the perspective of users: the internet no longer exists&#xA;&#xA;It&#39;s genius, in the technical sense of the term.&#xA;&#xA;During the June 2025 blackout (during the war with Israel), some tools worked:&#xA;Psiphon: 1.5 million users maintained (one third of normal base) thanks to multi-protocol design&#xA;Ceno Browser: decentralized peer-to-peer, from 600 to 8,000 active peers&#xA;Tor bridges: shot up&#xA;Starlink: worked... for those who could afford it (hotels, offices, a few privileged people)&#xA;&#xA;But in the current January 2026 blackout?&#xA;Even Starlink has started suffering interference. The regime has learned. And the cost? The impact?&#xA;Hospitals: booking systems offline&#xA;Banks: digital transactions blocked&#xA;Pharmacies: impossible to verify electronic prescriptions&#xA;Shops: many didn&#39;t open (POS not working)&#xA;&#xA;The real purpose isn&#39;t to stop the economy. It&#39;s to stop documentation. It&#39;s to obscure the massacres.&#xA;&#xA;And Signal?&#xA;There&#39;s an interesting detail completely missing from the 2026 protests narrative, and the silence says a lot. Signal—the encrypted messaging app considered the gold standard for activists and dissidents—is barely mentioned. No articles, no appeals, no campaigns to bypass censorship. Yet Signal had been the weapon of choice during the 2017-2018 protests.&#xA;&#xA;  &#34;Signal has always been advertised as the go-to application for dissidents or activists to stay secure from any state authority,&#34;&#xA;&#xA;said Mahsa Alimardani, researcher for Article19, in 2021.&#xA;&#xA;But what happened?&#xA;January 2021, after a massive migration from WhatsApp to Signal, the Iranian government labeled it as &#34;criminal content&#34; and blocked it completely. 
September 2022, during the Mahsa Amini protests, Signal was still blocked and had to launch a global campaign (#IRanASignalProxy) to create proxy servers to bypass censorship. January 2026? Total silence. Signal had been neutralized four years earlier. The option technically superior to all the others—end-to-end encryption by default, zero metadata collection, run by a nonprofit—had already been removed from the playing field. The regime had done its homework. They had identified the tool most dangerous to them and crushed it while it was still small, years before it became mainstream.&#xA;&#xA;And when the total blackout arrived on January 8, the debate about &#34;Signal yes/no&#34; was already obsolete.&#xA;&#xA;But if the internet is off, how do they communicate?&#xA;This is the key question. On January 8, the internet dies in Iran. But videos keep arriving. Tweets continue. News continues. How?&#xA;&#xA;First answer: Starlink&#xA;Some Iranians—very few—have access to Starlink, Elon Musk&#39;s satellite service. Mainly hotels, offices, homes of wealthy people. These become the few &#34;eyes&#34; that can still communicate with the outside. But we&#39;re talking about an infinitesimal percentage of the population. And even Starlink is suffering increasing interference.&#xA;&#xA;Second answer: before the blackout&#xA;Many videos we see now were uploaded before January 8. They get re-shared, re-posted, presented as &#34;real-time&#34; when they&#39;re actually days old. Difficult to distinguish without precise geolocation and verifiable timestamps.&#xA;&#xA;Third answer (the uncomfortable one): from outside&#xA;Most of the narrative doesn&#39;t come from Iran. 
It comes from Persian TV channels in exile, from the Iranian diaspora, from social accounts of opponents abroad.&#xA;&#xA;And here things get complicated.&#xA;&#xA;Where does the narrative really start?&#xA;Euronews, January 10, 2026:&#xA;&#xA;  &#34;Rumours have been particularly widespread throughout the two weeks of mass protests across Iran. Many of those rumours originate from anonymous users on social media platforms, and are being covered by media outlets, purely for headline purposes.&#34;&#xA;&#xA;The Conversation (academic analysis):&#xA;&#xA;  &#34;Instagram and Twitter are filled with such reaction, making this form of engagement unusually widespread and visible... Iranian dissident news channels outside the country have become key but controversial sources of rolling information, shaping their own narratives from limited available reports.&#34;&#xA;&#xA;Miaan Group (Middle East research organization):&#xA;&#xA;  &#34;Available evidence suggests that Pahlavi support is uneven, largely media- and social-media-driven, and not underpinned by organized infrastructure on the ground. Overstating exile-led narratives risks misreading the protest&#39;s domestic drivers and reinforcing Tehran&#39;s justification for repression.&#34;&#xA;&#xA;That is: by amplifying the narrative constructed from abroad, we&#39;re literally giving the regime justification to massacre protesters. And this isn&#39;t speculation. Jerusalem Post cites an Iranian expert:&#xA;&#xA;  &#34;The monarchist Persian language media stations, especially Manoto TV, are manipulating images of protests in Iran to portray Reza Pahlavi as the only man whose name is heard in the streets, but this is a completely false and duplicitous depiction.&#34;&#xA;&#xA;We&#39;re talking about active manipulation. Not generous interpretation—manipulation. Real videos of protests, audio removed, false voice-overs added to make it seem like people are asking for the Shah&#39;s return. 
Black and white become increasingly similar to gray, don&#39;t they?&#xA;&#xA;Who commands this revolution?&#xA;Reza Pahlavi. The exiled heir. 65 years old, has lived in the United States since he was 16 (when his father was overthrown in 1979). He explicitly called for protests from January 8, using his social channels. But how much support does he really have in Iran?&#xA;&#xA;From CNN, with rare honesty:&#xA;&#xA;  &#34;Analysts say that it is unclear what might be driving the renewed excitement for the royal family in Iran. Arash Azizi, an academic and author of the book &#39;What Iranians Want,&#39; told CNN that, while Pahlavi &#39;has turned himself into a frontrunner in Iranian opposition politics,&#39; he is also &#39;a divisive figure and not a unifying one.&#39;&#34;&#xA;&#xA;And here lies the paradox. Iranians take to the streets for the collapsed economy, for personal freedoms, for the end of religious dictatorship, for civil rights. Not necessarily for the return of the monarchy. The Shah—Pahlavi&#39;s father—was himself a dictator, supported by the CIA, responsible for brutal repression. The 1979 revolution overthrew him precisely for this. But the narrative reaching the West? &#34;They want Pahlavi.&#34; Why? Because the exile TV channels say so. Because the Iranian diaspora—living in Los Angeles, London, Paris—supports him. Because videos are manipulated to make it seem like people are asking for him. And the regime? The regime uses exactly this narrative to justify the massacres. &#34;See? It&#39;s a monarchist insurrection supported from abroad. They&#39;re foreign agents. Terrorists. The repression is justified.&#34;&#xA;&#xA;And we, therefore, what should we do? Stay silent?&#xA;&#xA;Source analysis—aka &#34;Who are we really citing&#34;&#xA;Let&#39;s look at where the &#34;news&#34; about Iran comes from:&#xA;&#xA;Iran International: Persian TV based in London. Funding: controversial, documented Saudi ties. 
Repeatedly accused of manipulating footage.&#xA;&#xA;Manoto TV: Another Persian TV in exile. Declared pro-monarchist. Accused of false voice-overs.&#xA;&#xA;HRANA (Human Rights Activists News Agency): Based in the United States. Founded by anti-regime activists. Provides the death toll numbers. Primary source for many Western media.&#xA;&#xA;Reza Pahlavi: The heir himself. Does this even need comment?&#xA;&#xA;US State Department: Twitter account @USABehFarsi posting in Persian. Constant message: &#34;we support you, overthrow the regime.&#34;&#xA;&#xA;Notice something? All major sources are based outside Iran, have a clear political agenda (anti-regime, often pro-Pahlavi), and in some cases there&#39;s documented content manipulation. And sources from inside Iran are practically nonexistent, because the internet is off. So the narrative is being constructed entirely from outside, in an information vacuum, by actors with specific interests. It&#39;s 2009 again. But in 2009, at least, it was naive Westerners tweeting about Iran thinking they were helping. In 2026 we have active video manipulation, exile TV channels constructing false narratives, the US State Department directly feeding Persian social media, Western media citing compromised sources as primary. All this while Iran is completely offline.&#xA;&#xA;And meanwhile, the real people protesting for real reasons—economy, freedom, dignity—die every day. 2,000 dead. Maybe 6,000. Maybe 20,000. We&#39;ll never know for certain, precisely thanks to the blackout.&#xA;&#xA;Am I naive? Perhaps&#xA;I return to that unpleasant feeling from ten years ago, when I closed &#34;The Net Delusion.&#34; Morozov doesn&#39;t let you win. He doesn&#39;t let you choose the good guys&#39; side. He shows you that technology amplifies existing power dynamics. That authoritarian regimes learn. 
That Western slacktivism has real consequences.&#xA;&#xA;And that the worst thing we can do is project our technological fantasies—the &#34;Twitter Revolution&#34;—onto real protest movements, with real people risking real lives. When we get the narrative wrong, when we amplify the wrong voices, when we manipulate content to conform to our preferred story... the consequences aren&#39;t paid by us. They&#39;re paid by them. The Iranians who protest don&#39;t need us to tweet #IranProtests from the couch. They don&#39;t need exile TV channels manipulating their videos. They don&#39;t need the US State Department publicly &#34;supporting&#34; them (giving the regime the &#34;foreign interference&#34; narrative).&#xA;&#xA;They need us to understand what&#39;s really happening, to distinguish between real protests and constructed narratives, to be careful about who we amplify and why. They need us to stop believing that the internet solves political problems with a simple &#34;click and share.&#34; Because, as Morozov warned us, it often complicates them.&#xA;&#xA;The internet is serious business, and should be treated seriously.&#xA;&#xA;The blackout becomes permanent&#xA;In mid-January 2026, news emerged that could make everything even more disturbing. Iran International reported that the Iranian regime is finalizing a project to permanently disconnect the country from the global internet. And it&#39;s not just a theoretical project. It&#39;s almost operational.&#xA;&#xA;The architecture of the great Iranian firewall&#xA;The details are chilling in their concreteness. The data center is bunkerized under the Fanap building in Pardis IT Town (20km from Tehran), designed to withstand missile attacks. It has a capacity of 400 server racks with Huawei hardware. Estimated cost is between $700 million and $1 billion. Logistics saw 24 containers enter Iran after the June 2025 war. 
Management is assigned to ArvanCloud (Iranian cloud) through a shell company called Ayandeh Afzay-e Karaneh. And the connections are clear: Fanap and its CEO Shahab Javanmardi are under US sanctions for ties to intelligence and IRGC.&#xA;&#xA;How it would work technically&#xA;The system is based on the National Information Network (NIN)—a project started in 2005, gradually implemented from 2013 and fully operational since 2019. It&#39;s the Iranian intranet, in essence. It works like this: when you connect in Iran, your traffic passes through a centralized control point—the Telecommunication Infrastructure Company (TIC), state monopoly. There, the system decides. Request for a .ir site or NIN service? Goes on the domestic Iranian network. Request for a foreign site? Goes to the gateway toward the global internet (if active).&#xA;&#xA;The &#34;kill switch&#34; simply disables the foreign gateway. And suddenly Iranian banks work (on NIN), local e-commerce works (on NIN), government services work (on NIN), Iranian emails work (on NIN), while Google, Twitter, Facebook, all the foreign internet is at zero. The difference from 2019 is substantial. Before, shutting off the internet meant paralyzing the economy—no banks, no payments, nothing. It cost billions per day. It wasn&#39;t sustainable long-term. Now instead? They can shut off the global internet while leaving everything else working. It&#39;s economically sustainable. They can maintain it for months.&#xA;&#xA;A technological paradox&#xA;Here&#39;s something that struck me: technically it&#39;s sophisticated—very sophisticated. But strategically... there&#39;s a contradiction that almost doesn&#39;t make sense. Let&#39;s look at how modern surveillance works in Russia and the United States, not to defend it, obviously, but to understand the difference in approach.&#xA;&#xA;The Russian model (SORM): The internet stays open and functioning. Users can access Google, Facebook, Twitter. 
But every ISP has installed an FSB &#34;black box&#34; that records everything. Every email, every click, every message. Storage is mandatory: 6 months of full content, 3 years of metadata according to the 2016 Yarovaya law. The FSB can retrieve data in real-time directly, without the ISP knowing what they&#39;re looking for. In 2023: 500,000 surveillance requests approved, only 272 denied. The result? Opponents use the internet normally, thinking they&#39;re free. They organize, communicate, build networks. And meanwhile the system records everything. When needed—20,000+ arrests for online speech between 2022 and 2024—they already have all the evidence, all the contacts, the entire map of social relationships.&#xA;&#xA;The American model (PRISM): Same logic, different implementation. Since Snowden we know that NSA accesses Google, Facebook, Microsoft, Apple servers directly. They collect everyone&#39;s metadata. &#34;We kill people based on metadata,&#34; said former CIA director Michael Hayden. Appearance of democracy and free internet. Reality of invisible but total mass surveillance.&#xA;&#xA;The Iranian approach (NIN): Shut off the internet when needed.&#xA;&#xA;By shutting off the internet, Iran loses all the intelligence capability these systems provide. They can no longer track who talks to whom. They can&#39;t infiltrate groups. They can&#39;t monitor opponents&#39; communications. They can&#39;t build maps of social networks. They literally remove the most powerful surveillance tool that exists from themselves. In exchange they get the ability to hide massacres for a few weeks. But at the cost of complete loss of intelligence during the blackout, blatant evidence of authoritarianism, economic damage even with NIN functioning, international isolation, and demonstration of their own fragility.&#xA;&#xA;Russia and the USA have understood something that Iran seems not to have grasped: invisible control is infinitely more effective than visible control. 
You let people think they&#39;re free, let them use the internet, let them communicate. And meanwhile you record everything, analyze everything. When needed, you strike with surgical precision, already holding all the necessary evidence. Iran has built a visible digital cage. One that declares to the world &#34;we&#39;re an authoritarian regime terrified of our population.&#34; One that eliminates its own surveillance capability precisely when it would need it most. It&#39;s the difference between long-term thinking (building permanent intelligence systems) and short-term thinking (hiding today&#39;s massacres). SORM and PRISM are invisible dystopias, and they work precisely because people don&#39;t see them. NIN is a visible dystopia. And visible dystopias tend to provoke revolutions, and to fail quickly.&#xA;&#xA;IranWire reports that the plan is to maintain the blackout at least until the Iranian New Year, March 20, 2026.&#xA;&#xA;It is, in essence, an act of desperation. The general population (level 1) will have only NIN, zero external access. &#34;Authorized&#34; professionals (level 2) will have NIN plus filtered internet. Government, IRGC and elite (level 3) will have full access. Every connection is tracked via national ID and phone number. Every access is attributable. And when they reactivate the internet—even partially—they&#39;ll know exactly who used Starlink, who used a VPN, who shared videos.&#xA;&#xA;The model, needless to say, is China. The Chinese Great Firewall blocks foreign services but replaces them—Baidu instead of Google, Weibo instead of Twitter. China offers you an alternative, even if controlled. Iran? Iran can simply shut everything off and force you onto the national network. 
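The whole NIN scheme, tiers, kill switch and all, boils down to a single routing decision at the national gateway. A toy model (the tier rules and verdict names are my illustrative simplifications of what the sources describe, not real TIC policy):

```python
# Toy model of the NIN gateway decision. Tier rules and verdict names are
# illustrative simplifications, not actual TIC (Telecommunication
# Infrastructure Company) policy.
TIER_ACCESS = {
    1: "nin_only",   # general population: national network only
    2: "filtered",   # "authorized" professionals: NIN plus filtered internet
    3: "full",       # government, IRGC, elite
}

def route(domain: str, user_tier: int, kill_switch_on: bool) -> str:
    if domain.endswith(".ir"):
        return "NIN"  # banks, state services: reachable, blackout or not
    access = TIER_ACCESS[user_tier]
    if access == "nin_only" or (kill_switch_on and user_tier in (1, 2)):
        return "DROP"                 # silently discarded: no error, no response
    if access == "filtered":
        return "DPI_THEN_FORWARD"     # foreign traffic passes the inspection stack
    return "FORWARD"                  # full access for the top tier
```

The economics of the design live in that first branch: flip kill_switch_on to True and the outside world disappears while every .ir service keeps working, which is what makes a months-long blackout sustainable where the 2019 one was not.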
And with Huawei providing hardware and expertise (the same ones who built the Chinese system), and Russia providing advanced DPI technology (Protei), they have all the puzzle pieces.&#xA;&#xA;And we&#39;re back to square one, as usual.&#xA;&#xA;Tor: the infinite technological war&#xA;While Starlink makes headlines as the only tool of freedom, there&#39;s a tool that for nearly 20 years has been playing hide and seek with authoritarian regimes. Tor—The Onion Router—is historically the tool of choice for those living under censorship. Against the Chinese Great Firewall. During the Russian invasion of Ukraine. In Egypt during the Arab Spring. In Syria. And in Iran, repeatedly.&#xA;&#xA;Every time Iran has experienced moments of crisis, Tor has seen massive usage spikes:&#xA;&#xA;2009—Green Movement: Tor usage shot up to 1.5 million Iranian users. The regime blocked direct connections. Users discovered Tor Bridges (non-public relays, harder to block). The regime learned.&#xA;&#xA;2019—November, gasoline protests: Complete blackout for 6 days. Tor usage dropped to zero along with all internet. But when they turned it back on, the number of Tor users was higher than before. People had learned.&#xA;&#xA;2022—Mahsa Amini, Woman Life Freedom: Nightly digital curfews (only mobile networks off 4:00-10:00 PM). Tor Bridges exploded. The regime implemented DPI to recognize Tor traffic and block it selectively.&#xA;&#xA;And here&#39;s the interesting point. It&#39;s not a simple block. It&#39;s a continuous technological war.&#xA;&#xA;In 2012, a Tor developer wrote a line on the official blog that should give us pause:&#xA;&#xA;  &#34;The Iranian government has, in less than a year and starting from scratch, caught up and now surpassed the Tor project in technical ability.&#34;&#xA;&#xA;What does this mean practically? That the Iranian regime has developed DPI systems capable of recognizing Tor traffic even if encrypted. Wait, how is that possible? 
Tor wraps its traffic in SSL/TLS just like ordinary HTTPS. Everything is encrypted. So how do they tell it apart? By watching behavior, not content. It&#39;s like recognizing someone by the way they walk even if they&#39;re wearing a disguise. Iranian DPI analyzes:&#xA;&#xA;Packet timing: Tor routes traffic through three relays, creating characteristic latency patterns&#xA;Packet size: Tor uses 512-byte cells, an unusual size&#xA;TLS handshake: The &#34;hello, I&#39;m a client&#34; / &#34;hello, I&#39;m a server&#34; sequence has specific patterns for each protocol&#xA;Traffic flow: Tor sends data in bursts that differ from a normal HTTPS connection&#xA;&#xA;They&#39;re not reading inside encrypted packets. They&#39;re watching from outside and recognizing the fingerprint. In real-time. On all national traffic. It&#39;s technically impressive.&#xA;&#xA;But the Tor project responded&#xA;The strategy has evolved over time: disguise Tor traffic so it looks like something else entirely.&#xA;&#xA;Pluggable Transports: Tor in disguise. Traffic is made to look like normal web browsing, or Skype, or something else.&#xA;Snowflake: Tor hiding behind WebRTC connections (those used for video calls). Hard to block without blocking all video calls.&#xA;Meek: Tor disguising itself as traffic toward legitimate services like Microsoft Azure or Amazon CloudFront. To block it they have to block services they themselves use.&#xA;Distributed Bridge Relays: Secret non-public relays, harder to identify and block.&#xA;&#xA;And it works. Sometimes. Until the regime updates again. Snowflake gets identified? Tor develops a new pluggable transport. The regime recognizes it? The next one is developed. For every step forward by censors, Tor actively responds.&#xA;&#xA;And now? January 2026? Here&#39;s a problem. Tor usage data always has a publication delay to protect users. 
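None of the fingerprinting signals listed earlier requires decryption; each is a statistic computed on the ciphertext from the outside. As a deliberately naive illustration, here is what a single-feature version of the cell-size check might look like (real DPI fuses many signals and is far more sophisticated; the threshold is invented):

```python
def cell_size_score(payload_sizes: list[int], cell: int = 512) -> float:
    """Fraction of packets whose payload length is a whole multiple of the
    512-byte Tor cell size. Computable without decrypting anything."""
    if not payload_sizes:
        return 0.0
    hits = sum(1 for s in payload_sizes if s > 0 and s % cell == 0)
    return hits / len(payload_sizes)

def looks_like_tor(payload_sizes: list[int], threshold: float = 0.6) -> bool:
    # Single feature only. A real classifier would fuse this with timing,
    # handshake fingerprints and burst patterns.
    return cell_size_score(payload_sizes) >= threshold

# A flow padded into whole cells scores high; ordinary web traffic does not.
tor_like = [512, 1024, 512, 512, 1536]
web_like = [1460, 517, 89, 1460, 310]
```

This is also why pluggable transports work: transports in the obfs family re-pad and re-time the stream precisely so that statistics like this one stop looking unusual.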
But historically, the pattern is always the same: crisis and protests begin, censorship increases, Tor usage shoots up, the regime develops countermeasures, and finally total blackout if necessary. Given that we&#39;re in total blackout, Tor usage has crashed to zero—like in 2019. You can&#39;t use Tor if you don&#39;t have internet, not even censored internet. But when they turn it back on—and they will, even just partially—I expect to see a massive spike. Because Iranians have learned. Starlink costs too much. Normal VPNs get blocked. Tor, with the right bridges, still works.&#xA;&#xA;But there&#39;s another paradox. Tor protects anonymity during connection. But the simple fact of trying to connect to Tor is identifiable by DPI. And traceable to your national ID. So the regime can see: who tried to use Tor (even if blocked), when they tried and for how long. And when the blackout ends, they might have a complete list of &#34;technologically sophisticated dissidents&#34; to arrest. It&#39;s the same logic as Starlink—retroactive use as evidence of dissent. The fight for free internet in Iran has been going on for nearly 20 years. It&#39;s not a new story. But even Tor can be defeated by a total blackout. And with the NIN/Huawei system becoming permanent, even when they turn the internet back on it might be an internet so controlled, so filtered, so tracked, that not even Tor will be enough.&#xA;&#xA;Conclusions—and some questions&#xA;I started with Morozov, with that unpleasant feeling from ten years ago. With the discovery that the &#34;Twitter Revolution&#34; of 2009 was a Western projection, not an Iranian reality. And I&#39;ve arrived here. Iran 2026. Same film, different cast. Same narratives constructed from abroad. Same amplification of exile voices. Same video manipulation. Same regime using all this as justification to massacre. But there&#39;s a crucial difference from 2009. 
In 2009, the regime had learned that the internet was useful to them (surveillance) but dangerous (documentation). In 2026, they&#39;ve solved the equation radically: they&#39;ve built a system to have internet when they need it (domestic NIN) and shut it off when they don&#39;t (kill switch toward the outside). 700 million—1 billion dollars. Huawei hardware. Russian DPI. Anti-missile bunker. 400 server racks. Operational by March 2026. It&#39;s no longer temporary and expensive censorship. It&#39;s permanent information control infrastructure. It&#39;s a digital cage.&#xA;&#xA;Where does the content about Iranian protests come from while the internet has been off for 9+ days? Mainly from abroad. From exile TV channels with controversial funding. From diaspora living thousands of miles away. From sources with clear agendas and, in some cases, documented manipulation.&#xA;&#xA;Who is driving the narrative? Pahlavi from the USA. Manoto TV altering audio. Iran International accused of false voice-overs. US State Department tweeting in Persian. Diaspora demonstrating with Shah flags.&#xA;&#xA;Who is driving the real protests in Iran? Probably no one. Probably it&#39;s leaderless, organic, driven by economic desperation and 47 years of repression. The people in the streets shout &#34;bread, work, freedom&#34;—not necessarily &#34;bring us back the Shah.&#34;&#xA;&#xA;But the narrative reaching us? That one talks about Pahlavi. About monarchy. About &#34;Iranian Revolution 2.0.&#34; Exactly the narrative the regime wants to justify the massacres. &#34;See? Western plot. Foreign agents. Monarchist terrorists.&#34;&#xA;&#xA;And the gap between narrative and reality? It costs human lives. 2,000 dead? 6,000? 12,000? 20,000? We&#39;ll never know for certain, precisely thanks to the blackout that was supposed to be &#34;temporary&#34; and is becoming permanent.&#xA;&#xA;Morozov was right&#xA;The internet, unfortunately, is not free by definition. 
Technology amplifies existing power dynamics. Authoritarian regimes learn, adapt, build increasingly sophisticated systems. The Iranian regime has spent 17 years—from 2009 to today—studying how to control the internet. They&#39;ve invested billions. They&#39;ve collaborated with China and Russia. They&#39;ve developed DPI that recognizes Tor, systems that block VPNs, architectures that allow economically sustainable blackouts. And the technological &#34;resistance&#34;? It depends on Elon Musk donating Starlink—and he can decide to turn it off tomorrow. It depends on Tor Project playing whack-a-mole with Iranian countermeasures. It depends on individuals who risk arrest and torture to use circumvention technologies. It&#39;s not a fair fight. It never was.&#xA;&#xA;Cyber-utopianism is a drug. It makes us feel good. It makes us feel like we&#39;re &#34;helping.&#34; That technology always wins. That the internet liberates. But reality is more complex, more uncomfortable. Technology is a tool. And like all tools, it can be used to liberate or to oppress. Authoritarian regimes have resources, expertise, and zero ethical constraints. The &#34;resistance&#34; has volunteers, limited budgets, and the weight of not wanting to cause harm. The Iranians who protest don&#39;t need us to celebrate Starlink as savior. They don&#39;t need us to amplify narratives constructed from abroad. They don&#39;t need our slacktivism. They need us to understand what&#39;s really happening. To distinguish between real protests and constructed narratives. To not give the regime the propaganda ammunition it needs. To stop believing that the internet solves political problems. 
They need us to finally learn the lesson Morozov was trying to teach us 15 years ago.&#xA;&#xA;&lt;a href=&#34;https://remark.as/p/jolek78/iran-2026-17-years-later-same-mistake&#34;&gt;Discuss...&lt;/a&gt;&#xA;&#xA;#Iran #IranProtests #NetDelusion #EvgenyMorozov #TwitterRevolution #Tor #Starlink #DigitalCensorship #InternetFreedom #Authoritarianism&#xA;&#xA;&lt;div class=&#34;center&#34;&gt;&lt;a href=&#34;https://fosstodon.org/@jolek78&#34;&gt;Mastodon&lt;/a&gt; :: &lt;a href=&#34;https://pixelfed.social/jolek78&#34;&gt;Pixelfed&lt;/a&gt; :: &lt;a href=&#34;mailto:jolek78@posteo.net&#34;&gt;Email&lt;/a&gt; :: &lt;a href=&#34;https://app.cinny.in/login/envs.net/#/user/@jolek78:envs.net&#34;&gt;Element&lt;/a&gt;&lt;/div&gt;]]&gt;</description>
      <content:encoded><![CDATA[<p>It was a Saturday in 2015, perhaps 2016. I was still “normal” back then, still convinced that technology was inherently positive, potentially revolutionary, still naive enough to believe that the internet liberated by definition. I was browsing books at Waterstones on Sauchiehall Street in Glasgow—one of my little guilty pleasures since I landed in Scotland—when I came across “The Net Delusion: The Dark Side of Internet Freedom” by Evgeny Morozov. I picked up the book, went downstairs, sat in the in-house café and started reading. And it threw me into a crisis. His thesis demolished, piece by piece, the narrative of the “Twitter Revolution” of 2009 in Iran. In the book, Morozov cited an analysis by Golnaz Esfandiari, an Iranian journalist who, writing in Foreign Policy, had done something simple but, these days, almost revolutionary: journalism (if you&#39;re laughing at this point, you&#39;re good people...). She had looked at where the tweets with <a href="https://jolek78.writeas.com/tag:iranelection" class="hashtag"><span>#</span><span class="p-category">iranelection</span></a> actually came from during the 2009 protests. And the answer? From the West. Not from Iran. Wait, what? Yes, exactly. It was theater. Western self-celebration masquerading as solidarity.</p>



<p>I remember closing the book with an unpleasant feeling. Morozov doesn&#39;t give you the satisfaction of choosing a side in history. He forces you to see that technology amplifies everything—the good and the bad, freedom and control. And that authoritarian regimes have a very steep learning curve, unfortunately. Fifteen years later, the young people in Tehran are trying again: they&#39;re taking to the streets trying to overthrow the regime. In the West, I thought we had learned our lesson, that we would stop projecting our technological fantasies onto real protest movements. Obviously, I was wrong.</p>

<h2 id="iran-2009-or-when-twitter-didn-t-overthrow-a-regime">Iran 2009, or when Twitter (didn&#39;t) overthrow a regime</h2>

<p>To understand why Iran 2026 is déjà-vu, we need to go back 17 years. June 2009. Mahmoud Ahmadinejad is re-elected president of Iran with 63% of the vote. The opposition—led by Mir-Hossein Mousavi—cries fraud. Millions take to the streets. Tehran fills with green. It&#39;s the explosion of the “Green Movement.” And here begins the narrative that would define a decade. CNN headlines: “Iran&#39;s Twitter Revolution.” Time Magazine puts Twitter on the cover with the Iranian flag. Andrew Sullivan—a famous blogger at the time—obsessively tweets using <a href="https://jolek78.writeas.com/tag:iranelection" class="hashtag"><span>#</span><span class="p-category">iranelection</span></a> and is called “the voice of the Iranian people.” Western media cite tweets as if they were dispatches from a war zone. The story was beautiful: young Iranians, tech-savvy and hungry for democracy, were using Twitter to organize protests, coordinate demonstrations, evade regime censorship. Facebook to plan, Twitter to coordinate, YouTube to document. It was the digital revolution overthrowing a dictatorship. Technology defeating repression. The good guys defeating the bad guys. The US State Department was so convinced of Twitter&#39;s importance that Jared Cohen—a State Department official—sent an email to Twitter asking them to “delay scheduled maintenance” so as not to interrupt the Iranian protests. Twitter agreed.</p>

<p>Then came Golnaz Esfandiari, an Iranian journalist for Radio Free Europe/Radio Liberty. Where did the tweets actually come from? In June 2010, a year after the protests, Esfandiari published an article in Foreign Policy titled “The Twitter Devolution.” She wrote:</p>

<blockquote><p>“Western journalists who couldn&#39;t reach—or didn&#39;t bother reaching?—people on the ground in Iran simply scrolled through the English-language tweets posted with tag <a href="https://jolek78.writeas.com/tag:iranelection" class="hashtag"><span>#</span><span class="p-category">iranelection</span></a>. Through it all, no one seemed to wonder why people trying to coordinate protests in Iran would be writing in any language other than Farsi.”</p></blockquote>

<p>Question: Why would Iranians organizing protests in Iran write in English? Esfandiari had identified the main Twitter hubs commenting on the Tehran protests and discovered something embarrassing: one was in the United States, one in Turkey, one in Switzerland. The latter&#39;s profile stated they “specialized in urging people to take to the streets.” She interviewed Mehdi Yahyanejad, manager of Balatarin (one of the most popular Farsi-language websites) who said:</p>

<blockquote><p>“Twitter&#39;s impact inside Iran is nil [...] Here [in the United States], there is lots of buzz. But once you look, you see most of it are Americans tweeting among themselves.”</p></blockquote>

<p>Iranians—the real ones, in the streets—used SMS, phone calls, word of mouth. Traditional methods. Twitter was mainly useful for one thing: letting the world know what was happening. Documentation, not organization. But the numbers were even worse. In his 2011 book, Morozov cited data that made everything even clearer: only 19,235 Twitter accounts registered in Iran (0.027% of the population) on the eve of the 2009 elections. And many Green Movement sympathizers had changed their Twitter location to “Tehran” to confuse authorities, making it nearly impossible to distinguish whether people tweeting from Iran were in Tehran or, say, Los Angeles. An Al-Jazeera analysis cited by Morozov clarified that fact-checking during the protests had confirmed only 60 active Twitter accounts in Tehran. Sixty. And when Iranian authorities tightened their grip on online communications, that number dropped to six.</p>

<p>Vahid Online, a prominent Iranian blogger who was in Tehran during the protests, dismantled the Twitter Revolution thesis even more directly:</p>

<blockquote><p>“Twitter never became very popular in Iran. [But] because the world was watching Iran with such [great interest] during those days, it led many to believe falsely that Iranian people were also getting their news through Twitter.”</p></blockquote>

<p>Morozov put it with a perfect metaphor:</p>

<blockquote><p>“If a tree falls in the forest and everyone tweets about it, it may not be the tweets that moved it.”</p></blockquote>

<p>At this point in the story one could say “okay, the protests happened in Iran and the West encouraged and celebrated them. What&#39;s wrong with that?” Nothing, except that the ayatollah regime learned the lesson. This was the part of Morozov&#39;s thesis that had really shaken me. In plain terms, while the West was self-celebrating the “Twitter Revolution,” the Iranian government was taking notes. They understood that social media could be more useful to them than to activists. They could track who posted, they could identify protest leaders, they could infiltrate groups, they could use data to arrest, torture, kill. The 2009 protests were brutally suppressed. The Green Movement failed. And the regime emerged stronger, more experienced, more prepared to use technology as a weapon of control. But we were the good guys helping, right? Esfandiari and Morozov tried to tell us we were doing everything wrong, that we were projecting our fantasies and underestimating authoritarian regimes. Did we listen? Evidently not.</p>

<h2 id="iran-2026-same-film-different-cast">Iran 2026: same film, different cast</h2>

<p>December 28, 2025. Protests begin in Iran. Economic crisis, the rial—Iran&#39;s currency—collapsed to 1.4 million per dollar, 40% inflation, UN sanctions reimposed in September, the entire Iranian “Axis of Resistance” in tatters after the 12-day war with Israel in June. The streets fill. First Tehran, then the whole country. 31 provinces. Millions of people. And of course, social media explodes. Twitter/X fills with videos, slogans, messages of solidarity. Western media cite tweets as primary sources. Reza Pahlavi—the exiled heir of the Shah deposed in 1979—calls for protests from his social accounts. Persian TV channels in exile (Manoto, Iran International) broadcast 24/7. The US State Department operates a Persian-language Twitter account (@USABehFarsi) constantly posting messages of support. Repetita (non) iuvant. It&#39;s 2009 again. Same narrative, same enthusiasm, same conviction that this time—this time for real—Twitter and social media will overthrow the regime. Then, on January 8, 2026, on the twelfth day of protests, the Iranian regime does something interesting. It shuts off the internet. Completely.</p>

<p>And I, being a good nerd who doesn&#39;t sleep, lives at night, and does things better left unsaid—sorry, the statute of limitations hasn&#39;t expired yet—asked myself: wait. If the internet is off in Iran, where is all this content coming from? Who is telling this story? And above all: are we making the same mistake as 2009 again? So, browsing here and there, I came across a long article by Shahram Akbarzadeh—professor of “Middle East &amp; Central Asian Politics” at Deakin University—titled “The web of Big Lies: state-sponsored disinformation in Iran.” And I started reading.</p>

<h2 id="before-moving-forward-stop">Before moving forward: stop</h2>

<p>Let&#39;s make one thing clear right away, because I already know someone will misunderstand: I stand in solidarity with those protesting in Iran. Completely. A theocratic regime that kills protesters—estimates range from 44 to 20,000 dead, impossible to know for certain precisely because of the blackout—deserves nothing but condemnation. The reasons for the protests are real, legitimate, understandable. Devastating economic crisis, systematic repression, 47 years of religious dictatorship. Those who take to the streets risk their lives. And they do.</p>

<p>But solidarity doesn&#39;t mean suspending critical thinking. It doesn&#39;t mean uncritically accepting every narrative being sold to us. It doesn&#39;t mean ignoring who is constructing this narrative, how and why. On the contrary. If we truly care about the Iranians who are protesting, we have a duty to understand what&#39;s really happening. Because wrong narratives have real consequences. And the consequences are always paid by them, not by us tweeting from the couch. So: solidarity yes, but also questions, if no one minds.</p>

<h2 id="technical-box-the-evolution-of-digital-censorship">Technical Box: the evolution of digital censorship</h2>

<p>TL;DR: Iran didn&#39;t simply “pull the plug.” It implemented the most sophisticated layered censorship system ever seen, which leaves infrastructure apparently normal while completely isolating the population. It&#39;s precision censorship, not sledgehammer censorship.</p>

<p>8:30 PM IRST (5:00 PM UTC). NetBlocks, the organization that monitors global connectivity, registers a sudden collapse: Iran goes from 100% to ~3% connectivity in a few hours. Not just mobile data: fixed-line internet and even phone service went down. Calling into Iran from abroad? Impossible. Journalists trying from Dubai can&#39;t connect. Families abroad can&#39;t reach relatives in Tehran. Total blackout. But there&#39;s something curious. BGP routes—the paths that make the internet work—remain visible. Iranian servers continue responding to pings. From outside, the infrastructure looks normal. Cloudflare, IODA (Georgia Tech), all traditional monitoring systems see Iran still “online.” Yet user traffic has dropped 97%. How is this possible? To understand what happened on January 8, we need to take a step back. Iran has developed three generations of shutdowns, each more sophisticated than the last:</p>

<p>2019—Brute Force: During the November 2019 protests (which caused ~1,500 deaths), the regime simply removed BGP routes. It&#39;s like ripping out cables: crude, visible, it took 24+ hours to implement because every ISP had to do it manually. Economically devastating—banks stopped, the economy collapsed for six days.</p>

<p>2022—”Digital Curfew”: During the Mahsa Amini protests, selective targeting. They shut down cell towers in specific areas, slowed internet during protest hours (4:00-10:00 PM), blocked specific apps (WhatsApp, Instagram). More refined, less expensive.</p>

<p>2025-2026—”Stealth Blackout”: The final form. And here it becomes technically fascinating.</p>

<p>The current system operates at a single national chokepoint—all Iranian ISPs converge at a few state-controlled exit points. There, a layered system filters everything:</p>

<p><strong>Layer 1—DNS Poisoning:</strong> Any DNS query for foreign domains gets redirected to 10.10.34.34—a private IP serving a generic block page. You search for google.com? You get an Iranian server saying “domain not found.”</p>
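From the client side, detecting this layer is trivial. A minimal sketch in Python, assuming only the sinkhole address quoted above (the helper names are mine, not from any real tool):

```python
import socket

# Sinkhole address described above: poisoned answers for foreign
# domains point here instead of at the real server.
SINKHOLE = "10.10.34.34"

def resolve_ipv4(domain):
    """Return the IPv4 addresses the local resolver gives for `domain`."""
    try:
        return [info[4][0] for info in socket.getaddrinfo(domain, 80, socket.AF_INET)]
    except socket.gaierror:
        return []

def looks_poisoned(addresses):
    """True if the DNS answer points at the censorship sinkhole."""
    return SINKHOLE in addresses
```

From inside the filtered network, `looks_poisoned(resolve_ipv4("google.com"))` would come back True; anywhere else, False.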

<p><strong>Layer 2—Protocol Whitelisting:</strong> Only three protocols pass: DNS (port 53), HTTP (port 80), HTTPS (port 443). Everything else gets silently dropped. SSH? No. OpenVPN? No. WireGuard? No. Any traditional VPN? No. Zero response, zero error, simply... nothing.</p>
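The whitelist rule itself is almost insultingly simple. A toy sketch, with only the three port numbers taken from the text and everything else mine:

```python
ALLOWED_PORTS = {53, 80, 443}  # DNS, HTTP, HTTPS: the only survivors

def verdict(dst_port: int) -> str:
    """Decide a packet's fate as the text describes: whitelisted ports
    are forwarded; everything else is silently dropped, with no RST and
    no ICMP error to tell the sender what happened."""
    return "forward" if dst_port in ALLOWED_PORTS else "drop"
```

So `verdict(443)` forwards, while `verdict(22)` (SSH) or `verdict(51820)` (WireGuard) just vanishes.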

<p><strong>Layer 3—Deep Packet Inspection (DPI):</strong> The showpiece. System purchased in 2008 from Nokia Siemens Networks, continuously updated. The DPI inspects ALL HTTPS traffic:
– Reads the SNI (Server Name Indication) field in the TLS handshake
– Inspects the commonName field in certificates
– Analyzes HTTP headers (case-sensitive!)
– Injects TCP RST or HTTP 403 block pages on the fly
– Selective throttling of encrypted traffic</p>

<p>Practical example: you try to visit Twitter via HTTPS. Your browser starts the TLS handshake. The DPI reads “twitter.com” in the SNI field—which travels in cleartext—and injects a TCP RST. Connection terminated. Twitter&#39;s server doesn&#39;t even know you tried to connect.</p>
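This works because the SNI hostname sits in the unencrypted part of the TLS handshake. A minimal hand-written parser (mine, not any DPI vendor code) shows how cheaply the hostname can be pulled out of a raw ClientHello, which is step one of what the DPI does before injecting its RST:

```python
import struct

def extract_sni(record: bytes):
    """Pull the SNI hostname out of a raw TLS ClientHello record.

    The server name travels in cleartext before any encryption starts,
    which is exactly what lets DPI gear match a hostname and kill the
    connection. Returns None when no SNI can be found.
    """
    # Record header: type(1) version(2) length(2); 22 = handshake.
    # Handshake header at offset 5: type 1 = ClientHello.
    if len(record) < 6 or record[0] != 22 or record[5] != 1:
        return None
    try:
        pos = 9                            # skip both headers
        pos += 2 + 32                      # client_version + random
        pos += 1 + record[pos]             # session_id
        pos += 2 + struct.unpack(">H", record[pos:pos + 2])[0]  # cipher suites
        pos += 1 + record[pos]             # compression methods
        ext_end = pos + 2 + struct.unpack(">H", record[pos:pos + 2])[0]
        pos += 2
        while pos + 4 <= ext_end:
            ext_type, ext_len = struct.unpack(">HH", record[pos:pos + 4])
            pos += 4
            if ext_type == 0:              # extension 0 = server_name
                # Skip list length(2) + entry type(1); then name length(2).
                name_len = struct.unpack(">H", record[pos + 3:pos + 5])[0]
                return record[pos + 5:pos + 5 + name_len].decode("ascii")
            pos += ext_len
    except (IndexError, struct.error):
        pass
    return None
```

Feed it the first packet of any HTTPS connection and the destination hostname falls out, no decryption required. That is the whole trick.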

<p><strong>Layer 4—National Information Network (NIN):</strong> The national Iranian intranet. Domestic services (banking, some state news sites) work perfectly. It&#39;s the internet... but only Iranian.</p>

<p>The result:
– From the perspective of BGP routers: everything normal
– From the perspective of servers: ping responds, infrastructure up
– From the perspective of users: the internet no longer exists</p>

<p>It&#39;s genius, in the technical sense of the term.</p>

<p>During the June 2025 blackout (during the war with Israel), some tools worked:
– Psiphon: 1.5 million users maintained (one third of normal base) thanks to multi-protocol design
– Ceno Browser: decentralized peer-to-peer, from 600 to 8,000 active peers
– Tor bridges: usage shot up
– Starlink: worked... for those who could afford it (hotels, offices, a few privileged people)</p>

<p>But in the current January 2026 blackout?
Even Starlink has started suffering interference. The regime has learned. And the collateral damage?
– Hospitals: booking systems offline
– Banks: digital transactions blocked
– Pharmacies: impossible to verify electronic prescriptions
– Shops: many didn&#39;t open (POS not working)</p>

<p>The real purpose isn&#39;t to stop the economy. It&#39;s to stop documentation. It&#39;s to obscure the massacres.</p>

<h2 id="and-signal">And Signal?</h2>

<p>There&#39;s an interesting detail completely missing from the 2026 protests narrative, and the silence says a lot. Signal—the encrypted messaging app considered the gold standard for activists and dissidents—is barely mentioned. No articles, no appeals, no campaigns to bypass censorship. Yet Signal had been the weapon of choice during the 2017-2018 protests.</p>

<blockquote><p>“Signal has always been advertised as the go-to application for dissidents or activists to stay secure from any state authority,”</p></blockquote>

<p>said Mahsa Alimardani, researcher for Article19, in 2021.</p>

<p>But what happened?
January 2021, after a massive migration from WhatsApp to Signal, the Iranian government labeled it as “criminal content” and blocked it completely. September 2022, during the Mahsa Amini protests, Signal was still blocked and had to launch a global campaign (<a href="https://jolek78.writeas.com/tag:IRanASignalProxy" class="hashtag"><span>#</span><span class="p-category">IRanASignalProxy</span></a>) to create proxy servers to bypass censorship. January 2026? Total silence. Signal had been neutralized four years earlier. The technically superior option to all others—end-to-end encryption by default, zero metadata collection, run by a nonprofit—had already been removed from the playing field. The regime had done its homework. They had identified the most dangerous tool for them and crushed it while it was still small, years before it became mainstream.</p>

<p>And when the total blackout arrived on January 8, the debate about “Signal yes/no” was already obsolete.</p>

<h2 id="but-if-the-internet-is-off-how-do-they-communicate">But if the internet is off, how do they communicate?</h2>

<p>This is the key question. On January 8, the internet dies in Iran. But videos keep arriving. Tweets continue. News continues. How?</p>

<p><strong>First answer: Starlink</strong>
Some Iranians—very few—have access to Starlink, Elon Musk&#39;s satellite service. Mainly hotels, offices, homes of wealthy people. These become the few “eyes” that can still communicate with the outside. But we&#39;re talking about an infinitesimal percentage of the population. And even Starlink is suffering increasing interference.</p>

<p><strong>Second answer: before the blackout</strong>
Many videos we see now were uploaded <em>before</em> January 8. They get re-shared and re-posted, presented as “real-time” when they&#39;re actually days old. Hard to tell apart without precise geolocation and verifiable timestamps.</p>

<p><strong>Third answer (the uncomfortable one): from outside</strong>
Most of the narrative doesn&#39;t come from Iran. It comes from Persian TV channels in exile, from the Iranian diaspora, from social accounts of opponents abroad.</p>

<p>And here things get complicated.</p>

<h2 id="where-does-the-narrative-really-start">Where does the narrative really start?</h2>

<p>Euronews, January 10, 2026:</p>

<blockquote><p>“Rumours have been particularly widespread throughout the two weeks of mass protests across Iran. Many of those rumours originate from anonymous users on social media platforms, and are being covered by media outlets, purely for headline purposes.”</p></blockquote>

<p>The Conversation (academic analysis):</p>

<blockquote><p>“Instagram and Twitter are filled with such reaction, making this form of engagement unusually widespread and visible... Iranian dissident news channels outside the country have become key but controversial sources of rolling information, shaping their own narratives from limited available reports.”</p></blockquote>

<p>Miaan Group (Middle East research organization):</p>

<blockquote><p>“Available evidence suggests that Pahlavi support is uneven, largely media- and social-media-driven, and not underpinned by organized infrastructure on the ground. Overstating exile-led narratives risks misreading the protest&#39;s domestic drivers and reinforcing Tehran&#39;s justification for repression.”</p></blockquote>

<p>That is: by amplifying the narrative constructed from abroad, we&#39;re literally giving the regime justification to massacre protesters. And this isn&#39;t speculation. Jerusalem Post cites an Iranian expert:</p>

<blockquote><p>“The monarchist Persian language media stations, especially Manoto TV, are manipulating images of protests in Iran to portray Reza Pahlavi as the only man whose name is heard in the streets, but this is a completely false and duplicitous depiction.”</p></blockquote>

<p>We&#39;re talking about active manipulation. Not generous interpretation—manipulation. Real videos of protests, audio removed, false voice-overs added to make it seem like people are asking for the Shah&#39;s return. Black and white keep blurring into gray, don&#39;t they?</p>

<h2 id="who-commands-this-revolution">Who commands this revolution?</h2>

<p>Reza Pahlavi. The exiled heir. 65 years old, has lived in the United States since he was 16 (when his father was overthrown in 1979). He explicitly called for protests from January 8, using his social channels. But how much support does he really have in Iran?</p>

<p>From CNN, with rare honesty:</p>

<blockquote><p>“Analysts say that it is unclear what might be driving the renewed excitement for the royal family in Iran. Arash Azizi, an academic and author of the book &#39;What Iranians Want,&#39; told CNN that, while Pahlavi &#39;has turned himself into a frontrunner in Iranian opposition politics,&#39; he is also &#39;a divisive figure and not a unifying one.&#39;”</p></blockquote>

<p>And here lies the paradox. Iranians take to the streets for the collapsed economy, for personal freedoms, for the end of religious dictatorship, for civil rights. Not necessarily for the return of the monarchy. The Shah—Pahlavi&#39;s father—was himself a dictator, supported by the CIA, responsible for brutal repression. The 1979 revolution overthrew him precisely for this. But the narrative reaching the West? “They want Pahlavi.” Why? Because the exile TV channels say so. Because the Iranian diaspora—living in Los Angeles, London, Paris—supports him. Because videos are manipulated to make it seem like people are asking for him. And the regime? The regime uses exactly this narrative to justify the massacres. “See? It&#39;s a monarchist insurrection supported from abroad. They&#39;re foreign agents. Terrorists. The repression is justified.”</p>

<p>And we, therefore, what should we do? Stay silent?</p>

<h2 id="source-analysis-aka-who-are-we-really-citing">Source analysis—aka “Who are we really citing?”</h2>

<p>Let&#39;s look at where the “news” about Iran comes from:</p>

<p><strong>Iran International:</strong> Persian TV based in London. Funding: controversial, documented Saudi ties. Repeatedly accused of manipulating footage.</p>

<p><strong>Manoto TV:</strong> Another Persian TV in exile. Declared pro-monarchist. Accused of false voice-overs.</p>

<p><strong>HRANA (Human Rights Activists News Agency):</strong> Based in the United States. Founded by anti-regime activists. Provides the death toll numbers. Primary source for many Western media.</p>

<p><strong>Reza Pahlavi:</strong> The heir himself. Does that even need comment?</p>

<p><strong>US State Department:</strong> Twitter account @USABehFarsi posting in Persian. Constant message: “we support you, overthrow the regime.”</p>

<p>Notice something? All major sources are based outside Iran, have a clear political agenda (anti-regime, often pro-Pahlavi), and in some cases there&#39;s documented content manipulation. And sources from Iran are nearly nonexistent, practically zero, because the internet is off. So the narrative is being constructed entirely from outside, in an information vacuum, by actors with specific interests. It&#39;s 2009 again. But in 2009, at least, it was just naive Westerners tweeting about Iran in the belief they were helping. In 2026 we have active video manipulation, exile TV channels constructing false narratives, the US State Department directly feeding Persian social media, Western media citing compromised sources as primary. All this while Iran is completely offline.</p>

<p>And meanwhile, the real people protesting for real reasons—economy, freedom, dignity—die every day. 2,000 dead. Maybe 6,000. Maybe 20,000. We&#39;ll never know for certain, precisely thanks to the blackout.</p>

<h2 id="am-i-naive-perhaps">Am I naive? Perhaps</h2>

<p>I return to that unpleasant feeling from ten years ago, when I closed “The Net Delusion.” Morozov doesn&#39;t let you win. He doesn&#39;t let you choose the good guys&#39; side. He shows you that technology amplifies existing power dynamics. That authoritarian regimes learn. That Western slacktivism has real consequences.</p>

<p>And that the worst thing we can do is project our technological fantasies—the “Twitter Revolution”—onto real protest movements, with real people risking real lives. When we get the narrative wrong, when we amplify the wrong voices, when we manipulate content to conform to our preferred story... the consequences aren&#39;t paid by us. They&#39;re paid by them. The Iranians who protest don&#39;t need us to tweet <a href="https://jolek78.writeas.com/tag:IranProtests" class="hashtag"><span>#</span><span class="p-category">IranProtests</span></a> from the couch. They don&#39;t need exile TV channels manipulating their videos. They don&#39;t need the US State Department publicly “supporting” them (giving the regime the “foreign interference” narrative).</p>

<p>They need us to understand what&#39;s really happening, to distinguish between real protests and constructed narratives, to be careful about who we amplify and why. They need us to stop believing that the internet solves political problems with a simple “click and share.” Because, as Morozov warned us, it often complicates them.</p>

<p>The internet is serious business, and should be treated seriously.</p>

<h2 id="the-blackout-becomes-permanent">The blackout becomes permanent</h2>

<p>In mid-January 2026, news emerged that could make everything even more disturbing. Iran International reported that the Iranian regime is finalizing a project to permanently disconnect the country from the global internet. And it&#39;s not just a theoretical project. It&#39;s almost operational.</p>

<h3 id="the-architecture-of-the-great-iranian-firewall">The architecture of the great Iranian firewall</h3>

<p>The details are chilling in how concrete they are. The data center is bunkerized under the Fanap building in Pardis IT Town (20km from Tehran), designed to withstand missile attacks. It has a capacity of 400 server racks with Huawei hardware. Estimated cost is between $700 million and $1 billion. On the logistics side, 24 containers of equipment entered Iran after the June 2025 war. Management is assigned to ArvanCloud (Iranian cloud) through a shell company called Ayandeh Afzay-e Karaneh. And the connections are clear: Fanap and its CEO Shahab Javanmardi are under US sanctions for ties to intelligence and the IRGC.</p>

<h3 id="how-it-would-work-technically">How it would work technically</h3>

<p>The system is based on the National Information Network (NIN)—a project started in 2005, gradually implemented from 2013 and fully operational since 2019. It&#39;s the Iranian intranet, in essence. It works like this: when you connect in Iran, your traffic passes through a centralized control point—the Telecommunication Infrastructure Company (TIC), state monopoly. There, the system decides. Request for a .ir site or NIN service? Goes on the domestic Iranian network. Request for a foreign site? Goes to the gateway toward the global internet (if active).</p>

<p>The “kill switch” simply disables the foreign gateway. And suddenly Iranian banks work (on NIN), local e-commerce works (on NIN), government services work (on NIN), Iranian emails work (on NIN), while Google, Twitter, Facebook, all the foreign internet is at zero. The difference from 2019 is substantial. Before, shutting off the internet meant paralyzing the economy—no banks, no payments, nothing. It cost billions per day. It wasn&#39;t sustainable long-term. Now instead? They can shut off the global internet while leaving everything else working. It&#39;s economically sustainable. They can maintain it for months.</p>
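The decision logic described in the last two paragraphs fits in a few lines. A toy model, where the function and domain names are mine and purely illustrative:

```python
# Toy model of the NIN gateway described above: domestic traffic always
# routes; foreign traffic depends on one flag, the "kill switch".
DOMESTIC_SUFFIX = ".ir"

def route(domain: str, foreign_gateway_up: bool) -> str:
    if domain.endswith(DOMESTIC_SUFFIX):
        return "NIN"        # banks, e-commerce, state services keep working
    return "global internet" if foreign_gateway_up else "unreachable"
```

Flip `foreign_gateway_up` to False and a site like example.ir still resolves while google.com simply ceases to exist, which is exactly why this blackout, unlike the 2019 one, is cheap enough to sustain for months.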

<h2 id="a-technological-paradox">A technological paradox</h2>

<p>Here&#39;s something that struck me: technically it&#39;s sophisticated—very sophisticated. But strategically... there&#39;s a contradiction that almost doesn&#39;t make sense. Let&#39;s look at how modern surveillance works in Russia and the United States, not to defend it, obviously, but to understand the difference in approach.</p>

<p><strong>The Russian model (SORM):</strong> The internet stays open and functioning. Users can access Google, Facebook, Twitter. But every ISP has installed an FSB “black box” that records everything. Every email, every click, every message. Storage is mandatory: 6 months of full content, 3 years of metadata according to the 2016 Yarovaya law. The FSB can retrieve data in real-time directly, without the ISP knowing what they&#39;re looking for. In 2023: 500,000 surveillance requests approved, only 272 denied. The result? Opponents use the internet normally, thinking they&#39;re free. They organize, communicate, build networks. And meanwhile the system records everything. When needed—20,000+ arrests for online speech between 2022 and 2024—they already have all the evidence, all the contacts, the entire map of social relationships.</p>

<p><strong>The American model (PRISM):</strong> Same logic, different implementation. Since Snowden we know that the NSA accesses Google, Facebook, Microsoft and Apple servers directly. They collect everyone&#39;s metadata. “We kill people based on metadata,” said former CIA director Michael Hayden. The appearance of democracy and a free internet. The reality of invisible but total mass surveillance.</p>

<p><strong>The Iranian approach (NIN):</strong> Shut off the internet when needed.</p>

<p>By shutting off the internet, Iran loses all the intelligence capability these systems provide. They can no longer track who talks to whom. They can&#39;t infiltrate groups. They can&#39;t monitor opponents&#39; communications. They can&#39;t build maps of social networks. They deprive themselves of the most powerful surveillance tool in existence. In exchange they get the ability to hide massacres for a few weeks. But at the cost of complete loss of intelligence during the blackout, blatant evidence of authoritarianism, economic damage even with NIN functioning, international isolation, and a demonstration of their own fragility.</p>

<p>Russia and the USA have understood something that Iran seems not to have grasped: invisible control is infinitely more effective than visible control. You let people think they&#39;re free, let them use the internet, let them communicate. And meanwhile you record everything, analyze everything. When needed, you strike with surgical precision, having all the necessary evidence. Iran has built a visible digital cage. One that declares to the world “we&#39;re an authoritarian regime terrified of our population.” One that eliminates its own surveillance capability precisely when it would need it most. It&#39;s the difference between long-term thinking (building permanent intelligence systems) and short-term thinking (hiding today&#39;s massacres). SORM and PRISM are invisible dystopias, and they work precisely because people don&#39;t see them. NIN is a visible dystopia. And visible dystopias tend to generate revolutions, and to fail sooner.</p>

<p>IranWire reports that the plan is to maintain the blackout at least until the Iranian New Year, March 20, 2026.</p>

<p>It is, in essence, an act of desperation. The general population (level 1) will have only NIN, zero external access. “Authorized” professionals (level 2) will have NIN plus filtered internet. Government, IRGC and elite (level 3) will have full access. Every connection is tracked via national ID and phone number. Every access is attributable. And when they reactivate the internet—even partially—they&#39;ll know exactly who used Starlink, who used a VPN, who shared videos.</p>

<p>The model, needless to say, is China. The Chinese Great Firewall blocks foreign services but replaces them—Baidu instead of Google, Weibo instead of Twitter. China offers you an alternative, even if controlled. Iran? Iran can simply shut everything off and force you onto the national network. And with Huawei providing hardware and expertise (the same ones who built the Chinese system), and Russia providing advanced DPI technology (Protei), they have all the puzzle pieces.</p>

<p>And we&#39;re back to square one, as usual.</p>

<h2 id="tor-the-infinite-technological-war">Tor: the infinite technological war</h2>

<p>While Starlink makes headlines as the only tool of freedom, there&#39;s a tool that for nearly 20 years has been playing hide and seek with authoritarian regimes. Tor—The Onion Router—is historically the tool of choice for those living under censorship. Against the Chinese Great Firewall. During the Russian invasion of Ukraine. In Egypt during the Arab Spring. In Syria. And in Iran, repeatedly.</p>

<p>Every time Iran has experienced moments of crisis, Tor has seen massive usage spikes:</p>

<p><strong>2009—Green Movement:</strong> Tor usage shot up to 1.5 million Iranian users. The regime blocked direct connections. Users discovered Tor Bridges (non-public relays, harder to block). The regime learned.</p>

<p><strong>2019—November, gasoline protests:</strong> Complete blackout for 6 days. Tor usage dropped to zero along with all internet. But when they turned it back on, the number of Tor users was <em>higher</em> than before. People had learned.</p>

<p><strong>2022—Mahsa Amini, Woman Life Freedom:</strong> Nightly digital curfews (only mobile networks switched off, from 4:00 to 10:00 PM). Tor Bridges exploded. The regime implemented DPI to recognize Tor traffic and block it selectively.</p>

<p>And here&#39;s the interesting point. It&#39;s not a simple block. It&#39;s a continuous technological war.</p>

<p>In 2012, a Tor developer wrote on the official blog a phrase that should make us reflect:</p>

<blockquote><p>“The Iranian government has, in less than a year and starting from scratch, caught up and now surpassed the Tor project in technical ability.”</p></blockquote>

<p>What does this mean practically? That the Iranian regime has developed DPI systems capable of recognizing Tor traffic even though it&#39;s encrypted. Wait, how is that possible? Tor wraps its traffic in SSL/TLS, just like ordinary HTTPS. Everything is encrypted. How do they distinguish it? By watching behavior, not content. It&#39;s like recognizing someone by the way they walk even if they&#39;re wearing a disguise. Iranian DPI analyzes:</p>
<ul><li><strong>Packet timing:</strong> Tor routes traffic through three relays, creating characteristic latency patterns</li>
<li><strong>Packet size:</strong> Tor uses 512-byte cells, an unusual size</li>
<li><strong>TLS handshake:</strong> The “hello, I&#39;m a client” / “hello, I&#39;m a server” sequence has specific patterns for each protocol</li>
<li><strong>Traffic flow:</strong> Tor sends data in bursts different from a normal HTTPS connection</li></ul>

<p>They&#39;re not reading inside encrypted packets. They&#39;re watching from outside and recognizing the fingerprint. In real-time. On all national traffic. It&#39;s technically impressive.</p>
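<p>A toy version of such a fingerprint heuristic might look like this. It is a deliberately simplified sketch: real DPI engines combine many signals statistically, and the threshold and function name here are assumptions, not the regime&#39;s actual rules.</p>

```python
# Toy traffic-fingerprint heuristic (illustrative only, not real DPI).
# Flags a flow when most payload sizes are multiples of Tor's 512-byte cell.

def looks_like_tor(payload_sizes: list[int], threshold: float = 0.8) -> bool:
    """Heuristic: Tor packs data into fixed 512-byte cells, so payload
    sizes cluster on multiples of 512; ordinary HTTPS sizes do not."""
    if not payload_sizes:
        return False
    cell_aligned = sum(1 for s in payload_sizes if s > 0 and s % 512 == 0)
    return cell_aligned / len(payload_sizes) >= threshold

# A flow of cell-aligned payloads trips the heuristic...
assert looks_like_tor([512, 1024, 512, 512, 1536]) is True
# ...while typical variable-size HTTPS payloads do not.
assert looks_like_tor([1460, 333, 87, 1200, 512]) is False
```

<p>Each such signal is weak on its own; the power comes from combining several of them across all national traffic.</p>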

<h3 id="but-the-tor-project-responded">But the Tor project responded</h3>

<p>The strategy has evolved over time; essentially, it disguises Tor traffic entirely, making it look like something else.</p>
<ul><li><strong>Pluggable Transports:</strong> Tor in disguise. Traffic is made to look like normal web browsing, or Skype, or something else.</li>
<li><strong>Snowflake:</strong> Tor hiding behind WebRTC connections (those used for video calls). Hard to block without blocking all video calls.</li>
<li><strong>Meek:</strong> Tor disguising itself as traffic toward legitimate services like Microsoft Azure or Amazon CloudFront. To block it they have to block services they themselves use.</li>
<li><strong>Distributed Bridge Relays:</strong> Secret non-public relays, harder to identify and block.</li></ul>
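<p>In practice, using one of these bridges comes down to a few lines of torrc configuration. The sketch below assumes the obfs4 transport; the bridge address, fingerprint and certificate are placeholders, not a real bridge line.</p>

```
# torrc sketch: route Tor through an obfs4 bridge (placeholders below).
UseBridges 1
ClientTransportPlugin obfs4 exec /usr/bin/obfs4proxy
# Replace with a real line obtained from bridges.torproject.org:
Bridge obfs4 192.0.2.1:443 0123456789ABCDEF0123456789ABCDEF01234567 cert=... iat-mode=0
```

<p>Because the bridge is not in the public relay list and the transport reshapes the traffic, the DPI signatures described above have far less to latch onto.</p>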

<p>And it works. Sometimes. Until the regime updates again. Snowflake gets identified? Tor develops a new pluggable transport. The regime recognizes it? The next one is developed. For every step forward by censors, Tor actively responds.</p>

<p>And now? January 2026? Here&#39;s a problem. Tor usage data always has a publication delay to protect users. But historically, the pattern is always the same: crisis and protests begin, censorship increases, Tor usage shoots up, the regime develops countermeasures, and finally total blackout if necessary. Given that we&#39;re in total blackout, Tor usage has crashed to zero—like in 2019. You can&#39;t use Tor if you don&#39;t have internet, not even censored internet. But when they turn it back on—and they will, even just partially—I expect to see a massive spike. Because Iranians have learned. Starlink costs too much. Normal VPNs get blocked. Tor, with the right bridges, still works.</p>

<p>But there&#39;s another paradox. Tor protects anonymity during connection. But the simple fact of trying to connect to Tor is identifiable by DPI. And traceable to your national ID. So the regime can see: who tried to use Tor (even if blocked), when they tried and for how long. And when the blackout ends, they might have a complete list of “technologically sophisticated dissidents” to arrest. It&#39;s the same logic as Starlink—retroactive use as evidence of dissent. The fight for free internet in Iran has been going on for nearly 20 years. It&#39;s not a new story. But even Tor can be defeated by a total blackout. And with the NIN/Huawei system becoming permanent, even when they turn the internet back on it might be an internet so controlled, so filtered, so tracked, that not even Tor will be enough.</p>

<h2 id="conclusions-and-some-questions">Conclusions—and some questions</h2>

<p>I started with Morozov, with that unpleasant feeling from ten years ago. With the discovery that the “Twitter Revolution” of 2009 was a Western projection, not an Iranian reality. And I&#39;ve arrived here. Iran 2026. Same film, different cast. Same narratives constructed from abroad. Same amplification of exile voices. Same video manipulation. Same regime using all this as justification to massacre. But there&#39;s a crucial difference from 2009. In 2009, the regime had learned that the internet was useful to them (surveillance) but dangerous (documentation). In 2026, they&#39;ve solved the equation radically: they&#39;ve built a system to have internet when they need it (domestic NIN) and shut it off when they don&#39;t (kill switch toward the outside). 700 million—1 billion dollars. Huawei hardware. Russian DPI. Anti-missile bunker. 400 server racks. Operational by March 2026. It&#39;s no longer temporary and expensive censorship. It&#39;s permanent information control infrastructure. It&#39;s a digital cage.</p>

<p>Where does the content about Iranian protests come from while the internet has been off for 9+ days? Mainly from abroad. From exile TV channels with controversial funding. From diaspora living thousands of miles away. From sources with clear agendas and, in some cases, documented manipulation.</p>

<p>Who is driving the narrative? Pahlavi from the USA. Manoto TV altering audio. Iran International accused of false voice-overs. US State Department tweeting in Persian. Diaspora demonstrating with Shah flags.</p>

<p>Who is driving the real protests in Iran? Probably no one. Probably it&#39;s leaderless, organic, driven by economic desperation and 47 years of repression. The people in the streets shout “bread, work, freedom”—not necessarily “bring us back the Shah.”</p>

<p>But the narrative reaching us? That one talks about Pahlavi. About monarchy. About “Iranian Revolution 2.0.” Exactly the narrative the regime <em>wants</em> to justify the massacres. “See? Western plot. Foreign agents. Monarchist terrorists.”</p>

<p>And the gap between narrative and reality? It costs human lives. 2,000 dead? 6,000? 12,000? 20,000? We&#39;ll never know for certain, precisely thanks to the blackout that was supposed to be “temporary” and is becoming permanent.</p>

<h2 id="morozov-was-right">Morozov was right</h2>

<p>The internet, unfortunately, is not free by definition. Technology amplifies existing power dynamics. Authoritarian regimes learn, adapt, build increasingly sophisticated systems. The Iranian regime has spent 17 years—from 2009 to today—studying how to control the internet. They&#39;ve invested billions. They&#39;ve collaborated with China and Russia. They&#39;ve developed DPI that recognizes Tor, systems that block VPNs, architectures that allow economically sustainable blackouts. And the technological “resistance”? It depends on Elon Musk donating Starlink—and he can decide to turn it off tomorrow. It depends on Tor Project playing whack-a-mole with Iranian countermeasures. It depends on individuals who risk arrest and torture to use circumvention technologies. It&#39;s not a fair fight. It never was.</p>

<p>Cyber-utopianism is a drug. It makes us feel good. It makes us feel like we&#39;re “helping.” That technology always wins. That the internet liberates. But reality is more complex, more uncomfortable. Technology is a tool. And like all tools, it can be used to liberate or to oppress. Authoritarian regimes have resources, expertise, and zero ethical constraints. The “resistance” has volunteers, limited budgets, and the weight of not wanting to cause harm. The Iranians who protest don&#39;t need us to celebrate Starlink as savior. They don&#39;t need us to amplify narratives constructed from abroad. They don&#39;t need our slacktivism. They need us to understand what&#39;s really happening. To distinguish between real protests and constructed narratives. To not give the regime the propaganda ammunition it needs. To stop believing that the internet solves political problems. They need us to finally learn the lesson Morozov was trying to teach us 15 years ago.</p>

<p><a href="https://remark.as/p/jolek78/iran-2026-17-years-later-same-mistake">Discuss...</a></p>

<p><a href="https://jolek78.writeas.com/tag:Iran" class="hashtag"><span>#</span><span class="p-category">Iran</span></a> <a href="https://jolek78.writeas.com/tag:IranProtests" class="hashtag"><span>#</span><span class="p-category">IranProtests</span></a> <a href="https://jolek78.writeas.com/tag:NetDelusion" class="hashtag"><span>#</span><span class="p-category">NetDelusion</span></a> <a href="https://jolek78.writeas.com/tag:EvgenyMorozov" class="hashtag"><span>#</span><span class="p-category">EvgenyMorozov</span></a> <a href="https://jolek78.writeas.com/tag:TwitterRevolution" class="hashtag"><span>#</span><span class="p-category">TwitterRevolution</span></a> <a href="https://jolek78.writeas.com/tag:Tor" class="hashtag"><span>#</span><span class="p-category">Tor</span></a> <a href="https://jolek78.writeas.com/tag:Starlink" class="hashtag"><span>#</span><span class="p-category">Starlink</span></a> <a href="https://jolek78.writeas.com/tag:DigitalCensorship" class="hashtag"><span>#</span><span class="p-category">DigitalCensorship</span></a> <a href="https://jolek78.writeas.com/tag:InternetFreedom" class="hashtag"><span>#</span><span class="p-category">InternetFreedom</span></a> <a href="https://jolek78.writeas.com/tag:Authoritarianism" class="hashtag"><span>#</span><span class="p-category">Authoritarianism</span></a></p>

<div class="center"><a href="https://fosstodon.org/@jolek78">Mastodon</a> :: <a href="https://pixelfed.social/jolek78">Pixelfed</a> :: <a href="mailto:jolek78@posteo.net">Email</a>  :: <a href="https://app.cinny.in/login/envs.net/#/user/@jolek78:envs.net"> Element</a></div>
]]></content:encoded>
      <guid>https://jolek78.writeas.com/iran-2026-17-years-later-same-mistake</guid>
      <pubDate>Sun, 18 Jan 2026 20:30:00 +0000</pubDate>
    </item>
    <item>
      <title>Anna&#39;s Archive: Robin Hood of knowledge or</title>
      <link>https://jolek78.writeas.com/annas-archive-robin-hood-of-knowledge-or?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[3:00 AM. Another one of those nights where my brain decided sleep was overrated. After my usual nocturnal walk through the streets of a remote Scottish town—where even a fox observed me with that &#34;humans are weird&#34; look—I sat back down at my server. Just a quick scan of my RSS feeds, I told myself, then I can start work. When...&#xA;&#xA;  We backed up Spotify (metadata and music files). It&#39;s distributed in bulk torrents (~300TB), grouped by popularity.&#xA;  This release includes the largest publicly available music metadata database with 256 million tracks and 186 million unique ISRCs.&#xA;  It&#39;s the world&#39;s first &#34;preservation archive&#34; for music which is fully open (meaning it can easily be mirrored by anyone with enough disk space), with 86 million music files, representing around 99.6% of listens.&#xA;&#xA;The news came from Anna&#39;s Archive—the world&#39;s largest pirate library—which had just scraped Spotify&#39;s entire catalog. Not just metadata, but also the audio files. 86 million tracks, 300 terabytes. I stopped to reread those numbers, then thought: holy shit, how big is this thing? &#xA;&#xA;!--more--&#xA;&#xA;And so, while the rest of the world slept, I started digging. This is one of those stories that needs to be told—a story weaving together hacker idealism, technology, billions of dollars in AI training data, and an ethical paradox few want to truly confront.&#xA;&#xA;When Z-Library fell&#xA;November 3, 2022. The FBI seized Z-Library&#39;s domains, one of the world&#39;s largest pirate libraries. Two alleged operators were arrested in Argentina. The community panicked—Z-Library served millions of students, researchers, and readers. And suddenly, everything vanished.&#xA;&#xA;But someone was prepared. A group called PiLiMi (Pirate Library Mirror) had created complete backups of all shadow libraries for years. LibGen, Z-Library, Sci-Hub. Everything. 
When Z-Library fell, these backups were ready. But there was a problem: petabytes of unusable data with no way to search them.&#xA;&#xA;Enter Anna Archivist—a pseudonym, probably a collective—who understood something fundamental: preserving data is useless if it&#39;s not accessible. Days after Z-Library&#39;s seizure, Anna&#39;s Archive was online with a meta-search engine aggregating all shadow library catalogs, making them searchable and—crucially—virtually impossible to censor.&#xA;&#xA;The numbers&#xA;December 2025:&#xA;&#xA;61.3 million books (PDF, EPUB, MOBI, DjVu)&#xA;95.5 million academic papers&#xA;256 million music tracks (Spotify metadata)&#xA;86 million audio files (~300TB)&#xA;Total: ~1.1 Petabyte in unified torrents&#xA;&#xA;To put this in perspective: the sum of all academic knowledge produced by humanity, plus a gigantic slice of world literary production, plus now music. All indexed, searchable, downloadable. Free. And virtually impossible to shut down.&#xA;&#xA;Why it can&#39;t be killed&#xA;Remember Napster? Centralized servers, one lawsuit, shut down in a day. BitTorrent learned from that—decentralized everything. But Anna&#39;s Archive goes further, combining layers of resilience that make it practically immortal:&#xA;&#xA;Distributed Frontend: Multiple domain mirrors (.li, .se, .org, .gs), Tor hidden service, Progressive Web App that works offline. Block one, others continue.&#xA;&#xA;Distributed Database: Elasticsearch + PostgreSQL + public API. Anyone can download the entire database and host their own instance. No central server to attack.&#xA;&#xA;Distributed Files: This is the genius part. Anna&#39;s Archive hosts almost nothing directly. 
Instead:&#xA;&#xA;IPFS (InterPlanetary File System): Files identified by cryptographic hash, served by volunteer nodes worldwide&#xA;BitTorrent: Classic torrents with multiple trackers, self-sustaining swarms&#xA;HTTP Gateways: For normal users who just want to click-and-download, links redirect to public IPFS gateways&#xA;&#xA;Result: user downloads via normal HTTP, but content comes from a decentralized network. Can&#39;t shut down IPFS. Can&#39;t stop BitTorrent. Can block gateways, but hundreds exist and anyone can create new ones.&#xA;&#xA;OpSec: Domains registered via privacy-focused Icelandic registrar, bulletproof hosting in non-cooperative jurisdictions, Bitcoin payments, PGP-encrypted communications, zero personal information.&#xA;&#xA;The only way to stop Anna&#39;s Archive would be to shut down the internet. Or convince every single seeder to stop. Good luck.&#xA;&#xA;81.7 terabytes free for meta&#xA;And here&#39;s where it gets disturbing.&#xA;&#xA;February 2025. Documents from Kadrey v. Meta are unsealed—a class action by authors against Meta for using their pirated books to train Llama AI models. Internal emails reveal a shocking timeline:&#xA;&#xA;October 2022 - Melanie Kambadur, Senior Research Manager:&#xA;&#xA;  I don&#39;t think we should use pirated material. I really need to draw a line there.&#xA;&#xA;Eleonora Presani, Meta employee:&#xA;&#xA;  Using pirated material should be beyond our ethical threshold. SciHub, ResearchGate, LibGen are basically like PirateBay... they&#39;re distributing content that is protected by copyright and they&#39;re infringing it.&#xA;&#xA;January 2023 - Meeting with Mark Zuckerberg present:&#xA;&#xA;  [Zuckerberg] wants to move this stuff forward, and we need to find a way to unblock all this.&#xA;&#xA;April 2023 - Nikolay Bashlykov, Meta engineer:&#xA;&#xA;  Using Meta IP addresses to load through torrents pirate content... 
torrenting from a corporate laptop doesn&#39;t feel right.&#xA;&#xA;2023-2024: The Operation&#xA;&#xA;Meta downloaded:&#xA;&#xA;81.7 TB via Anna&#39;s Archive torrents (35.7 TB from Z-Library alone)&#xA;80.6 TB from LibGen&#xA;Total: ~162 TB of pirated books&#xA;&#xA;Method: BitTorrent client on separate infrastructure, VPN to obscure origin, active seeding to other peers. Result: 197,000 copyrighted books integrated into Llama training data.&#xA;&#xA;June 2025: the ruling&#xA;Judge Vince Chhabria (Northern District California) applied the four-factor fair use test. The decision is legally fascinating and ethically disturbing.&#xA;&#xA;Factor 1 - Transformative Use: Meta wins decisively. The judge ruled AI training is &#34;spectacularly transformative&#34;—fundamentally different from human reading. The purpose isn&#39;t to express the content but to learn statistical relationships between words.&#xA;&#xA;Factor 2 - Nature of Work: Neutral. Creative fiction gets more copyright protection than factual works, but this didn&#39;t tip the scales either way.&#xA;&#xA;Factor 3 - Amount Used: Meta wins. Even though they used entire books, the judge found this necessary for training. You can&#39;t cherry-pick sentences and expect an AI to learn language patterns.&#xA;&#xA;Factor 4 - Market Effect: This is where the judge&#39;s discomfort shows through:&#xA;&#xA;  Generative AI has the potential to flood the market with endless amounts of images, songs, articles, books... So by training generative AI models with copyrighted works, companies are creating something that often will dramatically undermine the market for those works, and thus dramatically undermine the incentive for human beings to create things the old-fashioned way.&#xA;&#xA;He sees the problem clearly. AI trained on copyrighted works will compete with and potentially destroy the market for those very works. 
But the plaintiffs couldn&#39;t prove specific economic harm with hard data.&#xA;&#xA;The final ruling: &#34;Given the state of the record, the Court has no choice but to grant summary judgment.&#34; Meta wins on these specific facts. But the judge adds a critical caveat: &#34;In most cases, training LLMs on copyrighted works without permission is likely infringing and not fair use.&#34;&#xA;&#xA;Meta didn&#39;t win because what they did was legitimate. They won because the authors&#39; lawyers didn&#39;t build a strong enough evidentiary case. It&#39;s a technical legal victory that sidesteps the ethical question entirely.&#xA;&#xA;The precedent this sets is chilling: AI companies can pirate with relative impunity if they have good lawyers and plaintiffs can&#39;t prove specific damages.&#xA;&#xA;The math&#xA;Scenario A (legal):&#xA;&#xA;Meta negotiates licenses with publishers&#xA;Cost: $50-100 million (conservative estimate)&#xA;Authors receive royalties&#xA;&#xA;Scenario B (what they did):&#xA;&#xA;Download 81.7 TB for free&#xA;Legal defense: ~$5 million&#xA;Win in court&#xA;Authors receive: $0&#xA;&#xA;Meta&#39;s savings: $45-95 million&#xA;&#xA;And now every AI company knows: download from Anna&#39;s Archive, risk a lawsuit with weak evidence, save tens of millions.&#xA;&#xA;Anna&#39;s Archive also revealed they provide &#34;SFTP bulk access to approximately 30 companies&#34;—primarily Chinese LLM startups and data brokers—who contribute money or data. DeepSeek publicly admitted using Anna&#39;s Archive data for training. No consequences in Chinese jurisdiction.&#xA;&#xA;Aaron Swartz and the question that haunts this story&#xA;There&#39;s a ghost here. His name is Aaron Swartz, and his story illuminates everything wrong with how we treat information access.&#xA;&#xA;2011: Aaron, 24, brilliant programmer, Reddit co-founder, and information freedom activist, connected to MIT&#39;s network and downloaded 4.8 million academic papers from JSTOR. 
His intent was to make publicly-funded research freely available. He wasn&#39;t enriching himself. He was acting on principle.&#xA;&#xA;The response was swift and brutal. Federal prosecutors threw the book at him: 13 felony charges, maximum penalty of 50 years in prison and $1 million in fines. For downloading academic papers. The prosecution was led by U.S. Attorney Carmen Ortiz, who called it &#34;stealing is stealing, whether you use a computer command or a crowbar.&#34;&#xA;&#xA;The pressure was immense. Aaron faced financial ruin, decades in prison, complete destruction of his life. In January 2013, at age 26, he hanged himself. His family and partner blamed the aggressive prosecution. The internet mourned a brilliant mind and passionate advocate crushed by prosecutorial overreach.&#xA;&#xA;Now consider the parallel:&#xA;&#xA;Aaron Swartz: 4.8 million papers → federal persecution, suicide at 26&#xA;&#xA;Meta: 162 TB (~162 million papers) → wins in court, saves $95 million&#xA;&#xA;Aaron was an individual acting on idealistic principles about information freedom. Meta is a trillion-dollar corporation acting on profit motives. Aaron faced the full weight of federal prosecution. Meta faced a civil lawsuit they successfully defended with their massive legal team.&#xA;&#xA;The system punishes idealism and rewards profit. The disparity isn&#39;t just unjust—it reveals something fundamental about who gets to break rules and who doesn&#39;t.&#xA;&#xA;The paradox no one wants to see&#xA;Anna&#39;s Archive claims to fight publishing monopolies and inequality in access to knowledge. But the reality:&#xA;&#xA;Who benefits most?&#xA;&#xA;Meta: 81.7 TB free, $95M saved&#xA;~30 AI companies: privileged access&#xA;Corporations with $100M+ compute budgets&#xA;&#xA;Resources needed to benefit:&#xA;&#xA;Storage/Bandwidth: trivial for Meta ($1000s)&#xA;Computing for training: MASSIVE ($10-100M)&#xA;Legal defense: MASSIVE ($millions)&#xA;&#xA;Only big tech can afford this. 
The result:&#xA;&#xA;Data: socialized (Anna&#39;s Archive, shared risk)&#xA;Profits: privatized (proprietary LLMs, paid APIs)&#xA;Costs: externalized (authors not compensated)&#xA;&#xA;But what about students in the Global South?&#xA;&#xA;This is where the story gets complicated, because the benefits are real and they matter immensely.&#xA;&#xA;Consider a medical student in India. Her family earns about $400/month. A single medical textbook costs $300-500. She needs fifteen of them. The math is impossible. Her options: don&#39;t graduate, or Anna&#39;s Archive. She chose the latter and completed her degree. She&#39;s now a practicing physician.&#xA;&#xA;Or take a PhD researcher in South Africa studying climate change impacts. The critical papers for his dissertation are behind Elsevier&#39;s paywall at $35 each. He needs twenty papers minimum—$700 his university can&#39;t afford. Without Sci-Hub (accessible through Anna&#39;s Archive), his dissertation would have been impossible. He completed it, published findings that inform local climate policy.&#xA;&#xA;An art history teacher in Argentina wanted to enrich her curriculum with Renaissance art analysis. The books she needed weren&#39;t available in local libraries. Importing them? Prohibitive between shipping costs and customs. Anna&#39;s Archive gave her access to rare texts that transformed her teaching.&#xA;&#xA;The data backs this up: literature review times for researchers in developing countries reduced 60-80%. Citation patterns show researchers in Nigeria, Bangladesh, Ecuador now cite contemporary research at parity with Harvard and Oxford. Publications from developing countries have increased. Methodological quality has improved. International collaborations have expanded.&#xA;&#xA;This matters. This changes lives. 
This is not hypothetical.&#xA;&#xA;The problem is: both things are simultaneously true.&#xA;&#xA;Anna&#39;s Archive saves academic careers in the Global South&#xA;Anna&#39;s Archive allows Meta to save $95 million&#xA;&#xA;But Meta downloaded more data in one week than all Indian students download in a year. How do we square that?&#xA;&#xA;The broken system that created this monster&#xA;To understand why Anna&#39;s Archive exists and why it&#39;s grown so explosively, you need to understand how fundamentally broken academic publishing has become.&#xA;&#xA;Here&#39;s the perverse cycle:&#xA;&#xA;Researcher writes paper (unpaid)&#xA;Other researchers peer review it (unpaid)&#xA;Publisher publishes it&#xA;Researcher&#39;s own university must pay to read it&#xA;Publisher profits: Elsevier and Wiley report 35-40% profit margins&#xA;&#xA;Today, over 70% of academic papers sit behind paywalls. Access costs $35-50 per paper for individuals, or $10,000-100,000+ per year for institutional subscriptions. Universities in developing countries simply cannot afford these subscriptions. Neither can most universities in developed countries—Harvard famously called journal subscription costs &#34;fiscally unsustainable&#34; in 2012.&#xA;&#xA;The system extracts free labor from researchers, locks up publicly-funded research behind paywalls, charges exorbitant fees to access it, and funnels enormous profits to publishers who add relatively little value. Academic institutions create the knowledge, do the quality control, and then pay again to access their own work.&#xA;&#xA;Sci-Hub and Anna&#39;s Archive didn&#39;t emerge from nowhere. They&#39;re responses to a genuinely broken system. 
The question is whether they&#39;re the right response—and who ultimately benefits most from that response.&#xA;&#xA;The architecture determines the ethics&#xA;Anna&#39;s Archive can&#39;t discriminate because:&#xA;&#xA;Open source philosophy: everyone or no one&#xA;Technical impossibility: how do you block Meta but not students?&#xA;Legal strategy: claiming &#34;non-hosting&#34; makes usage control impossible&#xA;&#xA;IPFS and BitTorrent are magnificent tools for resisting censorship. But resistance to censorship also means resistance to ethical control. You can&#39;t have one without the other.&#xA;&#xA;The system is structurally designed to be unkillable. Which also means it&#39;s structurally designed to serve whoever has the resources to benefit most.&#xA;&#xA;Where does it end?&#xA;December 2025: Anna&#39;s Archive announced they&#39;d scraped Spotify. The same preservation narrative, the same pattern. 256 million tracks, 86 million audio files, 300TB available to anyone with the infrastructure to use it.&#xA;&#xA;&#34;This Spotify scrape is our humble attempt to start such a &#39;preservation archive&#39; for music,&#34; they wrote. The justification mirrors the books argument: Spotify loses licenses, music disappears; platform risk if Spotify fails; regional blocks prevent access; long tail poorly preserved.&#xA;&#xA;All true. But who downloads 300TB of music? Not the kid in Malawi who just wants to listen to his favorite artist. ByteDance, training the next AI music generator. Startups building Spotify competitors. The same companies with compute budgets in the tens of millions.&#xA;&#xA;Anna&#39;s Archive is pivoting from text to multimedia, and each escalation follows a predictable pattern:&#xA;&#xA;Books → Justified by paywalls and academic access&#xA;Papers → Justified by broken academic publishing&#xA;Music → Justified by platform risk and preservation&#xA;Video? 
→ What&#39;s the justification for the next step?&#xA;&#xA;With each escalation:&#xA;&#xA;The value for big tech increases exponentially&#xA;The proportion of benefit for individual students decreases&#xA;Mass piracy becomes normalized as &#34;preservation&#34;&#xA;The ethical questions get harder to answer&#xA;&#xA;And the international precedent is already being set. Japan&#39;s AI Minister (January 2025) stated explicitly: &#34;AI companies in Japan can use whatever they want for AI training... whether it is content obtained from illegal sites or otherwise.&#34;&#xA;&#xA;The message from governments: pirate freely if it serves AI supremacy. We&#39;re in a race to the bottom where copyright becomes meaningless for AI training, and the companies with the most resources benefit most.&#xA;&#xA;Conclusions: I don&#39;t know which way to turn&#xA;I started from that sleepless night, 256 million songs in an RSS feed, and ended up here with more questions than answers.&#xA;&#xA;Anna&#39;s Archive is a technological marvel—IPFS, BitTorrent, distributed databases creating something genuinely uncensorable. It&#39;s also a lifeline for millions of students and researchers locked out of knowledge by an exploitative publishing system. And simultaneously, it&#39;s the largest intellectual property expropriation operation in history, saving corporations hundreds of millions while creators receive nothing.&#xA;&#xA;All of these things are true at once. This isn&#39;t a simple story with heroes and villains.&#xA;&#xA;The academic publishing system is genuinely broken. Researchers create knowledge for free, review it for free, then their institutions must pay exorbitant fees to access it while publishers extract 35-40% profit margins. This system deserves to be disrupted.&#xA;&#xA;But Anna&#39;s Archive isn&#39;t disrupting it equitably. 
The architecture that makes it uncensorable also makes it impossible to distinguish between a student in Lagos accessing a textbook and Meta downloading 162TB for AI training. You can&#39;t have selective resistance to censorship—it&#39;s all or nothing.&#xA;&#xA;Aaron Swartz died fighting for information freedom with idealistic principles. Meta achieves the same result with corporate profit motives and walks away victorious. The system rewards power and punishes principle.&#xA;&#xA;Can this be fixed? Copyright reform moves at the speed of politics—years, decades. Compulsory licensing for AI training? Just beginning to be discussed. Open access mandates? Facing massive publisher resistance. Meanwhile, Anna&#39;s Archive operates at the speed of software, and data flows freely to those with $100M compute clusters.&#xA;&#xA;The question isn&#39;t whether Anna&#39;s Archive will be stopped—it won&#39;t be, that&#39;s the point of the architecture. The question is what world we&#39;re building where the same technology that liberates a medical student in India also bankrolls Meta&#39;s AI ambitions, and we can&#39;t separate one from the other.&#xA;&#xA;I don&#39;t have answers. I have a functioning IPFS node, a Tor relay, and the uncomfortable knowledge that every byte I help distribute might be saving a researcher&#39;s career or training someone&#39;s proprietary AI model. Probably both.&#xA;&#xA;Free for everyone. The problem is that &#34;everyone&#34; has very different resources to benefit from that freedom.&#xA;&#xA;Now, if you&#39;ll excuse me, I&#39;m going to check how much bandwidth my nodes are using. And reflect on whether participation is complicity or resistance. Maybe it&#39;s both. 
Maybe that&#39;s the point.&#xA;&#xA;Discuss: https://remark.as/p/jolek78/annas-archive-robin-hood-of-knowledge-or&#xA;&#xA;#AnnaArchive #AI #Copyright #AaronSwartz #Meta #AcademicPublishing #IPFS #InformationFreedom&#xA;&#xA;Mastodon: https://fosstodon.org/@jolek78 :: Pixelfed: https://pixelfed.social/jolek78 :: Email: jolek78@posteo.net :: Element: https://app.cinny.in/login/envs.net/#/user/@jolek78:envs.net]]&gt;</description>
      <content:encoded><![CDATA[<p>3:00 AM. Another one of those nights where my brain decided sleep was overrated. After my usual nocturnal walk through the streets of a remote Scottish town—where even a fox observed me with that “humans are weird” look—I sat back down at my server. Just a quick scan of my RSS feeds, I told myself, then I can start work. When...</p>

<blockquote><p>We backed up Spotify (metadata and music files). It&#39;s distributed in bulk torrents (~300TB), grouped by popularity.
This release includes the largest publicly available music metadata database with 256 million tracks and 186 million unique ISRCs.
It&#39;s the world&#39;s first “preservation archive” for music which is fully open (meaning it can easily be mirrored by anyone with enough disk space), with 86 million music files, representing around 99.6% of listens.</p></blockquote>

<p>The news came from <a href="https://annas-archive.li/">Anna&#39;s Archive</a>—the world&#39;s largest pirate library—which had just scraped Spotify&#39;s entire catalog. Not just metadata, but also the audio files. 86 million tracks, 300 terabytes. I stopped to reread those numbers, then thought: holy shit, how big is this thing?</p>

<p>And so, while the rest of the world slept, I started digging. This is one of those stories that needs to be told—a story weaving together hacker idealism, technology, billions of dollars in AI training data, and an ethical paradox few want to truly confront.</p>

<h3 id="when-z-library-fell">When Z-Library fell</h3>

<p>November 3, 2022. The FBI seized the domains of Z-Library, one of the world&#39;s largest pirate libraries. Two alleged operators were arrested in Argentina. The community panicked—Z-Library served millions of students, researchers, and readers. And suddenly, everything vanished.</p>

<p>But someone was prepared. A group called PiLiMi (Pirate Library Mirror) had spent years creating complete backups of the major shadow libraries. LibGen, Z-Library, Sci-Hub. Everything. When Z-Library fell, these backups were ready. But there was a problem: petabytes of data with no way to search them.</p>

<p>Enter Anna Archivist—a pseudonym, probably a collective—who understood something fundamental: preserving data is useless if it&#39;s not accessible. Days after Z-Library&#39;s seizure, Anna&#39;s Archive was online with a meta-search engine aggregating all shadow library catalogs, making them searchable and—crucially—virtually impossible to censor.</p>

<h3 id="the-numbers">The numbers</h3>

<p>December 2025:</p>
<ul><li>61.3 million books (PDF, EPUB, MOBI, DjVu)</li>
<li>95.5 million academic papers</li>
<li>256 million music tracks (Spotify metadata)</li>
<li>86 million audio files (~300TB)</li>
<li>Total: ~1.1 petabytes in unified torrents</li></ul>

<p>To put this in perspective: the sum of all academic knowledge produced by humanity, plus a gigantic slice of world literary production, plus now music. All indexed, searchable, downloadable. Free. And virtually impossible to shut down.</p>

<h3 id="why-it-can-t-be-killed">Why it can&#39;t be killed</h3>

<p>Remember Napster? Centralized servers meant a single lawsuit was enough to kill it. BitTorrent learned from that—decentralized everything. But Anna&#39;s Archive goes further, combining layers of resilience that make it practically immortal:</p>

<p><strong>Distributed Frontend:</strong> Multiple domain mirrors (.li, .se, .org, .gs), Tor hidden service, Progressive Web App that works offline. Block one, others continue.</p>

<p><strong>Distributed Database:</strong> Elasticsearch + PostgreSQL + public API. Anyone can download the entire database and host their own instance. No central server to attack.</p>

<p><strong>Distributed Files:</strong> This is the genius part. Anna&#39;s Archive hosts almost nothing directly. Instead:</p>
<ul><li>IPFS (InterPlanetary File System): Files identified by cryptographic hash, served by volunteer nodes worldwide</li>
<li>BitTorrent: Classic torrents with multiple trackers, self-sustaining swarms</li>
<li>HTTP Gateways: For normal users who just want to click-and-download, links redirect to public IPFS gateways</li></ul>

<p>Result: user downloads via normal HTTP, but content comes from a decentralized network. Can&#39;t shut down IPFS. Can&#39;t stop BitTorrent. Can block gateways, but hundreds exist and anyone can create new ones.</p>
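<p>The gateway-fallback flow can be sketched in a few lines of Python. To be clear, this is a minimal illustration under my own assumptions, not code from Anna&#39;s Archive: the gateway list is a small sample of public gateways, and the CID can be any content hash you want to resolve.</p>

```python
import urllib.request

# A small sample of public IPFS HTTP gateways; hundreds exist,
# and anyone can run a new one, so blocking a few changes nothing.
GATEWAYS = [
    "https://ipfs.io/ipfs/",
    "https://dweb.link/ipfs/",
    "https://cloudflare-ipfs.com/ipfs/",
]

def gateway_urls(cid: str) -> list[str]:
    """Candidate HTTP URLs for one content hash (CID)."""
    return [gw + cid for gw in GATEWAYS]

def fetch_by_cid(cid: str, timeout: float = 10.0) -> bytes:
    """Try each gateway in turn; the first one that answers wins.

    Because the CID is a cryptographic hash of the content itself,
    it does not matter which gateway actually serves the bytes.
    """
    last_err = None
    for url in gateway_urls(cid):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except OSError as err:
            last_err = err  # gateway blocked or down: try the next one
    raise RuntimeError(f"all gateways failed: {last_err}")
```

<p>From a censor&#39;s perspective, this is exactly the problem: the client speaks ordinary HTTPS to interchangeable endpoints, so there is no single address to block.</p>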

<p><strong>OpSec:</strong> Domains registered through a privacy-focused Icelandic registrar, bulletproof hosting in non-cooperative jurisdictions, Bitcoin payments, PGP-encrypted communications, zero personal information.</p>

<p>The only way to stop Anna&#39;s Archive would be to shut down the internet. Or convince every single seeder to stop. Good luck.</p>

<h3 id="81-7-terabytes-free-for-meta">81.7 terabytes free for Meta</h3>

<p>And here&#39;s where it gets disturbing.</p>

<p>February 2025. Documents from <em>Kadrey v. Meta</em> are unsealed—a class action by authors against Meta for using their pirated books to train Llama AI models. Internal emails reveal a shocking timeline:</p>

<p><strong>October 2022</strong> – Melanie Kambadur, Senior Research Manager:</p>

<blockquote><p>I don&#39;t think we should use pirated material. I really need to draw a line there.</p></blockquote>

<p>Eleonora Presani, Meta employee:</p>

<blockquote><p>Using pirated material should be beyond our ethical threshold. SciHub, ResearchGate, LibGen are basically like PirateBay... they&#39;re distributing content that is protected by copyright and they&#39;re infringing it.</p></blockquote>

<p><strong>January 2023</strong> – Meeting with Mark Zuckerberg present:</p>

<blockquote><p>[Zuckerberg] wants to move this stuff forward, and we need to find a way to unblock all this.</p></blockquote>

<p><strong>April 2023</strong> – Nikolay Bashlykov, Meta engineer:</p>

<blockquote><p>Using Meta IP addresses to load through torrents pirate content... torrenting from a corporate laptop doesn&#39;t feel right.</p></blockquote>

<p><strong>2023-2024: The Operation</strong></p>

<p>Meta downloaded:</p>
<ul><li>81.7 TB via Anna&#39;s Archive torrents (35.7 TB from Z-Library alone)</li>
<li>80.6 TB from LibGen</li>
<li>Total: ~162 TB of pirated books</li></ul>

<p>Method: BitTorrent client on separate infrastructure, VPN to obscure origin, active seeding to other peers. Result: 197,000 copyrighted books integrated into Llama training data.</p>

<h3 id="june-2025-the-ruling">June 2025: the ruling</h3>

<p>Judge Vince Chhabria (Northern District of California) applied the four-factor fair use test. The decision is legally fascinating and ethically disturbing.</p>

<p><strong>Factor 1 – Transformative Use:</strong> Meta wins decisively. The judge ruled AI training is “spectacularly transformative”—fundamentally different from human reading. The purpose isn&#39;t to express the content but to learn statistical relationships between words.</p>

<p><strong>Factor 2 – Nature of Work:</strong> Neutral. Creative fiction gets more copyright protection than factual works, but this didn&#39;t tip the scales either way.</p>

<p><strong>Factor 3 – Amount Used:</strong> Meta wins. Even though they used entire books, the judge found this necessary for training. You can&#39;t cherry-pick sentences and expect an AI to learn language patterns.</p>

<p><strong>Factor 4 – Market Effect:</strong> This is where the judge&#39;s discomfort shows through:</p>

<blockquote><p>Generative AI has the potential to flood the market with endless amounts of images, songs, articles, books... So by training generative AI models with copyrighted works, companies are creating something that often will dramatically undermine the market for those works, and thus dramatically undermine the incentive for human beings to create things the old-fashioned way.</p></blockquote>

<p>He sees the problem clearly. AI trained on copyrighted works will compete with and potentially destroy the market for those very works. But the plaintiffs couldn&#39;t prove specific economic harm with hard data.</p>

<p>The final ruling: “Given the state of the record, the Court has no choice but to grant summary judgment.” Meta wins on these specific facts. But the judge adds a critical caveat: “In most cases, training LLMs on copyrighted works without permission is likely infringing and not fair use.”</p>

<p>Meta didn&#39;t win because what they did was legitimate. They won because the authors&#39; lawyers didn&#39;t build a strong enough evidentiary case. It&#39;s a technical legal victory that sidesteps the ethical question entirely.</p>

<p>The precedent this sets is chilling: AI companies can pirate with relative impunity if they have good lawyers and plaintiffs can&#39;t prove specific damages.</p>

<h3 id="the-math">The math</h3>

<p><strong>Scenario A (legal):</strong></p>
<ul><li>Meta negotiates licenses with publishers</li>
<li>Cost: $50-100 million (conservative estimate)</li>
<li>Authors receive royalties</li></ul>

<p><strong>Scenario B (what they did):</strong></p>
<ul><li>Download 81.7 TB for free</li>
<li>Legal defense: ~$5 million</li>
<li>Win in court</li>
<li>Authors receive: $0</li></ul>

<p><strong>Meta&#39;s savings: $45-95 million</strong></p>
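<p>As a sanity check, here&#39;s the arithmetic behind that range, using only the estimates above:</p>

```python
# All figures in millions of USD, taken from the estimates above.
license_low, license_high = 50, 100   # Scenario A: negotiated licenses
legal_defense = 5                     # Scenario B: defending the lawsuit

savings_low = license_low - legal_defense
savings_high = license_high - legal_defense
print(f"Savings: ${savings_low}-{savings_high} million")  # -> Savings: $45-95 million
```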

<p>And now every AI company knows: download from Anna&#39;s Archive, risk a lawsuit with weak evidence, save tens of millions.</p>

<p>Anna&#39;s Archive also revealed they provide “SFTP bulk access to approximately 30 companies”—primarily Chinese LLM startups and data brokers—who contribute money or data. DeepSeek publicly admitted using Anna&#39;s Archive data for training. No consequences in Chinese jurisdiction.</p>

<h3 id="aaron-swartz-and-the-question-that-haunts-this-story">Aaron Swartz and the question that haunts this story</h3>

<p>There&#39;s a ghost here. His name is Aaron Swartz, and his story illuminates everything wrong with how we treat information access.</p>

<p>2011: Aaron, 24, brilliant programmer, Reddit co-founder, and information freedom activist, connected to MIT&#39;s network and downloaded 4.8 million academic papers from JSTOR. His intent was to make publicly-funded research freely available. He wasn&#39;t enriching himself. He was acting on principle.</p>

<p>The response was swift and brutal. Federal prosecutors threw the book at him: 13 felony charges, maximum penalty of 50 years in prison and $1 million in fines. For downloading academic papers. The prosecution was led by U.S. Attorney Carmen Ortiz, who declared that “stealing is stealing, whether you use a computer command or a crowbar.”</p>

<p>The pressure was immense. Aaron faced financial ruin, decades in prison, complete destruction of his life. In January 2013, at age 26, he hanged himself. His family and partner blamed the aggressive prosecution. The internet mourned a brilliant mind and passionate advocate crushed by prosecutorial overreach.</p>

<p>Now consider the parallel:</p>

<p><strong>Aaron Swartz: 4.8 million papers → federal prosecution, suicide at 26</strong></p>

<p><strong>Meta: 162 TB (roughly 162 million paper-equivalents) → wins in court, saves up to $95 million</strong></p>
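<p>That 162-million-papers figure is a back-of-envelope equivalence. Assuming an average of about 1 MB per paper (my assumption, not a measured number), the scale gap works out like this:</p>

```python
TB = 10**12    # decimal terabyte, in bytes
MB = 10**6

meta_bytes = 162 * TB        # Meta's combined LibGen + Anna's Archive downloads
paper_size = 1 * MB          # assumed average size of one academic paper

meta_paper_equiv = meta_bytes // paper_size   # 162_000_000
swartz_papers = 4_800_000                     # Aaron Swartz's JSTOR download

print(meta_paper_equiv / swartz_papers)       # -> 33.75
```

<p>Under that assumption, Meta moved roughly 34 times the volume Swartz was prosecuted for.</p>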

<p>Aaron was an individual acting on idealistic principles about information freedom. Meta is a trillion-dollar corporation acting on profit motives. Aaron faced the full weight of federal prosecution. Meta faced a civil lawsuit they successfully defended with their massive legal team.</p>

<p>The system punishes idealism and rewards profit. The disparity isn&#39;t just unjust—it reveals something fundamental about who gets to break rules and who doesn&#39;t.</p>

<h3 id="the-paradox-no-one-wants-to-see">The paradox no one wants to see</h3>

<p>Anna&#39;s Archive claims to fight publishing monopolies and inequality in access to knowledge. But the reality:</p>

<p><strong>Who benefits most?</strong></p>
<ul><li>Meta: 81.7 TB free, up to $95M saved</li>
<li>~30 AI companies: privileged access</li>
<li>Corporations with $100M+ compute budgets</li></ul>

<p><strong>Resources needed to benefit:</strong></p>
<ul><li>Storage/Bandwidth: trivial for Meta ($1000s)</li>
<li>Computing for training: MASSIVE ($10-100M)</li>
<li>Legal defense: MASSIVE ($millions)</li></ul>

<p>Only big tech can afford this. The result:</p>
<ul><li>Data: socialized (Anna&#39;s Archive, shared risk)</li>
<li>Profits: privatized (proprietary LLMs, paid APIs)</li>
<li>Costs: externalized (authors not compensated)</li></ul>

<p><strong>But what about students in the Global South?</strong></p>

<p>This is where the story gets complicated, because the benefits are real and they matter immensely.</p>

<p>Consider a medical student in India. Her family earns about $400/month. A single medical textbook costs $300-500. She needs fifteen of them. The math is impossible. Her options: don&#39;t graduate, or Anna&#39;s Archive. She chose the latter and completed her degree. She&#39;s now a practicing physician.</p>

<p>Or take a PhD researcher in South Africa studying climate change impacts. The critical papers for his dissertation are behind Elsevier&#39;s paywall at $35 each. He needs twenty papers minimum—$700 his university can&#39;t afford. Without Sci-Hub (accessible through Anna&#39;s Archive), his dissertation would have been impossible. He completed it, published findings that inform local climate policy.</p>

<p>An art history teacher in Argentina wanted to enrich her curriculum with Renaissance art analysis. The books she needed weren&#39;t available in local libraries. Importing them? Prohibitive between shipping costs and customs. Anna&#39;s Archive gave her access to rare texts that transformed her teaching.</p>

<p>The data backs this up: literature review times for researchers in developing countries have fallen by 60-80%. Citation patterns show researchers in Nigeria, Bangladesh, Ecuador now cite contemporary research at parity with Harvard and Oxford. Publications from developing countries have increased. Methodological quality has improved. International collaborations have expanded.</p>

<p>This matters. This changes lives. This is not hypothetical.</p>

<p>The problem is: <em>both things are simultaneously true.</em></p>
<ol><li>Anna&#39;s Archive saves academic careers in the Global South</li>
<li>Anna&#39;s Archive allows Meta to save $95 million</li></ol>

<p>But Meta downloaded more data in one week than all Indian students download in a year. How do we square that?</p>

<h3 id="the-broken-system-that-created-this-monster">The broken system that created this monster</h3>

<p>To understand why Anna&#39;s Archive exists and why it&#39;s grown so explosively, you need to understand how fundamentally broken academic publishing has become.</p>

<p>Here&#39;s the perverse cycle:</p>
<ol><li>Researcher writes paper (unpaid)</li>
<li>Other researchers peer review it (unpaid)</li>
<li>Publisher publishes it</li>
<li>Researcher&#39;s own university must pay to read it</li>
<li>Publisher profits: Elsevier and Wiley report 35-40% profit margins</li></ol>

<p>Today, over 70% of academic papers sit behind paywalls. Access costs $35-50 per paper for individuals, or $10,000-100,000+ per year for institutional subscriptions. Universities in developing countries simply cannot afford these subscriptions. Neither can most universities in developed countries—Harvard famously called journal subscription costs “fiscally unsustainable” in 2012.</p>

<p>The system extracts free labor from researchers, locks up publicly-funded research behind paywalls, charges exorbitant fees to access it, and funnels enormous profits to publishers who add relatively little value. Academic institutions create the knowledge, do the quality control, and then pay again to access their own work.</p>

<p>Sci-Hub and Anna&#39;s Archive didn&#39;t emerge from nowhere. They&#39;re responses to a genuinely broken system. The question is whether they&#39;re the right response—and who ultimately benefits most from that response.</p>

<h3 id="the-architecture-determines-the-ethics">The architecture determines the ethics</h3>

<p>Anna&#39;s Archive can&#39;t discriminate because:</p>
<ol><li>Open source philosophy: everyone or no one</li>
<li>Technical impossibility: how do you block Meta but not students?</li>
<li>Legal strategy: claiming “non-hosting” makes usage control impossible</li></ol>

<p>IPFS and BitTorrent are magnificent tools for resisting censorship. But resistance to censorship also means resistance to ethical control. You can&#39;t have one without the other.</p>

<p>The system is structurally designed to be unkillable. Which also means it&#39;s structurally designed to serve whoever has the resources to benefit most.</p>

<h3 id="where-does-it-end">Where does it end?</h3>

<p>December 2025: Anna&#39;s Archive announced they&#39;d scraped Spotify. The same preservation narrative, the same pattern. 256 million tracks, 86 million audio files, 300TB available to anyone with the infrastructure to use it.</p>

<p>“This Spotify scrape is our humble attempt to start such a &#39;preservation archive&#39; for music,” they wrote. The justification mirrors the books argument: Spotify loses licenses, music disappears; platform risk if Spotify fails; regional blocks prevent access; long tail poorly preserved.</p>

<p>All true. But who downloads 300TB of music? Not the kid in Malawi who just wants to listen to his favorite artist. ByteDance, training the next AI music generator. Startups building Spotify competitors. The same companies with compute budgets in the tens of millions.</p>

<p>Anna&#39;s Archive is pivoting from text to multimedia, and each escalation follows a predictable pattern:</p>
<ul><li><strong>Books</strong> → Justified by paywalls and academic access</li>
<li><strong>Papers</strong> → Justified by broken academic publishing</li>
<li><strong>Music</strong> → Justified by platform risk and preservation</li>
<li><strong>Video?</strong> → What&#39;s the justification for the next step?</li></ul>

<p>With each escalation:</p>
<ul><li>The value for big tech increases exponentially</li>
<li>The proportion of benefit for individual students decreases</li>
<li>Mass piracy becomes normalized as “preservation”</li>
<li>The ethical questions get harder to answer</li></ul>

<p>And the international precedent is already being set. Japan&#39;s AI Minister (January 2025) stated explicitly: “AI companies in Japan can use whatever they want for AI training... whether it is content obtained from illegal sites or otherwise.”</p>

<p>The message from governments: pirate freely if it serves AI supremacy. We&#39;re in a race to the bottom where copyright becomes meaningless for AI training, and the companies with the most resources benefit most.</p>

<h3 id="conclusions-i-don-t-know-which-way-to-turn">Conclusions: I don&#39;t know which way to turn</h3>

<p>I started from that sleepless night, 256 million songs in an RSS feed, and ended up here with more questions than answers.</p>

<p>Anna&#39;s Archive is a technological marvel—IPFS, BitTorrent, distributed databases creating something genuinely uncensorable. It&#39;s also a lifeline for millions of students and researchers locked out of knowledge by an exploitative publishing system. And simultaneously, it&#39;s the largest intellectual property expropriation operation in history, saving corporations hundreds of millions while creators receive nothing.</p>

<p>All of these things are true at once. This isn&#39;t a simple story with heroes and villains.</p>

<p>The academic publishing system is genuinely broken. Researchers create knowledge for free, review it for free, then their institutions must pay exorbitant fees to access it while publishers extract 35-40% profit margins. This system deserves to be disrupted.</p>

<p>But Anna&#39;s Archive isn&#39;t disrupting it equitably. The architecture that makes it uncensorable also makes it impossible to distinguish between a student in Lagos accessing a textbook and Meta downloading 162TB for AI training. You can&#39;t have selective resistance to censorship—it&#39;s all or nothing.</p>

<p>Aaron Swartz died fighting for information freedom with idealistic principles. Meta achieves the same result with corporate profit motives and walks away victorious. The system rewards power and punishes principle.</p>

<p>Can this be fixed? Copyright reform moves at the speed of politics—years, decades. Compulsory licensing for AI training? Just beginning to be discussed. Open access mandates? Facing massive publisher resistance. Meanwhile, Anna&#39;s Archive operates at the speed of software, and data flows freely to those with $100M compute clusters.</p>

<p>The question isn&#39;t whether Anna&#39;s Archive will be stopped—it won&#39;t be, that&#39;s the point of the architecture. The question is what world we&#39;re building where the same technology that liberates a medical student in India also bankrolls Meta&#39;s AI ambitions, and we can&#39;t separate one from the other.</p>

<p>I don&#39;t have answers. I have a functioning IPFS node, a Tor relay, and the uncomfortable knowledge that every byte I help distribute might be saving a researcher&#39;s career or training someone&#39;s proprietary AI model. Probably both.</p>

<p>Free for everyone. The problem is that “everyone” has very different resources to benefit from that freedom.</p>

<p>Now, if you&#39;ll excuse me, I&#39;m going to check how much bandwidth my nodes are using. And reflect on whether participation is complicity or resistance. Maybe it&#39;s both. Maybe that&#39;s the point.</p>

<p><a href="https://remark.as/p/jolek78/annas-archive-robin-hood-of-knowledge-or">Discuss...</a></p>

<p><a href="https://jolek78.writeas.com/tag:AnnaArchive" class="hashtag"><span>#</span><span class="p-category">AnnaArchive</span></a> <a href="https://jolek78.writeas.com/tag:AI" class="hashtag"><span>#</span><span class="p-category">AI</span></a> <a href="https://jolek78.writeas.com/tag:Copyright" class="hashtag"><span>#</span><span class="p-category">Copyright</span></a> <a href="https://jolek78.writeas.com/tag:AaronSwartz" class="hashtag"><span>#</span><span class="p-category">AaronSwartz</span></a> <a href="https://jolek78.writeas.com/tag:Meta" class="hashtag"><span>#</span><span class="p-category">Meta</span></a> <a href="https://jolek78.writeas.com/tag:AcademicPublishing" class="hashtag"><span>#</span><span class="p-category">AcademicPublishing</span></a> <a href="https://jolek78.writeas.com/tag:IPFS" class="hashtag"><span>#</span><span class="p-category">IPFS</span></a> <a href="https://jolek78.writeas.com/tag:InformationFreedom" class="hashtag"><span>#</span><span class="p-category">InformationFreedom</span></a></p>

<div class="center"><a href="https://fosstodon.org/@jolek78">Mastodon</a> :: <a href="https://pixelfed.social/jolek78">Pixelfed</a> :: <a href="mailto:jolek78@posteo.net">Email</a> :: <a href="https://app.cinny.in/login/envs.net/#/user/@jolek78:envs.net">Element</a></div>
]]></content:encoded>
      <guid>https://jolek78.writeas.com/annas-archive-robin-hood-of-knowledge-or</guid>
      <pubDate>Mon, 29 Dec 2025 14:24:51 +0000</pubDate>
    </item>
    <item>
      <title>Kiwix: Wikipedia in your pocket</title>
      <link>https://jolek78.writeas.com/kiwix-wikipedia-in-your-pocket?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[A hackmeeting, many years ago. A conference on various open-source projects. They were talking about Kiwix. The audience seemed interested, nodding, asking questions. I sat in the back of the room with a doubt that seemed legitimate but that I didn&#39;t dare express out loud: &#34;what&#39;s the point of offline Wikipedia?&#34; I mean: the internet is everywhere. If you need to look something up on Wikipedia, you open your browser, search, read. Done. Why would anyone download gigabytes of data to consult an encyclopedia offline? It seemed like a solution in search of a problem. Something for nerds nostalgic for CD-ROM encyclopedias.&#xA;&#xA;It took me years to understand how naive I&#39;d been.&#xA;&#xA;!--more--&#xA;&#xA;Years in which I continued to follow the project from afar. Years in which I read stories of deployments in Africa, Asia, prisons, refugee camps. Years in which I understood that the internet isn&#39;t everywhere, it&#39;s a privilege, not a given. And even where it exists, it&#39;s not necessarily accessible, affordable, or free from censorship.&#xA;&#xA;Years later, when I set up my Proxmox server, one of the first containers I decided to install was Kiwix. Not because I needed it—my connection works fine, thanks for asking—but because I wanted to be part of that project, so to speak. Because I had understood that Kiwix wasn&#39;t just software. It&#39;s a philosophy. It&#39;s practical proof that another web is possible: decentralized, offline, in users&#39; hands. &#xA;&#xA;Simply a matter of fundamental rights&#xA;There&#39;s a moment in 2004 when Emmanuel Engelhart—a French computer engineer working between Germany and Switzerland—becomes a Wikipedia editor and asks himself an apparently simple question: &#34;What about those without internet access?&#34; It wasn&#39;t a rhetorical question. 
At the time, as today, billions of people lived (and live) in areas where connectivity is a luxury, where broadband is science fiction, where even a single megabyte of data costs more than a meal.&#xA;&#xA;Engelhart&#39;s answer was radical: if people can&#39;t reach Wikipedia, then Wikipedia must reach people. Even without the internet.&#xA;&#xA;You know that thing about &#34;if the mountain won&#39;t come to Muhammad...&#34;? Exactly that.&#xA;&#xA;And so, in 2007, together with Renaud Gaudin—a Malian information management expert—Engelhart launched Kiwix. Open source software that allowed downloading the entire Wikipedia (and much more) to consult it completely offline.&#xA;&#xA;In a 2014 interview, Engelhart stated:&#xA;&#xA;  The contents of Wikipedia should be available for everyone! Even without Internet access. This is why I have launched the Kiwix project. Our users are all over the world: sailors on the oceans, poor students thirsty for knowledge, globetrotters almost living in planes, world&#39;s citizens suffering from censorship or free minded prisoners. For all these people, Kiwix provides a simple and practical solution to ponder about the world.&#xA;&#xA;And:&#xA;&#xA;  Water is a common good. You understand why you have to care about water. Wikipedia is the same; it&#39;s a common good. We have to care about Wikipedia.&#xA;&#xA;Digital Sovereignty&#xA;Why talk about Kiwix today? Because it&#39;s not just a technical solution to a connectivity problem. Kiwix represents something deeper: digital sovereignty in its purest form.&#xA;&#xA;While projects like Mastodon, Matrix, Lemmy, and Pixelfed create distributed networks—many nodes communicating with each other in federation—Kiwix goes beyond, or perhaps beneath, depending on your perspective. It&#39;s so radically independent that it doesn&#39;t even need a network. It&#39;s local. Completely. 
A single Kiwix installation is an autonomous island that communicates with nothing and no one.&#xA;&#xA;No federation, no peer-to-peer, no cloud.&#xA;&#xA;You have Wikipedia on your Raspberry Pi? It&#39;s yours—or rather, it&#39;s yours thanks to the contribution of all Wikipedians. It works without internet, without external dependencies. You can copy it to a USB stick and give it to someone else. You can take it to the middle of the ocean, the desert, Antarctica. You can share it on a local computer network. And it will work. Always. The data is on your hardware, under your physical control.&#xA;&#xA;The birth of the project&#xA;Kiwix&#39;s 2007 launch didn&#39;t happen with grand announcements or marketing campaigns. It was open source software, released under GPL license, developed by two enthusiasts. That&#39;s it.&#xA;&#xA;The technological heart of the project was (and is) the ZIM format—&#34;Zeno IMproved&#34;—an open source archive format optimized for wiki-style content. Highly compressed, easily indexable, designed to be searchable even without connection. All of Wikipedia&#39;s content is converted to static HTML, compressed into ZIM, and made available for download.&#xA;&#xA;To give you an idea of scale: the entire English Wikipedia—6.4 million articles, images included—takes up about 97 GB in ZIM format. Seems like a lot? The sum of all human knowledge now fits on a microSD card that costs 15 euros. On a 1TB portable hard drive you can put Wikipedia in ten different languages, the entire Project Gutenberg library, all TED talks, complete Stack Exchange, and you&#39;ll still have space left over.&#xA;&#xA;Between 2007 and 2011, the team also released three CD/DVD versions with article selections. 
Today they seem like archaeological artifacts, but at the time they were the solution for bringing Wikipedia to African schools where the internet simply didn&#39;t exist.&#xA;&#xA;The XULRunner problem and the rebirth&#xA;Like every serious open source project, Kiwix had its &#34;winter.&#34; Between 2014 and 2020, the software disappeared from many Linux distribution repositories. The reason? XULRunner, the Mozilla framework Kiwix was based on, was deprecated and removed from package databases.&#xA;&#xA;For six years, Kiwix was technically &#34;dead&#34; for many Linux users. But the community didn&#39;t give up. The team worked to completely rethink the software&#39;s architecture, rewrite it from scratch, and modernize it. When it reemerged in 2020, it was stronger than before: progressive WebApp, browser extensions, native mobile support, Raspberry Pi integration.&#xA;&#xA;It&#39;s the usual open source story: an obstacle that would seem fatal becomes an opportunity to improve and grow. How many proprietary companies would have simply shut down? But in open source, software doesn&#39;t die as long as the code is available and someone believes in it.&#xA;&#xA;Where Kiwix saves lives (not hyperbole)&#xA;Numbers are important, but it&#39;s the stories that make us truly understand a project&#39;s impact.&#xA;&#xA;Kenya: the Thika Alumni Trust&#xA;In 2015, seven friends who had studied together in the &#39;60s at a high school in Thika return for a visit. The principal asks for help: they need 50 computers to create a lab. The problem? The internet connection is 100 kbps. Useless.&#xA;&#xA;The solution was to create completely offline digital learning environments using Kiwix. Today, that project has transformed education in 61 schools throughout Kenya, reaching over 70,000 children. They&#39;ve installed 164 microservers running Kiwix—probably one of the largest networks in the world.&#xA;&#xA;The results? 
In primary schools where the Trust operates, national exam results improved from 8 to 12%. In special needs units, where absenteeism reached 50%, attendance now exceeds 90%.&#xA;&#xA;Mary Mungai, principal of a school with special needs units, says: &#34;All our children have benefited tremendously from the digital libraries. We have children who refused to attend classes but now do so faithfully, some who couldn&#39;t read or write but now do very well on computers.&#34;&#xA;&#xA;Ghana: the Kiwix4Schools Project&#xA;In 2019, four Ghanaian students from Ashesi University launched Kiwix4Schools with a simple goal: bring digital education to rural schools. They installed Kiwix on 15 Raspberry Pi devices, reaching 2,000 students in 15 schools.&#xA;&#xA;The impact was immediate. Teachers reported students staying after school to explore content. Children who had never touched a computer were navigating Wikipedia articles. Science class changed completely when students could look up experiments, see diagrams, understand concepts beyond what the single available textbook offered.&#xA;&#xA;India: Internet blackouts and censorship&#xA;In 2019-2020, the Indian government imposed internet blackouts in Kashmir—the longest in a democracy&#39;s history. For months, millions of people were cut off from the digital world. Hospitals, schools, businesses paralyzed.&#xA;&#xA;But those who had Kiwix continued accessing medical information, educational content, technical documentation. It wasn&#39;t a complete solution, but it was a lifeline. It demonstrated that offline access isn&#39;t just for poor countries—it&#39;s a resilience tool even in developed nations with unstable political situations.&#xA;&#xA;The ZIM format: open everything&#xA;The genius of Kiwix lies in the ZIM format. It&#39;s not just a compression format—it&#39;s an open standard specifically designed for offline content distribution. Any developer can create ZIM files, any software can read them. 
There&#39;s no vendor lock-in, no proprietary license.&#xA;&#xA;But ZIM isn&#39;t just for Wikipedia. Today ZIM archives exist for:&#xA;&#xA;Project Gutenberg (50,000+ public domain books)&#xA;Stack Exchange (all sites, all Q&amp;As)&#xA;TED Talks (thousands of videos with subtitles)&#xA;Khan Academy&#xA;Ubuntu documentation&#xA;Arch Wiki&#xA;WikiMed (medical encyclopedia, used by 100,000 doctors and students)&#xA;&#xA;The format is completely open, documented, and anyone can create ZIM archives of their content. It&#39;s the open source spirit in its purest form.&#xA;&#xA;Everything works&#xA;In 2018, Kiwix formalized collaboration with the Wikimedia Foundation, receiving $275,000 to improve offline access. In 2023, came a $250,000 grant from the Wikimedia Endowment.&#xA;&#xA;Stephane Coillet-Matillon, Kiwix CEO, in December 2018 declared:&#xA;&#xA;  Our hope is that one day everyone will have access to the internet, and eliminate the need for other offline methods of access to information. But we know that there are still serious gaps in internet access globally that require solutions today. Kiwix is a tool to start fixing things right now.&#xA;&#xA;Today, in 2025:&#xA;&#xA;Over 10 million users in more than 220 countries&#xA;More than 10,000 websites crawled regularly&#xA;Available on all platforms: Android, iOS, Windows, macOS, Linux&#xA;Browser extensions for Firefox, Chrome, Edge&#xA;Partnership with Orange Foundation to reach 500,000 children in West Africa&#xA;&#xA;You can explore the entire catalog at library.kiwix.org.&#xA;&#xA;The philosophy behind the code&#xA;Here we arrive at the heart of the matter. Why is Kiwix important? Not just because it works, not just because it&#39;s helped millions of people. But because it represents a way of thinking about technology.&#xA;&#xA;Kiwix is:&#xA;&#xA;Open Source: all code on GitHub, GPL license. 
Anyone can study it, modify it, improve it.&#xA;Completely local: doesn&#39;t depend on central servers, cloud, or connections. Each installation is autonomous.&#xA;Privacy-first: no tracking, no telemetry, no data sent to third parties. Impossible—it&#39;s offline.&#xA;Community-driven: developed by volunteers, funded by donations.&#xA;Accessible: designed to work even on old or limited hardware.&#xA;&#xA;It&#39;s the antithesis of the Big Tech model. There&#39;s no company controlling access, no centralized database of who reads what, no algorithms deciding which information to show you. It&#39;s technology as it should be: serving the user, before corporations transformed it into a machine for extracting data and selling advertising.&#xA;&#xA;A &#34;dangerous&#34; precedent&#xA;There&#39;s an interesting paradox. Kiwix exists because the internet isn&#39;t accessible to everyone. But its success demonstrates that maybe we don&#39;t even need it to be—at least not the way we conceive it now.&#xA;&#xA;Think about it: if I can have Wikipedia, Stack Exchange, Project Gutenberg, Khan Academy on a 128GB SD card, why should I depend on an always-on internet connection? If I can sync updates once a month when I pass by the library with WiFi, why should I pay 50 euros a month for a home connection?&#xA;&#xA;Kiwix demonstrates that the &#34;always connected, always online, always tracked&#34; model isn&#39;t the only possible one. That an alternative exists where knowledge is local, accessible, controllable. The monopoly isn&#39;t inevitable.&#xA;&#xA;And this, for Big Tech, is dangerous. Because if people realize they can access information without going through Google, without being tracked, without seeing ads... well, the entire business model collapses. It&#39;s also no secret that the entire streaming model—everything, no one excluded: Spotify, YouTube, Netflix, etc.—is ecologically unsustainable. 
Downloading once and playing a thousand times (locally) is less wasteful than downloading zero times and playing a thousand times (remotely). If it can be done for Wikipedia, TED Talks, and Project Gutenberg, it can be done for everything else.&#xA;&#xA;But the biggest challenge remains the same: making Kiwix known. Because the software exists, works, is free. But how many people know they can have Wikipedia in their pocket without the internet? How many African schools know they can have a complete digital library for the cost of a Raspberry Pi?&#xA;&#xA;Conclusions: what I learned&#xA;Innovation often doesn&#39;t come from Silicon Valley. It comes from a young French engineer working in Germany asking a simple question. It comes from developers scattered around the world contributing in their free time. It comes from the community, not corporations.&#xA;&#xA;Open source works. Kiwix is almost twenty years old, has overcome technical crises that would have killed a proprietary project, has continued to grow with ridiculous budgets. Why? Because the community believes in it. Because the code is open. Because the mission is clear.&#xA;&#xA;Technology is political. Deciding that knowledge must be accessible offline is a political choice. Deciding to use open source licenses is a political choice. Deciding not to track users is a political choice.&#xA;&#xA;Kiwix shows us an alternative. That we don&#39;t have to choose between functionality and ethics. That another web is possible.&#xA;&#xA;And now, if you&#39;ll excuse me, I&#39;m going to add a Python ZIM library to my Kiwix container, because I&#39;m studying it—or rather, &#34;I have to study it&#34;—for a bunch of small projects I have in mind. 
AI server included.&#xA;&#xA;#Kiwix #SmallWeb #DigitalSovereignty #OpenSource #Wikipedia #Offline #Privacy #Education #Africa&#xA;&#xA;a href=&#34;https://remark.as/p/jolek78/kiwix-wikipedia-in-your-pocket&#34;Discuss.../a&#xA;&#xA;div class=&#34;center&#34;a href=&#34;https://fosstodon.org/@jolek78&#34;Mastodon/a :: a href=&#34;https://pixelfed.social/jolek78&#34;Pixelfed/a :: a href=&#34;mailto:jolek78@posteo.net&#34;Email/a  :: a href=&#34;https://app.cinny.in/login/envs.net/#/user/@jolek78:envs.net&#34; Element/a/div]]&gt;</description>
      <content:encoded><![CDATA[<p>A hackmeeting, many years ago. A conference on various open-source projects. They were talking about <a href="https://kiwix.org">Kiwix</a>. The audience seemed interested, nodding, asking questions. I sat in the back of the room with a doubt that seemed legitimate but that I didn&#39;t dare express out loud: “what&#39;s the point of offline Wikipedia?” I mean: the internet is everywhere. If you need to look something up on Wikipedia, you open your browser, search, read. Done. Why would anyone download gigabytes of data to consult an encyclopedia offline? It seemed like a solution in search of a problem. Something for nerds nostalgic for CD-ROM encyclopedias.</p>

<p>It took me years to understand how naive I&#39;d been.</p>



<p>Years in which I continued to follow the project from afar. Years in which I read stories of deployments in Africa, Asia, prisons, refugee camps. Years in which I understood that the internet isn&#39;t everywhere, it&#39;s a privilege, not a given. And even where it exists, it&#39;s not necessarily accessible, affordable, or free from censorship.</p>

<p>Years later, when I set up my Proxmox server, one of the first containers I decided to install was Kiwix. Not because I needed it—my connection works fine, thanks for asking—but because I wanted to be part of that project, so to speak. Because I had understood that Kiwix wasn&#39;t just software. It&#39;s a philosophy. It&#39;s practical proof that another web is possible: decentralized, offline, in users&#39; hands.</p>

<h3 id="simply-a-matter-of-fundamental-rights">Simply a matter of fundamental rights</h3>

<p>There&#39;s a moment in 2004 when Emmanuel Engelhart—a French computer engineer working between Germany and Switzerland—becomes a Wikipedia editor and asks himself an apparently simple question: “What about those without internet access?” It wasn&#39;t a rhetorical question. At the time, as today, billions of people lived (and live) in areas where connectivity is a luxury, where broadband is science fiction, where even a single megabyte of data costs more than a meal.</p>

<p>Engelhart&#39;s answer was radical: if people can&#39;t reach Wikipedia, then Wikipedia must reach people. Even without the internet.</p>

<p>You know that thing about “if the mountain won&#39;t come to Muhammad...”? Exactly that.</p>

<p>And so, in 2007, together with Renaud Gaudin—a Malian information management expert—Engelhart launched Kiwix. Open source software that allowed downloading the entire Wikipedia (and much more) to consult it completely offline.</p>

<p>In a <a href="https://diff.wikimedia.org/2014/09/12/emmanuel-engelhart-inventor-of-kiwix/">2014 interview</a>, Engelhart stated:</p>

<blockquote><p>The contents of Wikipedia should be available for everyone! Even without Internet access. This is why I have launched the Kiwix project. Our users are all over the world: sailors on the oceans, poor students thirsty for knowledge, globetrotters almost living in planes, world&#39;s citizens suffering from censorship or free minded prisoners. For all these people, Kiwix provides a simple and practical solution to ponder about the world.</p></blockquote>

<p>And:</p>

<blockquote><p>Water is a common good. You understand why you have to care about water. Wikipedia is the same; it&#39;s a common good. We have to care about Wikipedia.</p></blockquote>

<h3 id="digital-sovereignty">Digital Sovereignty</h3>

<p>Why talk about Kiwix today? Because it&#39;s not just a technical solution to a connectivity problem. Kiwix represents something deeper: digital sovereignty in its purest form.</p>

<p>While projects like Mastodon, Matrix, Lemmy, and Pixelfed create distributed networks—many nodes communicating with each other in federation—Kiwix goes beyond, or perhaps beneath, depending on your perspective. It&#39;s so radically independent that it doesn&#39;t even need a network. It&#39;s local. Completely. A single Kiwix installation is an autonomous island that communicates with nothing and no one.</p>

<p>No federation, no peer-to-peer, no cloud.</p>

<p>You have Wikipedia on your Raspberry Pi? It&#39;s yours—or rather, it&#39;s yours <em>thanks to the contribution</em> of all Wikipedians. It works without internet, without external dependencies. You can copy it to a USB stick and give it to someone else. You can take it to the middle of the ocean, the desert, Antarctica. You can share it on a local computer network. And it will work. Always. The data is on your hardware, under your physical control.</p>

<h3 id="the-birth-of-the-project">The birth of the project</h3>

<p>Kiwix&#39;s 2007 launch didn&#39;t happen with grand announcements or marketing campaigns. It was open source software, released under GPL license, developed by two enthusiasts. That&#39;s it.</p>

<p>The technological heart of the project was (and is) the ZIM format—“Zeno IMproved”—an open source archive format optimized for wiki-style content. Highly compressed, easily indexable, designed to be searchable even without a connection. All of Wikipedia&#39;s content is converted to static HTML, compressed into ZIM, and made available for download.</p>
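<p>To make the idea concrete, here is a minimal, purely illustrative sketch in Python—this is <em>not</em> the real ZIM binary layout (the actual format groups entries into compressed clusters and ships prebuilt URL and full-text indexes), just the core pattern: individually compressed entries behind a local index, with no network in sight.</p>

```python
import lzma

# Toy illustration only: NOT the real ZIM layout, just the idea of
# compressed articles reachable through a purely local index.
class TinyArchive:
    def __init__(self):
        self._index = {}  # path -> compressed bytes

    def add(self, path, html):
        self._index[path] = lzma.compress(html.encode("utf-8"))

    def get(self, path):
        return lzma.decompress(self._index[path]).decode("utf-8")

    def search(self, term):
        # Naive full scan; real ZIM files carry a prebuilt search index.
        term = term.lower()
        return [p for p in self._index if term in self.get(p).lower()]

archive = TinyArchive()
archive.add("A/Coffee", "<h1>Coffee</h1><p>A brewed drink.</p>")
archive.add("A/Tea", "<h1>Tea</h1><p>An infusion of leaves.</p>")

print(archive.search("brewed"))  # ['A/Coffee']
```

<p>The real format achieves far better compression by compressing many entries together in shared clusters, but the property that matters here is the same: every lookup is a local read.</p>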

<p>To give you an idea of scale: the entire English Wikipedia—6.4 million articles, images included—takes up about 97 GB in ZIM format. Seems like a lot? The sum of all human knowledge now fits on a microSD card that costs 15 euros. On a 1TB portable hard drive you can put Wikipedia in ten different languages, the entire Project Gutenberg library, all TED talks, complete Stack Exchange, and you&#39;ll still have space left over.</p>

<p>Between 2007 and 2011, the team also released three CD/DVD versions with article selections. Today they seem like archaeological artifacts, but at the time they were the solution for bringing Wikipedia to African schools where the internet simply didn&#39;t exist.</p>

<h3 id="the-xulrunner-problem-and-the-rebirth">The XULRunner problem and the rebirth</h3>

<p>Like every serious open source project, Kiwix had its “winter.” Between 2014 and 2020, the software disappeared from many Linux distribution repositories. The reason? XULRunner, the Mozilla framework Kiwix was based on, was deprecated and removed from package databases.</p>

<p>For six years, Kiwix was technically “dead” for many Linux users. But the community didn&#39;t give up. The team worked to completely rethink the software&#39;s architecture, rewrite it from scratch, and modernize it. When it reemerged in 2020, it was stronger than before: a progressive web app, browser extensions, native mobile support, Raspberry Pi integration.</p>

<p>It&#39;s the usual open source story: an obstacle that would seem fatal becomes an opportunity to improve and grow. How many proprietary companies would have simply shut down? But in open source, software doesn&#39;t die as long as the code is available and someone believes in it.</p>

<h3 id="where-kiwix-saves-lives-not-hyperbole">Where Kiwix saves lives (not hyperbole)</h3>

<p>Numbers are important, but it&#39;s the stories that make us truly understand a project&#39;s impact.</p>

<h4 id="kenya-the-thika-alumni-trust">Kenya: the Thika Alumni Trust</h4>

<p>In 2015, seven friends who had studied together in the &#39;60s at a high school in Thika returned for a visit. The principal asked for help: the school needed 50 computers to create a lab. The problem? The internet connection was 100 kbps. Useless.</p>

<p>The solution was to create completely offline digital learning environments using Kiwix. Today, that project has transformed education in 61 schools throughout Kenya, reaching over 70,000 children. They&#39;ve installed 164 microservers running Kiwix—probably one of the largest offline Kiwix networks in the world.</p>

<p>The results? In primary schools where the Trust operates, national exam results improved by 8 to 12%. In special needs units, where absenteeism reached 50%, attendance now exceeds 90%.</p>

<p>Mary Mungai, principal of a school with special needs units, says: “All our children have benefited tremendously from the digital libraries. We have children who refused to attend classes but now do so faithfully, some who couldn&#39;t read or write but now do very well on computers.”</p>

<h4 id="ghana-the-kiwix4schools-project">Ghana: the Kiwix4Schools Project</h4>

<p>In 2019, four Ghanaian students from Ashesi University launched Kiwix4Schools with a simple goal: bring digital education to rural schools. They installed Kiwix on 15 Raspberry Pi devices, reaching 2,000 students in 15 schools.</p>

<p>The impact was immediate. Teachers reported students staying after school to explore content. Children who had never touched a computer were navigating Wikipedia articles. Science class changed completely when students could look up experiments, see diagrams, understand concepts beyond what the single available textbook offered.</p>

<h4 id="india-internet-blackouts-and-censorship">India: Internet blackouts and censorship</h4>

<p>In 2019-2020, the Indian government imposed an internet blackout in Kashmir—the longest ever recorded in a democracy. For months, millions of people were cut off from the digital world. Hospitals, schools, businesses paralyzed.</p>

<p>But those who had Kiwix continued accessing medical information, educational content, technical documentation. It wasn&#39;t a complete solution, but it was a lifeline. It demonstrated that offline access isn&#39;t just for poor countries—it&#39;s a resilience tool even in developed nations with unstable political situations.</p>

<h3 id="the-zim-format-open-everything">The ZIM format: open everything</h3>

<p>The genius of Kiwix lies in the <a href="https://wiki.openzim.org">ZIM format</a>. It&#39;s not just a compression format—it&#39;s an open standard specifically designed for offline content distribution. Any developer can create ZIM files, any software can read them. There&#39;s no vendor lock-in, no proprietary license.</p>

<p>But ZIM isn&#39;t just for Wikipedia. Today ZIM archives exist for:</p>
<ul><li>Project Gutenberg (50,000+ public domain books)</li>
<li>Stack Exchange (all sites, all Q&amp;As)</li>
<li>TED Talks (thousands of videos with subtitles)</li>
<li>Khan Academy</li>
<li>Ubuntu documentation</li>
<li>Arch Wiki</li>
<li>WikiMed (medical encyclopedia, used by 100,000 doctors and students)</li></ul>

<p>The format is completely open, documented, and anyone can create ZIM archives of their content. It&#39;s the open source spirit in its purest form.</p>

<h3 id="everything-works">Everything works</h3>

<p>In 2018, Kiwix formalized its collaboration with the Wikimedia Foundation, receiving $275,000 to improve offline access. In 2023 came a $250,000 grant from the Wikimedia Endowment.</p>

<p>Stephane Coillet-Matillon, Kiwix&#39;s CEO, declared in <a href="https://wikimediafoundation.org/news/2018/12/21/kiwix-is-connecting-the-unconnected/">December 2018</a>:</p>

<blockquote><p>Our hope is that one day everyone will have access to the internet, and eliminate the need for other offline methods of access to information. But we know that there are still serious gaps in internet access globally that require solutions today. Kiwix is a tool to start fixing things right now.</p></blockquote>

<p>Today, in 2025:</p>
<ul><li>Over 10 million users in more than 220 countries</li>
<li>More than 10,000 websites crawled regularly</li>
<li>Available on all platforms: Android, iOS, Windows, macOS, Linux</li>
<li>Browser extensions for Firefox, Chrome, Edge</li>
<li>Partnership with Orange Foundation to reach 500,000 children in West Africa</li></ul>

<p>You can explore the entire catalog at <a href="https://library.kiwix.org/">library.kiwix.org</a>.</p>

<h3 id="the-philosophy-behind-the-code">The philosophy behind the code</h3>

<p>Here we arrive at the heart of the matter. Why is Kiwix important? Not just because it works, not just because it&#39;s helped millions of people. But because it represents a way of thinking about technology.</p>

<p>Kiwix is:</p>
<ul><li><strong>Open Source</strong>: all code on GitHub, GPL license. Anyone can study it, modify it, improve it.</li>
<li><strong>Completely local</strong>: doesn&#39;t depend on central servers, cloud, or connections. Each installation is autonomous.</li>
<li><strong>Privacy-first</strong>: no tracking, no telemetry, no data sent to third parties. Impossible—it&#39;s offline.</li>
<li><strong>Community-driven</strong>: developed by volunteers, funded by donations.</li>
<li><strong>Accessible</strong>: designed to work even on old or limited hardware.</li></ul>

<p>It&#39;s the antithesis of the Big Tech model. There&#39;s no company controlling access, no centralized database of who reads what, no algorithms deciding which information to show you. It&#39;s technology as it should be: serving the user, before corporations transformed it into a machine for extracting data and selling advertising.</p>

<h3 id="a-dangerous-precedent">A “dangerous” precedent</h3>

<p>There&#39;s an interesting paradox. Kiwix exists because the internet isn&#39;t accessible to everyone. But its success demonstrates that maybe we don&#39;t even need it to be—at least not the way we conceive it now.</p>

<p>Think about it: if I can have Wikipedia, Stack Exchange, Project Gutenberg, Khan Academy on a 128GB SD card, why should I depend on an always-on internet connection? If I can sync updates once a month when I pass by the library with WiFi, why should I pay 50 euros a month for a home connection?</p>

<p>Kiwix demonstrates that the “always connected, always online, always tracked” model isn&#39;t the only possible one. That an alternative exists where knowledge is local, accessible, controllable. The monopoly isn&#39;t inevitable.</p>

<p>And this, for Big Tech, is dangerous. Because if people realize they can access information without going through Google, without being tracked, without seeing ads... well, the entire business model collapses. It&#39;s also no secret that the entire streaming model—all of it, without exception: Spotify, YouTube, Netflix, etc.—is ecologically unsustainable. Downloading a file once and playing it a thousand times locally is less wasteful than streaming it a thousand times from a remote server. If it can be done for Wikipedia, TED Talks, and Project Gutenberg, it can be done for everything else.</p>
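<p>The back-of-envelope version of that claim, with deliberately made-up (but plausible) numbers and ignoring CDN caching entirely:</p>

```python
# Hypothetical numbers: a 5 MB audio file played 1,000 times.
file_mb = 5
plays = 1000

streamed_mb = file_mb * plays  # streaming re-fetches the file on every play
local_mb = file_mb             # one download, then every play is a disk read

print(f"streaming: {streamed_mb} MB transferred")  # 5000 MB
print(f"local:     {local_mb} MB transferred")     # 5 MB
```

<p>Real services cache aggressively, so the gap is smaller in practice—but the asymmetry itself doesn&#39;t go away.</p>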

<p>But the biggest challenge remains the same: making Kiwix known. Because the software exists, works, is free. But how many people know they can have Wikipedia in their pocket without the internet? How many African schools know they can have a complete digital library for the cost of a Raspberry Pi?</p>

<h3 id="conclusions-what-i-learned">Conclusions: what I learned</h3>

<p>Innovation often doesn&#39;t come from Silicon Valley. It comes from a young French engineer working in Germany asking a simple question. It comes from developers scattered around the world contributing in their free time. It comes from the community, not corporations.</p>

<p>Open source works. Kiwix is almost twenty years old, has overcome technical crises that would have killed a proprietary project, has continued to grow with ridiculous budgets. Why? Because the community believes in it. Because the code is open. Because the mission is clear.</p>

<p>Technology is political. Deciding that knowledge must be accessible offline is a political choice. Deciding to use open source licenses is a political choice. Deciding not to track users is a political choice.</p>

<p>Kiwix shows us an alternative. That we don&#39;t have to choose between functionality and ethics. That another web is possible.</p>

<p>And now, if you&#39;ll excuse me, I&#39;m going to add a Python ZIM library to my Kiwix container, because I&#39;m studying it—or rather, “I have to study it”—for a bunch of small projects I have in mind. AI server included.</p>

<p><a href="https://jolek78.writeas.com/tag:Kiwix" class="hashtag"><span>#</span><span class="p-category">Kiwix</span></a> <a href="https://jolek78.writeas.com/tag:SmallWeb" class="hashtag"><span>#</span><span class="p-category">SmallWeb</span></a> <a href="https://jolek78.writeas.com/tag:DigitalSovereignty" class="hashtag"><span>#</span><span class="p-category">DigitalSovereignty</span></a> <a href="https://jolek78.writeas.com/tag:OpenSource" class="hashtag"><span>#</span><span class="p-category">OpenSource</span></a> <a href="https://jolek78.writeas.com/tag:Wikipedia" class="hashtag"><span>#</span><span class="p-category">Wikipedia</span></a> <a href="https://jolek78.writeas.com/tag:Offline" class="hashtag"><span>#</span><span class="p-category">Offline</span></a> <a href="https://jolek78.writeas.com/tag:Privacy" class="hashtag"><span>#</span><span class="p-category">Privacy</span></a> <a href="https://jolek78.writeas.com/tag:Education" class="hashtag"><span>#</span><span class="p-category">Education</span></a> <a href="https://jolek78.writeas.com/tag:Africa" class="hashtag"><span>#</span><span class="p-category">Africa</span></a></p>

<p><a href="https://remark.as/p/jolek78/kiwix-wikipedia-in-your-pocket">Discuss...</a></p>

<div class="center"><a href="https://fosstodon.org/@jolek78">Mastodon</a> :: <a href="https://pixelfed.social/jolek78">Pixelfed</a> :: <a href="mailto:jolek78@posteo.net">Email</a> :: <a href="https://app.cinny.in/login/envs.net/#/user/@jolek78:envs.net">Element</a></div>
]]></content:encoded>
      <guid>https://jolek78.writeas.com/kiwix-wikipedia-in-your-pocket</guid>
      <pubDate>Thu, 18 Dec 2025 14:46:00 +0000</pubDate>
    </item>
    <item>
      <title>A song, an algorithm, and the end of the analog world</title>
      <link>https://jolek78.writeas.com/a-song-an-algorithm-and-the-end-of-the-analog-world?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[There&#39;s a moment in the history of technology when everything changes. We don&#39;t always recognise it. Sometimes it takes years to understand that a small spark, an apparently insignificant detail, ignited a revolution that would forever change the way we live, communicate, and consume culture. In 1987, an American singer-songwriter named Suzanne Vega released a minimalist track called &#34;Tom&#39;s Diner&#34;. Two minutes and nine seconds of a cappella vocals, no instrumental accompaniment, no special effects. Just a voice telling the story of an ordinary morning in a New York diner. A song so essential, so pure in its simplicity, that someone on the other side of the world – a German engineer obsessed with #audio compression – would use it as a benchmark to create a technology that would shake the global music industry to its core. That technology was called #MP3. And that voice, that &#34;warm a cappella voice&#34; as Karlheinz Brandenburg would later describe it, would become the ultimate test to determine whether a compression algorithm actually worked or not. &#xA;!--more--&#xA;&#xA;This is the story – part documented reality, part urban legend – of how a folk song became the unwitting mother of the greatest revolution in music distribution since vinyl. A story that has always fascinated me because it contains all the contradictions of our digital age: innovation and destruction, democratization and loss of quality, openness and control. And yes, it&#39;s also because I&#39;ve always had a soft spot for stories that intertwine in unexpected ways. Perhaps because I too, during my years in radio, saw first-hand what it means to work with audio, manipulate it, compress it, broadcast it. Perhaps because, like many of us who lived through the transition from analog to digital, I still carry the memory of those first MP3 collections downloaded via a 56k modem (crimes do become time-barred after 20 years, right?). 
But above all, this story fascinates me because it reminds us that behind every technological innovation there&#39;s always a human element: a voice, an aesthetic choice, an obsession. And in the case of MP3, that human element was precisely Suzanne Vega&#39;s voice singing about coffee and rain on a November morning.&#xA;&#xA;Late 1980s: the race for compression&#xA;&#xA;To understand how &#34;Tom&#39;s Diner&#34; ended up in the laboratories of the #Fraunhofer Institute, we need to step back and understand what was happening in the world of digital audio in the late 1980s. The CD had arrived in 1981, bringing the promise of perfect audio quality, crystalline, immune to scratches and the wear of time. But there was a massive problem: digital audio files were enormous. A three-minute song, encoded in PCM (Pulse-Code Modulation) format at 44.1 kHz and 16 bits, occupied around 30-35 megabytes. An entire album? Over 600 megabytes.&#xA;&#xA;To put this in perspective: in the 1980s, the portable listening revolution was the Sony Walkman, which played analog cassettes. With the arrival of CDs, Sony launched the Discman, but these portable CD players were bulky, drained batteries, and skipped at the slightest movement. The idea of carrying an entire record collection was still science fiction.&#xA;&#xA;In an era when a 40MB hard drive was considered gigantic, these numbers were simply impractical. You couldn&#39;t think of transmitting music via the internet – which was still an academic and military network – nor of efficiently archiving it on home computers. A radical solution was needed: audio had to be compressed while maintaining acceptable quality. This is where the small city of Erlangen, in Bavaria, enters the scene. Not exactly Silicon Valley, but a German town with a long tradition of scientific excellence. Here was the headquarters of the Fraunhofer Institute for Integrated Circuits, a research centre that would forever change the way we listen to music. 
The team was led by a man named Dieter Seitzer, who had worked for years on psychoacoustics – that branch of science studying how humans perceive sounds. Seitzer had a vision: to find a way to transmit high-quality music through ISDN telephone lines. It seemed like science fiction, but his doctoral student, a young engineer named Karlheinz Brandenburg, was convinced it was possible. The underlying idea was elegant in its simplicity: the human ear isn&#39;t perfect. There are frequencies we don&#39;t hear, sounds that get &#34;masked&#34; by louder ones, sonic details that our brain simply discards. Why waste disk space for information we can&#39;t perceive anyway?&#xA;&#xA;The goal, therefore, was to create an algorithm that eliminated everything the human ear couldn&#39;t distinguish, reducing an audio file to a tenth of its original size without the average listener noticing the difference. But the competition was fierce. In 1989, when the Moving Picture Experts Group (MPEG) – the international standardisation organisation – issued a call for audio codec proposals, 14 candidates arrived from around the world. Among them were AT&amp;T Bell Labs in the United States, Thomson in France, Philips in the Netherlands, and naturally the Erlangen team with their algorithm called ASPEC (Adaptive Spectral Perceptual Entropy Coding). It was a race where whoever demonstrated the most efficient algorithm won: maximum compression, minimum perceptible quality loss. And to prove it, tests were needed. Many tests. Obsessive, maniacal tests, repeated hundreds, thousands of times. In other words, a reference song was needed. A song that would put the algorithm to the most ruthless test possible.&#xA;&#xA;Why that voice?&#xA;Several versions exist of how Brandenburg discovered &#34;Tom&#39;s Diner&#34;. In one interview, he tells of hearing it on the radio while walking down a corridor. 
In another, he says he read about this song in a hi-fi magazine that used it to test high-quality speakers. The stories change, overlap, contradict each other. Brandenburg himself has given different versions over the years. But one thing is certain: when he heard that voice, he immediately knew he had found his ultimate test.&#xA;&#xA;  &#34;I was ready to fine-tune my compression algorithm,&#34; Brandenburg recalls in a 2009 interview, &#34;and somewhere down the corridor a radio was playing Tom&#39;s Diner. I was electrified. I knew it would be nearly impossible to compress this warm a cappella voice.&#34;&#xA;&#xA;And it&#39;s precisely in that phrase – &#34;nearly impossible&#34; – that you understand the challenge. The human voice is the most difficult instrument to compress. Evolutionarily, our ears are optimised to recognise voices. We evolved to hear nuances, emotions, the micro-tonal variations that distinguish one person from another, that tell us if someone is happy or sad, sincere or lying. Voice is the primary interface of human communication, and our brain has developed sophisticated mechanisms to analyse it. For this reason, any artifact, any distortion introduced by compression, immediately jumps out when dealing with voice. If MP3 could faithfully reproduce Suzanne Vega&#39;s voice, then it could handle anything.&#xA;&#xA;But why &#34;Tom&#39;s Diner&#34; specifically? What made this song so special?&#xA;&#xA;First: it&#39;s an a cappella recording. There are no instruments to mask or distract. There&#39;s no powerful bass covering the low frequencies, no electric guitars filling the mid-range. It&#39;s just voice. Naked, exposed, with nowhere to hide. Second: it&#39;s an exceptionally high-quality recording. It was recorded at A&amp;M Records studio with professional equipment, meaning it captures all the nuances, all the breaths, all the details of Vega&#39;s performance. There&#39;s no background noise that might mask compression artifacts. 
Third: Suzanne Vega&#39;s voice has a particular timbre – warm, intimate, with that touch of huskiness that makes it instantly recognisable. It has an interesting dynamic range, with more whispered passages and more assertive ones. It is, in essence, an acoustically &#34;complex&#34; voice.&#xA;&#xA;Brandenburg began working obsessively on that song. He listened to it hundreds of times a day, modifying the algorithm, listening again, modifying again. It was an exhausting, maniacal process. Every time he made a change to the code, he had to listen again to verify whether the result was acceptable or not. The problem was that where instrumental music still sounded acceptable, the voice became a disaster.&#xA;&#xA;Brandenburg had to keep refining, optimising, adjusting the algorithm until that voice sounded good, until he managed to capture that warmth, that intimacy, that human quality that made &#34;Tom&#39;s Diner&#34; so special. To be fair, &#34;Tom&#39;s Diner&#34; wasn&#39;t the only song used in testing. Brandenburg and his team also used other tracks: &#34;Mountains O&#39; Things&#34; by Tracy Chapman, &#34;In All Languages&#34; by Ornette Coleman, &#34;Diamonds on the Soles of Her Shoes&#34; by Paul Simon. James Johnston, from the AT&amp;T team working on a competing algorithm, also used some of these tracks. But &#34;Tom&#39;s Diner&#34; became the symbol, the ultimate test, the benchmark. If the algorithm could reproduce that voice, it could reproduce anything.&#xA;&#xA;1992: the MPEG Audio Layer-3 Standard is born&#xA;The hard work paid off. In 1992, after years of comparative testing conducted by independent institutes, the MPEG committee approved the MPEG-1 Audio Layer-3 standard. Brandenburg&#39;s team had won the competition. Their algorithm had proven superior to the others, capable of compressing audio by a factor of 10-12 while maintaining quality that most listeners judged &#34;indistinguishable&#34; from the original. 
But no one, at that moment, could imagine what was about to happen. MPEG-1 included three audio encoding layers: Layer-1, Layer-2, and Layer-3. Layer-3 was the most complex and most efficient, but also the most computationally demanding. In the early 1990s, home computers were still too slow to encode audio in Layer-3 in real time. It was cutting-edge technology, but without immediate practical applications. Layer-2, simpler and less efficient, was adopted for Digital Audio Broadcasting (DAB) in Europe. It seemed that Layer-3 – what would later become MP3 – was destined for a marginal role, a technical curiosity for audiophiles with powerful computers.&#xA;&#xA;Brandenburg and his colleagues were already working on a successor, Advanced Audio Coding (AAC), which would prove even more efficient than MP3. It seemed Layer-3 was destined for oblivion before it even took off. And then 1995 arrived. Two things changed everything: the World Wide Web and Windows 95. The Web was exploding. Suddenly, millions of people had internet access and wanted to share things: images, texts, and naturally, music. But connections were incredibly slow – 28.8k modems, if you were lucky – and downloading even a few megabytes took hours. A format was needed that allowed music sharing in reasonable sizes. Windows 95 brought increasingly powerful computers into millions of homes, with processors capable of decoding compressed audio in real time. And, crucially, Windows used three-character file extensions to identify file types. On 14 July 1995, with a simple internal email at the Fraunhofer Institute, Layer-3 got its definitive name: .mp3&#xA;&#xA;Date: Fri, 14 Jul 1995 12:29:49 +0200&#xA;Subject: File extension for Layer 3: .mp3&#xA;Hello, In light of the overwhelming consensus of the survey participants, &#xA;the file extension for ISO MPEG Audio Layer 3 is .mp3&#xA;&#xA;Three letters that would change the history of music.&#xA;&#xA;But MP3 still needed a catalyst to take off. 
That catalyst arrived in the form of software. Brandenburg and his team, perhaps sensing the possibilities, perhaps just to experiment, developed a software player for Windows. They released it for free. Other developers began creating MP3 encoders, some legal with Fraunhofer licenses, others less so. The format spread virally, completely beyond its creators&#39; control. And when #Napster arrived in 1999 – the peer-to-peer file sharing service – MP3 became the standard format for large-scale music piracy. The record industry, caught completely off guard, cried scandal. Metallica protested (anyone who remembers that period raise your hand...). But it was too late. The genie was out of the bottle.&#xA;&#xA;The Irony: A Lossy Technology to Democratise Music&#xA;There&#39;s a profound irony in all this. MP3 is a &#34;lossy&#34; technology – with loss of information. Every time you compress an audio file to MP3, data is lost. Permanently. It&#39;s not reversible. An MP3, technically speaking, is a degraded version of the original. Yet this &#34;imperfect&#34; technology democratised access to music in a way no one could have predicted. It made it possible to have an entire record collection in your pocket. It allowed millions of people to discover artists they would never have listened to otherwise. It gave independent artists the ability to distribute their music without needing record labels. Brandenburg himself always had mixed feelings about MP3&#39;s success. On one hand, he was proud that his technology had had such an enormous impact. On the other, he was frustrated that many people used low bitrates – 128 kbps or less – that produced obvious sonic artifacts.&#xA;&#xA;MP3 at 320 kbps sounded excellent, practically indistinguishable from the original for most listeners. But for reasons of space and download speed, many settled for lower quality. And then there was the piracy question. 
Brandenburg had never imagined his technology would be used primarily to violate copyright on an industrial scale. The Fraunhofer team had worked for years on copy protection systems, DRM, digital watermarking. But none of these technologies were ever effectively implemented in the MP3 ecosystem that developed in the wild (but beautiful) west of the internet at the end of the &#39;90s. In a 1994 interview, Ricky Adar – an Indo-British entrepreneur – said to Brandenburg: &#34;Do you know that you will destroy the music industry?&#34;&#xA;&#xA;Brandenburg, at the time, thought it was an exaggeration. It wasn&#39;t. MP3 didn&#39;t destroy the music industry in the literal sense – music still exists, artists continue to create, people continue to listen. But it radically transformed it. The business model based on selling physical albums collapsed. Record labels lost their power, only to reorganise and regain it in subsequent years. Distribution became democratised. And all this thanks to a mathematical formula that eliminated frequencies the human ear struggles to perceive.&#xA;&#xA;How MP3 compression actually works&#xA;Behind the &#34;magic&#34; of MP3 lies solid mathematics. The algorithm is based on four fundamental pillars:&#xA;&#xA;MDCT Transform&#xA;The audio signal is processed in granules of 576 samples (two per 1152-sample frame) and transformed from the time domain to the frequency domain. Basically, instead of having a waveform, we get a spectrum.&#xA;&#xA;Psychoacoustics&#xA;The algorithm calculates which frequencies are &#34;masked&#34; by louder ones. Example: if there&#39;s a very powerful drum at 100 Hz, our ear won&#39;t hear a weak sound at 110 Hz. Why waste bits encoding it? A polyphase filterbank first splits the spectrum into 32 subbands, which the psychoacoustic model weighs against the critical bands of human hearing.&#xA;&#xA;Quantisation&#xA;The &#34;important&#34; frequencies (those we hear) are encoded with more bits. 
Those masked or barely audible are coarsely quantised or eliminated entirely. A sound at 15 kHz, almost at the limit of audibility, might be represented with 2-3 bits instead of 16.&#xA;&#xA;Huffman Coding&#xA;The already compressed data is further compressed with entropy coding. More frequent patterns get shorter codes.&#xA;&#xA;Numerical result:&#xA;PCM Audio: 44100 samples/sec × 16 bits × 2 channels = 1411.2 kbps&#xA;MP3 at 128 kbps: compression ratio 11:1&#xA;MP3 at 320 kbps: compression ratio 4.4:1&#xA;&#xA;Suzanne Vega discovers she&#39;s the mother of MP3s&#xA;For years, Suzanne Vega had no idea of the role her song had played in MP3 development. It was the year 2000. Vega, by then an established artist with a consolidated career, was taking her daughter to nursery school. A father approached and congratulated her on being &#34;the mother of the MP3&#34;. Vega had no idea what he was talking about. The man explained he had read an article – hyperbolically titled &#34;Ich Bin Ein Paradigm Shifter: The MP3 Format is a Product of Suzanne Vega&#39;s Voice and This Man&#39;s Ears&#34; – that recounted how Brandenburg had used &#34;Tom&#39;s Diner&#34; to develop the compression algorithm. Vega was astonished. Her song, that small intimate track she had written in the 1980s while attending Barnard College, had become a fundamental piece in the history of digital technology.&#xA;&#xA;In 2007, Vega was invited to the Fraunhofer Institute in Erlangen. Brandenburg and his team let her hear how &#34;Tom&#39;s Diner&#34; sounded in the early versions of the algorithm, before it was refined. It was, in Brandenburg&#39;s own words, &#34;horrible&#34;. The voice was distorted, full of artifacts, almost unrecognisable. They then showed her how they had worked for months, iteration after iteration, to capture that vocal quality that made the track special. They explained the psychoacoustics, the listening tests, the obsession with detail. 
Vega, who had always been attentive to the quality of her recordings, appreciated the irony: a song recorded with maniacal care had helped develop a compression technology that, in a sense, sacrificed part of that quality for practical reasons.&#xA;&#xA;And there&#39;s another irony in this story. In 2012, Vega was invited to the Thomas Edison National Historical Park in New Jersey. There, she sang &#34;Tom&#39;s Diner&#34; – the song that had become the symbol of the digital revolution – recording it onto an Edison cylinder, one of the oldest and most analog recording technologies in existence. It was a symbolic gesture: bringing the song back to its analog roots, recording it with technology that predated even vinyl by decades. And naturally, someone took that Edison cylinder recording and converted it to MP3, closing the circle in a way that only modern technology could allow. The Museum of Portable Sound made that MP3 file available – an analog wax recording of the track that defined digital audio compression – as a gift for enthusiasts. An act that symbolically connects the Edison era to the Spotify era.&#xA;&#xA;From Walkman to Spotify, via iPod&#xA;Before the iPod: for twenty years, from 1979, the Sony Walkman had dominated portable listening. First with cassettes, then with the Discman for CDs. But you always had a physical limit: one cassette, one CD at a time. Pre-iPod MP3 players – like the MPMan F10 of 1998 – promised to solve this problem, but with only 32MB of storage (about 8 songs at 128kbps) they were little more than technological curiosities.&#xA;&#xA;1999: Napster arrives. Shawn Fanning, a nineteen-year-old student, creates software that allows MP3 files to be shared directly between users, without central servers. Within months, millions of people are downloading music for free. The record industry panics. Lawsuits follow, court battles. Napster is shut down in 2001, but it&#39;s too late. 
The model has been established: music can circulate freely online.&#xA;&#xA;2001: Apple launches the iPod. &#34;1000 songs in your pocket&#34; is the slogan. The definitive MP3 player, elegant, with an intuitive interface. The iPod wasn&#39;t the first MP3 player – there were already dozens on the market – but it was the one that made the idea mainstream. Suddenly, having your entire music collection in your pocket wasn&#39;t a nerd&#39;s dream anymore, it was a consumer reality.&#xA;&#xA;2003: Apple launches the iTunes Music Store. Finally, a legal way to buy digital music. 99 cents per song, reasonable quality, relatively unobtrusive DRM. It doesn&#39;t solve the piracy problem, but it offers a valid alternative. Within a few years, iTunes becomes the world&#39;s largest music retailer.&#xA;&#xA;2008: Spotify launches in Sweden. A new model: streaming, not downloading. Unlimited access to millions of tracks for a monthly fee (or free with ads). The MP3 as a file to own slowly begins to become obsolete. Why have files on your hard drive when you can have instant access to everything?&#xA;&#xA;2017: MP3 patents expire. The Fraunhofer Institute officially announces the &#34;death&#34; of MP3 and focuses on more modern codecs like AAC and Opus. But it&#39;s a purely technical death: MP3 continues to be used everywhere, a legacy format that will probably never completely die.&#xA;&#xA;Throughout all these years, Fraunhofer earned hundreds of millions of euros in royalties from MP3 patents. That money was reinvested in research, creating new generations of ever more efficient audio codecs: AAC (used by Apple), MPEG-H (for immersive audio), EVS (for 5G calls). Brandenburg, who in 2000 received the prestigious &#34;Deutscher Zukunftspreis&#34; (the German innovation prize), never stopped. Today he leads Brandenburg Labs, a startup working on advanced audio technologies like immersive audio for headphones, trying to create sonic experiences indistinguishable from reality. 
The original Fraunhofer team – Brandenburg, Bernhard Grill, Jürgen Herre, Harald Popp, Ernst Eberlein – has been awarded prizes and recognition worldwide. They&#39;ve entered the Internet Hall of Fame. The CE Hall of Fame. The German Research Hall of Fame. But perhaps the most significant recognition is the simplest: go to any corner of the world, ask someone of any age what an &#34;MP3&#34; is, and they&#39;ll know. A format that defined an entire era of digital culture.&#xA;&#xA;FLAC, OGG, vinyl, and the return of quality&#xA;And here we arrive at one of the most interesting parts of this story. Because not everyone embraced MP3. Not everyone embraced streaming. Not everyone settled for convenience at the expense of freedom and control. In the 2000s, while MP3 dominated and Fraunhofer profited from patents, there was already a counterculture growing silently.&#xA;&#xA;OGG Vorbis – released in 2000 by the Xiph.Org Foundation – was the open source community&#39;s response to the MP3 monopoly. While Fraunhofer and Thomson required licenses and royalties for MP3 encoders, OGG was completely free, without patents, without restrictions. Not only that: at the same bitrate, OGG often offered quality superior to MP3. It was technically better and philosophically consistent with free software ethics. For those who believed in open source, for those who rejected the idea of paying royalties on an audio format, for those who wanted full control over their tools, OGG became the format of choice. It wasn&#39;t just a technical matter: it was a matter of principle. The same spirit that had animated the free software movement in the 1980s – the GPL, the Free Software Foundation, all of Stallman&#39;s work – now extended to the world of audio codecs.&#xA;&#xA;And then there were those who completely rejected lossy compression. #FLAC – Free Lossless Audio Codec, released in 2001 – offered compression without data loss. Larger files, sure, but bit-for-bit identical to the original. 
For the most uncompromising audiophiles, FLAC was the only acceptable choice. But it wasn&#39;t just about digital formats. Just as digital seemed to have won, vinyl records began making a comeback. Sales, which had collapsed in the &#39;90s and 2000s, started growing again. In 2020, for the first time in decades, vinyl sales surpassed CD sales.&#xA;&#xA;Nostalgia, certainly. The charm of the physical object, the large cover, the ritual of putting the record on the turntable, certainly. But there&#39;s also a &#34;visceral&#34; element: owning a vinyl, or a CD, means owning something real, tangible. Something that can&#39;t be deleted from a server, revoked by a streaming service, lost in a hard drive crash.&#xA;&#xA;I myself decided, years ago, to stay out of streaming services. I buy physical CDs (almost always used), rip them to OGG, tag them properly, and put them on my FreeBSD NAS with ZFS. And then my #Navidrome server, reading them over NFS, does the rest. I&#39;ve chosen to maintain control over my data, to favour a free and open source format over proprietary convenience. It&#39;s a choice that requires time (and a few scattered curses...), hard drives to manage, Docker Compose files to update, backups to make, players to configure. But it&#39;s also a choice that gives me a sense of ownership, of control that streaming cannot provide.&#xA;&#xA;There&#39;s an irony in all this: the technology that &#34;Tom&#39;s Diner&#34; helped create – MP3, lossy compression, the idea that &#34;good enough&#34; is sufficient – triggered two types of resistance. Those who rejected it for quality reasons (audiophiles with FLAC), and those who rejected it for freedom reasons (the open source community with OGG). And often, these two souls overlapped.&#xA;&#xA;But this choice is only possible because hard drives have become enormous, internet connections fast, storage cheap. 
The same technologies that made MP3 obsolete have made it possible to collect OGG or FLAC without thinking twice. In a sense, MP3 created the conditions for its own obsolescence – and for the birth of freer and often better alternatives.&#xA;&#xA;Some Lessons to Take Away&#xA;This story has taught us several things. It taught us that convenience often beats perfection. It taught us that technologies developed for one purpose (professional transmission via ISDN) can end up being used in completely different ways (mass file sharing). It taught us that established industries can be disrupted by technologies that initially seem marginal or niche. But perhaps the most important lesson is this: technology is always, at its core, a human matter. MP3 isn&#39;t just a mathematical algorithm. It&#39;s Suzanne Vega&#39;s voice singing about coffee and rain.&#xA;&#xA;  I am sitting in the morning&#xA;  At the diner on the corner&#xA;  I am waiting at the counter&#xA;  For the man to pour the coffee&#xA;&#xA;It&#39;s Brandenburg&#39;s obsession with capturing that warm vocal tonality. We are living, in other words, the consequences of those thousands of repeated listens to &#34;Tom&#39;s Diner&#34;, of that obsession with detail, of that search for perfect compression.&#xA;&#xA;And if Suzanne Vega hadn&#39;t written that song? If Brandenburg had chosen another track for his tests? Probably MP3 would have been developed anyway. The technology was in the air, the problem of audio compression had to be solved. But perhaps it would have taken longer. Perhaps the algorithm would have been slightly different. Perhaps history would have taken a different turn.&#xA;&#xA;I like to think that technological progress is inevitable, deterministic, that it follows an unstoppable internal logic. 
But stories like this remind us how random it is, how much it depends on individual choices, on coincidences.&#xA;&#xA;And now, if you&#39;ll excuse me, I&#39;m going to update the latest release of Navidrome on my Proxmox server. With Docker, obviously.&#xA;&#xA;#MP3 #DigitalAudio #SuzanneVega #TomsDiner #Fraunhofer #MusicHistory #AudioCompression #OpenSource #FLAC #TechHistory&#xA;&#xA;a href=&#34;https://remark.as/p/jolek78/a-song-an-algorithm-and-the-end-of-the-analog-world&#34;Discuss.../a&#xA;&#xA;div class=&#34;center&#34;a href=&#34;https://fosstodon.org/@jolek78&#34;Mastodon/a :: a href=&#34;https://pixelfed.social/jolek78&#34;Pixelfed/a :: a href=&#34;mailto:jolek78@posteo.net&#34;Email/a  :: a href=&#34;https://app.cinny.in/login/envs.net/#/user/@jolek78:envs.net&#34; Element/a/div]]&gt;</description>
      <content:encoded><![CDATA[<p>There&#39;s a moment in the history of technology when everything changes. We don&#39;t always recognise it. Sometimes it takes years to understand that a small spark, an apparently insignificant detail, ignited a revolution that would forever change the way we live, communicate, and consume culture. In 1987, an American singer-songwriter named Suzanne Vega released a minimalist track called “Tom&#39;s Diner”. Two minutes and nine seconds of a cappella vocals, no instrumental accompaniment, no special effects. Just a voice telling the story of an ordinary morning in a New York diner. A song so essential, so pure in its simplicity, that someone on the other side of the world – a German engineer obsessed with <a href="https://jolek78.writeas.com/tag:audio" class="hashtag"><span>#</span><span class="p-category">audio</span></a> compression – would use it as a benchmark to create a technology that would shake the global music industry to its core. That technology was called <a href="https://jolek78.writeas.com/tag:MP3" class="hashtag"><span>#</span><span class="p-category">MP3</span></a>. And that voice, that “warm a cappella voice” as Karlheinz Brandenburg would later describe it, would become the ultimate test to determine whether a compression algorithm actually worked or not.
</p>

<p>This is the story – part documented reality, part urban legend – of how a folk song became the unwitting mother of the greatest revolution in music distribution since vinyl. A story that has always fascinated me because it contains all the contradictions of our digital age: innovation and destruction, democratization and loss of quality, openness and control. And yes, it&#39;s also because I&#39;ve always had a soft spot for stories that intertwine in unexpected ways. Perhaps because I too, during my years in radio, saw first-hand what it means to work with audio, manipulate it, compress it, broadcast it. Perhaps because, like many of us who lived through the transition from analog to digital, I still carry the memory of those first MP3 collections downloaded via a 56k modem (crimes do become time-barred after 20 years, right?). But above all, this story fascinates me because it reminds us that behind every technological innovation there&#39;s always a human element: a voice, an aesthetic choice, an obsession. And in the case of MP3, that human element was precisely Suzanne Vega&#39;s voice singing about coffee and rain on a November morning.</p>

<h2 id="late-1980s-the-race-for-compression">Late 1980s: the race for compression</h2>

<p>To understand how “Tom&#39;s Diner” ended up in the laboratories of the <a href="https://jolek78.writeas.com/tag:Fraunhofer" class="hashtag"><span>#</span><span class="p-category">Fraunhofer</span></a> Institute, we need to step back and understand what was happening in the world of digital audio in the late 1980s. The CD had arrived in 1982, bringing the promise of perfect audio quality, crystalline, immune to scratches and the wear of time. But there was a massive problem: digital audio files were enormous. A three-minute song, encoded in PCM (Pulse-Code Modulation) format at 44.1 kHz and 16 bits, occupied around 30-35 megabytes. An entire album? Over 600 megabytes.</p>

<p>To put this in perspective: in the 1980s, the portable listening revolution was the Sony Walkman, which played analog cassettes. With the arrival of CDs, Sony launched the Discman, but these portable CD players were bulky, drained batteries, and skipped at the slightest movement. The idea of carrying an entire record collection was still science fiction.</p>

<p>In an era when a 40MB hard drive was considered gigantic, these numbers were simply impractical. You couldn&#39;t think of transmitting music via the internet – which was still an academic and military network – nor of efficiently archiving it on home computers. A radical solution was needed: audio had to be compressed while maintaining acceptable quality. This is where the small city of Erlangen, in Bavaria, enters the scene. Not exactly Silicon Valley, but a German town with a long tradition of scientific excellence. Here was the headquarters of the Fraunhofer Institute for Integrated Circuits, a research centre that would forever change the way we listen to music. The team was led by a man named Dieter Seitzer, who had worked for years on psychoacoustics – that branch of science studying how humans perceive sounds. Seitzer had a vision: to find a way to transmit high-quality music through ISDN telephone lines. It seemed like science fiction, but his doctoral student, a young engineer named Karlheinz Brandenburg, was convinced it was possible. The underlying idea was elegant in its simplicity: the human ear isn&#39;t perfect. There are frequencies we don&#39;t hear, sounds that get “masked” by louder ones, sonic details that our brain simply discards. Why waste disk space for information we can&#39;t perceive anyway?</p>

<p>The goal, therefore, was to create an algorithm that eliminated everything the human ear couldn&#39;t distinguish, reducing an audio file to a tenth of its original size without the average listener noticing the difference. But the competition was fierce. In 1989, when the Moving Picture Experts Group (MPEG) – the international standardisation organisation – issued a call for audio codec proposals, 14 candidates arrived from around the world. Among them were AT&amp;T Bell Labs in the United States, Thomson in France, Philips in the Netherlands, and naturally the Erlangen team with their algorithm called ASPEC (Adaptive Spectral Perceptual Entropy Coding). It was a race where whoever demonstrated the most efficient algorithm won: maximum compression, minimum perceptible quality loss. And to prove it, tests were needed. Many tests. Obsessive, maniacal tests, repeated hundreds, thousands of times. In other words, a reference song was needed. A song that would put the algorithm to the most ruthless test possible.</p>

<h3 id="why-that-voice">Why that voice?</h3>

<p>Several versions exist of how Brandenburg discovered “Tom&#39;s Diner”. In one interview, he tells of hearing it on the radio while walking down a corridor. In another, he says he read about this song in a hi-fi magazine that used it to test high-quality speakers. The stories change, overlap, contradict each other. Brandenburg himself has given different versions over the years. But one thing is certain: when he heard that voice, he immediately knew he had found his ultimate test.</p>

<blockquote><p>“I was ready to fine-tune my compression algorithm,” Brandenburg recalls in a 2009 interview, “and somewhere down the corridor a radio was playing Tom&#39;s Diner. I was electrified. I knew it would be nearly impossible to compress this warm a cappella voice.”</p></blockquote>

<p>And it&#39;s precisely in that phrase – “nearly impossible” – that you understand the challenge. The human voice is the most difficult instrument to compress. Evolutionarily, our ears are optimised to recognise voices. We evolved to hear nuances, emotions, the micro-tonal variations that distinguish one person from another, that tell us if someone is happy or sad, sincere or lying. Voice is the primary interface of human communication, and our brain has developed sophisticated mechanisms to analyse it. For this reason, any artifact, any distortion introduced by compression, immediately jumps out when dealing with voice. If MP3 could faithfully reproduce Suzanne Vega&#39;s voice, then it could handle anything.</p>

<p>But why “Tom&#39;s Diner” specifically? What made this song so special?</p>

<p>First: it&#39;s an a cappella recording. There are no instruments to mask or distract. There&#39;s no powerful bass covering the low frequencies, no electric guitars filling the mid-range. It&#39;s just voice. Naked, exposed, with nowhere to hide. Second: it&#39;s an exceptionally high-quality recording. It was recorded at A&amp;M Records studio with professional equipment, meaning it captures all the nuances, all the breaths, all the details of Vega&#39;s performance. There&#39;s no background noise that might mask compression artifacts. Third: Suzanne Vega&#39;s voice has a particular timbre – warm, intimate, with that touch of huskiness that makes it instantly recognisable. It has an interesting dynamic range, with more whispered passages and more assertive ones. It is, in essence, an acoustically “complex” voice.</p>

<p>Brandenburg began working obsessively on that song. He listened to it hundreds of times a day, modifying the algorithm, listening again, modifying again. It was an exhausting, maniacal process. Every time he made a change to the code, he had to listen again to verify whether the result was acceptable or not. The problem was that where instrumental music still sounded acceptable, the voice became a disaster.</p>

<p>Brandenburg had to keep refining, optimising, adjusting the algorithm until that voice sounded good, until he managed to capture that warmth, that intimacy, that human quality that made “Tom&#39;s Diner” so special. To be fair, “Tom&#39;s Diner” wasn&#39;t the only song used in testing. Brandenburg and his team also used other tracks: “Mountains O&#39; Things” by Tracy Chapman, “In All Languages” by Ornette Coleman, “Diamonds on the Soles of Her Shoes” by Paul Simon. James Johnston, from the AT&amp;T team working on a competing algorithm, also used some of these tracks. But “Tom&#39;s Diner” became the symbol, the ultimate test, the benchmark. If the algorithm could reproduce that voice, it could reproduce anything.</p>

<h3 id="1992-the-mpeg-audio-layer-3-standard-is-born">1992: the MPEG Audio Layer-3 Standard is born</h3>

<p>The hard work paid off. In 1992, after years of comparative testing conducted by independent institutes, the MPEG committee approved the MPEG-1 Audio Layer-3 standard. Brandenburg&#39;s team had won the competition. Their algorithm had proven superior to the others, capable of compressing audio by a factor of 10-12 while maintaining quality that most listeners judged “indistinguishable” from the original. But no one, at that moment, could imagine what was about to happen. MPEG-1 included three audio encoding layers: Layer-1, Layer-2, and Layer-3. Layer-3 was the most complex and most efficient, but also the most computationally demanding. In the early 1990s, home computers were still too slow to encode audio in Layer-3 in real time. It was cutting-edge technology, but without immediate practical applications. Layer-2, simpler and less efficient, was adopted for Digital Audio Broadcasting (DAB) in Europe. It seemed that Layer-3 – what would later become MP3 – was destined for a marginal role, a technical curiosity for audiophiles with powerful computers.</p>

<p>Brandenburg and his colleagues were already working on a successor, Advanced Audio Coding (AAC), which would prove even more efficient than MP3. It seemed Layer-3 was destined for oblivion before it even took off. And then 1995 arrived. Two things changed everything: the World Wide Web and Windows 95. The Web was exploding. Suddenly, millions of people had internet access and wanted to share things: images, texts, and naturally, music. But connections were incredibly slow – 28.8k modems, if you were lucky, taking hours to download files of just a few megabytes. A format was needed that allowed music sharing in reasonable sizes. Windows 95 brought increasingly powerful computers into millions of homes, with processors capable of decoding compressed audio in real time. And, crucially, Windows used three-character file extensions to identify file types. On 14 July 1995, with a simple internal email at the Fraunhofer Institute, Layer-3 got its definitive name: .mp3</p>

<pre><code>Date: Fri, 14 Jul 1995 12:29:49 +0200
Subject: File extension for Layer 3: .mp3
Hello, In light of the overwhelming consensus of the survey participants, 
the file extension for ISO MPEG Audio Layer 3 is .mp3
</code></pre>

<p>Three letters that would change the history of music.</p>

<p>But MP3 still needed a catalyst to take off. That catalyst arrived in the form of software. Brandenburg and his team, perhaps sensing the possibilities, perhaps just to experiment, developed a software player for Windows. They released it for free. Other developers began creating MP3 encoders, some legal with Fraunhofer licenses, others less so. The format spread virally, completely beyond its creators&#39; control. And when <a href="https://jolek78.writeas.com/tag:Napster" class="hashtag"><span>#</span><span class="p-category">Napster</span></a> arrived in 1999 – the peer-to-peer file sharing service – MP3 became the standard format for large-scale music piracy. The record industry, caught completely off guard, cried scandal. Metallica protested (anyone who remembers that period raise your hand...). But it was too late. The genie was out of the bottle.</p>

<h3 id="the-irony-a-lossy-technology-to-democratise-music">The Irony: A Lossy Technology to Democratise Music</h3>

<p>There&#39;s a profound irony in all this. MP3 is a “lossy” technology – with loss of information. Every time you compress an audio file to MP3, data is lost. Permanently. It&#39;s not reversible. An MP3, technically speaking, is a degraded version of the original. Yet this “imperfect” technology democratised access to music in a way no one could have predicted. It made it possible to have an entire record collection in your pocket. It allowed millions of people to discover artists they would never have listened to otherwise. It gave independent artists the ability to distribute their music without needing record labels. Brandenburg himself always had mixed feelings about MP3&#39;s success. On one hand, he was proud that his technology had had such an enormous impact. On the other, he was frustrated that many people used low bitrates – 128 kbps or less – that produced obvious sonic artifacts.</p>

<p>MP3 at 320 kbps sounded excellent, practically indistinguishable from the original for most listeners. But for reasons of space and download speed, many settled for lower quality. And then there was the piracy question. Brandenburg had never imagined his technology would be used primarily to violate copyright on an industrial scale. The Fraunhofer team had worked for years on copy protection systems, DRM, digital watermarking. But none of these technologies were ever effectively implemented in the MP3 ecosystem that developed in the wild (but beautiful) west of the internet at the end of the &#39;90s. In a 1994 interview, Ricky Adar – an Indo-British entrepreneur – said to Brandenburg: “Do you know that you will destroy the music industry?”</p>

<p>Brandenburg, at the time, thought it was an exaggeration. It wasn&#39;t. MP3 didn&#39;t destroy the music industry in the literal sense – music still exists, artists continue to create, people continue to listen. But it radically transformed it. The business model based on selling physical albums collapsed. Record labels lost their power, only to reorganise and regain it in subsequent years. Distribution became democratised. And all this thanks to a mathematical formula that eliminated frequencies the human ear struggles to perceive.</p>

<h3 id="how-mp3-compression-actually-works">How MP3 compression actually works</h3>

<p>Behind the “magic” of MP3 lies solid mathematics. The algorithm is based on four fundamental pillars:</p>

<p><strong>MDCT Transform</strong>
The audio signal is broken down into 576 samples per frame, transformed from the time domain to the frequency domain. Basically, instead of having a waveform, we get a spectrum.</p>
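<p>As a concrete sketch of this step, here is a minimal, unoptimised MDCT in plain Python. This is the textbook definition of the transform, not the optimised code used in real encoders:</p>

```python
import math

def mdct(x):
    """MDCT: maps 2N overlapping time samples to N frequency coefficients."""
    N = len(x) // 2
    return [
        sum(
            x[n] * math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
            for n in range(2 * N)
        )
        for k in range(N)
    ]

# A cosine sitting exactly on the centre frequency of bin 3 (3.5/16 cycles/sample)
frame = [math.cos(2 * math.pi * 3.5 * n / 16) for n in range(16)]
coeffs = mdct(frame)
```

<p>Feeding in a tone on a bin centre concentrates all the energy in a single coefficient; in real music the energy spreads across bins, and the later stages decide how precisely each one is worth encoding.</p>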

<p><strong>Psychoacoustics</strong>
The algorithm calculates which frequencies are “masked” by louder ones. Example: if there&#39;s a very powerful drum at 100 Hz, our ear won&#39;t hear a weak sound at 110 Hz. Why waste bits encoding it? The psychoacoustic model divides the spectrum into critical bands matching the frequency resolution of the human ear (the encoder&#39;s filterbank itself splits the signal into 32 subbands).</p>
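<p>The idea can be sketched as a toy filter. The thresholds and ranges below are purely illustrative and bear no relation to the actual MPEG psychoacoustic model:</p>

```python
def audible(components, mask_range_hz=50.0, mask_drop_db=20.0):
    """Keep only components not masked by a much louder close neighbour.

    components: list of (frequency_hz, level_db) pairs.
    """
    result = []
    for f, level in components:
        # A component is masked if some other component nearby is much louder
        masked = any(
            of != f and abs(of - f) < mask_range_hz and ol - mask_drop_db > level
            for of, ol in components
        )
        if not masked:
            result.append((f, level))
    return result

# Loud drum at 100 Hz, weak neighbour at 110 Hz, distant tone at 1000 Hz
tones = [(100, 80), (110, 40), (1000, 50)]
kept = audible(tones)   # the 110 Hz tone is masked and can be dropped
```

<p>The real model is far subtler (masking curves are asymmetric and level-dependent), but the payoff is the same: masked components need few or no bits.</p>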

<p><strong>Quantisation</strong>
The “important” frequencies (those we hear) are encoded with more bits. Those masked or barely audible are coarsely quantised or eliminated entirely. A sound at 15 kHz, almost at the limit of audibility, might be represented with 2-3 bits instead of 16.</p>
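<p>The trade-off is easy to see with a toy uniform quantiser. This is a sketch only: the real encoder uses non-uniform quantisation with per-band scale factors:</p>

```python
def quantise(x, bits):
    """Uniform quantiser over [-1, 1]: fewer bits means coarser steps."""
    levels = 2 ** bits
    step = 2.0 / levels
    index = min(int((x + 1.0) / step), levels - 1)  # which cell x falls into
    return index * step - 1.0 + step / 2            # centre of that cell

sample = 0.3
for bits in (3, 8, 16):
    err = abs(quantise(sample, bits) - sample)
    print(f"{bits:2d} bits: error {err:.6f}")
```

<p>At 3 bits the reconstruction error is audible-scale; at 16 bits it is negligible. The encoder spends its bit budget where the psychoacoustic model says the ear will notice.</p>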

<p><strong>Huffman Coding</strong>
The already compressed data is further compressed with entropy coding. More frequent patterns get shorter codes.</p>
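<p>The principle can be shown with a generic Huffman construction (the actual MP3 code tables are fixed by the standard, not built per file):</p>

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman table: more frequent symbols get shorter bit codes."""
    freq = Counter(symbols)
    # Heap entries: (total frequency, unique tie-breaker, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # merge the two rarest subtrees
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, tick, merged))
        tick += 1
    return heap[0][2]

# Quantised spectral values from a hypothetical frame: zeros dominate
frame = [0, 0, 0, 0, 0, 0, 1, 1, 1, -1, -1, 2]
codes = huffman_codes(frame)
total_bits = sum(len(codes[v]) for v in frame)
```

<p>The dominant value (0) ends up with a one-bit code, so the whole frame costs far fewer bits than a fixed-width encoding would.</p>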

<p><strong>Numerical result:</strong></p>

<pre><code>PCM Audio: 44100 samples/sec × 16 bits × 2 channels = 1411.2 kbps
MP3 at 128 kbps: compression ratio 11:1
MP3 at 320 kbps: compression ratio 4.4:1
</code></pre>
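<p>Spelling out that arithmetic in a few lines of Python:</p>

```python
# Uncompressed CD audio: sample rate x bit depth x channels
sample_rate = 44_100   # samples per second
bit_depth = 16         # bits per sample
channels = 2           # stereo

pcm_kbps = sample_rate * bit_depth * channels / 1000
for mp3_kbps in (128, 320):
    ratio = pcm_kbps / mp3_kbps
    print(f"MP3 at {mp3_kbps} kbps: compression ratio {ratio:.1f}:1")
```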

<h3 id="suzanne-vega-discovers-she-s-the-mother-of-mp3s">Suzanne Vega discovers she&#39;s the mother of MP3s</h3>

<p>For years, Suzanne Vega had no idea of the role her song had played in MP3 development. It was the year 2000. Vega, by then an established artist with a consolidated career, was taking her daughter to nursery school. A father approached and congratulated her on being “the mother of the MP3”. Vega had no idea what he was talking about. The man explained he had read an article – hyperbolically titled “Ich Bin Ein Paradigm Shifter: The MP3 Format is a Product of Suzanne Vega&#39;s Voice and This Man&#39;s Ears” – that recounted how Brandenburg had used “Tom&#39;s Diner” to develop the compression algorithm. Vega was astonished. Her song, that small intimate track she had written in the 1980s while attending Barnard College, had become a fundamental piece in the history of digital technology.</p>

<p>In 2007, Vega was invited to the Fraunhofer Institute in Erlangen. Brandenburg and his team played her “Tom&#39;s Diner” as it had sounded in the early versions of the algorithm, before it was refined. It was, in Brandenburg&#39;s own words, “horrible”. The voice was distorted, full of artifacts, almost unrecognisable. They then showed her how they had worked for months, iteration after iteration, to capture that vocal quality that made the track special. They explained the psychoacoustics, the listening tests, the obsession with detail. Vega, who had always been attentive to the quality of her recordings, appreciated the irony: a song recorded with maniacal care had helped develop a compression technology that, in a sense, sacrificed part of that quality for practical reasons.</p>

<p>And there&#39;s another irony in this story. In 2012, Vega was invited to the Thomas Edison National Historical Park in New Jersey. There, she sang “Tom&#39;s Diner” – the song that had become the symbol of the digital revolution – recording it onto an Edison cylinder, one of the oldest and most analog recording technologies in existence. It was a symbolic gesture: bringing the song back to its analog roots, recording it with technology that predated even vinyl by decades. And naturally, someone took that Edison cylinder recording and converted it to MP3, closing the circle in a way that only modern technology could allow. The Museum of Portable Sound made that MP3 file available – an analog wax recording of the track that defined digital audio compression – as a gift for enthusiasts. An act that symbolically connects the Edison era to the Spotify era.</p>

<h3 id="from-walkman-to-spotify-via-ipod">From Walkman to Spotify, via iPod</h3>

<p><strong>Before the iPod:</strong> for twenty years, from 1979, the Sony Walkman had dominated portable listening. First with cassettes, then with the Discman for CDs. But you always had a physical limit: one cassette, one CD at a time. Pre-iPod MP3 players – like the MPMan F10 of 1998 – promised to solve this problem, but with only 32MB of storage (about 8 songs at 128kbps) they were little more than technological curiosities.</p>

<p><strong>1999:</strong> Napster arrives. Shawn Fanning, a nineteen-year-old student, creates software that allows MP3 files to be shared directly between users, without central servers. Within months, millions of people are downloading music for free. The record industry panics. Lawsuits follow, court battles. Napster is shut down in 2001, but it&#39;s too late. The model has been established: music can circulate freely online.</p>

<p><strong>2001:</strong> Apple launches the iPod. “1000 songs in your pocket” is the slogan. The definitive MP3 player, elegant, with an intuitive interface. The iPod wasn&#39;t the first MP3 player – there were already dozens on the market – but it was the one that made the idea mainstream. Suddenly, having your entire music collection in your pocket wasn&#39;t a nerd&#39;s dream anymore, it was a consumer reality.</p>

<p><strong>2003:</strong> Apple launches iTunes. Finally, a legal way to buy digital music. 99 cents per song, reasonable quality, no invasive DRM. It doesn&#39;t solve the piracy problem, but it offers a valid alternative. Within a few years, iTunes becomes the world&#39;s largest music retailer.</p>

<p><strong>2008:</strong> Spotify launches in Sweden. A new model: streaming, not downloading. Unlimited access to millions of tracks for a monthly fee (or free with ads). The MP3 as a file to own slowly begins to become obsolete. Why have files on your hard drive when you can have instant access to everything?</p>

<p><strong>2017:</strong> MP3 patents expire. The Fraunhofer Institute officially announces the “death” of MP3 and focuses on more modern codecs like AAC and Opus. But it&#39;s a purely technical death: MP3 continues to be used everywhere, a legacy format that will probably never completely die.</p>

<p>Throughout all these years, Fraunhofer earned hundreds of millions of euros in royalties from MP3 patents. That money was reinvested in research, creating new generations of ever more efficient audio codecs: AAC (used by Apple), MPEG-H (for immersive audio), EVS (for 5G calls). Brandenburg, who in 2000 received the prestigious “Deutscher Zukunftspreis” (the German innovation prize), never stopped. Today he leads Brandenburg Labs, a startup working on advanced audio technologies like immersive audio for headphones, trying to create sonic experiences indistinguishable from reality. The original Fraunhofer team – Brandenburg, Bernhard Grill, Jürgen Herre, Harald Popp, Ernst Eberlein – has been awarded prizes and recognition worldwide. They&#39;ve entered the Internet Hall of Fame. The CE Hall of Fame. The German Research Hall of Fame. But perhaps the most significant recognition is the simplest: go to any corner of the world, ask someone of any age what an “MP3” is, and they&#39;ll know. A format that defined an entire era of digital culture.</p>

<h3 id="flac-ogg-vinyl-and-the-return-of-quality">FLAC, OGG, vinyl, and the return of quality</h3>

<p>And here we arrive at one of the most interesting parts of this story. Because not everyone embraced MP3. Not everyone embraced streaming. Not everyone settled for convenience at the expense of freedom and control. In the 2000s, while MP3 dominated and Fraunhofer profited from patents, there was already a counterculture growing silently.</p>

<p><a href="https://jolek78.writeas.com/tag:OGG" class="hashtag"><span>#</span><span class="p-category">OGG</span></a> Vorbis – released in 2000 by the Xiph.Org Foundation – was the open source community&#39;s response to the MP3 monopoly. While Fraunhofer and Thomson required licenses and royalties for MP3 encoders, OGG was completely free, without patents, without restrictions. Not only that: at the same bitrate, OGG often offered quality superior to MP3. It was technically better and philosophically consistent with free software ethics. For those who believed in open source, for those who rejected the idea of paying royalties on an audio format, for those who wanted full control over their tools, OGG became the format of choice. It wasn&#39;t just a technical matter: it was a matter of principle. The same spirit that had animated the free software movement in the 1980s – the GPL, the Free Software Foundation, all of Stallman&#39;s work – now extended to the world of audio codecs.</p>

<p>And then there were those who completely rejected lossy compression. <a href="https://jolek78.writeas.com/tag:FLAC" class="hashtag"><span>#</span><span class="p-category">FLAC</span></a> – Free Lossless Audio Codec, released in 2001 – offered compression without data loss. Larger files, sure, but bit-for-bit identical to the original. For the most uncompromising audiophiles, FLAC was the only acceptable choice. But it wasn&#39;t just about digital formats. Just as digital seemed to have won, vinyl records began making a comeback. Sales, which had collapsed in the &#39;90s and 2000s, started growing again. In 2020, for the first time in decades, vinyl sales surpassed CD sales.</p>

<p>Nostalgia, certainly. The charm of the physical object, the large cover, the ritual of putting the record on the turntable, certainly. But there&#39;s also a “visceral” element: owning a vinyl, or a CD, means owning something real, tangible. Something that can&#39;t be deleted from a server, revoked by a streaming service, lost in a hard drive crash.</p>

<p>For years now, I have chosen to stay out of streaming services. I buy physical CDs (almost always used), rip them to OGG, tag them properly, and put them on my FreeBSD NAS with ZFS. And then my <a href="https://jolek78.writeas.com/tag:Navidrome" class="hashtag"><span>#</span><span class="p-category">Navidrome</span></a> server, reading them over NFS, does the rest. I&#39;ve chosen to maintain control over my data, to favour a free and open source format over proprietary convenience. It&#39;s a choice that requires time (and a few scattered curses...), hard drives to manage, Docker Compose files to update, backups to make, players to configure. But it&#39;s also a choice that gives me a sense of ownership, of control, that streaming cannot provide.</p>

<p>There&#39;s an irony in all this: the technology that “Tom&#39;s Diner” helped create – MP3, lossy compression, the idea that “good enough” is sufficient – triggered two types of resistance. Those who rejected it for quality reasons (audiophiles with FLAC), and those who rejected it for freedom reasons (the open source community with OGG). And often, these two souls overlapped.</p>

<p>But this choice is only possible because hard drives have become enormous, internet connections fast, storage cheap. The same technologies that made MP3 obsolete have made it possible to collect OGG or FLAC without thinking twice. In a sense, MP3 created the conditions for its own obsolescence – and for the birth of freer and often better alternatives.</p>

<h3 id="some-lessons-to-take-away">Some Lessons to Take Away</h3>

<p>This story has taught us several things. It taught us that convenience often beats perfection. It taught us that technologies developed for one purpose (professional transmission via ISDN) can end up being used in completely different ways (mass file sharing). It taught us that established industries can be disrupted by technologies that initially seem marginal or niche. But perhaps the most important lesson is this: technology is always, at its core, a human matter. MP3 isn&#39;t just a mathematical algorithm. It&#39;s Suzanne Vega&#39;s voice singing about coffee and rain.</p>

<blockquote><p>I am sitting in the morning
At the diner on the corner
I am waiting at the counter
For the man to pour the coffee</p></blockquote>

<p>It&#39;s Brandenburg&#39;s obsession with capturing that warm vocal tonality. We are living, in other words, the consequences of those thousands of repeated listens to “Tom&#39;s Diner”, of that obsession with detail, of that search for perfect compression.</p>

<p>And if Suzanne Vega hadn&#39;t written that song? If Brandenburg had chosen another track for his tests? Probably MP3 would have been developed anyway. The technology was in the air, the problem of audio compression had to be solved. But perhaps it would have taken longer. Perhaps the algorithm would have been slightly different. Perhaps history would have taken a different turn.</p>

<p>It&#39;s tempting to think of technological progress as inevitable, deterministic, following an unstoppable internal logic. But stories like this remind us how contingent it is, how much it depends on individual choices, on coincidences.</p>

<p>And now, if you&#39;ll excuse me, I&#39;m going to update the latest release of Navidrome on my Proxmox server. With Docker, obviously.</p>

<p><a href="https://jolek78.writeas.com/tag:MP3" class="hashtag"><span>#</span><span class="p-category">MP3</span></a> <a href="https://jolek78.writeas.com/tag:DigitalAudio" class="hashtag"><span>#</span><span class="p-category">DigitalAudio</span></a> <a href="https://jolek78.writeas.com/tag:SuzanneVega" class="hashtag"><span>#</span><span class="p-category">SuzanneVega</span></a> <a href="https://jolek78.writeas.com/tag:TomsDiner" class="hashtag"><span>#</span><span class="p-category">TomsDiner</span></a> <a href="https://jolek78.writeas.com/tag:Fraunhofer" class="hashtag"><span>#</span><span class="p-category">Fraunhofer</span></a> <a href="https://jolek78.writeas.com/tag:MusicHistory" class="hashtag"><span>#</span><span class="p-category">MusicHistory</span></a> <a href="https://jolek78.writeas.com/tag:AudioCompression" class="hashtag"><span>#</span><span class="p-category">AudioCompression</span></a> <a href="https://jolek78.writeas.com/tag:OpenSource" class="hashtag"><span>#</span><span class="p-category">OpenSource</span></a> <a href="https://jolek78.writeas.com/tag:FLAC" class="hashtag"><span>#</span><span class="p-category">FLAC</span></a> <a href="https://jolek78.writeas.com/tag:TechHistory" class="hashtag"><span>#</span><span class="p-category">TechHistory</span></a></p>

<p><a href="https://remark.as/p/jolek78/a-song-an-algorithm-and-the-end-of-the-analog-world">Discuss...</a></p>

<div class="center"><a href="https://fosstodon.org/@jolek78">Mastodon</a> :: <a href="https://pixelfed.social/jolek78">Pixelfed</a> :: <a href="mailto:jolek78@posteo.net">Email</a>  :: <a href="https://app.cinny.in/login/envs.net/#/user/@jolek78:envs.net"> Element</a></div>
]]></content:encoded>
      <guid>https://jolek78.writeas.com/a-song-an-algorithm-and-the-end-of-the-analog-world</guid>
      <pubDate>Sun, 30 Nov 2025 23:21:53 +0000</pubDate>
    </item>
    <item>
      <title>Planet of Lana: another gem</title>
      <link>https://jolek78.writeas.com/planet-of-lana-a-solarpunk-gem-worth-discovering?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[From time to time, to completely disconnect from everything and everyone, I turn back into a kid and immerse myself in video games. I&#39;m slow, I admit it: a game that would normally take 4-5 hours, I finish in at least quadruple the time. But every now and then, among the depths of Steam, I encounter genuine gems. And last night I finally completed Planet of Lana, a 2023 indie game that had been sitting in my library for months. The plot is straightforward but effective: Lana and Elo, presumably brother and sister, live in a peaceful fishing village built on stilts, where life flows serenely in harmony with nature. But this peace is shattered when a group of robots assault the village, kidnapping some inhabitants including Elo himself. From here begins Lana&#39;s odyssey: a journey to the edges of the known world to find and save her brother.&#xA;&#xA;!--more--&#xA;&#xA;Solarpunk Hieroglyphs&#xA;The game presents itself with a now well-established formula in the indie landscape: progression based on environmental puzzles that mark the passage from one section to another. But what truly struck me were the hieroglyphs scattered throughout the journey. These ancient inscriptions tell of an era when coexistence between humans and machines was peaceful and harmonious. It&#39;s pure solarpunk: natural elements perfectly integrated with technology, a vision of sustainable future that we rarely see in video games.&#xA;&#xA;I took all the time necessary to study these glyphs, to understand their deeper meaning. And it was worth it: they represent the thematic heart of the game, that common thread connecting past and present.&#xA;&#xA;planetoflana&#xA;&#xA;Meet Mui&#xA;During the journey, Lana meets Mui, an extraordinary little creature that looks like a cross between a cat and... something alien. Mui immediately becomes indispensable: jumping, untying ropes, distracting enemy machines. 
And, like all self-respecting cats, he&#39;s a sweetheart who&#39;s terrified of water and needs to be transported from shore to shore on rafts.&#xA;&#xA;The path is varied though, at times, the puzzles follow a repetitive logic. But the real protagonist is exploration, taking the time to observe every detail of this magnificent world.&#xA;&#xA;The Music&#xA;I must spend a few words on the music: it&#39;s simply spectacular. A masterpiece that requires tissues at hand. There&#39;s a recurring theme composed of no more than six notes that enters your soul and never leaves. Those six notes become the emotional thread of the entire experience.&#xA;&#xA;The City of Machines&#xA;So I reach the end of the game. Lana arrives at what I&#39;ve dubbed &#34;the city of machines.&#34; But everything is unexpected: no cyberpunk dystopia, no apocalyptic scenarios. Everything is peaceful. Enormous robotic spiders entertain infants in an almost surreal atmosphere.&#xA;&#xA;Then Lana slips and falls into a hidden place: humans are trapped in small transparent domes. She finds Elo. But when she tries to free him, the system detects her presence and triggers the alarm. Desperate escape.&#xA;&#xA;The Ending (Spoilers)&#xA;The final sequence is a small masterpiece of game design and storytelling. You find yourself before an enormous pulsating energy sphere. It&#39;s evident that it cannot be destroyed. And here the unthinkable happens: Mui, until that moment a simple supporting character, becomes the absolute protagonist. He flies toward the sphere, absorbs all its energy and falls to the ground, apparently lifeless.&#xA;&#xA;Silence. Despair. You think Mui has sacrificed himself to save Elo.&#xA;&#xA;But then... those six notes. The game&#39;s main theme gently returns. Mui begins to pulse with iridescent colors and awakens. 
And when you return to explore the world, you understand: by absorbing the sphere&#39;s energy, Mui has taken control of the machines, which now live in peace with the fishermen.&#xA;&#xA;Final Thoughts&#xA;Planet of Lana is one of those games that stays with you. Not because of puzzle difficulty or gameplay innovation, but for its ability to tell a story of hope, sacrifice, and harmony between nature and technology. It&#39;s proof that solarpunk can work beautifully in video games too, offering an alternative to the usual cyberpunk dystopias.&#xA;&#xA;If you&#39;re looking for a relaxing yet emotionally intense experience, with sublime art direction and a soundtrack to jealously preserve in your playlist, Planet of Lana absolutely deserves your time.&#xA;&#xA;Even if, in my case, it was much more than four hours.&#xA;&#xA;#PlanetOfLana #Gaming #IndieGames #Solarpunk #GameReview #PuzzleGames&#xA;&#xA;a href=&#34;https://remark.as/p/jolek78/planet-of-lana-a-solarpunk-gem-worth-discovering&#34;Discuss.../a&#xA;&#xA;div class=&#34;center&#34;a href=&#34;https://fosstodon.org/@jolek78&#34;Mastodon/a :: a href=&#34;https://pixelfed.social/jolek78&#34;Pixelfed/a :: a href=&#34;mailto:jolek78@posteo.net&#34;Email/a  :: a href=&#34;https://app.cinny.in/login/envs.net/#/user/@jolek78:envs.net&#34; Element/a/div]]&gt;</description>
      <content:encoded><![CDATA[<p>From time to time, to completely disconnect from everything and everyone, I turn back into a kid and immerse myself in video games. I&#39;m slow, I admit it: a game that would normally take 4-5 hours, I finish in at least quadruple the time. But every now and then, among the depths of Steam, I encounter genuine gems. And last night I finally completed <strong><a href="https://store.steampowered.com/app/1608230/Planet_of_Lana/">Planet of Lana</a></strong>, a 2023 indie game that had been sitting in my library for months. The plot is straightforward but effective: Lana and Elo, presumably brother and sister, live in a peaceful fishing village built on stilts, where life flows serenely in harmony with nature. But this peace is shattered when a group of robots assault the village, kidnapping some inhabitants including Elo himself. From here begins Lana&#39;s odyssey: a journey to the edges of the known world to find and save her brother.</p>



<h3 id="solarpunk-hieroglyphs">Solarpunk Hieroglyphs</h3>

<p>The game presents itself with a now well-established formula in the indie landscape: progression based on environmental puzzles that mark the passage from one section to another. But what truly struck me were the <strong>hieroglyphs</strong> scattered throughout the journey. These ancient inscriptions tell of an era when coexistence between humans and machines was peaceful and harmonious. It&#39;s pure <strong>solarpunk</strong>: natural elements perfectly integrated with technology, a vision of sustainable future that we rarely see in video games.</p>

<p>I took all the time necessary to study these glyphs, to understand their deeper meaning. And it was worth it: they represent the thematic heart of the game, that common thread connecting past and present.</p>

<p><img src="https://i.snap.as/XaN1bulL.png" alt="planetoflana"/></p>

<h3 id="meet-mui">Meet Mui</h3>

<p>During the journey, Lana meets <strong>Mui</strong>, an extraordinary little creature that looks like a cross between a cat and... something alien. Mui immediately becomes indispensable: jumping, untying ropes, distracting enemy machines. And, like all self-respecting cats, he&#39;s a sweetheart who&#39;s terrified of water and needs to be transported from shore to shore on rafts.</p>

<p>The path is varied, though at times the puzzles follow a repetitive logic. But the real protagonist is <strong>exploration</strong>, taking the time to observe every detail of this magnificent world.</p>

<h3 id="the-music">The Music</h3>

<p>I must spend a few words on the music: it&#39;s simply <strong>spectacular</strong>. A masterpiece that requires tissues at hand. There&#39;s a recurring theme composed of no more than six notes that enters your soul and never leaves. Those six notes become the emotional thread of the entire experience.</p>

<h3 id="the-city-of-machines">The City of Machines</h3>

<p>So I reach the end of the game. Lana arrives at what I&#39;ve dubbed “the city of machines.” But everything is unexpected: no cyberpunk dystopia, no apocalyptic scenarios. Everything is peaceful. Enormous robotic spiders entertain infants in an almost surreal atmosphere.</p>

<p>Then Lana slips and falls into a hidden place: humans are trapped in small transparent domes. She finds Elo. But when she tries to free him, the system detects her presence and triggers the alarm. Desperate escape.</p>

<h3 id="the-ending-spoilers">The Ending (Spoilers)</h3>

<p>The final sequence is a <strong>small masterpiece</strong> of game design and storytelling. You find yourself before an enormous pulsating energy sphere. It&#39;s evident that it cannot be destroyed. And here the unthinkable happens: <strong>Mui</strong>, until that moment a simple supporting character, becomes the absolute protagonist. He flies toward the sphere, absorbs all its energy and falls to the ground, apparently lifeless.</p>

<p>Silence. Despair. You think Mui has sacrificed himself to save Elo.</p>

<p>But then... those six notes. The game&#39;s main theme gently returns. Mui begins to pulse with iridescent colors and awakens. And when you return to explore the world, you understand: by absorbing the sphere&#39;s energy, Mui has taken control of the machines, which now live in peace with the fishermen.</p>

<h3 id="final-thoughts">Final Thoughts</h3>

<p><strong>Planet of Lana</strong> is one of those games that stays with you. Not because of puzzle difficulty or gameplay innovation, but for its ability to tell a story of hope, sacrifice, and harmony between nature and technology. It&#39;s proof that solarpunk can work beautifully in video games too, offering an alternative to the usual cyberpunk dystopias.</p>

<p>If you&#39;re looking for a relaxing yet emotionally intense experience, with sublime art direction and a soundtrack to treasure in your playlist, Planet of Lana absolutely deserves your time.</p>

<p>Even if, in my case, it was much more than four hours.</p>

<p><a href="https://jolek78.writeas.com/tag:PlanetOfLana" class="hashtag"><span>#</span><span class="p-category">PlanetOfLana</span></a> <a href="https://jolek78.writeas.com/tag:Gaming" class="hashtag"><span>#</span><span class="p-category">Gaming</span></a> <a href="https://jolek78.writeas.com/tag:IndieGames" class="hashtag"><span>#</span><span class="p-category">IndieGames</span></a> <a href="https://jolek78.writeas.com/tag:Solarpunk" class="hashtag"><span>#</span><span class="p-category">Solarpunk</span></a> <a href="https://jolek78.writeas.com/tag:GameReview" class="hashtag"><span>#</span><span class="p-category">GameReview</span></a> <a href="https://jolek78.writeas.com/tag:PuzzleGames" class="hashtag"><span>#</span><span class="p-category">PuzzleGames</span></a></p>

<p><a href="https://remark.as/p/jolek78/planet-of-lana-a-solarpunk-gem-worth-discovering">Discuss...</a></p>

<div class="center"><a href="https://fosstodon.org/@jolek78">Mastodon</a> :: <a href="https://pixelfed.social/jolek78">Pixelfed</a> :: <a href="mailto:jolek78@posteo.net">Email</a>  :: <a href="https://app.cinny.in/login/envs.net/#/user/@jolek78:envs.net"> Element</a></div>
]]></content:encoded>
      <guid>https://jolek78.writeas.com/planet-of-lana-a-solarpunk-gem-worth-discovering</guid>
      <pubDate>Fri, 14 Nov 2025 23:25:17 +0000</pubDate>
    </item>
    <item>
      <title>ChatGPT didn&#39;t invent anything.</title>
      <link>https://jolek78.writeas.com/chatgpt-didnt-invent-anything?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[When the world woke up astonished in November 2022 to this &#34;magical&#34; chatbot, few realized that this magic was the result of decades of research. The history of artificial intelligence begins in 1943, when Warren McCulloch and Walter Pitts proposed the first mathematical model of an artificial neuron. In 1956, at the Dartmouth Conference, John McCarthy coined the term &#34;Artificial Intelligence&#34; and the discipline was officially born.&#xA;&#xA;The &#39;60s and &#39;70s were characterized by excessive optimism: people thought strong AI was just around the corner. Two &#34;AI winters&#34; followed – periods when funding disappeared and research slowed – because promises weren&#39;t materializing. But some continued working in the shadows. Geoffrey Hinton, Yann LeCun, Yoshua Bengio – those we now call the &#34;godfathers of deep learning&#34; – continued their studies on neural networks when no one believed in them anymore.&#xA;&#xA;!--more--&#xA;&#xA;The real breakthrough came with three converging factors: computational power (GPUs), enormous amounts of data, and better algorithms. In 2012, AlexNet won the ImageNet Challenge by an overwhelming margin, demonstrating that deep learning really worked. From there, an unstoppable acceleration.&#xA;&#xA;Once upon a time in the Carboniferous...&#xA;Before ChatGPT exploded, my only knowledge of AI came from science fiction books. Philip K. Dick and his reflections on what it means to be human. Cyberpunk in general, with its technological dystopias. Gibson&#39;s Sprawl trilogy, where AIs live in cyberspace like digital deities. Those pages were my only window to a future that seemed incredibly distant.&#xA;&#xA;When I hosted the podcast Caccia al Fotone (a nice thing, but now belonging to the Carboniferous period...), I delved deeper into the subject. I read several papers published on arXiv and dedicated two episodes to AI development. 
In 2019, during the pandemic period, I devoured &#34;Artificial Intelligence: A Guide for Thinking Humans&#34; by Melanie Mitchell – a book that also helped me write a &#34;thing&#34; (those who know, know; those who don&#39;t, never mind...) on the evolution of computer systems and surveillance capitalism.&#xA;&#xA;I thought I had a clear picture. I thought I was prepared.&#xA;&#xA;Mea culpa&#xA;Then ChatGPT arrived.&#xA;&#xA;November 2022. First approach: total amazement. I couldn&#39;t believe my eyes. I kept asking questions, and despite all the initial hallucinations I encountered, I continued to have that &#34;wow effect&#34; typical of a child finding the most beautiful shell on the seashore (forgive me Newton for stealing that phrase, but it&#39;s always too beautiful).&#xA;&#xA;And here&#39;s my mea culpa: I set aside all my protective filters that I generally have regarding privacy, open source, control over my data. I let myself go for hours of conversations on the most diverse topics. Until one night – one of many sleepless nights – I found myself discussing with that LLM about depression, various mental disorders, and how one or more abuses can influence a person&#39;s life.&#xA;&#xA;When I realized what was happening, I stopped abruptly. I deleted the conversation, canceled my OpenAI subscription and didn&#39;t touch any LLM for more than a month. I was entrusting my most intimate thoughts to a proprietary system controlled by a corporation. I was betraying every principle I believed in.&#xA;&#xA;But I work in IT. This is a huge revolution. I couldn&#39;t afford to fall behind, nor could I simply reject it on principle. I had to find an alternative. I began to study seriously.&#xA;&#xA;Local, always local&#xA;I encountered the first models I could test locally. I discovered Hugging Face, and it was like finding an oasis in the desert. I began studying transformers, the datasets developed by the community. 
And I was astounded.&#xA;&#xA;Transformers are the architecture that revolutionized AI. Presented in the 2017 paper &#34;Attention Is All You Need&#34;, they replaced old recurrent neural networks (RNNs) with a more elegant and efficient mechanism: the attention mechanism.&#xA;&#xA;In simple words: instead of processing text word by word in sequence, a transformer looks at all words simultaneously and calculates which ones are most relevant to the context. When you read &#34;The bank of the river was green,&#34; the attention mechanism understands that &#34;bank&#34; refers to the river and not the financial institution, because it evaluates the weight of each word relative to the others.&#xA;&#xA;This architecture made models like BERT, GPT, and all modern LLMs possible. It&#39;s scalable, parallelizable, and extremely powerful.&#xA;&#xA;Hugging Face and the Open Source revolution&#xA;Hugging Face is much more than a platform: it has become the Library of Alexandria of the artificial intelligence era. Founded in 2016, it now hosts over 500,000 pre-trained models, 250,000 datasets, and thousands of demo applications.&#xA;&#xA;Their transformers library has democratized access to AI. With a few lines of Python you can download and use models that would cost millions of dollars to train from scratch. Hugging Face isn&#39;t the only platform doing this – there are also Ollama, LM Studio, GPT4All – but it&#39;s certainly the most extensive and collaborative.&#xA;&#xA;Here, praise must be given to the developers: this community of people scattered around the world is doing extraordinary work. They release open source models, share knowledge, meticulously document everything. They&#39;re building a real alternative to Big Tech&#39;s monopoly on AI.&#xA;&#xA;History repeating&#xA;Watching this explosion of open models, global collaboration, shared code, I had a powerful déjà-vu. 
This is incredibly similar to the open source revolution that happened 30 years ago.&#xA;&#xA;In the &#39;90s, Linux and the free software movement challenged Microsoft&#39;s dominance and proprietary systems. Many said it was impossible, that free software would never work. Today Linux powers 96% of the world&#39;s servers, all Android smartphones, and much of the Internet infrastructure.&#xA;&#xA;Now the same thing is happening with AI. Llama, Mistral, Falcon, Mixtral – &#34;open weight/open source&#34; models that compete with (and often surpass) their proprietary counterparts. History repeats itself, and this time I know which side to be on.&#xA;&#xA;Another server in my homeLab&#xA;I resumed studying Python, a study I had left on standby years ago. I began experimenting with training local LLM models. I added old scripts to provide my writing style (yes, it seems incredible but every coder has their own style, and it says a lot about their personality). I used Llama 3 to improve my Bash coding.&#xA;&#xA;And when I was ready, I decided to make an important purchase: I bought a small server – to add to my homelab: Proxmox, pfSense, Nextcloud, WireGuard etc... – that I would transform into an OpenWebUI system.&#xA;&#xA;OpenWebUI is a self-hosted web interface for local language models. Like ChatGPT, but running entirely on local hardware, without sending a single byte to someone else&#39;s servers.&#xA;&#xA;For the nerds reading: the simplest way to install is obviously through Docker. Here&#39;s a basic example:&#xA;&#xA;docker run -d -p 3000:8080 \&#xA;  -v open-webui:/app/backend/data \&#xA;  --name open-webui \&#xA;  --restart always \&#xA;  ghcr.io/open-webui/open-webui:main&#xA;&#xA;Once installed, just connect OpenWebUI to Ollama (the runtime for local models), download your preferred models, and you&#39;re operational.&#xA;&#xA;GPU usage is fundamental: a medium-sized LLM requires a lot of RAM and computing power. 
A dedicated GPU (like an NVIDIA GTX of various types) makes an enormous difference. For those using AMD, there&#39;s ROCm. With 16GB of RAM and an 8GB GPU, you can comfortably run 7B parameter models quantized to 4-bit.&#xA;&#xA;My favorite combo? AMD, Debian, Docker, OpenWebUI, Ollama and Mistral.&#xA;&#xA;A revolution. and a choice to make&#xA;We&#39;re facing a revolution that we cannot avoid. AI is here, it&#39;s powerful, and it&#39;s evolving rapidly. There are two roads ahead of us.&#xA;&#xA;The first: avoid it now, close our eyes, hope it passes or that someone else deals with it. And then, in twenty years, find ourselves chasing an evolved AI, probably impossible to understand, completely in the hands of those who controlled it from the beginning. This is the path of least resistance, but also of maximum risk. It means ceding control, understanding, and ultimately power to whoever gets there first.&#xA;&#xA;The second: study it, analyze it, use it and understand it today to be able to handle it better tomorrow. Actively participate in its evolution. Contribute to the open source community, ensure that this technology remains accessible, understandable, in the hands of many instead of a few. This path requires effort, time, sometimes admitting we were wrong (as I did). But it&#39;s the only path that leads to actual agency over our technological future.&#xA;&#xA;The choice seems obvious when stated this way, but it&#39;s not easy in practice. It requires overcoming fear, investing time, challenging our assumptions. It means getting our hands dirty with code, running models locally, understanding how these systems actually work instead of treating them as black boxes.&#xA;&#xA;I made my choice that night when I deleted my ChatGPT conversation history. I chose not to be a passive consumer of AI technology controlled by corporations. 
I chose to understand, to build, to contribute to the alternative that&#39;s being constructed by thousands of developers around the world.&#xA;&#xA;The technology is already here. The question is: will it be controlled by a few companies optimizing for profit and control, or will it be a tool accessible to everyone, understandable, modifiable, improvable by the community?&#xA;&#xA;As I&#39;ve learned on this journey, choosing to understand – even when it&#39;s difficult, even when it means admitting you were wrong – is always better than passively submitting.&#xA;&#xA;AI is not magic. It&#39;s mathematics, code, hardware, and above all: it&#39;s made by people. And if it&#39;s made by people, it can be understood, modified and shaped by people. For the better, not for the worse.&#xA;&#xA;The revolution is happening. The only question is: are you participating, or are you watching?&#xA;&#xA;#AI #OpenSource #LocalLLM #Privacy #ChatGPT #HuggingFace #Ollama #SelfHosted #MachineLearning #DigitalSovereignty&#xA;&#xA;a href=&#34;https://remark.as/p/jolek78/chatgpt-didnt-invent-anything&#34;Discuss.../a&#xA;&#xA;div class=&#34;center&#34;a href=&#34;https://fosstodon.org/@jolek78&#34;Mastodon/a :: a href=&#34;https://pixelfed.social/jolek78&#34;Pixelfed/a :: a href=&#34;mailto:jolek78@posteo.net&#34;Email/a  :: a href=&#34;https://app.cinny.in/login/envs.net/#/user/@jolek78:envs.net&#34; Element/a/div]]&gt;</description>
      <content:encoded><![CDATA[<p>When the world woke up astonished in November 2022 to this “magical” chatbot, few realized that this magic was the result of decades of research. The history of artificial intelligence begins in 1943, when Warren McCulloch and Walter Pitts proposed the first mathematical model of an artificial neuron. In 1956, at the Dartmouth Conference, John McCarthy coined the term “Artificial Intelligence” and the discipline was officially born.</p>

<p>The &#39;60s and &#39;70s were characterized by excessive optimism: people thought strong AI was just around the corner. Two “AI winters” followed – periods when funding disappeared and research slowed – because promises weren&#39;t materializing. But some continued working in the shadows. Geoffrey Hinton, Yann LeCun, Yoshua Bengio – those we now call the “godfathers of deep learning” – continued their studies on neural networks when no one believed in them anymore.</p>



<p>The real breakthrough came with three converging factors: computational power (GPUs), enormous amounts of data, and better algorithms. In 2012, AlexNet won the ImageNet Challenge by an overwhelming margin, demonstrating that deep learning really worked. From there, an unstoppable acceleration.</p>

<h3 id="once-upon-a-time-in-the-carboniferous">Once upon a time in the Carboniferous...</h3>

<p>Before ChatGPT exploded, my only knowledge of AI came from science fiction books. Philip K. Dick and his reflections on what it means to be human. Cyberpunk in general, with its technological dystopias. Gibson&#39;s Sprawl trilogy, where AIs live in cyberspace like digital deities. Those pages were my only window to a future that seemed incredibly distant.</p>

<p>When I hosted the podcast Caccia al Fotone (a nice thing, but now belonging to the Carboniferous period...), I delved deeper into the subject. I read several papers published on arXiv and dedicated two episodes to AI development. During the pandemic, I devoured Melanie Mitchell&#39;s 2019 book “Artificial Intelligence: A Guide for Thinking Humans” – a book that also helped me write a “thing” (those who know, know; those who don&#39;t, never mind...) on the evolution of computer systems and surveillance capitalism.</p>

<p>I thought I had a clear picture. I thought I was prepared.</p>

<h3 id="mea-culpa">Mea culpa</h3>

<p>Then ChatGPT arrived.</p>

<p>November 2022. First approach: total amazement. I couldn&#39;t believe my eyes. I kept asking questions, and despite all the initial hallucinations I encountered, I kept having that “wow effect” typical of a child finding the most beautiful shell on the seashore (forgive me, Newton, for stealing that phrase, but it&#39;s just too beautiful).</p>

<p>And here&#39;s my mea culpa: I set aside all the protective filters I normally keep around privacy, open source, and control over my data. I let myself go for hours of conversations on the most diverse topics. Until one night – one of many sleepless nights – I found myself talking to that LLM about depression, various mental disorders, and how one or more abuses can influence a person&#39;s life.</p>

<p>When I realized what was happening, I stopped abruptly. I deleted the conversation, canceled my OpenAI subscription and didn&#39;t touch any LLM for more than a month. I was entrusting my most intimate thoughts to a proprietary system controlled by a corporation. I was betraying every principle I believed in.</p>

<p>But I work in IT. This is a huge revolution. I couldn&#39;t afford to fall behind, nor could I simply reject it on principle. I had to find an alternative. I began to study seriously.</p>

<h3 id="local-always-local">Local, always local</h3>

<p>I encountered the first models I could test locally. I discovered <a href="https://huggingface.co">Hugging Face</a>, and it was like finding an oasis in the desert. I began studying transformers, the datasets developed by the community. And I was astounded.</p>

<p><strong>Transformers</strong> are the architecture that revolutionized AI. Presented in the 2017 paper <a href="https://arxiv.org/abs/1706.03762">“Attention Is All You Need”</a>, they replaced old recurrent neural networks (RNNs) with a more elegant and efficient mechanism: the attention mechanism.</p>

<p>In simple terms: instead of processing text word by word in sequence, a transformer looks at all the words simultaneously and calculates which ones are most relevant to the context. When you read “The bank of the river was green,” the attention mechanism understands that “bank” refers to the river and not the financial institution, because it evaluates the weight of each word relative to the others.</p>

<p>This architecture made models like BERT, GPT, and all modern LLMs possible. It&#39;s scalable, parallelizable, and extremely powerful.</p>
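<p>To make the attention idea concrete, here is a deliberately tiny self-attention sketch in plain NumPy. It&#39;s a toy under stated assumptions: random embeddings, a single head, and no learned query/key/value projection matrices, which a real transformer would add:</p>

```python
import numpy as np

def self_attention(X):
    """Minimal single-head self-attention with Q = K = V = X."""
    d_k = X.shape[-1]
    scores = X @ X.T / np.sqrt(d_k)                  # relevance of every word to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: each row sums to 1
    return weights @ X, weights                      # context-aware vectors + attention map

# Toy 4-dimensional embeddings for three "words"
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))
out, weights = self_attention(X)
print(weights)   # row i shows how much word i attends to each of the three words
```

<p>Each output vector is a weighted mix of all the input vectors – exactly the “looks at all words simultaneously” behavior described above, just without the trained weights that make it useful in practice.</p>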

<h3 id="hugging-face-and-the-open-source-revolution">Hugging Face and the Open Source revolution</h3>

<p><a href="https://huggingface.co">Hugging Face</a> is much more than a platform: it has become the Library of Alexandria of the artificial intelligence era. Founded in 2016, it now hosts over 500,000 pre-trained models, 250,000 datasets, and thousands of demo applications.</p>

<p>Their <a href="https://github.com/huggingface/transformers">transformers library</a> has democratized access to AI. With a few lines of Python you can download and use models that would cost millions of dollars to train from scratch. Hugging Face isn&#39;t the only platform doing this – there are also <a href="https://ollama.com">Ollama</a>, <a href="https://lmstudio.ai">LM Studio</a>, <a href="https://gpt4all.io">GPT4All</a> – but it&#39;s certainly the most extensive and collaborative.</p>

<p>Here, praise must be given to the developers: this community of people scattered around the world is doing extraordinary work. They release open source models, share knowledge, meticulously document everything. They&#39;re building a real alternative to Big Tech&#39;s monopoly on AI.</p>

<h3 id="history-repeating">History repeating</h3>

<p>Watching this explosion of open models, global collaboration, and shared code, I had a powerful sense of déjà vu. This is incredibly similar to the open source revolution that happened 30 years ago.</p>

<p>In the &#39;90s, Linux and the free software movement challenged Microsoft&#39;s dominance and proprietary systems. Many said it was impossible, that free software would never work. Today Linux powers roughly 96% of the top one million web servers, every Android smartphone, and much of the Internet&#39;s infrastructure.</p>

<p>Now the same thing is happening with AI. Llama, Mistral, Falcon, Mixtral – “open weight/open source” models that compete with (and often surpass) their proprietary counterparts. History repeats itself, and this time I know which side to be on.</p>

<h3 id="another-server-in-my-homelab">Another server in my homelab</h3>

<p>I resumed studying Python, a study I had left on standby years ago. I began experimenting with running and fine-tuning local LLMs. I fed them old scripts of mine to capture my writing style (yes, it seems incredible, but every coder has their own style, and it says a lot about their personality). I used Llama 3 to improve my Bash coding.</p>

<p>And when I was ready, I decided to make an important purchase: I bought a small server to add to my homelab (Proxmox, pfSense, Nextcloud, WireGuard, etc.) and transform into an <a href="https://openwebui.com">OpenWebUI</a> system.</p>

<p>OpenWebUI is a self-hosted web interface for local language models. Like ChatGPT, but running entirely on local hardware, without sending a single byte to someone else&#39;s servers.</p>

<p>For the nerds reading: the simplest way to install it is through Docker. Here&#39;s a basic example:</p>

<pre><code>docker run -d -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
</code></pre>

<p>Once installed, just connect OpenWebUI to <a href="https://ollama.com">Ollama</a> (the runtime for local models), download your preferred models, and you&#39;re operational.</p>
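<p>A common wrinkle: if Ollama runs on the host rather than inside a container, OpenWebUI must be told where to find it. One way to do that (the host mapping and the default Ollama port 11434 are assumptions – adjust them to your setup) is:</p>

```shell
# Variant of the run command above, pointing OpenWebUI at an Ollama
# instance listening on the host machine (default port 11434).
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main

# Then pull a model for it to serve:
ollama pull mistral
```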

<p>GPU usage is fundamental: a medium-sized LLM requires a lot of RAM and computing power. A dedicated GPU (such as an NVIDIA GTX or RTX card) makes an enormous difference; for those on AMD, there&#39;s ROCm. With 16GB of RAM and a GPU with 8GB of VRAM, you can comfortably run 7B-parameter models quantized to 4-bit.</p>
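<p>The 4-bit figure is easy to sanity-check with back-of-the-envelope math: weight memory is roughly parameter count times bytes per weight (this ignores activations and the KV cache, which add real overhead on top):</p>

```python
params = 7e9                       # a 7B-parameter model
gib = 2**30

fp16_gib = params * 2 / gib        # 16-bit floats: 2 bytes per weight
q4_gib   = params * 0.5 / gib      # 4-bit quantization: half a byte per weight

print(f"fp16 : {fp16_gib:.1f} GiB")   # ~13.0 GiB - won't fit on an 8GB GPU
print(f"4-bit: {q4_gib:.1f} GiB")     # ~3.3 GiB - fits comfortably
```

<p>That factor of four is why quantization, not just raw hardware, is what makes local LLMs practical on consumer GPUs.</p>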

<p>My favorite combo? AMD, Debian, Docker, OpenWebUI, Ollama and Mistral.</p>

<h3 id="a-revolution-and-a-choice-to-make">A revolution. And a choice to make</h3>

<p>We&#39;re facing a revolution that we cannot avoid. AI is here, it&#39;s powerful, and it&#39;s evolving rapidly. There are two roads ahead of us.</p>

<p><strong>The first:</strong> avoid it now, close our eyes, hope it passes or that someone else deals with it. And then, in twenty years, find ourselves chasing an evolved AI, probably impossible to understand, completely in the hands of those who controlled it from the beginning. This is the path of least resistance, but also of maximum risk. It means ceding control, understanding, and ultimately power to whoever gets there first.</p>

<p><strong>The second:</strong> study it, analyze it, use it and understand it today to be able to handle it better tomorrow. Actively participate in its evolution. Contribute to the open source community, ensure that this technology remains accessible, understandable, in the hands of many instead of a few. This path requires effort, time, sometimes admitting we were wrong (as I did). But it&#39;s the only path that leads to actual agency over our technological future.</p>

<p>The choice seems obvious when stated this way, but it&#39;s not easy in practice. It requires overcoming fear, investing time, challenging our assumptions. It means getting our hands dirty with code, running models locally, understanding how these systems actually work instead of treating them as black boxes.</p>

<p>I made my choice that night when I deleted my ChatGPT conversation history. I chose not to be a passive consumer of AI technology controlled by corporations. I chose to understand, to build, to contribute to the alternative that&#39;s being constructed by thousands of developers around the world.</p>

<p>The technology is already here. The question is: will it be controlled by a few companies optimizing for profit and control, or will it be a tool accessible to everyone, understandable, modifiable, improvable by the community?</p>

<p>As I&#39;ve learned on this journey, choosing to understand – even when it&#39;s difficult, even when it means admitting you were wrong – is always better than passively submitting.</p>

<p>AI is not magic. It&#39;s mathematics, code, hardware, and above all: it&#39;s made by people. And if it&#39;s made by people, it can be understood, modified and shaped by people. For the better, not for the worse.</p>

<p>The revolution is happening. The only question is: are you participating, or are you watching?</p>

<p><a href="https://jolek78.writeas.com/tag:AI" class="hashtag"><span>#</span><span class="p-category">AI</span></a> <a href="https://jolek78.writeas.com/tag:OpenSource" class="hashtag"><span>#</span><span class="p-category">OpenSource</span></a> <a href="https://jolek78.writeas.com/tag:LocalLLM" class="hashtag"><span>#</span><span class="p-category">LocalLLM</span></a> <a href="https://jolek78.writeas.com/tag:Privacy" class="hashtag"><span>#</span><span class="p-category">Privacy</span></a> <a href="https://jolek78.writeas.com/tag:ChatGPT" class="hashtag"><span>#</span><span class="p-category">ChatGPT</span></a> <a href="https://jolek78.writeas.com/tag:HuggingFace" class="hashtag"><span>#</span><span class="p-category">HuggingFace</span></a> <a href="https://jolek78.writeas.com/tag:Ollama" class="hashtag"><span>#</span><span class="p-category">Ollama</span></a> <a href="https://jolek78.writeas.com/tag:SelfHosted" class="hashtag"><span>#</span><span class="p-category">SelfHosted</span></a> <a href="https://jolek78.writeas.com/tag:MachineLearning" class="hashtag"><span>#</span><span class="p-category">MachineLearning</span></a> <a href="https://jolek78.writeas.com/tag:DigitalSovereignty" class="hashtag"><span>#</span><span class="p-category">DigitalSovereignty</span></a></p>

<p><a href="https://remark.as/p/jolek78/chatgpt-didnt-invent-anything">Discuss...</a></p>

<div class="center"><a href="https://fosstodon.org/@jolek78">Mastodon</a> :: <a href="https://pixelfed.social/jolek78">Pixelfed</a> :: <a href="mailto:jolek78@posteo.net">Email</a>  :: <a href="https://app.cinny.in/login/envs.net/#/user/@jolek78:envs.net"> Element</a></div>
]]></content:encoded>
      <guid>https://jolek78.writeas.com/chatgpt-didnt-invent-anything</guid>
      <pubDate>Tue, 28 Oct 2025 12:56:35 +0000</pubDate>
    </item>
    <item>
      <title>Aletheia is born</title>
      <link>https://jolek78.writeas.com/aletheia-is-born?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[In the silence of dozens of sleepless nights, in the solitude of my keyboard and laptop, I have imagined worlds. Fantastic scenarios inspired by hundreds of books, perhaps read too hastily, that have embedded themselves in my mind like small precious memories. The blue glow of my screen became a portal to these universes as my fingers translated thoughts into digital existence, each keystroke bringing new realities to life. As a longtime passionate reader of Cyberpunk, and only recently of Solarpunk, I have patiently imagined a story. Cyberpunk&#39;s dystopias in a Post-Apocalyptic world and Solarpunk&#39;s hopeful ecological futures have merged in my creative space, forming a unique vision that explores both technological power and environmental harmony. I build it unhurriedly, without a deadline, shaping the characters one at a time. I grow attached to them, explore them, abandon them, return to them, weep, and begin again. Each character carries fragments of real lives, observed emotions, and contemplated philosophies – becoming more real to me with every written line.&#xA;&#xA;!--more--&#xA;&#xA;If you want to immerse yourself in the story and follow the writing process – which is now well underway – this is the link. It&#39;s a journey: you&#39;re welcome. &#xA;&#xA;For English readers: equip yourselves with a translator, you will need it.&#xA;&#xA;Il Codice di Aletheia&#xA;&#xA;a href=&#34;https://remark.as/p/jolek78/aletheia-is-born&#34;Discuss.../a&#xA;&#xA;div class=&#34;center&#34;a href=&#34;https://fosstodon.org/@jolek78&#34;Mastodon/a :: a href=&#34;https://pixelfed.social/jolek78&#34;Pixelfed/a :: a href=&#34;mailto:jolek78@posteo.net&#34;Email/a  :: a href=&#34;https://app.cinny.in/login/envs.net/#/user/@jolek78:envs.net&#34; Element/a/div]]&gt;</description>
      <content:encoded><![CDATA[<p>In the silence of dozens of sleepless nights, in the solitude of my keyboard and laptop, I have imagined worlds. Fantastic scenarios inspired by hundreds of books, perhaps read too hastily, that have embedded themselves in my mind like small precious memories. The blue glow of my screen became a portal to these universes as my fingers translated thoughts into digital existence, each keystroke bringing new realities to life. As a longtime passionate reader of Cyberpunk, and only recently of Solarpunk, I have patiently imagined a story. Cyberpunk&#39;s dystopias in a Post-Apocalyptic world and Solarpunk&#39;s hopeful ecological futures have merged in my creative space, forming a unique vision that explores both technological power and environmental harmony. I build it unhurriedly, without a deadline, shaping the characters one at a time. I grow attached to them, explore them, abandon them, return to them, weep, and begin again. Each character carries fragments of real lives, observed emotions, and contemplated philosophies – becoming more real to me with every written line.</p>



<p>If you want to immerse yourself in the story and follow the writing process – which is now well underway – this is the link. It&#39;s a journey: you&#39;re welcome.</p>

<p>For English readers: equip yourselves with a translator, you will need it.</p>

<p><a href="https://gitea.com/jolek78/Aletheia">Il Codice di Aletheia</a></p>

<p><a href="https://remark.as/p/jolek78/aletheia-is-born">Discuss...</a></p>

<div class="center"><a href="https://fosstodon.org/@jolek78">Mastodon</a> :: <a href="https://pixelfed.social/jolek78">Pixelfed</a> :: <a href="mailto:jolek78@posteo.net">Email</a>  :: <a href="https://app.cinny.in/login/envs.net/#/user/@jolek78:envs.net"> Element</a></div>
]]></content:encoded>
      <guid>https://jolek78.writeas.com/aletheia-is-born</guid>
      <pubDate>Thu, 10 Apr 2025 05:46:39 +0000</pubDate>
    </item>
  </channel>
</rss>