<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>SolarPunk &#8212; jolek78&#39;s blog</title>
    <link>https://jolek78.writeas.com/tag:SolarPunk</link>
    <description>thoughts from a friendly human being</description>
    <pubDate>Sat, 16 May 2026 12:18:20 +0000</pubDate>
    <image>
      <url>https://i.snap.as/DEj7yFm4.png</url>
      <title>SolarPunk &#8212; jolek78&#39;s blog</title>
      <link>https://jolek78.writeas.com/tag:SolarPunk</link>
    </image>
    <item>
      <title>Guests on our own web</title>
      <link>https://jolek78.writeas.com/guests-on-our-own-web?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[A few months ago I spun up a new VPS on Linode, London datacentre. Nothing special - Debian, Nginx, a Let&#39;s Encrypt certificate, a domain I was going to use for my daily notes and my homelab experiments. No link posted anywhere, no entries in my feeds, no backlinks from the sites I run. Just a freshly assigned IP, from a subnet that a week earlier had belonged to someone else.&#xA;&#xA;&lt;!--more--&gt;&#xA;&#xA;The one thing I had configured carefully was the logs: nginx with an extended format, journald with audit, a few baseline fail2ban jails. I wanted to see what happens to a server that doesn&#39;t yet have a life, before I gave it one. Twenty-four hours later, I opened the logs. No humans. That was expected - I hadn&#39;t told anyone the domain. But there was already a small zoo of other presences. A wget from a Polish VPS with a phantom reverse DNS, the kind registered with a placeholder that never got updated. Three GETs, same resource, thirty-six second intervals. Then nothing. An SSH scan on port 80 - yes, an SSH scan on the HTTP port - written in Go, with a user-agent that claimed to be Mozilla/5.0 but was negotiating TLS the way only Go&#39;s crypto libraries do. VisionHeight, a commercial scanner that bills itself as ethical, mapped seven ports in two and a half minutes. Censys came through twice, identifying itself, leaving its own PTR and a link to its opt-out page. A Common Crawl crawler. GPTBot. ClaudeBot. AppleBot.&#xA;&#xA;People: zero.&#xA;&#xA;I spent the evening watching those logs the way you&#39;d watch a sequence of read-heads on a tape. It was like opening the door to a flat you&#39;d just rented and finding it already occupied by intruders. This is a public network, they seemed to be saying, and nobody told you what public means.&#xA;&#xA;Since then I&#39;ve done what everyone does: I&#39;ve built defences. nftables to drop ASNs known for aggressive scanning. 
fail2ban with custom jails for nginx that recognise the patterns of the noisier scans - probes against /wp-login.php on a server that doesn&#39;t run WordPress, attempts at /.env, requests for phpMyAdmin paths that don&#39;t exist. GoAccess to visualise what little organic traffic remains once the rest is filtered out. An alert system over ntfy for out-of-band anomalies. It is routine - every sysadmin running a homelab has their own variant. But building it calmly, rather than as a patch on something that has already fallen over, is precisely what gets you to look at things that would otherwise scroll past, filtered away.&#xA;&#xA;And while I was building it, a question came to mind, maybe a banal one, an extremely banal one: who am I doing all this for?&#xA;&#xA;Not for the readers - those are few, almost none, they arrive via RSS, shared links, the occasional search engine. I was defending the server from a network that is predominantly non-human. I was configuring jails for scanners that don&#39;t know me, for crawlers that don&#39;t read me, for botnets that don&#39;t particularly mean me harm - they mean harm to anyone reachable on port 22 or 80.&#xA;&#xA;The threshold: 51% (and already 53%)&#xA;&#xA;In 2024, for the first time in ten years, bot-generated traffic surpassed human traffic on the internet. Fifty-one percent against forty-nine. The figure comes from Imperva&#39;s Bad Bot Report, 2025 edition, the twelfth in the annual series - the analysis is based on thirteen trillion requests blocked by their global mitigation network in 2024 alone. It is the number that best sums up where we have ended up.&#xA;&#xA;The 2026 Bad Bot Report, published a few weeks ago with 2025 data, has updated the figure: 53% bots, 47% humans. Another point and a half lost in twelve months. It did not happen all at once. 
Here is the historical series, from 2015 onwards:&#xA;&#xA;| Year | Humans | Bad bots | Good bots |&#xA;|------|--------|----------|-----------|&#xA;| 2015 | 54% | 27% | 19% |&#xA;| 2018 | 62% | 22% | 17% |&#xA;| 2020 | 59% | 26% | 15% |&#xA;| 2022 | 53% | 30% | 17% |&#xA;| 2023 | 50% | 32% | 18% |&#xA;| 2024 | 49% | 37% | 14% |&#xA;| 2025 | 47% | n/a | n/a |&#xA;&#xA;(Source: Imperva, Bad Bot Report 2025 and 2026)&#xA;&#xA;Humans have lost seven percentage points in ten years. The erosion is slow and steady - a descending curve measured in years, not in months. Nobody cut a ribbon to announce we have crossed the threshold. It was a gradual shift of the axis, a median that moved while we were looking elsewhere. Meanwhile, the bad bots grew from 27% to 37%. Ten percentage points in ten years, all on the predatory side. Brute force, credential stuffing, data scraping, account takeover, API fraud. Imperva records that ATOs - Account Takeover Attacks - grew by 40% in 2024 alone, and in 2025 the financial sector absorbed 46% of all ATO incidents worldwide. And, to top it off, the &#34;good bots&#34; - Googlebot, Bingbot, the legitimate aggregators, the health checkers - went down. From 19% in 2015 to 14% in 2024. The indexing services that historically justified bandwidth consumption have lost ground: the network has become more automated, but in a direction that does not pay off for those who publish.&#xA;&#xA;Cloudflare confirms this with independent data. Their Radar Year in Review 2025, published at the end of December, reports that global internet traffic grew by 19% in 2025, and a substantial share of that growth is attributable to bots and AI crawlers. Googlebot still dominates - around 28% of verified traffic - but the new generation is gaining fast: OpenAI&#39;s GPTBot went from 4.7% in July 2024 to 11.7% in July 2025. ChatGPT-User, the bot that acts on explicit user command, recorded a year-on-year growth of 2,825% in request volume. That is not a typo. 
PerplexityBot, even more extreme: +157,490%.&#xA;&#xA;The 51% threshold has to be read in this context. The curve has been rising for years, and 2024 is not the peak. The network we are using today is not the 2015 network with a few more bots: it is a structurally different network, where humans have gone from being the main signal to being the background noise.&#xA;&#xA;Who is talking in this network?&#xA;&#xA;When you say &#34;bot&#34; you do not say one thing. The presences in the logs belong to three families that do different jobs, have different economies, and produce different pressures on the infrastructure. Three main categories, then.&#xA;&#xA;The cartographers. These are the scanners that map the entire IPv4 space - four billion three hundred million addresses - across all or nearly all known ports, and maintain queryable databases of exposed services. The founding project is ZMap, released in 2013 by a team at the University of Michigan. ZMap is a port scanner that can scan the entire IPv4 space on a single port in under 45 minutes from a single machine, from userspace, over a gigabit connection. Technically remarkable: it cuts by an order of magnitude the time needed to &#34;see&#34; all of the internet. Censys was built on top of ZMap, launched in 2015 by the same authors. Censys continuously scans IPv4, collects TLS certificates, service banners, software fingerprints, and keeps everything in a queryable commercial database. Shodan, founded in 2009 by John Matherly, is the conceptual predecessor: less polished technically, but longer-lived and more deeply rooted in sysadmin culture. Rapid7&#39;s Project Sonar, ZoomEye, Fofa, Netlas - all follow the same logic.&#xA;&#xA;In 2012, an anonymous researcher decided he wanted to census the internet but did not have the bandwidth. He built an illegal botnet of compromised routers - the Carna Botnet - and ran the first Internet Census: he published the dataset online and declared his own offences. 
It remains a case study in the asymmetry between technical capability and legality - what Censys does today from a datacentre, ten years ago was a federal crime in the United States. The scanners describe themselves as ethical. They respect abuse@, publish their methodology, exclude networks on request, leave identifiable PTRs. All true. But the data they produce - the complete, near-real-time map of what is exposed on the internet - is sold by subscription, and the clients include academic research and the surveillance industry, corporate threat intelligence and aspiring attackers with seventy-nine dollars a month for a base account. In 2014, at Def Con 22, researchers Dan Tentler, Paul McMillan and Robert Graham ran a live IPv4 scan on port 5900 looking for VNC servers without authentication. They found thirty thousand systems accessible without a password. Among them: two hydroelectric power stations, the cameras of a Czech casino, industrial control systems, ATMs, a caviar production plant. The map exists because producing it is cheap, and who consults it - for what purposes, with what consequences - is a consequence of that price, not the reason for the project.&#xA;&#xA;The extractors. These are the AI crawlers. They existed in embryonic form before too - Common Crawl for years, the indexing archives of search engines forever - but since November 2022, with the release of ChatGPT, they have changed in nature and in volume.&#xA;&#xA;Cloudflare&#39;s data, collected from a fixed sample of clients to eliminate the growth bias, is explicit. Between July 2024 and July 2025:&#xA;&#xA;GPTBot (OpenAI): from 4.7% to 11.7% of total crawler traffic&#xA;ClaudeBot (Anthropic): from 6% to nearly 10%&#xA;Meta-ExternalAgent (Meta): from 0.9% to 7.5%&#xA;PerplexityBot (Perplexity): growth of 157,490%&#xA;Bytespider (ByteDance): declining, from 14.1% to 2.4%&#xA;&#xA;The most revealing figure is the composition by purpose. 
Cloudflare classifies AI crawling into three categories: training (data collection to train models), search (indexing for chat search), user action (visits on explicit user command). Over the past twelve months, 80% of AI crawling has been for training. 18% for search. 2% for user action. In the most recent six months the training share has risen further, to 82%. The overwhelming majority of the work these bots do around the web does not, then, serve network mapping - it serves to extract content, process it, and turn it into training data for models that will then sell access or use the output to generate responses that compete with the originating site.&#xA;&#xA;Another Cloudflare metric measures the imbalance directly: the crawl-to-refer ratio, that is, how many requests a bot makes versus how much traffic it then sends back to the source site. In July 2025, Anthropic was crawling 38,000 pages for every human visitor it sent back - a clear improvement on the 286,000:1 ratio recorded in January of the same year, but still the most lopsided extreme among the major AI platforms. OpenAI in the same period was running at around 1,500:1. Perplexity 194:1. The economic model is asymmetric extraction: take a lot, give back little.&#xA;&#xA;The parasites. These are the bad bots in the strict sense: 37% of total internet traffic in 2024. Thirteen trillion requests blocked by Imperva&#39;s network alone in that year.&#xA;&#xA;Here the composition changes. Imperva observes that &#34;simple&#34; attacks - basic scripts, dictionary attacks, automated scans - grew from 40% to 45% in 2024. The report explicitly attributes this growth to the arrival of generative AI: tools like ChatGPT, Claude, Llama have lowered the technical barrier to writing a brute forcer, a credential stuffer, a malicious crawler. What ten years ago required Perl and John the Ripper today requires a prompt and ten minutes. 
31% of total attacks recorded by Imperva fall into one of the twenty-one OWASP Automated Threats categories. 44% of advanced bot traffic attacks APIs, no longer web pages - because APIs expose business logic with fewer defences and more value. 21% of attacks use residential proxies: IP addresses belonging to real domestic connections, rented on the grey market, allowing the bot to blend in as legitimate user traffic. Geo-fencing, per-IP rate limiting, ASN blacklists - all useless against an attacker who routes traffic through a residential fibre line in Milan.&#xA;&#xA;One detail demolishes a widespread myth. Attackers usually do not want to take a site down. They want to use it. A compromised site is worth more alive than dead: as a host for phishing, cryptocurrency mining, botnet command-and-control, traffic redirect for black-hat SEO, file storage for warez. When the site falls, the attacker has done something wrong - they have saturated resources, triggered detection, burned their foothold. Akamai regularly publishes reports that confirm this: the economic model of the malicious bot is the long stay, not the raid. This changes the reading of visible symptoms. If a site falls over with intermittent 502s, the structural explanation is almost always: saturation of a PHP-FPM pool due to medium-scale bot traffic, on infrastructure that was not dimensioned to absorb half the internet knocking at the same time. The political explanation - they are attacking us to silence us - is almost always false, because anyone who knows how to attack seriously does not let the site fall over.&#xA;&#xA;robots.txt, or the death of a social pact&#xA;&#xA;In June 1994 Martijn Koster, a Dutch sysadmin running the early web crawlers for ALIWEB, proposed a convention: a text file at the root of the site, robots.txt, in which the operator could declare which parts of their domain crawlers were kindly asked not to visit. 
No central authority would enforce it, no network protocol would verify it. It was a gentleman&#39;s pact, full stop. It worked because in the nineties crawlers were few, they were run by people who knew each other, and nobody had an economic interest strong enough to burn their reputation by ignoring a directive. For thirty years it held. Googlebot, Bingbot, Yandex, Common Crawl - all respected robots.txt as part of the basic etiquette of indexing. It was so established that the formal specification only arrived in 2022 (RFC 9309), decades after the daily practice. When the IETF standardised it, they did so to document a consolidated practice, not to create a new one.&#xA;&#xA;That pact, in the last three years, has been broken.&#xA;&#xA;Drew DeVault, founder of SourceHut - the niche git platform much loved by those who do not want to be on GitHub - published a post in March 2025 that became a manifesto, titled Please stop externalising your costs directly into my face. The piece describes, with technical coldness, the behaviour of LLM crawlers:&#xA;&#xA;  they crawl everything they can find, robots.txt be damned, including expensive endpoints like git blame, every page of every git log, and every commit of every repository, and they do this using random User-Agents that overlap with end-users and come from tens of thousands of IP addresses - mostly residential, in unrelated subnets, each making no more than one HTTP request over any window we tried to measure - actively and maliciously adapting and blending in with legitimate traffic to evade any attempt at characterisation or blocking&#xA;&#xA;It is the description of a distributed DDoS attack carried out by companies that present themselves as legitimate consumers of bandwidth. SourceHut had to unilaterally block entire cloud providers - Google Cloud, Microsoft Azure - because it was the only viable defence.&#xA;&#xA;The Wikimedia Foundation, in April 2025, published data that complements this. 
Since January 2024, the bandwidth consumed by media downloads on Wikimedia Commons has grown by 50%. The increase does not come from new human readers: it comes from AI scrapers vacuuming up the entire catalogue of 144 million open-licence files. Wikimedia has quantified it: 65% of the most expensive traffic hitting the central datacentres is bot-generated, even though bots account for only 35% of total pageviews. The bots read in bulk - they request obscure pages that the regional cache does not have, forcing the infrastructure to fetch them from the centre. A human reader costs little; an AI crawler costs a lot, and the cost-to-benefit ratio for the body hosting the content has become unsustainable. The Foundation has set as a 2025/2026 annual goal: &#34;reduce by 20% the traffic generated by scrapers&#34;. An organisation that hosts the largest free encyclopaedia in the world is forced to invest engineering in repelling those who want to read it.&#xA;&#xA;The KDE project&#39;s GitLab went down temporarily because of a crawler coming from Alibaba IP ranges. GNOME&#39;s GitLab installed Anubis, a proof-of-work challenge written by Xe Iaso - on arrival at the page, the browser has to solve a small computational problem before the content is shown. Costs nothing to a human, costs dearly to a bot that has to do millions a day. The numbers published by Bart Piotrowski, GNOME&#39;s sysadmin, after switching on Anubis: in two and a half hours, 81,000 total requests, of which only 3% made it through the proof-of-work. 97% were bots. Anubis&#39; default loading screen shows a girl in anime style - it is an explicitly provocative aesthetic choice by Iaso, who has declared she wanted to make the experience annoying for those using these tools to extract.&#xA;&#xA;Kevin Fenzi, who administers Fedora&#39;s infrastructure, has blocked traffic from entire countries. 
Drew DeVault, in the same post, writes:&#xA;&#xA;  Every time I sit down for a beer with my friends and fellow sysadmins, it is not long before we start complaining about the bots and asking each other whether the other has found the definitive way to get rid of them. The desperation in these conversations is palpable&#xA;&#xA;It is the first-person chronicle of a technical community that has watched a thirty-year cooperative protocol break in thirty-six months.&#xA;&#xA;Anthropic, OpenAI and the others publicly respond that they respect robots.txt. The sysadmins&#39; logs say otherwise. Cloudflare, in its December 2025 report, writes unambiguously that &#34;crawling activity can be aggressive, often ignoring the directives found in robots.txt files&#34;. The structural problem is simple: robots.txt never had an enforcement mechanism. It rested on reputation. For those extracting data today to train AI models, the dataset is worth more than the reputation lost by ignoring it.&#xA;&#xA;What it means for those who publish&#xA;&#xA;For anyone running a small site - a blog, an online magazine, a collective&#39;s server, a personal homelab - the 51% (and more) figure translates into a daily operational reality that those who do not administer do not see. A server receives, in proportion, the same kind of bot traffic as the New York Times. Not the same volume, of course - but the same mix. GPTBot downloads wp-content, Censys maps the ports, some botnet tries credentials against three or four well-known WordPress endpoints. Even publishing three articles a month to a readership of two hundred people, you end up statistically anonymous, inside a scanning distribution that is uniform across all of IPv4.&#xA;&#xA;This produces two effects.&#xA;&#xA;The first is that the technical barrier to publishing on one&#39;s own has grown. In the 2000s it was enough to install WordPress on a shared host and forget about it. 
Today that model survives only if there is someone taking care of the maintenance - timely updates, well-curated plugins, robust passwords, offsite backups, monitoring. Without it, the site does not get attacked in a targeted way: it simply gets consumed by background pressure, like a cliff that erodes without any particular wave breaking on it.&#xA;&#xA;The second is centralisation. The industry&#39;s response to the problem has been &#34;managed everything&#34;: Cloudflare in front of everything, managed WAFs, hosting that does automatic protection, CDNs that absorb anomalous traffic. They work. But the price is that a large chunk of the web now passes through a single provider - Cloudflare handles something like 20% of global HTTP requests - and the small independent publisher who would like to remain small and independent has to choose between delegating their network to a commercial intermediary or accepting standing upright in the wind.&#xA;&#xA;On the defensive front there is a ferment of countermeasures - creative and desperate at the same time. Beyond Anubis, there are tar pits: Nepenthes, written by an anonymous developer who signs himself &#34;Aaron&#34;, responds to crawlers with infinite labyrinths of generated content - pages that link to other pages that link to others, all synthetic, all designed to consume the bot&#39;s resources without giving anything useful in return. Cloudflare has released a commercial equivalent, AI Labyrinth, which does the same thing serving irrelevant text to recognised crawlers. There is the community project ai.robots.txt, which maintains an up-to-date list of AI crawler user-agents and provides both a ready-made robots.txt and .htaccess rules to block them. A small archipelago of individual countermeasures - effective in some cases, but also a symptom: the fight is site by site, sysadmin by sysadmin, because no higher level exists where the question can be resolved.&#xA;&#xA;Self-hosting is still possible. 
I do it myself, many others do. But it requires time, competence, continuous attention. It has become a niche. What in the 1990s was the normal way of being online is today an exception that needs to be justified - and maintained by hand.&#xA;&#xA;We publish for human readers. But the infrastructure is shaped by bots. The visible web - the one humans see, navigate, read - is the surface tip of an iceberg made mostly of traffic invisible to the eyes and visible in the logs. The real web - the one the bots see - is all of IPv4, scanned in search of usable surfaces.&#xA;&#xA;Guests on our own web&#xA;&#xA;When Tim Berners-Lee described the World Wide Web in the early 1990s, he spoke of a space for connecting people: documents, ideas, knowledge, communities. The cyberlibertarian narrative of the years that followed - Barlow&#39;s Declaration of the Independence of Cyberspace in 1996, the Californian dream of the internet as individual emancipation from the hierarchies of the twentieth century - amplified that promise until it became myth. Thirty years later, one number tells the story: in 2025, humans are 47% of internet traffic. The majority is machines. And 80% of the work of those machines is the extraction of value from pages that other humans have written, to be processed and sold as predictive, classificatory, generative capability.&#xA;&#xA;Lawrence Lessig saw it in 1999, in Code and Other Laws of Cyberspace. The thesis was simple: code is law. The technical architecture of a network is already political, because it determines what behaviours are possible. Changing the code - the protocols, the specifications, the design choices - means changing which practices are economic and which are not. TCP/IP does not speak about identity, and that is a political choice with thirty-year consequences. robots.txt was cooperative, and that is a political choice that has become a vulnerability. 
Those who have controlled the architecture - the ARPANET engineers first, the large infrastructure companies later - have already written the rules of the game, regardless of who won the elections or wrote the laws. Lessig has been repeating it for twenty-five years. It is happening now, on a global scale.&#xA;&#xA;We are guests on our own web. We have been for at least a decade, and for two years we have been statistically a minority. The rent we pay is in data extracted without our noticing, in attention consumed by content generated by those who have scraped ours, and in administration hours spent keeping in place infrastructure that is not designed for us. It is not a metaphor: it is an accounting that could be done line by line, if anyone felt like keeping it. The interesting question, then, is not how we block the bots: it is what it means to publish and administer in an internet where the intended audience is no longer the majority of the recipients. A question we should have asked ourselves a long time ago, and one that concerns not only technical operators, but anyone who considers the internet a common good - political, cultural, material.&#xA;&#xA;---&#xA;&#xA;Sources and further reading&#xA;&#xA;On bot traffic statistics and trends&#xA;&#xA;Imperva (Thales) (2025). 2025 Bad Bot Report: The Rapid Rise of Bots and the Unseen Risk for Business. Twelfth annual edition. The decade-long historical series, the pillar 51% figure, composition by attack category, estimates on residential proxies and ATOs. Thirteen trillion bot requests blocked in 2024. https://www.imperva.com/resources/resource-library/reports/bad-bot-report/&#xA;Imperva (Thales) (2026). 2026 Bad Bot Report: Bad Bots in the Agentic Age. Updated figures for 2025: 53% bots, 47% humans. https://www.imperva.com/blog/&#xA;Cloudflare Radar (2025). 2025 Year in Review: The rise of AI, post-quantum, and record-breaking DDoS attacks. 
AI crawler composition by purpose (training/search/user action), GPTBot/ClaudeBot/Meta-ExternalAgent share, crawl-to-refer ratio by platform. Independent confirmation of the Imperva data from a completely different network angle. https://radar.cloudflare.com/year-in-review/2025&#xA;Cloudflare Blog (2025). From Googlebot to GPTBot: who&#39;s crawling your site in 2025. https://blog.cloudflare.com/&#xA;&#xA;On the breakdown of cooperative protocols&#xA;&#xA;DeVault, D. (2025). Please stop externalising your costs directly into my face. SourceHut blog, March 2025. The manifesto, in first person, of a sysadmin who watches the cooperative robots.txt pact break. Essential reading to understand what it means to administer a FOSS service under pressure from LLM crawlers. https://drewdevault.com/2025/03/17/2025-03-17-Stop-externalizing-your-costs-on-me.html&#xA;Wikimedia Foundation (2025). How crawlers impact the operations of the Wikimedia projects. Diff blog, April 2025. The internal data: 65% of the most expensive traffic from bots, 35% of pageviews. The most documented case of asymmetry between costs borne by the body hosting free content and benefits extracted by crawlers. https://diff.wikimedia.org/&#xA;Iaso, X. (2024–present). Anubis (proof-of-work anti-AI-scraper). The concrete tool that GNOME, KDE and several other FOSS communities have adopted to defend public infrastructure from aggressive crawlers. Demonstrates that defence, today, is proof-of-work - that is, computational friction applied to those who want to read. https://anubis.techaro.lol/&#xA;&#xA;On scanning infrastructure&#xA;&#xA;Durumeric, Z., Adrian, D., Mirian, A., Bailey, M., Halderman, J. A. (2015). &#34;A Search Engine Backed by Internet-Wide Scanning&#34;. Proceedings of the 22nd ACM Conference on Computer and Communications Security (CCS &#39;15). Founding paper of Censys. Describes how scanning IPv4 has become economically trivial. 
Essential technical reading to understand the discovery/defence asymmetry. https://zmap.io/&#xA;Akamai (various years). The Web Scraping Problem and related Threat Intelligence reports. Economic model of the malicious bot as a parasitic long stay, not as a destroyer. Demolishes the common intuition that a site that falls over has been &#34;attacked&#34;: those who know how to attack well do not make anything fall over. https://www.akamai.com/blog/security&#xA;&#xA;On the political economy of digital infrastructure&#xA;&#xA;Lessig, L. (1999, updated as Code v2 in 2006). Code and Other Laws of Cyberspace. Basic Books. Code is law. The technical architecture of a network is already political because it defines what is possible. Twenty-five years later, the thesis is the single most useful conceptual tool for reading what is happening to robots.txt. http://codev2.cc/&#xA;Zuboff, S. (2019). The Age of Surveillance Capitalism. PublicAffairs. Framework of non-consensual extraction as the dominant economic model of Silicon Valley. To be read thinking that its thesis, written about behaviour, applies today one level deeper: to the textual raw material.&#xA;Crawford, K. (2021). Atlas of AI. Yale University Press. The materiality of AI as extractive asymmetry: mines, datacentres, underpaid human labour. I would add: your server.&#xA;&#xA;Original protocol specifications&#xA;&#xA;Postel, J. (ed.) (1981). Internet Protocol. RFC 791. The original IP specification, fourteen pages that never talk about identity. https://datatracker.ietf.org/doc/html/rfc791&#xA;Koster, M., Illyes, G., Zeller, H., Sassman, L. (2022). Robots Exclusion Protocol. RFC 9309. The formal specification of robots.txt, arriving thirty years after the practice and already obsolete in practice. Worth rereading every so often to remember that today&#39;s internet is a palimpsest of hacks on top of a protocol conceived for a world that no longer exists. 
https://datatracker.ietf.org/doc/html/rfc9309&#xA;&#xA;&lt;a href=&#34;https://remark.as/p/jolek78/guests-on-our-own-web&#34;&gt;Discuss...&lt;/a&gt;&#xA;&#xA;#Bots #AICrawlers #robotsTxt #DigitalSovereignty #SelfHosting #Cloudflare #SurveillanceCapitalism #FOSS #Internet #SolarPunk #Writing&#xA;&#xA;&lt;div class=&#34;center&#34;&gt;&#xD;&#xA;· 🦣 &lt;a href=&#34;https://fosstodon.org/@jolek78&#34;&gt;Mastodon&lt;/a&gt; · 📸 &lt;a href=&#34;https://pixelfed.social/jolek78&#34;&gt;Pixelfed&lt;/a&gt; ·  📬 &lt;a href=&#34;mailto:jolek78@jolek78.dev&#34;&gt;Email&lt;/a&gt; ·&#xD;&#xA;· ☕ &lt;a href=&#34;https://liberapay.com/jolek78&#34;&gt;Support this work on Liberapay&lt;/a&gt;&#xD;&#xA;&lt;/div&gt;]]&gt;</description>
      <content:encoded><![CDATA[<p>A few months ago I spun up a new VPS on <strong>Linode</strong>, London datacentre. Nothing special – <strong>Debian</strong>, <strong>Nginx</strong>, a <strong>Let&#39;s Encrypt</strong> certificate, a domain I was going to use for my daily notes and my homelab experiments. No link posted anywhere, no entries in my feeds, no backlinks from the sites I run. Just a freshly assigned IP, from a subnet that a week earlier had belonged to someone else.</p>



<p>The one thing I had configured carefully was the logs: nginx with an extended format, journald with audit, a few baseline <strong>fail2ban</strong> jails. I wanted to see what happens to a server that doesn&#39;t yet have a life, before I gave it one. Twenty-four hours later, I opened the logs. No humans. That was expected – I hadn&#39;t told anyone the domain. But there was already a small zoo of other presences. A <code>wget</code> from a Polish VPS with a phantom reverse DNS, the kind registered with a placeholder that never got updated. Three GETs, same resource, thirty-six second intervals. Then nothing. An SSH scan on port 80 – yes, an SSH scan on the HTTP port – written in Go, with a user-agent that claimed to be Mozilla/5.0 but was negotiating TLS the way only Go&#39;s crypto libraries do. <strong>VisionHeight</strong>, a commercial scanner that bills itself as ethical, mapped seven ports in two and a half minutes. <strong>Censys</strong> came through twice, identifying itself, leaving its own PTR and a link to its opt-out page. A <strong>Common Crawl</strong> crawler. <strong>GPTBot</strong>. <strong>ClaudeBot</strong>. <strong>AppleBot</strong>.</p>

<p>People: zero.</p>

<p>I spent the evening watching those logs the way you&#39;d watch a sequence of read-heads on a tape. It was like opening the door to a flat you&#39;d just rented and finding it already occupied by intruders. <em>This is a public network</em>, they seemed to be saying, <em>and nobody told you what public means</em>.</p>

<p>Since then I&#39;ve done what everyone does: I&#39;ve built defences. <strong>nftables</strong> to drop ASNs known for aggressive scanning. fail2ban with custom jails for nginx that recognise the patterns of the noisier scans – probes against <code>/wp-login.php</code> on a server that doesn&#39;t run <strong>WordPress</strong>, attempts at <code>/.env</code>, requests for phpMyAdmin paths that don&#39;t exist. <strong>GoAccess</strong> to visualise what little organic traffic remains once the rest is filtered out. An alert system over <strong>ntfy</strong> for out-of-band anomalies. It is routine – every sysadmin running a homelab has their own variant. But building it calmly, rather than as a patch on something that has already fallen over, is precisely what gets you to look at things that would otherwise scroll past, filtered away.</p>
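<p>A sketch of such a jail – file names, thresholds and paths here are illustrative, not the exact configuration described above – assuming fail2ban is reading the standard nginx access log:</p>

```
# /etc/fail2ban/filter.d/nginx-probes.local  (illustrative name)
# Match requests for paths that only scanners ask for on this server.
[Definition]
failregex = ^<HOST> .* "(GET|POST) /wp-login\.php
            ^<HOST> .* "(GET|POST) /\.env
            ^<HOST> .* "(GET|POST) /phpmyadmin

# /etc/fail2ban/jail.d/nginx-probes.local
[nginx-probes]
enabled  = true
port     = http,https
filter   = nginx-probes
logpath  = /var/log/nginx/access.log
maxretry = 2
findtime = 600
bantime  = 86400
```

<p>Two matches in ten minutes and the IP is banned for a day; legitimate readers never request those paths, so false positives are rare.</p>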

<p>And while I was building it, a question came to mind, maybe a banal one, an extremely banal one: <em>who am I doing all this for</em>?</p>

<p>Not for the readers – those are few, almost none; they arrive via RSS, shared links, the occasional search engine. I was defending the server from a network that is predominantly non-human. I was configuring jails for scanners that don&#39;t know me, for crawlers that don&#39;t read me, for botnets that don&#39;t particularly mean me harm – they mean harm to anyone reachable on port 22 or 80.</p>

<h2 id="the-threshold-51-and-already-53">The threshold: 51% (and already 53%)</h2>

<p>In 2024, for the first time in ten years, bot-generated traffic surpassed human traffic on the internet. Fifty-one percent against forty-nine. The figure comes from <strong>Imperva</strong>&#39;s <em>Bad Bot Report</em>, 2025 edition, the twelfth in the annual series – the analysis is based on thirteen trillion requests blocked by their global mitigation network in 2024 alone. It is the number that best sums up where we have ended up.</p>

<p>The 2026 <em>Bad Bot Report</em>, published a few weeks ago with 2025 data, has updated the figure: 53% bots, 47% humans. Another two points lost in twelve months. It did not happen all at once. Here is the historical series, from 2015 onwards:</p>

<table>
<thead>
<tr>
<th>Year</th>
<th>Humans</th>
<th>Bad bots</th>
<th>Good bots</th>
</tr>
</thead>

<tbody>
<tr>
<td>2015</td>
<td>54%</td>
<td>27%</td>
<td>19%</td>
</tr>

<tr>
<td>2018</td>
<td>62%</td>
<td>22%</td>
<td>17%</td>
</tr>

<tr>
<td>2020</td>
<td>59%</td>
<td>26%</td>
<td>15%</td>
</tr>

<tr>
<td>2022</td>
<td>53%</td>
<td>30%</td>
<td>17%</td>
</tr>

<tr>
<td>2023</td>
<td>50%</td>
<td>32%</td>
<td>18%</td>
</tr>

<tr>
<td>2024</td>
<td>49%</td>
<td>37%</td>
<td>14%</td>
</tr>

<tr>
<td>2025</td>
<td>47%</td>
<td>n/a</td>
<td>n/a</td>
</tr>
</tbody>
</table>

<p><em>(Source: Imperva, Bad Bot Report 2025 and 2026)</em></p>

<p>Humans have lost seven percentage points in ten years. The erosion is slow and steady – a descending curve measured in years, not in months. Nobody cut a ribbon to announce <em>we have crossed the threshold</em>. It was a gradual shift of the axis, a median that moved while we were looking elsewhere. Meanwhile, the bad bots grew from 27% to 37%. Ten percentage points in ten years, all on the predatory side. Brute force, credential stuffing, data scraping, account takeover, API fraud. Imperva records that ATOs – <strong>Account Takeover Attacks</strong> – grew by 40% in 2024 alone, and in 2025 the financial sector absorbed 46% of all ATO incidents worldwide. And, to cap it all, the “good bots” – <strong>Googlebot</strong>, <strong>Bingbot</strong>, the legitimate aggregators, the health checkers – went down. From 19% in 2015 to 14% in 2024. The indexing services that historically justified bandwidth consumption have lost ground: the network has become more automated, but in a direction that does not pay off for those who publish.</p>

<p><strong>Cloudflare</strong> confirms this with independent data. Their <em>Radar Year in Review 2025</em>, published at the end of December, reports that global internet traffic grew by 19% in 2025, and a substantial share of that growth is attributable to bots and AI crawlers. Googlebot still dominates – around 28% of verified traffic – but the new generation is gaining fast: <strong>OpenAI</strong>&#39;s GPTBot went from 4.7% in July 2024 to 11.7% in July 2025. ChatGPT-User, the bot that acts on explicit user command, recorded a year-on-year growth of 2,825% in request volume. That is not a typo. <strong>PerplexityBot</strong>, even more extreme: +157,490%.</p>

<p>The 51% threshold has to be read in this context. The curve has been rising for years, and 2024 is not the peak. The network we are using today is not the 2015 network with a few more bots: <em>it is a structurally different network, where humans have gone from being the main signal to being the background noise</em>.</p>

<h2 id="who-is-talking-in-this-network">Who is talking in this network?</h2>

<p>“Bot”, however, is not one thing. The presences in the logs belong to three families that do different jobs, have different economies, and produce different pressures on the infrastructure. Three main categories, then.</p>

<p><strong>The cartographers.</strong> These are the scanners that map the entire IPv4 space – four billion three hundred million addresses – across all or nearly all known ports, and maintain queryable databases of exposed services. The founding project is <strong>ZMap</strong>, released in 2013 by a team at the <strong>University of Michigan</strong>. ZMap is a port scanner that can scan the entire IPv4 space on a single port in under 45 minutes from a single machine, from userspace, over a gigabit connection. Technically remarkable: it cuts by an order of magnitude the time needed to “see” all of the internet. Censys was built on top of ZMap, launched in 2015 by the same authors. Censys continuously scans IPv4, collects TLS certificates, service banners, software fingerprints, and keeps everything in a queryable commercial database. <strong>Shodan</strong>, founded in 2009 by <strong>John Matherly</strong>, is the conceptual predecessor: less polished technically, but longer-lived and more deeply rooted in sysadmin culture. <strong>Rapid7</strong>&#39;s Project Sonar, ZoomEye, Fofa, Netlas – all follow the same logic.</p>
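<p>A back-of-envelope calculation – mine, not the ZMap paper&#39;s – shows why a single gigabit link is enough:</p>

```python
# Rough time to send one SYN probe to every IPv4 address over gigabit Ethernet.
# 84 bytes is the minimum on-wire cost of a frame (64-byte frame plus preamble
# and inter-frame gap); a bare SYN probe sits close to that minimum.
ADDRESSES = 2**32              # full IPv4 space, reserved ranges included
BITS_PER_PROBE = 84 * 8        # minimal Ethernet frame, on the wire
LINK_BPS = 1_000_000_000       # 1 Gbit/s

seconds = ADDRESSES * BITS_PER_PROBE / LINK_BPS
print(f"{seconds / 60:.0f} minutes")
```

<p>The result lands around 48 minutes; skip the reserved and unroutable ranges, as ZMap does, and you come in near the famous 45.</p>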

<p>In 2012, an anonymous researcher decided he wanted to take a census of the internet but did not have the bandwidth. He built an illegal botnet of compromised routers – the <strong>Carna Botnet</strong> – and ran the first Internet Census: he published the dataset online and openly admitted the offences involved. It remains a case study in the asymmetry between technical capability and legality – what Censys does today from a datacentre, ten years ago was a federal crime in the United States. The scanners describe themselves as <em>ethical</em>. They respect <code>abuse@</code>, publish their methodology, exclude networks on request, leave identifiable PTRs. All true. But the data they produce – the complete, near-real-time map of what is exposed on the internet – is sold by subscription, and the clients include academic research and the surveillance industry, corporate threat intelligence and aspiring attackers with seventy-nine dollars a month for a base account. In 2014, at <strong>Def Con 22</strong>, researchers Dan Tentler, Paul McMillan and Robert Graham ran a live IPv4 scan on port 5900 looking for <strong>VNC</strong> servers without authentication. They found thirty thousand systems accessible without a password. Among them: two hydroelectric power stations, the cameras of a Czech casino, industrial control systems, ATMs, a caviar production plant. The map exists because producing it is cheap, and <em>who consults it – for what purposes, with what consequences – is a consequence of that price, not the reason for the project</em>.</p>

<p><strong>The extractors.</strong> These are the AI crawlers. They existed in embryonic form before too – Common Crawl for years, the indexing archives of search engines forever – but since November 2022, with the release of ChatGPT, they have changed in nature and in volume.</p>

<p>Cloudflare&#39;s data, collected from a fixed sample of clients to eliminate the growth bias, is explicit. Between July 2024 and July 2025:</p>
<ul><li>GPTBot (<strong>OpenAI</strong>): from 4.7% to 11.7% of total crawler traffic</li>
<li>ClaudeBot (<strong>Anthropic</strong>): from 6% to nearly 10%</li>
<li>Meta-ExternalAgent (<strong>Meta</strong>): from 0.9% to 7.5%</li>
<li>PerplexityBot (<strong>Perplexity</strong>): growth of 157,490%</li>
<li>Bytespider (<strong>ByteDance</strong>): declining, from 14.1% to 2.4%</li></ul>

<p>The most revealing figure is the composition by purpose. Cloudflare classifies AI crawling into three categories: <em>training</em> (data collection to train models), <em>search</em> (indexing for chat search), <em>user action</em> (visits on explicit user command). Over the past twelve months, 80% of AI crawling has been for training. 18% for search. 2% for user action. In the most recent six months the training share has risen further, to 82%. <em>The overwhelming majority of the work these bots do around the web does not, then, serve network mapping – it serves to extract content, process it, and turn it into training data for models that will then sell access or use the output to generate responses that compete with the originating site</em>.</p>

<p>Another Cloudflare metric measures the imbalance directly: the <em>crawl-to-refer ratio</em>, that is, how many requests a bot makes versus how much traffic it then sends back to the source site. In July 2025, Anthropic was crawling 38,000 pages for every human visitor it sent back – a clear improvement on the 286,000:1 ratio recorded in January of the same year, but still the most lopsided extreme among the major AI platforms. OpenAI in the same period was running at around 1,500:1. Perplexity 194:1. The economic model is asymmetric extraction: take a lot, give back little.</p>
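<p>The ratio itself is trivial to compute from your own access log. A minimal sketch – the user-agent token and the referrer host below are assumptions for the example, not a canonical list:</p>

```python
def crawl_to_refer(log_lines, crawler_ua="GPTBot", refer_host="chatgpt.com"):
    """Requests made by a crawler vs. the human visits it sends back.

    Expects nginx 'combined'-format lines; the markers are naive substring
    checks, enough for an order-of-magnitude figure.
    """
    crawls = sum(1 for line in log_lines if crawler_ua in line)
    refers = sum(1 for line in log_lines
                 if refer_host in line and crawler_ua not in line)
    return crawls / max(refers, 1)   # avoid dividing by zero referrals
```

<p>Feed it a day of logs and compare your own number with the platform figures above.</p>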

<p><strong>The parasites.</strong> These are the bad bots in the strict sense: 37% of total internet traffic in 2024. Thirteen trillion requests blocked by Imperva&#39;s network alone in that year.</p>

<p>Here the composition changes. Imperva observes that “simple” attacks – basic scripts, dictionary attacks, automated scans – grew from 40% to 45% in 2024. The report explicitly attributes this growth to the arrival of generative AI: tools like ChatGPT, <strong>Claude</strong>, <strong>Llama</strong> have lowered the technical barrier to writing a brute forcer, a credential stuffer, a malicious crawler. What ten years ago required Perl and John the Ripper today requires a prompt and ten minutes. 31% of total attacks recorded by Imperva fall into one of the twenty-one OWASP Automated Threats categories. 44% of advanced bot traffic attacks APIs, no longer web pages – because APIs expose business logic with fewer defences and more value. 21% of attacks use residential proxies: IP addresses belonging to real domestic connections, rented on the grey market, allowing the bot to blend in as legitimate user traffic. Geo-fencing, per-IP rate limiting, ASN blacklists – all useless against an attacker who routes traffic through a residential fibre line in Milan.</p>

<p>One detail demolishes a widespread myth. <em>Attackers usually do not want to take a site down. They want to use it</em>. A compromised site is worth more alive than dead: as a host for phishing, cryptocurrency mining, botnet command-and-control, traffic redirect for black-hat SEO, file storage for warez. When the site falls, the attacker has done something wrong – they have saturated resources, triggered detection, burned their foothold. <strong>Akamai</strong> regularly publishes reports that confirm this: the economic model of the malicious bot is the long stay, not the raid. This changes the reading of visible symptoms. If a site falls over with intermittent 502s, the structural explanation is almost always: saturation of a PHP-FPM pool due to medium-scale bot traffic, on infrastructure that was not sized to absorb half the internet knocking at the same time. The political explanation – <em>they are attacking us to silence us</em> – is almost always false, because anyone who knows how to attack seriously does not let the site fall over.</p>
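<p>The arithmetic behind that saturation is short. A sketch with illustrative numbers – none of them from a specific incident:</p>

```python
# A PHP-FPM pool saturates when arrival rate x service time exceeds the
# worker count; the excess queues up and nginx starts answering 502.
workers = 10          # pm.max_children (illustrative)
service_time = 0.4    # seconds of PHP work per request (illustrative)
human_rps = 2         # organic requests per second
bot_rps = 30          # a medium-scale crawler burst

busy = (human_rps + bot_rps) * service_time   # workers busy on average
print(f"need {busy:.1f} workers, have {workers}")
```

<p>Nearly thirteen workers needed, ten available: no attack, no malice, just a pool a third too small for the background pressure.</p>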

<h2 id="robots-txt-or-the-death-of-a-social-pact">robots.txt, or the death of a social pact</h2>

<p>In June 1994 <strong>Martijn Koster</strong>, a Dutch sysadmin running the early web crawlers for <strong>ALIWEB</strong>, proposed a convention: a text file at the root of the site, <code>robots.txt</code>, in which the operator could declare which parts of their domain crawlers were kindly asked not to visit. No central authority would enforce it, no network protocol would verify it. <em>It was a gentleman&#39;s pact, full stop</em>. It worked because in the nineties crawlers were few, they were run by people who knew each other, and nobody had an economic interest strong enough to burn their reputation by ignoring a directive. For thirty years it held. Googlebot, Bingbot, Yandex, Common Crawl – all respected <code>robots.txt</code> as part of the basic etiquette of indexing. It was so established that the formal specification only arrived in 2022 (<strong>RFC 9309</strong>), decades after the daily practice. When the <strong>IETF</strong> standardised it, they did so to document a consolidated practice, not to create a new one.</p>
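<p>The entire pact fits in a handful of lines. A <code>robots.txt</code> along these lines – the directives are the real RFC 9309 syntax, the policy is just an example – is all the protocol ever offered:</p>

```
# robots.txt – a request, never an enforcement mechanism (RFC 9309)
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# everyone else may crawl, except the private area
User-agent: *
Disallow: /private/
```

<p>Nothing in the stack checks compliance; a crawler that ignores the file receives exactly the same responses as one that honours it.</p>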

<p>That pact, in the last three years, has been broken.</p>

<p><strong>Drew DeVault</strong>, founder of <strong>SourceHut</strong> – the niche git platform much loved by those who do not want to be on GitHub – published a post in March 2025 that became a manifesto, titled <em>Please stop externalising your costs directly into my face</em>. The piece describes, with technical coldness, the behaviour of LLM crawlers:</p>

<blockquote><p>they crawl everything they can find, robots.txt be damned, including expensive endpoints like git blame, every page of every git log, and every commit of every repository, and they do this using random User-Agents that overlap with end-users and come from tens of thousands of IP addresses – mostly residential, in unrelated subnets, each making no more than one HTTP request over any window we tried to measure – actively and maliciously adapting and blending in with legitimate traffic to evade any attempt at characterisation or blocking</p></blockquote>

<p>It is the description of a DDoS attack carried out by companies that present themselves as legitimate consumers of bandwidth. SourceHut had to unilaterally block entire cloud providers – Google Cloud, Microsoft Azure – because it was the only viable defence.</p>

<p>The <strong>Wikimedia Foundation</strong>, in April 2025, published data that complements this. Since January 2024, the bandwidth consumed by media downloads on <strong>Wikimedia Commons</strong> has grown by 50%. The increase does not come from new human readers: it comes from AI scrapers vacuuming up the entire catalogue of 144 million open-licence files. Wikimedia has quantified it: 65% of the most expensive traffic hitting the central datacentres is bot-generated, even though bots account for only 35% of total pageviews. <em>The bots read in bulk</em> – they request obscure pages that the regional cache does not have, forcing the infrastructure to fetch them from the centre. A human reader costs little; an AI crawler costs a lot, and the cost-to-benefit ratio for the body hosting the content has become unsustainable. The Foundation has set as a 2025/2026 annual goal: “reduce by 20% the traffic generated by scrapers”. <em>An organisation that hosts the largest free encyclopaedia in the world is forced to invest engineering in repelling those who want to read it</em>.</p>

<p>The <strong>KDE</strong> project&#39;s GitLab went down temporarily because of a crawler coming from <strong>Alibaba</strong> IP ranges. <strong>GNOME</strong>&#39;s GitLab installed <strong>Anubis</strong>, a proof-of-work challenge written by <strong>Xe Iaso</strong> – on arrival at the page, the browser has to solve a small computational problem before the content is shown. It costs a human nothing; it costs a bot that has to do millions a day dearly. The numbers published by Bart Piotrowski, GNOME&#39;s sysadmin, after switching on Anubis: in two and a half hours, 81,000 total requests, of which only 3% made it through the proof-of-work. 97% were bots. Anubis&#39; default loading screen shows a girl in anime style – an explicitly provocative aesthetic choice by Iaso, who has said the point was to make the experience annoying for those using these tools to extract.</p>
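<p>The principle is hashcash-style proof-of-work – what follows is a generic sketch of the idea, not Anubis&#39;s actual challenge format:</p>

```python
import hashlib
from itertools import count

def solve(challenge: str, difficulty_bits: int = 16) -> int:
    """Find a nonce whose SHA-256 over challenge:nonce starts with
    difficulty_bits zero bits. One page load for a human; millions of
    loads a day for a bulk crawler."""
    for nonce in count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0:
            return nonce

def verify(challenge: str, nonce: int, difficulty_bits: int = 16) -> bool:
    # Verification is a single hash, so the server's cost stays negligible.
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0
```

<p>The asymmetry is the whole point: solving takes on the order of <code>2**difficulty_bits</code> hashes, verifying takes one.</p>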

<p>Kevin Fenzi, who administers <strong>Fedora</strong>&#39;s infrastructure, has blocked traffic from entire countries. Drew DeVault, in the same post, writes:</p>

<blockquote><p>Every time I sit down for a beer with my friends and fellow sysadmins, it is not long before we start complaining about the bots and asking each other whether the other has found the definitive way to get rid of them. The desperation in these conversations is palpable</p></blockquote>

<p>It is the first-person chronicle of a technical community that has watched a thirty-year cooperative protocol break in thirty-six months.</p>

<p>Anthropic, OpenAI and the others publicly respond that they respect <code>robots.txt</code>. The sysadmins&#39; logs say otherwise. Cloudflare, in its December 2025 report, writes unambiguously that “crawling activity can be aggressive, often ignoring the directives found in robots.txt files”. The structural problem is simple: <em><code>robots.txt</code> never had an enforcement mechanism. It rested on reputation</em>. For those extracting data today to train AI models, the dataset is worth more than the reputation lost by ignoring it.</p>

<h2 id="what-it-means-for-those-who-publish">What it means for those who publish</h2>

<p>For anyone running a small site – a blog, an online magazine, a collective&#39;s server, a personal homelab – the 51% (and more) figure translates into a daily operational reality that those who do not administer do not see. <em>A server receives, in proportion, the same kind of bot traffic as the New York Times</em>. Not the same volume, of course – but the same mix. GPTBot downloads <code>wp-content</code>, Censys maps the ports, some botnet tries credentials against three or four well-known WordPress endpoints. Even publishing three articles a month to a readership of two hundred people, you end up statistically anonymous, inside a scanning distribution that is uniform across all of IPv4.</p>

<p>This produces two effects.</p>

<p>The first is that <em>the technical barrier to publishing on one&#39;s own has grown</em>. In the 2000s it was enough to install WordPress on a shared host and forget about it. Today that model survives only if there is someone taking care of the maintenance – timely updates, well-curated plugins, robust passwords, offsite backups, monitoring. Without it, the site does not get attacked in a targeted way: it simply gets consumed by background pressure, like a cliff that erodes without any particular wave breaking on it.</p>

<p>The second is centralisation. The industry&#39;s response to the problem has been “managed everything”: Cloudflare in front of everything, managed WAFs, hosting with automatic protection, CDNs that absorb anomalous traffic. They work. But the price is that a large chunk of the web now passes through a single provider – <em>Cloudflare handles something like 20% of global HTTP requests</em> – and the small independent publisher who would like to remain small and independent has to choose between delegating their network to a commercial intermediary and standing in the open on their own.</p>

<p>On the defensive front there is a ferment of countermeasures – creative and desperate at the same time. Beyond Anubis, there are tar pits: <strong>Nepenthes</strong>, written by an anonymous developer who signs himself “Aaron”, responds to crawlers with infinite labyrinths of generated content – pages that link to other pages that link to others, all synthetic, all designed to consume the bot&#39;s resources without giving anything useful in return. Cloudflare has released a commercial equivalent, <strong>AI Labyrinth</strong>, which does the same thing serving irrelevant text to recognised crawlers. There is the community project <strong>ai.robots.txt</strong>, which maintains an up-to-date list of AI crawler user-agents and provides both a ready-made <code>robots.txt</code> and <code>.htaccess</code> rules to block them. <em>A small archipelago of individual countermeasures – effective in some cases, but also a symptom: the fight is site by site, sysadmin by sysadmin, because no higher level exists where the question can be resolved</em>.</p>
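<p>On nginx, the ai.robots.txt approach reduces to a user-agent map – the tokens below are real crawler names, but the rule set is a sketch, not the project&#39;s full list:</p>

```
# http context: classify requests by user agent
map $http_user_agent $ai_crawler {
    default        0;
    ~*GPTBot       1;
    ~*ClaudeBot    1;
    ~*CCBot        1;
    ~*Bytespider   1;
}

server {
    # ... existing listen / server_name / location blocks ...
    if ($ai_crawler) { return 403; }
}
```

<p>Unlike <code>robots.txt</code>, this is enforcement – but only against crawlers honest enough to announce themselves in the first place.</p>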

<p>Self-hosting is still possible. I do it myself, many others do. But it requires time, competence, continuous attention. <em>It has become a niche</em>. What in the 1990s was the normal way of being online is today an exception that needs to be justified – and maintained by hand.</p>

<p>We publish for human readers. But the infrastructure is shaped by bots. The visible web – the one humans see, navigate, read – is the surface tip of an iceberg made mostly of traffic invisible to the eyes and visible in the logs. The real web – the one the bots see – is all of IPv4, scanned in search of usable surfaces.</p>

<h2 id="guests-on-our-own-web">Guests on our own web</h2>

<p>When <strong>Tim Berners-Lee</strong> described the World Wide Web in the early 1990s, he spoke of a space for connecting people: documents, ideas, knowledge, communities. The cyberlibertarian narrative of the years that followed – <strong>Barlow</strong>&#39;s <em>Declaration of the Independence of Cyberspace</em> in 1996, the Californian dream of the internet as individual emancipation from the hierarchies of the twentieth century – amplified that promise until it became myth. Thirty years later, one number tells the story: in 2025, humans are 47% of internet traffic. <em>The majority is machines</em>. And 80% of the work of those machines is the extraction of value from pages that other humans have written, to be processed and sold as predictive, classificatory, generative capability.</p>

<p><strong>Lawrence Lessig</strong> saw it in 1999, in <em>Code and Other Laws of Cyberspace</em>. The thesis was simple: <em>code is law</em>. The technical architecture of a network is already political, because it determines what behaviours are possible. Changing the code – the protocols, the specifications, the design choices – means changing which practices are economic and which are not. TCP/IP does not speak about identity, and that is a political choice with thirty-year consequences. <code>robots.txt</code> was cooperative, and that is a political choice that has become a vulnerability. Those who have controlled the architecture – the <strong>ARPANET</strong> engineers first, the large infrastructure companies later – have already written the rules of the game, regardless of who won the elections or wrote the laws. Lessig has been repeating it for twenty-five years. It is happening now, on a global scale.</p>

<p><em>We are guests on our own web</em>. We have been for at least a decade, and for two years we have been statistically a minority. The rent we pay is in data extracted without our noticing, in attention consumed by content generated by those who have scraped ours, and in administration hours spent keeping in place infrastructure that is not designed for us. It is not a metaphor: it is an accounting that could be done line by line, if anyone felt like keeping it. The interesting question, then, is not <em>how we block the bots</em>: it is <em>what it means to publish and administer in an internet where the intended audience is no longer the majority of the recipients</em>. A question we should have asked ourselves a long time ago, and one that concerns not only technical operators, but anyone who considers the internet a common good – political, cultural, material.</p>

<hr/>

<h2 id="sources-and-further-reading">Sources and further reading</h2>

<p><strong>On bot traffic statistics and trends</strong></p>
<ul><li>Imperva (Thales) (2025). <em>2025 Bad Bot Report: The Rapid Rise of Bots and the Unseen Risk for Business</em>. Twelfth annual edition. The decade-long historical series, the pillar 51% figure, composition by attack category, estimates on residential proxies and ATOs. Thirteen trillion bot requests blocked in 2024. <a href="https://www.imperva.com/resources/resource-library/reports/bad-bot-report/">https://www.imperva.com/resources/resource-library/reports/bad-bot-report/</a></li>
<li>Imperva (Thales) (2026). <em>2026 Bad Bot Report: Bad Bots in the Agentic Age</em>. Updated figures for 2025: 53% bots, 47% humans. <a href="https://www.imperva.com/blog/">https://www.imperva.com/blog/</a></li>
<li>Cloudflare Radar (2025). <em>2025 Year in Review: The rise of AI, post-quantum, and record-breaking DDoS attacks</em>. AI crawler composition by purpose (training/search/user action), GPTBot/ClaudeBot/Meta-ExternalAgent share, crawl-to-refer ratio by platform. Independent confirmation of the Imperva data from a completely different network angle. <a href="https://radar.cloudflare.com/year-in-review/2025">https://radar.cloudflare.com/year-in-review/2025</a></li>
<li>Cloudflare Blog (2025). <em>From Googlebot to GPTBot: who&#39;s crawling your site in 2025</em>. <a href="https://blog.cloudflare.com/">https://blog.cloudflare.com/</a></li></ul>

<p><strong>On the breakdown of cooperative protocols</strong></p>
<ul><li>DeVault, D. (2025). <em>Please stop externalising your costs directly into my face</em>. SourceHut blog, March 2025. The manifesto, in first person, of a sysadmin who watches the cooperative <code>robots.txt</code> pact break. Essential reading to understand what it means to administer a FOSS service under pressure from LLM crawlers. <a href="https://drewdevault.com/2025/03/17/2025-03-17-Stop-externalizing-your-costs-on-me.html">https://drewdevault.com/2025/03/17/2025-03-17-Stop-externalizing-your-costs-on-me.html</a></li>
<li>Wikimedia Foundation (2025). <em>How crawlers impact the operations of the Wikimedia projects</em>. Diff blog, April 2025. The internal data: 65% of the most expensive traffic from bots, 35% of pageviews. The most documented case of asymmetry between costs borne by the body hosting free content and benefits extracted by crawlers. <a href="https://diff.wikimedia.org/">https://diff.wikimedia.org/</a></li>
<li>Iaso, X. (2024–present). <em>Anubis (proof-of-work anti-AI-scraper)</em>. The concrete tool that GNOME, KDE and several other FOSS communities have adopted to defend public infrastructure from aggressive crawlers. Demonstrates that defence, today, is proof-of-work – that is, computational friction applied to those who want to read. <a href="https://anubis.techaro.lol/">https://anubis.techaro.lol/</a></li></ul>

<p><strong>On scanning infrastructure</strong></p>
<ul><li>Durumeric, Z., Adrian, D., Mirian, A., Bailey, M., Halderman, J. A. (2015). “A Search Engine Backed by Internet-Wide Scanning”. <em>Proceedings of the 22nd ACM Conference on Computer and Communications Security (CCS &#39;15)</em>. Founding paper of Censys. Describes how scanning IPv4 has become economically trivial. Essential technical reading to understand the discovery/defence asymmetry. <a href="https://zmap.io/">https://zmap.io/</a></li>
<li>Akamai (various years). <em>The Web Scraping Problem</em> and related Threat Intelligence reports. Economic model of the malicious bot as a parasitic <em>long stay</em>, not as a destroyer. Demolishes the common intuition that a site that falls over has been “attacked”: those who know how to attack well do not make anything fall over. <a href="https://www.akamai.com/blog/security">https://www.akamai.com/blog/security</a></li></ul>

<p><strong>On the political economy of digital infrastructure</strong></p>
<ul><li>Lessig, L. (1999, updated as <em>Code v2</em> in 2006). <em>Code and Other Laws of Cyberspace</em>. Basic Books. <em>Code is law</em>. The technical architecture of a network is already political because it defines what is possible. Twenty-five years later, the thesis is the single most useful conceptual tool for reading what is happening to <code>robots.txt</code>. <a href="http://codev2.cc/">http://codev2.cc/</a></li>
<li>Zuboff, S. (2019). <em>The Age of Surveillance Capitalism</em>. PublicAffairs. Framework of non-consensual extraction as the dominant economic model of Silicon Valley. To be read thinking that its thesis, written about behaviour, applies today one level deeper: to the textual raw material.</li>
<li>Crawford, K. (2021). <em>Atlas of AI</em>. Yale University Press. The materiality of AI as extractive asymmetry: mines, datacentres, underpaid human labour. I would add: your server.</li></ul>

<p><strong>Original protocol specifications</strong></p>
<ul><li>Postel, J. (ed.) (1981). <em>Internet Protocol</em>. RFC 791. The original IP specification, fourteen pages that never talk about identity. <a href="https://datatracker.ietf.org/doc/html/rfc791">https://datatracker.ietf.org/doc/html/rfc791</a></li>
<li>Koster, M., Illyes, G., Zeller, H., Sassman, L. (2022). <em>Robots Exclusion Protocol</em>. RFC 9309. The formal specification of <code>robots.txt</code>, arriving thirty years after the practice and already obsolete in the practice. Worth rereading every so often to remember that today&#39;s internet is a palimpsest of hacks on top of a protocol conceived for a world that no longer exists. <a href="https://datatracker.ietf.org/doc/html/rfc9309">https://datatracker.ietf.org/doc/html/rfc9309</a></li></ul>

<p><a href="https://remark.as/p/jolek78/guests-on-our-own-web">Discuss...</a></p>

<p><a href="https://jolek78.writeas.com/tag:Bots" class="hashtag"><span>#</span><span class="p-category">Bots</span></a> <a href="https://jolek78.writeas.com/tag:AICrawlers" class="hashtag"><span>#</span><span class="p-category">AICrawlers</span></a> <a href="https://jolek78.writeas.com/tag:robotsTxt" class="hashtag"><span>#</span><span class="p-category">robotsTxt</span></a> <a href="https://jolek78.writeas.com/tag:DigitalSovereignty" class="hashtag"><span>#</span><span class="p-category">DigitalSovereignty</span></a> <a href="https://jolek78.writeas.com/tag:SelfHosting" class="hashtag"><span>#</span><span class="p-category">SelfHosting</span></a> <a href="https://jolek78.writeas.com/tag:Cloudflare" class="hashtag"><span>#</span><span class="p-category">Cloudflare</span></a> <a href="https://jolek78.writeas.com/tag:SurveillanceCapitalism" class="hashtag"><span>#</span><span class="p-category">SurveillanceCapitalism</span></a> <a href="https://jolek78.writeas.com/tag:FOSS" class="hashtag"><span>#</span><span class="p-category">FOSS</span></a> <a href="https://jolek78.writeas.com/tag:Internet" class="hashtag"><span>#</span><span class="p-category">Internet</span></a> <a href="https://jolek78.writeas.com/tag:SolarPunk" class="hashtag"><span>#</span><span class="p-category">SolarPunk</span></a> <a href="https://jolek78.writeas.com/tag:Writing" class="hashtag"><span>#</span><span class="p-category">Writing</span></a></p>

<div class="center">
· 🦣 <a href="https://fosstodon.org/@jolek78">Mastodon</a> · 📸 <a href="https://pixelfed.social/jolek78">Pixelfed</a> · 📬 <a href="mailto:jolek78@jolek78.dev">Email</a> ·
· ☕ <a href="https://liberapay.com/jolek78">Support this work on Liberapay</a>
</div>
]]></content:encoded>
      <guid>https://jolek78.writeas.com/guests-on-our-own-web</guid>
      <pubDate>Sat, 16 May 2026 07:58:18 +0000</pubDate>
    </item>
    <item>
      <title>ARM. The chip we didn&#39;t know we needed</title>
      <link>https://jolek78.writeas.com/arm-the-chip-we-didnt-know-we-needed?pk_campaign=rss-feed</link>
<description>&lt;![CDATA[There are architectures you see and architectures you don&#39;t. ARM is the most extreme case of the second category: it runs in the phone in our pocket, in the home router, in the eighty-euro board that serves as a home server for millions of tinkerers, in the datacentres of Amazon and Google. It is everywhere, and almost nobody knows what it is. It took me years, too, to bring it into focus, and the occasion, many years ago, was a Raspberry Pi 3 that I had decided to turn into a Nextcloud - the first brick of what would become my small homelab. It was a line in /boot/config.txt that made me notice the thing: the Pi&#39;s processor, a Broadcom BCM2837, used the same architecture as the Android phones I had hacked for years. ARM. Same instruction set, same underlying logic, same family.&#xA;&#xA;!--more--&#xA;&#xA;A room in Cambridge, a government project, and a woman&#xA;&#xA;The story of ARM does not begin in a Silicon Valley garage. It begins in Cambridge, in 1983, in a small company called Acorn Computers, on a commission from the BBC.&#xA;&#xA;The context matters, because it changes the whole flavour of the story. The British government had decided to launch a national computer literacy programme - the BBC Computer Literacy Project - and needed a machine that could go into schools. Acorn won the tender with the BBC Micro, a cheap and robust computer that would introduce an entire generation of Britons to programming. It was the first time a state systematically funded popular access to computing. Not a startup with a venture-capital pitch: a public project, with public money, for an explicitly democratising goal.&#xA;&#xA;But the BBC Micro was not enough. Acorn needed something more powerful for the next step, and the processors available on the market - 6502, Z80, the early Intel offerings - were either too slow, too complex, or too expensive. 
Acorn&#39;s research and development team then decided to design one from scratch, drawing inspiration from Patterson and Ditzel&#39;s work at Berkeley on the RISC architecture: simple instructions, executed quickly, few transistors, low power consumption. The result, in 1985, was the ARM1: thirty thousand transistors, no cache, no microcode.&#xA;&#xA;The person who designed the architecture and instruction set of that ARM1 was called Sophie Wilson. Her approach is summarised in a sentence she gave in an interview with the Telegraph, and it is worth quoting:&#xA;&#xA;  We accomplished this by thinking about things very, very carefully beforehand.&#xA;&#xA;Nothing particularly sophisticated, on the face of it. But in a sector where the dominant tendency was to add instructions and complexity to increase performance, the intuition of Wilson and her colleague Steve Furber went in the opposite direction: take away instead of add, simplify instead of complicate.&#xA;&#xA;There is an episode that explains better than any technical analysis where this philosophy led. On 26 April 1985, when the first chips came back from the VLSI Technology foundry, Furber connected them to a development board and was puzzled: the ammeter in series with the power supply read zero. The processor seemed to be consuming literally nothing. The team that had designed the ARM1 numbered a handful of people - Wilson on the instruction set, Furber on microarchitecture design, a few collaborators around them - and operated with negligible resources compared to Intel or Motorola. The idea that they had just produced a processor that consumed zero was implausible.&#xA;&#xA;The explanation, as Wilson recounted in a 2012 interview with The Register, was wrong in the most embarrassing way possible:&#xA;&#xA;  The development board the chip was plugged into had a fault: there was no current being sent down the power supply lines at all. 
The processor was actually running on leakage from the logic circuits. So the low-power big thing that the ARM is most valued for today, the reason that it&#39;s on all your mobile phones, was a complete accident.&#xA;&#xA;The board was faulty, the power was not actually reaching the chip, and the processor was running on the leakage current from the logic circuits. The most important characteristic of the most widespread ARM architecture on the planet - the energy efficiency that makes it suitable for mobile devices - was discovered by mistake, on a broken board, by an engineer convinced he had a faulty measuring instrument.&#xA;&#xA;Furber, for his part, explained the dynamic in more engineering terms:&#xA;&#xA;  We applied Victorian engineering margins, and in designing to ensure it came out under a watt, we missed, and it came out under a tenth of a watt.&#xA;&#xA;The &#34;Victorian engineering margins&#34; are the generous safety margins typical of late nineteenth-century engineering - over-dimensioning every component to avoid failures. Furber and Wilson, accustomed to designing with limited resources and no margin for error, had applied the same principle to the chip design: design for consumption under a watt, and end up well below.&#xA;&#xA;  There was no magic with the low power characteristics apart from simplicity.&#xA;&#xA;No magic. Just a design done well by a small team that could not afford to get it wrong. On that accident, and on that simplicity, ARM&#39;s dominance in mobile for the next forty years would be built.&#xA;&#xA;---&#xA;&#xA;A note on Sophie Wilson&#xA;&#xA;Born in Leeds in 1957. She studied mathematics at Selwyn College, Cambridge, and as a student already worked with Hermann Hauser at Acorn - designing the Acorn System 1 even before graduating. In 1981, on commission from the BBC, she wrote BBC BASIC: a complete programming language in 16 kilobytes, so well-designed that it is still in use today on embedded systems. 
The &#34;subtract instead of add&#34; philosophy that would make ARM1 what it is was not born in 1985: it was born in the extreme memory constraints of the BBC Micro. Only later, in 1983, did Wilson begin work on the ARM1 instruction set, which she completed with Steve Furber in 1985. After Acorn she moved to Element 14, a 1999 spin-off absorbed by Broadcom in 2000. At Broadcom, where she still works as a Distinguished Engineer, she contributed to the BCM family of SoCs - including those that ended up inside the early Raspberry Pis, BCM2837 of the Pi 3 included. Recognition came late: Computer History Museum Fellow Award in 2012, Fellow of the Royal Society in 2013, Commander of the Order of the British Empire in 2019. In the 1990s she completed her gender transition, continuing to work in the sector without interruption.&#xA;&#xA;---&#xA;&#xA;In 1990, Acorn, Apple and VLSI Technology founded a separate joint venture to manage and license the architecture. The name changed from Acorn RISC Machine to Advanced RISC Machines. ARM Holdings was born as an independent company, headquartered in Cambridge, with a business model that had no precedent in the sector: it would never manufacture a single chip. It would sell the idea of the chip. Licences, royalties, IP. Anyone who wanted to build an ARM processor would have to pay them.&#xA;&#xA;It was a technical choice, but also a political one. ARM did not have the capital to build factories, did not have the infrastructure. But it had something harder to replicate: a clean, efficient architecture, designed well from the start.&#xA;&#xA;The architecture of invisible power&#xA;&#xA;ARM&#39;s business model is one of the most elegant - and least understood - in the entire technology industry. 
It works like this: ARM designs the processor architectures and licenses their use to third parties in exchange for an upfront fee (typically between one and ten million dollars) plus a royalty on every chip produced, usually around 1–2% of the final device price. Whoever buys the licence can then build their own chips based on that architecture, customising it within the limits allowed by the contract. They are not buying a product, then: they are buying the right to make one.&#xA;&#xA;Garnsey, Lorenzoni and Ferriani, in a fundamental study on the birth of ARM as a spin-off from Acorn published in Research Policy in 2008, describe this transition as an exemplary case of techno-organizational speciation: technology is not simply transferred, but is radically transformed in the passage to a new domain through a new organisational model. ARM is not Acorn that changes its name: it is a new organism, with a completely different survival logic, which carries the original DNA but adapts to an environment Acorn could never have inhabited.&#xA;&#xA;The practical result of this structure is what the industry calls neutral positioning. ARM does not compete with its customers - it does not sell chips, does not produce devices - so it can sell the same licence to Qualcomm, Apple, Samsung and MediaTek, who fight each other on the market every day. It is the &#34;Switzerland&#34; of silicon: a credible referee, a common infrastructure, a layer everyone builds on without having to trust the others. This has created an ecosystem of over a thousand licensee partners - a number impossible to reach for any traditional chip manufacturer. Furber, today professor of computer engineering at the University of Manchester, summed up the result in a way that is hard to forget:&#xA;&#xA;  I suspect there&#39;s more ARM computing power on the planet than everything else ever made put together. 
The numbers are just astronomical.&#xA;&#xA;It is not rhetoric: it is the logical consequence of a model that multiplies adoption instead of concentrating it.&#xA;&#xA;But this neutrality has a structural cost that is rarely thematised. When ARM sells a licence, it also sells dependence. Whoever builds their own SoC on ARM architecture is bound to that instruction set for the entire life of the product. Changing architecture would mean rewriting the software, recertifying the systems, redoing the chip design. The exit cost is very high. And this means that ARM, despite producing nothing, exercises enormous systemic power: it can renegotiate licence terms, raise royalties, decide who gets access to the most advanced architectures and who does not. Abstract as this dependence may sound on paper, there is a recent case that makes it very concrete - and worth following in detail, because it illustrates exactly how ARM power is exercised in the real world.&#xA;&#xA;In 2021, Qualcomm acquired for $1.4 billion a Californian startup called Nuvia, founded by three former Apple Silicon engineers - Gerard Williams III, Manu Gulati, John Bruno - who were designing a server chip called Phoenix, based on the ARM v8.7-A architecture. Nuvia had its own ALA (Architecture License Agreement) with ARM, negotiated on the terms of a small startup entering a new market. When Qualcomm bought it, it integrated the Phoenix technology into its own Oryon core, the heart of the new Snapdragon X Elite - the chip with which Qualcomm wanted to challenge Intel and AMD in the AI PC laptop market.&#xA;&#xA;The problem was contractual, not technical. Qualcomm&#39;s ALA with ARM already existed, and provided for lower royalties than Nuvia&#39;s. Qualcomm argued that the integration of Nuvia into its own chips fell under its pre-existing ALA. ARM replied that no: the acquisition required a full renegotiation from scratch - on ARM&#39;s terms, naturally. 
In 2022 ARM took Qualcomm to court asking, among other things, for the physical destruction of the pre-acquisition Nuvia designs. Not a downsizing, not a renegotiation: destruction. The message was unambiguous: IP licensing is not a sale, it is a revocable permission, and the permission is granted by whoever owns the architecture.&#xA;&#xA;The case went to trial in Wilmington, Delaware, in December 2024. The jury ruled unanimously in favour of Qualcomm on two of the three contested points, with a hung jury on the third. On 30 September 2025, Judge Maryellen Noreika issued the final ruling: full and final judgment in favour of Qualcomm and Nuvia on all fronts, also rejecting ARM&#39;s request for a new trial. The judge explicitly noted that ARM itself, in its own internal documents, admitted to having recorded historic licensing and royalty revenues after attempting to terminate Nuvia&#39;s ALA in 2022 - which, translated, means: while claiming to have been damaged by Nuvia&#39;s actions, ARM was making piles of money precisely thanks to the ecosystem built on that architecture.&#xA;&#xA;ARM has announced it will appeal. Qualcomm, for its part, has had a counter-suit open against ARM since April 2024 - accusing it of withholding technical deliverables, anti-competitive behaviour, and (in a subsequent amendment) of intending to enter the server chip market as a direct competitor. The trial, originally set for March 2026, has been postponed to October 2026 to deal with a series of pending motions - a sign that the dispute&#39;s complexity will not be untangled quickly. That is: ARM, which built everything on neutral positioning, finds itself accused in court of wanting to become a silicon producer. Aka: the Switzerland that suddenly wants an army.&#xA;&#xA;The Qualcomm/Nuvia case is important not because Qualcomm won, but because it publicly exposed the nature of the power ARM exercises. 
The real asset had never been the architecture - the architecture, bluntly, is just technical documentation. The real asset was the contract. The capacity to drag into court anyone who thinks they can use that documentation without the right permission. Langdon Winner, in his influential 1980 essay Do Artifacts Have Politics?, argued that technological choices are never neutral - they incorporate power structures, distribute access in non-random ways, create dependencies that persist long after the initial decision.&#xA;&#xA;  It is still true that, in a world in which human beings make and maintain artificial systems, nothing is &#34;required&#34; in an absolute sense. Nevertheless, once a course of action is underway, once artifacts like nuclear power plants have been built and put in operation, the kinds of reasoning that justify the adaptation of social life to technical requirements pop up as spontaneously as flowers in the spring.&#xA;&#xA;And ARM is an almost perfect case of this thesis applied to the IP economy: an architecture born of a public computer-literacy project becomes the foundation on which an invisible monopoly is built across tens of billions of devices. It is not malice. It is structure. The chip has no intentions. But the licensing structure that sits on top of it, that one does.&#xA;&#xA;A new front: the datacentre&#xA;&#xA;A digression is necessary here, because it shows where ARM is going right now - and why the Qualcomm/Nuvia case has the importance it has.&#xA;&#xA;For the first part of its history, ARM was the architecture of mobile. Servers, datacentres, enterprise computing were Intel territory: x86 dominated in an apparently unchallenged way. Things began to change in 2018, when Amazon Web Services announced the first Graviton, a custom ARM chip designed in-house by Annapurna Labs (acquired by AWS in 2015). 
The selling argument was simple and technically sound: at equivalent loads, ARM chips consumed much less energy than equivalent x86, and in a datacentre where the electricity bill is a third of operating costs, this translates directly into margin.&#xA;&#xA;Since then the trajectory has been steady and surprisingly fast. In 2023 ARM accounted for about 5% of the cloud compute of the three major hyperscalers. ARM itself, in its 2025 communications, claims that by year-end approximately half of the compute shipped to the top hyperscalers will be ARM-based - a figure to be taken with the caution due to a company talking about its own market, but consistent: for the third consecutive year, more than half of new CPU capacity added to AWS is Graviton, and 98% of the top one thousand EC2 customers use it. AWS Graviton5, announced on 4 December 2025 at re:Invent, has 192 cores in a single socket, an L3 cache five times larger than the previous generation, and is based on the Neoverse V3 ARMv9.2 cores at 3 nanometres. Google has launched Axion (based on Neoverse V2) with the claim of a 65% better price-performance compared to x86 instances. Microsoft has rolled out Cobalt 100 in 29 global regions. NVIDIA - the very same NVIDIA that had tried to buy ARM - uses ARM Neoverse cores in Grace, the CPU that accompanies its H100 and B100 GPUs for AI workloads. Spotify, Paramount+, Uber, Oracle, Salesforce have migrated infrastructure to ARM. Over a billion ARM Neoverse cores have been deployed in datacentres worldwide.&#xA;&#xA;This changes the proportions of the game. When ARM made money on smartphone royalties, we were talking about cents per chip but on billions of units. In datacentres things are different: every Graviton5 costs AWS thousands of dollars, and every server with an ARM chip on board is a more substantial royalty. The datacentre is the segment where ARM can finally start extracting value aggressively. 
And it is also the segment where licensees have most to lose: if ARM raises Apple&#39;s or Qualcomm&#39;s royalties on a phone chip, it is an annoyance; if it raises the royalties on the chip running your cloud, it is an attack on the operating margin of your business.&#xA;&#xA;It is easier to understand, in this light, why Qualcomm fought the Nuvia case with such determination. And why - as we will see shortly - it is looking for an architectural way out.&#xA;&#xA;The failed coup&#xA;&#xA;November 2020. Jensen Huang, NVIDIA&#39;s CEO, announces the acquisition of ARM from SoftBank for $40 billion. It would have been the largest operation in semiconductor history. It did not go through, and understanding why helps to see how systemic ARM&#39;s position in the industry was - and still is.&#xA;&#xA;Hermann Hauser, the Austrian from Cambridge who had founded Acorn, the company from which ARM was born, had reacted to the SoftBank acquisition back in July 2016 with a public statement on Twitter that left no room for interpretation:&#xA;&#xA;  ARM is the proudest achievement of my life. The proposed sale to SoftBank is a sad day for me and for technology in Britain.&#xA;&#xA;When, four years later, NVIDIA announced its intention to buy ARM from SoftBank, Hauser&#39;s reaction was even sharper. In an interview with the BBC he explained the structural problem with a clarity that regulatory documents rarely achieve:&#xA;&#xA;  It&#39;s one of the fundamental assumptions of the ARM business model that it can sell to everybody. The one saving grace about Softbank was that it wasn&#39;t a chip company, and retained ARM neutrality. 
If it becomes part of Nvidia, most of the licensees are competitors of Nvidia, and will of course then look for an alternative to ARM.&#xA;&#xA;And in his written testimony submitted to the British Parliament he added, with the freedom of someone who had nothing left to lose:&#xA;&#xA;  I have no shares or other interest in ARM as I had to sell them all to Softbank. I can therefore freely speak my mind.&#xA;&#xA;Hauser was right. NVIDIA, in 2020, was already dominant in artificial intelligence through its GPUs. Buying ARM would have meant getting early access to new designs ahead of competitors, the ability to slow or deny licences to rivals, and benefiting freely from the architecture while others continued paying royalties. Qualcomm, Microsoft and Google publicly opposed the deal. The American FTC opened an antitrust proceeding. The European Commission launched an investigation. Britain opened its own. China raised a red flag. In February 2022, the deal was formally cancelled for significant regulatory challenges.&#xA;&#xA;There is another Hauser statement worth quoting. In a 2022 interview with UKTN, he called British politicians «technologically illiterate» and «the root cause» of the governance problems around ARM. He argued that the government should have taken a golden share in ARM long before, and that any attempt to do so in 2022 was «trying to close the gate after the horse has bolted». An architecture born with public money and a public mandate had become a pawn in the power game between SoftBank, NVIDIA and the NASDAQ - because no one had thought, at the appropriate moment, that it was worth keeping it in public territory.&#xA;&#xA;The end of the story: SoftBank took ARM public in September 2023, in what was the largest IPO of the year. ARM Holdings is today listed on NASDAQ with a market capitalisation of around $150 billion. Masayoshi Son is still the controlling shareholder. 
The fact that the acquisition attempt by the world&#39;s largest AI chip producer was blocked by regulators does not eliminate the problem - it shifts it. ARM is independent, but it is a very particular form of independence: that of a systemic infrastructure in the hands of financial investors, subject to stock-market logic, obliged to grow revenues every quarter. The uncomfortable question is: what happens when the needs of a commons architecture - stable, predictable, accessible, neutral - conflict with the needs of a publicly listed company that has to raise royalties to satisfy shareholders? It is not a theoretical question. ARM has systematically increased its licence fees in recent years. And the major licensees have started looking for alternatives.&#xA;&#xA;The half-democratisation&#xA;&#xA;We have to give ARM what ARM deserves, before continuing with the critique. And what it deserves is considerable.&#xA;&#xA;The Raspberry Pi - version 3 in 2017, version 5 today - costs less than eighty euros for the most recent version. It is a complete computer, capable of running Linux, a server, a media centre, a network node. It exists because the ARM architecture has made it possible to produce powerful and very low-power SoCs at costs that x86 processors cannot get close to. The same principle applies to the billion-plus smartphones in the hands of people in countries where a desktop PC would be an inaccessible luxury. To the microcontrollers controlling IoT sensors at a few cents each. To the embedded processors in medical devices, industrial control systems, critical infrastructure. ARM has materially lowered the cost of access to computational hardware on a global scale.&#xA;&#xA;Wilson herself, looking back on the whole story, framed it with a lucidity that almost sounds like a warning:&#xA;&#xA;  To build something new and complicated, it&#39;s not the sort of quick thing, it&#39;s a sustained effort over a long period of time. 
It takes many people&#39;s different inputs to make something unique and novel. Overnight success takes 30 years.&#xA;&#xA;Thirty years of invisible work, of architectures refined chip by chip, of licences negotiated one at a time, before the world noticed that ARM was everywhere.&#xA;&#xA;The &#34;democratisation&#34; effected by ARM is real but structurally asymmetric. It has democratised access to hardware for device manufacturers - anyone can build an ARM chip by paying the licence - but not necessarily for the end users of those devices. An iPhone - or an Android phone - has an ARM chip designed by a company, but the end user has no access to the chip&#39;s architecture, no possibility to modify it, no transparency on what runs at that level. The chip is ARM, the device is a closed box. This is the final contradiction: you may have the right - or almost - to manage the software running on an ARM chip, but below the kernel, below the bootloader, there is a chip whose architecture was defined in Cambridge, produced in Taiwan, integrated into a SoC designed by Broadcom, over which you can have no control. Sovereignty ends exactly where silicon begins. Those who really benefited are the oligopoly of large licensees - Apple, Qualcomm, Samsung, NVIDIA, Amazon with its Gravitons - not the small Bangalore startup with an idea for a specialised chip.&#xA;&#xA;And yet - and here the story gets complicated, in an interesting way - within the narrow space the ARM licensing model concedes, someone is nevertheless trying to pull the lever of openness at the levels available. In December 2024, a Shenzhen company called Radxa announced the Radxa Orion O6, presented as the &#34;World&#39;s First Open Source Arm V9 Motherboard&#34;. 
It is a Mini-ITX board at $200 in the base version, based on the Cix CD8180 SoC - an ARMv9.2 chip with 12 cores (four Cortex-A720 at 2.8 GHz, four at 2.4 GHz, four Cortex-A520 at 1.8 GHz) produced by Cix Technology, a Chinese fabless founded in 2021. Debian 12, Fedora and Ubuntu run natively on it, with UEFI EDKII and SystemReady SR certification. The first Geekbench benchmarks put it at the level of an Apple M1 in single-core - not bad for an ARM board at less than a tenth of the price of a Mac mini.&#xA;&#xA;Note: it is worth clarifying what &#34;open source&#34; means here, because it means different things at different levels. The ARMv9.2 instruction set on which the CD8180 is built is not open: Cix pays regular royalties to ARM Holdings like all other licensees. The SoC itself is not open: it is a proprietary chip, with the NPU microcode and Mali GPU blocks all closed. What is open is the layer immediately above: board schematics, Board Support Package, EDKII bootloader, Linux kernel, device tree - all published under free licences, replicable, modifiable.&#xA;&#xA;It is also a concrete demonstration of what the open hardware movement has been arguing for twenty years: openness is layered, and opening one more layer than was open before is already a political act, even if the foundation underneath remains closed. The fact that this board comes from China - like the RISC-V pivot we will discuss shortly - is no accident: it is consistent with a geopolitical trajectory that seeks margins of technological sovereignty wherever it is possible to extract them.&#xA;&#xA;The Linux moment for hardware&#xA;&#xA;And here RISC-V comes onstage. And the story gets more interesting.&#xA;&#xA;RISC-V was born in 2010 at the University of California Berkeley, in the same department that had helped inspire the original RISC architecture thirty years earlier. 
Krste Asanović and his collaborators needed a clean processor architecture for research, without having to pay licences or ask permission. They decided to design one from scratch, and to make it completely open: no royalties, no licences, no intellectual property to respect. The RISC-V instruction set is an open standard, freely published, that anyone can implement, modify, distribute.&#xA;&#xA;For ten years RISC-V was an academic experiment, then a nucleus of embedded adoption, then an interesting alternative for those who wanted custom chips without paying ARM. In the last two or three years the proportions have changed. The SHD Group, a market analysis firm that has been monitoring the RISC-V sector since 2019, announced at the November 2025 RISC-V Summit that the technology&#39;s market penetration had exceeded 25% - an important symbolic threshold, even if it is to be taken with some caution. The same RISC-V International annual report for 2025 admits it is not entirely clear whether the 25% refers to the global microprocessor market in the strict sense or only to the segments where RISC-V already has a significant presence (embedded, IoT, microcontrollers). The SHD projection for 2031 is 33.7%. However it is measured, the trajectory is that of an architecture that is no longer a niche: it is the third pillar of computing, alongside x86 and ARM.&#xA;&#xA;The strength of RISC-V is not just technical - it is political in the most precise sense of the term. Some examples:&#xA;&#xA;The Chinese front. China has very concrete reasons not to want to depend on ARM, a company listed in New York with American shareholders. Under increasingly stringent US sanctions on advanced Intel/AMD chips, China has pivoted en masse to RISC-V - also because the RISC-V International consortium was strategically moved from Delaware to Switzerland in March 2020, formally placing it beyond the reach of unilateral American export controls. 
Alibaba, through its T-Head division, has released the XuanTie C920 chips and successors. Smaller Chinese manufacturers are flooding the mid-market with RISC-V AI accelerators that cost significantly less than the equivalent Western ones under sanction. It is an architectural decoupling, not just a commercial one.&#xA;&#xA;The European front. The European Union, through the EU Chips Act, funds the Project DARE consortium (Digital Autonomy with RISC-V in Europe) with the explicit goal of reducing European dependence on American and British technology in critical infrastructure. Quintauris, a joint venture founded in December 2023 by Bosch, Infineon, Nordic Semiconductor, NXP and Qualcomm (with STMicroelectronics joining as a sixth shareholder in 2024), developed in 2025 RT-Europa, the first RISC-V platform for real-time automotive controllers - a sector where dependence on foreign IP had become strategically intolerable.&#xA;&#xA;The Qualcomm front. In December 2025, while the Nuvia case closed yet another chapter against ARM, Qualcomm acquired Ventana Micro Systems, one of the most advanced companies in the development of high-performance RISC-V cores. Literally: not only was Qualcomm fighting ARM in court, it was also buying the way to no longer need ARM. It is the most significant move in all the recent history, because for the first time one of the major ARM licensees equips itself with a credible architectural plan B.&#xA;&#xA;Three different fronts, one same direction. The parallel with Linux is more than metaphorical. Linux did not kill Windows or macOS. But it did create a real alternative that changed the terms of power in the software industry. RISC-V aspires to do the same thing for hardware. And the critical point - the one Winner would have appreciated - is that this openness is built into the architecture itself, not guaranteed by a company&#39;s good will. You cannot buy RISC-V and &#34;close it&#34;. The instruction set is public by definition. 
You can build proprietary implementations on top of it - and many companies are doing that - but the foundation remains accessible.&#xA;&#xA;And here is the question: will RISC-V be incorporated by capitalism exactly as Linux was? The honest answer is: probably yes, and in part it already has been. The major RISC-V implementations by Apple, Google and Meta are not open source - they use the open instruction set to build proprietary architectures. The fact that the foundation is free does not mean that everything built on top of it is. The same logic Boltanski and Chiapello described applies: critique is not defeated, it is incorporated. But at least the foundation remains open. And that counts.&#xA;&#xA;Conclusions - or questions, if you prefer&#xA;&#xA;ARM is born of a public mandate and a democratisation project, and becomes the foundation of a private oligopoly. The chip is the same; the power structure on top of it is radically different from the one that produced it. And that chip really did lower the entry barriers for hardware producers - it produced the Raspberry Pi, the cheap phones, the microcontrollers everywhere, the more efficient datacentres - but the democratisation stopped at the gates of the production chain. The end users of those devices gained no real sovereignty over the silicon they hold in their pocket.&#xA;&#xA;NVIDIA&#39;s attempt to acquire ARM was blocked by regulators, but only because it would have concentrated power too visibly. The systemic power ARM already exercises - silently, through licences and royalties, through legal cases against those trying to step out of contractual terms - disturbs no regulator, generates no headlines, produces no parliamentary hearings. It is the kind of power that makes itself invisible precisely because it is structural: it does not lie in a decision, it lies in the conditions within which decisions are made.&#xA;&#xA;There is also a contradiction that concerns me personally. 
That Raspberry Pi I had on the table - and all the ARM chips in the phones I have hacked for years - were already, in some sense, part of a system I did not control. I changed the software on top. I did not change the power structure underneath (one could make the same argument about Intel, needless to say…). Digital sovereignty ends exactly where silicon begins, and pretending otherwise would be dishonest.&#xA;&#xA;RISC-V opens a real crack. Not a revolution - a crack. The possibility that the foundation of computing might be a commons, instead of private property subject to corporate decisions and legal battles. It does not solve the problem of closed hardware, it does not solve the problem of oligopolistic foundries, it does not solve any of the contradictions described. But at least it does not aggravate them. It is the same logic as that of the open hardware movement, which for twenty years has been trying to apply to silicon what free software has applied to code - with more modest results, because the physical layer is structurally more hostile to the commons: if you cannot open it, you do not really own it. And in a sector where every layer of the technology stack has been systematically fenced off, keeping the foundation open is a political act, not just a technical one.&#xA;&#xA;What stays with me is a feeling familiar to anyone who has spent time thinking about computing as political territory. Technological choices incorporate power structures. Power structures persist long after the original choices have been forgotten. And whoever controls the basic infrastructure - the instruction set, the architecture, the licences - controls something much more important than a company: they control the rules of the game on which everything else is built.&#xA;&#xA;The question I leave open is: in whose favour were these rules written? 
And by what right do they continue to apply?&#xA;&#xA;---&#xA;&#xA;Sources and further reading&#xA;&#xA;On the history of ARM and its origins&#xA;&#xA;Garnsey, E., Lorenzoni, G., Ferriani, S. (2008). &#34;Speciation through entrepreneurial spin-off: The Acorn-ARM story&#34;. Research Policy, 37(2): 210-224. doi: 10.1016/j.respol.2007.11.006. The most in-depth academic study on the origin of ARM as a spin-off from Acorn and on the genesis of its IP licensing-based business model. https://www.sciencedirect.com/science/article/abs/pii/S0048733307002363&#xA;Patterson, D., Ditzel, D. (1980). &#34;The Case for the Reduced Instruction Set Computer&#34;. ACM SIGARCH Computer Architecture News, 8(6): 25-33. The founding paper of the RISC architecture at Berkeley, which inspired the ARM project. https://dl.acm.org/doi/10.1145/641914.641917&#xA;&#xA;On the IP licensing business model&#xA;&#xA;Ferriani, S., Garnsey, E., Lorenzoni, G., Massa, L. (2015). &#34;ARM plc and the IP Business Model&#34;. Working Paper, Centre for Technology Management, University of Cambridge. https://www.ifm.eng.cam.ac.uk/uploads/Research/CTM/workingpaper/2015-02-Ferriani-Garnsey-Lorenzoni-Massa.pdf&#xA;Grindley, P. C., Teece, D. J. (1997). &#34;Managing Intellectual Capital: Licensing and Cross-Licensing in Semiconductors and Electronics&#34;. California Management Review, 39(2): 8-41.&#xA;&#xA;On power in technological choices&#xA;&#xA;Winner, L. (1980). &#34;Do Artifacts Have Politics?&#34;. Daedalus, 109(1): 121-136. https://www.cc.gatech.edu/~beki/cs4001/Winner.pdf&#xA;Boltanski, L., Chiapello, È. (1999). Le nouvel esprit du capitalisme. Gallimard. (English transl. The New Spirit of Capitalism, Verso, 2005). https://www.jstor.org/stable/4201214&#xA;&#xA;On the Qualcomm/Nuvia case&#xA;&#xA;Paul, Weiss (2025). &#34;Qualcomm Wins Decisive Post-Trial Victory in High-Profile Licensing Dispute Against Arm&#34;. 
https://www.paulweiss.com/insights/client-news/qualcomm-wins-decisive-post-trial-victory-in-high-profile-licensing-dispute-against-arm. Press release of the law firm that represented Qualcomm, with summary of the 30 September 2025 ruling.&#xA;The Register (2025). &#34;Judge dismisses Arm&#39;s last legal claim against Qualcomm&#34;. https://www.theregister.com/2025/10/01/armslastlegalclaimagainst/&#xA;Computerworld (2025). &#34;Arm&#39;s high-stakes licensing suit against Qualcomm ends in mistrial, but Qualcomm prevails in key areas&#34;. https://www.computerworld.com/article/3629812/&#xA;&#xA;On the NVIDIA acquisition attempt and geopolitical implications&#xA;&#xA;U.S. Federal Trade Commission (2021). Complaint in the Matter of NVIDIA Corporation and Arm Limited. https://www.ftc.gov/legal-library/browse/cases-proceedings/2110081-nvidia-corporationarm-limited&#xA;Hauser, H. (2020). Written evidence submitted to the UK Parliament Business, Energy and Industrial Strategy Committee on the proposed acquisition of ARM by NVIDIA. Document BFA0018. https://committees.parliament.uk/writtenevidence/12711/pdf/&#xA;Hauser, H. (2022). Interview with UKTN: &#34;UK left it too late to take golden share in Arm&#34;. https://www.uktech.news/news/government-and-policy/hermann-hauser-arm-golden-share-20220623&#xA;&#xA;On Sophie Wilson, Steve Furber and the origin of ARM1&#xA;&#xA;Wilson, S. (2012). Interview with The Register: &#34;ARM creators Sophie Wilson and Steve Furber&#34;. https://www.theregister.com/2012/05/03/unsungheroesoftecharmcreatorssophiewilsonandstevefurber/. Contains Wilson&#39;s statement on low power as a complete accident.&#xA;Furber, S. (2010). Interview with ACM Queue: &#34;A Conversation with Steve Furber&#34;. https://queue.acm.org/detail.cfm?id=1716385. Contains the statement on Victorian engineering margins.&#xA;Furber, S. (2011). Interview with Communications of the ACM. https://cacm.acm.org/news/an-interview-with-steve-furber/. 
Contains the assessment on total ARM computing power on the planet.&#xA;Furber, S. (2017). &#34;ARM: The architecture that conquered mobile computing&#34;. Philosophical Transactions of the Royal Society A, 375(2104). doi: 10.1098/rsta.2017.0148.&#xA;Computer History Museum (2012). Fellow Award citation for Sophie Wilson and Steve Furber. https://computerhistory.org/chm-fellows/sophie-wilson/&#xA;&#xA;On ARM in datacentres&#xA;&#xA;Arm Holdings (2025). &#34;Half of the Compute Shipped to Top Hyperscalers in 2025 will be Arm-based&#34;. Arm Newsroom. https://newsroom.arm.com/blog/half-of-compute-shipped-to-top-hyperscalers-in-2025-will-be-arm-based&#xA;Arm Holdings (2025). &#34;How Arm is redefining compute through the converged AI data center&#34;. Arm Newsroom. https://newsroom.arm.com/blog/arm-converged-ai-data-center-aws-graviton5&#xA;Omdia (2026). &#34;Arm Steps Deeper into Silicon: Implications for the Semiconductor Value Chain&#34;. https://omdia.tech.informa.com&#xA;&#xA;On the democratisation of access to computing&#xA;&#xA;Benkler, Y. (2006). The Wealth of Networks: How Social Production Transforms Markets and Freedom. Yale University Press. http://www.benkler.org/BenklerWealthOfNetworks.pdf&#xA;Söderberg, J. (2008). Hacking Capitalism: The Free and Open Source Software Movement. Routledge. https://downloads.gvsig.org/download/people/vagazzi/Hacking%20Capitalism.pdf&#xA;&#xA;On RISC-V and architectural sovereignty&#xA;&#xA;RISC-V International (2024). RISC-V Ratified Specifications. https://riscv.org/technical/specifications/&#xA;RISC-V International (2026). Annual Report 2025. https://riscv.org/wp-content/uploads/2026/01/RISC-V-Annual-Report-2025.pdf. The official RISC-V International annual report, with the SHD Group estimate on market penetration (33.7% projected by 2031, 25% threshold reached in 2025 in some segments).&#xA;Waterman, A., Asanović, K. (eds.) (2019). The RISC-V Instruction Set Manual. UC Berkeley Technical Report UCB/EECS-2019-103. 
https://riscv.org/wp-content/uploads/2019/12/riscv-spec-20191213.pdf&#xA;Asanović, K., Patterson, D. A. (2014). &#34;Instruction Sets Should Be Free: The Case for RISC-V&#34;. EECS Department, University of California, Berkeley, Tech. Rep. UCB/EECS-2014-146.&#xA;Center for Security and Emerging Technology (2025). &#34;RISC-V: What it is and Why it Matters&#34;. https://cset.georgetown.edu/article/risc-v-what-it-is-and-why-it-matters/. On the incorporation of RISC-V International in Switzerland in March 2020 and the geopolitical implications.&#xA;Jamestown Foundation (2025). &#34;Examining China&#39;s Grand Strategy For RISC-V&#34;. https://jamestown.org/program/examining-chinas-grand-strategy-for-risc-v/&#xA;The Register (2025). &#34;Qualcomm takes RISC on Arm alternative with Ventana buy&#34;. https://www.theregister.com/2025/12/10/qualcommriscvarm_ventana/. On the acquisition of Ventana Micro Systems by Qualcomm on 10 December 2025.&#xA;Quintauris GmbH (2023). &#34;Five Leading Semiconductor Industry Players Incorporate New Company, Quintauris, to Drive RISC-V Ecosystem Forward&#34;. Press release, 22 December 2023. https://www.quintauris.com&#xA;&#xA;&lt;a href=&#34;https://remark.as/p/jolek78/arm-the-chip-we-didnt-know-we-needed&#34;&gt;Discuss...&lt;/a&gt;&#xA;&#xA;#ARM #RISCV #Semiconductors #OpenHardware #SophieWilson #DigitalSovereignty #IPLicensing #Computing #SolarPunk #FOSS #Writing&#xA;&#xA;&lt;div class=&#34;center&#34;&gt;&#xD;&#xA;· 🦣 &lt;a href=&#34;https://fosstodon.org/@jolek78&#34;&gt;Mastodon&lt;/a&gt; · 📸 &lt;a href=&#34;https://pixelfed.social/jolek78&#34;&gt;Pixelfed&lt;/a&gt; ·  📬 &lt;a href=&#34;mailto:jolek78@jolek78.dev&#34;&gt;Email&lt;/a&gt; ·&#xD;&#xA;· ☕ &lt;a href=&#34;https://liberapay.com/jolek78&#34;&gt;Support this work on Liberapay&lt;/a&gt;&#xD;&#xA;&lt;/div&gt;]]&gt;</description>
<content:encoded><![CDATA[<p>There are architectures you see and architectures you don&#39;t. <strong>ARM</strong> is the most extreme case of the second category: it runs in the phone in our pocket, in the home router, in the eighty-euro board that serves as a home server for millions of tinkerers, in the datacentres of <strong>Amazon</strong> and <strong>Google</strong>. It is everywhere, and almost nobody knows what it is. It took me years to bring it into focus, too, and the occasion, many years ago, was a <strong>Raspberry Pi 3</strong> that I had decided to turn into a Nextcloud server – the first brick of what would become my small homelab. It was a line in <strong>/boot/config.txt</strong> that made me notice the thing: the Pi&#39;s processor, a <strong>Broadcom BCM2837</strong>, used the same architecture as the <strong>Android</strong> phones I had hacked for years. ARM. Same instruction set, same underlying logic, same family.</p>
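That kind of check is easy to reproduce on any Linux machine. A minimal sketch in Python – the strings in the comments are illustrative, since the actual output depends on the hardware it runs on:

```python
import platform

# Architecture string reported by the kernel: 'aarch64' on a 64-bit Pi,
# 'armv7l' on 32-bit Raspberry Pi OS, 'x86_64' on a typical PC.
print(platform.machine())

# On Linux, /proc/cpuinfo carries the CPU/SoC details; on a Raspberry Pi
# a 'Hardware' or 'Model' line names the Broadcom part. Guarded, because
# the file only exists on Linux.
try:
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith(("Hardware", "Model", "model name")):
                print(line.strip())
                break
except FileNotFoundError:
    pass
```

The same information is what `uname -m` and a glance at `/proc/cpuinfo` give you from a shell.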



<h2 id="a-room-in-cambridge-a-government-project-and-a-woman">A room in Cambridge, a government project, and a woman</h2>

<p>The story of ARM does not begin in a Silicon Valley garage. It begins in Cambridge, in 1983, in a small company called <strong>Acorn Computers</strong>, on a commission from the <strong>BBC</strong>.</p>

<p>The context matters, because it changes the whole flavour of the story. The British government had decided to launch a national computer literacy programme – the BBC Computer Literacy Project – and needed a machine that could go into schools. Acorn won the tender with the <strong>BBC Micro</strong>, a cheap and robust computer that would introduce an entire generation of Britons to programming. It was the first time a state systematically funded popular access to computing. Not a startup with a venture-capital pitch: a public project, with public money, for an explicitly democratising goal.</p>

<p>But the BBC Micro was not enough. Acorn needed something more powerful for the next step, and the processors available on the market – 6502, Z80, the early Intel offerings – were either too slow, too complex, or too expensive. Acorn&#39;s research and development team then decided to design one from scratch, drawing inspiration from Patterson and Ditzel&#39;s work at Berkeley on the <strong>RISC</strong> architecture: simple instructions, executed quickly, few transistors, low power consumption. The result, in 1985, was the ARM1: around twenty-five thousand transistors, no cache, no microcode.</p>

<p>The person who designed the architecture and instruction set of that ARM1 was called Sophie Wilson. Her approach is summarised in a sentence she gave in an interview with the Telegraph, and it is worth quoting:</p>

<blockquote><p>We accomplished this by thinking about things very, very carefully beforehand.</p></blockquote>

<p>Nothing particularly sophisticated, on the face of it. But in a sector where the dominant tendency was to add instructions and complexity to increase performance, the intuition of Wilson and her colleague Steve Furber went in the opposite direction: take away instead of add, simplify instead of complicate.</p>

<p>There is an episode that explains better than any technical analysis where this philosophy led. On 26 April 1985, when the first chips came back from the <strong>VLSI Technology</strong> foundry, Furber connected them to a development board and was puzzled: the ammeter in series with the power supply read zero. The processor seemed to be consuming literally nothing. The team that had designed the ARM1 numbered a handful of people – Wilson on the instruction set, Furber on microarchitecture design, a few collaborators around them – and operated with negligible resources compared to Intel or Motorola. The idea that they had just produced a processor that consumed zero was implausible.</p>

<p>The explanation, as Wilson recounted in a 2012 interview with The Register, was as embarrassing as it was instructive:</p>

<blockquote><p>The development board the chip was plugged into had a fault: there was no current being sent down the power supply lines at all. The processor was actually running on leakage from the logic circuits. So the low-power big thing that the ARM is most valued for today, the reason that it&#39;s on all your mobile phones, was a complete accident.</p></blockquote>

<p>The board was faulty, the power was not actually reaching the chip, and the processor was running on the leakage current from the logic circuits. The most important characteristic of the most widespread ARM architecture on the planet – the energy efficiency that makes it suitable for mobile devices – was discovered by mistake, on a broken board, by an engineer convinced he had a faulty measuring instrument.</p>

<p>Furber, for his part, explained the dynamic in more engineering terms:</p>

<blockquote><p>We applied Victorian engineering margins, and in designing to ensure it came out under a watt, we missed, and it came out under a tenth of a watt.</p></blockquote>

<p>The “Victorian engineering margins” are the generous safety margins typical of late nineteenth-century engineering – over-dimensioning every component to avoid failures. Furber and Wilson, accustomed to designing with limited resources and no margin for error, had applied the same principle to the chip design: design for consumption under a watt, and end up well below.</p>

<blockquote><p>There was no magic with the low power characteristics apart from simplicity.</p></blockquote>

<p>No magic. Just a design done well by a small team that could not afford to get it wrong. On that accident, and on that simplicity, ARM&#39;s dominance in mobile for the next forty years would be built.</p>

<hr/>

<p><strong><em>A note on Sophie Wilson</em></strong></p>

<p><em>Born in Leeds in 1957. She studied mathematics at Selwyn College, Cambridge, and as a student already worked with Hermann Hauser at Acorn – designing the Acorn System 1 even before graduating. In 1981, on commission from the BBC, she wrote BBC BASIC: a complete programming language in 16 kilobytes, so well-designed that it is still in use today on embedded systems. The “subtract instead of add” philosophy that would make ARM1 what it is was not born in 1985: it was born in the extreme memory constraints of the BBC Micro. Only later, in 1983, did Wilson begin work on the ARM1 instruction set, which she completed with Steve Furber in 1985. After Acorn she moved to Element 14, a 1999 spin-off absorbed by Broadcom in 2000. At Broadcom, where she still works as a Distinguished Engineer, she contributed to the BCM family of SoCs – including those that ended up inside the early Raspberry Pis, BCM2837 of the Pi 3 included. Recognition came late: Computer History Museum Fellow Award in 2012, Fellow of the Royal Society in 2013, Commander of the Order of the British Empire in 2019. In the 1990s she completed her <a href="https://web.archive.org/web/20200810221447/https://www.beyondpositive.org/2012/05/09/you-are-beautiful-and-dont-you-forget-it-a-word-about-acceptance/">gender transition</a>, continuing to work in the sector without interruption.</em></p>

<hr/>

<p>In 1990, Acorn, Apple and <strong>VLSI Technology</strong> founded a separate joint venture to manage and license the architecture. The name changed from Acorn <strong>RISC Machine</strong> to <strong>Advanced RISC Machines</strong>. ARM Holdings was born as an independent company, headquartered in Cambridge, with a business model that had no precedent in the sector: it would never manufacture a single chip. It would sell the idea of the chip. Licences, royalties, IP. Anyone who wanted to build an ARM processor would have to pay them.</p>

<p>It was a technical choice, but also a political one. ARM did not have the capital to build factories, did not have the infrastructure. But it had something harder to replicate: a clean, efficient architecture, designed well from the start.</p>

<h2 id="the-architecture-of-invisible-power">The architecture of invisible power</h2>

<p>ARM&#39;s business model is one of the most elegant – and least understood – in the entire technology industry. It works like this: ARM designs the processor architectures and licenses their use to third parties in exchange for an upfront fee (typically between one and ten million dollars) plus a royalty on every chip produced, usually around 1–2% of the final device price. Whoever buys the licence can then build their own chips based on that architecture, customising it within the limits allowed by the contract. They are not buying a product, then: they are buying the right to make one.</p>
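The arithmetic of that model is worth making concrete. A minimal sketch, using the fee and royalty ranges quoted above; the device price, unit volume and function name are hypothetical:

```python
# Back-of-the-envelope cost of an ARM architecture licence to a licensee.
# The upfront-fee and royalty ranges come from the text; the device price
# and unit volume are invented for illustration.

def licence_cost(upfront_fee, royalty_rate, device_price, units):
    """Total paid to the licensor: a one-off fee plus a per-device royalty."""
    return upfront_fee + royalty_rate * device_price * units

# A hypothetical mid-range phone maker: $5M upfront, 1.5% royalty on a
# $300 device, 10 million units over the product's life.
total = licence_cost(5_000_000, 0.015, 300, 10_000_000)
print(f"${total:,.0f}")  # $50,000,000
```

At volume, the recurring royalty dwarfs the one-off fee – which is the structural reason the per-chip price of a market segment matters so much to ARM.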

<p>Garnsey, Lorenzoni and Ferriani, in a fundamental study on the birth of ARM as a spin-off from Acorn published in Research Policy in 2008, describe this transition as an exemplary case of <em>techno-organizational speciation</em>: technology is not simply transferred, but is radically transformed in the passage to a new domain through a new organisational model. ARM is not Acorn that changes its name: it is a new organism, with a completely different survival logic, which carries the original DNA but adapts to an environment Acorn could never have inhabited.</p>

<p>The practical result of this structure is what the industry calls neutral positioning. ARM does not compete with its customers – it does not sell chips, does not produce devices – so it can sell the same licence to <strong>Qualcomm</strong>, <strong>Apple</strong>, <strong>Samsung</strong> and <strong>MediaTek</strong>, who fight each other on the market every day. It is the “Switzerland” of silicon: a credible referee, a common infrastructure, a layer everyone builds on without having to trust the others. This has created an ecosystem of over a thousand licensee partners – a number impossible to reach for any traditional chip manufacturer. Furber, today professor of computer engineering at the University of Manchester, summed up the result in a way that is hard to forget:</p>

<blockquote><p>I suspect there&#39;s more ARM computing power on the planet than everything else ever made put together. The numbers are just astronomical.</p></blockquote>

<p>It is not rhetoric: it is the logical consequence of a model that multiplies adoption instead of concentrating it.</p>

<p>But this neutrality has a structural cost that is rarely made explicit. When ARM sells a licence, it also sells dependence. Whoever builds their own <strong>SoC</strong> on ARM architecture is bound to that instruction set for the entire life of the product. Changing architecture would mean rewriting the software, recertifying the systems, redoing the chip design. The exit cost is very high. And this means that ARM, despite producing nothing, exercises enormous systemic power: it can renegotiate licence terms, raise royalties, decide who gets access to the most advanced architectures and who does not. Abstract as this dependence may sound on paper, there is a recent case that makes it very concrete – and worth following in detail, because it illustrates exactly how ARM&#39;s power is exercised in the real world.</p>

<p>In 2021, <strong>Qualcomm</strong> paid $1.4 billion for a Californian startup called Nuvia, founded by three former Apple Silicon engineers – Gerard Williams III, Manu Gulati, John Bruno – who were designing a server chip called Phoenix, based on the <strong>ARM v8.7-A</strong> architecture. Nuvia had its own ALA (Architecture License Agreement) with ARM, negotiated on the terms of a small startup entering a new market. When Qualcomm bought it, it integrated the Phoenix technology into its own Oryon core, the heart of the new <strong>Snapdragon X Elite</strong> – the chip with which Qualcomm wanted to challenge Intel and <strong>AMD</strong> in the AI PC laptop market.</p>

<p>The problem was contractual, not technical. Qualcomm&#39;s ALA with ARM already existed, and provided for lower royalties than Nuvia&#39;s. Qualcomm argued that the integration of <strong>Nuvia</strong> into its own chips fell under its pre-existing ALA. ARM replied no: the acquisition required a full renegotiation from scratch – on ARM&#39;s terms, naturally. In 2022 ARM took Qualcomm to court asking, among other things, for the physical destruction of the pre-acquisition Nuvia designs. Not a downsizing, not a renegotiation: destruction. The message was unambiguous: IP licensing is not a sale, it is a revocable permission, and the permission is granted by whoever owns the architecture.</p>

<p>The case went to trial in Wilmington, Delaware, in December 2024. The jury ruled unanimously in favour of Qualcomm on two of the three contested points, with a hung jury on the third. On 30 September 2025, Judge Maryellen Noreika issued the final ruling: full and final judgment in favour of Qualcomm and Nuvia on all fronts, also rejecting ARM&#39;s request for a new trial. The judge explicitly noted that ARM itself, in its own internal documents, admitted to having recorded record-high licensing and royalty revenues after attempting to terminate Nuvia&#39;s ALA in 2022 – which, translated, means: while claiming to have been damaged by Nuvia&#39;s actions, ARM was making piles of money precisely thanks to the ecosystem built on that architecture.</p>

<p><strong>ARM</strong> has announced it will appeal. <strong>Qualcomm</strong>, for its part, has had a counter-suit open against ARM since April 2024 – accusing it of withholding technical deliverables, of anti-competitive behaviour, and (in a subsequent amendment) of intending to enter the server chip market as a direct competitor. The trial, originally set for March 2026, has been postponed to October 2026 to deal with a series of pending motions – a sign that the complexity of the dispute will not be worked through quickly. That is: ARM, which built everything on neutral positioning, finds itself accused in court of wanting to become a silicon producer. Aka: <em>the Switzerland that suddenly wants an army</em>.</p>

<p>The Qualcomm/Nuvia case is important not because Qualcomm won, but because it publicly exposed the nature of the power ARM exercises. The real asset had never been the architecture – the architecture is, in the end, just technical documentation. The real asset was the contract. The capacity to drag into court anyone who thinks they can use that documentation without the right permission. Langdon Winner, in his influential 1980 essay <em>Do Artifacts Have Politics?</em>, argued that technological choices are never neutral – they incorporate power structures, distribute access in non-random ways, create dependencies that persist long after the initial decision.</p>

<blockquote><p>It is still true that, in a world in which human beings make and maintain artificial systems, nothing is “required” in an absolute sense. Nevertheless, once a course of action is underway, once artifacts like nuclear power plants have been built and put in operation, the kinds of reasoning that justify the adaptation of social life to technical requirements pop up as spontaneously as flowers in the spring.</p></blockquote>

<p>And ARM is an almost perfect case of this thesis applied to the IP economy: an architecture born of a public computer-literacy project becomes the foundation on which an invisible monopoly is built across tens of billions of devices. It is not malice. It is structure. The chip has no intentions. But the licensing structure that sits on top of it, that one does.</p>

<h2 id="a-new-front-the-datacentre">A new front: the datacentre</h2>

<p>An aside is necessary here, because it shows where ARM is going right now – and why the Qualcomm/Nuvia case matters as much as it does.</p>

<p>For the first part of its history, ARM was the architecture of mobile. Servers, datacentres, enterprise computing were Intel territory: x86 dominance seemed unchallengeable. Things began to change in 2018, when <strong>Amazon Web Services</strong> announced the first <strong>Graviton</strong>, a custom ARM chip designed in-house by <strong>Annapurna Labs</strong> (acquired by AWS in 2015). The sales pitch was simple and technically sound: at equivalent loads, ARM chips consumed far less energy than their x86 counterparts, and in a datacentre where the electricity bill is a third of operating costs, that translates directly into margin.</p>

<p>Since then the trajectory has been steady and surprisingly fast. In 2023 ARM accounted for about 5% of the cloud compute of the three major hyperscalers. ARM itself, in its 2025 communications, claims that by year-end approximately half of the compute shipped to the top hyperscalers will be ARM-based – a figure to be taken with the caution due to a company talking about its own market, but consistent: for the third consecutive year, more than half of new CPU capacity added to AWS is Graviton, and 98% of the top one thousand EC2 customers use it. AWS Graviton5, announced on 4 December 2025 at <strong>re:Invent</strong>, has 192 cores in a single socket, an L3 cache five times larger than the previous generation, and is based on the <strong>Neoverse V3 ARMv9.2</strong> cores at 3 nanometres. Google has launched <strong>Axion</strong> (based on Neoverse V2) with the claim of a 65% better price-performance compared to x86 instances. Microsoft has rolled out <strong>Cobalt 100</strong> in 29 global regions. NVIDIA – the very same <strong>NVIDIA</strong> that had tried to buy ARM – uses ARM Neoverse cores in <strong>Grace</strong>, the CPU that accompanies its H100 and B100 GPUs for AI workloads. Spotify, Paramount+, Uber, Oracle, Salesforce have migrated infrastructure to ARM. Over a billion ARM Neoverse cores have been deployed in datacentres worldwide.</p>

<p>This changes the proportions of the game. When ARM made money on smartphone royalties, we were talking about cents per chip, but on billions of units. In datacentres things are different: every Graviton5 costs AWS thousands of dollars, and every server with an ARM chip on board means a more substantial royalty. The datacentre is the segment where ARM can finally start extracting value aggressively. And it is also the segment where licensees have most to lose: if ARM raises the royalties Apple or Qualcomm pay on a phone, it is an annoyance; if it raises the royalties on the chip running your cloud, it is an attack on the operating margin of your business.</p>

<p>It is easier to understand, in this light, why Qualcomm fought the Nuvia case with such determination. And why – as we will see shortly – it is looking for an architectural way out.</p>

<h2 id="the-failed-coup">The failed coup</h2>

<p>November 2020. Jensen Huang, NVIDIA&#39;s CEO, announces the acquisition of ARM from <strong>SoftBank</strong> for $40 billion. It would have been the largest operation in semiconductor history. It did not go through, and understanding why helps to see how systemic ARM&#39;s position in the industry was – and still is.</p>

<p>Hermann Hauser, the Austrian from Cambridge who had founded Acorn, the company from which ARM was born, had reacted to the SoftBank acquisition back in July 2016 with a public statement on Twitter that left no room for interpretation:</p>

<blockquote><p>ARM is the proudest achievement of my life. The proposed sale to SoftBank is a sad day for me and for technology in Britain.</p></blockquote>

<p>When, four years later, NVIDIA announced its intention to buy ARM from SoftBank, Hauser&#39;s reaction was even sharper. In an interview with the BBC he explained the structural problem with a clarity that regulatory documents rarely achieve:</p>

<blockquote><p>It&#39;s one of the fundamental assumptions of the ARM business model that it can sell to everybody. The one saving grace about Softbank was that it wasn&#39;t a chip company, and retained ARM neutrality. If it becomes part of Nvidia, most of the licensees are competitors of Nvidia, and will of course then look for an alternative to ARM.</p></blockquote>

<p>And in his written testimony submitted to the British Parliament he added, with the freedom of someone who had nothing left to lose:</p>

<blockquote><p>I have no shares or other interest in ARM as I had to sell them all to Softbank. I can therefore freely speak my mind.</p></blockquote>

<p>Hauser was right. NVIDIA, in 2020, was already dominant in artificial intelligence through its GPUs. Buying ARM would have meant getting early access to new designs ahead of competitors, the ability to slow or deny licences to rivals, and benefiting freely from the architecture while others continued paying royalties. Qualcomm, <strong>Microsoft</strong> and <strong>Google</strong> publicly opposed the deal. The American <strong>FTC</strong> opened an antitrust proceeding. The European Commission launched an investigation. Britain opened its own. China raised a red flag. In February 2022, the deal was formally abandoned, citing significant regulatory challenges.</p>

<p>There is another Hauser statement worth quoting. In a 2022 interview with UKTN, he called British politicians “technologically illiterate” and “the root cause” of the governance problems around ARM. He argued that the government should have taken a golden share in ARM long before, and that any attempt to do so in 2022 was “trying to close the gate after the horse has bolted”. An architecture born with public money and a public mandate had become a pawn in the power game between SoftBank, NVIDIA and the NASDAQ – because no one had thought, at the appropriate moment, that it was worth keeping it in public territory.</p>

<p>The end of the story: SoftBank took ARM public in September 2023, in what was the largest IPO of the year. <strong>ARM Holdings</strong> is today listed on NASDAQ with a market capitalisation of around $150 billion. Masayoshi Son is still the controlling shareholder. The fact that the acquisition attempt by the world&#39;s largest AI chip producer was blocked by regulators does not eliminate the problem – it shifts it. ARM is independent, but it is a very particular form of independence: that of a systemic infrastructure in the hands of financial investors, subject to stock-market logic, obliged to grow revenues every quarter. The uncomfortable question is: <em>what happens when the needs of a commons architecture – stable, predictable, accessible, neutral – conflict with the needs of a publicly listed company that has to raise royalties to satisfy shareholders?</em> It is not a theoretical question. ARM has systematically increased its licence fees in recent years. And the major licensees have started looking for alternatives.</p>

<h2 id="the-half-democratisation">The half-democratisation</h2>

<p>We have to give ARM what ARM deserves, before continuing with the critique. And what it deserves is considerable.</p>

<p>The <strong>Raspberry Pi</strong> – version 3 in 2017, version 5 today – costs less than eighty euros for the most recent version. It is a complete computer, capable of running <strong>Linux</strong>, a server, a media centre, a network node. It exists because the <strong>ARM</strong> architecture has made it possible to produce powerful and very low-power <strong>SoCs</strong> at costs that <strong>x86</strong> processors cannot get close to. The same principle applies to the billion-plus smartphones in the hands of people in countries where a desktop PC would be an inaccessible luxury. To the microcontrollers controlling IoT sensors at a few cents each. To the embedded processors in medical devices, industrial control systems, critical infrastructure. ARM has materially <em>lowered the cost of access to computational hardware</em> on a global scale.</p>

<p>Wilson herself, looking back on the whole story, framed it with a lucidity that almost sounds like a warning:</p>

<blockquote><p>To build something new and complicated, it&#39;s not the sort of quick thing, it&#39;s a sustained effort over a long period of time. It takes many people&#39;s different inputs to make something unique and novel. Overnight success takes 30 years.</p></blockquote>

<p>Thirty years of invisible work, of architectures refined chip by chip, of licences negotiated one at a time, before the world noticed that ARM was everywhere.</p>

<p>The “democratisation” effected by ARM is real but structurally asymmetric. It has democratised access to hardware for device manufacturers – anyone can build an ARM chip by paying the licence – but not necessarily for the end users of those devices. An <strong>iPhone</strong> – or an <strong>Android</strong> phone – has an ARM chip designed by a company, but the end user has no access to the chip&#39;s architecture, no possibility to modify it, no transparency on what runs at that level. The chip is ARM, the device is a closed box. This is the final contradiction: you may have the right – or almost – to manage the software running on an ARM chip, but below the <strong>kernel</strong>, below the <strong>bootloader</strong>, there is a chip whose architecture was defined in Cambridge, produced in Taiwan, integrated into a SoC designed by Broadcom, over which you can have no control. Sovereignty ends exactly where silicon begins. Those who really benefited are the oligopoly of large licensees – Apple, Qualcomm, Samsung, NVIDIA, Amazon with its Gravitons – <em>not the small Bangalore startup</em> with an idea for a specialised chip.</p>

<p>And yet – and here the story gets complicated, in an interesting way – within the narrow space the ARM licensing model concedes, someone is nevertheless trying to pull the lever of openness at the levels available. In December 2024, a Shenzhen company called <strong>Radxa</strong> announced the <strong>Radxa Orion O6</strong>, presented as the “<strong><em>World&#39;s First Open Source Arm V9 Motherboard</em></strong>”. It is a Mini-ITX board at $200 in the base version, based on the <strong>Cix CD8180</strong> SoC – an <strong>ARMv9.2</strong> chip with 12 cores (four Cortex-A720 at 2.8 GHz, four at 2.4 GHz, four Cortex-A520 at 1.8 GHz) produced by Cix Technology, a Chinese fabless company founded in 2021. Debian 12, Fedora and Ubuntu run natively on it, with UEFI EDKII and SystemReady SR certification. The first Geekbench results put it at the level of an Apple M1 in single-core performance – not bad for an ARM board at less than a tenth of the price of a Mac mini.</p>

<p><em>Note:</em> it is worth clarifying what “<strong>open source</strong>” means here, because it means different things at different levels. The ARMv9.2 instruction set on which the CD8180 is built is not open: Cix pays regular royalties to ARM Holdings like all other licensees. The SoC itself is not open: it is a proprietary chip, with the NPU microcode and Mali GPU blocks all closed. What is open is the layer immediately above: board schematics, Board Support Package, EDKII bootloader, Linux kernel, device tree – all published under free licences, replicable, modifiable.</p>
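<p>To make that “open layer” concrete: a device tree is a plain-text description of the board&#39;s hardware that the kernel reads at boot, and publishing it is what lets anyone rebuild or modify the software stack without the vendor. A purely illustrative fragment of what such a file looks like – the node names and values here are hypothetical, not copied from the actual Orion O6 sources:</p>

```dts
/* Illustrative device-tree fragment only — hypothetical values,
   not the real Orion O6 sources. */
/ {
    model = "Radxa Orion O6";
    compatible = "radxa,orion-o6", "cix,cd8180";

    cpus {
        #address-cells = <1>;
        #size-cells = <0>;

        cpu@0 {
            device_type = "cpu";
            compatible = "arm,cortex-a720";
            reg = <0>;
        };
    };
};
```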

<p>It is also a concrete demonstration of what the <strong><em>open hardware</em></strong> movement has been arguing for twenty years: openness is layered, and <em>opening one more layer than was open before is already a political act</em>, even if the foundation underneath remains closed. The fact that this board comes from China – like the RISC-V pivot we will discuss shortly – is no accident: it is consistent with a geopolitical trajectory that seeks margins of technological sovereignty wherever it is possible to extract them.</p>

<h2 id="the-linux-moment-for-hardware">The Linux moment for hardware</h2>

<p>And here RISC-V comes onstage. And the story gets more interesting.</p>

<p><strong>RISC-V</strong> was born in 2010 at the University of California, Berkeley, in the same department that had helped inspire the original RISC architecture thirty years earlier. Krste Asanović and his collaborators needed a clean processor architecture for research, without having to pay licences or ask permission. They decided to design one from scratch and to make it completely open: no royalties, no licences, no intellectual property to respect. The RISC-V instruction set is an open standard, freely published, that anyone can implement, modify, distribute.</p>
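<p>Because the specification is freely published, “anyone can implement it” is literally true at any scale, down to a few lines of code. As a toy illustration – a sketch for this article, not part of any real toolchain – here is the RV32I I-type encoding of <code>ADDI</code>, reproduced straight from the public manual:</p>

```python
# Toy illustration: encode/decode an RV32I ADDI instruction from the
# publicly documented I-type layout: imm[11:0] | rs1 | funct3 | rd | opcode.

def encode_addi(rd: int, rs1: int, imm: int) -> int:
    """Encode ADDI rd, rs1, imm (opcode 0x13, funct3 0)."""
    assert -2048 <= imm < 2048, "I-type immediate is 12 bits, sign-extended"
    return ((imm & 0xFFF) << 20) | (rs1 << 15) | (0 << 12) | (rd << 7) | 0x13

def decode_i_type(word: int) -> dict:
    """Pull the fields back out of a 32-bit I-type instruction word."""
    imm = word >> 20
    if imm & 0x800:          # sign-extend the 12-bit immediate
        imm -= 0x1000
    return {
        "opcode": word & 0x7F,
        "rd": (word >> 7) & 0x1F,
        "funct3": (word >> 12) & 0x7,
        "rs1": (word >> 15) & 0x1F,
        "imm": imm,
    }

word = encode_addi(rd=1, rs1=0, imm=5)   # "li x1, 5" in assembler shorthand
print(hex(word))                          # 0x500093
print(decode_i_type(word))
```

<p>No licence was needed to write that, and none is needed to turn the same tables into a full simulator or a silicon implementation – which is exactly the property the rest of this section turns on.</p>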

<p>For ten years RISC-V was an academic experiment, then a nucleus of embedded adoption, then an interesting alternative for those who wanted custom chips without paying ARM. In the last two or three years the proportions have changed. The <strong>SHD Group</strong>, a market analysis firm that has been monitoring the RISC-V sector since 2019, announced at the November 2025 RISC-V Summit that the technology&#39;s market penetration had exceeded 25% – an important symbolic threshold, even if one to be taken with some caution. RISC-V International&#39;s own annual report for 2025 admits it is not entirely clear whether that 25% refers to the global microprocessor market in the strict sense or only to the segments where RISC-V already has a significant presence (embedded, IoT, microcontrollers). The SHD projection for 2031 is 33.7%. However it is measured, the trajectory is that of an architecture that is no longer a niche: it is becoming the third pillar of computing, alongside <strong>x86</strong> and <strong>ARM</strong>.</p>

<p>The strength of RISC-V is not just technical – it is political in the most precise sense of the term. Some examples:</p>

<p><strong>The Chinese front.</strong> China has very concrete reasons not to want to depend on ARM, a company listed in New York with American shareholders. Under increasingly stringent US sanctions on advanced <strong>Intel/AMD</strong> chips, China has pivoted en masse to RISC-V – also because the RISC-V International consortium was strategically moved from Delaware to Switzerland in March 2020, formally placing it beyond the reach of unilateral American export controls. <strong>Alibaba</strong>, through its T-Head division, has released the <strong>XuanTie C920</strong> chips and their successors. Smaller Chinese manufacturers are flooding the mid-market with RISC-V AI accelerators that cost significantly less than the sanctioned Western equivalents. It is an architectural decoupling, not just a commercial one.</p>

<p><strong>The European front.</strong> The European Union, through the <strong>EU Chips Act</strong>, funds the Project DARE consortium (Digital Autonomy with RISC-V in Europe) with the explicit goal of reducing European dependence on American and British technology in critical infrastructure. Quintauris, a joint venture founded in December 2023 by Bosch, Infineon, Nordic Semiconductor, NXP and Qualcomm (with STMicroelectronics joining as a sixth shareholder in 2024), in 2025 released <strong>RT-Europa</strong>, the first RISC-V platform for real-time automotive controllers – a sector where dependence on foreign IP had become strategically intolerable.</p>

<p><strong>The Qualcomm front.</strong> In December 2025, while the Nuvia case was closing yet another chapter in its dispute with ARM, Qualcomm acquired Ventana Micro Systems, one of the most advanced companies developing high-performance RISC-V cores. Read that literally: Qualcomm was not only fighting ARM in court, it was also buying its way out of needing ARM at all. It is the most significant move in this whole recent history, because for the first time one of the major ARM licensees has equipped itself with a credible architectural plan B.</p>

<p>Three different fronts, one direction. The parallel with Linux is more than metaphorical. <strong>Linux</strong> did not kill <strong>Windows</strong> or macOS. <em>But it did create a real alternative</em> that changed the terms of power in the software industry. RISC-V aspires to do the same thing for hardware. And the critical point – the one Winner would have appreciated – is that this openness is built into the architecture itself, not guaranteed by a company&#39;s good will. You cannot buy RISC-V and “close it”: the instruction set is public by definition. You can build proprietary implementations on top of it – and many companies are doing exactly that – but the foundation remains accessible.</p>

<p>And here is the question: will RISC-V be incorporated by capitalism exactly as Linux was? The honest answer is: probably yes, and in part it already has been. The major RISC-V implementations by Apple, Google and Meta are not open source – they use the open instruction set to build proprietary architectures. The fact that the foundation is free does not mean that everything built on top of it is. The same logic Boltanski and Chiapello described applies: critique is not defeated, it is incorporated. But at least the foundation remains open. And that counts.</p>

<h2 id="conclusions-or-questions-if-you-prefer">Conclusions – or questions, if you prefer</h2>

<p>ARM is born of a public mandate and a democratisation project, and becomes the foundation of a private oligopoly. The chip is the same; the power structure on top of it is radically different from the one that produced it. And that chip really did lower the entry barriers for hardware producers – it produced the Raspberry Pi, the cheap phones, the microcontrollers everywhere, the more efficient datacentres – but the democratisation stopped at the gates of the production chain. The end users of those devices gained no real sovereignty over the silicon they hold in their pocket.</p>

<p><strong>NVIDIA</strong>&#39;s attempt to acquire <strong>ARM</strong> was blocked by regulators, but only because it would have concentrated power too visibly. The systemic power ARM already exercises – silently, through licences and royalties, through legal cases against those trying to step out of contractual terms – disturbs no regulator, generates no headlines, produces no parliamentary hearings. It is the kind of power that makes itself invisible precisely because it is structural: it does not lie in a decision, it lies in the conditions within which decisions are made.</p>

<p>There is also a contradiction that concerns me personally. That Raspberry Pi I had on the table – and all the ARM chips in the phones I have hacked for years – were already, in some sense, part of a system I did not control. I changed the software on top. I did not change the power structure underneath (one could make the same argument about Intel, ça va sans dire…). Digital sovereignty ends exactly where silicon begins, and pretending otherwise would be dishonest.</p>

<p><strong>RISC-V</strong> opens a real crack. Not a revolution – <em>a crack</em>. The possibility that the foundation of computing could be a commons, instead of private property subject to corporate decisions and legal battles. It does not solve the problem of closed hardware, it does not solve the problem of oligopolistic foundries, it does not solve any of the contradictions described. But at least it does not aggravate them. It follows the same logic as the open hardware movement, which for twenty years has been trying to apply to silicon what free software applied to code – with more modest results, because the physical layer is structurally more hostile to the commons: if you cannot open it, you do not really own it. And in a sector where every layer of the technology stack has been systematically fenced off, keeping the foundation open is a political act, not just a technical one.</p>

<p>What stays with me is a feeling familiar to anyone who has spent time thinking about computing as political territory. Technological choices incorporate power structures. Power structures persist long after the original choices have been forgotten. And whoever controls the basic infrastructure – the instruction set, the architecture, the licences – controls something much more important than a company: they control the rules of the game on which everything else is built.</p>

<p>The question I leave open is: in whose favour were these rules written? And by what right do they continue to apply?</p>

<hr/>

<h2 id="sources-and-further-reading">Sources and further reading</h2>

<p><strong>On the history of ARM and its origins</strong></p>
<ul><li>Garnsey, E., Lorenzoni, G., Ferriani, S. (2008). “Speciation through entrepreneurial spin-off: The Acorn-ARM story”. Research Policy, 37(2): 210-224. doi: 10.1016/j.respol.2007.11.006. The most in-depth academic study on the origin of ARM as a spin-off from Acorn and on the genesis of its IP licensing-based business model. <a href="https://www.sciencedirect.com/science/article/abs/pii/S0048733307002363">https://www.sciencedirect.com/science/article/abs/pii/S0048733307002363</a></li>
<li>Patterson, D., Ditzel, D. (1980). “The Case for the Reduced Instruction Set Computer”. ACM SIGARCH Computer Architecture News, 8(6): 25-33. The founding paper of the RISC architecture at Berkeley, which inspired the ARM project. <a href="https://dl.acm.org/doi/10.1145/641914.641917">https://dl.acm.org/doi/10.1145/641914.641917</a></li></ul>

<p><strong>On the IP licensing business model</strong></p>
<ul><li>Ferriani, S., Garnsey, E., Lorenzoni, G., Massa, L. (2015). “ARM plc and the IP Business Model”. Working Paper, Centre for Technology Management, University of Cambridge. <a href="https://www.ifm.eng.cam.ac.uk/uploads/Research/CTM/working_paper/2015-02-Ferriani-Garnsey-Lorenzoni-Massa.pdf">https://www.ifm.eng.cam.ac.uk/uploads/Research/CTM/working_paper/2015-02-Ferriani-Garnsey-Lorenzoni-Massa.pdf</a></li>
<li>Grindley, P. C., Teece, D. J. (1997). “Managing Intellectual Capital: Licensing and Cross-Licensing in Semiconductors and Electronics”. California Management Review, 39(2): 8-41.</li></ul>

<p><strong>On power in technological choices</strong></p>
<ul><li>Winner, L. (1980). “Do Artifacts Have Politics?”. Daedalus, 109(1): 121-136. <a href="https://www.cc.gatech.edu/~beki/cs4001/Winner.pdf">https://www.cc.gatech.edu/~beki/cs4001/Winner.pdf</a></li>
<li>Boltanski, L., Chiapello, È. (1999). Le nouvel esprit du capitalisme. Gallimard. (English transl. The New Spirit of Capitalism, Verso, 2005). <a href="https://www.jstor.org/stable/4201214">https://www.jstor.org/stable/4201214</a></li></ul>

<p><strong>On the Qualcomm/Nuvia case</strong></p>
<ul><li>Paul, Weiss (2025). “Qualcomm Wins Decisive Post-Trial Victory in High-Profile Licensing Dispute Against Arm”. <a href="https://www.paulweiss.com/insights/client-news/qualcomm-wins-decisive-post-trial-victory-in-high-profile-licensing-dispute-against-arm">https://www.paulweiss.com/insights/client-news/qualcomm-wins-decisive-post-trial-victory-in-high-profile-licensing-dispute-against-arm</a>. Press release of the law firm that represented Qualcomm, with summary of the 30 September 2025 ruling.</li>
<li>The Register (2025). “Judge dismisses Arm&#39;s last legal claim against Qualcomm”. <a href="https://www.theregister.com/2025/10/01/arms_last_legal_claim_against/">https://www.theregister.com/2025/10/01/arms_last_legal_claim_against/</a></li>
<li>Computerworld (2025). “Arm&#39;s high-stakes licensing suit against Qualcomm ends in mistrial, but Qualcomm prevails in key areas”. <a href="https://www.computerworld.com/article/3629812/">https://www.computerworld.com/article/3629812/</a></li></ul>

<p><strong>On the NVIDIA acquisition attempt and geopolitical implications</strong></p>
<ul><li>U.S. Federal Trade Commission (2021). Complaint in the Matter of NVIDIA Corporation and Arm Limited. <a href="https://www.ftc.gov/legal-library/browse/cases-proceedings/2110081-nvidia-corporationarm-limited">https://www.ftc.gov/legal-library/browse/cases-proceedings/2110081-nvidia-corporationarm-limited</a></li>
<li>Hauser, H. (2020). Written evidence submitted to the UK Parliament Business, Energy and Industrial Strategy Committee on the proposed acquisition of ARM by NVIDIA. Document BFA0018. <a href="https://committees.parliament.uk/writtenevidence/12711/pdf/">https://committees.parliament.uk/writtenevidence/12711/pdf/</a></li>
<li>Hauser, H. (2022). Interview with UKTN: “UK left it too late to take golden share in Arm”. <a href="https://www.uktech.news/news/government-and-policy/hermann-hauser-arm-golden-share-20220623">https://www.uktech.news/news/government-and-policy/hermann-hauser-arm-golden-share-20220623</a></li></ul>

<p><strong>On Sophie Wilson, Steve Furber and the origin of ARM1</strong></p>
<ul><li>Wilson, S. (2012). Interview with The Register: “ARM creators Sophie Wilson and Steve Furber”. <a href="https://www.theregister.com/2012/05/03/unsung_heroes_of_tech_arm_creators_sophie_wilson_and_steve_furber/">https://www.theregister.com/2012/05/03/unsung_heroes_of_tech_arm_creators_sophie_wilson_and_steve_furber/</a>. Contains Wilson&#39;s statement on low power as a complete accident.</li>
<li>Furber, S. (2010). Interview with ACM Queue: “A Conversation with Steve Furber”. <a href="https://queue.acm.org/detail.cfm?id=1716385">https://queue.acm.org/detail.cfm?id=1716385</a>. Contains the statement on Victorian engineering margins.</li>
<li>Furber, S. (2011). Interview with Communications of the ACM. <a href="https://cacm.acm.org/news/an-interview-with-steve-furber/">https://cacm.acm.org/news/an-interview-with-steve-furber/</a>. Contains the assessment on total ARM computing power on the planet.</li>
<li>Furber, S. (2017). “ARM: The architecture that conquered mobile computing”. Philosophical Transactions of the Royal Society A, 375(2104). doi: 10.1098/rsta.2017.0148.</li>
<li>Computer History Museum (2012). Fellow Award citation for Sophie Wilson and Steve Furber. <a href="https://computerhistory.org/chm-fellows/sophie-wilson/">https://computerhistory.org/chm-fellows/sophie-wilson/</a></li></ul>

<p><strong>On ARM in datacentres</strong></p>
<ul><li>Arm Holdings (2025). “Half of the Compute Shipped to Top Hyperscalers in 2025 will be Arm-based”. Arm Newsroom. <a href="https://newsroom.arm.com/blog/half-of-compute-shipped-to-top-hyperscalers-in-2025-will-be-arm-based">https://newsroom.arm.com/blog/half-of-compute-shipped-to-top-hyperscalers-in-2025-will-be-arm-based</a></li>
<li>Arm Holdings (2025). “How Arm is redefining compute through the converged AI data center”. Arm Newsroom. <a href="https://newsroom.arm.com/blog/arm-converged-ai-data-center-aws-graviton5">https://newsroom.arm.com/blog/arm-converged-ai-data-center-aws-graviton5</a></li>
<li>Omdia (2026). “Arm Steps Deeper into Silicon: Implications for the Semiconductor Value Chain”. <a href="https://omdia.tech.informa.com">https://omdia.tech.informa.com</a></li></ul>

<p><strong>On the democratisation of access to computing</strong></p>
<ul><li>Benkler, Y. (2006). The Wealth of Networks: How Social Production Transforms Markets and Freedom. Yale University Press. <a href="http://www.benkler.org/Benkler_Wealth_Of_Networks.pdf">http://www.benkler.org/Benkler_Wealth_Of_Networks.pdf</a></li>
<li>Söderberg, J. (2008). Hacking Capitalism: The Free and Open Source Software Movement. Routledge. <a href="https://downloads.gvsig.org/download/people/vagazzi/Hacking%20Capitalism.pdf">https://downloads.gvsig.org/download/people/vagazzi/Hacking%20Capitalism.pdf</a></li></ul>

<p><strong>On RISC-V and architectural sovereignty</strong></p>
<ul><li>RISC-V International (2024). RISC-V Ratified Specifications. <a href="https://riscv.org/technical/specifications/">https://riscv.org/technical/specifications/</a></li>
<li>RISC-V International (2026). Annual Report 2025. <a href="https://riscv.org/wp-content/uploads/2026/01/RISC-V-Annual-Report-2025.pdf">https://riscv.org/wp-content/uploads/2026/01/RISC-V-Annual-Report-2025.pdf</a>. The official RISC-V International annual report, with the SHD Group estimate on market penetration (33.7% projected by 2031, 25% threshold reached in 2025 in some segments).</li>
<li>Waterman, A., Asanović, K. (eds.) (2019). The RISC-V Instruction Set Manual. UC Berkeley Technical Report UCB/EECS-2019-103. <a href="https://riscv.org/wp-content/uploads/2019/12/riscv-spec-20191213.pdf">https://riscv.org/wp-content/uploads/2019/12/riscv-spec-20191213.pdf</a></li>
<li>Asanović, K., Patterson, D. A. (2014). “Instruction Sets Should Be Free: The Case for RISC-V”. EECS Department, University of California, Berkeley, Tech. Rep. UCB/EECS-2014-146.</li>
<li>Center for Security and Emerging Technology (2025). “RISC-V: What it is and Why it Matters”. <a href="https://cset.georgetown.edu/article/risc-v-what-it-is-and-why-it-matters/">https://cset.georgetown.edu/article/risc-v-what-it-is-and-why-it-matters/</a>. On the incorporation of RISC-V International in Switzerland in March 2020 and the geopolitical implications.</li>
<li>Jamestown Foundation (2025). “Examining China&#39;s Grand Strategy For RISC-V”. <a href="https://jamestown.org/program/examining-chinas-grand-strategy-for-risc-v/">https://jamestown.org/program/examining-chinas-grand-strategy-for-risc-v/</a></li>
<li>The Register (2025). “Qualcomm takes RISC on Arm alternative with Ventana buy”. <a href="https://www.theregister.com/2025/12/10/qualcomm_riscv_arm_ventana/">https://www.theregister.com/2025/12/10/qualcomm_riscv_arm_ventana/</a>. On the acquisition of Ventana Micro Systems by Qualcomm on 10 December 2025.</li>
<li>Quintauris GmbH (2023). “Five Leading Semiconductor Industry Players Incorporate New Company, Quintauris, to Drive RISC-V Ecosystem Forward”. Press release, 22 December 2023. <a href="https://www.quintauris.com">https://www.quintauris.com</a></li></ul>

<p><a href="https://remark.as/p/jolek78/arm-the-chip-we-didnt-know-we-needed">Discuss...</a></p>

<p><a href="https://jolek78.writeas.com/tag:ARM" class="hashtag"><span>#</span><span class="p-category">ARM</span></a> <a href="https://jolek78.writeas.com/tag:RISCV" class="hashtag"><span>#</span><span class="p-category">RISCV</span></a> <a href="https://jolek78.writeas.com/tag:Semiconductors" class="hashtag"><span>#</span><span class="p-category">Semiconductors</span></a> <a href="https://jolek78.writeas.com/tag:OpenHardware" class="hashtag"><span>#</span><span class="p-category">OpenHardware</span></a> <a href="https://jolek78.writeas.com/tag:SophieWilson" class="hashtag"><span>#</span><span class="p-category">SophieWilson</span></a> <a href="https://jolek78.writeas.com/tag:DigitalSovereignty" class="hashtag"><span>#</span><span class="p-category">DigitalSovereignty</span></a> <a href="https://jolek78.writeas.com/tag:IPLicensing" class="hashtag"><span>#</span><span class="p-category">IPLicensing</span></a> <a href="https://jolek78.writeas.com/tag:Computing" class="hashtag"><span>#</span><span class="p-category">Computing</span></a> <a href="https://jolek78.writeas.com/tag:SolarPunk" class="hashtag"><span>#</span><span class="p-category">SolarPunk</span></a> <a href="https://jolek78.writeas.com/tag:FOSS" class="hashtag"><span>#</span><span class="p-category">FOSS</span></a> <a href="https://jolek78.writeas.com/tag:Writing" class="hashtag"><span>#</span><span class="p-category">Writing</span></a></p>

<div class="center">
· 🦣 <a href="https://fosstodon.org/@jolek78">Mastodon</a> · 📸 <a href="https://pixelfed.social/jolek78">Pixelfed</a> ·  📬 <a href="mailto:jolek78@jolek78.dev">Email</a> ·
· ☕ <a href="https://liberapay.com/jolek78">Support this work on Liberapay</a>
</div>
]]></content:encoded>
      <guid>https://jolek78.writeas.com/arm-the-chip-we-didnt-know-we-needed</guid>
      <pubDate>Sat, 02 May 2026 00:30:00 +0000</pubDate>
    </item>
    <item>
      <title>Reflections on an (impossible) escape from capitalism</title>
      <link>https://jolek78.writeas.com/reflections-on-an-impossible-escape-from-capitalism?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[It was an ordinary Friday evening. The parcel had arrived with the courier that morning, but I only opened it after dinner, with that silent ceremony I perform every time new hardware shows up - as if opening a box too quickly were a form of disrespect toward the object. Inside was a HUNSN 4K. Small, almost ridiculously small. A mini PC in a form factor that fit in the palm of a hand. I put it on the table, looked at it. Looked at it again. And then an uncomfortable thought occurred to me. I had ordered it from a Chinese reseller, paid with a credit card, through a completely traceable payment infrastructure, from one of the most centralised and surveilled commercial ecosystems in existence. To build a homelab that would let me escape centralised and surveilled ecosystems.&#xA;&#xA;!--more--&#xA;&#xA;The funny thing - funny in the sense that it makes you laugh, but badly - is that I&#39;m not alone. Every day, somewhere in the world, someone orders a mini PC, a Raspberry Pi, a managed Mikrotik switch, with the stated goal of taking back control of their digital life. They order it on Alibaba, pay with PayPal, wait for the courier. And they see nothing strange in any of this, because the contradiction is so structural it has become invisible. This article is an attempt to make it visible again. Without easy solutions, because I don&#39;t have any. And when have I ever…&#xA;&#xA;The Promise of the Homelab&#xA;&#xA;When, in 2019, I started self-hosting pretty much everything - Nextcloud (always on a Raspberry Pi, first RPi3 then RPi4), Jellyfin, Navidrome, FreshRSS, and about twenty-five other services on Proxmox LXC, each with its own isolated Docker daemon - I did it with a precise motivation: I wanted to know where my data lived, who could read it, and have the ability to switch it off myself if I ever felt like it. Not when a company decides to shut down a service, not when someone else changes the licence terms. Me. 
This came after a long period of reflection on myself, the work I was doing and still do, and the technological society I live in. It is an ideological choice before it is a technical one. Technology as a tool for autonomy rather than control; infrastructure as something you own instead of something that owns you. I hope no one is alarmed if I say that some of these reflections began with reading Theodore Kaczynski&#39;s Manifesto, before eventually landing, of course, on more authoritative sources.&#xA;&#xA;Yes, I&#39;m mad, but not quite that mad…&#xA;&#xA;When you pay a subscription to a cloud service, the transaction does not end the moment you authorise the electronic payment. Shoshana Zuboff, in The Age of Surveillance Capitalism, calls this mechanism behavioral surplus: the behavioural data extracted beyond what is needed to provide the service, then resold as predictive raw material.&#xA;&#xA;  Under the regime of surveillance capitalism, however, the first text does not stand alone; it trails a shadow close behind. The first text, full of promise, actually functions as the supply operation for the second text: the shadow text. Everything that we contribute to the first text, no matter how trivial or fleeting, becomes a target for surplus extraction. That surplus fills the pages of the second text. This one is hidden from our view: &#34;read only&#34; for surveillance capitalists. In this text our experience is dragooned as raw material to be accumulated and analyzed as means to others&#39; market ends. The shadow text is a burgeoning accumulation of behavioral surplus and its analyses, and it says more about us than we can know about ourselves. Worse still, it becomes increasingly difficult, and perhaps impossible, to refrain from contributing to the shadow text. It automatically feeds on our experience as we engage in the normal and necessary routines of social participation.&#xA;&#xA;You are not the customer of the system - you are its product. 
Your habits, your schedules, your preferences, your hesitations before clicking on something: all of this is collected, modelled, sold. The transaction is not monthly: it is continuous, invisible, and never ends as long as you use the service. With hardware, in principle, the transaction is one-time: you buy, you pay, it ends, it is yours. The disk is in your room, not on a server subject to government requests, security breaches, or business decisions that are nothing to do with you but impact your access to those services. This distinction - between a tool you use and a system that uses you - is the real stake of the homelab. It is not about saving money, it is not about performance. It is about who controls what.&#xA;&#xA;The problem is that building this infrastructure requires hardware, time, knowledge, and resources. The hardware comes from somewhere; the time, the knowledge, and the energy resources come from a privilege not granted to everyone.&#xA;&#xA;The Market I Hadn&#39;t Seen&#xA;&#xA;Search for &#34;mini PC homelab&#34; on any marketplace. What you find is a productive ecosystem that has exploded over the past five years in a way I honestly did not expect.&#xA;&#xA;MINISFORUM, Beelink, Trigkey, Geekom, GMKtec. Zimaboard, with its single-board aesthetic designed explicitly for those who want home racks. Raspberry Pi and the galaxy of clones - Orange Pi, Rock Pi, Banana Pi. Managed Mikrotik switches at accessible prices. 1U rack cases to mount under the desk. M.2 NVMe SSDs with TBW figures calculated for small-server workloads. Silent power supplies designed to run 24/7. A market built from scratch, that exists precisely because there is a community of people who want to run servers at home. r/homelab and r/selfhosted on Reddit have approximately 2.8 and 1.7 million members respectively - numbers publicly verifiable, and growing. YouTube is full of dedicated channels. 
There is an entire attention economy built around &#34;escaping&#34; the attention economy.&#xA;&#xA;But it is worth asking: who built this market, and why. MINISFORUM and Beelink do not exist out of ideological sympathy for the homelab movement. They exist because they identified a profitable segment and served it with industrial precision. Kate Crawford, in Atlas of AI, documents how technology supply chains follow niche demand with the same efficiency with which they follow mass demand: factories in Guangdong optimise production lines not for a worldview, but for a margin. The fact that the resulting product also satisfies an ideological need is, from the manufacturer&#39;s point of view, irrelevant.&#xA;&#xA;  The Victorian environmental disaster at the dawn of the global information society shows how the relations between technology and its materials, environments, and labor practices are interwoven. Just as Victorians precipitated ecological disaster for their early cables, so do contemporary mining and global supply chains further imperil the delicate ecological balance of our era.&#xA;&#xA;The mechanism had been described with theoretical precision back in 1999 by Luc Boltanski and Ève Chiapello in The New Spirit of Capitalism. Their thesis: capitalism is never defeated by criticism - it is incorporated. When a critique becomes widespread enough, the system absorbs it and transforms it into a market segment. The artistic critique of the 1960s - autonomy, authenticity, rejection of standardisation - became the marketing of the creative economy. The critique of digital centralisation - sovereignty, privacy, control - has become an online catalogue to browse through.&#xA;&#xA;Resistance has become a market segment. Every time someone buys a HUNSN to stop paying subscriptions to services they don&#39;t control, a factory in Guangdong sells a HUNSN. 
Capitalism has not been defeated - it has shifted (at least for a small slice of the population: the nerds, the hackers) the extraction point from subscriptions to hardware.&#xA;&#xA;The Accumulation Syndrome&#xA;&#xA;But there is a further level - more ridiculous and more personal - that homelab communities never discuss openly, yet anyone who has a homelab recognises immediately. The Raspberry Pi 4 bought &#34;for a project.&#34; The old ThinkPad kept because &#34;you never know.&#34; The 4TB disk salvaged from a decommissioned NAS - and &#34;it might come in handy.&#34; The second-hand switch picked up on eBay for eighteen euros because it was cheap and might be useful. The cables, the cables, the cables.&#xA;&#xA;r/homelab has a term for this: just in case hardware. It is the hardware of the imaginary future, of projects that only exist in your head, of configurations that one day - one day - you will finally test. In the meantime it occupies a shelf, draws current in standby, and generates a diffuse sense of possibility that is indistinguishable from the most classic consumerism. The underlying psychological mechanism has a precise name: compensatory consumption - consumption as a response to a perceived loss of autonomy or control. You buy hardware because buying hardware gives you the feeling of recovering agency over something. The aesthetic is different from traditional consumerism - no luxury logos, no recognisable status symbols - but the mechanism is identical.&#xA;&#xA;That said, there is a partially honest answer to all of this: the second-hand and refurbished market. The ThinkPad X230 on eBay, the Dell R720 server decommissioned from a datacentre, the disk from someone who upgraded their NAS. My ZFS NAS, to give one example, is a recycled old tower with four 1TB disks in RAIDZ - hardware that would otherwise have ended up in landfill, with a life cycle extended by years, without generating new production demand. 
It is closer to the ethics of repair than to compulsive buying. But it has its own internal contradiction: it requires even more technical competence than buying new - knowing how to assess wear, diagnose an unknown component, manage ten-year-old drivers. The barrier to entry rises further. And the refurbished market is itself now an organised commercial sector, with its own margins, its own platforms, its own pricing logic. It is not a clean way out. It is a less dirty way out.&#xA;&#xA;And then there is the energy question, which is usually ignored in homelab discussions and is instead the most uncomfortable of all - uncomfortable enough to deserve a more in-depth treatment later on. For now, suffice it to say: every machine on your shelf that &#34;draws current in standby&#34; is a line item in the energy bill that the homelab movement rarely accounts for.&#xA;&#xA;Not for Everyone. And It Should Not Be This Way.&#xA;&#xA;There is a second level of the paradox that is even more uncomfortable than the first. Building a homelab costs money - relatively little, but it costs. It requires physical space. It requires a decent connection. And it requires time. A lot of time. Not installation time - that is measurable, finite. The learning time that precedes everything else. To reach the point where you can build a functional infrastructure with Proxmox, LXC containers, centralised authentication, reverse proxy, automated backups - you need to have already spent years understanding how Linux works, how to reason about networks and permissions, how to read a log. I started with a Red Hat in 1997, and it took me almost thirty years to get where I am. I should know this. Yet it always escapes me. And that time did not fall from the sky. It is time I was able to dedicate because I had a certain kind of job, a certain stability, a certain amount of mental energy left at the end of the day. 
It is middle-class-with-a-stable-position time, not the time of someone working three warehouse shifts a week. Passion is not enough.&#xA;&#xA;Johan Söderberg documents this in Hacking Capitalism: the FOSS movement was born as resistance to capitalism, but reproduces within itself hierarchies of skill and merit that make it structurally exclusive. Freedom is technically available to anyone, but effective access requires resources distributed in anything but a democratic manner. Söderberg goes further than simply observing the exclusivity: the voluntary open source work produces use value - functioning software, documentation, community support - that capital then extracts as exchange value without remunerating those who produced it. Red Hat builds a billion-dollar company on a kernel written largely by volunteers. It is not just that not everyone can get in: it is that those who get in often work for someone without knowing it. The homelab inherits this problem and amplifies it.&#xA;&#xA;  The narrative of orthodox historical materialism corresponds with some very popular ideas in the computer underground. It is widely held that the infinite reproducibility of information made possible by computers (forces of production) has rendered intellectual property (relations of production, superstructure) obsolete. The storyline of post-industrial ideology is endorsed but with a different ending. Rather than culminating in global markets, technocracy and liberalism, as Daniel Bell and the futurists would have it; hackers are looking forward to a digital gift economy and high-tech anarchism. In a second turn of events, hackers have jumped on the distorted remains of Marxism presented in information-age literature, and, while missing out on the vocabulary, ended up promoting an upgraded Karl Kautsky-version of historical materialism.&#xA;&#xA;This is not a quirk of the homelab movement: it is a recurring structure in every technological wave. 
Langdon Winner, in his influential essay Do Artifacts Have Politics?, argued that technological choices are never neutral - they incorporate power structures, distribute access in non-random ways. Amateur radio in the 1920s, the personal computer in the 1980s, the internet in the 1990s: every time the promise was democratising, every time the actual distribution followed the lines of pre-existing privilege. Not out of malice, but out of structure. The irony is this: those who would most need digital autonomy - those who cannot afford subscriptions, those who live under governments that surveil communications, those most exposed to data collection - are exactly those least likely to be able to build a homelab. Not for lack of interest or intelligence. For lack of time, money, and years of privileged exposure to technology.&#xA;&#xA;Homelab communities do not usually talk about this. They talk about which mini PC to buy, how to optimise energy consumption, which distro to use as a base. The conversation about structural exclusivity exists, but at the margins - in Jacobin, in Logic Magazine, in EFF activism - while the centre of the discourse remains impermeable. It is not that no one speaks about it: it is that the peripheries speak about it, and the peripheries do not set the agenda. This entire conversation takes place in a room to which not everyone has a ticket. And those inside do not seem to find that particularly problematic.&#xA;&#xA;A Technological Cosplay?&#xA;&#xA;So is the whole thing a con? Is the homelab just anti-capitalist cosplay while you continue to fund the same supply chains? In part, yes.&#xA;&#xA;The HUNSN 4K was designed in China, assembled in China, shipped by container on ships burning bunker fuel. Global maritime transport is responsible for approximately 2.5% of global CO₂ emissions - a share that the IMO (International Maritime Organization) has been trying to reduce for years with slow progress and targets continually postponed. 
Then: distributed through Alibaba, paid with a credit card. Every piece of technology hardware carries an extractive chain that begins in lithium mines in Bolivia and cobalt mines in the Democratic Republic of the Congo, passes through factories in Guangdong, and ends in electronic waste processing centres in Ghana. The hardware travels that supply chain exactly like any other consumer device. Furthermore, hardware has a lifecycle. In five years the HUNSN 4K will be too slow, or it will break, or something will come out with energy efficiency so much better that it is impossible to ignore. And I will buy again. The mini PC market for homelabs depends on the obsolescence of previous purchases - exactly like any other consumer market.&#xA;&#xA;The critique of capitalism, when it is widespread enough, is not suppressed - it is incorporated. The system absorbs the values of resistance and transforms them into a market segment. Autonomy becomes a selling point. Decentralisation becomes a brand. The rebel who wanted to exit the system finds himself funding a new vertical of the same system, convinced he is making an ethical choice.&#xA;&#xA;The Counter-Shot&#xA;&#xA;But there is a structural difference that would be dishonest to ignore.&#xA;&#xA;When you pay a subscription to a cloud service, the cost is not just the monthly fee. It is the continuous surrender of data, behaviours, habits. It is the behavioral surplus Zuboff talks about: you are not using a service, you are being used as raw material to train models, build profiles, sell advertising. The transaction never ends, in ways you often cannot see and cannot escape from as long as you use the service.&#xA;&#xA;With hardware, the transaction ends. The data stays on a physical disk in your room, not on a server subject to government requests, breaches, or business decisions that have nothing to do with you but impact your life. The software running on it - Proxmox, Debian, Nextcloud, Jellyfin - is open source; you can modify it. 
If something changes in a way you cannot accept, you can leave. This resilience has real value - but it is worth noting that it is asymmetric resilience: it works for those who have the skills to exercise it. For those who do not, the theoretical portability of their data from Nextcloud to something else requires exactly the same skills we have already identified as the barrier to entry. The freedom to leave is real. Access to that freedom, much less so.&#xA;&#xA;And then there is the energy question, which I have deferred long enough. The major hyperscalers - AWS, Google, Azure - operate with a PUE (Power Usage Effectiveness) between 1.1 and 1.2. For every watt of useful computation they dissipate barely 0.1–0.2 watts in heat and infrastructure. They have enormous economies of scale, optimised industrial cooling, significant investments in renewable energy, and above all: their servers run at very high utilisation rates. Almost always busy.&#xA;&#xA;A homelab works in a radically different way. The machine runs 24/7 even when it is doing nothing - and for most of the time it is doing nothing. Navidrome serving three requests a day, FreshRSS fetching every hour, an LDAP container sitting listening without receiving connections. You are paying the energy cost of the infrastructure regardless of usage. The implicit PUE of a homelab, calculated honestly as the ratio of total consumption to actual workload, is much worse than that of a datacentre. IEA data (Data Centres and Data Transmission Networks, updated annually) shows that large cloud providers progressively improve energy efficiency thanks to economies of scale that no individual homelab can replicate. The flip side is that the same growth in demand that makes economies of scale possible negates the efficiency gains: Amazon&#39;s absolute emissions increased between 2023 and 2024 despite improved PUE. Efficiency improves. Total consumption grows anyway. 
This is Jevons&#39; Paradox: energy efficiency, instead of reducing consumption, increases it, because it lowers the marginal cost of use and stimulates demand that grows faster than the efficiency gains.&#xA;&#xA;  Note: The comparison is not as linear as the numbers suggest. PUE measures the internal efficiency of a datacentre, not the energy cost of the network traffic that data generates every time it leaves it - traffic that a homelab eliminates almost completely for internal services. Nor does it measure proportion: AWS is efficient at delivering services to millions of users, but that scale says nothing about the real cost of storing fifty gigabytes of personal data on a server designed for loads a thousand times greater. A HUNSN N100 in idle consumes less than 8 watts. The honest energy comparison is not homelab vs hyperscaler in the abstract - it is homelab vs proportional share of hyperscaler for your specific workload, a calculation that nobody can make with publicly available data.&#xA;&#xA;This does not automatically mean that the cloud is the ethically correct choice - the problem does not reduce to PUE, and surveillance has costs that are not measured in kilowatts. It means that anyone with SolarPunk values who chooses the homelab must reckon with a real contradiction: the choice of sovereignty may be, watt for watt, energetically more costly than the system one wants to escape. I have no clean answer, but ignoring the question would be dishonest. Söderberg acknowledges that the FOSS movement has produced concrete and undeniable gains - they simply are not enough, on their own, to subvert the dynamics of informational capitalism.&#xA;&#xA;In short: this is not a critique of the homelab, but it is a critique of the homelab presented as a sufficient revolutionary act.&#xA;&#xA;What Happens at Eleven PM - and Beyond&#xA;&#xA;That night, with the HUNSN 4K on the table, I pressed on. I installed Proxmox. I configured the network. 
I started bringing up containers one by one. And at some point - three hours had passed, I had three terminals open and was debugging nslcd to centralise LDAP authentication across all the containers - I realised something: I was doing all of this simply because I enjoyed it. Not to resist something. Not to advance an ideological agenda. Because there was a problem to solve and solving it gave me satisfaction. Mihaly Csikszentmihalyi describes this state in Flow as total absorption in a task calibrated to one&#39;s own competencies: time expands, attention narrows, awareness of context vanishes. It is not motivation - it is something more immediate. Debugging an authentication problem at eleven at night on a system I could have chosen not to build is, neuropsychologically, indistinguishable from pleasure. Not the satisfaction of having finished: the process itself. Moreover, for an AuDHD person like me, going into hyperfocus allows you to lose your sense of time entirely, and to literally escape from a world you viscerally loathe.&#xA;&#xA;Ah - you had not figured that out yet?&#xA;&#xA;When I had finished and closed everything, the satisfaction was still there. Along with a mildly uncomfortable awareness: I could probably have used a hosted service, lived just as well, and not lost three hours of a weeknight. But in the meantime I had understood how PAM worked, I had read documentation I had never opened before, I had implemented it on my homelab, I had learned something I hadn&#39;t known I wanted to know.&#xA;&#xA;And here the circle closes in a somewhat unsettling way. Söderberg speaks of voluntary open source work as the production of pure use value - the intrinsic pleasure of doing, understanding, building something that works. 
But it is exactly this use value that capital then extracts as exchange value: the competence I accumulate debugging LDAP at eleven at night is the same competence I bring to work the next day, that I put into articles like this one, that I share in communities where others use it to build their own homelabs. Technical pleasure is not neutral. It has a production chain. Not always visible, but real.&#xA;&#xA;This is what the homelab is, at least for me: a way of learning that produces, as a side effect, an infrastructure I control. The ideology is there, but it comes second. First comes the pleasure of understanding how something works. Or rather: ideology and pleasure are interchangeable, and often run in parallel - but this does not resolve any of the contradictions I described above. It leaves them all standing, in fact makes them stranger. Am I resisting capitalism, or am I just cultivating an expensive hobby with a political aesthetic?&#xA;&#xA;The Hacker Ethic&#xA;&#xA;The word &#34;hacker&#34; has had bad press for decades. In 1990s news bulletins it was a synonym for a hooded cybercriminal; in the jargon of security companies it became a marketing term to prepend to anything. Neither has much to do with what the word historically means. Steven Levy, in Hackers: Heroes of the Computer Revolution, reconstructs the culture that formed around the MIT and Stanford labs in the 1960s: a community of programmers for whom code was an aesthetic object, access to information a moral principle, and technical competence the only legitimate hierarchy. The principles Levy identifies as the &#34;hacker ethic&#34; are precise: access to computers - and to anything that can teach you how the world works - should be unlimited and total. All information should be free. Decentralised systems are preferable to centralised ones. Hackers should be judged by what they produce, not by titles, age, race, or position. 
You can create art and beauty with a computer.&#xA;&#xA;It is not a political manifesto in the traditional sense. It is something more visceral - a disposition toward the world, a way of standing before a system you do not yet understand: the correct response is to take it apart, understand how it works, and put it back together better than before.&#xA;&#xA;Pekka Himanen, in The Hacker Ethic and the Spirit of the Information Age - with a preface by Linus Torvalds and an epilogue by Manuel Castells, which already says something about the project&#39;s ambition - performs a more explicit theoretical operation. He builds the hacker ethic in direct opposition to the Protestant work ethic described by Max Weber: where Weber saw work as duty, discipline as virtue, and leisure as absence of production, Himanen identifies in the hacker a figure who works out of passion, considers play an integral part of work, and rejects the sharp separation between productive time and free time. The hacker does not work for money - money is a side effect, when it comes. They work because the problem is interesting. Because the elegant solution has value in itself. Because understanding how something works is, in and of itself, sufficient.&#xA;&#xA;  Hacker activity is also joyful. It often has its roots in playful explorations. Torvalds has described, in messages on the Net, how Linux began to expand from small experiments with the computer he had just acquired. 
In the same messages, he has explained his motivation for developing Linux by simply stating that &#34;it was/is fun working on it.&#34; Tim Berners-Lee, the man behind the Web, also describes how this creation began with experiments in linking what he called &#34;play programs.&#34; Wozniak relates how many characteristics of the Apple computer &#34;came from a game, and the fun features that were built in were only to do one pet project, which was to program … [a game called] Breakout and show it off at the club.&#34;&#xA;&#xA;Recognise something? I do. Those three hours debugging nslcd at eleven at night were not work in the Weberian sense - nobody was paying me, nobody had asked me to do it, there was no corporate objective to reach. They were hacking in the precise sense that Levy and Himanen describe: exploration motivated by curiosity, with the infrastructure as an object of study as much as of utility. The homelab is, culturally, a direct expression of the hacker ethic. It is no coincidence that homelab communities and open source communities overlap almost perfectly, that they use the same language, the same platforms, the same values. But here, as elsewhere in this article, the story gets complicated.&#xA;&#xA;The hacker ethic promises a pure meritocracy: you are judged by what you can do, not by who you are. It is an attractive idea. It is also, in practice, a partial fiction. Technical meritocracy presupposes that everyone starts from the same point - that skills are accessible to anyone who really wants to acquire them, that the time to acquire them is distributed equally, that mentorship networks and learning resources are available regardless of context. The homelab as hacker practice inherits both things: the genuine nature of curiosity as a driver, and structural exclusivity as an undeclared side effect. The pleasure of taking a system apart to understand how it works is real and should not be devalued. 
But that pleasure is available, in practice, to those who already have the ticket.&#xA;&#xA;Conclusions&#xA;&#xA;The HUNSN 4K runs, alongside the other &#34;little electronic contraptions,&#34; on a rack next to my armchair - the one where, at the end of the day, I indulge my guilty pleasure of reading a book in the company of my cats. Proxmox, the Nextcloud server, the ZFS NAS, a small MINISFORUM box running Ollama with some local open-weight LLM models, a Raspberry Pi 5 running the Tor Relay, and a HUNSN RJ15 with pfSense controlling incoming and outgoing traffic. An infrastructure, in short, that allows me to have something resembling digital sovereignty within the limits of the possible. The contradictions I have described do not resolve. They are held together, with effort, as any intellectually complex position on a complex system must be held together.&#xA;&#xA;The first: the market that made the accessible homelab possible is the same market the homelab is supposed to emancipate us from. If this explosion of affordable, efficient mini PCs had not happened - if capitalism had not decided to build exactly what we wanted - how many of us would have taken the same path? How much of our &#34;ethical choice&#34; depends on the existence of products designed and sold precisely for us?&#xA;&#xA;The second: does incorporated resistance truly lose its force, or does it remain resistance even when someone profits from it? Boltanski and Chiapello describe the incorporation mechanism, but do not argue that critique loses all effectiveness in the process. Perhaps the homelab is simultaneously a product of the system and a real, if partial, form of withdrawal from it. The two things are not mutually exclusive.&#xA;&#xA;The third: if digital autonomy requires decades of accumulated skills, enough free time to use them, and enough money to buy the hardware, are we building a democratic alternative? 
Or are we building an exclusive club with a rebel aesthetic, reproducing the same hierarchies of privilege it claims to want to fight?&#xA;&#xA;The fourth: the energy question has no clean answer, and Jevons&#39; Paradox makes it even more uncomfortable - because it works in both directions. The cloud improves efficiency and increases total consumption. A homelab consumes proportionally more, but does not fuel the demand that drives that total consumption upwards. Are we building digital sovereignty, or are we simply choosing where to position ourselves within a contradiction that cannot be resolved at the individual level?&#xA;&#xA;I don&#39;t know. But at least I know where my data is.&#xA;&#xA;Fun Fact&#xA;&#xA;This article was written in Markdown using a Flatnotes instance running as a CT container on Proxmox, while listening to a symphonic metal playlist served by Navidrome - another CT container - pulling OGG files from a ZFS NAS over an NFS share. The cited books were in EPUB format on Calibre Web. In the background, Nextcloud on a Raspberry Pi 4 was syncing and backing up everything. Spelling mistakes were corrected by Qwen2.5, an LLM model served by Ollama on the MINISFORUM box, accessible locally via oterm and Open WebUI. And all of this, controlled from a laptop running Linux.&#xA;&#xA;Coincidences? I don&#39;t think so.&#xA;&#xA;&lt;a href=&#34;https://remark.as/p/jolek78/reflections-on-an-impossible-escape-from-capitalism&#34;&gt;Discuss...&lt;/a&gt;&#xA;&#xA;#Homelab #SelfHosted #SurveillanceCapitalism #Privacy #OpenSource #HackerEthic #SolarPunk #DigitalSovereignty #FOSS #Linux #Writing&#xA;&#xA;&lt;div class=&#34;center&#34;&gt;&#xA;· 🦣 &lt;a href=&#34;https://fosstodon.org/@jolek78&#34;&gt;Mastodon&lt;/a&gt; · 📸 &lt;a href=&#34;https://pixelfed.social/jolek78&#34;&gt;Pixelfed&lt;/a&gt; · 📬 &lt;a href=&#34;mailto:jolek78@jolek78.dev&#34;&gt;Email&lt;/a&gt; ·&#xA;· ☕ &lt;a href=&#34;https://liberapay.com/jolek78&#34;&gt;Support this work on Liberapay&lt;/a&gt;&#xA;&lt;/div&gt;]]&gt;</description>
      <content:encoded><![CDATA[<p>It was an ordinary Friday evening. The parcel had arrived with the courier that morning, but I only opened it after dinner, with that silent ceremony I perform every time new hardware shows up – as if opening a box too quickly were a form of disrespect toward the object. Inside was a HUNSN 4K. Small, almost ridiculously small. A mini PC in a form factor that fit in the palm of a hand. I put it on the table, looked at it. Looked at it again. And then an uncomfortable thought occurred to me. I had ordered it from a Chinese reseller, paid with a credit card, through a completely traceable payment infrastructure, from one of the most centralised and surveilled commercial ecosystems in existence. To build a homelab that would let me escape centralised and surveilled ecosystems.</p>
<p>The funny thing – funny in the sense that it makes you laugh, but badly – is that I&#39;m not alone. Every day, somewhere in the world, someone orders a mini PC, a Raspberry Pi, a managed Mikrotik switch, with the stated goal of taking back control of their digital life. They order it on Alibaba, pay with PayPal, wait for the courier. And they see nothing strange in any of this, because the contradiction is so structural it has become invisible. This article is an attempt to make it visible again. Without easy solutions, because I don&#39;t have any. And when have I ever…</p>

<h2 id="the-promise-of-the-homelab">The Promise of the Homelab</h2>

<p>When, in 2019, I started self-hosting pretty much everything – Nextcloud (always on a Raspberry Pi, first RPi3 then RPi4), Jellyfin, Navidrome, FreshRSS, and about twenty-five other services on Proxmox LXC, each with its own isolated Docker daemon – I did it with a precise motivation: I wanted to know where my data lived, who could read it, and have the ability to switch it off myself if I ever felt like it. Not when a company decides to shut down a service, not when someone else changes the licence terms. Me. This came after a long period of reflection on myself, the work I was doing and still do, and the technological society I live in. It is an ideological choice before it is a technical one. Technology as a tool for autonomy rather than control; infrastructure as something you own instead of something that owns you. I hope no one is alarmed if I say that some of these reflections began with reading Theodore Kaczynski&#39;s Manifesto, before eventually landing, of course, on more authoritative sources.</p>
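<p>For the curious: the &quot;isolated Docker daemon per service&quot; pattern on Proxmox boils down to enabling nesting on each LXC guest. A minimal sketch with the stock <code>pct</code> CLI – the VMID, hostname, template filename, and resource sizes here are placeholders for illustration, not my actual configuration:</p>

```shell
# Fetch a Debian container template once
# (the exact filename depends on what `pveam available` lists)
pveam update
pveam download local debian-12-standard_12.7-1_amd64.tar.zst

# One unprivileged container per service; nesting=1 allows a Docker
# daemon to run inside the LXC guest, keyctl=1 keeps systemd happy
pct create 101 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname navidrome \
  --memory 1024 --cores 2 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --features nesting=1,keyctl=1 \
  --unprivileged 1
pct start 101

# Inside the guest, install Docker as usual: each CT gets its own daemon
pct exec 101 -- sh -c "apt-get update && apt-get install -y docker.io"
```

<p>Repeat per service, and each Docker daemon stays confined to its own container: a misbehaving stack in one CT cannot see the others.</p>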

<p>Yes, I&#39;m mad, but not quite that mad…</p>

<p>When you pay a subscription to a cloud service, the transaction does not end the moment you authorise the electronic payment. Shoshana Zuboff, in <em>The Age of Surveillance Capitalism</em>, calls this mechanism <em>behavioral surplus</em>: the behavioural data extracted beyond what is needed to provide the service, then resold as predictive raw material.</p>

<blockquote><p>Under the regime of surveillance capitalism, however, the first text does not stand alone; it trails a shadow close behind. The first text, full of promise, actually functions as the supply operation for the second text: the shadow text. Everything that we contribute to the first text, no matter how trivial or fleeting, becomes a target for surplus extraction. That surplus fills the pages of the second text. This one is hidden from our view: “read only” for surveillance capitalists. In this text our experience is dragooned as raw material to be accumulated and analyzed as means to others&#39; market ends. The shadow text is a burgeoning accumulation of behavioral surplus and its analyses, and it says more about us than we can know about ourselves. Worse still, it becomes increasingly difficult, and perhaps impossible, to refrain from contributing to the shadow text. It automatically feeds on our experience as we engage in the normal and necessary routines of social participation.</p></blockquote>

<p>You are not the customer of the system – you are its product. Your habits, your schedules, your preferences, your hesitations before clicking on something: all of this is collected, modelled, sold. The transaction is not monthly: it is continuous, invisible, and never ends as long as you use the service. With hardware, in principle, the transaction is one-time: you buy, you pay, it ends, it is yours. The disk is in your room, not on a server subject to government requests, security breaches, or business decisions that are nothing to do with you but impact your access to those services. This distinction – between a tool you use and a system that uses you – is the real stake of the homelab. It is not about saving money, it is not about performance. It is about who controls what.</p>

<p>The problem is that building this infrastructure requires hardware, time, knowledge, and resources. The hardware comes from somewhere; the time, the knowledge, and the energy resources come from a privilege not granted to everyone.</p>

<h2 id="the-market-i-hadn-t-seen">The Market I Hadn&#39;t Seen</h2>

<p>Search for “mini PC homelab” on any marketplace. What you find is a productive ecosystem that has exploded over the past five years in a way I honestly did not expect.</p>

<p>MINISFORUM, Beelink, Trigkey, Geekom, GMKtec. Zimaboard, with its single-board aesthetic designed explicitly for those who want home racks. Raspberry Pi and the galaxy of clones – Orange Pi, Rock Pi, Banana Pi. Managed Mikrotik switches at accessible prices. 1U rack cases to mount under the desk. M.2 NVMe SSDs with TBW figures calculated for small-server workloads. Silent power supplies designed to run 24/7. A market built from scratch, one that exists precisely because there is a community of people who want to run servers at home. r/homelab and r/selfhosted on Reddit have approximately 2.8 and 1.7 million members respectively – numbers publicly verifiable, and growing. YouTube is full of dedicated channels. There is an entire attention economy built around “escaping” the attention economy.</p>

<p>But it is worth asking: who built this market, and why. MINISFORUM and Beelink do not exist out of ideological sympathy for the homelab movement. They exist because they identified a profitable segment and served it with industrial precision. Kate Crawford, in <em>Atlas of AI</em>, documents how technology supply chains follow niche demand with the same efficiency with which they follow mass demand: factories in Guangdong optimise production lines not for a worldview, but for a margin. The fact that the resulting product also satisfies an ideological need is, from the manufacturer&#39;s point of view, irrelevant.</p>

<blockquote><p>The Victorian environmental disaster at the dawn of the global information society shows how the relations between technology and its materials, environments, and labor practices are interwoven. Just as Victorians precipitated ecological disaster for their early cables, so do contemporary mining and global supply chains further imperil the delicate ecological balance of our era.</p></blockquote>

<p>The mechanism had been described with theoretical precision back in 1999 by Luc Boltanski and Ève Chiapello in <em>The New Spirit of Capitalism</em>. Their thesis: capitalism is never defeated by criticism – it is incorporated. When a critique becomes widespread enough, the system absorbs it and transforms it into a market segment. The artistic critique of the 1960s – autonomy, authenticity, rejection of standardisation – became the marketing of the creative economy. The critique of digital centralisation – sovereignty, privacy, control – has become an online catalogue to browse through.</p>

<p>Resistance has become a market segment. Every time someone buys a HUNSN to stop paying subscriptions to services they don&#39;t control, a factory in Guangdong sells a HUNSN. Capitalism has not been defeated – it has shifted (at least for a small slice of the population: the nerds, the hackers) the extraction point from subscriptions to hardware.</p>

<h2 id="the-accumulation-syndrome">The Accumulation Syndrome</h2>

<p>But there is a further level – more ridiculous and more personal – that homelab communities never discuss openly, yet anyone who has a homelab recognises immediately. The Raspberry Pi 4 bought “for a project.” The old ThinkPad kept because “you never know.” The 4TB disk salvaged from a decommissioned NAS – and “it might come in handy.” The second-hand switch picked up on eBay for eighteen euros because it was cheap and might be useful. The cables, the cables, the cables.</p>

<p>r/homelab has a term for this: <em>just in case hardware</em>. It is the hardware of the imaginary future, of projects that only exist in your head, of configurations that one day – one day – you will finally test. In the meantime it occupies a shelf, draws current in standby, and generates a diffuse sense of possibility that is indistinguishable from the most classic consumerism. The underlying psychological mechanism has a precise name: <em>compensatory consumption</em> – consumption as a response to a perceived loss of autonomy or control. You buy hardware because buying hardware gives you the feeling of recovering agency over something. The aesthetic is different from traditional consumerism – no luxury logos, no recognisable status symbols – but the mechanism is identical.</p>

<p>That said, there is a partially honest answer to all of this: the second-hand and refurbished market. The ThinkPad X230 on eBay, the Dell R720 server decommissioned from a datacentre, the disk from someone who upgraded their NAS. My ZFS NAS, to give one example, is a recycled old tower with four 1TB disks in RAIDZ – hardware that would otherwise have ended up in landfill, with a life cycle extended by years, without generating new production demand. It is closer to the ethics of repair than to compulsive buying. But it has its own internal contradiction: it requires even more technical competence than buying new – knowing how to assess wear, diagnose an unknown component, manage ten-year-old drivers. The barrier to entry rises further. And the refurbished market is itself now an organised commercial sector, with its own margins, its own platforms, its own pricing logic. It is not a clean way out. It is a less dirty way out.</p>
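<p>For anyone weighing the same recycled-NAS route, the capacity trade-off is simple arithmetic. A minimal sketch – assuming RAIDZ1 (single parity) and ignoring ZFS metadata, padding, and reserved slop space, so real-world figures come out somewhat lower:</p>

```python
# Rough usable capacity of a RAIDZ vdev: (n_disks - parity) * disk_size.
# Illustrative only: ignores ZFS metadata, allocation padding, and the
# reserved slop space, all of which shave off a few percent in practice.

def raidz_usable_tb(n_disks: int, disk_tb: float, parity: int = 1) -> float:
    """Approximate usable space of a RAIDZ vdev, in TB."""
    if n_disks <= parity:
        raise ValueError("need more disks than parity devices")
    return (n_disks - parity) * disk_tb

# Four recycled 1 TB disks in RAIDZ1, as in the tower described above:
print(raidz_usable_tb(4, 1.0))             # → 3.0 (TB, before overhead)
print(raidz_usable_tb(4, 1.0, parity=2))   # RAIDZ2 trade-off → 2.0
```

<p>The same formula shows why the second-hand route tolerates disk failures: moving from RAIDZ1 to RAIDZ2 costs one more disk of capacity but survives two simultaneous failures – a sensible hedge when the drives are ten years old.</p>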

<p>And then there is the energy question, which is usually ignored in homelab discussions and is instead the most uncomfortable of all – uncomfortable enough to deserve a more in-depth treatment later on. For now, suffice it to say: every machine on your shelf that “draws current in standby” is a line item in the energy bill that the homelab movement rarely accounts for.</p>

<h2 id="not-for-everyone-and-it-should-not-be-this-way">Not for Everyone. And It Should Not Be This Way.</h2>

<p>There is a second level of the paradox that is even more uncomfortable than the first. Building a homelab costs money – relatively little, but it costs. It requires physical space. It requires a decent connection. And it requires time. A lot of time. Not installation time – that is measurable, finite. The learning time that precedes everything else. To reach the point where you can build a functional infrastructure with Proxmox, LXC containers, centralised authentication, reverse proxy, automated backups – you need to have already spent years understanding how Linux works, how to reason about networks and permissions, how to read a log. I started with Red Hat in 1997, and it took me almost thirty years to get where I am. I should know this. Yet it always escapes me. And that time did not fall from the sky. It is time I was able to dedicate because I had a certain kind of job, a certain stability, a certain amount of mental energy left at the end of the day. It is middle-class-with-a-stable-position time, not the time of someone working three warehouse shifts a week. Passion is not enough.</p>

<p>Johan Söderberg documents this in <em>Hacking Capitalism</em>: the FOSS movement was born as resistance to capitalism, but reproduces within itself hierarchies of skill and merit that make it structurally exclusive. Freedom is technically available to anyone, but effective access requires resources distributed in anything but a democratic manner. Söderberg goes further than simply observing the exclusivity: the voluntary open source work produces use value – functioning software, documentation, community support – that capital then extracts as <em>exchange value</em> without remunerating those who produced it. Red Hat builds a billion-dollar company on a kernel written largely by volunteers. It is not just that not everyone can get in: it is that those who get in often work for someone without knowing it. The homelab inherits this problem and amplifies it.</p>

<blockquote><p>The narrative of orthodox historical materialism corresponds with some very popular ideas in the computer underground. It is widely held that the infinite reproducibility of information made possible by computers (forces of production) has rendered intellectual property (relations of production, superstructure) obsolete. The storyline of post-industrial ideology is endorsed but with a different ending. Rather than culminating in global markets, technocracy and liberalism, as Daniel Bell and the futurists would have it; hackers are looking forward to a digital gift economy and high-tech anarchism. In a second turn of events, hackers have jumped on the distorted remains of Marxism presented in information-age literature, and, while missing out on the vocabulary, ended up promoting an upgraded Karl Kautsky-version of historical materialism.</p></blockquote>

<p>This is not a quirk of the homelab movement: it is a recurring structure in every technological wave. Langdon Winner, in his influential essay <em>Do Artifacts Have Politics?</em>, argued that technological choices are never neutral – they incorporate power structures, distribute access in non-random ways. Amateur radio in the 1920s, the personal computer in the 1980s, the internet in the 1990s: every time the promise was democratising, every time the actual distribution followed the lines of pre-existing privilege. Not out of malice, but out of structure. The irony is this: those who would most need digital autonomy – those who cannot afford subscriptions, those who live under governments that surveil communications, those most exposed to data collection – are exactly those least likely to be able to build a homelab. Not for lack of interest or intelligence. For lack of time, money, and years of privileged exposure to technology.</p>

<p>Homelab communities do not usually talk about this. They talk about which mini PC to buy, how to optimise energy consumption, which distro to use as a base. The conversation about structural exclusivity exists, but at the margins – in Jacobin, in Logic Magazine, in EFF activism – while the centre of the discourse remains impermeable. It is not that no one speaks about it: it is that the peripheries speak about it, and the peripheries do not set the agenda. This entire conversation takes place in a room to which not everyone has a ticket. And those inside do not seem to find that particularly problematic.</p>

<h2 id="a-technological-cosplay">A Technological Cosplay?</h2>

<p>So is the whole thing a con? Is the homelab just anti-capitalist cosplay while you continue to fund the same supply chains? In part, yes.</p>

<p>The HUNSN 4K was designed in China, assembled in China, shipped by container on ships burning bunker fuel. Global maritime transport is responsible for approximately 2.5% of global CO₂ emissions – a share that the IMO (International Maritime Organization) has been trying to reduce for years, with slow progress and targets continually postponed. Then: distributed through Alibaba, paid with a credit card. Every piece of technology hardware carries an extractive chain that begins in lithium mines in Bolivia and cobalt mines in the Democratic Republic of the Congo, passes through factories in Guangdong, and ends in electronic waste processing centres in Ghana. The hardware travels that supply chain exactly like any other consumer device. Furthermore, hardware has a lifecycle. In five years the HUNSN 4K will be too slow, or it will break, or something will come out whose energy efficiency is too much better to ignore. And I will buy again. The mini PC market for homelabs depends on the obsolescence of previous purchases – exactly like any other consumer market.</p>

<p>The critique of capitalism, when it is widespread enough, is not suppressed – it is incorporated. The system absorbs the values of resistance and transforms them into a market segment. Autonomy becomes a selling point. Decentralisation becomes a brand. The rebel who wanted to exit the system finds himself funding a new vertical of the same system, convinced he is making an ethical choice.</p>

<h2 id="the-counter-shot">The Counter-Shot</h2>

<p>But there is a structural difference that would be dishonest to ignore.</p>

<p>When you pay a subscription to a cloud service, the cost is not just the monthly fee. It is the continuous cession of data, behaviours, habits. It is the behavioral surplus Zuboff talks about: you are not using a service, you are being used as raw material to train models, build profiles, sell advertising. The transaction never ends, in ways you often cannot see and cannot escape from as long as you use the service.</p>

<p>With hardware, the transaction ends. The data stays on a physical disk in your room, not on a server subject to government requests, breaches, or business decisions that have nothing to do with you but impact your life. The software running on it – Proxmox, Debian, Nextcloud, Jellyfin – is open source; you can modify it. If something changes in a way you cannot accept, you can leave. This resilience has real value – but it is worth noting that it is asymmetric resilience: it works for those who have the skills to exercise it. For those who do not, the theoretical portability of their data from Nextcloud to something else requires exactly the same skills we have already identified as the barrier to entry. The freedom to leave is real. Access to that freedom, much less so.</p>

<p>And then there is the energy question, which I have deferred long enough. The major hyperscalers – AWS, Google, Azure – operate with a PUE (Power Usage Effectiveness) between 1.1 and 1.2. For every watt of useful computation they dissipate barely 0.1–0.2 watts in heat and infrastructure. They have enormous economies of scale, optimised industrial cooling, significant investments in renewable energy, and above all: their servers run at very high utilisation rates. Almost always busy.</p>

<p>A homelab works in a radically different way. The machine runs 24/7 even when it is doing nothing – and for most of the time it is doing nothing. Navidrome serving three requests a day, FreshRSS fetching every hour, an LDAP container sitting listening without receiving connections. You are paying the energy cost of the infrastructure regardless of usage. The implicit PUE of a homelab, calculated honestly on the ratio between total consumption and actual workload, is much worse than that of a datacentre. IEA data (<em>Data Centres and Data Transmission Networks</em>, updated annually) shows that large cloud providers progressively improve energy efficiency thanks to economies of scale that no individual homelab can replicate. The flip side is that the same growth in demand that makes economies of scale possible negates the efficiency gains: Amazon&#39;s absolute emissions increased between 2023 and 2024 despite improved PUE. Efficiency improves. Total consumption grows anyway. This is Jevons&#39; Paradox: energy efficiency, instead of reducing consumption, increases it, because it lowers the marginal cost of use and stimulates demand that grows faster than the efficiency gains.</p>
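<p>The “implicit PUE” claim can be made concrete with back-of-the-envelope arithmetic. A minimal sketch – the 8 W idle draw, 20 W load draw, and 5% average utilisation are illustrative assumptions, not measurements:</p>

```python
# Back-of-the-envelope energy arithmetic for an always-on homelab box.
# All figures are illustrative assumptions: 8 W idle (an N100-class figure),
# 20 W under load, 5% average utilisation.

HOURS_PER_YEAR = 24 * 365  # 8760

def annual_kwh(idle_w: float, load_w: float, utilisation: float) -> float:
    """Yearly energy use of a machine that idles most of the time."""
    avg_w = idle_w * (1 - utilisation) + load_w * utilisation
    return avg_w * HOURS_PER_YEAR / 1000

def implicit_pue(idle_w: float, load_w: float, utilisation: float) -> float:
    """Total average draw divided by 'useful' draw (load power x utilisation):
    the honest-but-crude ratio described in the text."""
    avg_w = idle_w * (1 - utilisation) + load_w * utilisation
    useful_w = load_w * utilisation
    return avg_w / useful_w

print(round(annual_kwh(8, 20, 0.05), 1))    # → 75.3 kWh/year
print(round(implicit_pue(8, 20, 0.05), 1))  # → 8.6
```

<p>Even under these generous assumptions the ratio lands an order of magnitude above a hyperscaler&#39;s 1.1–1.2 – which is precisely the point: the gap is structural, driven by utilisation, not by any inefficiency of the hardware itself.</p>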

<blockquote><p><em>Note: The comparison is not as linear as the numbers suggest. PUE measures the internal efficiency of a datacentre, not the energy cost of the network traffic that data generates every time it leaves it – traffic that a homelab eliminates almost completely for internal services. Nor does it measure proportion: AWS is efficient at delivering services to millions of users, but that scale says nothing about the real cost of storing fifty gigabytes of personal data on a server designed for loads a thousand times greater. A HUNSN N100 in idle consumes less than 8 watts. The honest energy comparison is not homelab vs hyperscaler in the abstract – it is homelab vs proportional share of hyperscaler for your specific workload, a calculation that nobody can make with publicly available data.</em></p></blockquote>

<p>This does not automatically mean that the cloud is the ethically correct choice – the problem does not reduce to PUE, and surveillance has costs that are not measured in kilowatts. It means that anyone with SolarPunk values who chooses the homelab must reckon with a real contradiction: the choice of sovereignty may be, watt for watt, energetically more costly than the system one wants to escape. I have no clean answer, but ignoring the question would be dishonest. Söderberg acknowledges that the FOSS movement has produced concrete and undeniable gains – they simply are not enough, on their own, to subvert the dynamics of informational capitalism.</p>

<p>In short: this is not a critique of the homelab, but it is a critique of the homelab presented as a sufficient revolutionary act.</p>

<h2 id="what-happens-at-eleven-pm-and-beyond">What Happens at Eleven PM – and Beyond</h2>

<p>That night, with the HUNSN 4K on the table, I pressed on. I installed Proxmox. I configured the network. I started bringing up containers one by one. And at some point – three hours had passed, I had three terminals open and was debugging nslcd to centralise LDAP authentication across all the containers – I realised something: I was doing all of this simply because I enjoyed it. Not to resist something. Not to advance an ideological agenda. Because there was a problem to solve and solving it gave me satisfaction. Mihaly Csikszentmihalyi describes this state in <em>Flow</em> as total absorption in a task calibrated to one&#39;s own competencies: time expands, attention narrows, awareness of context vanishes. It is not motivation – it is something more immediate. Debugging an authentication problem at eleven at night on a system I could have chosen not to build is, neuropsychologically, indistinguishable from pleasure. Not the satisfaction of having finished: the process itself. Moreover, for an AuDHD person like me, going into hyperfocus allows you to lose your sense of time entirely, and to literally escape from a world you viscerally loathe.</p>

<p>Ah – you had not figured that out yet?</p>

<p>When I had finished and closed everything, the satisfaction was still there. Along with a mildly uncomfortable awareness: I could probably have used a hosted service, lived just as well, and not lost three hours of a weeknight. But in the meantime I had understood how PAM worked, I had read documentation I had never opened before, I had implemented it on my homelab, I had learned something I hadn&#39;t known I wanted to know.</p>

<p>And here the circle closes in a somewhat unsettling way. Söderberg speaks of voluntary open source work as the production of pure use value – the intrinsic pleasure of doing, understanding, building something that works. But it is exactly this use value that capital then extracts as exchange value: the competence I accumulate debugging LDAP at eleven at night is the same competence I bring to work the next day, that I put into articles like this one, that I share in communities where others use it to build their own homelabs. Technical pleasure is not neutral. It has a production chain. Not always visible, but real.</p>

<p>This is what the homelab is, at least for me: a way of learning that produces, as a side effect, an infrastructure I control. The ideology is there, but it comes second. First comes the pleasure of understanding how something works. Or rather: ideology and pleasure are interchangeable, and often run in parallel – but this does not resolve any of the contradictions I described above. It leaves them all standing, in fact makes them stranger. Am I resisting capitalism, or am I just cultivating an expensive hobby with a political aesthetic?</p>

<h2 id="the-hacker-ethic">The Hacker Ethic</h2>

<p>The word “hacker” has had bad press for decades. In 1990s news bulletins it was a synonym for a hooded cybercriminal; in the jargon of security companies it became a marketing term to prepend to anything. Neither has much to do with what the word historically means. Steven Levy, in <em>Hackers: Heroes of the Computer Revolution</em>, reconstructs the culture that formed around the MIT and Stanford labs in the 1960s: a community of programmers for whom code was an aesthetic object, access to information a moral principle, and technical competence the only legitimate hierarchy. The principles Levy identifies as the “hacker ethic” are precise: access to computers – and to anything that can teach you how the world works – should be unlimited and total. All information should be free. Decentralised systems are preferable to centralised ones. Hackers should be judged by what they produce, not by titles, age, race, or position. You can create art and beauty with a computer.</p>

<p>It is not a political manifesto in the traditional sense. It is something more visceral – a disposition toward the world, a way of standing before a system you do not yet understand: the correct response is to take it apart, understand how it works, and put it back together better than before.</p>

<p>Pekka Himanen, in <em>The Hacker Ethic and the Spirit of the Information Age</em> – with a preface by Linus Torvalds and an epilogue by Manuel Castells, which already says something about the project&#39;s ambition – performs a more explicit theoretical operation. He builds the hacker ethic in direct opposition to the Protestant work ethic described by Max Weber: where Weber saw work as duty, discipline as virtue, and leisure as absence of production, Himanen identifies in the hacker a figure who works out of passion, considers play an integral part of work, and rejects the sharp separation between productive time and free time. The hacker does not work for money – money is a side effect, when it comes. They work because the problem is interesting. Because the elegant solution has value in itself. Because understanding how something works is, in and of itself, sufficient.</p>

<blockquote><p>Hacker activity is also joyful. It often has its roots in playful explorations. Torvalds has described, in messages on the Net, how Linux began to expand from small experiments with the computer he had just acquired. In the same messages, he has explained his motivation for developing Linux by simply stating that “it was/is fun working on it.” Tim Berners-Lee, the man behind the Web, also describes how this creation began with experiments in linking what he called “play programs.” Wozniak relates how many characteristics of the Apple computer “came from a game, and the fun features that were built in were only to do one pet project, which was to program … [a game called] Breakout and show it off at the club.”</p></blockquote>

<p>Recognise something? I do. Those three hours debugging nslcd at eleven at night were not work in the Weberian sense – nobody was paying me, nobody had asked me to do it, there was no corporate objective to reach. They were hacking in the precise sense that Levy and Himanen describe: exploration motivated by curiosity, with the infrastructure as an object of study as much as of utility. The homelab is, culturally, a direct expression of the hacker ethic. It is no coincidence that homelab communities and open source communities overlap almost perfectly, that they use the same language, the same platforms, the same values. But here, as elsewhere in this article, the story gets complicated.</p>

<p>The hacker ethic promises a pure meritocracy: you are judged by what you can do, not by who you are. It is an attractive idea. It is also, in practice, a partial fiction. Technical meritocracy presupposes that everyone starts from the same point – that skills are accessible to anyone who really wants to acquire them, that the time to acquire them is distributed equally, that mentorship networks and learning resources are available regardless of context. The homelab as hacker practice inherits both things: the genuine nature of curiosity as a driver, and structural exclusivity as an undeclared side effect. The pleasure of taking a system apart to understand how it works is real and should not be devalued. But that pleasure is available, in practice, to those who already have the ticket.</p>

<h2 id="conclusions">Conclusions</h2>

<p>The HUNSN 4K runs, alongside the other “little electronic contraptions,” on a rack next to my armchair – the one where, at the end of the day, I indulge my guilty pleasure of reading a book in the company of my cats. Proxmox, the Nextcloud server, the ZFS NAS, a small MINISFORUM box running Ollama with some local open-weight LLM models, a Raspberry Pi 5 running the Tor Relay, and a HUNSN RJ15 with pfSense controlling incoming and outgoing traffic. An infrastructure, in short, that allows me to have something resembling digital sovereignty within the limits of the possible. The contradictions I have described do not resolve. They are held together, with effort, as any intellectually complex position on a complex system must be held together.</p>

<p>The first: the market that made the accessible homelab possible is the same market the homelab is supposed to emancipate us from. If this explosion of affordable, efficient mini PCs had not happened – if capitalism had not decided to build exactly what we wanted – how many of us would have taken the same path? How much of our “ethical choice” depends on the existence of products designed and sold precisely for us?</p>

<p>The second: does incorporated resistance truly lose its force, or does it remain resistance even when someone profits from it? Boltanski and Chiapello describe the incorporation mechanism, but do not argue that critique loses all effectiveness in the process. Perhaps the homelab is simultaneously a product of the system and a real, if partial, form of withdrawal from it. The two things are not mutually exclusive.</p>

<p>The third: if digital autonomy requires decades of accumulated skills, enough free time to use them, and enough money to buy the hardware, are we building a democratic alternative? Or are we building an exclusive club with a rebel aesthetic, reproducing the same hierarchies of privilege it claims to want to fight?</p>

<p>The fourth: the energy question has no clean answer, and Jevons&#39; Paradox makes it even more uncomfortable – because it works in both directions. The cloud improves efficiency and increases total consumption. A homelab consumes proportionally more, but does not fuel the demand that drives that total consumption upwards. Are we building digital sovereignty, or are we simply choosing where to position ourselves within a contradiction that cannot be resolved at the individual level?</p>

<p>I don&#39;t know. But at least I know where my data is.</p>

<h2 id="fun-fact">Fun Fact</h2>

<p>This article was written in Markdown using a Flatnotes instance running as a CT container on Proxmox, while listening to a symphonic metal playlist served by Navidrome – another CT container – pulling OGG files from a ZFS NAS over an NFS share. The cited books were in EPUB format on Calibre Web. In the background, Nextcloud on a Raspberry Pi 4 was syncing and backing up everything. Spelling mistakes were corrected by Qwen2.5, an LLM model served by Ollama on the MINISFORUM box, accessible locally via oterm and Open WebUI. And all of this, controlled from a laptop running Linux.</p>

<p>Coincidences? I don&#39;t think so.</p>

<p><a href="https://remark.as/p/jolek78/reflections-on-an-impossible-escape-from-capitalism">Discuss...</a></p>

<p><a href="https://jolek78.writeas.com/tag:Homelab" class="hashtag"><span>#</span><span class="p-category">Homelab</span></a> <a href="https://jolek78.writeas.com/tag:SelfHosted" class="hashtag"><span>#</span><span class="p-category">SelfHosted</span></a> <a href="https://jolek78.writeas.com/tag:SurveillanceCapitalism" class="hashtag"><span>#</span><span class="p-category">SurveillanceCapitalism</span></a> <a href="https://jolek78.writeas.com/tag:Privacy" class="hashtag"><span>#</span><span class="p-category">Privacy</span></a> <a href="https://jolek78.writeas.com/tag:OpenSource" class="hashtag"><span>#</span><span class="p-category">OpenSource</span></a> <a href="https://jolek78.writeas.com/tag:HackerEthic" class="hashtag"><span>#</span><span class="p-category">HackerEthic</span></a> <a href="https://jolek78.writeas.com/tag:SolarPunk" class="hashtag"><span>#</span><span class="p-category">SolarPunk</span></a> <a href="https://jolek78.writeas.com/tag:DigitalSovereignty" class="hashtag"><span>#</span><span class="p-category">DigitalSovereignty</span></a> <a href="https://jolek78.writeas.com/tag:FOSS" class="hashtag"><span>#</span><span class="p-category">FOSS</span></a> <a href="https://jolek78.writeas.com/tag:Linux" class="hashtag"><span>#</span><span class="p-category">Linux</span></a> <a href="https://jolek78.writeas.com/tag:Writing" class="hashtag"><span>#</span><span class="p-category">Writing</span></a></p>

<div class="center">
· 🦣 <a href="https://fosstodon.org/@jolek78">Mastodon</a> · 📸 <a href="https://pixelfed.social/jolek78">Pixelfed</a> ·  📬 <a href="mailto:jolek78@jolek78.dev">Email</a> ·
· ☕ <a href="https://liberapay.com/jolek78">Support this work on Liberapay</a>
</div>
]]></content:encoded>
      <guid>https://jolek78.writeas.com/reflections-on-an-impossible-escape-from-capitalism</guid>
      <pubDate>Sun, 05 Apr 2026 15:46:47 +0000</pubDate>
    </item>
    <item>
      <title>Planet of Lana: another gem</title>
      <link>https://jolek78.writeas.com/planet-of-lana-a-solarpunk-gem-worth-discovering?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[From time to time, to completely disconnect from everything and everyone, I turn back into a kid and immerse myself in video games. I&#39;m slow, I admit it: a game that would normally take 4-5 hours, I finish in at least quadruple the time. But every now and then, among the depths of Steam, I encounter genuine gems. And last night I finally completed Planet of Lana, a 2023 indie game that had been sitting in my library for months. The plot is straightforward but effective: Lana and Elo, presumably brother and sister, live in a peaceful fishing village built on stilts, where life flows serenely in harmony with nature. But this peace is shattered when a group of robots assault the village, kidnapping some inhabitants including Elo himself. From here begins Lana&#39;s odyssey: a journey to the edges of the known world to find and save her brother.&#xA;&#xA;!--more--&#xA;&#xA;Solarpunk Hieroglyphs&#xA;The game presents itself with a now well-established formula in the indie landscape: progression based on environmental puzzles that mark the passage from one section to another. But what truly struck me were the hieroglyphs scattered throughout the journey. These ancient inscriptions tell of an era when coexistence between humans and machines was peaceful and harmonious. It&#39;s pure solarpunk: natural elements perfectly integrated with technology, a vision of sustainable future that we rarely see in video games.&#xA;&#xA;I took all the time necessary to study these glyphs, to understand their deeper meaning. And it was worth it: they represent the thematic heart of the game, that common thread connecting past and present.&#xA;&#xA;planetoflana&#xA;&#xA;Meet Mui&#xA;During the journey, Lana meets Mui, an extraordinary little creature that looks like a cross between a cat and... something alien. Mui immediately becomes indispensable: jumping, untying ropes, distracting enemy machines. 
And, like all self-respecting cats, he&#39;s a sweetheart who&#39;s terrified of water and needs to be transported from shore to shore on rafts.&#xA;&#xA;The path is varied though, at times, the puzzles follow a repetitive logic. But the real protagonist is exploration, taking the time to observe every detail of this magnificent world.&#xA;&#xA;The Music&#xA;I must spend a few words on the music: it&#39;s simply spectacular. A masterpiece that requires tissues at hand. There&#39;s a recurring theme composed of no more than six notes that enters your soul and never leaves. Those six notes become the emotional thread of the entire experience.&#xA;&#xA;The City of Machines&#xA;So I reach the end of the game. Lana arrives at what I&#39;ve dubbed &#34;the city of machines.&#34; But everything is unexpected: no cyberpunk dystopia, no apocalyptic scenarios. Everything is peaceful. Enormous robotic spiders entertain infants in an almost surreal atmosphere.&#xA;&#xA;Then Lana slips and falls into a hidden place: humans are trapped in small transparent domes. She finds Elo. But when she tries to free him, the system detects her presence and triggers the alarm. Desperate escape.&#xA;&#xA;The Ending (Spoilers)&#xA;The final sequence is a small masterpiece of game design and storytelling. You find yourself before an enormous pulsating energy sphere. It&#39;s evident that it cannot be destroyed. And here the unthinkable happens: Mui, until that moment a simple supporting character, becomes the absolute protagonist. He flies toward the sphere, absorbs all its energy and falls to the ground, apparently lifeless.&#xA;&#xA;Silence. Despair. You think Mui has sacrificed himself to save Elo.&#xA;&#xA;But then... those six notes. The game&#39;s main theme gently returns. Mui begins to pulse with iridescent colors and awakens. 
And when you return to explore the world, you understand: by absorbing the sphere&#39;s energy, Mui has taken control of the machines, which now live in peace with the fishermen.&#xA;&#xA;Final Thoughts&#xA;Planet of Lana is one of those games that stays with you. Not because of puzzle difficulty or gameplay innovation, but for its ability to tell a story of hope, sacrifice, and harmony between nature and technology. It&#39;s proof that solarpunk can work beautifully in video games too, offering an alternative to the usual cyberpunk dystopias.&#xA;&#xA;If you&#39;re looking for a relaxing yet emotionally intense experience, with sublime art direction and a soundtrack to jealously preserve in your playlist, Planet of Lana absolutely deserves your time.&#xA;&#xA;Even if, in my case, it was much more than four hours.&#xA;&#xA;#PlanetOfLana #Gaming #IndieGames #Solarpunk #GameReview #PuzzleGames #Writing&#xA;&#xA;a href=&#34;https://remark.as/p/jolek78/planet-of-lana-a-solarpunk-gem-worth-discovering&#34;Discuss.../a&#xA;&#xA;div class=&#34;center&#34;&#xD;&#xA;· 🦣 a href=&#34;https://fosstodon.org/@jolek78&#34;Mastodon/a · 📸 a href=&#34;https://pixelfed.social/jolek78&#34;Pixelfed/a ·  📬 a href=&#34;mailto:jolek78@jolek78.dev&#34;Email/a ·&#xD;&#xA;· ☕ a href=&#34;https://liberapay.com/jolek78&#34;Support this work on Liberapay/a&#xD;&#xA;/div]]&gt;</description>
      <content:encoded><![CDATA[<p>From time to time, to completely disconnect from everything and everyone, I turn back into a kid and immerse myself in video games. I&#39;m slow, I admit it: a game that would normally take 4-5 hours, I finish in at least quadruple the time. But every now and then, in the depths of Steam, I encounter genuine gems. And last night I finally completed <strong><a href="https://store.steampowered.com/app/1608230/Planet_of_Lana/">Planet of Lana</a></strong>, a 2023 indie game that had been sitting in my library for months. The plot is straightforward but effective: Lana and Elo, presumably brother and sister, live in a peaceful fishing village built on stilts, where life flows serenely in harmony with nature. But this peace is shattered when a group of robots assaults the village, kidnapping some inhabitants, including Elo himself. From here begins Lana&#39;s odyssey: a journey to the edges of the known world to find and save her brother.</p>



<h3 id="solarpunk-hieroglyphs" id="solarpunk-hieroglyphs">Solarpunk Hieroglyphs</h3>

<p>The game presents itself with a formula by now well established in the indie landscape: progression based on environmental puzzles that mark the passage from one section to another. But what truly struck me were the <strong>hieroglyphs</strong> scattered throughout the journey. These ancient inscriptions tell of an era when coexistence between humans and machines was peaceful and harmonious. It&#39;s pure <strong>solarpunk</strong>: natural elements perfectly integrated with technology, a vision of a sustainable future that we rarely see in video games.</p>

<p>I took all the time necessary to study these glyphs, to understand their deeper meaning. And it was worth it: they represent the thematic heart of the game, that common thread connecting past and present.</p>

<p><img src="https://i.snap.as/XaN1bulL.png" alt="planetoflana"/></p>

<h3 id="meet-mui" id="meet-mui">Meet Mui</h3>

<p>During the journey, Lana meets <strong>Mui</strong>, an extraordinary little creature that looks like a cross between a cat and... something alien. Mui immediately becomes indispensable: jumping, untying ropes, distracting enemy machines. And, like all self-respecting cats, he&#39;s a sweetheart who&#39;s terrified of water and needs to be transported from shore to shore on rafts.</p>

<p>The path is varied, though at times the puzzles follow a repetitive logic. But the real protagonist is <strong>exploration</strong>: taking the time to observe every detail of this magnificent world.</p>

<h3 id="the-music" id="the-music">The Music</h3>

<p>I have to say a few words about the music: it&#39;s simply <strong>spectacular</strong>. A masterpiece that demands tissues at hand. There&#39;s a recurring theme, composed of no more than six notes, that enters your soul and never leaves. Those six notes become the emotional thread of the entire experience.</p>

<h3 id="the-city-of-machines" id="the-city-of-machines">The City of Machines</h3>

<p>So I reach the end of the game. Lana arrives at what I&#39;ve dubbed “the city of machines.” But everything is unexpected: no cyberpunk dystopia, no apocalyptic scenarios. Everything is peaceful. Enormous robotic spiders entertain infants in an almost surreal atmosphere.</p>

<p>Then Lana slips and falls into a hidden place, where humans are trapped in small transparent domes. She finds Elo. But when she tries to free him, the system detects her presence and triggers the alarm. A desperate escape follows.</p>

<h3 id="the-ending-spoilers" id="the-ending-spoilers">The Ending (Spoilers)</h3>

<p>The final sequence is a <strong>small masterpiece</strong> of game design and storytelling. You find yourself before an enormous pulsating energy sphere. It&#39;s evident that it cannot be destroyed. And here the unthinkable happens: <strong>Mui</strong>, until that moment a simple supporting character, becomes the absolute protagonist. He flies toward the sphere, absorbs all its energy, and falls to the ground, apparently lifeless.</p>

<p>Silence. Despair. You think Mui has sacrificed himself to save Elo.</p>

<p>But then... those six notes. The game&#39;s main theme gently returns. Mui begins to pulse with iridescent colors and awakens. And when you return to explore the world, you understand: by absorbing the sphere&#39;s energy, Mui has taken control of the machines, which now live in peace with the fishermen.</p>

<h3 id="final-thoughts" id="final-thoughts">Final Thoughts</h3>

<p><strong>Planet of Lana</strong> is one of those games that stays with you. Not because of puzzle difficulty or gameplay innovation, but for its ability to tell a story of hope, sacrifice, and harmony between nature and technology. It&#39;s proof that solarpunk can work beautifully in video games too, offering an alternative to the usual cyberpunk dystopias.</p>

<p>If you&#39;re looking for a relaxing yet emotionally intense experience, with sublime art direction and a soundtrack worth treasuring in your playlist, Planet of Lana absolutely deserves your time.</p>

<p>Even if, in my case, it was much more than four hours.</p>

<p><a href="https://jolek78.writeas.com/tag:PlanetOfLana" class="hashtag"><span>#</span><span class="p-category">PlanetOfLana</span></a> <a href="https://jolek78.writeas.com/tag:Gaming" class="hashtag"><span>#</span><span class="p-category">Gaming</span></a> <a href="https://jolek78.writeas.com/tag:IndieGames" class="hashtag"><span>#</span><span class="p-category">IndieGames</span></a> <a href="https://jolek78.writeas.com/tag:Solarpunk" class="hashtag"><span>#</span><span class="p-category">Solarpunk</span></a> <a href="https://jolek78.writeas.com/tag:GameReview" class="hashtag"><span>#</span><span class="p-category">GameReview</span></a> <a href="https://jolek78.writeas.com/tag:PuzzleGames" class="hashtag"><span>#</span><span class="p-category">PuzzleGames</span></a> <a href="https://jolek78.writeas.com/tag:Writing" class="hashtag"><span>#</span><span class="p-category">Writing</span></a></p>

<p><a href="https://remark.as/p/jolek78/planet-of-lana-a-solarpunk-gem-worth-discovering">Discuss...</a></p>

<div class="center">
· 🦣 <a href="https://fosstodon.org/@jolek78">Mastodon</a> · 📸 <a href="https://pixelfed.social/jolek78">Pixelfed</a> · 📬 <a href="mailto:jolek78@jolek78.dev">Email</a> ·
· ☕ <a href="https://liberapay.com/jolek78">Support this work on Liberapay</a>
</div>
]]></content:encoded>
      <guid>https://jolek78.writeas.com/planet-of-lana-a-solarpunk-gem-worth-discovering</guid>
      <pubDate>Fri, 14 Nov 2025 23:25:17 +0000</pubDate>
    </item>
  </channel>
</rss>