<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>DigitalSovereignty &#8212; jolek78&#39;s blog</title>
    <link>https://jolek78.writeas.com/tag:DigitalSovereignty</link>
    <description>thoughts from a friendly human being</description>
    <pubDate>Mon, 11 May 2026 03:28:51 +0000</pubDate>
    <image>
      <url>https://i.snap.as/DEj7yFm4.png</url>
      <title>DigitalSovereignty &#8212; jolek78&#39;s blog</title>
      <link>https://jolek78.writeas.com/tag:DigitalSovereignty</link>
    </image>
    <item>
      <title>ARM. The chip we didn&#39;t know we needed</title>
      <link>https://jolek78.writeas.com/arm-the-chip-we-didnt-know-we-needed?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[There are architectures you see and architectures you don&#39;t. ARM is the most extreme case of the second category: it runs in the phone in our pocket, in the home router, in the eighty-euro board that serves as a home server for millions of tinkerers, in the datacentres of Amazon and Google. It is everywhere, and almost nobody knows what it is. It took me years to bring it into focus too, and the occasion, many years ago, was a Raspberry Pi 3 that I had decided to turn into a Nextcloud — the first brick of what would become my small homelab. It was a line in /boot/config.txt that made me notice the thing: the Pi&#39;s processor, a Broadcom BCM2837, used the same architecture as the Android phones I had hacked for years. ARM. Same instruction set, same underlying logic, same family.&#xA;&#xA;A room in Cambridge, a government project, and a woman&#xA;&#xA;The story of ARM does not begin in a Silicon Valley garage. It begins in Cambridge, in 1983, in a small company called Acorn Computers, on a commission from the BBC.&#xA;&#xA;The context matters, because it changes the whole flavour of the story. The British government had decided to launch a national computer literacy programme — the BBC Computer Literacy Project — and needed a machine that could go into schools. Acorn won the tender with the BBC Micro, a cheap and robust computer that would introduce an entire generation of Britons to programming. It was the first time a state systematically funded popular access to computing. Not a startup with a venture-capital pitch: a public project, with public money, for an explicitly democratising goal.&#xA;&#xA;But the BBC Micro was not enough. Acorn needed something more powerful for the next step, and the processors available on the market — 6502, Z80, the early Intel offerings — were either too slow, too complex, or too expensive. 
Acorn&#39;s research and development team then decided to design one from scratch, drawing inspiration from Patterson and Ditzel&#39;s work at Berkeley on the RISC architecture: simple instructions, executed quickly, few transistors, low power consumption. The result, in 1985, was the ARM1: around twenty-five thousand transistors, no cache, no microcode.&#xA;&#xA;The person who designed the architecture and instruction set of that ARM1 was called Sophie Wilson. Her approach is summarised in a sentence she gave in an interview with the Telegraph, and it is worth quoting:&#xA;&#xA;  We accomplished this by thinking about things very, very carefully beforehand.&#xA;&#xA;Nothing particularly sophisticated, on the face of it. But in a sector where the dominant tendency was to add instructions and complexity to increase performance, the intuition of Wilson and her colleague Steve Furber went in the opposite direction: take away instead of add, simplify instead of complicate.&#xA;&#xA;There is an episode that explains better than any technical analysis where this philosophy led. On 26 April 1985, when the first chips came back from the VLSI Technology foundry, Furber connected them to a development board and was puzzled: the ammeter in series with the power supply read zero. The processor seemed to be consuming literally nothing. The team that had designed the ARM1 numbered a handful of people — Wilson on the instruction set, Furber on microarchitecture design, a few collaborators around them — and operated with negligible resources compared to Intel or Motorola. The idea that they had just produced a processor that consumed zero was implausible.&#xA;&#xA;The explanation, as Wilson recounted in a 2012 interview with The Register, turned out to be embarrassingly mundane:&#xA;&#xA;  The development board the chip was plugged into had a fault: there was no current being sent down the power supply lines at all. 
The processor was actually running on leakage from the logic circuits. So the low-power big thing that the ARM is most valued for today, the reason that it&#39;s on all your mobile phones, was a complete accident.&#xA;&#xA;The board was faulty, the power was not actually reaching the chip, and the processor was running on the leakage current from the logic circuits. The most important characteristic of the most widespread ARM architecture on the planet — the energy efficiency that makes it suitable for mobile devices — was discovered by mistake, on a broken board, by an engineer convinced he had a faulty measuring instrument.&#xA;&#xA;Furber, for his part, explained the dynamic in more engineering terms:&#xA;&#xA;  We applied Victorian engineering margins, and in designing to ensure it came out under a watt, we missed, and it came out under a tenth of a watt.&#xA;&#xA;The &#34;Victorian engineering margins&#34; are the generous safety margins typical of late nineteenth-century engineering — over-dimensioning every component to avoid failures. Furber and Wilson, accustomed to designing with limited resources and no margin for error, had applied the same principle to the chip design: design for consumption under a watt, and end up well below.&#xA;&#xA;  There was no magic with the low power characteristics apart from simplicity.&#xA;&#xA;No magic. Just a design done well by a small team that could not afford to get it wrong. On that accident, and on that simplicity, ARM&#39;s dominance in mobile for the next forty years would be built.&#xA;&#xA;---&#xA;&#xA;A note on Sophie Wilson&#xA;&#xA;Born in Leeds in 1957. She studied mathematics at Selwyn College, Cambridge, and as a student already worked with Hermann Hauser at Acorn — designing the Acorn System 1 even before graduating. In 1981, on commission from the BBC, she wrote BBC BASIC: a complete programming language in 16 kilobytes, so well-designed that it is still in use today on embedded systems. 
The &#34;subtract instead of add&#34; philosophy that would make ARM1 what it is was not born in 1985: it was born in the extreme memory constraints of the BBC Micro. Only later, in 1983, did Wilson begin work on the ARM1 instruction set, which she completed with Steve Furber in 1985. After Acorn she moved to Element 14, a 1999 spin-off absorbed by Broadcom in 2000. At Broadcom, where she still works as a Distinguished Engineer, she contributed to the BCM family of SoCs — including those that ended up inside the early Raspberry Pis, BCM2837 of the Pi 3 included. Recognition came late: Computer History Museum Fellow Award in 2012, Fellow of the Royal Society in 2013, Commander of the Order of the British Empire in 2019. In the 1990s she completed her gender transition, continuing to work in the sector without interruption.&#xA;&#xA;---&#xA;&#xA;In 1990, Acorn, Apple and VLSI Technology founded a separate joint venture to manage and license the architecture. The name changed from Acorn RISC Machine to Advanced RISC Machines. ARM Holdings was born as an independent company, headquartered in Cambridge, with a business model that had no precedent in the sector: it would never manufacture a single chip. It would sell the idea of the chip. Licences, royalties, IP. Anyone who wanted to build an ARM processor would have to pay them.&#xA;&#xA;It was a technical choice, but also a political one. ARM did not have the capital to build factories, did not have the infrastructure. But it had something harder to replicate: a clean, efficient architecture, designed well from the start.&#xA;&#xA;The architecture of invisible power&#xA;&#xA;ARM&#39;s business model is one of the most elegant — and least understood — in the entire technology industry. 
It works like this: ARM designs the processor architectures and licenses their use to third parties in exchange for an upfront fee (typically between one and ten million dollars) plus a royalty on every chip produced, usually around 1–2% of the final device price. Whoever buys the licence can then build their own chips based on that architecture, customising it within the limits allowed by the contract. They are not buying a product, then: they are buying the right to make one.&#xA;&#xA;Garnsey, Lorenzoni and Ferriani, in a fundamental study on the birth of ARM as a spin-off from Acorn published in Research Policy in 2008, describe this transition as an exemplary case of techno-organizational speciation: technology is not simply transferred, but is radically transformed in the passage to a new domain through a new organisational model. ARM is not Acorn that changes its name: it is a new organism, with a completely different survival logic, which carries the original DNA but adapts to an environment Acorn could never have inhabited.&#xA;&#xA;The practical result of this structure is what the industry calls neutral positioning. ARM does not compete with its customers — it does not sell chips, does not produce devices — so it can sell the same licence to Qualcomm, Apple, Samsung and MediaTek, who fight each other on the market every day. It is the &#34;Switzerland&#34; of silicon: a credible referee, a common infrastructure, a layer everyone builds on without having to trust the others. This has created an ecosystem of over a thousand licensee partners — a number impossible to reach for any traditional chip manufacturer. Furber, today professor of computer engineering at the University of Manchester, summed up the result in a way that is hard to forget:&#xA;&#xA;  I suspect there&#39;s more ARM computing power on the planet than everything else ever made put together. 
The numbers are just astronomical.&#xA;&#xA;It is not rhetoric: it is the logical consequence of a model that multiplies adoption instead of concentrating it.&#xA;&#xA;But this neutrality has a structural cost that is rarely thematised. When ARM sells a licence, it also sells dependence. Whoever builds their own SoC on ARM architecture is bound to that instruction set for the entire life of the product. Changing architecture would mean rewriting the software, recertifying the systems, redoing the chip design. The exit cost is very high. And this means that ARM, despite producing nothing, exercises enormous systemic power: it can renegotiate licence terms, raise royalties, decide who gets access to the most advanced architectures and who does not. Abstract as this dependence may sound on paper, there is a recent case that makes it very concrete — and worth following in detail, because it illustrates exactly how ARM power is exercised in the real world.&#xA;&#xA;In 2021, Qualcomm acquired for $1.4 billion a Californian startup called Nuvia, founded by three former Apple Silicon engineers — Gerard Williams III, Manu Gulati, John Bruno — who were designing a server chip called Phoenix, based on the ARM v8.7-A architecture. Nuvia had its own ALA (Architecture License Agreement) with ARM, negotiated on the terms of a small startup entering a new market. When Qualcomm bought it, it integrated the Phoenix technology into its own Oryon core, the heart of the new Snapdragon X Elite — the chip with which Qualcomm wanted to challenge Intel and AMD in the AI PC laptop market.&#xA;&#xA;The problem was contractual, not technical. Qualcomm&#39;s ALA with ARM already existed, and provided for lower royalties than Nuvia&#39;s. Qualcomm argued that the integration of Nuvia into its own chips fell under its pre-existing ALA. ARM replied that no: the acquisition required a full renegotiation from scratch — on ARM&#39;s terms, naturally. 
In 2022 ARM took Qualcomm to court asking, among other things, for the physical destruction of the pre-acquisition Nuvia designs. Not a downsizing, not a renegotiation: destruction. The message was unambiguous: IP licensing is not a sale, it is a revocable permission, and the permission is granted by whoever owns the architecture.&#xA;&#xA;The case went to trial in Wilmington, Delaware, in December 2024. The jury ruled unanimously in favour of Qualcomm on two of the three contested points, with a hung jury on the third. On 30 September 2025, Judge Maryellen Noreika issued the final ruling: full and final judgment in favour of Qualcomm and Nuvia on all fronts, also rejecting ARM&#39;s request for a new trial. The judge explicitly noted that ARM itself, in its own internal documents, admitted to having recorded record licensing and royalty revenues after attempting to terminate Nuvia&#39;s ALA in 2022 — which, translated, means: while claiming to have been damaged by Nuvia&#39;s actions, ARM was making piles of money precisely thanks to the ecosystem built on that architecture.&#xA;&#xA;ARM has announced it will appeal. Qualcomm, for its part, already has a counter-suit open since April 2024 against ARM — accusing it of withholding technical deliverables, anti-competitive behaviour, and (in a subsequent amendment) of intending to enter the server chip market as a direct competitor. The trial, originally set for March 2026, has been postponed to October 2026 to deal with a series of pending motions — a sign that the dispute&#39;s complexity will not be resolved quickly. That is: ARM, which built everything on neutral positioning, finds itself accused in court of wanting to become a silicon producer. Aka: the Switzerland that suddenly wants an army.&#xA;&#xA;The Qualcomm/Nuvia case is important not because Qualcomm won, but because it publicly exposed the nature of the power ARM exercises. 
The real asset had never been the architecture — the architecture, put brutally, is in the end just technical documentation. The real asset was the contract. The capacity to drag into court anyone who thinks they can use that documentation without the right permission. Langdon Winner, in his influential 1980 essay Do Artifacts Have Politics?, argued that technological choices are never neutral — they incorporate power structures, distribute access in non-random ways, create dependencies that persist long after the initial decision.&#xA;&#xA;  It is still true that, in a world in which human beings make and maintain artificial systems, nothing is &#34;required&#34; in an absolute sense. Nevertheless, once a course of action is underway, once artifacts like nuclear power plants have been built and put in operation, the kinds of reasoning that justify the adaptation of social life to technical requirements pop up as spontaneously as flowers in the spring.&#xA;&#xA;And ARM is an almost perfect case of this thesis applied to the IP economy: an architecture born of a public computer-literacy project becomes the foundation on which an invisible monopoly is built across tens of billions of devices. It is not malice. It is structure. The chip has no intentions. But the licensing structure that sits on top of it, that one does.&#xA;&#xA;A new front: the datacentre&#xA;&#xA;A brief aside is necessary, because it shows where ARM is going right now — and why the Qualcomm/Nuvia case has the importance it has.&#xA;&#xA;For the first part of its history, ARM was the architecture of mobile. Servers, datacentres, enterprise computing were Intel territory: x86 dominated in an apparently unchallenged way. Things began to change in 2018, when Amazon Web Services announced the first Graviton, a custom ARM chip designed in-house by Annapurna Labs (acquired by AWS in 2015). 
The selling point was simple and technically sound: at equivalent loads, ARM chips consumed much less energy than their x86 equivalents, and in a datacentre where the electricity bill is a third of operating costs, this translates directly into margin.&#xA;&#xA;Since then the trajectory has been steady and surprisingly fast. In 2023 ARM accounted for about 5% of the cloud compute of the three major hyperscalers. ARM itself, in its 2025 communications, claims that by year-end approximately half of the compute shipped to the top hyperscalers will be ARM-based — a figure to be taken with the caution owed to a company talking about its own market, but a consistent one: for the third consecutive year, more than half of new CPU capacity added to AWS is Graviton, and 98% of the top one thousand EC2 customers use it. AWS Graviton5, announced on 4 December 2025 at re:Invent, has 192 cores in a single socket, an L3 cache five times larger than the previous generation, and is based on the Neoverse V3 ARMv9.2 cores at 3 nanometres. Google has launched Axion (based on Neoverse V2) with the claim of a 65% better price-performance compared to x86 instances. Microsoft has rolled out Cobalt 100 in 29 global regions. NVIDIA — the very same NVIDIA that had tried to buy ARM — uses ARM Neoverse cores in Grace, the CPU that accompanies its H100 and B100 GPUs for AI workloads. Spotify, Paramount+, Uber, Oracle, Salesforce have migrated infrastructure to ARM. Over a billion ARM Neoverse cores have been deployed in datacentres worldwide.&#xA;&#xA;This changes the proportions of the game. When ARM made money on smartphone royalties, we were talking about cents per chip but on billions of units. In datacentres things are different: every Graviton5 costs AWS thousands of dollars, and every server with an ARM chip on board means a far more substantial royalty. The datacentre is the segment where ARM can finally start extracting value aggressively. 
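To make that shift in proportions concrete, here is a toy sketch in Python. The 1.5% rate sits inside the 1-2% royalty range quoted earlier; the per-chip prices and unit counts are invented, illustrative assumptions, not ARM&#39;s real figures.

```python
# Toy sketch of the royalty asymmetry between mobile and datacentre.
# Assumptions (hypothetical): a 20-dollar phone SoC shipped in billions,
# a 5,000-dollar server CPU shipped in low millions, 1.5% royalty rate.

def royalty(units, chip_price, rate=0.015):
    """Total royalty revenue: units shipped x per-chip price x rate."""
    return units * chip_price * rate

# Mobile: cents per chip, but enormous volume.
per_phone_chip = royalty(1, 20.0)            # 0.30 dollars per chip
mobile_total = royalty(1_500_000_000, 20.0)  # across 1.5B assumed units

# Datacentre: far fewer units, but thousands of dollars per chip.
per_server_chip = royalty(1, 5_000.0)        # 75 dollars per chip
server_total = royalty(2_000_000, 5_000.0)   # across 2M assumed units

print(f"royalty per phone SoC:  ${per_phone_chip:.2f}")
print(f"royalty per server CPU: ${per_server_chip:.2f}")
print(f"mobile total:  ${mobile_total / 1e6:,.0f}M")
print(f"server total:  ${server_total / 1e6:,.0f}M")
```

With these made-up numbers the per-chip royalty jumps by a factor of 250 when moving from a phone SoC to a server CPU, even though the mobile total still dominates on volume. That is exactly the asymmetry the paragraph above describes: the datacentre is where the value per unit, and the licensee&#39;s exposure, explodes.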
And it is also the segment where licensees have most to lose: if ARM raises your royalties on a phone chip, it is an annoyance; if it raises them on the chip running your cloud, it is an attack on the operating margin of your business.&#xA;&#xA;It is easier to understand, in this light, why Qualcomm fought the Nuvia case with such determination. And why — as we will see shortly — it is looking for an architectural way out.&#xA;&#xA;The failed coup&#xA;&#xA;November 2020. Jensen Huang, NVIDIA&#39;s CEO, announces the acquisition of ARM from SoftBank for $40 billion. It would have been the largest acquisition in semiconductor history. It did not go through, and understanding why helps to see how systemic ARM&#39;s position in the industry was — and still is.&#xA;&#xA;Hermann Hauser, the Austrian from Cambridge who had founded Acorn, the company from which ARM was born, had reacted to the SoftBank acquisition back in July 2016 with a public statement on Twitter that left no room for interpretation:&#xA;&#xA;  ARM is the proudest achievement of my life. The proposed sale to SoftBank is a sad day for me and for technology in Britain.&#xA;&#xA;When, four years later, NVIDIA announced its intention to buy ARM from SoftBank, Hauser&#39;s reaction was even sharper. In an interview with the BBC he explained the structural problem with a clarity that regulatory documents rarely achieve:&#xA;&#xA;  It&#39;s one of the fundamental assumptions of the ARM business model that it can sell to everybody. The one saving grace about Softbank was that it wasn&#39;t a chip company, and retained ARM neutrality. 
If it becomes part of Nvidia, most of the licensees are competitors of Nvidia, and will of course then look for an alternative to ARM.&#xA;&#xA;And in his written testimony submitted to the British Parliament he added, with the freedom of someone who had nothing left to lose:&#xA;&#xA;  I have no shares or other interest in ARM as I had to sell them all to Softbank. I can therefore freely speak my mind.&#xA;&#xA;Hauser was right. NVIDIA, in 2020, was already dominant in artificial intelligence through its GPUs. Buying ARM would have meant getting early access to new designs ahead of competitors, the ability to slow or deny licences to rivals, and benefiting freely from the architecture while others continued paying royalties. Qualcomm, Microsoft and Google publicly opposed the deal. The American FTC opened an antitrust proceeding. The European Commission launched an investigation. Britain opened its own. China raised a red flag. In February 2022, the deal was formally abandoned, citing &#34;significant regulatory challenges&#34;.&#xA;&#xA;There is another Hauser statement worth quoting. In a 2022 interview with UKTN, he called British politicians &#34;technologically illiterate&#34; and &#34;the root cause&#34; of the governance problems around ARM. He argued that the government should have taken a golden share in ARM long before, and that any attempt to do so in 2022 was &#34;trying to close the gate after the horse has bolted&#34;. An architecture born with public money and a public mandate had become a pawn in the power game between SoftBank, NVIDIA and the NASDAQ — because no one had thought, at the appropriate moment, that it was worth keeping it in public territory.&#xA;&#xA;The end of the story: SoftBank took ARM public in September 2023, in what was the largest IPO of the year. ARM Holdings is today listed on NASDAQ with a market capitalisation of around $150 billion. Masayoshi Son is still the controlling shareholder. 
The fact that the acquisition attempt by the world&#39;s largest AI chip producer was blocked by regulators does not eliminate the problem — it shifts it. ARM is independent, but it is a very particular form of independence: that of a systemic infrastructure in the hands of financial investors, subject to stock-market logic, obliged to grow revenues every quarter. The uncomfortable question is: what happens when the needs of a commons architecture — stable, predictable, accessible, neutral — conflict with the needs of a publicly listed company that has to raise royalties to satisfy shareholders? It is not a theoretical question. ARM has systematically increased its licence fees in recent years. And the major licensees have started looking for alternatives.&#xA;&#xA;The half-democratisation&#xA;&#xA;We have to give ARM what ARM deserves, before continuing with the critique. And what it deserves is considerable.&#xA;&#xA;The Raspberry Pi — version 3 in 2017, version 5 today — costs less than eighty euros for the most recent version. It is a complete computer, capable of running Linux, a server, a media centre, a network node. It exists because the ARM architecture has made it possible to produce powerful and very low-power SoCs at costs that x86 processors cannot get close to. The same principle applies to the billion-plus smartphones in the hands of people in countries where a desktop PC would be an inaccessible luxury. To the microcontrollers controlling IoT sensors at a few cents each. To the embedded processors in medical devices, industrial control systems, critical infrastructure. ARM has materially lowered the cost of access to computational hardware on a global scale.&#xA;&#xA;Wilson herself, looking back on the whole story, framed it with a lucidity that almost sounds like a warning:&#xA;&#xA;  To build something new and complicated, it&#39;s not the sort of quick thing, it&#39;s a sustained effort over a long period of time. 
It takes many people&#39;s different inputs to make something unique and novel. Overnight success takes 30 years.&#xA;&#xA;Thirty years of invisible work, of architectures refined chip by chip, of licences negotiated one at a time, before the world noticed that ARM was everywhere.&#xA;&#xA;The &#34;democratisation&#34; effected by ARM is real but structurally asymmetric. It has democratised access to hardware for device manufacturers — anyone can build an ARM chip by paying the licence — but not necessarily for the end users of those devices. An iPhone — or an Android phone — has an ARM chip designed by a company, but the end user has no access to the chip&#39;s architecture, no possibility to modify it, no transparency on what runs at that level. The chip is ARM, the device is a closed box. This is the final contradiction: you may have the right — or almost — to manage the software running on an ARM chip, but below the kernel, below the bootloader, there is a chip whose architecture was defined in Cambridge, produced in Taiwan, integrated into a SoC designed by Broadcom, over which you can have no control. Sovereignty ends exactly where silicon begins. Those who really benefited are the oligopoly of large licensees — Apple, Qualcomm, Samsung, NVIDIA, Amazon with its Gravitons — not the small Bangalore startup with an idea for a specialised chip.&#xA;&#xA;And yet — and here the story gets complicated, in an interesting way — within the narrow space the ARM licensing model concedes, someone is nevertheless trying to pull the lever of openness at the levels available. In December 2024, a Shenzhen company called Radxa announced the Radxa Orion O6, presented as the &#34;World&#39;s First Open Source Arm V9 Motherboard&#34;. 
It is a Mini-ITX board at $200 in the base version, based on the Cix CD8180 SoC — an ARMv9.2 chip with 12 cores (four Cortex-A720 at 2.8 GHz, four at 2.4 GHz, four Cortex-A520 at 1.8 GHz) produced by Cix Technology, a Chinese fabless founded in 2021. Debian 12, Fedora and Ubuntu run natively on it, with UEFI EDKII and SystemReady SR certification. The first Geekbench benchmarks put it at the level of an Apple M1 in single-core — not bad for an ARM board at less than a tenth of the price of a Mac mini.&#xA;&#xA;Note: it is worth clarifying what &#34;open source&#34; means here, because it means different things at different levels. The ARMv9.2 instruction set on which the CD8180 is built is not open: Cix pays regular royalties to ARM Holdings like all other licensees. The SoC itself is not open: it is a proprietary chip, with the NPU microcode and Mali GPU blocks all closed. What is open is the layer immediately above: board schematics, Board Support Package, EDKII bootloader, Linux kernel, device tree — all published under free licences, replicable, modifiable.&#xA;&#xA;It is also a concrete demonstration of what the open hardware movement has been arguing for twenty years: openness is layered, and opening one more layer than was open before is already a political act, even if the foundation underneath remains closed. The fact that this board comes from China — like the RISC-V pivot we will discuss shortly — is no accident: it is consistent with a geopolitical trajectory that seeks margins of technological sovereignty wherever it is possible to extract them.&#xA;&#xA;The Linux moment for hardware&#xA;&#xA;And here RISC-V comes onstage. And the story gets more interesting.&#xA;&#xA;RISC-V was born in 2010 at the University of California Berkeley, in the same department that had helped inspire the original RISC architecture thirty years earlier. 
Krste Asanović and his collaborators needed a clean processor architecture for research, without having to pay licences or ask permission. They decided to design one from scratch, and to make it completely open: no royalties, no licences, no intellectual property to respect. The RISC-V instruction set is an open standard, freely published, that anyone can implement, modify, distribute.&#xA;&#xA;For ten years RISC-V was an academic experiment, then a nucleus of embedded adoption, then an interesting alternative for those who wanted custom chips without paying ARM. In the last two or three years the proportions have changed. The SHD Group, a market analysis firm that has been monitoring the RISC-V sector since 2019, announced at the November 2025 RISC-V Summit that the technology&#39;s market penetration had exceeded 25% — an important symbolic threshold, even if it is to be taken with some caution. Even RISC-V International&#39;s own 2025 annual report admits that it is not entirely clear whether the 25% refers to the global microprocessor market in the strict sense or only to the segments where RISC-V already has a significant presence (embedded, IoT, microcontrollers). The SHD projection for 2031 is 33.7%. However it is measured, the trajectory is that of an architecture that is no longer a niche: it is the third pillar of computing, alongside x86 and ARM.&#xA;&#xA;The strength of RISC-V is not just technical — it is political in the most precise sense of the term. Some examples:&#xA;&#xA;The Chinese front. China has very concrete reasons not to want to depend on ARM, a company listed in New York with American shareholders. Under increasingly stringent US sanctions on advanced Intel/AMD chips, China has pivoted en masse to RISC-V — also because the RISC-V International consortium was strategically moved from Delaware to Switzerland in March 2020, formally placing it beyond the reach of unilateral American export controls. 
Alibaba, through its T-Head division, has released the XuanTie C920 chips and successors. Smaller Chinese manufacturers are flooding the mid-market with RISC-V AI accelerators that cost significantly less than the equivalent Western ones under sanction. It is an architectural decoupling, not just a commercial one.&#xA;&#xA;The European front. The European Union, through the EU Chips Act, funds the Project DARE consortium (Digital Autonomy with RISC-V in Europe) with the explicit goal of reducing European dependence on American and British technology in critical infrastructure. Quintauris, a joint venture founded in December 2023 by Bosch, Infineon, Nordic Semiconductor, NXP and Qualcomm (with STMicroelectronics joining as a sixth shareholder in 2024), developed RT-Europa in 2025, the first RISC-V platform for real-time automotive controllers — a sector where dependence on foreign IP had become strategically intolerable.&#xA;&#xA;The Qualcomm front. In December 2025, while the Nuvia case closed yet another chapter against ARM, Qualcomm acquired Ventana Micro Systems, one of the most advanced companies in the development of high-performance RISC-V cores. In plain terms: not only was Qualcomm fighting ARM in court, it was also buying the way to no longer need ARM. It is the most significant move in this whole recent history, because for the first time one of the major ARM licensees equips itself with a credible architectural plan B.&#xA;&#xA;Three different fronts, one and the same direction. The parallel with Linux is more than metaphorical. Linux did not kill Windows or macOS. But it did create a real alternative that changed the terms of power in the software industry. RISC-V aspires to do the same thing for hardware. And the critical point — the one Winner would have appreciated — is that this openness is built into the architecture itself, not guaranteed by a company&#39;s good will. You cannot buy RISC-V and &#34;close it&#34;. The instruction set is public by definition. 
You can build proprietary implementations on top of it — and many companies are doing that — but the foundation remains accessible.&#xA;&#xA;And here the question: will RISC-V be incorporated by capitalism exactly as Linux was? The honest answer is: probably yes, and in part it already has been. The major RISC-V implementations by Apple, Google and Meta are not open source — they use the open instruction set to build proprietary architectures. The fact that the foundation is free does not mean that everything built on top of it is. The same logic Boltanski and Chiapello described applies: critique is not defeated, it is incorporated. But at least the foundation remains open. And that counts.&#xA;&#xA;Conclusions — or questions, if you prefer&#xA;&#xA;ARM is born of a public mandate and a democratisation project, and becomes the foundation of a private oligopoly. The chip is the same; the power structure on top of it is radically different from the one that produced it. And that chip really did lower the entry barriers for hardware producers — it produced the Raspberry Pi, the cheap phones, the microcontrollers everywhere, the more efficient datacentres — but the democratisation stopped at the gates of the production chain. The end users of those devices gained no real sovereignty over the silicon they hold in their pocket.&#xA;&#xA;NVIDIA&#39;s attempt to acquire ARM was blocked by regulators, but only because it would have concentrated power too visibly. The systemic power ARM already exercises — silently, through licences and royalties, through legal cases against those trying to step out of contractual terms — disturbs no regulator, generates no headlines, produces no parliamentary hearings. It is the kind of power that makes itself invisible precisely because it is structural: it does not lie in a decision, it lies in the conditions within which decisions are made.&#xA;&#xA;There is also a contradiction that concerns me personally. 
That Raspberry Pi I had on the table — and all the ARM chips in the phones I have hacked for years — were already, in some sense, part of a system I did not control. I changed the software on top. I did not change the power structure underneath (one could make the same argument about Intel, ça va sans dire…). Digital sovereignty ends exactly where silicon begins, and pretending otherwise would be dishonest.&#xA;&#xA;RISC-V opens a real crack. Not a revolution — a crack. The possibility that the foundation of computing could be a commons, instead of private property subject to corporate decisions and legal battles. It does not solve the problem of closed hardware, it does not solve the problem of oligopolistic foundries, it does not solve any of the contradictions described. But at least it does not aggravate them. It is the same logic as that of the open hardware movement, which for twenty years has been trying to apply to silicon what free software has applied to code — with more modest results, because the physical layer is structurally more hostile to the commons: if you cannot open it, you do not really own it. And in a sector where every layer of the technology stack has been systematically fenced off, keeping the foundation open is a political act, not just a technical one.&#xA;&#xA;What stays with me is a feeling familiar to anyone who has spent time thinking about computing as political territory. Technological choices incorporate power structures. Power structures persist long after the original choices have been forgotten. And whoever controls the basic infrastructure — the instruction set, the architecture, the licences — controls something much more important than a company: they control the rules of the game on which everything else is built.&#xA;&#xA;The question I leave open is: in whose favour were these rules written? 
And by what right do they continue to apply?&#xA;&#xA;---&#xA;&#xA;Sources and further reading&#xA;&#xA;On the history of ARM and its origins&#xA;&#xA;Garnsey, E., Lorenzoni, G., Ferriani, S. (2008). &#34;Speciation through entrepreneurial spin-off: The Acorn-ARM story&#34;. Research Policy, 37(2): 210-224. doi: 10.1016/j.respol.2007.11.006. The most in-depth academic study on the origin of ARM as a spin-off from Acorn and on the genesis of its IP licensing-based business model. https://www.sciencedirect.com/science/article/abs/pii/S0048733307002363&#xA;Patterson, D., Ditzel, D. (1980). &#34;The Case for the Reduced Instruction Set Computer&#34;. ACM SIGARCH Computer Architecture News, 8(6): 25-33. The founding paper of the RISC architecture at Berkeley, which inspired the ARM project. https://dl.acm.org/doi/10.1145/641914.641917&#xA;&#xA;On the IP licensing business model&#xA;&#xA;Ferriani, S., Garnsey, E., Lorenzoni, G., Massa, L. (2015). &#34;ARM plc and the IP Business Model&#34;. Working Paper, Centre for Technology Management, University of Cambridge. https://www.ifm.eng.cam.ac.uk/uploads/Research/CTM/workingpaper/2015-02-Ferriani-Garnsey-Lorenzoni-Massa.pdf&#xA;Grindley, P. C., Teece, D. J. (1997). &#34;Managing Intellectual Capital: Licensing and Cross-Licensing in Semiconductors and Electronics&#34;. California Management Review, 39(2): 8-41.&#xA;&#xA;On power in technological choices&#xA;&#xA;Winner, L. (1980). &#34;Do Artifacts Have Politics?&#34;. Daedalus, 109(1): 121-136. https://www.cc.gatech.edu/~beki/cs4001/Winner.pdf&#xA;Boltanski, L., Chiapello, È. (1999). Le nouvel esprit du capitalisme. Gallimard. (English transl. The New Spirit of Capitalism, Verso, 2005). https://www.jstor.org/stable/4201214&#xA;&#xA;On the Qualcomm/Nuvia case&#xA;&#xA;Paul, Weiss (2025). &#34;Qualcomm Wins Decisive Post-Trial Victory in High-Profile Licensing Dispute Against Arm&#34;. 
https://www.paulweiss.com/insights/client-news/qualcomm-wins-decisive-post-trial-victory-in-high-profile-licensing-dispute-against-arm. Press release of the law firm that represented Qualcomm, with summary of the 30 September 2025 ruling.&#xA;The Register (2025). &#34;Judge dismisses Arm&#39;s last legal claim against Qualcomm&#34;. https://www.theregister.com/2025/10/01/armslastlegalclaimagainst/&#xA;Computerworld (2025). &#34;Arm&#39;s high-stakes licensing suit against Qualcomm ends in mistrial, but Qualcomm prevails in key areas&#34;. https://www.computerworld.com/article/3629812/&#xA;&#xA;On the NVIDIA acquisition attempt and geopolitical implications&#xA;&#xA;U.S. Federal Trade Commission (2021). Complaint in the Matter of NVIDIA Corporation and Arm Limited. https://www.ftc.gov/legal-library/browse/cases-proceedings/2110081-nvidia-corporationarm-limited&#xA;Hauser, H. (2020). Written evidence submitted to the UK Parliament Business, Energy and Industrial Strategy Committee on the proposed acquisition of ARM by NVIDIA. Document BFA0018. https://committees.parliament.uk/writtenevidence/12711/pdf/&#xA;Hauser, H. (2022). Interview with UKTN: &#34;UK left it too late to take golden share in Arm&#34;. https://www.uktech.news/news/government-and-policy/hermann-hauser-arm-golden-share-20220623&#xA;&#xA;On Sophie Wilson, Steve Furber and the origin of ARM1&#xA;&#xA;Wilson, S. (2012). Interview with The Register: &#34;ARM creators Sophie Wilson and Steve Furber&#34;. https://www.theregister.com/2012/05/03/unsungheroesoftecharmcreatorssophiewilsonandstevefurber/. Contains Wilson&#39;s statement on low power as a complete accident.&#xA;Furber, S. (2010). Interview with ACM Queue: &#34;A Conversation with Steve Furber&#34;. https://queue.acm.org/detail.cfm?id=1716385. Contains the statement on Victorian engineering margins.&#xA;Furber, S. (2011). Interview with Communications of the ACM. https://cacm.acm.org/news/an-interview-with-steve-furber/. 
Contains the assessment on total ARM computing power on the planet.&#xA;Furber, S. (2017). &#34;ARM: The architecture that conquered mobile computing&#34;. Philosophical Transactions of the Royal Society A, 375(2104). doi: 10.1098/rsta.2017.0148.&#xA;Computer History Museum (2012). Fellow Award citation for Sophie Wilson and Steve Furber. https://computerhistory.org/chm-fellows/sophie-wilson/&#xA;&#xA;On ARM in datacentres&#xA;&#xA;Arm Holdings (2025). &#34;Half of the Compute Shipped to Top Hyperscalers in 2025 will be Arm-based&#34;. Arm Newsroom. https://newsroom.arm.com/blog/half-of-compute-shipped-to-top-hyperscalers-in-2025-will-be-arm-based&#xA;Arm Holdings (2025). &#34;How Arm is redefining compute through the converged AI data center&#34;. Arm Newsroom. https://newsroom.arm.com/blog/arm-converged-ai-data-center-aws-graviton5&#xA;Omdia (2026). &#34;Arm Steps Deeper into Silicon: Implications for the Semiconductor Value Chain&#34;. https://omdia.tech.informa.com&#xA;&#xA;On the democratisation of access to computing&#xA;&#xA;Benkler, Y. (2006). The Wealth of Networks: How Social Production Transforms Markets and Freedom. Yale University Press. http://www.benkler.org/BenklerWealthOfNetworks.pdf&#xA;Söderberg, J. (2008). Hacking Capitalism: The Free and Open Source Software Movement. Routledge. https://downloads.gvsig.org/download/people/vagazzi/Hacking%20Capitalism.pdf&#xA;&#xA;On RISC-V and architectural sovereignty&#xA;&#xA;RISC-V International (2024). RISC-V Ratified Specifications. https://riscv.org/technical/specifications/&#xA;RISC-V International (2026). Annual Report 2025. https://riscv.org/wp-content/uploads/2026/01/RISC-V-Annual-Report-2025.pdf. The official RISC-V International annual report, with the SHD Group estimate on market penetration (33.7% projected by 2031, 25% threshold reached in 2025 in some segments).&#xA;Waterman, A., Asanović, K. (eds.) (2019). The RISC-V Instruction Set Manual. UC Berkeley Technical Report UCB/EECS-2019-103. 
https://riscv.org/wp-content/uploads/2019/12/riscv-spec-20191213.pdf&#xA;Asanović, K., Patterson, D. A. (2014). &#34;Instruction Sets Should Be Free: The Case for RISC-V&#34;. EECS Department, University of California, Berkeley, Tech. Rep. UCB/EECS-2014-146.&#xA;Center for Security and Emerging Technology (2025). &#34;RISC-V: What it is and Why it Matters&#34;. https://cset.georgetown.edu/article/risc-v-what-it-is-and-why-it-matters/. On the incorporation of RISC-V International in Switzerland in March 2020 and the geopolitical implications.&#xA;Jamestown Foundation (2025). &#34;Examining China&#39;s Grand Strategy For RISC-V&#34;. https://jamestown.org/program/examining-chinas-grand-strategy-for-risc-v/&#xA;The Register (2025). &#34;Qualcomm takes RISC on Arm alternative with Ventana buy&#34;. https://www.theregister.com/2025/12/10/qualcommriscvarm_ventana/. On the acquisition of Ventana Micro Systems by Qualcomm on 10 December 2025.&#xA;Quintauris GmbH (2023). &#34;Five Leading Semiconductor Industry Players Incorporate New Company, Quintauris, to Drive RISC-V Ecosystem Forward&#34;. Press release, 22 December 2023. https://www.quintauris.com&#xA;&#xA;&lt;a href=&#34;https://remark.as/p/jolek78/arm-the-chip-we-didnt-know-we-needed&#34;&gt;Discuss...&lt;/a&gt;&#xA;&#xA;#ARM #RISCV #Semiconductors #OpenHardware #SophieWilson #DigitalSovereignty #IPLicensing #Computing #SolarPunk #FOSS&#xA;&#xA;&lt;div class=&#34;center&#34;&gt;&#xA;· 🦣 &lt;a href=&#34;https://fosstodon.org/@jolek78&#34;&gt;Mastodon&lt;/a&gt; · 📸 &lt;a href=&#34;https://pixelfed.social/jolek78&#34;&gt;Pixelfed&lt;/a&gt; · 📬 &lt;a href=&#34;mailto:jolek78@jolek78.dev&#34;&gt;Email&lt;/a&gt; ·&#xA;· ☕ &lt;a href=&#34;https://liberapay.com/jolek78&#34;&gt;Support this work on Liberapay&lt;/a&gt;&#xA;&lt;/div&gt;]]&gt;</description>
      <content:encoded><![CDATA[<p>There are architectures you see and architectures you don&#39;t. <strong>ARM</strong> is the most extreme case of the second category: it runs in the phone in our pocket, in the home router, in the eighty-euro board that serves as a home server for millions of tinkerers, in the datacentres of <strong>Amazon</strong> and <strong>Google</strong>. It is everywhere, and almost nobody knows what it is. It took me years too to bring it into focus, and the occasion was a <strong>Raspberry Pi 3</strong> that I had decided to turn into a Nextcloud — the first brick of what would become, in the years to come, my small homelab — many years ago. It was a line in <strong>/boot/config</strong> that made me notice the thing: the Pi&#39;s processor, a <strong>Broadcom BCM2837</strong>, used the same architecture as the <strong>Android</strong> phones I had hacked for years. ARM. Same instruction set, same underlying logic, same family.</p>



<h2 id="a-room-in-cambridge-a-government-project-and-a-woman">A room in Cambridge, a government project, and a woman</h2>

<p>The story of ARM does not begin in a Silicon Valley garage. It begins in Cambridge, in 1983, in a small company called <strong>Acorn Computers</strong>, on a commission from the <strong>BBC</strong>.</p>

<p>The context matters, because it changes the whole flavour of the story. The British government had decided to launch a national computer literacy programme — the BBC Computer Literacy Project — and needed a machine that could go into schools. Acorn won the tender with the <strong>BBC Micro</strong>, a cheap and robust computer that would introduce an entire generation of Britons to programming. It was the first time a state systematically funded popular access to computing. Not a startup with a venture-capital pitch: a public project, with public money, for an explicitly democratising goal.</p>

<p>But the BBC Micro was not enough. Acorn needed something more powerful for the next step, and the processors available on the market — 6502, Z80, the early Intel offerings — were either too slow, too complex, or too expensive. Acorn&#39;s research and development team then decided to design one from scratch, drawing inspiration from Patterson and Ditzel&#39;s work at Berkeley on the <strong>RISC</strong> architecture: simple instructions, executed quickly, few transistors, low power consumption. The result, in 1985, was the ARM1: thirty thousand transistors, no cache, no microcode.</p>

<p>The person who designed the architecture and instruction set of that ARM1 was called Sophie Wilson. Her approach is summarised in a sentence she gave in an interview with the Telegraph, and it is worth quoting:</p>

<blockquote><p>We accomplished this by thinking about things very, very carefully beforehand.</p></blockquote>

<p>Nothing particularly sophisticated, on the face of it. But in a sector where the dominant tendency was to add instructions and complexity to increase performance, the intuition of Wilson and her colleague Steve Furber went in the opposite direction: take away instead of add, simplify instead of complicate.</p>

<p>There is an episode that explains better than any technical analysis where this philosophy led. On 26 April 1985, when the first chips came back from the <strong>VLSI Technology</strong> foundry, Furber connected them to a development board and was puzzled: the ammeter in series with the power supply read zero. The processor seemed to be consuming literally nothing. The team that had designed the ARM1 numbered a handful of people — Wilson on the instruction set, Furber on microarchitecture design, a few collaborators around them — and operated with negligible resources compared to Intel or Motorola. The idea that they had just produced a processor that consumed zero was implausible.</p>

<p>The explanation, as Wilson recounted in a 2012 interview with The Register, was wrong in the most embarrassing way possible:</p>

<blockquote><p>The development board the chip was plugged into had a fault: there was no current being sent down the power supply lines at all. The processor was actually running on leakage from the logic circuits. So the low-power big thing that the ARM is most valued for today, the reason that it&#39;s on all your mobile phones, was a complete accident.</p></blockquote>

<p>The board was faulty, the power was not actually reaching the chip, and the processor was running on the leakage current from the logic circuits. The most important characteristic of the most widespread ARM architecture on the planet — the energy efficiency that makes it suitable for mobile devices — was discovered by mistake, on a broken board, by an engineer convinced he had a faulty measuring instrument.</p>

<p>Furber, for his part, explained the dynamic in more engineering terms:</p>

<blockquote><p>We applied Victorian engineering margins, and in designing to ensure it came out under a watt, we missed, and it came out under a tenth of a watt.</p></blockquote>

<p>The “Victorian engineering margins” are the generous safety margins typical of late nineteenth-century engineering — over-dimensioning every component to avoid failures. Furber and Wilson, accustomed to designing with limited resources and no margin for error, had applied the same principle to the chip design: design for consumption under a watt, and end up well below.</p>

<blockquote><p>There was no magic with the low power characteristics apart from simplicity.</p></blockquote>

<p>No magic. Just a design done well by a small team that could not afford to get it wrong. On that accident, and on that simplicity, ARM&#39;s dominance in mobile for the next forty years would be built.</p>

<hr/>

<p><strong><em>A note on Sophie Wilson</em></strong></p>

<p><em>Born in Leeds in 1957. She studied mathematics at Selwyn College, Cambridge, and as a student already worked with Hermann Hauser at Acorn — designing the Acorn System 1 even before graduating. In 1981, on commission from the BBC, she wrote BBC BASIC: a complete programming language in 16 kilobytes, so well-designed that it is still in use today on embedded systems. The “subtract instead of add” philosophy that would make ARM1 what it is was not born in 1985: it was born in the extreme memory constraints of the BBC Micro. Only later, in 1983, did Wilson begin work on the ARM1 instruction set, which she completed with Steve Furber in 1985. After Acorn she moved to Element 14, a 1999 spin-off absorbed by Broadcom in 2000. At Broadcom, where she still works as a Distinguished Engineer, she contributed to the BCM family of SoCs — including those that ended up inside the early Raspberry Pis, BCM2837 of the Pi 3 included. Recognition came late: Computer History Museum Fellow Award in 2012, Fellow of the Royal Society in 2013, Commander of the Order of the British Empire in 2019. In the 1990s she completed her <a href="https://web.archive.org/web/20200810221447/https://www.beyondpositive.org/2012/05/09/you-are-beautiful-and-dont-you-forget-it-a-word-about-acceptance/">gender transition</a>, continuing to work in the sector without interruption.</em></p>

<hr/>

<p>In 1990, Acorn, Apple and <strong>VLSI Technology</strong> founded a separate joint venture to manage and license the architecture. The name changed from Acorn <strong>RISC Machine</strong> to <strong>Advanced RISC Machines</strong>. ARM Holdings was born as an independent company, headquartered in Cambridge, with a business model that had no precedent in the sector: it would never manufacture a single chip. It would sell the idea of the chip. Licences, royalties, IP. Anyone who wanted to build an ARM processor would have to pay them.</p>

<p>It was a technical choice, but also a political one. ARM did not have the capital to build factories, did not have the infrastructure. But it had something harder to replicate: a clean, efficient architecture, designed well from the start.</p>

<h2 id="the-architecture-of-invisible-power">The architecture of invisible power</h2>

<p>ARM&#39;s business model is one of the most elegant — and least understood — in the entire technology industry. It works like this: ARM designs the processor architectures and licenses their use to third parties in exchange for an upfront fee (typically between one and ten million dollars) plus a royalty on every chip produced, usually around 1–2% of the final device price. Whoever buys the licence can then build their own chips based on that architecture, customising it within the limits allowed by the contract. They are not buying a product, then: they are buying the right to make one.</p>
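<p>As a back-of-envelope illustration of that model, the arithmetic can be sketched in a few lines. The concrete figures below are hypothetical, picked from the ranges just quoted ($1–10M upfront, roughly 1–2% royalty), not actual ARM contract terms.</p>

```python
# Sketch of the IP-licensing revenue model described above: a one-off
# architecture fee plus a per-unit royalty. All numbers are illustrative
# assumptions, not disclosed ARM terms.

def licence_revenue(upfront_fee: float, royalty_rate: float,
                    unit_price: float, units_shipped: int) -> float:
    """Total revenue extracted from one licensee over a product's life."""
    return upfront_fee + royalty_rate * unit_price * units_shipped

# Hypothetical licensee: $5M upfront, 1.5% royalty on a $300 device,
# 50 million units shipped over the product's life.
total = licence_revenue(5_000_000, 0.015, 300, 50_000_000)
print(f"${total:,.0f}")
```

<p>Even at these modest assumed figures, the upfront fee is rounding error next to the royalty stream — which is why the model scales with the ecosystem's adoption rather than with anything ARM itself produces.</p>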

<p>Garnsey, Lorenzoni and Ferriani, in a fundamental study on the birth of ARM as a spin-off from Acorn published in Research Policy in 2008, describe this transition as an exemplary case of <em>techno-organizational speciation</em>: technology is not simply transferred, but is radically transformed in the passage to a new domain through a new organisational model. ARM is not Acorn that changes its name: it is a new organism, with a completely different survival logic, which carries the original DNA but adapts to an environment Acorn could never have inhabited.</p>

<p>The practical result of this structure is what the industry calls neutral positioning. ARM does not compete with its customers — it does not sell chips, does not produce devices — so it can sell the same licence to <strong>Qualcomm</strong>, <strong>Apple</strong>, <strong>Samsung</strong> and <strong>MediaTek</strong>, who fight each other on the market every day. It is the “Switzerland” of silicon: a credible referee, a common infrastructure, a layer everyone builds on without having to trust the others. This has created an ecosystem of over a thousand licensee partners — a number impossible to reach for any traditional chip manufacturer. Furber, today professor of computer engineering at the University of Manchester, summed up the result in a way that is hard to forget:</p>

<blockquote><p>I suspect there&#39;s more ARM computing power on the planet than everything else ever made put together. The numbers are just astronomical.</p></blockquote>

<p>It is not rhetoric: it is the logical consequence of a model that multiplies adoption instead of concentrating it.</p>

<p>But this neutrality has a structural cost that is rarely thematised. When ARM sells a licence, it also sells dependence. Whoever builds their own <strong>SoC</strong> on ARM architecture is bound to that instruction set for the entire life of the product. Changing architecture would mean rewriting the software, recertifying the systems, redoing the chip design. The exit cost is very high. And this means that ARM, despite producing nothing, exercises enormous systemic power: it can renegotiate licence terms, raise royalties, decide who gets access to the most advanced architectures and who does not. Abstract as this dependence may sound on paper, there is a recent case that makes it very concrete — and worth following in detail, because it illustrates exactly how ARM power is exercised in the real world.</p>

<p>In 2021, <strong>Qualcomm</strong> acquired a Californian startup called Nuvia for $1.4 billion. Nuvia had been founded by three former Apple Silicon engineers — Gerard Williams III, Manu Gulati, John Bruno — who were designing a server chip called Phoenix, based on the <strong>ARM v8.7-A</strong> architecture. Nuvia had its own ALA (Architecture License Agreement) with ARM, negotiated on the terms of a small startup entering a new market. When Qualcomm bought it, it integrated the Phoenix technology into its own Oryon core, the heart of the new <strong>Snapdragon X Elite</strong> — the chip with which Qualcomm wanted to challenge Intel and <strong>AMD</strong> in the AI PC laptop market.</p>

<p>The problem was contractual, not technical. Qualcomm&#39;s ALA with ARM already existed, and provided for lower royalties than Nuvia&#39;s. Qualcomm argued that the integration of <strong>Nuvia</strong> into its own chips fell under its pre-existing ALA. ARM replied that no: the acquisition required a full renegotiation from scratch — on ARM&#39;s terms, naturally. In 2022 ARM took Qualcomm to court asking, among other things, for the physical destruction of the pre-acquisition Nuvia designs. Not a downsizing, not a renegotiation: destruction. The message was unambiguous: IP licensing is not a sale, it is a revocable permission, and the permission is granted by whoever owns the architecture.</p>

<p>The case went to trial in Wilmington, Delaware, in December 2024. The jury ruled unanimously in favour of Qualcomm on two of the three contested points, with a hung jury on the third. On 30 September 2025, Judge Maryellen Noreika issued the final ruling: full and final judgment in favour of Qualcomm and Nuvia on all fronts, also rejecting ARM&#39;s request for a new trial. The judge explicitly noted that ARM itself, in its own internal documents, admitted to having recorded record licensing and royalty revenues after attempting to terminate Nuvia&#39;s ALA in 2022 — which, translated, means: while claiming to have been damaged by Nuvia&#39;s actions, ARM was making piles of money precisely thanks to the ecosystem built on that architecture.</p>

<p><strong>ARM</strong> has announced it will appeal. <strong>Qualcomm</strong>, for its part, has had a counter-suit open against ARM since April 2024 — accusing it of withholding technical deliverables, of anti-competitive behaviour, and (in a subsequent amendment) of intending to enter the server chip market as a direct competitor. The trial, originally set for March 2026, has been postponed to October 2026 to deal with a series of pending motions — a sign that the dispute&#39;s complexity will not be untangled quickly. That is: ARM, which built everything on neutral positioning, finds itself accused in court of wanting to become a silicon producer. Aka: <em>the Switzerland that suddenly wants an army</em>.</p>

<p>The Qualcomm/Nuvia case is important not because Qualcomm won, but because it publicly exposed the nature of the power ARM exercises. The real asset had never been the architecture — the architecture is, in the end, just technical documentation. The real asset was the contract. The capacity to drag into court anyone who thinks they can use that documentation without the right permission. Langdon Winner, in his influential 1980 essay <em>Do Artifacts Have Politics?</em>, argued that technological choices are never neutral — they incorporate power structures, distribute access in non-random ways, create dependencies that persist long after the initial decision.</p>

<blockquote><p>It is still true that, in a world in which human beings make and maintain artificial systems, nothing is “required” in an absolute sense. Nevertheless, once a course of action is underway, once artifacts like nuclear power plants have been built and put in operation, the kinds of reasoning that justify the adaptation of social life to technical requirements pop up as spontaneously as flowers in the spring.</p></blockquote>

<p>And ARM is an almost perfect case of this thesis applied to the IP economy: an architecture born of a public computer-literacy project becomes the foundation on which an invisible monopoly is built across tens of billions of devices. It is not malice. It is structure. The chip has no intentions. But the licensing structure that sits on top of it, that one does.</p>

<h2 id="a-new-front-the-datacentre">A new front: the datacentre</h2>

<p>A brief aside is necessary here, because it tells us where ARM is going right now — and why the Qualcomm/Nuvia case has the importance it has.</p>

<p>For the first part of its history, ARM was the architecture of mobile. Servers, datacentres, enterprise computing were Intel territory: x86 dominated in an apparently unchallenged way. Things began to change in 2018, when <strong>Amazon Web Services</strong> announced the first <strong>Graviton</strong>, a custom ARM chip designed in-house by <strong>Annapurna Labs</strong> (acquired by AWS in 2015). The sales pitch was simple and technically sound: at equivalent loads, ARM chips consumed far less energy than their x86 counterparts, and in a datacentre where the electricity bill is a third of operating costs, this translates directly into margin.</p>

<p>Since then the trajectory has been steady and surprisingly fast. In 2023 ARM accounted for about 5% of the cloud compute of the three major hyperscalers. ARM itself, in its 2025 communications, claims that by year-end approximately half of the compute shipped to the top hyperscalers will be ARM-based — a figure to be taken with the caution owed to a company talking about its own market, but a consistent one: for the third consecutive year, more than half of new CPU capacity added to AWS is Graviton, and 98% of the top one thousand EC2 customers use it. AWS Graviton5, announced on 4 December 2025 at <strong>re:Invent</strong>, has 192 cores in a single socket, an L3 cache five times larger than the previous generation, and is based on the <strong>Neoverse V3 ARMv9.2</strong> cores at 3 nanometres. Google has launched <strong>Axion</strong> (based on Neoverse V2), claiming 65% better price-performance than comparable x86 instances. Microsoft has rolled out <strong>Cobalt 100</strong> in 29 global regions. NVIDIA — the very same <strong>NVIDIA</strong> that had tried to buy ARM — uses ARM Neoverse cores in <strong>Grace</strong>, the CPU that accompanies its H100 and B100 GPUs for AI workloads. Spotify, Paramount+, Uber, Oracle, Salesforce have migrated infrastructure to ARM. Over a billion ARM Neoverse cores have been deployed in datacentres worldwide.</p>

<p>This changes the proportions of the game. When ARM made money on smartphone royalties, we were talking about cents per chip, but on billions of units. In datacentres things are different: every Graviton5 costs AWS thousands of dollars, and every server with an ARM chip on board yields a far more substantial royalty. The datacentre is the segment where ARM can finally start extracting value aggressively. And it is also the segment where licensees have most to lose: if ARM raises the royalties on your phone chip, it is an annoyance; if it raises them on the chip running your cloud, it is an attack on the operating margin of your business.</p>
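<p>The shift in proportions can be made concrete with a toy comparison. The 1.5% rate and both chip prices below are invented for illustration — real rates, and even the royalty base (chip price versus device price), vary by contract.</p>

```python
# Back-of-envelope comparison of per-chip royalty value in mobile versus
# datacentre. All figures are illustrative assumptions, not ARM's terms.

ROYALTY_RATE = 0.015          # hypothetical mid-range royalty rate
PHONE_SOC    = 30             # assumed smartphone SoC price, USD
SERVER_CHIP  = 5_000          # assumed server-class chip price, USD

phone_royalty  = ROYALTY_RATE * PHONE_SOC    # cents-to-dollars per chip
server_royalty = ROYALTY_RATE * SERVER_CHIP  # tens of dollars per chip

print(f"mobile:     ${phone_royalty:.2f} per chip")
print(f"datacentre: ${server_royalty:.2f} per chip")
print(f"ratio:      {server_royalty / phone_royalty:.0f}x")
```

<p>Mobile keeps the volume; the datacentre multiplies the value of each individual unit by two orders of magnitude — which is exactly why the stakes of the licensing disputes have risen with it.</p>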

<p>It is easier to understand, in this light, why Qualcomm fought the Nuvia case with such determination. And why — as we will see shortly — it is looking for an architectural way out.</p>

<h2 id="the-failed-coup">The failed coup</h2>

<p>November 2020. Jensen Huang, NVIDIA&#39;s CEO, announces the acquisition of ARM from <strong>SoftBank</strong> for $40 billion. It would have been the largest operation in semiconductor history. It did not go through, and understanding why helps to see how systemic ARM&#39;s position in the industry was — and still is.</p>

<p>Hermann Hauser, the Austrian-born Cambridge entrepreneur who had co-founded Acorn, the company from which ARM was born, had reacted to the SoftBank acquisition back in July 2016 with a public statement on Twitter that left no room for interpretation:</p>

<blockquote><p>ARM is the proudest achievement of my life. The proposed sale to SoftBank is a sad day for me and for technology in Britain.</p></blockquote>

<p>When, four years later, NVIDIA announced its intention to buy ARM from SoftBank, Hauser&#39;s reaction was even sharper. In an interview with the BBC he explained the structural problem with a clarity that regulatory documents rarely achieve:</p>

<blockquote><p>It&#39;s one of the fundamental assumptions of the ARM business model that it can sell to everybody. The one saving grace about Softbank was that it wasn&#39;t a chip company, and retained ARM neutrality. If it becomes part of Nvidia, most of the licensees are competitors of Nvidia, and will of course then look for an alternative to ARM.</p></blockquote>

<p>And in his written testimony submitted to the British Parliament he added, with the freedom of someone who had nothing left to lose:</p>

<blockquote><p>I have no shares or other interest in ARM as I had to sell them all to Softbank. I can therefore freely speak my mind.</p></blockquote>

<p>Hauser was right. NVIDIA, in 2020, was already dominant in artificial intelligence through its GPUs. Buying ARM would have meant getting early access to new designs ahead of competitors, the ability to slow or deny licences to rivals, and benefiting freely from the architecture while others continued paying royalties. Qualcomm, <strong>Microsoft</strong> and <strong>Google</strong> publicly opposed the deal. The American <strong>FTC</strong> opened an antitrust proceeding. The European Commission launched an investigation. Britain opened its own. China raised a red flag. In February 2022, the deal was formally abandoned, citing significant regulatory challenges.</p>

<p>There is another Hauser statement worth quoting. In a 2022 interview with UKTN, he called British politicians “technologically illiterate” and “the root cause” of the governance problems around ARM. He argued that the government should have taken a golden share in ARM long before, and that any attempt to do so in 2022 was “trying to close the gate after the horse has bolted”. An architecture born with public money and a public mandate had become a pawn in the power game between SoftBank, NVIDIA and the NASDAQ — because no one had thought, at the appropriate moment, that it was worth keeping it in public territory.</p>

<p>The end of the story: SoftBank took ARM public in September 2023, in what was the largest IPO of the year. <strong>ARM Holdings</strong> is today listed on NASDAQ with a market capitalisation of around $150 billion. Masayoshi Son is still the controlling shareholder. The fact that the acquisition attempt by the world&#39;s largest AI chip producer was blocked by regulators does not eliminate the problem — it shifts it. ARM is independent, but it is a very particular form of independence: that of a systemic infrastructure in the hands of financial investors, subject to stock-market logic, obliged to grow revenues every quarter. The uncomfortable question is: <em>what happens when the needs of a commons architecture — stable, predictable, accessible, neutral — conflict with the needs of a publicly listed company that has to raise royalties to satisfy shareholders?</em> It is not a theoretical question. ARM has systematically increased its licence fees in recent years. And the major licensees have started looking for alternatives.</p>

<h2 id="the-half-democratisation">The half-democratisation</h2>

<p>Before continuing with the critique, we have to give ARM its due. And what it deserves is considerable.</p>

<p>The <strong>Raspberry Pi</strong> — version 3 in 2017, version 5 today — costs less than eighty euros in its most recent version. It is a complete computer, capable of running <strong>Linux</strong>, a server, a media centre, a network node. It exists because the <strong>ARM</strong> architecture has made it possible to produce powerful, very low-power <strong>SoCs</strong> at costs that <strong>x86</strong> processors cannot approach. The same principle applies to the billion-plus smartphones in the hands of people in countries where a desktop PC would be an inaccessible luxury. To the microcontrollers controlling IoT sensors at a few cents each. To the embedded processors in medical devices, industrial control systems, critical infrastructure. ARM has materially <em>lowered the cost of access to computational hardware</em> on a global scale.</p>

<p>Wilson herself, looking back on the whole story, framed it with a lucidity that almost sounds like a warning:</p>

<blockquote><p>To build something new and complicated, it&#39;s not the sort of quick thing, it&#39;s a sustained effort over a long period of time. It takes many people&#39;s different inputs to make something unique and novel. Overnight success takes 30 years.</p></blockquote>

<p>Thirty years of invisible work, of architectures refined chip by chip, of licences negotiated one at a time, before the world noticed that ARM was everywhere.</p>

<p>The “democratisation” effected by ARM is real but structurally asymmetric. It has democratised access to hardware for device manufacturers — anyone can build an ARM chip by paying for a licence — but not necessarily for the end users of those devices. An <strong>iPhone</strong> — or an <strong>Android</strong> phone — carries an ARM chip designed by one company, but the end user has no access to the chip&#39;s architecture, no possibility of modifying it, no transparency about what runs at that level. The chip is ARM; the device is a closed box. This is the final contradiction: you may have the right — or almost — to manage the software running on an ARM chip, but below the <strong>kernel</strong>, below the <strong>bootloader</strong>, there is a chip whose architecture was defined in Cambridge, produced in Taiwan and integrated into a SoC designed by Broadcom, over which you can have no control. Sovereignty ends exactly where silicon begins. Those who really benefited are the oligopoly of large licensees — Apple, Qualcomm, Samsung, NVIDIA, Amazon with its Gravitons — <em>not the small Bangalore startup</em> with an idea for a specialised chip.</p>

<p>And yet — and here the story gets complicated, in an interesting way — within the narrow space the ARM licensing model concedes, someone is nevertheless trying to pull the lever of openness at the levels available. In December 2024, a Shenzhen company called <strong>Radxa</strong> announced the <strong>Radxa Orion O6</strong>, presented as the “<strong><em>World&#39;s First Open Source Arm V9 Motherboard</em></strong>”. It is a Mini-ITX board at $200 in the base version, based on the <strong>Cix CD8180</strong> SoC — an <strong>ARMv9.2</strong> chip with 12 cores (four Cortex-A720 at 2.8 GHz, four at 2.4 GHz, four Cortex-A520 at 1.8 GHz) produced by Cix Technology, a Chinese fabless company founded in 2021. Debian 12, Fedora and Ubuntu run natively on it, with UEFI EDKII firmware and SystemReady SR certification. The first Geekbench benchmarks put it at the level of an Apple M1 in single-core — not bad for an ARM board at less than a tenth of the price of a Mac mini.</p>

<p><em>Note: it is worth clarifying what “<strong>open source</strong>” means here, because it means different things at different levels. The ARMv9.2 instruction set on which the CD8180 is built is not open: Cix pays regular royalties to ARM Holdings like every other licensee. The SoC itself is not open: it is a proprietary chip, with the NPU microcode and the Mali GPU blocks all closed. What is open is the layer immediately above: the board schematics, the Board Support Package, the EDKII bootloader, the Linux kernel, the device tree — all published under free licences, replicable, modifiable.</em></p>
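<p>To make the layering concrete: on any device-tree Linux board, the open layers are precisely the ones you can inspect from userspace. A minimal sketch, assuming a standard Linux system — the paths below are generic kernel interfaces, not anything specific to Radxa or the Orion O6:</p>

```shell
# Inspect the layers of the stack that are open to you from userspace.
# 'uname -m' works on any Linux system; /proc/device-tree only exists on
# platforms booted with a device tree (most ARM and RISC-V boards).
uname -m   # CPU architecture: aarch64 on a 64-bit ARM board, x86_64 on a PC

if [ -r /proc/device-tree/model ]; then
    # The device tree -- one of the openly published layers -- tells the
    # kernel (and you) exactly which board it was booted on.
    tr -d '\0' < /proc/device-tree/model
    echo
else
    echo "no device tree exposed by this kernel"
fi
```

Everything below that line — the SoC silicon itself, the ISA licence terms — stays out of reach, which is exactly where the openness stops.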

<p>It is also a concrete demonstration of what the <strong><em>open hardware</em></strong> movement has been arguing for twenty years: openness is layered, and <em>opening one more layer than was open before is already a political act</em>, even if the foundation underneath remains closed. The fact that this board comes from China — like the RISC-V pivot we will discuss shortly — is no accident: it is consistent with a geopolitical trajectory that seeks margins of technological sovereignty wherever it is possible to extract them.</p>

<h2 id="the-linux-moment-for-hardware">The Linux moment for hardware</h2>

<p>And here RISC-V comes onstage. And the story gets more interesting.</p>

<p><strong>RISC-V</strong> was born in 2010 at the University of California, Berkeley, in the same department that had helped inspire the original RISC architecture thirty years earlier. Krste Asanović and his collaborators needed a clean processor architecture for research, without having to pay licences or ask permission. They decided to design one from scratch, and to make it completely open: no royalties, no licences, no intellectual property to respect. The RISC-V instruction set is an open standard, freely published, that anyone can implement, modify and distribute.</p>

<p>For ten years RISC-V was an academic experiment, then a nucleus of embedded adoption, then an interesting alternative for those who wanted custom chips without paying ARM. In the last two or three years the proportions have changed. The <strong>SHD Group</strong>, a market analysis firm that has been monitoring the RISC-V sector since 2019, announced at the November 2025 RISC-V Summit that the technology&#39;s market penetration had exceeded 25% — an important symbolic threshold, even if it should be taken with some caution. RISC-V International&#39;s own annual report for 2025 admits it is not entirely clear whether the 25% refers to the global microprocessor market in the strict sense or only to the segments where RISC-V already has a significant presence (embedded, IoT, microcontrollers). The SHD projection for 2031 is 33.7%. However it is measured, the trajectory is that of an architecture that is no longer a niche: it is the third pillar of computing, alongside <strong>x86</strong> and <strong>ARM</strong>.</p>

<p>The strength of RISC-V is not just technical — it is political in the most precise sense of the term. Some examples:</p>

<p><strong>The Chinese front.</strong> China has very concrete reasons not to want to depend on ARM, a company listed in New York with American shareholders. Under increasingly stringent US sanctions on advanced <strong>Intel/AMD</strong> chips, China has pivoted en masse to RISC-V — also because the RISC-V International consortium was strategically moved from Delaware to Switzerland in March 2020, formally placing it beyond the reach of unilateral American export controls. <strong>Alibaba</strong>, through its T-Head division, has released the <strong>XuanTie C920</strong> chips and successors. Smaller Chinese manufacturers are flooding the mid-market with RISC-V AI accelerators that cost significantly less than the equivalent Western ones under sanction. It is an architectural decoupling, not just a commercial one.</p>

<p><strong>The European front.</strong> The European Union, through the <strong>EU Chips Act</strong>, funds the Project DARE consortium (Digital Autonomy with RISC-V in Europe) with the explicit goal of reducing European dependence on American and British technology in critical infrastructure. Quintauris, a joint venture founded in December 2023 by Bosch, Infineon, Nordic Semiconductor, NXP and Qualcomm (with STMicroelectronics joining as a sixth shareholder in 2024), developed <strong>RT-Europa</strong> in 2025, the first RISC-V platform for real-time automotive controllers — a sector where dependence on foreign IP had become strategically intolerable.</p>

<p><strong>The Qualcomm front.</strong> In December 2025, while the Nuvia case closed yet another chapter in its dispute with ARM, Qualcomm acquired <strong>Ventana Micro Systems</strong>, one of the most advanced companies developing high-performance RISC-V cores. Literally: not only was Qualcomm fighting ARM in court, it was also buying its way out of needing ARM at all. It is the most significant move in this whole recent history, because for the first time one of the major ARM licensees has equipped itself with a credible architectural plan B.</p>

<p>Three different fronts, one and the same direction. The parallel with Linux is more than metaphorical. <strong>Linux</strong> did not kill <strong>Windows</strong> or macOS. <em>But it did create a real alternative</em> that changed the terms of power in the software industry. RISC-V aspires to do the same thing for hardware. And the critical point — the one Winner would have appreciated — is that this openness is built into the architecture itself, not guaranteed by a company&#39;s good will. You cannot buy RISC-V and “close it”. The instruction set is public by definition. You can build proprietary implementations on top of it — and many companies are doing exactly that — but the foundation remains accessible.</p>

<p>And here is the question: will RISC-V be incorporated by capitalism exactly as Linux was? The honest answer is: probably yes, and in part it already has been. The major RISC-V implementations by Apple, Google and Meta are not open source — they use the open instruction set to build proprietary architectures. The fact that the foundation is free does not mean that everything built on top of it is. The same logic Boltanski and Chiapello described applies: critique is not defeated, it is incorporated. But at least the foundation remains open. And that counts.</p>

<h2 id="conclusions-or-questions-if-you-prefer">Conclusions — or questions, if you prefer</h2>

<p>ARM was born of a public mandate and a democratisation project, and became the foundation of a private oligopoly. The chip is the same; the power structure on top of it is radically different from the one that produced it. And that chip really did lower the entry barriers for hardware producers — it produced the Raspberry Pi, the cheap phones, the microcontrollers everywhere, the more efficient datacentres — but the democratisation stopped at the gates of the production chain. The end users of those devices gained no real sovereignty over the silicon they hold in their pockets.</p>

<p><strong>NVIDIA</strong>&#39;s attempt to acquire <strong>ARM</strong> was blocked by regulators, but only because it would have concentrated power too visibly. The systemic power ARM already exercises — silently, through licences and royalties, through legal cases against those trying to step out of contractual terms — disturbs no regulator, generates no headlines, produces no parliamentary hearings. It is the kind of power that makes itself invisible precisely because it is structural: it does not lie in a decision, it lies in the conditions within which decisions are made.</p>

<p>There is also a contradiction that concerns me personally. That Raspberry Pi I had on the table — and all the ARM chips in the phones I have hacked over the years — were already, in some sense, part of a system I did not control. I changed the software on top. I did not change the power structure underneath (one could make the same argument about Intel, it goes without saying…). Digital sovereignty ends exactly where silicon begins, and pretending otherwise would be dishonest.</p>

<p><strong>RISC-V</strong> opens a real crack. Not a revolution — <em>a crack</em>. The possibility that the foundation of computing could be a commons, instead of private property subject to corporate decisions and legal battles. It does not solve the problem of closed hardware, it does not solve the problem of oligopolistic foundries, it does not solve any of the contradictions described here. But at least it does not aggravate them. It is the same logic as the open hardware movement, which for twenty years has been trying to apply to silicon what free software applied to code — with more modest results, because the physical layer is structurally more hostile to the commons: if you cannot open it, you do not really own it. And in a sector where every layer of the technology stack has been systematically fenced off, keeping the foundation open is a political act, not just a technical one.</p>

<p>What stays with me is a feeling familiar to anyone who has spent time thinking about computing as political territory. Technological choices incorporate power structures. Power structures persist long after the original choices have been forgotten. And whoever controls the basic infrastructure — the instruction set, the architecture, the licences — controls something much more important than a company: they control the rules of the game on which everything else is built.</p>

<p>The question I leave open is: in whose favour were these rules written? And by what right do they continue to apply?</p>

<hr/>

<h2 id="sources-and-further-reading">Sources and further reading</h2>

<p><strong>On the history of ARM and its origins</strong></p>
<ul><li>Garnsey, E., Lorenzoni, G., Ferriani, S. (2008). “Speciation through entrepreneurial spin-off: The Acorn-ARM story”. Research Policy, 37(2): 210-224. doi: 10.1016/j.respol.2007.11.006. The most in-depth academic study on the origin of ARM as a spin-off from Acorn and on the genesis of its IP licensing-based business model. <a href="https://www.sciencedirect.com/science/article/abs/pii/S0048733307002363">https://www.sciencedirect.com/science/article/abs/pii/S0048733307002363</a></li>
<li>Patterson, D., Ditzel, D. (1980). “The Case for the Reduced Instruction Set Computer”. ACM SIGARCH Computer Architecture News, 8(6): 25-33. The founding paper of the RISC architecture at Berkeley, which inspired the ARM project. <a href="https://dl.acm.org/doi/10.1145/641914.641917">https://dl.acm.org/doi/10.1145/641914.641917</a></li></ul>

<p><strong>On the IP licensing business model</strong></p>
<ul><li>Ferriani, S., Garnsey, E., Lorenzoni, G., Massa, L. (2015). “ARM plc and the IP Business Model”. Working Paper, Centre for Technology Management, University of Cambridge. <a href="https://www.ifm.eng.cam.ac.uk/uploads/Research/CTM/working_paper/2015-02-Ferriani-Garnsey-Lorenzoni-Massa.pdf">https://www.ifm.eng.cam.ac.uk/uploads/Research/CTM/working_paper/2015-02-Ferriani-Garnsey-Lorenzoni-Massa.pdf</a></li>
<li>Grindley, P. C., Teece, D. J. (1997). “Managing Intellectual Capital: Licensing and Cross-Licensing in Semiconductors and Electronics”. California Management Review, 39(2): 8-41.</li></ul>

<p><strong>On power in technological choices</strong></p>
<ul><li>Winner, L. (1980). “Do Artifacts Have Politics?”. Daedalus, 109(1): 121-136. <a href="https://www.cc.gatech.edu/~beki/cs4001/Winner.pdf">https://www.cc.gatech.edu/~beki/cs4001/Winner.pdf</a></li>
<li>Boltanski, L., Chiapello, È. (1999). Le nouvel esprit du capitalisme. Gallimard. (English transl. The New Spirit of Capitalism, Verso, 2005). <a href="https://www.jstor.org/stable/4201214">https://www.jstor.org/stable/4201214</a></li></ul>

<p><strong>On the Qualcomm/Nuvia case</strong></p>
<ul><li>Paul, Weiss (2025). “Qualcomm Wins Decisive Post-Trial Victory in High-Profile Licensing Dispute Against Arm”. <a href="https://www.paulweiss.com/insights/client-news/qualcomm-wins-decisive-post-trial-victory-in-high-profile-licensing-dispute-against-arm">https://www.paulweiss.com/insights/client-news/qualcomm-wins-decisive-post-trial-victory-in-high-profile-licensing-dispute-against-arm</a>. Press release of the law firm that represented Qualcomm, with summary of the 30 September 2025 ruling.</li>
<li>The Register (2025). “Judge dismisses Arm&#39;s last legal claim against Qualcomm”. <a href="https://www.theregister.com/2025/10/01/arms_last_legal_claim_against/">https://www.theregister.com/2025/10/01/arms_last_legal_claim_against/</a></li>
<li>Computerworld (2025). “Arm&#39;s high-stakes licensing suit against Qualcomm ends in mistrial, but Qualcomm prevails in key areas”. <a href="https://www.computerworld.com/article/3629812/">https://www.computerworld.com/article/3629812/</a></li></ul>

<p><strong>On the NVIDIA acquisition attempt and geopolitical implications</strong></p>
<ul><li>U.S. Federal Trade Commission (2021). Complaint in the Matter of NVIDIA Corporation and Arm Limited. <a href="https://www.ftc.gov/legal-library/browse/cases-proceedings/2110081-nvidia-corporationarm-limited">https://www.ftc.gov/legal-library/browse/cases-proceedings/2110081-nvidia-corporationarm-limited</a></li>
<li>Hauser, H. (2020). Written evidence submitted to the UK Parliament Business, Energy and Industrial Strategy Committee on the proposed acquisition of ARM by NVIDIA. Document BFA0018. <a href="https://committees.parliament.uk/writtenevidence/12711/pdf/">https://committees.parliament.uk/writtenevidence/12711/pdf/</a></li>
<li>Hauser, H. (2022). Interview with UKTN: “UK left it too late to take golden share in Arm”. <a href="https://www.uktech.news/news/government-and-policy/hermann-hauser-arm-golden-share-20220623">https://www.uktech.news/news/government-and-policy/hermann-hauser-arm-golden-share-20220623</a></li></ul>

<p><strong>On Sophie Wilson, Steve Furber and the origin of ARM1</strong></p>
<ul><li>Wilson, S. (2012). Interview with The Register: “ARM creators Sophie Wilson and Steve Furber”. <a href="https://www.theregister.com/2012/05/03/unsung_heroes_of_tech_arm_creators_sophie_wilson_and_steve_furber/">https://www.theregister.com/2012/05/03/unsung_heroes_of_tech_arm_creators_sophie_wilson_and_steve_furber/</a>. Contains Wilson&#39;s statement on low power as a complete accident.</li>
<li>Furber, S. (2010). Interview with ACM Queue: “A Conversation with Steve Furber”. <a href="https://queue.acm.org/detail.cfm?id=1716385">https://queue.acm.org/detail.cfm?id=1716385</a>. Contains the statement on Victorian engineering margins.</li>
<li>Furber, S. (2011). Interview with Communications of the ACM. <a href="https://cacm.acm.org/news/an-interview-with-steve-furber/">https://cacm.acm.org/news/an-interview-with-steve-furber/</a>. Contains the assessment on total ARM computing power on the planet.</li>
<li>Furber, S. (2017). “ARM: The architecture that conquered mobile computing”. Philosophical Transactions of the Royal Society A, 375(2104). doi: 10.1098/rsta.2017.0148.</li>
<li>Computer History Museum (2012). Fellow Award citation for Sophie Wilson and Steve Furber. <a href="https://computerhistory.org/chm-fellows/sophie-wilson/">https://computerhistory.org/chm-fellows/sophie-wilson/</a></li></ul>

<p><strong>On ARM in datacentres</strong></p>
<ul><li>Arm Holdings (2025). “Half of the Compute Shipped to Top Hyperscalers in 2025 will be Arm-based”. Arm Newsroom. <a href="https://newsroom.arm.com/blog/half-of-compute-shipped-to-top-hyperscalers-in-2025-will-be-arm-based">https://newsroom.arm.com/blog/half-of-compute-shipped-to-top-hyperscalers-in-2025-will-be-arm-based</a></li>
<li>Arm Holdings (2025). “How Arm is redefining compute through the converged AI data center”. Arm Newsroom. <a href="https://newsroom.arm.com/blog/arm-converged-ai-data-center-aws-graviton5">https://newsroom.arm.com/blog/arm-converged-ai-data-center-aws-graviton5</a></li>
<li>Omdia (2026). “Arm Steps Deeper into Silicon: Implications for the Semiconductor Value Chain”. <a href="https://omdia.tech.informa.com">https://omdia.tech.informa.com</a></li></ul>

<p><strong>On the democratisation of access to computing</strong></p>
<ul><li>Benkler, Y. (2006). The Wealth of Networks: How Social Production Transforms Markets and Freedom. Yale University Press. <a href="http://www.benkler.org/Benkler_Wealth_Of_Networks.pdf">http://www.benkler.org/Benkler_Wealth_Of_Networks.pdf</a></li>
<li>Söderberg, J. (2008). Hacking Capitalism: The Free and Open Source Software Movement. Routledge. <a href="https://downloads.gvsig.org/download/people/vagazzi/Hacking%20Capitalism.pdf">https://downloads.gvsig.org/download/people/vagazzi/Hacking%20Capitalism.pdf</a></li></ul>

<p><strong>On RISC-V and architectural sovereignty</strong></p>
<ul><li>RISC-V International (2024). RISC-V Ratified Specifications. <a href="https://riscv.org/technical/specifications/">https://riscv.org/technical/specifications/</a></li>
<li>RISC-V International (2026). Annual Report 2025. <a href="https://riscv.org/wp-content/uploads/2026/01/RISC-V-Annual-Report-2025.pdf">https://riscv.org/wp-content/uploads/2026/01/RISC-V-Annual-Report-2025.pdf</a>. The official RISC-V International annual report, with the SHD Group estimate on market penetration (33.7% projected by 2031, 25% threshold reached in 2025 in some segments).</li>
<li>Waterman, A., Asanović, K. (eds.) (2019). The RISC-V Instruction Set Manual. UC Berkeley Technical Report UCB/EECS-2019-103. <a href="https://riscv.org/wp-content/uploads/2019/12/riscv-spec-20191213.pdf">https://riscv.org/wp-content/uploads/2019/12/riscv-spec-20191213.pdf</a></li>
<li>Asanović, K., Patterson, D. A. (2014). “Instruction Sets Should Be Free: The Case for RISC-V”. EECS Department, University of California, Berkeley, Tech. Rep. UCB/EECS-2014-146.</li>
<li>Center for Security and Emerging Technology (2025). “RISC-V: What it is and Why it Matters”. <a href="https://cset.georgetown.edu/article/risc-v-what-it-is-and-why-it-matters/">https://cset.georgetown.edu/article/risc-v-what-it-is-and-why-it-matters/</a>. On the incorporation of RISC-V International in Switzerland in March 2020 and the geopolitical implications.</li>
<li>Jamestown Foundation (2025). “Examining China&#39;s Grand Strategy For RISC-V”. <a href="https://jamestown.org/program/examining-chinas-grand-strategy-for-risc-v/">https://jamestown.org/program/examining-chinas-grand-strategy-for-risc-v/</a></li>
<li>The Register (2025). “Qualcomm takes RISC on Arm alternative with Ventana buy”. <a href="https://www.theregister.com/2025/12/10/qualcomm_riscv_arm_ventana/">https://www.theregister.com/2025/12/10/qualcomm_riscv_arm_ventana/</a>. On the acquisition of Ventana Micro Systems by Qualcomm on 10 December 2025.</li>
<li>Quintauris GmbH (2023). “Five Leading Semiconductor Industry Players Incorporate New Company, Quintauris, to Drive RISC-V Ecosystem Forward”. Press release, 22 December 2023. <a href="https://www.quintauris.com">https://www.quintauris.com</a></li></ul>

<p><a href="https://remark.as/p/jolek78/arm-the-chip-we-didnt-know-we-needed">Discuss...</a></p>

<p><a href="https://jolek78.writeas.com/tag:ARM" class="hashtag"><span>#</span><span class="p-category">ARM</span></a> <a href="https://jolek78.writeas.com/tag:RISCV" class="hashtag"><span>#</span><span class="p-category">RISCV</span></a> <a href="https://jolek78.writeas.com/tag:Semiconductors" class="hashtag"><span>#</span><span class="p-category">Semiconductors</span></a> <a href="https://jolek78.writeas.com/tag:OpenHardware" class="hashtag"><span>#</span><span class="p-category">OpenHardware</span></a> <a href="https://jolek78.writeas.com/tag:SophieWilson" class="hashtag"><span>#</span><span class="p-category">SophieWilson</span></a> <a href="https://jolek78.writeas.com/tag:DigitalSovereignty" class="hashtag"><span>#</span><span class="p-category">DigitalSovereignty</span></a> <a href="https://jolek78.writeas.com/tag:IPLicensing" class="hashtag"><span>#</span><span class="p-category">IPLicensing</span></a> <a href="https://jolek78.writeas.com/tag:Computing" class="hashtag"><span>#</span><span class="p-category">Computing</span></a> <a href="https://jolek78.writeas.com/tag:SolarPunk" class="hashtag"><span>#</span><span class="p-category">SolarPunk</span></a> <a href="https://jolek78.writeas.com/tag:FOSS" class="hashtag"><span>#</span><span class="p-category">FOSS</span></a></p>

<div class="center">
· 🦣 <a href="https://fosstodon.org/@jolek78">Mastodon</a> · 📸 <a href="https://pixelfed.social/jolek78">Pixelfed</a> ·  📬 <a href="mailto:jolek78@jolek78.dev">Email</a> ·
· ☕ <a href="https://liberapay.com/jolek78">Support this work on Liberapay</a>
</div>
]]></content:encoded>
      <guid>https://jolek78.writeas.com/arm-the-chip-we-didnt-know-we-needed</guid>
      <pubDate>Sat, 02 May 2026 00:30:00 +0000</pubDate>
    </item>
    <item>
      <title>Reflections on an (impossible) escape from capitalism</title>
      <link>https://jolek78.writeas.com/reflections-on-an-impossible-escape-from-capitalism?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[It was an ordinary Friday evening. The parcel had arrived with the courier that morning, but I only opened it after dinner, with that silent ceremony I perform every time new hardware shows up - as if opening a box too quickly were a form of disrespect toward the object. Inside was a HUNSN 4K. Small, almost ridiculously small. A mini PC in a form factor that fit in the palm of a hand. I put it on the table, looked at it. Looked at it again. And then an uncomfortable thought occurred to me. I had ordered it from a Chinese reseller, paid with a credit card, through a completely traceable payment infrastructure, from one of the most centralised and surveilled commercial ecosystems in existence. To build a homelab that would let me escape centralised and surveilled ecosystems.&#xA;&#xA;!--more--&#xA;&#xA;The funny thing - funny in the sense that it makes you laugh, but badly - is that I&#39;m not alone. Every day, somewhere in the world, someone orders a mini PC, a Raspberry Pi, a managed Mikrotik switch, with the stated goal of taking back control of their digital life. They order it on Alibaba, pay with PayPal, wait for the courier. And they see nothing strange in any of this, because the contradiction is so structural it has become invisible. This article is an attempt to make it visible again. Without easy solutions, because I don&#39;t have any. And when have I ever…&#xA;&#xA;The Promise of the Homelab&#xA;&#xA;When, in 2019, I started self-hosting pretty much everything - Nextcloud (always on a Raspberry Pi, first RPi3 then RPi4), Jellyfin, Navidrome, FreshRSS, and about twenty-five other services on Proxmox LXC, each with its own isolated Docker daemon - I did it with a precise motivation: I wanted to know where my data lived, who could read it, and have the ability to switch it off myself if I ever felt like it. Not when a company decides to shut down a service, not when someone else changes the licence terms. Me. 
This came after a long period of reflection on myself, the work I was doing and still do, and the technological society I live in. It is an ideological choice before it is a technical one. Technology as a tool for autonomy rather than control; infrastructure as something you own instead of something that owns you. I hope no one is alarmed if I say that some of these reflections began with reading Theodore Kaczynski&#39;s Manifesto, before eventually landing, of course, on more authoritative sources.&#xA;&#xA;Yes, I&#39;m mad, but not quite that mad…&#xA;&#xA;When you pay a subscription to a cloud service, the transaction does not end the moment you authorise the electronic payment. Shoshana Zuboff, in The Age of Surveillance Capitalism, calls this mechanism behavioral surplus: the behavioural data extracted beyond what is needed to provide the service, then resold as predictive raw material.&#xA;&#xA;  Under the regime of surveillance capitalism, however, the first text does not stand alone; it trails a shadow close behind. The first text, full of promise, actually functions as the supply operation for the second text: the shadow text. Everything that we contribute to the first text, no matter how trivial or fleeting, becomes a target for surplus extraction. That surplus fills the pages of the second text. This one is hidden from our view: &#34;read only&#34; for surveillance capitalists. In this text our experience is dragooned as raw material to be accumulated and analyzed as means to others&#39; market ends. The shadow text is a burgeoning accumulation of behavioral surplus and its analyses, and it says more about us than we can know about ourselves. Worse still, it becomes increasingly difficult, and perhaps impossible, to refrain from contributing to the shadow text. It automatically feeds on our experience as we engage in the normal and necessary routines of social participation.&#xA;&#xA;You are not the customer of the system - you are its product. 
Your habits, your schedules, your preferences, your hesitations before clicking on something: all of this is collected, modelled, sold. The transaction is not monthly: it is continuous, invisible, and never ends as long as you use the service. With hardware, in principle, the transaction is one-time: you buy, you pay, it ends, it is yours. The disk is in your room, not on a server subject to government requests, security breaches, or business decisions that are nothing to do with you but impact your access to those services. This distinction - between a tool you use and a system that uses you - is the real stake of the homelab. It is not about saving money, it is not about performance. It is about who controls what.&#xA;&#xA;The problem is that building this infrastructure requires hardware, time, knowledge, and resources. The hardware comes from somewhere; the time, the knowledge, and the energy resources come from a privilege not granted to everyone.&#xA;&#xA;The Market I Hadn&#39;t Seen&#xA;&#xA;Search for &#34;mini PC homelab&#34; on any marketplace. What you find is a productive ecosystem that has exploded over the past five years in a way I honestly did not expect.&#xA;&#xA;MINISFORUM, Beelink, Trigkey, Geekom, GMKtec. Zimaboard, with its single-board aesthetic designed explicitly for those who want home racks. Raspberry Pi and the galaxy of clones - Orange Pi, Rock Pi, Banana Pi. Managed Mikrotik switches at accessible prices. 1U rack cases to mount under the desk. M.2 NVMe SSDs with TBW figures calculated for small-server workloads. Silent power supplies designed to run 24/7. A market built from scratch, that exists precisely because there is a community of people who want to run servers at home. r/homelab and r/selfhosted on Reddit have approximately 2.8 and 1.7 million members respectively - numbers publicly verifiable, and growing. YouTube is full of dedicated channels. 
There is an entire attention economy built around &#34;escaping&#34; the attention economy.&#xA;&#xA;But it is worth asking: who built this market, and why. MINISFORUM and Beelink do not exist out of ideological sympathy for the homelab movement. They exist because they identified a profitable segment and served it with industrial precision. Kate Crawford, in Atlas of AI, documents how technology supply chains follow niche demand with the same efficiency with which they follow mass demand: factories in Guangdong optimise production lines not for a worldview, but for a margin. The fact that the resulting product also satisfies an ideological need is, from the manufacturer&#39;s point of view, irrelevant.&#xA;&#xA;  The Victorian environmental disaster at the dawn of the global information society shows how the relations between technology and its materials, environments, and labor practices are interwoven. Just as Victorians precipitated ecological disaster for their early cables, so do contemporary mining and global supply chains further imperil the delicate ecological balance of our era.&#xA;&#xA;The mechanism had been described with theoretical precision back in 1999 by Luc Boltanski and Ève Chiapello in The New Spirit of Capitalism. Their thesis: capitalism is never defeated by criticism - it is incorporated. When a critique becomes widespread enough, the system absorbs it and transforms it into a market segment. The artistic critique of the 1960s - autonomy, authenticity, rejection of standardisation - became the marketing of the creative economy. The critique of digital centralisation - sovereignty, privacy, control - has become an online catalogue to browse through.&#xA;&#xA;Resistance has become a market segment. Every time someone buys a HUNSN to stop paying subscriptions to services they don&#39;t control, a factory in Guangdong sells a HUNSN. 
Capitalism has not been defeated - it has shifted (at least for a small slice of the population: the nerds, the hackers) the extraction point from subscriptions to hardware.&#xA;&#xA;The Accumulation Syndrome&#xA;&#xA;But there is a further level - more ridiculous and more personal - that homelab communities never discuss openly, yet anyone who has a homelab recognises immediately. The Raspberry Pi 4 bought &#34;for a project.&#34; The old ThinkPad kept because &#34;you never know.&#34; The 4TB disk salvaged from a decommissioned NAS - and &#34;it might come in handy.&#34; The second-hand switch picked up on eBay for eighteen euros because it was cheap and might be useful. The cables, the cables, the cables.&#xA;&#xA;r/homelab has a term for this: just in case hardware. It is the hardware of the imaginary future, of projects that only exist in your head, of configurations that one day - one day - you will finally test. In the meantime it occupies a shelf, draws current in standby, and generates a diffuse sense of possibility that is indistinguishable from the most classic consumerism. The underlying psychological mechanism has a precise name: compensatory consumption - consumption as a response to a perceived loss of autonomy or control. You buy hardware because buying hardware gives you the feeling of recovering agency over something. The aesthetic is different from traditional consumerism - no luxury logos, no recognisable status symbols - but the mechanism is identical.&#xA;&#xA;That said, there is a partially honest answer to all of this: the second-hand and refurbished market. The ThinkPad X230 on eBay, the Dell R720 server decommissioned from a datacentre, the disk from someone who upgraded their NAS. My ZFS NAS, to give one example, is a recycled old tower with four 1TB disks in RAIDZ - hardware that would otherwise have ended up in landfill, with a life cycle extended by years, without generating new production demand. 
It is closer to the ethics of repair than to compulsive buying. But it has its own internal contradiction: it requires even more technical competence than buying new - knowing how to assess wear, diagnose an unknown component, manage ten-year-old drivers. The barrier to entry rises further. And the refurbished market is itself now an organised commercial sector, with its own margins, its own platforms, its own pricing logic. It is not a clean way out. It is a less dirty way out.&#xA;&#xA;And then there is the energy question, which is usually ignored in homelab discussions and is instead the most uncomfortable of all - uncomfortable enough to deserve a more in-depth treatment later on. For now, suffice it to say: every machine on your shelf that &#34;draws current in standby&#34; is a line item in the energy bill that the homelab movement rarely accounts for.&#xA;&#xA;Not for Everyone. And It Should Not Be This Way.&#xA;&#xA;There is a second level of the paradox that is even more uncomfortable than the first. Building a homelab costs money - relatively little, but it costs. It requires physical space. It requires a decent connection. And it requires time. A lot of time. Not installation time - that is measurable, finite. The learning time that precedes everything else. To reach the point where you can build a functional infrastructure with Proxmox, LXC containers, centralised authentication, reverse proxy, automated backups - you need to have already spent years understanding how Linux works, how to reason about networks and permissions, how to read a log. I started with a Red Hat in 1997, and it took me almost thirty years to get where I am. I should know this. Yet it always escapes me. And that time did not fall from the sky. It is time I was able to dedicate because I had a certain kind of job, a certain stability, a certain amount of mental energy left at the end of the day. 
It is middle-class-with-a-stable-position time, not the time of someone working three warehouse shifts a week. Passion is not enough.&#xA;&#xA;Johan Söderberg documents this in Hacking Capitalism: the FOSS movement was born as resistance to capitalism, but reproduces within itself hierarchies of skill and merit that make it structurally exclusive. Freedom is technically available to anyone, but effective access requires resources distributed in anything but a democratic manner. Söderberg goes further than simply observing the exclusivity: the voluntary open source work produces use value - functioning software, documentation, community support - that capital then extracts as exchange value without remunerating those who produced it. Red Hat builds a billion-dollar company on a kernel written largely by volunteers. It is not just that not everyone can get in: it is that those who get in often work for someone without knowing it. The homelab inherits this problem and amplifies it.&#xA;&#xA;  The narrative of orthodox historical materialism corresponds with some very popular ideas in the computer underground. It is widely held that the infinite reproducibility of information made possible by computers (forces of production) has rendered intellectual property (relations of production, superstructure) obsolete. The storyline of post-industrial ideology is endorsed but with a different ending. Rather than culminating in global markets, technocracy and liberalism, as Daniel Bell and the futurists would have it; hackers are looking forward to a digital gift economy and high-tech anarchism. In a second turn of events, hackers have jumped on the distorted remains of Marxism presented in information-age literature, and, while missing out on the vocabulary, ended up promoting an upgraded Karl Kautsky-version of historical materialism.&#xA;&#xA;This is not a quirk of the homelab movement: it is a recurring structure in every technological wave. 
Langdon Winner, in his influential essay Do Artifacts Have Politics?, argued that technological choices are never neutral - they incorporate power structures, distribute access in non-random ways. Amateur radio in the 1920s, the personal computer in the 1980s, the internet in the 1990s: every time the promise was democratising, every time the actual distribution followed the lines of pre-existing privilege. Not out of malice, but out of structure. The irony is this: those who would most need digital autonomy - those who cannot afford subscriptions, those who live under governments that surveil communications, those most exposed to data collection - are exactly those least likely to be able to build a homelab. Not for lack of interest or intelligence. For lack of time, money, and years of privileged exposure to technology.&#xA;&#xA;Homelab communities do not usually talk about this. They talk about which mini PC to buy, how to optimise energy consumption, which distro to use as a base. The conversation about structural exclusivity exists, but at the margins - in Jacobin, in Logic Magazine, in EFF activism - while the centre of the discourse remains impermeable. It is not that no one speaks about it: it is that the peripheries speak about it, and the peripheries do not set the agenda. This entire conversation takes place in a room to which not everyone has a ticket. And those inside do not seem to find that particularly problematic.&#xA;&#xA;A Technological Cosplay?&#xA;&#xA;So is the whole thing a con? Is the homelab just anti-capitalist cosplay while you continue to fund the same supply chains? In part, yes.&#xA;&#xA;The HUNSN 4K was designed in China, assembled in China, shipped by container on ships burning bunker fuel. Global maritime transport is responsible for approximately 2.5% of global CO₂ emissions - a share that the IMO (International Maritime Organization) has been trying to reduce for years with slow progress and targets continually postponed. 
Then: distributed through Alibaba, paid with a credit card. Every piece of technology hardware carries an extractive chain that begins in lithium mines in Bolivia and cobalt mines in the Democratic Republic of the Congo, passes through factories in Guangdong, and ends in electronic waste processing centres in Ghana. The hardware travels that supply chain exactly like any other consumer device. Furthermore, hardware has a lifecycle. In five years the HUNSN 4K will be too slow, or it will break, or something will come out with energy efficiency too much better to ignore. And I will buy again. The mini PC market for homelabs depends on the obsolescence of previous purchases - exactly like any other consumer market.&#xA;&#xA;The critique of capitalism, when it is widespread enough, is not suppressed - it is incorporated. The system absorbs the values of resistance and transforms them into a market segment. Autonomy becomes a selling point. Decentralisation becomes a brand. The rebel who wanted to exit the system finds himself funding a new vertical of the same system, convinced he is making an ethical choice.&#xA;&#xA;The Counter-Shot&#xA;&#xA;But there is a structural difference that would be dishonest to ignore.&#xA;&#xA;When you pay a subscription to a cloud service, the cost is not just the monthly fee. It is the continuous cession of data, behaviours, habits. It is the behavioral surplus Zuboff talks about: you are not using a service, you are being used as raw material to train models, build profiles, sell advertising. The transaction never ends, in ways you often cannot see and cannot escape from as long as you use the service.&#xA;&#xA;With hardware, the transaction ends. The data stays on a physical disk in your room, not on a server subject to government requests, breaches, or business decisions that have nothing to do with you but impact your life. The software running on it - Proxmox, Debian, Nextcloud, Jellyfin - is open source; you can modify it. 
If something changes in a way you cannot accept, you can leave. This resilience has real value - but it is worth noting that it is asymmetric resilience: it works for those who have the skills to exercise it. For those who do not, the theoretical portability of their data from Nextcloud to something else requires exactly the same skills we have already identified as the barrier to entry. The freedom to leave is real. Access to that freedom, much less so.&#xA;&#xA;And then there is the energy question, which I have deferred long enough. The major hyperscalers - AWS, Google, Azure - operate with a PUE (Power Usage Effectiveness) between 1.1 and 1.2. For every watt of useful computation they dissipate barely 0.1–0.2 watts in heat and infrastructure. They have enormous economies of scale, optimised industrial cooling, significant investments in renewable energy, and above all: their servers run at very high utilisation rates. Almost always busy.&#xA;&#xA;A homelab works in a radically different way. The machine runs 24/7 even when it is doing nothing - and for most of the time it is doing nothing. Navidrome serving three requests a day, FreshRSS fetching every hour, an LDAP container listening without ever receiving a connection. You are paying the energy cost of the infrastructure regardless of usage. The implicit PUE of a homelab, calculated honestly on the ratio between total consumption and actual workload, is much worse than that of a datacentre. IEA data (Data Centres and Data Transmission Networks, updated annually) shows that large cloud providers progressively improve energy efficiency thanks to economies of scale that no individual homelab can replicate. The flip side is that the same growth in demand that makes economies of scale possible negates the efficiency gains: Amazon&#39;s absolute emissions increased between 2023 and 2024 despite improved PUE. Efficiency improves. Total consumption grows anyway.
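The &#34;implicit PUE&#34; claim is easier to see with a toy calculation. A minimal sketch in Python, where every figure is an assumption for illustration - an idle draw of around 8 W, a hypothetical 15 W under load, one hour a day of genuinely useful work - rather than a measurement:

```python
# Toy comparison, all figures are assumptions: a mini PC idling at ~8 W,
# peaking at a hypothetical ~15 W, doing roughly one hour of genuinely
# useful work per day.
IDLE_W, LOAD_W = 8.0, 15.0
USEFUL_HOURS_PER_DAY = 1.0

total_wh = IDLE_W * (24 - USEFUL_HOURS_PER_DAY) + LOAD_W * USEFUL_HOURS_PER_DAY
useful_wh = LOAD_W * USEFUL_HOURS_PER_DAY

# Analogue of PUE: total energy consumed over energy spent on useful work.
implicit_pue = total_wh / useful_wh
hyperscaler_pue = 1.15  # mid-range of the published 1.1-1.2 figures

print(f"homelab implicit ratio: {implicit_pue:.1f}")
print(f"hyperscaler PUE:        {hyperscaler_pue:.2f}")
```

Under these assumptions the homelab&#39;s implicit ratio sits an order of magnitude above the datacentre&#39;s 1.1–1.2 - and yet, at hyperscale, efficiency improves while total consumption grows anyway.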
This is Jevons&#39; Paradox: energy efficiency, instead of reducing consumption, increases it, because it lowers the marginal cost of use and stimulates demand that grows faster than the efficiency gains.&#xA;&#xA;  Note: The comparison is not as linear as the numbers suggest. PUE measures the internal efficiency of a datacentre, not the energy cost of the network traffic that data generates every time it leaves it - traffic that a homelab eliminates almost completely for internal services. Nor does it measure proportion: AWS is efficient at delivering services to millions of users, but that scale says nothing about the real cost of storing fifty gigabytes of personal data on a server designed for loads a thousand times greater. A HUNSN N100 in idle consumes less than 8 watts. The honest energy comparison is not homelab vs hyperscaler in the abstract - it is homelab vs proportional share of hyperscaler for your specific workload, a calculation that nobody can make with publicly available data.&#xA;&#xA;This does not automatically mean that the cloud is the ethically correct choice - the problem does not reduce to PUE, and surveillance has costs that are not measured in kilowatts. It means that anyone with SolarPunk values who chooses the homelab must reckon with a real contradiction: the choice of sovereignty may be, watt for watt, energetically more costly than the system one wants to escape. I have no clean answer, but ignoring the question would be dishonest. Söderberg acknowledges that the FOSS movement has produced concrete and undeniable gains - they simply are not enough, on their own, to subvert the dynamics of informational capitalism.&#xA;&#xA;In short: this is not a critique of the homelab, but it is a critique of the homelab presented as a sufficient revolutionary act.&#xA;&#xA;What Happens at Eleven PM - and Beyond&#xA;&#xA;That night, with the HUNSN 4K on the table, I pressed on. I installed Proxmox. I configured the network. 
I started bringing up containers one by one. And at some point - three hours had passed, I had three terminals open and was debugging nslcd to centralise LDAP authentication across all the containers - I realised something: I was doing all of this simply because I enjoyed it. Not to resist something. Not to advance an ideological agenda. Because there was a problem to solve and solving it gave me satisfaction. Mihaly Csikszentmihalyi describes this state in Flow as total absorption in a task calibrated to one&#39;s own competencies: time expands, attention narrows, awareness of context vanishes. It is not motivation - it is something more immediate. Debugging an authentication problem at eleven at night on a system I could have chosen not to build is, neuropsychologically, indistinguishable from pleasure. Not the satisfaction of having finished: the process itself. Moreover, for an AuDHD person like me, going into hyperfocus allows me to lose my sense of time entirely, and to literally escape from a world I viscerally loathe.&#xA;&#xA;Ah - you had not figured that out yet?&#xA;&#xA;When I had finished and closed everything, the satisfaction was still there. Along with a mildly uncomfortable awareness: I could probably have used a hosted service, lived just as well, and not lost three hours of a weeknight. But in the meantime I had understood how PAM worked, I had read documentation I had never opened before, I had implemented it on my homelab, I had learned something I hadn&#39;t known I wanted to know.&#xA;&#xA;And here the circle closes in a somewhat unsettling way. Söderberg speaks of voluntary open source work as the production of pure use value - the intrinsic pleasure of doing, understanding, building something that works.
But it is exactly this use value that capital then extracts as exchange value: the competence I accumulate debugging LDAP at eleven at night is the same competence I bring to work the next day, that I put into articles like this one, that I share in communities where others use it to build their own homelabs. Technical pleasure is not neutral. It has a production chain. Not always visible, but real.&#xA;&#xA;This is what the homelab is, at least for me: a way of learning that produces, as a side effect, an infrastructure I control. The ideology is there, but it comes second. First comes the pleasure of understanding how something works. Or rather: ideology and pleasure are interchangeable, and often run in parallel - but this does not resolve any of the contradictions I described above. It leaves them all standing, in fact makes them stranger. Am I resisting capitalism, or am I just cultivating an expensive hobby with a political aesthetic?&#xA;&#xA;The Hacker Ethic&#xA;&#xA;The word &#34;hacker&#34; has had bad press for decades. In 1990s news bulletins it was a synonym for a hooded cybercriminal; in the jargon of security companies it became a marketing term to prepend to anything. Neither has much to do with what the word historically means. Steven Levy, in Hackers: Heroes of the Computer Revolution, reconstructs the culture that formed around the MIT and Stanford labs in the 1960s: a community of programmers for whom code was an aesthetic object, access to information a moral principle, and technical competence the only legitimate hierarchy. The principles Levy identifies as the &#34;hacker ethic&#34; are precise: access to computers - and to anything that can teach you how the world works - should be unlimited and total. All information should be free. Decentralised systems are preferable to centralised ones. Hackers should be judged by what they produce, not by titles, age, race, or position. 
You can create art and beauty with a computer.&#xA;&#xA;It is not a political manifesto in the traditional sense. It is something more visceral - a disposition toward the world, a way of standing before a system you do not yet understand: the correct response is to take it apart, understand how it works, and put it back together better than before.&#xA;&#xA;Pekka Himanen, in The Hacker Ethic and the Spirit of the Information Age - with a preface by Linus Torvalds and an epilogue by Manuel Castells, which already says something about the project&#39;s ambition - performs a more explicit theoretical operation. He builds the hacker ethic in direct opposition to the Protestant work ethic described by Max Weber: where Weber saw work as duty, discipline as virtue, and leisure as absence of production, Himanen identifies in the hacker a figure who works out of passion, considers play an integral part of work, and rejects the sharp separation between productive time and free time. The hacker does not work for money - money is a side effect, when it comes. They work because the problem is interesting. Because the elegant solution has value in itself. Because understanding how something works is, in and of itself, sufficient.&#xA;&#xA;  Hacker activity is also joyful. It often has its roots in playful explorations. Torvalds has described, in messages on the Net, how Linux began to expand from small experiments with the computer he had just acquired. 
In the same messages, he has explained his motivation for developing Linux by simply stating that &#34;it was/is fun working on it.&#34; Tim Berners-Lee, the man behind the Web, also describes how this creation began with experiments in linking what he called &#34;play programs.&#34; Wozniak relates how many characteristics of the Apple computer &#34;came from a game, and the fun features that were built in were only to do one pet project, which was to program … [a game called] Breakout and show it off at the club.&#34;&#xA;&#xA;Recognise something? I do. Those three hours debugging nslcd at eleven at night were not work in the Weberian sense - nobody was paying me, nobody had asked me to do it, there was no corporate objective to reach. They were hacking in the precise sense that Levy and Himanen describe: exploration motivated by curiosity, with the infrastructure as an object of study as much as of utility. The homelab is, culturally, a direct expression of the hacker ethic. It is no coincidence that homelab communities and open source communities overlap almost perfectly, that they use the same language, the same platforms, the same values. But here, as elsewhere in this article, the story gets complicated.&#xA;&#xA;The hacker ethic promises a pure meritocracy: you are judged by what you can do, not by who you are. It is an attractive idea. It is also, in practice, a partial fiction. Technical meritocracy presupposes that everyone starts from the same point - that skills are accessible to anyone who really wants to acquire them, that the time to acquire them is distributed equally, that mentorship networks and learning resources are available regardless of context. The homelab as hacker practice inherits both things: the genuine nature of curiosity as a driver, and structural exclusivity as an undeclared side effect. The pleasure of taking a system apart to understand how it works is real and should not be devalued. 
But that pleasure is available, in practice, to those who already have the ticket.&#xA;&#xA;Conclusions&#xA;&#xA;The HUNSN 4K runs, alongside the other &#34;little electronic contraptions,&#34; on a rack next to my armchair - the one where, at the end of the day, I indulge my guilty pleasure of reading a book in the company of my cats. Proxmox, the Nextcloud server, the ZFS NAS, a small MINISFORUM box running Ollama with some local open-weight LLM models, a Raspberry Pi 5 running the Tor Relay, and a HUNSN RJ15 with pfSense controlling incoming and outgoing traffic. An infrastructure, in short, that allows me to have something resembling digital sovereignty within the limits of the possible. The contradictions I have described do not resolve. They are held together, with effort, as any intellectually complex position on a complex system must be held together.&#xA;&#xA;The first: the market that made the accessible homelab possible is the same market the homelab is supposed to emancipate us from. If this explosion of affordable, efficient mini PCs had not happened - if capitalism had not decided to build exactly what we wanted - how many of us would have taken the same path? How much of our &#34;ethical choice&#34; depends on the existence of products designed and sold precisely for us?&#xA;&#xA;The second: does incorporated resistance truly lose its force, or does it remain resistance even when someone profits from it? Boltanski and Chiapello describe the incorporation mechanism, but do not argue that critique loses all effectiveness in the process. Perhaps the homelab is simultaneously a product of the system and a real, if partial, form of withdrawal from it. The two things are not mutually exclusive.&#xA;&#xA;The third: if digital autonomy requires decades of accumulated skills, enough free time to use them, and enough money to buy the hardware, are we building a democratic alternative? 
Or are we building an exclusive club with a rebel aesthetic, reproducing the same hierarchies of privilege it claims to want to fight?&#xA;&#xA;The fourth: the energy question has no clean answer, and Jevons&#39; Paradox makes it even more uncomfortable - because it works in both directions. The cloud improves efficiency and increases total consumption. A homelab consumes proportionally more, but does not fuel the demand that drives that total consumption upwards. Are we building digital sovereignty, or are we simply choosing where to position ourselves within a contradiction that cannot be resolved at the individual level?&#xA;&#xA;I don&#39;t know. But at least I know where my data is.&#xA;&#xA;Fun Fact&#xA;&#xA;This article was written in Markdown using a Flatnotes instance running as a CT container on Proxmox, while listening to a symphonic metal playlist served by Navidrome - another CT container - pulling OGG files from a ZFS NAS over an NFS share. The cited books were in EPUB format on Calibre Web. In the background, Nextcloud on a Raspberry Pi 4 was syncing and backing up everything. Spelling mistakes were corrected by Qwen2.5, an LLM model served by Ollama on the MINISFORUM box, accessible locally via oterm and Open WebUI. And all of this, controlled from a laptop running Linux.&#xA;&#xA;Coincidences? I don&#39;t think so.&#xA;&#xA;Discuss: https://remark.as/p/jolek78/reflections-on-an-impossible-escape-from-capitalism&#xA;&#xA;#Homelab #SelfHosted #SurveillanceCapitalism #Privacy #OpenSource #HackerEthic #SolarPunk #DigitalSovereignty #FOSS #Linux&#xA;&#xA;· 🦣 Mastodon: https://fosstodon.org/@jolek78 · 📸 Pixelfed: https://pixelfed.social/jolek78 · 📬 Email: jolek78@jolek78.dev · ☕ Support this work on Liberapay: https://liberapay.com/jolek78 ·]]&gt;</description>
      <content:encoded><![CDATA[<p>It was an ordinary Friday evening. The parcel had arrived with the courier that morning, but I only opened it after dinner, with that silent ceremony I perform every time new hardware shows up – as if opening a box too quickly were a form of disrespect toward the object. Inside was a HUNSN 4K. Small, almost ridiculously small. A mini PC in a form factor that fit in the palm of a hand. I put it on the table, looked at it. Looked at it again. And then an uncomfortable thought occurred to me. I had ordered it from a Chinese reseller, paid with a credit card, through a completely traceable payment infrastructure, from one of the most centralised and surveilled commercial ecosystems in existence. To build a homelab that would let me escape centralised and surveilled ecosystems.</p>



<p>The funny thing – funny in the sense that it makes you laugh, but badly – is that I&#39;m not alone. Every day, somewhere in the world, someone orders a mini PC, a Raspberry Pi, a managed Mikrotik switch, with the stated goal of taking back control of their digital life. They order it on Alibaba, pay with PayPal, wait for the courier. And they see nothing strange in any of this, because the contradiction is so structural it has become invisible. This article is an attempt to make it visible again. Without easy solutions, because I don&#39;t have any. And when have I ever…</p>

<h2 id="the-promise-of-the-homelab">The Promise of the Homelab</h2>

<p>When, in 2019, I started self-hosting pretty much everything – Nextcloud (on a Raspberry Pi throughout, first an RPi3, then an RPi4), Jellyfin, Navidrome, FreshRSS, and about twenty-five other services on Proxmox LXC, each with its own isolated Docker daemon – I did it with a precise motivation: I wanted to know where my data lived, who could read it, and have the ability to switch it off myself if I ever felt like it. Not when a company decides to shut down a service, not when someone else changes the licence terms. Me. This came after a long period of reflection on myself, the work I was doing and still do, and the technological society I live in. It is an ideological choice before it is a technical one. Technology as a tool for autonomy rather than control; infrastructure as something you own instead of something that owns you. I hope no one is alarmed if I say that some of these reflections began with reading Theodore Kaczynski&#39;s Manifesto, before eventually landing, of course, on more authoritative sources.</p>
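<p>The pattern described above – one LXC container per service, each able to run its own Docker daemon – can be sketched with Proxmox&#39;s <code>pct</code> tool. The container ID, template name, storage, and resource figures here are illustrative placeholders, not an actual configuration:</p>

```shell
# One unprivileged LXC container per service, with nesting enabled so it
# can host its own isolated Docker daemon. ID, template, storage and
# resource figures are placeholders for illustration.
pct create 110 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --hostname navidrome \
  --unprivileged 1 \
  --features nesting=1,keyctl=1 \
  --cores 2 --memory 1024 --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp

pct start 110

# Install Docker inside the container: from here on, this service's
# containers are isolated from every other CT on the host.
pct exec 110 -- sh -c "apt-get update && apt-get install -y docker.io"
```

<p>The point of <code>nesting=1</code> (plus <code>keyctl=1</code> for unprivileged containers) is exactly the layout in the paragraph above: a misbehaving Docker daemon in one CT cannot take the other services down with it.</p>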

<p>Yes, I&#39;m mad, but not quite that mad…</p>

<p>When you pay a subscription to a cloud service, the transaction does not end the moment you authorise the electronic payment. Shoshana Zuboff, in <em>The Age of Surveillance Capitalism</em>, calls this mechanism <em>behavioral surplus</em>: the behavioural data extracted beyond what is needed to provide the service, then resold as predictive raw material.</p>

<blockquote><p>Under the regime of surveillance capitalism, however, the first text does not stand alone; it trails a shadow close behind. The first text, full of promise, actually functions as the supply operation for the second text: the shadow text. Everything that we contribute to the first text, no matter how trivial or fleeting, becomes a target for surplus extraction. That surplus fills the pages of the second text. This one is hidden from our view: “read only” for surveillance capitalists. In this text our experience is dragooned as raw material to be accumulated and analyzed as means to others&#39; market ends. The shadow text is a burgeoning accumulation of behavioral surplus and its analyses, and it says more about us than we can know about ourselves. Worse still, it becomes increasingly difficult, and perhaps impossible, to refrain from contributing to the shadow text. It automatically feeds on our experience as we engage in the normal and necessary routines of social participation.</p></blockquote>

<p>You are not the customer of the system – you are its product. Your habits, your schedules, your preferences, your hesitations before clicking on something: all of this is collected, modelled, sold. The transaction is not monthly: it is continuous, invisible, and never ends as long as you use the service. With hardware, in principle, the transaction is one-time: you buy, you pay, it ends, it is yours. The disk is in your room, not on a server subject to government requests, security breaches, or business decisions that have nothing to do with you but impact your access to those services. This distinction – between a tool you use and a system that uses you – is the real stake of the homelab. It is not about saving money, it is not about performance. It is about who controls what.</p>

<p>The problem is that building this infrastructure requires hardware, time, knowledge, and resources. The hardware comes from somewhere; the time, the knowledge, and the energy resources come from a privilege not granted to everyone.</p>

<h2 id="the-market-i-hadn-t-seen">The Market I Hadn&#39;t Seen</h2>

<p>Search for “mini PC homelab” on any marketplace. What you find is a productive ecosystem that has exploded over the past five years in a way I honestly did not expect.</p>

<p>MINISFORUM, Beelink, Trigkey, Geekom, GMKtec. Zimaboard, with its single-board aesthetic designed explicitly for those who want home racks. Raspberry Pi and the galaxy of clones – Orange Pi, Rock Pi, Banana Pi. Managed Mikrotik switches at accessible prices. 1U rack cases to mount under the desk. M.2 NVMe SSDs with TBW figures calculated for small-server workloads. Silent power supplies designed to run 24/7. A market built from scratch, one that exists precisely because there is a community of people who want to run servers at home. r/homelab and r/selfhosted on Reddit have approximately 2.8 and 1.7 million members respectively – numbers publicly verifiable, and growing. YouTube is full of dedicated channels. There is an entire attention economy built around “escaping” the attention economy.</p>

<p>But it is worth asking: who built this market, and why. MINISFORUM and Beelink do not exist out of ideological sympathy for the homelab movement. They exist because they identified a profitable segment and served it with industrial precision. Kate Crawford, in <em>Atlas of AI</em>, documents how technology supply chains follow niche demand with the same efficiency with which they follow mass demand: factories in Guangdong optimise production lines not for a worldview, but for a margin. The fact that the resulting product also satisfies an ideological need is, from the manufacturer&#39;s point of view, irrelevant.</p>

<blockquote><p>The Victorian environmental disaster at the dawn of the global information society shows how the relations between technology and its materials, environments, and labor practices are interwoven. Just as Victorians precipitated ecological disaster for their early cables, so do contemporary mining and global supply chains further imperil the delicate ecological balance of our era.</p></blockquote>

<p>The mechanism had been described with theoretical precision back in 1999 by Luc Boltanski and Ève Chiapello in <em>The New Spirit of Capitalism</em>. Their thesis: capitalism is never defeated by criticism – it is incorporated. When a critique becomes widespread enough, the system absorbs it and transforms it into a market segment. The artistic critique of the 1960s – autonomy, authenticity, rejection of standardisation – became the marketing of the creative economy. The critique of digital centralisation – sovereignty, privacy, control – has become an online catalogue to browse through.</p>

<p>Resistance has become a market segment. Every time someone buys a HUNSN to stop paying subscriptions to services they don&#39;t control, a factory in Guangdong sells a HUNSN. Capitalism has not been defeated – it has shifted (at least for a small slice of the population: the nerds, the hackers) the extraction point from subscriptions to hardware.</p>

<h2 id="the-accumulation-syndrome">The Accumulation Syndrome</h2>

<p>But there is a further level – more ridiculous and more personal – that homelab communities never discuss openly, yet anyone who has a homelab recognises immediately. The Raspberry Pi 4 bought “for a project.” The old ThinkPad kept because “you never know.” The 4TB disk salvaged from a decommissioned NAS because “it might come in handy.” The second-hand switch picked up on eBay for eighteen euros because it was cheap and might be useful. The cables, the cables, the cables.</p>

<p>r/homelab has a term for this: <em>just in case hardware</em>. It is the hardware of the imaginary future, of projects that only exist in your head, of configurations that one day – one day – you will finally test. In the meantime it occupies a shelf, draws current in standby, and generates a diffuse sense of possibility that is indistinguishable from the most classic consumerism. The underlying psychological mechanism has a precise name: <em>compensatory consumption</em> – consumption as a response to a perceived loss of autonomy or control. You buy hardware because buying hardware gives you the feeling of recovering agency over something. The aesthetic is different from traditional consumerism – no luxury logos, no recognisable status symbols – but the mechanism is identical.</p>

<p>That said, there is a partially honest answer to all of this: the second-hand and refurbished market. The ThinkPad X230 on eBay, the Dell R720 server decommissioned from a datacentre, the disk from someone who upgraded their NAS. My ZFS NAS, to give one example, is a recycled old tower with four 1TB disks in RAIDZ – hardware that would otherwise have ended up in landfill, with a life cycle extended by years, without generating new production demand. It is closer to the ethics of repair than to compulsive buying. But it has its own internal contradiction: it requires even more technical competence than buying new – knowing how to assess wear, diagnose an unknown component, manage ten-year-old drivers. The barrier to entry rises further. And the refurbished market is itself now an organised commercial sector, with its own margins, its own platforms, its own pricing logic. It is not a clean way out. It is a less dirty way out.</p>

<p>And then there is the energy question, which is usually ignored in homelab discussions and is instead the most uncomfortable of all – uncomfortable enough to deserve a more in-depth treatment later on. For now, suffice it to say: every machine on your shelf that “draws current in standby” is a line item in the energy bill that the homelab movement rarely accounts for.</p>

<h2 id="not-for-everyone-and-it-should-not-be-this-way">Not for Everyone. And It Should Not Be This Way.</h2>

<p>There is a second level of the paradox that is even more uncomfortable than the first. Building a homelab costs money – relatively little, but it costs. It requires physical space. It requires a decent connection. And it requires time. A lot of time. Not installation time – that is measurable, finite. The learning time that precedes everything else. To reach the point where you can build a functional infrastructure with Proxmox, LXC containers, centralised authentication, reverse proxy, automated backups – you need to have already spent years understanding how Linux works, how to reason about networks and permissions, how to read a log. I started with a Red Hat in 1997, and it took me almost thirty years to get where I am. I should know this. Yet it always escapes me. And that time did not fall from the sky. It is time I was able to dedicate because I had a certain kind of job, a certain stability, a certain amount of mental energy left at the end of the day. It is middle-class-with-a-stable-position time, not the time of someone working three warehouse shifts a week. Passion is not enough.</p>

<p>Johan Söderberg documents this in <em>Hacking Capitalism</em>: the FOSS movement was born as resistance to capitalism, but reproduces within itself hierarchies of skill and merit that make it structurally exclusive. Freedom is technically available to anyone, but effective access requires resources distributed in anything but a democratic manner. Söderberg goes further than simply observing the exclusivity: the voluntary open source work produces use value – functioning software, documentation, community support – that capital then extracts as <em>exchange value</em> without remunerating those who produced it. Red Hat builds a billion-dollar company on a kernel written largely by volunteers. It is not just that not everyone can get in: it is that those who get in often work for someone without knowing it. The homelab inherits this problem and amplifies it.</p>

<blockquote><p>The narrative of orthodox historical materialism corresponds with some very popular ideas in the computer underground. It is widely held that the infinite reproducibility of information made possible by computers (forces of production) has rendered intellectual property (relations of production, superstructure) obsolete. The storyline of post-industrial ideology is endorsed but with a different ending. Rather than culminating in global markets, technocracy and liberalism, as Daniel Bell and the futurists would have it; hackers are looking forward to a digital gift economy and high-tech anarchism. In a second turn of events, hackers have jumped on the distorted remains of Marxism presented in information-age literature, and, while missing out on the vocabulary, ended up promoting an upgraded Karl Kautsky-version of historical materialism.</p></blockquote>

<p>This is not a quirk of the homelab movement: it is a recurring structure in every technological wave. Langdon Winner, in his influential essay <em>Do Artifacts Have Politics?</em>, argued that technological choices are never neutral – they incorporate power structures, distribute access in non-random ways. Amateur radio in the 1920s, the personal computer in the 1980s, the internet in the 1990s: every time the promise was democratising, every time the actual distribution followed the lines of pre-existing privilege. Not out of malice, but out of structure. The irony is this: those who would most need digital autonomy – those who cannot afford subscriptions, those who live under governments that surveil communications, those most exposed to data collection – are exactly those least likely to be able to build a homelab. Not for lack of interest or intelligence. For lack of time, money, and years of privileged exposure to technology.</p>

<p>Homelab communities do not usually talk about this. They talk about which mini PC to buy, how to optimise energy consumption, which distro to use as a base. The conversation about structural exclusivity exists, but at the margins – in Jacobin, in Logic Magazine, in EFF activism – while the centre of the discourse remains impermeable. It is not that no one speaks about it: it is that the peripheries speak about it, and the peripheries do not set the agenda. This entire conversation takes place in a room to which not everyone has a ticket. And those inside do not seem to find that particularly problematic.</p>

<h2 id="a-technological-cosplay">A Technological Cosplay?</h2>

<p>So is the whole thing a con? Is the homelab just anti-capitalist cosplay while you continue to fund the same supply chains? In part, yes.</p>

<p>The HUNSN 4K was designed in China, assembled in China, shipped by container on ships burning bunker fuel. Global maritime transport is responsible for approximately 2.5% of global CO₂ emissions – a share that the IMO (International Maritime Organization) has been trying to reduce for years with slow progress and targets continually postponed. The box was then distributed through Alibaba and paid for with a credit card. Every piece of technology hardware carries an extractive chain that begins in lithium mines in Bolivia and cobalt mines in the Democratic Republic of the Congo, passes through factories in Guangdong, and ends in electronic waste processing centres in Ghana. The hardware travels that supply chain exactly like any other consumer device. Furthermore, hardware has a lifecycle. In five years the HUNSN 4K will be too slow, or it will break, or something will come out with energy efficiency so much better that it cannot be ignored. And I will buy again. The mini PC market for homelabs depends on the obsolescence of previous purchases – exactly like any other consumer market.</p>

<p>The critique of capitalism, when it is widespread enough, is not suppressed – it is incorporated. The system absorbs the values of resistance and transforms them into a market segment. Autonomy becomes a selling point. Decentralisation becomes a brand. The rebel who wanted to exit the system finds himself funding a new vertical of the same system, convinced he is making an ethical choice.</p>

<h2 id="the-counter-shot">The Counter-Shot</h2>

<p>But there is a structural difference that would be dishonest to ignore.</p>

<p>When you pay a subscription to a cloud service, the cost is not just the monthly fee. It is the continuous cession of data, behaviours, habits. It is the behavioral surplus Zuboff talks about: you are not using a service, you are being used as raw material to train models, build profiles, sell advertising. The transaction never ends, in ways you often cannot see and from which you cannot escape as long as you use the service.</p>

<p>With hardware, the transaction ends. The data stays on a physical disk in your room, not on a server subject to government requests, breaches, or business decisions that have nothing to do with you but impact your life. The software running on it – Proxmox, Debian, Nextcloud, Jellyfin – is open source; you can modify it. If something changes in a way you cannot accept, you can leave. This resilience has real value – but it is worth noting that it is asymmetric resilience: it works for those who have the skills to exercise it. For those who do not, the theoretical portability of their data from Nextcloud to something else requires exactly the same skills we have already identified as the barrier to entry. The freedom to leave is real. Access to that freedom, much less so.</p>

<p>And then there is the energy question, which I have deferred long enough. The major hyperscalers – AWS, Google, Azure – operate with a PUE (Power Usage Effectiveness) between 1.1 and 1.2. For every watt of useful computation they dissipate barely 0.1–0.2 watts in heat and infrastructure. They have enormous economies of scale, optimised industrial cooling, significant investments in renewable energy, and above all: their servers run at very high utilisation rates. Almost always busy.</p>

<p>A homelab works in a radically different way. The machine runs 24/7 even when it is doing nothing – and for most of the time it is doing nothing. Navidrome serving three requests a day, FreshRSS fetching every hour, an LDAP container sitting listening without receiving connections. You are paying the energy cost of the infrastructure regardless of usage. The implicit PUE of a homelab, calculated honestly on the ratio between total consumption and actual workload, is much worse than that of a datacentre. IEA data (<em>Data Centres and Data Transmission Networks</em>, updated annually) shows that large cloud providers progressively improve energy efficiency thanks to economies of scale that no individual homelab can replicate. The flip side is that the same growth in demand that makes economies of scale possible negates the efficiency gains: Amazon&#39;s absolute emissions increased between 2023 and 2024 despite improved PUE. Efficiency improves. Total consumption grows anyway. This is Jevons&#39; Paradox: energy efficiency, instead of reducing consumption, increases it, because it lowers the marginal cost of use and stimulates demand that grows faster than the efficiency gains.</p>

<blockquote><p><em>Note: The comparison is not as linear as the numbers suggest. PUE measures the internal efficiency of a datacentre, not the energy cost of the network traffic that data generates every time it leaves it – traffic that a homelab eliminates almost completely for internal services. Nor does it measure proportion: AWS is efficient at delivering services to millions of users, but that scale says nothing about the real cost of storing fifty gigabytes of personal data on a server designed for loads a thousand times greater. A HUNSN N100 in idle consumes less than 8 watts. The honest energy comparison is not homelab vs hyperscaler in the abstract – it is homelab vs proportional share of hyperscaler for your specific workload, a calculation that nobody can make with publicly available data.</em></p></blockquote>
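<p>The idle-energy arithmetic above can be made concrete with a back-of-the-envelope calculation. A minimal sketch, where the 8&nbsp;W idle draw comes from the note above but the electricity tariff and the utilisation fraction are illustrative assumptions, not measurements:</p>

```python
# Back-of-the-envelope cost of an always-on homelab box.
# The 8 W idle figure matches the N100-class number cited in the text;
# the tariff and utilisation fraction are assumptions for illustration.

IDLE_WATTS = 8.0           # idle draw of a small mini PC
HOURS_PER_YEAR = 24 * 365  # running 24/7
PRICE_PER_KWH = 0.30       # assumed domestic tariff, EUR

annual_kwh = IDLE_WATTS * HOURS_PER_YEAR / 1000
annual_cost = annual_kwh * PRICE_PER_KWH

# An "effective PUE"-style ratio in the sense used above: total energy
# drawn versus energy spent on actual work. If the box does useful work
# only 5% of the time, the ratio is 1 / 0.05 = 20 - far worse than a
# hyperscaler's 1.1-1.2.
UTILISATION = 0.05         # assumed fraction of time doing real work
effective_ratio = 1 / UTILISATION

print(f"{annual_kwh:.1f} kWh/year, ~{annual_cost:.2f} EUR/year")
print(f"effective total/useful energy ratio: {effective_ratio:.0f}")
```

<p>Roughly 70&nbsp;kWh and twenty-odd euros a year: trivial as a bill, but the total-to-useful ratio is the number that makes the comparison with a busy datacentre uncomfortable.</p>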

<p>This does not automatically mean that the cloud is the ethically correct choice – the problem does not reduce to PUE, and surveillance has costs that are not measured in kilowatts. It means that anyone with SolarPunk values who chooses the homelab must reckon with a real contradiction: the choice of sovereignty may be, watt for watt, energetically more costly than the system one wants to escape. I have no clean answer, but ignoring the question would be dishonest. Söderberg acknowledges that the FOSS movement has produced concrete and undeniable gains – they simply are not enough, on their own, to subvert the dynamics of informational capitalism.</p>

<p>In short: this is not a critique of the homelab, but it is a critique of the homelab presented as a sufficient revolutionary act.</p>

<h2 id="what-happens-at-eleven-pm-and-beyond">What Happens at Eleven PM – and Beyond</h2>

<p>That night, with the HUNSN 4K on the table, I pressed on. I installed Proxmox. I configured the network. I started bringing up containers one by one. And at some point – three hours had passed, I had three terminals open and was debugging nslcd to centralise LDAP authentication across all the containers – I realised something: I was doing all of this simply because I enjoyed it. Not to resist something. Not to advance an ideological agenda. Because there was a problem to solve and solving it gave me satisfaction. Mihaly Csikszentmihalyi describes this state in <em>Flow</em> as total absorption in a task calibrated to one&#39;s own competencies: time expands, attention narrows, awareness of context vanishes. It is not motivation – it is something more immediate. Debugging an authentication problem at eleven at night on a system I could have chosen not to build is, neuropsychologically, indistinguishable from pleasure. Not the satisfaction of having finished: the process itself. Moreover, for an AuDHD person like me, going into hyperfocus allows you to lose your sense of time entirely, and to literally escape from a world you viscerally loathe.</p>

<p>Ah – you had not figured that out yet?</p>

<p>When I had finished and closed everything, the satisfaction was still there. Along with a mildly uncomfortable awareness: I could probably have used a hosted service, lived just as well, and not lost three hours of a weeknight. But in the meantime I had understood how PAM worked, I had read documentation I had never opened before, I had implemented it on my homelab, I had learned something I hadn&#39;t known I wanted to know.</p>

<p>And here the circle closes in a somewhat unsettling way. Söderberg speaks of voluntary open source work as the production of pure use value – the intrinsic pleasure of doing, understanding, building something that works. But it is exactly this use value that capital then extracts as exchange value: the competence I accumulate debugging LDAP at eleven at night is the same competence I bring to work the next day, that I put into articles like this one, that I share in communities where others use it to build their own homelabs. Technical pleasure is not neutral. It has a production chain. Not always visible, but real.</p>

<p>This is what the homelab is, at least for me: a way of learning that produces, as a side effect, an infrastructure I control. The ideology is there, but it comes second. First comes the pleasure of understanding how something works. Or rather: ideology and pleasure are interchangeable, and often run in parallel – but this does not resolve any of the contradictions I described above. It leaves them all standing, in fact makes them stranger. Am I resisting capitalism, or am I just cultivating an expensive hobby with a political aesthetic?</p>

<h2 id="the-hacker-ethic">The Hacker Ethic</h2>

<p>The word “hacker” has had bad press for decades. In 1990s news bulletins it was a synonym for a hooded cybercriminal; in the jargon of security companies it became a marketing term to prepend to anything. Neither has much to do with what the word historically means. Steven Levy, in <em>Hackers: Heroes of the Computer Revolution</em>, reconstructs the culture that formed around the MIT and Stanford labs in the 1960s: a community of programmers for whom code was an aesthetic object, access to information a moral principle, and technical competence the only legitimate hierarchy. The principles Levy identifies as the “hacker ethic” are precise: access to computers – and to anything that can teach you how the world works – should be unlimited and total. All information should be free. Decentralised systems are preferable to centralised ones. Hackers should be judged by what they produce, not by titles, age, race, or position. You can create art and beauty with a computer.</p>

<p>It is not a political manifesto in the traditional sense. It is something more visceral – a disposition toward the world, a way of standing before a system you do not yet understand: the correct response is to take it apart, understand how it works, and put it back together better than before.</p>

<p>Pekka Himanen, in <em>The Hacker Ethic and the Spirit of the Information Age</em> – with a preface by Linus Torvalds and an epilogue by Manuel Castells, which already says something about the project&#39;s ambition – performs a more explicit theoretical operation. He builds the hacker ethic in direct opposition to the Protestant work ethic described by Max Weber: where Weber saw work as duty, discipline as virtue, and leisure as absence of production, Himanen identifies in the hacker a figure who works out of passion, considers play an integral part of work, and rejects the sharp separation between productive time and free time. The hacker does not work for money – money is a side effect, when it comes. They work because the problem is interesting. Because the elegant solution has value in itself. Because understanding how something works is, in and of itself, sufficient.</p>

<blockquote><p>Hacker activity is also joyful. It often has its roots in playful explorations. Torvalds has described, in messages on the Net, how Linux began to expand from small experiments with the computer he had just acquired. In the same messages, he has explained his motivation for developing Linux by simply stating that “it was/is fun working on it.” Tim Berners-Lee, the man behind the Web, also describes how this creation began with experiments in linking what he called “play programs.” Wozniak relates how many characteristics of the Apple computer “came from a game, and the fun features that were built in were only to do one pet project, which was to program … [a game called] Breakout and show it off at the club.”</p></blockquote>

<p>Recognise something? I do. Those three hours debugging nslcd at eleven at night were not work in the Weberian sense – nobody was paying me, nobody had asked me to do it, there was no corporate objective to reach. They were hacking in the precise sense that Levy and Himanen describe: exploration motivated by curiosity, with the infrastructure as an object of study as much as of utility. The homelab is, culturally, a direct expression of the hacker ethic. It is no coincidence that homelab communities and open source communities overlap almost perfectly, that they use the same language, the same platforms, the same values. But here, as elsewhere in this article, the story gets complicated.</p>

<p>The hacker ethic promises a pure meritocracy: you are judged by what you can do, not by who you are. It is an attractive idea. It is also, in practice, a partial fiction. Technical meritocracy presupposes that everyone starts from the same point – that skills are accessible to anyone who really wants to acquire them, that the time to acquire them is distributed equally, that mentorship networks and learning resources are available regardless of context. The homelab as hacker practice inherits both things: the genuine nature of curiosity as a driver, and structural exclusivity as an undeclared side effect. The pleasure of taking a system apart to understand how it works is real and should not be devalued. But that pleasure is available, in practice, to those who already have the ticket.</p>

<h2 id="conclusions">Conclusions</h2>

<p>The HUNSN 4K runs, alongside the other “little electronic contraptions,” on a rack next to my armchair – the one where, at the end of the day, I indulge my guilty pleasure of reading a book in the company of my cats. Proxmox, the Nextcloud server, the ZFS NAS, a small MINISFORUM box running Ollama with some local open-weight LLM models, a Raspberry Pi 5 running the Tor Relay, and a HUNSN RJ15 with pfSense controlling incoming and outgoing traffic. An infrastructure, in short, that allows me to have something resembling digital sovereignty within the limits of the possible. The contradictions I have described do not resolve. They are held together, with effort, as any intellectually complex position on a complex system must be held together.</p>

<p>The first: the market that made the accessible homelab possible is the same market the homelab is supposed to emancipate us from. If this explosion of affordable, efficient mini PCs had not happened – if capitalism had not decided to build exactly what we wanted – how many of us would have taken the same path? How much of our “ethical choice” depends on the existence of products designed and sold precisely for us?</p>

<p>The second: does incorporated resistance truly lose its force, or does it remain resistance even when someone profits from it? Boltanski and Chiapello describe the incorporation mechanism, but do not argue that critique loses all effectiveness in the process. Perhaps the homelab is simultaneously a product of the system and a real, if partial, form of withdrawal from it. The two things are not mutually exclusive.</p>

<p>The third: if digital autonomy requires decades of accumulated skills, enough free time to use them, and enough money to buy the hardware, are we building a democratic alternative? Or are we building an exclusive club with a rebel aesthetic, reproducing the same hierarchies of privilege it claims to want to fight?</p>

<p>The fourth: the energy question has no clean answer, and Jevons&#39; Paradox makes it even more uncomfortable – because it works in both directions. The cloud improves efficiency and increases total consumption. A homelab consumes proportionally more, but does not fuel the demand that drives that total consumption upwards. Are we building digital sovereignty, or are we simply choosing where to position ourselves within a contradiction that cannot be resolved at the individual level?</p>

<p>I don&#39;t know. But at least I know where my data is.</p>

<h2 id="fun-fact">Fun Fact</h2>

<p>This article was written in Markdown using a Flatnotes instance running as a CT container on Proxmox, while listening to a symphonic metal playlist served by Navidrome – another CT container – pulling OGG files from a ZFS NAS over an NFS share. The cited books were in EPUB format on Calibre Web. In the background, Nextcloud on a Raspberry Pi 4 was syncing and backing up everything. Spelling mistakes were corrected by Qwen2.5, an LLM model served by Ollama on the MINISFORUM box, accessible locally via oterm and Open WebUI. And all of this, controlled from a laptop running Linux.</p>

<p>Coincidences? I don&#39;t think so.</p>

<p><a href="https://remark.as/p/jolek78/reflections-on-an-impossible-escape-from-capitalism">Discuss...</a></p>

<p><a href="https://jolek78.writeas.com/tag:Homelab" class="hashtag"><span>#</span><span class="p-category">Homelab</span></a> <a href="https://jolek78.writeas.com/tag:SelfHosted" class="hashtag"><span>#</span><span class="p-category">SelfHosted</span></a> <a href="https://jolek78.writeas.com/tag:SurveillanceCapitalism" class="hashtag"><span>#</span><span class="p-category">SurveillanceCapitalism</span></a> <a href="https://jolek78.writeas.com/tag:Privacy" class="hashtag"><span>#</span><span class="p-category">Privacy</span></a> <a href="https://jolek78.writeas.com/tag:OpenSource" class="hashtag"><span>#</span><span class="p-category">OpenSource</span></a> <a href="https://jolek78.writeas.com/tag:HackerEthic" class="hashtag"><span>#</span><span class="p-category">HackerEthic</span></a> <a href="https://jolek78.writeas.com/tag:SolarPunk" class="hashtag"><span>#</span><span class="p-category">SolarPunk</span></a> <a href="https://jolek78.writeas.com/tag:DigitalSovereignty" class="hashtag"><span>#</span><span class="p-category">DigitalSovereignty</span></a> <a href="https://jolek78.writeas.com/tag:FOSS" class="hashtag"><span>#</span><span class="p-category">FOSS</span></a> <a href="https://jolek78.writeas.com/tag:Linux" class="hashtag"><span>#</span><span class="p-category">Linux</span></a></p>

<div class="center">
· 🦣 <a href="https://fosstodon.org/@jolek78">Mastodon</a> · 📸 <a href="https://pixelfed.social/jolek78">Pixelfed</a> · 📬 <a href="mailto:jolek78@jolek78.dev">Email</a> ·
· ☕ <a href="https://liberapay.com/jolek78">Support this work on Liberapay</a>
</div>
]]></content:encoded>
      <guid>https://jolek78.writeas.com/reflections-on-an-impossible-escape-from-capitalism</guid>
      <pubDate>Sun, 05 Apr 2026 15:46:47 +0000</pubDate>
    </item>
    <item>
      <title>Kiwix: Wikipedia in your pocket</title>
      <link>https://jolek78.writeas.com/kiwix-wikipedia-in-your-pocket?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[A hackmeeting, many years ago. A conference on various open-source projects. They were talking about Kiwix. The audience seemed interested, nodding, asking questions. I sat in the back of the room with a doubt that seemed legitimate but that I didn&#39;t dare express out loud: &#34;what&#39;s the point of offline Wikipedia?&#34; I mean: the internet is everywhere. If you need to look something up on Wikipedia, you open your browser, search, read. Done. Why would anyone download gigabytes of data to consult an encyclopedia offline? It seemed like a solution in search of a problem. Something for nerds nostalgic for CD-ROM encyclopedias.&#xA;&#xA;It took me years to understand how naive I&#39;d been.&#xA;&#xA;!--more--&#xA;&#xA;Years in which I continued to follow the project from afar. Years in which I read stories of deployments in Africa, Asia, prisons, refugee camps. Years in which I understood that the internet isn&#39;t everywhere, it&#39;s a privilege, not a given. And even where it exists, it&#39;s not necessarily accessible, affordable, or free from censorship.&#xA;&#xA;Years later, when I set up my Proxmox server, one of the first containers I decided to install was Kiwix. Not because I needed it—my connection works fine, thanks for asking—but because I wanted to be part of that project, so to speak. Because I had understood that Kiwix wasn&#39;t just software. It&#39;s a philosophy. It&#39;s practical proof that another web is possible: decentralized, offline, in users&#39; hands. &#xA;&#xA;Simply a matter of fundamental rights&#xA;There&#39;s a moment in 2004 when Emmanuel Engelhart—a French computer engineer working between Germany and Switzerland—becomes a Wikipedia editor and asks himself an apparently simple question: &#34;What about those without internet access?&#34; It wasn&#39;t a rhetorical question. 
At the time, as today, billions of people lived (and live) in areas where connectivity is a luxury, where broadband is science fiction, where even a single megabyte of data costs more than a meal.&#xA;&#xA;Engelhart&#39;s answer was radical: if people can&#39;t reach Wikipedia, then Wikipedia must reach people. Even without the internet.&#xA;&#xA;You know that thing about &#34;if the mountain won&#39;t come to Muhammad...&#34;? Exactly that.&#xA;&#xA;And so, in 2007, together with Renaud Gaudin—a Malian information management expert—Engelhart launched Kiwix. Open source software that allowed downloading the entire Wikipedia (and much more) to consult it completely offline.&#xA;&#xA;In a 2014 interview, Engelhart stated:&#xA;&#xA;  The contents of Wikipedia should be available for everyone! Even without Internet access. This is why I have launched the Kiwix project. Our users are all over the world: sailors on the oceans, poor students thirsty for knowledge, globetrotters almost living in planes, world&#39;s citizens suffering from censorship or free minded prisoners. For all these people, Kiwix provides a simple and practical solution to ponder about the world.&#xA;&#xA;And:&#xA;&#xA;  Water is a common good. You understand why you have to care about water. Wikipedia is the same; it&#39;s a common good. We have to care about Wikipedia.&#xA;&#xA;Digital Sovereignty&#xA;Why talk about Kiwix today? Because it&#39;s not just a technical solution to a connectivity problem. Kiwix represents something deeper: digital sovereignty in its purest form.&#xA;&#xA;While projects like Mastodon, Matrix, Lemmy, and Pixelfed create distributed networks—many nodes communicating with each other in federation—Kiwix goes beyond, or perhaps beneath, depending on your perspective. It&#39;s so radically independent that it doesn&#39;t even need a network. It&#39;s local. Completely. 
A single Kiwix installation is an autonomous island that communicates with nothing and no one.&#xA;&#xA;No federation, no peer-to-peer, no cloud.&#xA;&#xA;You have Wikipedia on your Raspberry Pi? It&#39;s yours—or rather, it&#39;s yours thanks to the contribution of all Wikipedians. It works without internet, without external dependencies. You can copy it to a USB stick and give it to someone else. You can take it to the middle of the ocean, the desert, Antarctica. You can share it on a local computer network. And it will work. Always. The data is on your hardware, under your physical control.&#xA;&#xA;The birth of the project&#xA;Kiwix&#39;s 2007 launch didn&#39;t happen with grand announcements or marketing campaigns. It was open source software, released under GPL license, developed by two enthusiasts. That&#39;s it.&#xA;&#xA;The technological heart of the project was (and is) the ZIM format—&#34;Zeno IMproved&#34;—an open source archive format optimized for wiki-style content. Highly compressed, easily indexable, designed to be searchable even without connection. All of Wikipedia&#39;s content is converted to static HTML, compressed into ZIM, and made available for download.&#xA;&#xA;To give you an idea of scale: the entire English Wikipedia—6.4 million articles, images included—takes up about 97 GB in ZIM format. Seems like a lot? The sum of all human knowledge now fits on a microSD card that costs 15 euros. On a 1TB portable hard drive you can put Wikipedia in ten different languages, the entire Project Gutenberg library, all TED talks, complete Stack Exchange, and you&#39;ll still have space left over.&#xA;&#xA;Between 2007 and 2011, the team also released three CD/DVD versions with article selections. 
Today they seem like archaeological artifacts, but at the time they were the solution for bringing Wikipedia to African schools where the internet simply didn&#39;t exist.&#xA;&#xA;The XULRunner problem and the rebirth&#xA;Like every serious open source project, Kiwix had its &#34;winter.&#34; Between 2014 and 2020, the software disappeared from many Linux distribution repositories. The reason? XULRunner, the Mozilla framework Kiwix was based on, was deprecated and removed from package databases.&#xA;&#xA;For six years, Kiwix was technically &#34;dead&#34; for many Linux users. But the community didn&#39;t give up. The team worked to completely rethink the software&#39;s architecture, rewrite it from scratch, and modernize it. When it reemerged in 2020, it was stronger than before: progressive WebApp, browser extensions, native mobile support, Raspberry Pi integration.&#xA;&#xA;It&#39;s the usual open source story: an obstacle that would seem fatal becomes an opportunity to improve and grow. How many proprietary companies would have simply shut down? But in open source, software doesn&#39;t die as long as the code is available and someone believes in it.&#xA;&#xA;Where Kiwix saves lives (not hyperbole)&#xA;Numbers are important, but it&#39;s the stories that make us truly understand a project&#39;s impact.&#xA;&#xA;Kenya: the Thika Alumni Trust&#xA;In 2015, seven friends who had studied together in the &#39;60s at a high school in Thika return for a visit. The principal asks for help: they need 50 computers to create a lab. The problem? The internet connection is 100 kbps. Useless.&#xA;&#xA;The solution was to create completely offline digital learning environments using Kiwix. Today, that project has transformed education in 61 schools throughout Kenya, reaching over 70,000 children. They&#39;ve installed 164 microservers running Kiwix—probably one of the largest networks in the world.&#xA;&#xA;The results? 
In primary schools where the Trust operates, national exam results improved from 8 to 12%. In special needs units, where absenteeism reached 50%, attendance now exceeds 90%.&#xA;&#xA;Mary Mungai, principal of a school with special needs units, says: &#34;All our children have benefited tremendously from the digital libraries. We have children who refused to attend classes but now do so faithfully, some who couldn&#39;t read or write but now do very well on computers.&#34;&#xA;&#xA;Ghana: the Kiwix4Schools Project&#xA;In 2019, four Ghanaian students from Ashesi University launched Kiwix4Schools with a simple goal: bring digital education to rural schools. They installed Kiwix on 15 Raspberry Pi devices, reaching 2,000 students in 15 schools.&#xA;&#xA;The impact was immediate. Teachers reported students staying after school to explore content. Children who had never touched a computer were navigating Wikipedia articles. Science class changed completely when students could look up experiments, see diagrams, understand concepts beyond what the single available textbook offered.&#xA;&#xA;India: Internet blackouts and censorship&#xA;In 2019-2020, the Indian government imposed internet blackouts in Kashmir—the longest in a democracy&#39;s history. For months, millions of people were cut off from the digital world. Hospitals, schools, businesses paralyzed.&#xA;&#xA;But those who had Kiwix continued accessing medical information, educational content, technical documentation. It wasn&#39;t a complete solution, but it was a lifeline. It demonstrated that offline access isn&#39;t just for poor countries—it&#39;s a resilience tool even in developed nations with unstable political situations.&#xA;&#xA;The ZIM format: open everything&#xA;The genius of Kiwix lies in the ZIM format. It&#39;s not just a compression format—it&#39;s an open standard specifically designed for offline content distribution. Any developer can create ZIM files, any software can read them. 
There&#39;s no vendor lock-in, no proprietary license.&#xA;&#xA;But ZIM isn&#39;t just for Wikipedia. Today ZIM archives exist for:&#xA;&#xA;Project Gutenberg (50,000+ public domain books)&#xA;Stack Exchange (all sites, all Q&amp;As)&#xA;TED Talks (thousands of videos with subtitles)&#xA;Khan Academy&#xA;Ubuntu documentation&#xA;Arch Wiki&#xA;WikiMed (medical encyclopedia, used by 100,000 doctors and students)&#xA;&#xA;The format is completely open, documented, and anyone can create ZIM archives of their content. It&#39;s the open source spirit in its purest form.&#xA;&#xA;Everything works&#xA;In 2018, Kiwix formalized collaboration with the Wikimedia Foundation, receiving $275,000 to improve offline access. In 2023, came a $250,000 grant from the Wikimedia Endowment.&#xA;&#xA;Stephane Coillet-Matillon, Kiwix CEO, in December 2018 declared:&#xA;&#xA;  Our hope is that one day everyone will have access to the internet, and eliminate the need for other offline methods of access to information. But we know that there are still serious gaps in internet access globally that require solutions today. Kiwix is a tool to start fixing things right now.&#xA;&#xA;Today, in 2025:&#xA;&#xA;Over 10 million users in more than 220 countries&#xA;More than 10,000 websites crawled regularly&#xA;Available on all platforms: Android, iOS, Windows, macOS, Linux&#xA;Browser extensions for Firefox, Chrome, Edge&#xA;Partnership with Orange Foundation to reach 500,000 children in West Africa&#xA;&#xA;You can explore the entire catalog at library.kiwix.org.&#xA;&#xA;The philosophy behind the code&#xA;Here we arrive at the heart of the matter. Why is Kiwix important? Not just because it works, not just because it&#39;s helped millions of people. But because it represents a way of thinking about technology.&#xA;&#xA;Kiwix is:&#xA;&#xA;Open Source: all code on GitHub, GPL license. 
Anyone can study it, modify it, improve it.&#xA;Completely local: doesn&#39;t depend on central servers, cloud, or connections. Each installation is autonomous.&#xA;Privacy-first: no tracking, no telemetry, no data sent to third parties. Impossible—it&#39;s offline.&#xA;Community-driven: developed by volunteers, funded by donations.&#xA;Accessible: designed to work even on old or limited hardware.&#xA;&#xA;It&#39;s the antithesis of the Big Tech model. There&#39;s no company controlling access, no centralized database of who reads what, no algorithms deciding which information to show you. It&#39;s technology as it should be: serving the user, before corporations transformed it into a machine for extracting data and selling advertising.&#xA;&#xA;A &#34;dangerous&#34; precedent&#xA;There&#39;s an interesting paradox. Kiwix exists because the internet isn&#39;t accessible to everyone. But its success demonstrates that maybe we don&#39;t even need it to be—at least not the way we conceive it now.&#xA;&#xA;Think about it: if I can have Wikipedia, Stack Exchange, Project Gutenberg, Khan Academy on a 128GB SD card, why should I depend on an always-on internet connection? If I can sync updates once a month when I pass by the library with WiFi, why should I pay 50 euros a month for a home connection?&#xA;&#xA;Kiwix demonstrates that the &#34;always connected, always online, always tracked&#34; model isn&#39;t the only possible one. That an alternative exists where knowledge is local, accessible, controllable. The monopoly isn&#39;t inevitable.&#xA;&#xA;And this, for Big Tech, is dangerous. Because if people realize they can access information without going through Google, without being tracked, without seeing ads... well, the entire business model collapses. It&#39;s also no secret that the entire streaming model—everything, no one excluded: Spotify, YouTube, Netflix, etc.—is ecologically unsustainable. 
Downloading once and playing a thousand times (locally) is less wasteful than downloading zero times and playing a thousand times (remotely). If it can be done for Wikipedia, TED Talks, and Project Gutenberg, it can be done for everything else.&#xA;&#xA;But the biggest challenge remains the same: making Kiwix known. Because the software exists, works, is free. But how many people know they can have Wikipedia in their pocket without the internet? How many African schools know they can have a complete digital library for the cost of a Raspberry Pi?&#xA;&#xA;Conclusions: what I learned&#xA;Innovation often doesn&#39;t come from Silicon Valley. It comes from a young French engineer working in Germany asking a simple question. It comes from developers scattered around the world contributing in their free time. It comes from the community, not corporations.&#xA;&#xA;Open source works. Kiwix is almost twenty years old, has overcome technical crises that would have killed a proprietary project, has continued to grow with ridiculous budgets. Why? Because the community believes in it. Because the code is open. Because the mission is clear.&#xA;&#xA;Technology is political. Deciding that knowledge must be accessible offline is a political choice. Deciding to use open source licenses is a political choice. Deciding not to track users is a political choice.&#xA;&#xA;Kiwix shows us an alternative. That we don&#39;t have to choose between functionality and ethics. That another web is possible.&#xA;&#xA;And now, if you&#39;ll excuse me, I&#39;m going to add a Python ZIM library to my Kiwix container, because I&#39;m studying it—or rather, &#34;I have to study it&#34;—for a bunch of small projects I have in mind. 
AI server included.&#xA;&#xA;#Kiwix #SmallWeb #DigitalSovereignty #OpenSource #Wikipedia #Offline #Privacy #Education #Africa&#xA;&#xA;a href=&#34;https://remark.as/p/jolek78/kiwix-wikipedia-in-your-pocket&#34;Discuss.../a&#xA;&#xA;div class=&#34;center&#34;&#xD;&#xA;· 🦣 a href=&#34;https://fosstodon.org/@jolek78&#34;Mastodon/a · 📸 a href=&#34;https://pixelfed.social/jolek78&#34;Pixelfed/a ·  📬 a href=&#34;mailto:jolek78@jolek78.dev&#34;Email/a ·&#xD;&#xA;· ☕ a href=&#34;https://liberapay.com/jolek78&#34;Support this work on Liberapay/a&#xD;&#xA;/div]]&gt;</description>
      <content:encoded><![CDATA[<p>A hackmeeting, many years ago. A conference on various open-source projects. They were talking about <a href="https://kiwix.org">Kiwix</a>. The audience seemed interested, nodding, asking questions. I sat in the back of the room with a doubt that seemed legitimate but that I didn&#39;t dare express out loud: “what&#39;s the point of offline Wikipedia?” I mean: the internet is everywhere. If you need to look something up on Wikipedia, you open your browser, search, read. Done. Why would anyone download gigabytes of data to consult an encyclopedia offline? It seemed like a solution in search of a problem. Something for nerds nostalgic for CD-ROM encyclopedias.</p>

<p>It took me years to understand how naive I&#39;d been.</p>



<p>Years in which I continued to follow the project from afar. Years in which I read stories of deployments in Africa, Asia, prisons, refugee camps. Years in which I understood that the internet isn&#39;t everywhere, it&#39;s a privilege, not a given. And even where it exists, it&#39;s not necessarily accessible, affordable, or free from censorship.</p>

<p>Years later, when I set up my Proxmox server, one of the first containers I decided to install was Kiwix. Not because I needed it—my connection works fine, thanks for asking—but because I wanted to be part of that project, so to speak. Because I had understood that Kiwix wasn&#39;t just software. It&#39;s a philosophy. It&#39;s practical proof that another web is possible: decentralized, offline, in users&#39; hands.</p>

<h3 id="simply-a-matter-of-fundamental-rights">Simply a matter of fundamental rights</h3>

<p>There&#39;s a moment in 2004 when Emmanuel Engelhart—a French computer engineer working between Germany and Switzerland—becomes a Wikipedia editor and asks himself an apparently simple question: “What about those without internet access?” It wasn&#39;t a rhetorical question. Then, as now, billions of people live in areas where connectivity is a luxury, where broadband is science fiction, where even a single megabyte of data costs more than a meal.</p>

<p>Engelhart&#39;s answer was radical: if people can&#39;t reach Wikipedia, then Wikipedia must reach people. Even without the internet.</p>

<p>You know that thing about “if the mountain won&#39;t come to Muhammad...”? Exactly that.</p>

<p>And so, in 2007, together with Renaud Gaudin—a Malian information management expert—Engelhart launched Kiwix: open source software for downloading the entire Wikipedia (and much more) and consulting it completely offline.</p>

<p>In a <a href="https://diff.wikimedia.org/2014/09/12/emmanuel-engelhart-inventor-of-kiwix/">2014 interview</a>, Engelhart stated:</p>

<blockquote><p>The contents of Wikipedia should be available for everyone! Even without Internet access. This is why I have launched the Kiwix project. Our users are all over the world: sailors on the oceans, poor students thirsty for knowledge, globetrotters almost living in planes, world&#39;s citizens suffering from censorship or free minded prisoners. For all these people, Kiwix provides a simple and practical solution to ponder about the world.</p></blockquote>

<p>And:</p>

<blockquote><p>Water is a common good. You understand why you have to care about water. Wikipedia is the same; it&#39;s a common good. We have to care about Wikipedia.</p></blockquote>

<h3 id="digital-sovereignty">Digital Sovereignty</h3>

<p>Why talk about Kiwix today? Because it&#39;s not just a technical solution to a connectivity problem. Kiwix represents something deeper: digital sovereignty in its purest form.</p>

<p>While projects like Mastodon, Matrix, Lemmy, and Pixelfed create distributed networks—many nodes communicating with each other in federation—Kiwix goes beyond, or perhaps beneath, depending on your perspective. It&#39;s so radically independent that it doesn&#39;t even need a network. It&#39;s local. Completely. A single Kiwix installation is an autonomous island that communicates with nothing and no one.</p>

<p>No federation, no peer-to-peer, no cloud.</p>

<p>You have Wikipedia on your Raspberry Pi? It&#39;s yours—or rather, it&#39;s yours <em>thanks to the contribution</em> of all Wikipedians. It works without internet, without external dependencies. You can copy it to a USB stick and give it to someone else. You can take it to the middle of the ocean, the desert, Antarctica. You can share it on a local computer network. And it will work. Always. The data is on your hardware, under your physical control.</p>
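<p>Sharing on a local network, in practice, goes through <code>kiwix-serve</code>, the small web server shipped with the kiwix-tools package. A minimal sketch, assuming a ZIM file has already been downloaded (the file name below is a placeholder):</p>

```shell
# Serve a ZIM archive to every device on the LAN (file name is hypothetical).
kiwix-serve --port=8080 wikipedia_en_all_maxi.zim
# Then open http://<server-ip>:8080 from any browser on the same network.
```

<p>No account, no cloud, no tracking: the server and the data live entirely on your hardware.</p>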

<h3 id="the-birth-of-the-project">The birth of the project</h3>

<p>Kiwix&#39;s 2007 launch didn&#39;t happen with grand announcements or marketing campaigns. It was open source software, released under GPL license, developed by two enthusiasts. That&#39;s it.</p>

<p>The technological heart of the project was (and is) the ZIM format—“Zeno IMproved”—an open source archive format optimized for wiki-style content. Highly compressed, easily indexable, designed to be searchable even without a connection. All of Wikipedia&#39;s content is converted to static HTML, compressed into ZIM, and made available for download.</p>

<p>To give you an idea of scale: the entire English Wikipedia—6.4 million articles, images included—takes up about 97 GB in ZIM format. Seems like a lot? The sum of all human knowledge now fits on a microSD card that costs 15 euros. On a 1TB portable hard drive you can put Wikipedia in ten different languages, the entire Project Gutenberg library, all TED talks, complete Stack Exchange, and you&#39;ll still have space left over.</p>
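<p>The arithmetic behind those figures is worth spelling out. A back-of-envelope sketch in Python, using the article&#39;s rounded numbers:</p>

```python
# Average size per article in the English Wikipedia ZIM, using the rounded
# figures quoted above: 6.4 million articles in roughly 97 GB.
ARTICLES = 6_400_000
ZIM_BYTES = 97 * 10**9

bytes_per_article = ZIM_BYTES / ARTICLES
print(f"~{bytes_per_article / 1000:.0f} kB per article")  # → ~15 kB per article
```

<p>Roughly fifteen kilobytes per article, images included: that is why a cheap microSD card is enough.</p>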

<p>Between 2007 and 2011, the team also released three CD/DVD versions with article selections. Today they seem like archaeological artifacts, but at the time they were the solution for bringing Wikipedia to African schools where the internet simply didn&#39;t exist.</p>

<h3 id="the-xulrunner-problem-and-the-rebirth">The XULRunner problem and the rebirth</h3>

<p>Like every serious open source project, Kiwix had its “winter.” Between 2014 and 2020, the software disappeared from many Linux distribution repositories. The reason? XULRunner, the Mozilla framework Kiwix was built on, had been deprecated and dropped from the repositories.</p>

<p>For six years, Kiwix was technically “dead” for many Linux users. But the community didn&#39;t give up. The team completely rethought the software&#39;s architecture, rewrote it from scratch, and modernized it. When it reemerged in 2020, it was stronger than before: a progressive web app, browser extensions, native mobile support, Raspberry Pi integration.</p>

<p>It&#39;s the usual open source story: an obstacle that seemed fatal becomes an opportunity to improve and grow. How many proprietary companies would have simply shut down? But in open source, software doesn&#39;t die as long as the code is available and someone believes in it.</p>

<h3 id="where-kiwix-saves-lives-not-hyperbole">Where Kiwix saves lives (not hyperbole)</h3>

<p>Numbers are important, but it&#39;s the stories that make us truly understand a project&#39;s impact.</p>

<h4 id="kenya-the-thika-alumni-trust">Kenya: the Thika Alumni Trust</h4>

<p>In 2015, seven friends who had studied together in the &#39;60s at a high school in Thika returned for a visit. The principal asked for help: the school needed 50 computers to create a lab. The problem? The internet connection was 100 kbps. Useless.</p>

<p>The solution was to create completely offline digital learning environments using Kiwix. Today, that project has transformed education in 61 schools throughout Kenya, reaching over 70,000 children. They&#39;ve installed 164 microservers running Kiwix—probably one of the largest offline-library networks in the world.</p>

<p>The results? In primary schools where the Trust operates, national exam results improved by 8 to 12%. In special needs units, where absenteeism reached 50%, attendance now exceeds 90%.</p>

<p>Mary Mungai, principal of a school with special needs units, says: “All our children have benefited tremendously from the digital libraries. We have children who refused to attend classes but now do so faithfully, some who couldn&#39;t read or write but now do very well on computers.”</p>

<h4 id="ghana-the-kiwix4schools-project">Ghana: the Kiwix4Schools Project</h4>

<p>In 2019, four Ghanaian students from Ashesi University launched Kiwix4Schools with a simple goal: bring digital education to rural schools. They installed Kiwix on 15 Raspberry Pi devices, reaching 2,000 students in 15 schools.</p>

<p>The impact was immediate. Teachers reported students staying after school to explore content. Children who had never touched a computer were navigating Wikipedia articles. Science class changed completely when students could look up experiments, see diagrams, understand concepts beyond what the single available textbook offered.</p>

<h4 id="india-internet-blackouts-and-censorship">India: Internet blackouts and censorship</h4>

<p>In 2019–2020, the Indian government imposed internet blackouts in Kashmir—the longest internet shutdown in a democracy&#39;s history. For months, millions of people were cut off from the digital world. Hospitals, schools, businesses were paralyzed.</p>

<p>But those who had Kiwix continued accessing medical information, educational content, technical documentation. It wasn&#39;t a complete solution, but it was a lifeline. It demonstrated that offline access isn&#39;t just for poor countries—it&#39;s a resilience tool even in developed nations with unstable political situations.</p>

<h3 id="the-zim-format-open-everything">The ZIM format: open everything</h3>

<p>The genius of Kiwix lies in the <a href="https://wiki.openzim.org">ZIM format</a>. It&#39;s not just a compression format—it&#39;s an open standard specifically designed for offline content distribution. Any developer can create ZIM files, any software can read them. There&#39;s no vendor lock-in, no proprietary license.</p>

<p>But ZIM isn&#39;t just for Wikipedia. Today ZIM archives exist for:</p>
<ul><li>Project Gutenberg (50,000+ public domain books)</li>
<li>Stack Exchange (all sites, all Q&amp;As)</li>
<li>TED Talks (thousands of videos with subtitles)</li>
<li>Khan Academy</li>
<li>Ubuntu documentation</li>
<li>Arch Wiki</li>
<li>WikiMed (medical encyclopedia, used by 100,000 doctors and students)</li></ul>

<p>The format is completely open, documented, and anyone can create ZIM archives of their content. It&#39;s the open source spirit in its purest form.</p>
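<p>To make that concrete: turning a directory of static HTML into a ZIM archive is essentially a one-command job with <code>zimwriterfs</code> from the openzim tooling. A hedged sketch—paths and metadata are placeholders, and exact flag names vary between zim-tools releases, so check <code>zimwriterfs --help</code>:</p>

```shell
# Package a static-HTML site as a ZIM archive (all values are hypothetical).
zimwriterfs --welcome=index.html \
            --favicon=favicon.png \
            --language=eng \
            --title="My site, offline" \
            --description="Offline copy of my site" \
            --creator="me" \
            --publisher="me" \
            ./my-site/ my-site.zim
```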

<h3 id="everything-works">Everything works</h3>

<p>In 2018, Kiwix formalized its collaboration with the Wikimedia Foundation, receiving $275,000 to improve offline access. A $250,000 grant from the Wikimedia Endowment followed in 2023.</p>

<p>Stephane Coillet-Matillon, Kiwix&#39;s CEO, declared in <a href="https://wikimediafoundation.org/news/2018/12/21/kiwix-is-connecting-the-unconnected/">December 2018</a>:</p>

<blockquote><p>Our hope is that one day everyone will have access to the internet, and eliminate the need for other offline methods of access to information. But we know that there are still serious gaps in internet access globally that require solutions today. Kiwix is a tool to start fixing things right now.</p></blockquote>

<p>Today, in 2025:</p>
<ul><li>Over 10 million users in more than 220 countries and territories</li>
<li>More than 10,000 websites crawled regularly</li>
<li>Available on all platforms: Android, iOS, Windows, macOS, Linux</li>
<li>Browser extensions for Firefox, Chrome, Edge</li>
<li>Partnership with Orange Foundation to reach 500,000 children in West Africa</li></ul>

<p>You can explore the entire catalog at <a href="https://library.kiwix.org/">library.kiwix.org</a>.</p>

<h3 id="the-philosophy-behind-the-code">The philosophy behind the code</h3>

<p>Here we arrive at the heart of the matter. Why is Kiwix important? Not just because it works, not just because it&#39;s helped millions of people. But because it represents a way of thinking about technology.</p>

<p>Kiwix is:</p>
<ul><li><strong>Open Source</strong>: all code on GitHub, GPL license. Anyone can study it, modify it, improve it.</li>
<li><strong>Completely local</strong>: doesn&#39;t depend on central servers, cloud, or connections. Each installation is autonomous.</li>
<li><strong>Privacy-first</strong>: no tracking, no telemetry, no data sent to third parties. Impossible—it&#39;s offline.</li>
<li><strong>Community-driven</strong>: developed by volunteers, funded by donations.</li>
<li><strong>Accessible</strong>: designed to work even on old or limited hardware.</li></ul>

<p>It&#39;s the antithesis of the Big Tech model. There&#39;s no company controlling access, no centralized database of who reads what, no algorithms deciding which information to show you. It&#39;s technology as it should be: serving the user, before corporations transformed it into a machine for extracting data and selling advertising.</p>

<h3 id="a-dangerous-precedent">A “dangerous” precedent</h3>

<p>There&#39;s an interesting paradox. Kiwix exists because the internet isn&#39;t accessible to everyone. But its success demonstrates that maybe we don&#39;t even need it to be—at least not the way we conceive it now.</p>

<p>Think about it: if I can have Wikipedia, Stack Exchange, Project Gutenberg, and Khan Academy on a 128 GB SD card, why should I depend on an always-on internet connection? If I can sync updates once a month when I pass by a library with WiFi, why should I pay 50 euros a month for a home connection?</p>

<p>Kiwix demonstrates that the “always connected, always online, always tracked” model isn&#39;t the only possible one. That an alternative exists where knowledge is local, accessible, controllable. The monopoly isn&#39;t inevitable.</p>

<p>And this, for Big Tech, is dangerous. Because if people realize they can access information without going through Google, without being tracked, without seeing ads... well, the entire business model collapses. It&#39;s also no secret that the entire streaming model—all of it, none excluded: Spotify, YouTube, Netflix, and the rest—is ecologically unsustainable. Downloading once and playing a thousand times (locally) is less wasteful than downloading zero times and playing a thousand times (remotely). If it can be done for Wikipedia, TED Talks, and Project Gutenberg, it can be done for everything else.</p>

<p>But the biggest challenge remains the same: making Kiwix known. Because the software exists, works, is free. But how many people know they can have Wikipedia in their pocket without the internet? How many African schools know they can have a complete digital library for the cost of a Raspberry Pi?</p>

<h3 id="conclusions-what-i-learned">Conclusions: what I learned</h3>

<p>Innovation often doesn&#39;t come from Silicon Valley. It comes from a young French engineer working in Germany asking a simple question. It comes from developers scattered around the world contributing in their free time. It comes from the community, not corporations.</p>

<p>Open source works. Kiwix is almost twenty years old, has overcome technical crises that would have killed a proprietary project, and has kept growing on ridiculously small budgets. Why? Because the community believes in it. Because the code is open. Because the mission is clear.</p>

<p>Technology is political. Deciding that knowledge must be accessible offline is a political choice. Deciding to use open source licenses is a political choice. Deciding not to track users is a political choice.</p>

<p>Kiwix shows us an alternative. That we don&#39;t have to choose between functionality and ethics. That another web is possible.</p>

<p>And now, if you&#39;ll excuse me, I&#39;m going to add a Python ZIM library to my Kiwix container, because I&#39;m studying it—or rather, “I have to study it”—for a bunch of small projects I have in mind. AI server included.</p>
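<p>For the record, the library in question is presumably python-libzim, the openzim Python binding. A minimal, hedged sketch of opening an archive—the file name is a placeholder, and attribute names may differ between libzim releases:</p>

```python
# Sketch: summarize a ZIM archive with python-libzim ("pip install libzim").
# The file name is a placeholder; download a real ZIM from library.kiwix.org.
from pathlib import Path

ZIM_PATH = Path("wikipedia_en_all.zim")

def describe_zim(path: Path) -> str:
    """Return a one-line summary of a ZIM archive, or a notice if it is absent."""
    if not path.exists():
        return f"{path} not found; grab one from library.kiwix.org"
    from libzim.reader import Archive  # imported lazily so the sketch runs anywhere
    archive = Archive(str(path))
    return f"{path}: {archive.entry_count} entries, main page {archive.main_entry.title!r}"

print(describe_zim(ZIM_PATH))
```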

<p><a href="https://jolek78.writeas.com/tag:Kiwix" class="hashtag"><span>#</span><span class="p-category">Kiwix</span></a> <a href="https://jolek78.writeas.com/tag:SmallWeb" class="hashtag"><span>#</span><span class="p-category">SmallWeb</span></a> <a href="https://jolek78.writeas.com/tag:DigitalSovereignty" class="hashtag"><span>#</span><span class="p-category">DigitalSovereignty</span></a> <a href="https://jolek78.writeas.com/tag:OpenSource" class="hashtag"><span>#</span><span class="p-category">OpenSource</span></a> <a href="https://jolek78.writeas.com/tag:Wikipedia" class="hashtag"><span>#</span><span class="p-category">Wikipedia</span></a> <a href="https://jolek78.writeas.com/tag:Offline" class="hashtag"><span>#</span><span class="p-category">Offline</span></a> <a href="https://jolek78.writeas.com/tag:Privacy" class="hashtag"><span>#</span><span class="p-category">Privacy</span></a> <a href="https://jolek78.writeas.com/tag:Education" class="hashtag"><span>#</span><span class="p-category">Education</span></a> <a href="https://jolek78.writeas.com/tag:Africa" class="hashtag"><span>#</span><span class="p-category">Africa</span></a></p>

<p><a href="https://remark.as/p/jolek78/kiwix-wikipedia-in-your-pocket">Discuss...</a></p>

<div class="center">
· 🦣 <a href="https://fosstodon.org/@jolek78">Mastodon</a> · 📸 <a href="https://pixelfed.social/jolek78">Pixelfed</a> ·  📬 <a href="mailto:jolek78@jolek78.dev">Email</a> ·
· ☕ <a href="https://liberapay.com/jolek78">Support this work on Liberapay</a>
</div>
]]></content:encoded>
      <guid>https://jolek78.writeas.com/kiwix-wikipedia-in-your-pocket</guid>
      <pubDate>Thu, 18 Dec 2025 14:46:00 +0000</pubDate>
    </item>
    <item>
      <title>ChatGPT didn&#39;t invent anything.</title>
      <link>https://jolek78.writeas.com/chatgpt-didnt-invent-anything?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[When the world woke up astonished in November 2022 to this &#34;magical&#34; chatbot, few realized that this magic was the result of decades of research. The history of artificial intelligence begins in 1943, when Warren McCulloch and Walter Pitts proposed the first mathematical model of an artificial neuron. In 1956, at the Dartmouth Conference, John McCarthy coined the term &#34;Artificial Intelligence&#34; and the discipline was officially born.&#xA;&#xA;The &#39;60s and &#39;70s were characterized by excessive optimism: people thought strong AI was just around the corner. Two &#34;AI winters&#34; followed – periods when funding disappeared and research slowed – because promises weren&#39;t materializing. But some continued working in the shadows. Geoffrey Hinton, Yann LeCun, Yoshua Bengio – those we now call the &#34;godfathers of deep learning&#34; – continued their studies on neural networks when no one believed in them anymore.&#xA;&#xA;!--more--&#xA;&#xA;The real breakthrough came with three converging factors: computational power (GPUs), enormous amounts of data, and better algorithms. In 2012, AlexNet won the ImageNet Challenge by an overwhelming margin, demonstrating that deep learning really worked. From there, an unstoppable acceleration.&#xA;&#xA;Once upon a time in the Carboniferous...&#xA;Before ChatGPT exploded, my only knowledge of AI came from science fiction books. Philip K. Dick and his reflections on what it means to be human. Cyberpunk in general, with its technological dystopias. Gibson&#39;s Sprawl trilogy, where AIs live in cyberspace like digital deities. Those pages were my only window to a future that seemed incredibly distant.&#xA;&#xA;When I hosted the podcast Caccia al Fotone (a nice thing, but now belonging to the Carboniferous period...), I delved deeper into the subject. I read several papers published on arXiv and dedicated two episodes to AI development. 
In 2019, during the pandemic period, I devoured &#34;Artificial Intelligence: A Guide for Thinking Humans&#34; by Melanie Mitchell – a book that also helped me write a &#34;thing&#34; (those who know, know; those who don&#39;t, never mind...) on the evolution of computer systems and surveillance capitalism.&#xA;&#xA;I thought I had a clear picture. I thought I was prepared.&#xA;&#xA;Mea culpa&#xA;Then ChatGPT arrived.&#xA;&#xA;November 2022. First approach: total amazement. I couldn&#39;t believe my eyes. I kept asking questions, and despite all the initial hallucinations I encountered, I continued to have that &#34;wow effect&#34; typical of a child finding the most beautiful shell on the seashore (forgive me Newton for stealing that phrase, but it&#39;s always too beautiful).&#xA;&#xA;And here&#39;s my mea culpa: I set aside all my protective filters that I generally have regarding privacy, open source, control over my data. I let myself go for hours of conversations on the most diverse topics. Until one night – one of many sleepless nights – I found myself discussing with that LLM about depression, various mental disorders, and how one or more abuses can influence a person&#39;s life.&#xA;&#xA;When I realized what was happening, I stopped abruptly. I deleted the conversation, canceled my OpenAI subscription and didn&#39;t touch any LLM for more than a month. I was entrusting my most intimate thoughts to a proprietary system controlled by a corporation. I was betraying every principle I believed in.&#xA;&#xA;But I work in IT. This is a huge revolution. I couldn&#39;t afford to fall behind, nor could I simply reject it on principle. I had to find an alternative. I began to study seriously.&#xA;&#xA;Local, always local&#xA;I encountered the first models I could test locally. I discovered Hugging Face, and it was like finding an oasis in the desert. I began studying transformers, the datasets developed by the community. 
And I was astounded.&#xA;&#xA;Transformers are the architecture that revolutionized AI. Presented in the 2017 paper &#34;Attention Is All You Need&#34;, they replaced old recurrent neural networks (RNNs) with a more elegant and efficient mechanism: the attention mechanism.&#xA;&#xA;In simple words: instead of processing text word by word in sequence, a transformer looks at all words simultaneously and calculates which ones are most relevant to the context. When you read &#34;The bank of the river was green,&#34; the attention mechanism understands that &#34;bank&#34; refers to the river and not the financial institution, because it evaluates the weight of each word relative to the others.&#xA;&#xA;This architecture made models like BERT, GPT, and all modern LLMs possible. It&#39;s scalable, parallelizable, and extremely powerful.&#xA;&#xA;Hugging Face and the Open Source revolution&#xA;Hugging Face is much more than a platform: it has become the Library of Alexandria of the artificial intelligence era. Founded in 2016, it now hosts over 500,000 pre-trained models, 250,000 datasets, and thousands of demo applications.&#xA;&#xA;Their transformers library has democratized access to AI. With a few lines of Python you can download and use models that would cost millions of dollars to train from scratch. Hugging Face isn&#39;t the only platform doing this – there are also Ollama, LM Studio, GPT4All – but it&#39;s certainly the most extensive and collaborative.&#xA;&#xA;Here, praise must be given to the developers: this community of people scattered around the world is doing extraordinary work. They release open source models, share knowledge, meticulously document everything. They&#39;re building a real alternative to Big Tech&#39;s monopoly on AI.&#xA;&#xA;History repeating&#xA;Watching this explosion of open models, global collaboration, shared code, I had a powerful déjà-vu. 
This is incredibly similar to the open source revolution that happened 30 years ago.&#xA;&#xA;In the &#39;90s, Linux and the free software movement challenged Microsoft&#39;s dominance and proprietary systems. Many said it was impossible, that free software would never work. Today Linux powers 96% of the world&#39;s servers, all Android smartphones, and much of the Internet infrastructure.&#xA;&#xA;Now the same thing is happening with AI. Llama, Mistral, Falcon, Mixtral – &#34;open weight/open source&#34; models that compete with (and often surpass) their proprietary counterparts. History repeats itself, and this time I know which side to be on.&#xA;&#xA;Another server in my homeLab&#xA;I resumed studying Python, a study I had left on standby years ago. I began experimenting with training local LLM models. I added old scripts to provide my writing style (yes, it seems incredible but every coder has their own style, and it says a lot about their personality). I used Llama 3 to improve my Bash coding.&#xA;&#xA;And when I was ready, I decided to make an important purchase: I bought a small server – to add to my homelab: Proxmox, pfSense, Nextcloud, WireGuard etc... – that I would transform into an OpenWebUI system.&#xA;&#xA;OpenWebUI is a self-hosted web interface for local language models. Like ChatGPT, but running entirely on local hardware, without sending a single byte to someone else&#39;s servers.&#xA;&#xA;For the nerds reading: the simplest way to install is obviously through Docker. Here&#39;s a basic example:&#xA;&#xA;docker run -d -p 3000:8080 \&#xA;  -v open-webui:/app/backend/data \&#xA;  --name open-webui \&#xA;  --restart always \&#xA;  ghcr.io/open-webui/open-webui:main&#xA;&#xA;Once installed, just connect OpenWebUI to Ollama (the runtime for local models), download your preferred models, and you&#39;re operational.&#xA;&#xA;GPU usage is fundamental: a medium-sized LLM requires a lot of RAM and computing power. 
A dedicated GPU (like an NVIDIA GTX of various types) makes an enormous difference. For those using AMD, there&#39;s ROCm. With 16GB of RAM and an 8GB GPU, you can comfortably run 7B parameter models quantized to 4-bit.&#xA;&#xA;My favorite combo? AMD, Debian, Docker, OpenWebUI, Ollama and Mistral.&#xA;&#xA;A revolution. and a choice to make&#xA;We&#39;re facing a revolution that we cannot avoid. AI is here, it&#39;s powerful, and it&#39;s evolving rapidly. There are two roads ahead of us.&#xA;&#xA;The first: avoid it now, close our eyes, hope it passes or that someone else deals with it. And then, in twenty years, find ourselves chasing an evolved AI, probably impossible to understand, completely in the hands of those who controlled it from the beginning. This is the path of least resistance, but also of maximum risk. It means ceding control, understanding, and ultimately power to whoever gets there first.&#xA;&#xA;The second: study it, analyze it, use it and understand it today to be able to handle it better tomorrow. Actively participate in its evolution. Contribute to the open source community, ensure that this technology remains accessible, understandable, in the hands of many instead of a few. This path requires effort, time, sometimes admitting we were wrong (as I did). But it&#39;s the only path that leads to actual agency over our technological future.&#xA;&#xA;The choice seems obvious when stated this way, but it&#39;s not easy in practice. It requires overcoming fear, investing time, challenging our assumptions. It means getting our hands dirty with code, running models locally, understanding how these systems actually work instead of treating them as black boxes.&#xA;&#xA;I made my choice that night when I deleted my ChatGPT conversation history. I chose not to be a passive consumer of AI technology controlled by corporations. 
I chose to understand, to build, to contribute to the alternative that&#39;s being constructed by thousands of developers around the world.&#xA;&#xA;The technology is already here. The question is: will it be controlled by a few companies optimizing for profit and control, or will it be a tool accessible to everyone, understandable, modifiable, improvable by the community?&#xA;&#xA;As I&#39;ve learned on this journey, choosing to understand – even when it&#39;s difficult, even when it means admitting you were wrong – is always better than passively submitting.&#xA;&#xA;AI is not magic. It&#39;s mathematics, code, hardware, and above all: it&#39;s made by people. And if it&#39;s made by people, it can be understood, modified and shaped by people. For the better, not for the worse.&#xA;&#xA;The revolution is happening. The only question is: are you participating, or are you watching?&#xA;&#xA;#AI #OpenSource #LocalLLM #Privacy #ChatGPT #HuggingFace #Ollama #SelfHosted #MachineLearning #DigitalSovereignty&#xA;&#xA;a href=&#34;https://remark.as/p/jolek78/chatgpt-didnt-invent-anything&#34;Discuss.../a&#xA;&#xA;div class=&#34;center&#34;&#xD;&#xA;· 🦣 a href=&#34;https://fosstodon.org/@jolek78&#34;Mastodon/a · 📸 a href=&#34;https://pixelfed.social/jolek78&#34;Pixelfed/a ·  📬 a href=&#34;mailto:jolek78@jolek78.dev&#34;Email/a ·&#xD;&#xA;· ☕ a href=&#34;https://liberapay.com/jolek78&#34;Support this work on Liberapay/a&#xD;&#xA;/div]]&gt;</description>
      <content:encoded><![CDATA[<p>When the world woke up astonished in November 2022 to this “magical” chatbot, few realized that this magic was the result of decades of research. The history of artificial intelligence begins in 1943, when Warren McCulloch and Walter Pitts proposed the first mathematical model of an artificial neuron. In 1956, at the Dartmouth Conference, John McCarthy coined the term “Artificial Intelligence” and the discipline was officially born.</p>

<p>The &#39;60s and &#39;70s were characterized by excessive optimism: people thought strong AI was just around the corner. Two “AI winters” followed – periods when funding disappeared and research slowed – because promises weren&#39;t materializing. But some continued working in the shadows. Geoffrey Hinton, Yann LeCun, Yoshua Bengio – those we now call the “godfathers of deep learning” – continued their studies on neural networks when no one believed in them anymore.</p>



<p>The real breakthrough came with three converging factors: computational power (GPUs), enormous amounts of data, and better algorithms. In 2012, AlexNet won the ImageNet Challenge by an overwhelming margin, demonstrating that deep learning really worked. From there, an unstoppable acceleration.</p>

<h3 id="once-upon-a-time-in-the-carboniferous">Once upon a time in the Carboniferous...</h3>

<p>Before ChatGPT exploded, my only knowledge of AI came from science fiction books. Philip K. Dick and his reflections on what it means to be human. Cyberpunk in general, with its technological dystopias. Gibson&#39;s Sprawl trilogy, where AIs live in cyberspace like digital deities. Those pages were my only window to a future that seemed incredibly distant.</p>

<p>When I hosted the podcast Caccia al Fotone (a nice thing, but now belonging to the Carboniferous period...), I delved deeper into the subject. I read several papers published on arXiv and dedicated two episodes to AI development. During the pandemic I devoured Melanie Mitchell&#39;s 2019 book “Artificial Intelligence: A Guide for Thinking Humans” – which also helped me write a “thing” (those who know, know; those who don&#39;t, never mind...) on the evolution of computer systems and surveillance capitalism.</p>

<p>I thought I had a clear picture. I thought I was prepared.</p>

<h3 id="mea-culpa">Mea culpa</h3>

<p>Then ChatGPT arrived.</p>

<p>November 2022. First approach: total amazement. I couldn&#39;t believe my eyes. I kept asking questions, and despite all the initial hallucinations I encountered, I continued to have that “wow effect” typical of a child finding the most beautiful shell on the seashore (forgive me Newton for stealing that phrase, but it&#39;s always too beautiful).</p>

<p>And here&#39;s my mea culpa: I set aside all my protective filters that I generally have regarding privacy, open source, control over my data. I let myself go for hours of conversations on the most diverse topics. Until one night – one of many sleepless nights – I found myself discussing with that LLM about depression, various mental disorders, and how one or more abuses can influence a person&#39;s life.</p>

<p>When I realized what was happening, I stopped abruptly. I deleted the conversation, canceled my OpenAI subscription and didn&#39;t touch any LLM for more than a month. I was entrusting my most intimate thoughts to a proprietary system controlled by a corporation. I was betraying every principle I believed in.</p>

<p>But I work in IT. This is a huge revolution. I couldn&#39;t afford to fall behind, nor could I simply reject it on principle. I had to find an alternative. I began to study seriously.</p>

<h3 id="local-always-local">Local, always local</h3>

<p>I encountered the first models I could test locally. I discovered <a href="https://huggingface.co">Hugging Face</a>, and it was like finding an oasis in the desert. I began studying transformers, the datasets developed by the community. And I was astounded.</p>

<p><strong>Transformers</strong> are the architecture that revolutionized AI. Presented in the 2017 paper <a href="https://arxiv.org/abs/1706.03762">“Attention Is All You Need”</a>, they replaced old recurrent neural networks (RNNs) with a more elegant and efficient mechanism: the attention mechanism.</p>

<p>In simple words: instead of processing text word by word in sequence, a transformer looks at all words simultaneously and calculates which ones are most relevant to the context. When you read “The bank of the river was green,” the attention mechanism understands that “bank” refers to the river and not the financial institution, because it evaluates the weight of each word relative to the others.</p>

<p>This architecture made models like BERT, GPT, and all modern LLMs possible. It&#39;s scalable, parallelizable, and extremely powerful.</p>
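The weighting described above can be sketched in a few lines of NumPy. This is an illustrative toy, not a real transformer layer: the query/key/value matrices are random stand-ins for the learned projections a trained model would use.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Toy attention: each output row is a weighted mix of V's rows."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # relevance of every token to every other token
    # softmax over each row, so the weights for one token sum to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_k = 7, 8  # e.g. the seven tokens of "The bank of the river was green"
Q = rng.normal(size=(seq_len, d_k))
K = rng.normal(size=(seq_len, d_k))
V = rng.normal(size=(seq_len, d_k))
out, w = scaled_dot_product_attention(Q, K, V)
```

Each row of `w` says how strongly one token attends to every other token; in a trained model, the row for “bank” would put most of its weight on “river”.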

<h3 id="hugging-face-and-the-open-source-revolution">Hugging Face and the Open Source revolution</h3>

<p><a href="https://huggingface.co">Hugging Face</a> is much more than a platform: it has become the Library of Alexandria of the artificial intelligence era. Founded in 2016, it now hosts over 500,000 pre-trained models, 250,000 datasets, and thousands of demo applications.</p>

<p>Their <a href="https://github.com/huggingface/transformers">transformers library</a> has democratized access to AI. With a few lines of Python you can download and use models that would cost millions of dollars to train from scratch. Hugging Face isn&#39;t the only platform doing this – there are also <a href="https://ollama.com">Ollama</a>, <a href="https://lmstudio.ai">LM Studio</a>, <a href="https://gpt4all.io">GPT4All</a> – but it&#39;s certainly the most extensive and collaborative.</p>
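To make those “few lines of Python” concrete, here is a sketch using the library&#39;s `pipeline` helper. The model name is a real, small sentiment checkpoint from the Hub; the first run downloads a few hundred megabytes, after which everything executes locally.

```python
from transformers import pipeline

# First call downloads the model from the Hugging Face Hub, then caches it locally
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
result = classifier("Running models on my own hardware feels liberating.")[0]
print(result["label"], round(result["score"], 3))
```

Swap the task string and model name to get translation, summarization, or text generation with the same three lines.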

<p>Here, praise must be given to the developers: this community of people scattered around the world is doing extraordinary work. They release open source models, share knowledge, meticulously document everything. They&#39;re building a real alternative to Big Tech&#39;s monopoly on AI.</p>

<h3 id="history-repeating">History repeating</h3>

<p>Watching this explosion of open models, global collaboration, shared code, I had a powerful déjà-vu. This is incredibly similar to the open source revolution that happened 30 years ago.</p>

<p>In the &#39;90s, Linux and the free software movement challenged Microsoft&#39;s dominance and proprietary systems. Many said it was impossible, that free software would never work. Today Linux runs on roughly 96% of the top million web servers, on every Android smartphone, and across much of the Internet&#39;s infrastructure.</p>

<p>Now the same thing is happening with AI. Llama, Mistral, Falcon, Mixtral – “open weight/open source” models that compete with (and often surpass) their proprietary counterparts. History repeats itself, and this time I know which side to be on.</p>

<h3 id="another-server-in-my-homelab">Another server in my homelab</h3>

<p>I resumed studying Python, which I had left on standby years ago. I began experimenting with training local LLMs, feeding them my old scripts so they could pick up my writing style (yes, it seems incredible, but every coder has their own style, and it says a lot about their personality). I used Llama 3 to improve my Bash coding.</p>

<p>And when I was ready, I made an important purchase: a small server to add to my homelab – alongside Proxmox, pfSense, Nextcloud, WireGuard, etc. – which I would turn into an <a href="https://openwebui.com">OpenWebUI</a> system.</p>

<p>OpenWebUI is a self-hosted web interface for local language models. Like ChatGPT, but running entirely on local hardware, without sending a single byte to someone else&#39;s servers.</p>

<p>For the nerds reading: the simplest way to install is obviously through Docker. Here&#39;s a basic example:</p>

<pre><code>docker run -d -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
</code></pre>

<p>Once installed, just connect OpenWebUI to <a href="https://ollama.com">Ollama</a> (the runtime for local models), download your preferred models, and you&#39;re operational.</p>
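Assuming Ollama is installed on the same host as the container, the wiring looks roughly like this (the port 11434 is Ollama&#39;s default; exact menu names in OpenWebUI may differ between versions):

```shell
# Pull a model into Ollama's local store
ollama pull mistral

# Sanity-check that the local Ollama API is up and lists the model
curl http://localhost:11434/api/tags

# Then, in OpenWebUI: Admin Settings -> Connections -> Ollama API,
# point it at http://host.docker.internal:11434 (or the host's LAN IP)
```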

<p>GPU usage is fundamental: a medium-sized LLM requires a lot of RAM and computing power. A dedicated GPU (such as an NVIDIA GeForce card) makes an enormous difference; for those using AMD, there&#39;s ROCm. With 16GB of RAM and an 8GB GPU, you can comfortably run 7B-parameter models quantized to 4-bit.</p>
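That “7B at 4-bit fits in 8GB” claim is easy to sanity-check with back-of-the-envelope arithmetic: a parameter stored in 4 bits takes half a byte, plus some headroom for the KV cache and activations (the 20% overhead used here is an assumption for illustration, not a measured figure).

```python
def estimated_vram_gb(n_params: float, bits_per_weight: int, overhead: float = 0.20) -> float:
    """Rough VRAM needed for a quantized model: weights plus a fudge factor."""
    weight_bytes = n_params * bits_per_weight / 8  # bits -> bytes
    return weight_bytes * (1 + overhead) / 1e9     # bytes -> gigabytes

# 7B parameters at 4-bit: weights alone are 3.5 GB, ~4.2 GB with overhead
print(round(estimated_vram_gb(7e9, 4), 1))
```

The same arithmetic explains why the unquantized model (16-bit weights, 14 GB before overhead) is out of reach for consumer GPUs.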

<p>My favorite combo? AMD, Debian, Docker, OpenWebUI, Ollama and Mistral.</p>

<h3 id="a-revolution-and-a-choice-to-make">A revolution, and a choice to make</h3>

<p>We&#39;re facing a revolution that we cannot avoid. AI is here, it&#39;s powerful, and it&#39;s evolving rapidly. There are two roads ahead of us.</p>

<p><strong>The first:</strong> avoid it now, close our eyes, hope it passes or that someone else deals with it. And then, in twenty years, find ourselves chasing an evolved AI, probably impossible to understand, completely in the hands of those who controlled it from the beginning. This is the path of least resistance, but also of maximum risk. It means ceding control, understanding, and ultimately power to whoever gets there first.</p>

<p><strong>The second:</strong> study it, analyze it, use it and understand it today to be able to handle it better tomorrow. Actively participate in its evolution. Contribute to the open source community, ensure that this technology remains accessible, understandable, in the hands of many instead of a few. This path requires effort, time, sometimes admitting we were wrong (as I did). But it&#39;s the only path that leads to actual agency over our technological future.</p>

<p>The choice seems obvious when stated this way, but it&#39;s not easy in practice. It requires overcoming fear, investing time, challenging our assumptions. It means getting our hands dirty with code, running models locally, understanding how these systems actually work instead of treating them as black boxes.</p>

<p>I made my choice that night when I deleted my ChatGPT conversation history. I chose not to be a passive consumer of AI technology controlled by corporations. I chose to understand, to build, to contribute to the alternative that&#39;s being constructed by thousands of developers around the world.</p>

<p>The technology is already here. The question is: will it be controlled by a few companies optimizing for profit and control, or will it be a tool accessible to everyone, understandable, modifiable, improvable by the community?</p>

<p>As I&#39;ve learned on this journey, choosing to understand – even when it&#39;s difficult, even when it means admitting you were wrong – is always better than passively submitting.</p>

<p>AI is not magic. It&#39;s mathematics, code, hardware, and above all: it&#39;s made by people. And if it&#39;s made by people, it can be understood, modified and shaped by people. For the better, not for the worse.</p>

<p>The revolution is happening. The only question is: are you participating, or are you watching?</p>

<p><a href="https://jolek78.writeas.com/tag:AI" class="hashtag"><span>#</span><span class="p-category">AI</span></a> <a href="https://jolek78.writeas.com/tag:OpenSource" class="hashtag"><span>#</span><span class="p-category">OpenSource</span></a> <a href="https://jolek78.writeas.com/tag:LocalLLM" class="hashtag"><span>#</span><span class="p-category">LocalLLM</span></a> <a href="https://jolek78.writeas.com/tag:Privacy" class="hashtag"><span>#</span><span class="p-category">Privacy</span></a> <a href="https://jolek78.writeas.com/tag:ChatGPT" class="hashtag"><span>#</span><span class="p-category">ChatGPT</span></a> <a href="https://jolek78.writeas.com/tag:HuggingFace" class="hashtag"><span>#</span><span class="p-category">HuggingFace</span></a> <a href="https://jolek78.writeas.com/tag:Ollama" class="hashtag"><span>#</span><span class="p-category">Ollama</span></a> <a href="https://jolek78.writeas.com/tag:SelfHosted" class="hashtag"><span>#</span><span class="p-category">SelfHosted</span></a> <a href="https://jolek78.writeas.com/tag:MachineLearning" class="hashtag"><span>#</span><span class="p-category">MachineLearning</span></a> <a href="https://jolek78.writeas.com/tag:DigitalSovereignty" class="hashtag"><span>#</span><span class="p-category">DigitalSovereignty</span></a></p>

<p><a href="https://remark.as/p/jolek78/chatgpt-didnt-invent-anything">Discuss...</a></p>

<div class="center">
· 🦣 <a href="https://fosstodon.org/@jolek78">Mastodon</a> · 📸 <a href="https://pixelfed.social/jolek78">Pixelfed</a> ·  📬 <a href="mailto:jolek78@jolek78.dev">Email</a> ·
· ☕ <a href="https://liberapay.com/jolek78">Support this work on Liberapay</a>
</div>
]]></content:encoded>
      <guid>https://jolek78.writeas.com/chatgpt-didnt-invent-anything</guid>
      <pubDate>Tue, 28 Oct 2025 12:56:35 +0000</pubDate>
    </item>
  </channel>
</rss>