limeblack 7 days ago

My favorite thing to do with TCL is to create a live distro[1] that boots off of a USB stick or CD, then eject[2] the boot device entirely while the OS is still running. I would do this in libraries/cafes all the time: no device plugged into the computer, but running a different OS.

[1] http://forum.tinycorelinux.net/index.php?topic=8924.0

[2] http://forum.tinycorelinux.net/index.php/topic,8522.0.html

  • thomastjeffery 7 days ago

    Back when I was in high school, I would boot a floppy with UNetbootin (the BIOS did not support USB boot), chain-boot a flash drive with TinyCore, then put the floppy and flash drive back in my pocket. I would end up with a working browser before anyone else was able to log in to Windows XP, and my computer wouldn't freeze, crash, etc.

Karrot_Kream 7 days ago

Judging by the comments, it seems like people haven't realized yet that the distribution is precisely this small so the entire thing can run in RAM. TinyCore wants to be super fast, so they try to keep everything in RAM.

  • woliveirajr 7 days ago

    At 11MB, it could be located inside the L3 cache on some processors [0] :)

    [0]: https://en.wikipedia.org/wiki/Haswell_%28microarchitecture%2...

    • eliben 7 days ago

      On-disk size != memory footprint, though. It may occupy 11MB as a binary but allocate way more than that at runtime.

      [nitpicking, of course :)]

    • ReverseCold 7 days ago

      Is that something that might be doable/useful? Or just a size comparison?

      • trisimix 7 days ago

        Anything's doable, but the processor probably needs that cache space for itself. It'd be hella fast though.

        • nickwanninger 7 days ago

          Do you have to do anything special to load it all into the cache or is that just something that would happen by default?

  • feelin_googley 7 days ago

    Some people perhaps are missing this. Others surely understand.

    Personally, I have been running my computers this way for many years now. I do not use the Linux kernel, but, like "tinycore", everything fits in RAM. The size range is usually about 13-17MB for x86.

    The USB stick or other boot media with the kernel+bootloader can be removed after boot. Depending on how much free RAM I have to spare, I can overlay larger userlands on top of this and chroot into them. With today's RAM sizes, I can hold an entire "default base" (BSD) in memory if desired.

    The filesystem on the media is not merely an "install" image. It has everything I need to do work, including custom applications. If I want distribution sets, I download them; they are not stored on the media.

    In normal use, I do not use any disk. Therefore I do not use a swap file.

    Recently on HN a discussion of sorting came up, and I mentioned the issue of sorting large datasets using only RAM, no disk. Obviously BSD and other UNIX-like kernels date back to a time of severely constrained memory and are designed around space-saving concepts like "virtual memory" and "swap", "shared libraries", etc.

    When one runs from RAM with no disk, working with large files means "thrashing" is a real possibility. Aside: did the architects of virtual memory contemplate a world where users do work without HDDs or other writeable permanent storage? (Start a new thread if you want to answer/debate this.)

    Sorting large files is something I do regularly so I am always open to new ideas. Currently I use k/kdb+ for large files instead of UNIX sort.

    Here's the reply I got:

    https://news.ycombinator.com/item?id=16086791

    • valarauca1 7 days ago

      The answer you are looking for to your core question is "No".

      Generally, all kernels will assume _some_ backing store. The reason for this is dealing with RAM being full.

      This is because updating the MMU is generally expensive, so it's easier to mark an address as belonging to your process and commit the pages after the fact.

      Now this can be changed in settings: you can ensure pages are backed before they are allocated, and you can run with swap off.

      But then EVERY, and let me repeat, EVERY SINGLE piece of code you run has to be able to handle malloc failing gracefully. And next to none do. This is a strange condition because how you handle it depends on your kernel configuration and how much stack you have left, which is difficult to know unless you wrote every piece of code FOR your custom kernel config.

      Ultimately, just having swap enabled and assuming a backing store is easier. The few places where you get away with 100% RAM and a read-only store are embedded (generally, not always), and in those conditions you can closely tie software to kernel configuration.
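On Linux, the settings alluded to above come down to a couple of sysctl knobs (a sketch; the `vm.overcommit_*` names are the real kernel tunables, strict-accounting mode shown, root required):

```shell
# Strict commit accounting: the kernel refuses allocations it cannot
# back, so malloc() returns NULL instead of the OOM killer firing later.
sysctl -w vm.overcommit_memory=2   # 0=heuristic, 1=always, 2=never overcommit
sysctl -w vm.overcommit_ratio=100  # commit limit = swap + 100% of RAM
swapoff -a                         # and no swap as backing store at all
```

With these in place you get exactly the condition described: every allocation either succeeds with backing or fails immediately at malloc time.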

    • breakingcups 7 days ago

      What do you use? BSD?

      • feelin_googley 7 days ago

        Yes, for me this has been the best balance between (a) the simplicity (size, 3rd party dependencies, etc.) and robustness of the system for building from source and (b) the hardware support in the source tree.

        No question Linux wins on (b) given all the corporate backing, but whenever I revisit systems like Buildroot or Yocto I feel the system I am using wins on (a). More likely than not I am just resistant to change and not wanting to invest the effort to master those systems.

        Purely subjective, but I suspect the system I devised for myself is simpler and more "stable" in the sense it is less reliant on third parties and thus less brittle. Those qualities are important to me.

        Not sure if anyone has mentioned VoidLinux yet in this thread. Probably well worth a look at least as an example if "small and simple" are design goals.

        • exikyut 6 days ago

          Which BSD flavor?

          On the same note, I take it your systems are very "I'm using the [whichever]BSD kernel and userland, but this is my own creation" as opposed to stock [whichever]BSD.

          A related tangential question:

          I've been on the fence about BSD distributions for a really long time, specifically the viability of maintaining them. The few that exist seem to end up forking the entire system, as opposed to just running off with the stock kernel and maybe doing package management differently. That puts huge pressure+responsibility on the person doing the fork.

          My question is, are things the way they are because people wanted to fork the kernel for a specific reason and their fork naturally pulled userspace along, or is there a specific reticence toward forking userspaces in the *BSD world?

        • pritambaral 6 days ago

          Buildroot and Yocto are terrible build systems, let alone build systems for distros.

          Gentoo and its Portage are IMO really good for distro building, though I would not say it is "small" or "simple". NixOS might also be a good distro builder, from the little I've read about it.

  • dozzie 7 days ago

    > [...]it seems like people haven't realized yet that the distribution is precisely this small so the entire thing can run in RAM.

    I do the same with stock Debian. It's not an achievement to simply run from RAM.

iod 7 days ago

I knew of TinyCore but I hadn't heard of dCore, which apparently has Debian/Ubuntu package support. According to http://wiki.tinycorelinux.net/dcore:welcome

"minimal live Linux system based on Micro Core (Tiny Core Linux) that uses scripts to download select packages directly from vast Debian or Ubuntu repositories and convert them into useable SCEs (self-contained extensions). "

trendia 7 days ago

I'd also look into Buildroot [0]. I'm currently using Buildroot for a project and the whole distribution (including libraries and executables) is under 40MB. Since I am using a fixed dev board (i.e. peripherals are not going to change), I used lsmod to detect which drivers are needed and only build those, which really shrinks the kernel.

Buildroot also includes a cross compiler, so that you can rebuild the entire toolchain, kernel, and libraries in one go.

[0] https://github.com/buildroot/buildroot

  • OrangeTux 7 days ago

    Buildroot is great! I'm using it to build distributions for several pieces of embedded hardware.

    But I think Buildroot targets different hardware than Tiny Core. Buildroot targets embedded systems and therefore lacks a package manager. Tiny Core seems to target bigger systems, like notebooks, servers and desktops.

  • devonkim 7 days ago

    I was under the impression that if you really want to optimize for space, you should avoid kernel module overhead entirely and build your modules in statically. Maybe it only saves a few bytes and a couple of clock cycles at load time, but it sounds like it's worth a try for you.

    • trendia 6 days ago

      Oh sorry, I wasn't clear about that. I build it first with every possible module set to 'M', and then once the system boots up, run lsmod to see which modules are loaded.

      Then I take that list of modules and set them to 'Y', and disable everything else. That way, needed modules are statically linked and unneeded ones aren't built at all.
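The kernel's own build system can automate exactly this pruning (a sketch; `localmodconfig`/`localyesconfig` are real kbuild targets, and `LSMOD=` is the documented way to feed them a saved listing from another machine):

```shell
# On the running target: capture which modules actually loaded.
lsmod > /tmp/lsmod.txt

# In the kernel source tree: prune the config to just those modules.
#   localmodconfig  keeps them as 'M' (loadable)
#   localyesconfig  flips them to 'Y' (built-in, no module overhead)
make LSMOD=/tmp/lsmod.txt localyesconfig
```

This gets you the same "only what lsmod saw" kernel without hand-editing the config.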

ComputerGuru 7 days ago

SliTaz was another great distribution: very feature-complete and highly polished for its incredibly small size. Unfortunately - and you could see this coming a mile away - like so many other projects (open source or otherwise), they decided they'd "do a rewrite", and the project really lost its way and fizzled out thereafter.

  • kup0 7 days ago

    I notice there are apparently still rolling releases coming out for it?

    I've run the latest ones and for a quick and dirty live desktop from a super small image, it still seems to be a good option.

    I tried looking around for what happened to SliTaz, is there an article/detail anywhere on that (the transition/rewrite/zombification)?

  • interfixus 7 days ago

    It was indeed. Saved my butt any number of times before it zombied out in the transition.

    When I worked in public IT, on a few occasions I booted SliTaz on troubled machines that idiot colleagues had battled with for hours or days, trying to install some Windows. In less than a minute, I had a working, web-browsing desktop running entirely from RAM. Tellingly, nobody ever so much as asked a simple "how?" - somehow reminding me of the Australian aborigines simply ignoring the arrival of a UFO in the shape of Captain Cook's ship.

mankash666 7 days ago

Point to note: many Alpine Linux base Docker images are in the 6MB ballpark.

What's Tiny Core's advantage over Alpine? Given the outsized interest in Alpine from the Docker community, it may have better packages and security updates.

  • Siecje 7 days ago

    Correct me if I'm wrong but a docker image doesn't include a kernel.

    • ffk 7 days ago

      You are correct, docker images don't generally contain kernels (and if they do, they don't load them).

      A more apt comparison would be against Alpine "standard", which is 109 MB and contains a kernel.

splitbrain 7 days ago

I remember a time when stuff like this fit on a single 3.5 floppy.

  • PeCaN 7 days ago

    Remember the QNX 4.25 demo disk that fit on a single 3.5 floppy and included a GUI, networking, a web browser (with basic javascript), clustering, and various other applications?

    What a great little OS.

    • rootbear 7 days ago

      I really wish that there was still a free version of QNX around. I enjoyed playing with it.

      • jacquesm 6 days ago

        I am sorely tempted to push my own little OS for another round, but just porting the whole thing from 32-bit to 64-bit has me depressed. It should be a lot easier than the first time around though, now that we have VMs to test with; that is a much faster turnaround than having to reboot a physical machine every time you mess up in kernel code or some critical device process.

    • vram22 7 days ago

      I had tried it out too. It was really fast, a lot faster than the Linuxes I was using then - Red Hat and Mandrake. (Not comparable, I know.) I think they had a UI / desktop environment called Photon.

      • jacquesm 6 days ago

        That 'really fast' bit you perceived was your processor time-slicing 200K+ times/second versus maybe 2000 times/second under Linux or Windows.

        QNX is crazy fast in that respect simply because it has almost no context to switch. The soft-realtime aspect of the kernel also helps tremendously in keeping things moving, everything that is interactive runs at a high priority so you'll never see frozen mousepointers or stuff like that.

        • vram22 6 days ago

          Interesting and makes sense, thanks.

    • FreakyT 7 days ago

      That's exactly what I thought of when I saw this post.

      Still astounding that they managed to fit so much into 1.44MB!

    • dguaraglia 7 days ago

      Yes! That was amazing, especially compared with the basic KDE/GNOME Linux distros at the time that would take a whopping 100MB. Now I install OS X apps that are > 150MB on a regular basis. It's ridiculous.

    • insulanian 7 days ago

      Still have it somewhere!

      • PeCaN 7 days ago

        I boot it up in virtualbox periodically and play with it. Setting up the clustering and dragging Photon windows between two VMs is really satisfying.

        I wish RIM/Blackberry hadn't bought it, closed the source¹, and killed off Photon. QNX was a really solid and fast OS.

        1. For those who don't follow QNX development closely, the source was available for a few years but was not open in the usual sense. It was still under a proprietary license. RIM bought QNX and closed it off completely.

        • jacquesm 6 days ago

          If they had, I would never have left QNX. Quantum had a great thing going; then first the sale to Harman, and after that the purchase by RIM, may have been a good business decision for RIM, but it was terrible for the mass adoption of QNX.

          Even so, there are probably still untold millions (or even tens of millions) of embedded QNX installations out there, besides the Blackberries that are still in use.

          That little OS is about as elegant as they come this side of Plan 9/Inferno.

        • jagger27 7 days ago

          Stallman was right.

  • kokey 7 days ago

    FreeBSD, where that one floppy was enough to set up a modem, dial up to establish a PPP session, and then download the rest of the installation - all while letting you open another terminal to telnet to a shell somewhere else and pass the time with e-mail and IRC while FreeBSD downloads and installs.

  • carapace 7 days ago

    Has no one mentioned Tom's Root Boot yet? :-)

    > tomsrtbt (pronounced: Tom's Root Boot) is a very small Linux distribution. It is short for "Tom's floppy which has a root filesystem and is also bootable."[1] Its author, Tom Oehser, touts it as "The most GNU/Linux on one floppy disk", containing many common Linux command-line tools useful for system recovery (Linux and other operating systems.) It also features drivers for many types of hardware, and network connectivity.

    https://en.wikipedia.org/wiki/Tomsrtbt

    Also, Oberon fits on a 1.44M floppy. http://www.projectoberon.com/

  • michrassena 7 days ago

    I remember my excitement using the QNX boot floppy. It was a full OS with a GUI, all from one 1.44MB disk. It had no PPP support, but fortunately it worked with Ethernet.

    • michrassena 7 days ago

      But back on the subject of the post, I'm happy to see this project. I went searching for a small Linux distribution a few months ago, and most of the projects I'd relied on in the past seemed to have stopped development, like DSL or Puppy.

      • sevensor 7 days ago

        Tiny Core is the result of a DSL schism, if I remember right. Robert Shingledecker had been on the DSL team before starting Tiny Core.

  • c12 7 days ago

    I sometimes like to boot up my old 386 into MS-DOS from a 3.5" floppy... I haven't done much research into it but I used to have a disk with linux on it that would boot fine.

    Can't seem to find anything today that would fit on one, but then again I haven't done much more than a cursory dig.

    • EvanAnderson 7 days ago

      The Linux Router Project (https://en.wikipedia.org/wiki/Linux_Router_Project) did that kind of thing in the late 90's and early 2000's. It's long-since dead, however. I doubt there's much of anything today, given the size of the kernel alone.

      I built a Linux floppy-based bootable disk-imaging environment to roll out masses of Windows 95/98 machines back in 1997-1999. You'd compile a minimal kernel down to 600-800 KB, pack up your userland in an itty-bitty gzipped filesystem archive, concatenate it with the kernel at an even sector boundary, dd the whole thing to a disk, and set some bits to tell the kernel where to load the initrd. I've never gone back to see how much of that functionality still exists in modern kernels. (Very little, I'd assume...)
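A rough sketch of that layout (file names and sizes here are stand-ins; the real thing used a trimmed kernel and your own userland archive):

```shell
#!/bin/sh
# Stand-in inputs so the sketch runs end to end; substitute a real
# trimmed kernel and gzipped root filesystem in practice.
head -c 700000 /dev/zero > zImage      # ~700 KB "kernel"
head -c 400000 /dev/zero > rootfs.gz   # gzipped userland archive

# Round the kernel up to a whole 512-byte sector boundary.
KSIZE=$(wc -c < zImage)
KSECT=$(( (KSIZE + 511) / 512 ))

# Concatenate: kernel first, rootfs starting at the next even sector.
dd if=zImage of=floppy.img bs=512 conv=notrunc 2>/dev/null
dd if=rootfs.gz of=floppy.img bs=512 seek="$KSECT" conv=notrunc 2>/dev/null

# Old kernels then needed the ramdisk start sector (plus the "load
# ramdisk" flag bit, 0x4000) patched into the boot image; rdev did it:
#   rdev -r floppy.img $(( 16384 + KSECT ))
# Finally: dd if=floppy.img of=/dev/fd0
```

The rdev step is from memory of the historical mechanism; modern kernels take the initrd location from the bootloader instead.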

    • pjmlp 7 days ago

      I used to have one 1.44MB with MS-DOS 3.3 trimmed to the minimum and Turbo Pascal.

  • beagle3 7 days ago

    It still does.

    http://kparc.com - OS, GUI, database; boots on 64-bit bare metal, and still fit on a 3.5in floppy last time I looked.

    • krylon 7 days ago

      These days, finding a machine with a floppy drive, let alone an actual floppy disk, might be harder than shrinking a system down enough to fit on a floppy disk.

      EDIT: Don't get me wrong, it is really cool that people can build such tiny systems. But the smallest storage device in my household is a 2GB SD card. Outside of embedded devices and low-end routers, I do not see the point other than for hack value.

      • fpoling 7 days ago

        A few months ago at work we used a 25-year-old spectrometer. It has a 5.25-inch floppy drive. Initially we considered getting a drive for it to pull the data off the spectrometer, but after a few quick searches on eBay we realized that getting one that could be connected via USB was not so trivial, cheap, or fast. Fortunately it turned out that the spectrometer can send everything over a serial port, as long as the serial link runs at a proper 12 volts. So we got our data using a true USB-to-serial dongle costing something like $120.

        • digi_owl 7 days ago

          Was the drive in the spectrometer in a bay?

          If so, you may have been able to get a drive emulator that takes modern storage devices in the front.

        • jasonjayr 7 days ago

          What serial adapter did you end up using?

          I've worked with connecting equipment that was super finicky about what was on the serial port, and after 2-3 tries we gave up and just sourced an old machine with a hardware serial port. We only have a few old machines left!

    • metalliqaz 7 days ago

      Such a simple website, and yet still crushed by the Hacker News rush.

    • hucker 7 days ago

      Are there any bootable releases anywhere?

      • beagle3 7 days ago

        You have to talk to Arthur Whitney to get one - so far AFAIK he has made it available only to people who plan to contribute.

        • exikyut 6 days ago

          What counts as contribution?

          https://www.youtube.com/watch?v=kTrOg19gzP4 seriously, seriously did my head in. I was certainly fascinated by this, but I fear that I'd contribute one character, accidentally put it in the wrong place, break the build for two days because nobody can figure out what broke, and then feel really bad for weeks afterwards.

          (Okay, okay, diffing... but still. I'd break my own build, at least.)

          I _am_ genuinely interested, and I'd LOVE to play with this, but I'm really, really conservative, and would far prefer to be a fly on the wall for a bit for a while first.

  • pokstad 7 days ago

    I remember when TCL was a scripting language.

    • rkeene2 7 days ago

      It still is; there are hundreds of contributors, and a new release (Tcl v8.6.8) came out on 22-DEC-2017. There are plans for a new major release, Tcl 9, as well.

      Tcl still has many unique features that other languages lack. One new highly experimental feature is TclQuadCode, which can compile Tcl to machine code using LLVM. It's been in progress for over 5 years and it is amazing work. Compiling a dynamic language is a difficult task.

      • eesmith 7 days ago

        I believe it was a joke on the initials for "Tiny Core Linux". Look through the other comments and you'll see people using "TCL" to refer to this project, and not Tcl.

  • exikyut 7 days ago

    Xwoaf is still floating around out there. It was almost unusably minimal, though.

  • IronWolve 7 days ago

    I once created a Windows floppy with winsock INI settings, IRC, Netscape, Eudora and an NTP client, all pre-configured for an ISP I was working at. Users just had to put in their username and password. Oh, dialup ISP days...

    The other floppy OS I tinkered around with was QNX, but it was just a demo of what QNX could do.

  • Zardoz84 7 days ago

    muLinux can boot from a single floppy without X11. With X11 it needs 2 or 3 disks.

  • robin_reala 7 days ago

    It would still fit on an LS-120 3.5″ floppy, although I suspect they’re even harder to get hold of these days.

  • thathappened 7 days ago

    OpenWrt's pretty tiny imo

    • lgierth 7 days ago

      I've regularly fit OpenWrt into 3 MiB, including an HTTP server and web UI.

  • justherefortart 7 days ago

    I remember 2 floppies. Never 1 for Linux+XWindows.

    Trinux was pretty slick back in the day.

  • anoother 7 days ago

    Yup.

    I remember Blender boasting for years about how it fit on a floppy.

sevensor 7 days ago

I recommend searching their forum if you're actually trying to use Tiny Core. It fills in a lot of gaps in their docs.

skrowl 7 days ago

I'd love to see this type of distribution on Windows Subsystem for Linux. The default Ubuntu is over 500MB!

  • limeblack 6 days ago

    I actually ran TCL in a virtual machine and SSHed into it like this, back when Node.js only ran well on Linux.

  • thomastjeffery 7 days ago

    There is some work for Archlinux and NixOS on WSL.

snowpanda 7 days ago

This is so cool, what are some great uses for this?

  • marttt 6 days ago

    I use it to produce 50-minute radio shows for my country's public broadcasting. I work with a Thinkpad T42 from circa 2004. I swapped the PATA HDD for a CompactFlash card -- prior to this, everything was already surprisingly snappy thanks to Tiny Core Linux's RAM boot; now the machine is also wonderfully quiet.

    Granted, this is pushing it, but I've been using this setup every day for almost two years. (I just like to use old hardware until it dies -- or, is this what old IPS-screened Thinkpads generally turn people into?)

    Sure, you probably should be a "computational minimalist" by nature (e.g. there is an older version of Chromium, but I suppose your main browser will be Dillo -- which, actually, is just wonderful once you get used to it). But if you are a minimalist, I'd say it's a really solid system.

    Also, it's fun for me to think that I bought this Thinkpad T42 three years ago for €20. And now I use it as my main workhorse in a field where typical setups consist of new-ish MacBooks with high-end SSDs and an up to date version of Pro Tools. (And where people occasionally still think that "duh, you can probably only edit a text file in Linux".)

    So it's an awesome, clean, fairly easy to maintain distro (e.g. in case of a typical install you have a pristine system after every reboot). And the community is very friendly and responsive.

    • exikyut 6 days ago

      I'm also using a T43, circa 2006.

      Nowadays I've found myself mostly using it for VNC to a slightly better machine (my old desktop, on permanent loan to a family member after their laptop broke; this works...). And even on my not-great 802.11g, with the CPU locked to 800MHz, typing this text over TigerVNC is literally realtime, with no perceptible lag or delay. I'm honestly amazed. But anyway...

      FWIW, launchpad.net has multiple sources providing the latest 32-bit builds of Chromium. These are built for Ubuntu, but I find they work 100% fine on Slackware. :P (After some work I even got the debugging symbols into the right place!)

      (Nothing stopping you from building the world's largest LD_PRELOAD to pull in "enough Ubuntu" that Chromium boots, but I find that 100% unnecessary at this point.)

      Some build daily(ish), some build fortnightly-to-monthly-ish. Before I set up VNC, I was in the process of figuring out an autoupdate script (CLI PHP, easily rewritten) that would find and fetch the packages off Launchpad. Let me know if you'd like a copy, I never finished it but I did do the Herculean bit of figuring out the magic API URLs, the rest is just boring scripting and downloading.

      Protip: if you open 100-170 tabs (possible! on 2GB! with The Great Suspender), the main process will hit 4G VIRT. xD

      Besides that, NetSurf is a bit better than Dillo, you're probably already aware of it.

steeve 7 days ago

TCL is really nice. boot2docker was based on it before they switched to Docker for Mac & Alpine.

  • justincormack 7 days ago

    We switched to LinuxKit https://github.com/linuxkit/linuxkit - it was very hard to maintain boot2docker and TCL and make them usable. LinuxKit is generally a little larger, as we use a bunch of Go code rather than C, and Go is a little bloated, although that will no doubt improve. You can make very small LinuxKit images if you really want to.

michrassena 7 days ago

In spite of needing something like this a few weeks ago, I wonder what the role of such reduced distributions actually is these days. Storage is ridiculously cheap: a 64MB CompactFlash card doesn't cost significantly less than a 2GB card (32x the storage).

I have a few old PCs sitting around, a Pentium 4D and a core duo or two. These are full-size PCs, old Dells and IBMs. It doesn't make much of a difference whether they are running an 11MB distribution or a full Debian with a GUI.

What I'd really like is a small-form-factor PC with full x86 support so that I can run DOS or other PC OSes, like the BSDs or BeOS. Ideally it would be the size of a Raspberry Pi and cost the same. I know such systems are available, but the cost is an issue. Emulation just isn't the same either.

  • jandrese 7 days ago

    One thing that drives me nuts is how these embedded systems always want to use goddamn BusyBox instead of full-fat Bash and userland. There's just enough stuff removed from BusyBox to break scripts and generally make life annoying. And for what, to save 20MB of space on your 16GB device?
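Two of the usual breakages, with POSIX spellings that BusyBox's ash accepts (illustrative; any given BusyBox build enables a different applet/feature set):

```shell
#!/bin/sh
# bash-ism:  if [[ $name == foo* ]]; then ...
# POSIX spelling that a minimal ash handles:
name=foobar
case "$name" in foo*) matched=yes ;; *) matched=no ;; esac
echo "$matched"

# bash-ism:  arr=(a b c); echo "${arr[1]}"
# POSIX: positional parameters instead of arrays:
set -- a b c
echo "$2"
```

Scripts written against bash-only features like these are typically what break when the target ships BusyBox.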

    • saagarjha 7 days ago

      Not every embedded device has the luxury of 16 GB of storage.

      • jandrese 7 days ago

        They're getting increasingly rare now that storage costs are so low. Why hobble something with an 8MB flash chip when a 1GB flash chip costs almost the same? Maybe you can shave some fractional pennies by not having to wire up so many address lines, but even that's marginal.

        • saagarjha 6 days ago

          > Why hobble something with an 8MB flash chip when a 1GB flash chip costs almost the same? Maybe you can shave some fractional pennies on not having to wire up so many address lines, but even that's marginal.

          If you're making a hundred million of them, the marginal gains add up. There's no point in adding stuff you don't need.

  • markbnj 7 days ago

    We use alpine as a base for production containers because it makes pulling the image a lot faster. A typical image might be 10-20MB vs. 60-100+MB if based on ubuntu.
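The pattern boils down to something like this (a minimal sketch; the base tag and binary name are illustrative, and `apk add --no-cache` keeps the package index out of the image layer):

```dockerfile
# Small runtime image: ship a prebuilt binary on an alpine base.
FROM alpine:3.7
RUN apk add --no-cache ca-certificates
COPY myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
```

The resulting image is dominated by alpine's few MB plus the binary itself, rather than a full distro userland.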

  • smhenderson 7 days ago

    > In spite of needing something like this a few weeks ago, I wonder what the role of such reduced distributions actually are these days.

    In my mind the primary value is learning. The smaller the system the more likely I can understand how all the parts interact. Once a basic understanding is achieved you can start layering in more and still keep up.

    Once you get a bit past that it's easy to create recovery media and other interesting boot tools.

    I'm sure others can think of more but that's the first thing that I think of when I think of LFS, Tiny and to a lesser extent distros like Slackware that still tend to keep things relatively simple.

  • zxexz 7 days ago

    >In spite of needing something like this a few weeks ago, I wonder what the role of such reduced distributions actually are these days.

    I think this is pretty much why. If you were to take your average hacker/maker, most of them would probably have a similar sentiment. However, most of them would also have a story about how they needed some tiny distro like TCL recently for some niche use case.

    I myself needed a tiny distro recently. I had 22 rackmount servers with no HDD, all on the same network, and I needed them all to run the same static binary. I realized that, like most Intel servers, with default settings they will try to boot over the network with PXE. Not wanting to spend a day messing around with provisioning these servers, I decided I would just put a Linux image on a USB stick, stick it in my OpenWrt router, and host it over TFTP. Serving a large image from a USB stick every time each of 22 servers started up was an unnecessary use of bandwidth and would often fail (I didn't have the best router). I also didn't need anything except a network stack and the ability to run a 64-bit static binary, so I did some research and ended up using the x86_64 version of TCL. Worked great! I'm sure tons of others have weird niche use cases for such a distro.

    Of course, as my needs evolved, I ended up just using a minimal Arch Linux image...
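On the OpenWrt side, a setup like that boils down to a few dnsmasq options (a sketch of /etc/config/dhcp; the option names are OpenWrt's stock dnsmasq integration, while the paths and boot filename are illustrative):

```
config dnsmasq
	option enable_tftp '1'
	option tftp_root '/mnt/sda1/tftp'
	option dhcp_boot 'pxelinux.0'
```

dnsmasq then serves the DHCP boot filename and the TFTP payload from the mounted USB stick, which is all PXE needs.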

  • dbuder 7 days ago

    Embedded stuff; a dollar or two is a big deal sometimes.