gbickford 11 days ago

> Relationship with CVNets

> CoreNet evolved from CVNets, to encompass a broader range of applications beyond computer vision. Its expansion facilitated the training of foundational models, including LLMs.

We can expect it to have grown from here: https://apple.github.io/ml-cvnets/index.html

It looks like a mid-level implementation of training and inference. You can see in their "default_trainer.py"[1] that the engine uses tensors from torch but implements its own training method. They implement their own LR scheduler and optimizer; the caller can optionally use Adam from torch.
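
To make "mid-level" concrete, here's a rough sketch of that pattern (my own illustration built on torch primitives, not CoreNet's actual code):

  import torch

  def train(model, loader, optimizer, scheduler, max_iters):
      # torch supplies tensors and autograd; the loop, LR schedule,
      # and bookkeeping live in the engine itself.
      model.train()
      for it, (inputs, targets) in enumerate(loader):
          if it >= max_iters:
              break
          optimizer.zero_grad()
          loss = torch.nn.functional.cross_entropy(model(inputs), targets)
          loss.backward()
          optimizer.step()
          scheduler.step()  # engine-owned, per-iteration LR update
      return model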

It's an interesting (maybe very Apple) choice to build from the ground up instead of partnering with existing frameworks to provide first class support in them.

The MLX examples seem to be inference only at this point. It does look like this might be a landing ground for more MLX specific implementations: e.g. https://github.com/apple/corenet/blob/5b50eca42bc97f6146b812...

It will be interesting to see how it tracks over the next year; especially with their recent acquisitions:

Datakalab https://news.ycombinator.com/item?id=40114350

DarwinAI https://news.ycombinator.com/item?id=39709835

1: https://github.com/apple/corenet/blob/main/corenet/engine/de...

  • error9348 11 days ago

    The interface looks very Apple as well. It looks like you create a config file for a model you already have in mind, with the hyperparameters, and it provides a simple interface. How useful is this to researchers trying to hack the model architecture?

    One example: https://github.com/apple/corenet/tree/main/projects/clip#tra...
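
    To illustrate the flavor, a hypothetical config sketch (the keys here are made up by me, not CoreNet's actual schema):

      import yaml  # pip install pyyaml

      # Everything downstream dispatches on this dict; hacking the
      # architecture itself means going beneath the config layer.
      cfg = yaml.safe_load("""
      model: {name: clip_vit_base, embed_dim: 768}
      optimizer: {name: adamw, lr: 5.0e-4}
      scheduler: {name: cosine, warmup_iterations: 2000}
      """)
      print(cfg["model"]["name"])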

    • sigmoid10 10 days ago

      Not much. But if you just want to adapt/optimize hyperparameters, this is a useful approach, so I can certainly see a possible, less technical audience. If you actually want to hack and adapt architectures, it's probably not worth it.

  • zitterbewegung 10 days ago

    What you say is true about the project, but PyTorch works on Macs, and TensorFlow was ported to Macs by Apple.

    • _aavaa_ 10 days ago

      They were originally available only as binaries, have they released the code changes required or upstreamed them yet?

      • zitterbewegung 8 days ago

        TensorFlow was always on GitHub, and the PyTorch changes were in their source tree in their prerelease branch and then mainlined.

  • blackeyeblitzar 11 days ago

    > It looks like a mid-level implementation of training and inference

    I’m not familiar with how any of this works, but what does state-of-the-art training look like? Almost no models release their training source code, datasets, preprocessing, or evaluation code. So is it known what the high-level implementation even is?

  • big-chungus4 8 days ago

    They don't implement their own stuff; their optimizers just inherit from PyTorch's optimizers.
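
    i.e., the pattern is mostly thin subclasses, something like this (my own minimal illustration, not their actual code):

      import torch

      class AdamW(torch.optim.AdamW):
          # Thin wrapper: all the optimization logic is inherited from
          # torch; a subclass like this mostly exists to hook into the
          # framework's config/registry machinery.
          def __init__(self, params, lr=1e-3, **kwargs):
              super().__init__(params, lr=lr, **kwargs)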

  • davedx 10 days ago

    > It's an interesting (maybe very Apple) choice to build from the ground up instead of partnering with existing frameworks to provide first class support in them.

    It smells of a somewhat panicked attempt to prepare for WWDC to me. Apple has really dropped the ball on AI and now they're trying to catch up.

    • audunw 10 days ago

      I don’t get the idea that Apple dropped the ball on AI. They were fairly early with adding neural engine hardware to their chips and have been using ML extensively on-device for a long time now.

      They haven’t put an LLM assistant out there. But they don’t make their own search engine either so I don’t think “online LLM assistant” is something they’ll ever put much effort into unless it’s part of a bigger effort to launch their own AI-based search engine as well.

      As for generative AI I don’t think the quality is up to a level that would be reasonable for Apple.

      The only area where I would expect Apple to keep up is the kind of Copilot integration Microsoft is working on. And we know Apple is working on an on-device AI assistant, and probably has been for a long time. It’ll be launched when they can get good-quality results on-device. Something nobody else has achieved anyway, so we can’t say that they’re behind anyone yet.

      • talldayo 10 days ago

        > They were fairly early with adding neural engine hardware to their chips

        If that's all it takes to stay ahead of the curve, then Rockchip and Qualcomm are arguably right up there alongside them. Tons of vendors shipped their own AI silicon, and of those vendors, it seems like Nvidia is the only one that shipped anything truly usable. Medium-sized LLMs, Stable Diffusion, and probably even stuff like OAI Whisper run faster on Apple's GPUs than on their AI coprocessor.

        • wtallis 10 days ago

          > and of those vendors, it seems like Nvidia is the only one that shipped anything truly usable. Medium-sized LLMs, Stable Diffusion, and probably even stuff like OAI Whisper run faster on Apple's GPUs than on their AI coprocessor.

          Be careful not to have NVIDIA-shaped tunnel vision. Performance isn't the whole story. It's very telling that approximately everybody making SoCs for battery powered devices (phones, tablets, laptops) has implemented an AI coprocessor that's separate from the GPU. NVIDIA may take exception, but the industry consensus is that GPUs aren't always the right solution to every AI/ML-related problem.

          • talldayo 10 days ago

            Ideally, you're right. Realistically, Apple has to choose between using their powerful silicon (the GPU) for high-quality results or their weaker silicon (the Neural Engine) for lower-power inference. Devices that are designed around a single power profile (e.g. desktop GPUs) can integrate the AI logic into the GPU and have both high-quality and high-speed inference. iPhones gotta choose one or the other.

            There's not nothing you can run on that Neural Engine, but it's absolutely being misunderstood relative to the AI applications people are excited about today. Again: if chucking a few TOPS of optimized AI compute onto a mobile chipset were all we needed, then everyone would be running float16 Llama on their smartphone already. Very clearly, something must change.

      • chrsw 10 days ago

        >I don’t get the idea that Apple dropped the ball on AI.

        That's the public perception. Maybe due to them not getting in on a quick cash grab off the LLM hype wave?

        • fauigerzigerk 10 days ago

          I share this perception for two reasons:

          1) Siri

          2) Dearth of published AI research

          • chrsw 10 days ago

            I agree with 1. For 2, have they ever been a company big into research? They're very consumer focused and it can take time to integrate new tech into consumer products at scale. Especially the way Apple likes to do it: polished and seamlessly integrated into the rest of their ecosystem.

            • fauigerzigerk 10 days ago

              I would say not doing AI research (or buying another big company that does) is tantamount to dropping the ball on AI, if it turns out that AI is a capability they should have had and must have to succeed.

              You could argue that publishing research is not the same thing as doing it. But they don't seem to have done much of it until fairly recently.

              I agree that Apple does less research than other big tech companies. But they do it where they think it matters. Their M-series CPUs are more than just integration and polishing. And they have been doing some research in health AI as well, I think.

          • jldugger 10 days ago

            > Dearth of published AI research

            https://machinelearning.apple.com/research seems to have too many publications to be considered a "dearth" IMO.

            • fauigerzigerk 10 days ago

              Dearth relative to Apple's size and relative to the amount of research that competitors have been doing.

              But I think part of the problem is that Apple simply hasn't focused on the tasks and the methods and the people that have now turned out to be so impactful.

              They have clearly been course correcting for a while now as some of the more recent papers show, and they have done successful research in areas such as health AI.

    • throw0101c 10 days ago

      > Apple has really dropped the ball on AI and now they're trying to catch up.

      Apple put a neural engine on-die in the A11 back in 2017:

      * https://en.wikipedia.org/wiki/Apple_A11#Neural_Engine

      The A-derived M-series chips had them from the beginning in 2020:

      * https://en.wikipedia.org/wiki/Apple_M1#Other_features

      Seems like they've been doing machine learning for a while now.

      • jdewerd 10 days ago

        They've been using them, too. Auto OCR so selecting text in images Just Works, image enhancements, Siri. I'm sure LLM Siri is in the works. Scanning your photos for CSAM. Let's hope that last one is more reliable than Siri :/

        • thealistra 10 days ago

          Wasn’t CSAM ultimately rolled back? And wasn’t it hash-based rather than AI-based?

          • Kerbonut 9 days ago

            They added it back in as an optional thing for child accounts managed in a family in Messages. It sure sounds like it's AI-based and not hash-based in this case.

    • pizza 10 days ago

      Wouldn’t WWDC-related endeavors be more product-facing? I’m not so sure this has to do with their efforts to incorporate AI into products, and tbh I would say their AI research has been pretty strong generally speaking.

      • davedx 10 days ago

        I expect that a lot of WWDC will be Apple trying to get more developers to build AI products for their platforms, because at the moment, Apple products don't have much AI. The other tech companies have integrated user-facing LLM products into a significant part of their ecosystems - Google and Microsoft have them front and center in search. Apple's AI offerings for end users are what, exactly? The camera/Photos app that does minor tweaks to photos (composing from multiple frames). What else actually is there in the first-party ecosystem that significantly leverages AI? Siri is still the same trash it's been for the last 10 years - in fact IMO it's become even less useful, often refusing to even do web searches for me. (I WANT Siri to work very well.)

        So because their first party AI products are so non-existent, I think WWDC is a desperate attempt by Apple to get third party developers to build compelling AI products. I say desperate because they're already a year behind the competition in this space.

        (I can imagine they'll be trying to get developers to build Vision Pro software too, though I hear sales there have collapsed so again, way too little, too late)

        • tzakrajs 10 days ago

          They have tons of computer vision, NN inference and natural language processing in their products. It's reductive to say Apple products don't have much AI.

        • matthewmacleod 10 days ago

          For one thing, I can search for any text I’ve ever taken a photo of. Finding a picture of someone I took 20+ years ago by searching for a single word I remember on their t-shirt is pretty cool, and it's all done on-device.

          I think it’s important to remember that there are a bunch of actual useful AI-driven features out there that aren’t just GenAI chatbots.

        • niek_pas 10 days ago

          I'm not sure what you mean by "AI products", and why you think Apple needs them for their platforms.

        • wokwokwok 10 days ago

          Can you be more specific?

          What AI products are present in other ecosystems (eg. Android, Samsung, whatever) and missing from Apple?

          Honest question: I find the platform distinction largely meaningless in most cases apart from “what your phone looks like” and “can you side load apps”…

        • lynx23 10 days ago

          I am guessing you are not familiar with the AI-powered vision features that have already been shipping for a few years. Mostly accessibility-related, so I am not surprised you missed them.

          • devinprater 10 days ago

            Yep. Google, the AI company, only recently launched image descriptions in TalkBack, which VoiceOver has had for years now. Google still doesn't have Screen Recognition, which basically does OCR and image/UI classification to make inaccessible apps more accessible.

            • lynx23 9 days ago

              Don't even get me started on TalkBack and Android. It was never on par with VoiceOver, and is still a few years behind... However, VoiceOver is also getting slowly but surely worse over time when it comes to small, subtle bugs...

ipsum2 11 days ago

It's interesting that Apple also actively develops https://github.com/apple/axlearn, which is a library on top of Jax. Seems like half the ML teams at Apple use PyTorch, and the other half uses Jax. Maybe it's split between Google Cloud and AWS?

  • josephg 10 days ago

    In my experience, this is pretty normal in large companies like Apple. Coordination costs are real. Unless there's a good reason to standardize on a single tool, it's usually easier for teams to just pick whichever tool makes the most sense based on the problem they're solving and what the team has experience with.

    • tomComb 10 days ago

      Big companies like Apple, yes, but not Apple.

  • te_chris 10 days ago

    I don’t know, as I haven’t worked there, but I have always heard Apple described more as a series of companies/startups than one coherent entity like Meta or whatever. Each is allowed a large degree of autonomy, from what I’ve heard.

    • flawn 10 days ago

      aka Google some years ago (don't know about now...)

coder543 11 days ago

They also mention in the README:

> CatLIP: CLIP-level Visual Recognition Accuracy with 2.7x Faster Pre-training on Web-scale Image-Text Data

This is the first I’m hearing of that, and the link seems broken.

mxwsn 11 days ago

Built on top of PyTorch.

  • jauntywundrkind 11 days ago

    [flagged]

    • MBCook 11 days ago

      Metal was released before Vulkan, and had a different design philosophy. So that’s not a good argument.

      They follow the licenses.

      Apple has paid developers to work on WebKit, Clang/LLVM, and CUPS. That’s off the top of my head. All open source. All available to you. I remember reading how they contributed fixes to a huge number of packages in the OSS ecosystem when working to get POSIX certification.

      They released Swift as open source when they didn’t need to, and keep opening more libraries for it as they work to reimplement Foundation.

      You may not like Apple, but they’re not a leech. They give back.

      • Jtsummers 11 days ago

        Apple also contributed substantially to OpenCL (I mean, they were the originators of it), though CUDA (the closed, proprietary solution) ate that world.

      • threeseed 11 days ago

        They also had paid developers working on Apache Spark.

        As well as Apache Cassandra before they switched to FoundationDB which is still open-source and maintained.

    • throwaway5959 11 days ago

      Maybe it’s because you need to take a breath while typing.

    • andrewmcwatters 11 days ago

      Apple does this with some other technologies as well. CoreData "is" SQLite.

      • randomdata 11 days ago

        CoreData is an object graph and persistence framework. It is more like an ORM toolkit than like a database engine. Indeed, SQLite is one of the possible 'backends', but it is not limited to SQLite. It also supports XML, binary, and in-memory stores out of the box, and you can also create your own if none of those suit.

      • threeseed 11 days ago

        CoreData is a derivative of WebObjects' Enterprise Objects Framework (EOF).

        They just tied it to SQLite whereas in the past it was a general ORM.

    • BuckYeah 11 days ago

      As someone who owns apple stock. I’m good with it

      • pquki4 11 days ago

        The first sentence has almost no meaning. Basically anyone who has a 401k or S&P 500/VOO/etc. owns a significant amount of Apple stock. Not something worth pointing out.

        • jeffhuys 11 days ago

          There exists a world outside the USA bud.

    • loaderchips 11 days ago

      You have articulated what I have been feeling towards Apple really well. I like their products, but their philosophy and approach are not up to par.

leodriesch 11 days ago

How does this compare to MLX? As far as I understand MLX is equivalent to PyTorch but optimized for Apple Silicon.

Is this meant for training MLX models in a distributed manner? Or what is its purpose?

  • reader9274 10 days ago

    As mentioned in the "mlx_examples/open_elm": "MLX is an Apple deep learning framework similar in spirit to PyTorch, which is optimized for Apple Silicon based hardware."

  • dagmx 11 days ago

    Just skimming the README, it looks like it's a layer above MLX. So it looks like a framework around it to ease ML development.

    • ipsum2 11 days ago

      It's a layer on top of PyTorch, and it has code to translate PyTorch models into MLX.
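
      The translation part is essentially weight conversion; a minimal sketch of the idea (mine, assuming the mlx package is installed, not CoreNet's actual converter):

        import mlx.core as mx
        import torch

        def torch_to_mlx(state_dict):
            # Convert each torch tensor to an MLX array via numpy;
            # a full converter would also remap names and layouts.
            return {k: mx.array(v.detach().cpu().numpy())
                    for k, v in state_dict.items()}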

      • Mandelmus 10 days ago

        So, is CoreNet the equivalent of Keras, whereas MLX is the Jax/PyTorch equivalent?

        • hmottestad 10 days ago

          Sounds reasonable. Apple writes the following about MLX: "The design of MLX is inspired by frameworks like NumPy, PyTorch, Jax, and ArrayFire."

        • ipsum2 10 days ago

          Not quite. The closest equivalent would be something like fairseq. It's config-driven (YAML).

miki123211 11 days ago

What's the advantage of using this over something like Huggingface Transformers, possibly with the MPS backend?

  • pshc 11 days ago

    "MLX examples demonstrate how to run CoreNet models efficiently on Apple Silicon. Please find further information in the README.md file within the corresponding example directory."

    > mlx_example/clip: ... an example to convert CoreNet's CLIP model implementation to MLX's CLIP example with some customized modification.

      - FP16 Base variant: 60% speedup over PyTorch
      - FP16 Huge variant: 12% speedup
    
    > mlx_example/open_elm: ... an MLX port of OpenELM model trained with CoreNet. MLX is an Apple deep learning framework similar in spirit to PyTorch, which is optimized for Apple Silicon based hardware.

    Seems like an advantage is extra speedups thanks to specialization for Apple Silicon. This might be the most power-efficient DNN training framework (for small models) out there. But we won't really know until someone benchmarks it.

  • upbeat_general 10 days ago

    The implementation seems to be pretty clean and modular here, where transformers (and diffusers) isn't, unless you take their modules standalone.

    This repo has a lot of handy utilities but also a bunch of clean implementations of common models, metrics, etc.

    In other words, this is more for writing new models rather than inference.

  • jaimex2 11 days ago

    Nothing, it's basically PyTorch with an Apple logo.

jn2clark 10 days ago

I would love an LLM agent that could reliably generate small API examples from a repo like this for the various models and ways to use them.

buildbot 11 days ago

Does this support training on Apple silicon? It’s not very clear unless I missed something in the README.

  • blackeyeblitzar 11 days ago

    Would such a capability (training) be useful for anything other than small scale experimentation? Apple doesn’t make server products anymore and even when they did, they were overpriced. Unless they have private Apple silicon based servers for their own training needs?

    • donavanm 11 days ago

      > Unless they have private Apple silicon based servers for their own training needs?

      I'd be SHOCKED if so. It's been 15 years, but I was there when Xserve died. Priorities were iPhone > other mobile devices >>> laptops > displays & desktops >>> literally anything else. When Xserve died we still needed OS X for OD & similar. Teams moved on to 3P rack-mount trays of Mac minis as a stopgap. Any internal support/preference for server-style hardware was a lolwut response. Externally I see no reason to suspect that's changed.

    • MBCook 11 days ago

      There are an insane number of Apple Silicon devices out there.

      If your product runs on an iPhone or iPad, I’m sure this is great.

      If you only ever want to run on 4090s or other server stuff, yeah this probably isn’t that interesting.

      Maybe it’s a good design for the tools or something, I have no experience to know. Maybe someone else can build off it.

      But it makes sense Apple is releasing tools to make stuff that works better on Apple platforms.

      • blackeyeblitzar 11 days ago

        I can understand the inference part being useful and practical for Apple devs. I’m just wondering about the training part, for which Apple silicon devices don’t seem very useful.

        • spmurrayzzz 10 days ago

          My M2 Max significantly outperforms my 3090 Ti for training a Mistral-7B LoRA. It's sort of a case-by-case situation though, as it depends on how optimized the CUDA kernels happen to be for whatever workload you're doing (i.e. for inference, there's a big delta between standard transformers and exllamav2; Apple silicon may outperform the former, but certainly not the latter).

        • rgbrgb 11 days ago

          I’ve seen several people fine-tune Mistral 7B on MacBooks.

  • zmk5 11 days ago

    I believe the MLX examples allow for it. Seems like a general-purpose framework rather than a Mac-specific one.

    • gbickford 11 days ago

      I couldn't find any training code in the MLX examples.

big-chungus4 8 days ago

I went through their folders; they have a lot of classes that just inherit from PyTorch and torchvision classes and seemingly do nothing new. All optimizers, schedulers, and most layers do that. They do, however, have a reasonable number of blocks, i.e. specific combinations of layers from various papers, similar to monai.networks.blocks. As for "building pieces", they also have a few newly implemented losses and metrics.
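
A typical "block" in that sense is just a fixed composition of stock torch layers, e.g. (a generic sketch of the pattern, not a copy of their code):

  import torch.nn as nn

  class ConvNormAct(nn.Sequential):
      # A paper-specific combination of standard layers, packaged for reuse.
      def __init__(self, in_ch, out_ch, stride=1):
          super().__init__(
              nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
              nn.BatchNorm2d(out_ch),
              nn.SiLU(),
          )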

RivieraKid 10 days ago

What library would you recommend for neural net training and inference on Apple M1? I want to use it from C++ or maybe Rust. The neural net will have 5M params at most.

  • the_king 10 days ago

    I would use PyTorch as your starting point. Its Metal backend is pretty quick on Apple Silicon, and it's the most widely used library for everyone from hackers to foundation-model builders.
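
    Selecting the Metal backend is one line in Python (and LibTorch should expose the same MPS device from C++, though I'd double-check the Rust bindings):

      import torch

      # Fall back to CPU when Metal (MPS) isn't available.
      device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
      model = torch.nn.Linear(256, 10).to(device)  # stand-in for your ~5M-param net
      x = torch.randn(32, 256, device=device)
      print(model(x).shape)  # executes on the GPU via Metal when available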

benob 10 days ago

> OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework https://arxiv.org/abs/2404.14619

Apple is pushing for open information on LLM training? World is changing...

  • tzakrajs 10 days ago

    We are all starting to better understand the ethos of their engineering teams more generally.

andreygrehov 11 days ago

What hardware would one need for CoreNet to train efficiently?

orena 10 days ago

The style is not very different from NeMo (NVIDIA), fairseq (Facebook), ESPnet (OSS), etc.

m3kw9 10 days ago

OK, why would anyone use this when you have industry-standard methods already?

gnabgib 11 days ago

h1: CoreNet: A library for training deep neural networks

symlinkk 11 days ago

Pretty funny that Apple engineers use Homebrew too.

  • guywithabike 11 days ago

    Why is it funny? Homebrew is the de facto standard terminal packaging tool for macOS.

    • AceJohnny2 11 days ago

      <cries in MacPorts>

      • TMWNN 11 days ago

        I also use MacPorts, but certainly have often noticed that Homebrew has some package that MacPorts doesn't.

        I guess there's nothing stopping me from moving to Homebrew other than familiarity.

        • detourdog 10 days ago

          I haven't looked at Homebrew since it got started. The philosophical difference at that time was using MacPorts to have a consistent, managed */local/ collection of tools with self-contained dependencies vs. adding new tools with dependencies tied to the current macOS release.

          I still use MacPorts for that reason, and it is easy enough to create a local Portfile for whatever isn't in MacPorts.

          I find this to be the easy way to manage networked development computers.

        • fastball 11 days ago

          I used MacPorts a decade ago, but at some point realized that Homebrew had more packages that were kept consistently up-to-date. Switched and never looked back.

          • nicolas_t 11 days ago

            I switched back to MacPorts when Homebrew decided to get rid of formula options. To be honest, I always found Homebrew frustrating; it feels like they've often made technical decisions that are not necessarily the best, but they've been much more successful at marketing themselves than MacPorts.

            • pnw_throwaway 10 days ago

              If I’m reading the formula docs right, only homebrew-core packages don’t support it (due to CI not testing them). That part does suck, though.

              Other taps, like homebrew-ffmpeg, offer a ton of options.

              • nicolas_t 10 days ago

                Oh, I actually hadn't realized that this is what they settled on in the end. ffmpeg is the quintessential package where options make sense, so it's good that that's still supported.

                The other issue I experienced with Homebrew around that time was related to having different versions of OpenSSL installed, because I had some old codebase I had to run (and for performance reasons didn't want to use Docker). But that's definitely an edge case.

    • photonbeam 11 days ago

      I hear a lot about people moving to nix-darwin. Is it popular, or am I showing my own bubble?

      • armadsen 11 days ago

        I’m a full-time Mac and iOS developer, have been for almost 20 years, and this is the first I’ve heard of it. Might just be my bubble, but I don’t think it’s a huge thing yet. (I’m going to check it out now!)

      • jallmann 11 days ago

        I use nixpkgs on macOS; is nix-darwin a different project?

        I love Nix but it probably has too many rough edges for the typical homebrew user.

        • tymscar 11 days ago

          It's a different, complementary thing. It lets you define your macOS settings the same way you would on NixOS.

          • pxc 9 days ago

            It's worth noting for Homebrew users that it also has a nice built-in module for managing a Homebrew installation by generating a Brewfile for you. So you can transition at your own pace, if you like.

      • firecall 11 days ago

        I've never heard of it until now, but will check it out! :-)

      • pyinstallwoes 10 days ago

        I had never even heard of nix-darwin. Interesting.

    • sevagh 11 days ago

      Apple should do what they did with this library: re-release Homebrew with their own name on the README, and people would lap it up.

    • ramesh31 11 days ago

      >Why is it funny? Homebrew is the de facto standard terminal packaging tool for macOS.

      It's funny because a multi-trillion dollar company can't be bothered to release a native package manager or an official binary repository for their OS after decades of pleading from developers.

      • randomdata 11 days ago

        They released "App Store" for the average Joe. We can all agree it is not suitable for power users, but at the same time what would power users gain over existing solutions if they were to introduce something?

        • katbyte 11 days ago

          You can brew install mas (I think) and then install/manage Mac App Store stuff via the CLI pretty easily.

      • astrange 11 days ago

        They did, they sponsored MacPorts. (And then Swift Package Manager.)

      • Tagbert 11 days ago

        So you want them to Sherlock Homebrew?

        • TillE 11 days ago

          "Sherlocking" can be unfortunate for a developer, but it's odd to view it as an inherently bad thing. A package manager is a core OS feature, even Microsoft has WinGet now.

          • Someone 10 days ago

            > A package manager is a core OS feature

            It has become a core OS feature. Historically, you see the set of core OS features expand tremendously. Back in the 80s, drawing lines and circles wasn’t even a core OS feature (not on many home computers, and certainly not on early PCs), bit-mapped fonts were third-party add-ons for a while, vector-based fonts were an Adobe add-on (https://en.wikipedia.org/wiki/Adobe_Type_Manager), printer drivers were third party, etc.

            I think that’s natural. As lower layers become commodities (try making money selling an OS that only manages memory and processes), OS sellers have to add higher layer stuff to their products to make money on them.

            As to Sherlocking, big companies cannot do well there in the eyes of “the angry internet”:

            - don’t release feature F: “They don’t even support F out of the box. On the competitor’s product, you get that for free”

            - release a minimal implementation: “They have F, but it doesn’t do F1, F2, or F3”

            - release a fairly full implementation: “Sherlocking!” and/or nitpicking about their engineering choices.

          • fragmede 11 days ago

            Is it odd to feel empathy when someone has their livelihood taken from them?

      • etse 11 days ago

        Well, without charging for it, right?

        • 2muchcoffeeman 11 days ago

          They should do it to become the de facto platform for programming.

  • ClassyJacket 11 days ago

    [flagged]

    • vsnf 11 days ago

      As a thought exercise, how would one even begin to pollute curl with ads? Would it print out suggested websites after every GET request?

      • ronsor 11 days ago

        Probably. That's basically how npm ads work.

      • TaylorAlexander 11 days ago

        Oh, if we’re thinking of terrible ideas, it could save ads as a JPG into the folder where you saved whatever you were grabbing with curl.

        • dylan604 11 days ago

          Or even more terrible: open the image full screen with no way to close it until it feels it has been open long enough to close on its own. Maybe show some sort of timer counting down, and then, before dismissing itself, it opens the App Store listing for the app. It'll be very convenient for the user, as no user interaction will be required for any of this.

          • TaylorAlexander 10 days ago

            Subscribe now to CURL PLUS for 30% fewer unskippable ads!

      • blackoil 11 days ago

        Before you see the output, please read about this brilliant product. Type the name of the product to continue.

      • airstrike 11 days ago

        Print a coupon to the terminal that expires in 15 mins. Buy more to save more

      • fiddlerwoaroof 11 days ago

        Detect iterm or kitty or other image-capable terminals and display an image

      • pjmlp 10 days ago

        That is definitely an idea; see npm packages.

      • fragmede 11 days ago

        Sell advertising space on the progress bar and charge for faster download speeds.

      • arzig 11 days ago

        I mean, there are a number of TTY graphics protocols. I’m sure with enough dedication someone could figure something out.

      • epistasis 11 days ago

        Careful what you suggest! VCs are starting to back open source companies these days!

irakeshpurohit 5 days ago

Anyone have this hosted so people can try it out?

javcasas 10 days ago

Looks at Apple: CoreNet

Looks at Microsoft: .NET Core

My inner trademark troll demands a bucket of popcorn.

  • steve1977 9 days ago

    To be fair, Apple has a long tradition of naming frameworks Core Something, e.g. Core Foundation, Core Graphics etc.

    I think these had their initial releases even before .NET Framework 1.0 (and so even longer before .NET Core), so Apple could probably claim "prior art" or whatever this would be called (IANAL).

  • pixl97 10 days ago

    Heh, when I saw this post, this was the first thing I thought of.