Jtsummers 6 years ago

I'm not sure I expected it to take the world "by storm", but I expected more from Google Wave. It was a great concept, by a major company, but it was too slow on release and the rollout killed it. In retrospect, it probably wouldn't have lasted anyways. Google was in the process of minimizing their federated communication services by that point, and that was another major selling point of Google Wave's initial proposal.

But the rollout, that was just the worst way to ever get a product into the hands of users. If you got in, they gave you some number of invites. You'd set them up to be sent out to your friends or coworkers or whatever. Turned out, it just put them in a queue to eventually get an invite. Wave was fundamentally a collaboration platform; without anyone to collaborate with, it had zero value. Fantastic way to fail.

  • btown 6 years ago

    Wave's rollout was botched, its performance requirements were too ahead-of-their-time for 2009 hardware, and its UX (which didn't seem to "get" that sometimes you need to isolate your current work context from the global feed of activity in order to focus) was perplexing. But its legacy of helping to popularize Operational Transforms (and by extension, CRDTs) beyond academic circles did indeed live on - for instance, OT was adopted in Google Docs [0], and CRDTs were used in yesterday's release of Atom's real-time collaborative editing [1] (a toy sketch of the CRDT idea follows the links below). So in many ways, it accomplished exactly what it set out to do.

    [0] https://drive.googleblog.com/2010/09/whats-different-about-n...

    [1] http://blog.atom.io/2017/11/15/code-together-in-real-time-wi...
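
    Since CRDTs come up a lot here, a toy illustration of the idea: a grow-only counter is about the simplest CRDT there is. Real collaborative-text CRDTs (like the ones behind Atom's Teletype) are far more involved; this minimal Python sketch only shows the merge-without-coordination property that makes them attractive.

        # Toy CRDT: a grow-only counter (G-Counter). Each replica increments
        # only its own slot; merging is an element-wise max, so replicas can
        # sync in any order and still converge on the same value.
        class GCounter:
            def __init__(self, replica_id):
                self.replica_id = replica_id
                self.counts = {}  # replica_id -> increments made by that replica

            def increment(self, n=1):
                self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

            def merge(self, other):
                # Commutative, associative, idempotent: merge order doesn't matter.
                for rid, c in other.counts.items():
                    self.counts[rid] = max(self.counts.get(rid, 0), c)

            def value(self):
                return sum(self.counts.values())

        a, b = GCounter("alice"), GCounter("bob")
        a.increment(3); b.increment(2)
        a.merge(b); b.merge(a)
        assert a.value() == b.value() == 5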

    • dkersten 6 years ago

      > its performance requirements were too ahead-of-their-time for 2009 hardware

      For 2009 hardware running javascript in a browser, you mean? I was running much fancier native stuff much faster (including 3d games) on low-to-mid tier 2009 hardware.

      > So in many ways, it accomplished exactly what to set out to do.

      I think it definitely accomplished its goals technically, but it certainly failed on adoption. I personally never got into it. I tried it quite early and didn't like it. It was clunky and confusing and didn't seem very useful to me at the time. I wonder, if it were re-created with a slicker UX (and good performance), whether I'd like it...

      • mavhc 6 years ago

        Seemed like the main place people would use Wave would be massive company email chains, and they killed it 3 months after adding it to GApps

      • grkvlt 6 years ago

        But, 3D in a browser is much easier since the GPU takes care of it. Low-latency audio is much harder.

  • deepsun 6 years ago

    Internally at Google, it was really, really hard to get onto the Wave team. So many people saw it as an opportunity to show what they could do and get a career boost. So, presumably, they had the best Google engineers working on that product.

    What I want to say is an obvious thing -- a stellar team doesn't imply success.

    • swyx 6 years ago

      thanks for the anecdote! the politics must have been savage. also means the team probably cared more about being on a cool project than the actual problem area itself.

    • ewjordan 6 years ago

      The alternative facts: maybe the great team they had did maximize their chance of success; it was just one of those 90% of "startups" that don't hit the market.

      • smt88 6 years ago

        That's a good excuse for a product that fails to profit. But a team definitely isn't "great" if it can't release a coherent, useful prototype to the market.

      • Schwolop 6 years ago

        FWIW, the core Wave team are back together again, working on language translation. See lexy.io

  • mwidell 6 years ago

    I agree. Looking back though, I think that platforms such as Slack have incorporated many of the key things that made Google Wave a great concept.

    • Jtsummers 6 years ago

      True. Chat bots supply some of the functionality. In Wave you could have bots which would replace, alter, or add content based on things you typed in or some other rules. Like you could have a stock ticker bot, or one that inserted maps, or ones to do live translations.

      The collaborative document editing in Google Docs improved afterwards. The Google Wave Wikipedia entry doesn't mention this, but I have some vague recollection of some of their work being migrated into Google Docs to improve it in that regard. That may be completely wrong, though, since I can't seem to verify it.

    • Cthulhu_ 6 years ago

      Indeed, and the technologies developed there made their way into Slack, Google Docs, GMail / Inbox, and a bunch of other products.

  • lowlevel 6 years ago

    My recollection is no one could figure out what it was or how it worked... and yes, no one to figure it out with.

  • ajdlinux 6 years ago

    My primary recollection of Wave was that it ran really slowly on the browsers and computers I had at the time, which also wouldn't have helped.

    • msl09 6 years ago

      Yeah, browsers weren't very fast at the time and we still used Flash.

  • throwaway613834 6 years ago

    I liked Google Buzz! and was sad to see it go away.

  • wj 6 years ago

    They used invites for the Gmail rollout as well but maybe those immediately created accounts. I seem to remember invites being sold on eBay.

    • c22 6 years ago

      The difference is once you got a Gmail account you could immediately start sending and receiving email from it rather than twiddling your thumbs till someone you know got invited.

      • Al-Khwarizmi 6 years ago

        This, and also that GMail solved a need you already knew you had, and could be explained in a few words: practically unlimited inbox space, no need to delete emails anymore! Everyone wanted that, but most people didn't know exactly why they should want Wave.

    • Jtsummers 6 years ago

      Gmail invites, by 2004 or 2005(?) when I got in, were instant. If I sent the invite, the friend had a chance to sign up right then.

      But email was already an existing protocol. So invites meant nothing. I could still communicate with my family even though they were still on their ISP account. Or my sister with her university account.

  • herbst 6 years ago

    I was so hyper. And then it did not even work on my computer. Don't remember why, but when it finally did, it was already dead.

  • swyx 6 years ago

    i mean... there was another product that Google rolled out that was on an invitation basis. Last I checked Gmail is doing pretty well...

    facetious comparison aside, i think the (Yegge?) rant about how Google does things applied here.

    • Stratoscope 6 years ago

      Unlike Wave, you didn't have to wait for your friends and colleagues to get a Gmail account before you could talk with them. You could talk with anyone.

    • aidos 6 years ago

      It's pretty hard to socialise on a network where you're the only one there....

captainmuon 6 years ago

Peer-to-peer file sharing.

There was a time when Napster, Kazaa, eMule were king. The content industry fought against it, but developers came up with decentralized solutions like DHTs or supernodes.

I was convinced the next step would be friend-of-a-friend sharing. You only share files with your direct friends. But they can pass on these files automatically, so their friends can get the files, too. The friends of your friend don't know that the file originally came from you; it is completely transparent. The goal is to not let anybody you don't trust know what you are down- or uploading.
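
To make the idea concrete, here is a toy sketch of that relay model (illustration only, with made-up names; a real system would need actual networking, encryption, chunking and rate limits). Each node only ever talks to its direct friends, and a request is forwarded one hop at a time, so the requester never learns who originally had the file:

    # Toy friend-of-a-friend relay: no networking or crypto, just the routing idea.
    class Node:
        def __init__(self, name, files=None):
            self.name = name
            self.files = dict(files or {})  # filename -> content
            self.friends = []               # direct friends only

        def befriend(self, other):
            self.friends.append(other)
            other.friends.append(self)

        def request(self, filename, _visited=None):
            visited = _visited if _visited is not None else set()
            visited.add(self)
            if filename in self.files:
                return self.files[filename]
            for friend in self.friends:
                if friend in visited:
                    continue
                found = friend.request(filename, visited)
                if found is not None:
                    self.files[filename] = found  # cache it, so we now look like a source
                    return found
            return None

    alice, bob, carol = Node("alice"), Node("bob", {"song.mp3": b"..."}), Node("carol")
    alice.befriend(carol)   # alice and bob are *not* friends
    carol.befriend(bob)
    assert alice.request("song.mp3") == b"..."  # alice only ever talked to carol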

I would piggyback on the social graph of existing networks to get started. I actually had a prototype of the friend discovery code, based on XMPP (using custom stanzas). You log in with your credentials, and it shows you all your friends that are using the same app. It worked with GTalk, Facebook Messenger, and Skype (via the Skype API). One nice feature was that this worked without getting an API key from Facebook etc., or having a central server. I was so pissed when they all changed their APIs to make this impossible. It felt like a conspiracy to stop this use case.
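
The discovery piece needs surprisingly little code on top of an XMPP library. A rough sketch of the same kind of roster scan using slixmpp and service discovery (XEP-0030) rather than custom stanzas; the feature URI is made up for illustration and the error handling is minimal:

    # Rough sketch: find roster contacts that advertise "the app" via XMPP
    # service discovery. Assumes slixmpp; APP_FEATURE is a hypothetical URI.
    from slixmpp import ClientXMPP

    APP_FEATURE = "urn:example:foaf-share"

    class FriendFinder(ClientXMPP):
        def __init__(self, jid, password):
            super().__init__(jid, password)
            self.register_plugin("xep_0030")  # service discovery
            self.add_event_handler("session_start", self.start)

        async def start(self, event):
            self.send_presence()
            await self.get_roster()
            for contact in self.client_roster:  # every friend on the roster
                try:
                    info = await self["xep_0030"].get_info(jid=contact)
                    if APP_FEATURE in info["disco_info"]["features"]:
                        print("friend also runs the app:", contact)
                except Exception:
                    pass  # offline, or no disco support

    if __name__ == "__main__":
        xmpp = FriendFinder("me@example.com", "password")
        xmpp.connect()
        xmpp.process(forever=False)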

I still think if somebody pulled this off, it might work, and it would be pretty disruptive (in the good and the bad sense of the word). It would be the last word in the debate between "data wants to be free, change society to enable what technology promises" and "data has to be a commodity, restrict technology because in capitalism we can't feed artists otherwise".

  • scoggs 6 years ago

    I, personally, felt that Audiogalaxy was the hallmark program from this era. It was the most surefire way to get the songs you were looking for and discover plenty more in the process.

    You'd set the music folder you wanted to share (there was no opting out) and so long as you had the program open your files were available for download by other users. The program operated like an always on satellite torrent program with very low impact. You'd find a file to download and the file would come in many chunks from any available user who had the file on their client side. Downloads were fast and in the event you lost connection or closed Audiogalaxy the download would resume immediately on reload.

    From my POV Audiogalaxy was extremely threatening to Copyright holders in a way that prior p2p programs weren't. Your comment about software being especially 'disruptive' and ruffling all of the right feathers is what reminded me of Audiogalaxy.

    There was no indication of who you were receiving files from. There were no usernames or features outside of download and upload. In terms of creating a piece of software that picked a mission and executed it I'll always look at Audiogalaxy and think it achieved precisely what it set out to do, entirely.

    • sotojuan 6 years ago

      Soulseek is still alive with the same system. The program does not make you share anything but users can disable sharing files with users who don't share files.

      At least in 2012-14 Soulseek was very popular with extremely niche and obscure music fans (stuff not even on What.cd!). Not sure how it is now.

      • twelvechairs 6 years ago

        Yes, strange that Soulseek seems to have been the one that has lasted longest. Probably it's just flown under the radar and hence not been targeted legally or otherwise like Napster, Kazaa, Limewire, eDonkey, etc. I don't think it's in any particular way more technologically advanced.

        In any case, YouTube is now surely the biggest repository of illegally shared music by some distance, yet seems accepted in general by the recording industry.

        • dabockster 6 years ago

          > yet seems accepted in general by the recording industry

          YouTube (Google) pays the recording royalties out of their own pocket for those videos to keep user engagement on the site. If YouTube actually banned this sort of content, everyone would have left the site a long time ago. (IIRC, some gamers moved to Twitch after ContentID flagged their channels on YouTube.)

      • pennaMan 6 years ago

        Underground electronic music lovers heavily use soulseek to this day.

    • lobster_johnson 6 years ago

      While Audiogalaxy was great at doing P2P, the magical feature that differentiated it from the competition was that all known files were always indexed and available for download. This was not true about, say, Kazaa or Soulseek; when someone closed their client, their files would disappear from searches.

      This meant that you could search out a whole bunch of files you wanted, and queue them up for downloading, and whenever the sharers came back online the downloads would start and go through. These days, BitTorrent operates the same way.

  • madenine 6 years ago

    Unfortunately, the rise of streaming content services has kinda killed off the need for file sharing for most people.

    Back in the day of mytunes on a school network, I had HDs full of mp3s. I actively browsed friends' (and complete strangers') libraries to find new things, complete an artist's discography... etc.

    Now? Spotify Premium is probably the best value/$ of any purchase I make in a month.

    Even if I wanted to go back to my old ways... it's just not there anymore. I believe iTunes patched out the features that made mytunes possible, and people don't have computers full of mp3s discoverable on a network anymore.

    • geophile 6 years ago

      > the rise of streaming content services has kinda killed off the need for file sharing for most people.

      The history is a bit different. Napster was sued out of existence. As I recall, this was done by the record labels, but the argument was that file sharing was unfair to artists. Which it was. But that was obviously not the real point. Streaming services are notoriously stingy when it comes to paying artists for their work. So access to music is still easy, and the net change is that new middlemen are profiting wildly.

      (Streaming vs. file sharing is a different issue. I want to own the music I listen to. I don't want to be dependent on an internet connection, a server, a revocable license, etc.)

  • baldfat 6 years ago

    I can get all the music I can listen to for $15 a month (Google Play Family, which comes with YouTube Red) and share it with my wife and 4 kids. I think the fact that people can get music on SoundCloud, Pandora or YouTube made the illegal route less necessary. I personally prefer to just pay the money and get my music on all my devices and upload the music I have that isn't available.

    I also LOVE YouTube Red, and it helps the content creators multiple times more than my little ad revenue. How is it that people flagged YouTube Red as anti-content-creator and went after Google with pitchforks, and now content creators keep saying that Red members give them so much more per person? One creator says 1/4 of his income is from YouTube Red (he makes a living on YouTube).

    • ZirconiumX 6 years ago

      From a revenue point of view, streaming is not a very good way to support an artist[1][2]. The Verge article I linked to states that artists are paid "somewhere between $0.006 and $0.0084" per stream.

      If we are very pessimistic about the royalties of a CD sale (let's call it 20%, and say a single CD costs $10), that's ($10 x 20%) / $0.0084 ≈ 238 streams to match the artist royalties from a single CD.
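
      A quick back-of-the-envelope check of that figure (the per-stream rate and the 20% royalty share come from the numbers above; the 12-track album in the last line is just an assumption):

          per_stream = 0.0084        # high end of the quoted per-stream payout, in $
          cd_price = 10.00           # assumed CD retail price
          cd_royalty_rate = 0.20     # pessimistic artist share of a CD sale

          cd_royalty = cd_price * cd_royalty_rate        # $2.00 per CD
          breakeven = cd_royalty / per_stream            # streams to match one CD
          print(round(breakeven))                        # 238

          # For a 12-track album, that's roughly 20 full playthroughs.
          print(round(breakeven / 12))                   # 20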

      If you take audiophile-level paranoia like buying vinyls, SA-CDs or FLAC masters, I think there's an argument to be made that an artist could make more money from people buying high-quality editions of their work to pirate than from people streaming their music.

      It's food for thought.

      Disclaimer: I am a music student.

      [1]: http://www.informationisbeautiful.net/visualizations/how-muc... [2]: https://www.theverge.com/2015/12/7/9861372/spotify-year-in-r...

      • zolthrowaway 6 years ago

        I disagree with this. Labels are the main beneficiaries of music distribution[0]. In addition, the non-committal nature of streaming makes it so much easier to get people to listen to your music. There are tons of artists I love with <10,000 plays that I never would have even heard of if not for music streaming services.

        Even if royalties are the biggest source of income (which I don't believe to be true), I still think streaming is beneficial to small artists. Using your math of 238 plays per CD, at 10,000 listens they would have to sell more than 50 CDs for CDs to be more profitable for them. There are an incredibly high number of artists on Spotify who have 10k plays on a single song who probably couldn't have given their CDs away.

        I've gone to shows to see the opening acts and co-headliners because of streaming services. I've bought merch from plenty of small artists that I never would have even known existed. I've listened to genres I had no interest in and found artists I like and support because of streaming services.

        I feel like the exposure and the ability to produce your music (and actually have people be able to find and listen to it) massively outweighs any potential loss of revenue. There are potentially some medium sized artists who would have done better in the CD model, but probably not with how prevalent music piracy was in the 2000s.

        [0] https://splinternews.com/buying-an-album-isnt-the-best-way-t...

        • baldfat 6 years ago

          This is why independent music is the only way to go; the number of full-time independent musicians has grown almost 500%. Labels are nothing but dead weight unless you are making millions.

      • Avshalom 6 years ago

        Oh you're not nearly pessimistic enough:

        >> In the U.S., recording artists earn royalties amounting to 10%–25% of the suggested retail price of the recording, depending on their popularity, but such is before deductions for "packaging", "breakage", "promotion sales" and holdback for "returns", which act to significantly reduce net royalty incomes.

        https://en.wikipedia.org/wiki/Royalty_payment#Mechanical_roy...

      • scooble 6 years ago

        If a single CD contains (say) 10 songs, then it seems odd to compare the cost of listening to a CD to streaming a single song. Listening to the whole CD would be the equivalent of 10 streams.

        And if we divide those 238 streams by the 10 songs on the CD, it seems as though we only need to listen to a CD about 25 times before streaming becomes a better option for the artist.

        • FelipeCortez 6 years ago

          We "only" need to listen to a CD about 25 times? That's ~25 hours of listening to a single album.

          • scooble 6 years ago

            Over a lifetime of owning a CD, that doesn't sound too crazy. I think it is a safe bet that there are albums I've listened to 25 times this year. And if I bought an album today, I would think it unlikely that I wouldn't average 2.5 plays per year over the next 10 years. I'd probably burn 10-15 of those plays in the first few weeks, and then only need 1-1.5 plays per year to break even.

            Of course, I won't listen to all my albums this much. Some might only get a few plays. So I suspect that the Miles Davis estate would be better off if I streamed 'Kind of Blue' rather than owning the CD. Conversely, Creed have done relatively well from my ill-advised CD purchase because I can guarantee I'm not going to hit 25 plays on that.

            So I guess the moral of the story is that I should stream from artists that I love, and buy albums from artists I am on the fence about.

      • criddell 6 years ago

        > From a revenue point of view, streaming is not a very good way to support an artist

        I wonder what percentage of the top 100 artists on streaming services even own their rights? I don't know for sure, but I suspect more money flows to the rights holders when I listen to a Pandora station while I work than if I turned on an actual radio.

        > 238 streams to cover the cost of a single CD

        This is just one data point, but before I signed up for Google Music, I was buying two CDs a year and spending around $35. Now I'm paying $10 / month. How does the amount spent on streaming today compare to the spending on CDs in 1990? Does the equation change if you consider environmental costs of making those CDs and getting them to consumers?

      • devonkim 6 years ago

        Buying music is primarily supporting the distribution network that supports the artist, which has its pros and cons. I normally go to live shows and buy shirts or merchandise to support an artist because the margins on shirts, for example, are a lot better for the artist compared to the CDs. The label may pay for touring needs but a ton of artists not named Taylor Swift have to pay for almost everything themselves either because the label fronts no money or because the label will force a loan on the artist by having them pay it back in the form of ticket sales. Meanwhile, the label oftentimes works with venues to get a cut of, say, alcohol sales from them too.

        LPs and such, while coveted by hipsters, are really tough to recommend as a revenue stream for more mainstream genres. The music business has devolved back to the point where, in many cases, it's possibly better for an artist to play local shows and promote themselves as an entrepreneur rather than to be an employee of a label as almost an unpaid intern.

      • mehwoot 6 years ago

        > 238 streams to cover the cost of a single CD

        Justin Bieber has very roughly 5 billion plays on spotify. Using your math, that would equate to 17 million CDs sold. Seems pretty reasonable. I'm sure he would have been better off before the internet, but it still seems doable.

        Besides which, I think YouTube celebrities and Twitch gamers have shown it is clearly possible to make a decent (if not quite profitable) living off creating content to be streamed. It's just that music doesn't have the pre-eminent spot and advantages over other media it used to.

      • foobarchu 6 years ago

        Depends on how much you listen to as to whether you're contributing more or less. I listen to Spotify all day while I work, cycling through songs that I like, often once every day or two. An artist that I like enough to have purchased a CD in the past will get multiple plays a day. That means less than a year to generate one CD's worth of revenue from me, and it's going to keep paying out after that because I'll keep listening to it years down the road if I really like it.

        Also worth noting that 238 streams is ~20 playthroughs of your average 12 song album, for those who frequently listen to entire albums.

        Add to that the fact that streaming services have made discovery a much bigger thing. The less mainstream artists are now getting the benefit of appearing on streaming radio and such, making their growth that much easier (after they've passed a certain threshold of course).

  • PeterisP 6 years ago

    I believe that DC++ file sharing had a model like that, but the torrents outcompeted it for most purposes; most people don't want friend-of-a-friend filesharing, they either want the most effective way to get the latest goods from whoever has it, or the ability to upload stuff while remaining mostly anonymous.

    • astura 6 years ago

      Minor nitpick, Direct Connect (DC) was the protocol, DC++ was merely a popular client but by no means the only client. It wasn't even the official client.

      Direct Connect really excelled at college campuses. All the traffic was internal so it was blazingly fast, plus the hub had a chat feature, which was incredibly useful (pre-Reddit), and sometimes people would share old tests and stuff.

      • tmerr 6 years ago

        I'll add that DC is alive and well on my college campus. Beyond speed, it's less likely to get you in trouble with the college + ISP than torrenting, and has some textbooks and other locally relevant content.

        • jd3 6 years ago

          Are you serious? Most people on my college campus did not even realize that they could drop their files in the public_html folder to share them online, let alone set up an ADC file sharing program as esoteric as DC++

          • wadkar 6 years ago

            Well, depends on the campus ;-)

            We had a BulletinBoard-based community with FTP sharing, and someone built a crawler to index the servers. So everyone just knew this static IP, which was our campus version of Google.

            We then slowly transitioned to DC++ by constantly educating and sharing content on both FTP and DC++.

            The real shame was that the move to DC++ actually caused the BulletinBoard community to die slowly. No more announcement posts, no more requests, no more SuperStar uploaders who “magically” found the latest TV shows aired in the US.

          • astura 6 years ago

            Depends...

            Public school? Probably not. Tech school? Absolutely.

            • nsporillo 6 years ago

              Yeah RIT students run an underground DC network.

              • astura 6 years ago

                Funny you should mention it... it's been well over a decade since I graduated, but RIT alum here!

                The RIT DC hub was the shit back in the day....

                Back in the day there was nothing underground about it, at least for CS students. :) Connecting to the DC hub was a rite of passage. So I am speaking from experience. :)

              • tmerr 6 years ago

                Yes, that's the school I went to

    • mkbkn 6 years ago

      Yup, we used DC++ a lot on our college campus about 5-6 years ago. It is definitely one of the best ways, if not the best, to share files on an intranet.

  • fsloth 6 years ago

    I think the biggest motivation for peer-to-peer networks was the lack of legal options for digitally accessing movies and music. The current streaming services and online stores have reduced this ache.

    • rodolphoarruda 6 years ago

      I agree. In fact, I started to realize I've been using sites like The Pirate Bay just to get a sense of what people are watching, new releases, etc. Ratings tend to be very low, of course; it's their culture to say that any movie sucks. But at least I can get a list of titles and then search for them on paid service providers.

      • CreepGin 6 years ago

        I use Rotten Tomatoes

        • kazagistar 6 years ago

          It's a very different measure, because the sort of people who rate media tend to have certain characteristic tastes.

          • foobarchu 6 years ago

            Very true for certain genres. If you want schlocky horror movies, for example, RT isn't going to do you any good because reviewers hate all the things that make those films good.

  • starsinspace 6 years ago

    Peer-to-peer was killed by the advent of home routers doing NAT, making arbitrary inbound connections impossible. Configuring port forwarding was generally too difficult for "normal users", the web-UIs offered by typical home routers didn't make it easy enough. NAT traversal methods exist, but from my experience they are generally unreliable and/or slow.

    I still remember the days when P2P connections over the internet "just worked, always" (the early Napster/Kazaa days), and then it went to "it works sometimes", and finally to "almost never works (except if the service offers some kind of middle-man datacenter to bounce the connections between the two NAT'd peers)".

    Try to do an IRC DCC file transfer with anyone these days and see what I mean... long ago, it "just worked".

    I blame today's unfortunate centralization of internet services largely on NAT breaking P2P connectivity. Who knows, maybe spread of IPv6 will fix this mess... some day...

    • dabockster 6 years ago

      IIRC, consumer wifi routers supposedly support a protocol that allows a computer to request a temporary port forward and release it when it's finished using it.

      Anyone remember what that protocol is called?

      • thrown_produce 6 years ago

        UPnP, Universal Plug and Play.
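
        For what it's worth, requesting and releasing a temporary forward through the router looks roughly like this with the miniupnpc Python bindings (a minimal sketch; the port number and description are arbitrary):

            # Minimal UPnP port-mapping sketch using the miniupnpc bindings
            # (pip install miniupnpc). Port and description are arbitrary.
            import miniupnpc

            upnp = miniupnpc.UPnP()
            upnp.discoverdelay = 200   # ms to wait for devices to answer
            upnp.discover()            # find UPnP devices on the LAN
            upnp.selectigd()           # pick the Internet Gateway Device

            port = 51413
            upnp.addportmapping(port, 'TCP', upnp.lanaddr, port,
                                'p2p example', '')
            print('external IP:', upnp.externalipaddress())

            # ...run the peer-to-peer app, then release the mapping:
            upnp.deleteportmapping(port, 'TCP')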

        • dabockster 6 years ago

          Yep, that's it. How useful is it in practice? I haven't ventured into those settings ever since I got my Airport Extreme.

          • lenkite 6 years ago

            Pretty much every torrent software supports UPnP/NAT-PMP today. The problem was UPnP came a bit late - the spec was published only in 2008, and by that time everything was firewalled off and required more-than-idiot-capability to expose.

  • saas_co_de 6 years ago

    Bittorrent is good enough, and there aren't really any law enforcement concerns anymore. Pre-9/11, the FBI and other agencies were actually doing prosecutions against people for unlicensed content distribution, but after 9/11 they decided to focus on higher priorities, so nobody really cares about security anymore.

    A warning letter from the ISP is not quite the same threat as a few years in a fed pen.

    • captainmuon 6 years ago

      In Germany, it is currently quite dangerous to use Bittorrent illegally. There is a high chance of getting an "Abmahnung", which costs around 600-900 euros.

      Using sharehosters with for-pay premium accounts is popular, as is using VPNs with bittorrent. The alternative is low-quality streaming.

      I think the content industry isn't pushing to eradicate illegal downloads; they just want it to be at least as expensive as, or more inconvenient than, the legal options. (Which are quite good by now, if the shows you want to watch are available from your provider.)

      • saas_co_de 6 years ago

        Interesting. Wasn't aware of that. A VPN is a good option.

        This brings up a good point though. Bittorrent is not free. People who use Bittorrent do pay to use it (bandwidth, setup costs, etc), and pay more to get good quality stuff, but Bittorrent offers better quality and better selection than any licensed service.

      • Tepix 6 years ago

        That's why the OP suggested that everyone only connects to friends.

    • darksim905 6 years ago

      This just isn't true. I see these all the time. It's all automatically generated these days.

  • datenwolf 6 years ago

    > I was convinced the next step would be friend-of-a-friend sharing. You only share files with your direct friends. But they can pass on these files automatically, so their friends can get the files, too. The friends of your friend don't know that the file originally came from you; it is completely transparent. The goal is to not let anybody you don't trust know what you are down- or uploading.

    This very thing exists and is readily available for use in the form of Freenet: https://freenetproject.org/

    • captainmuon 6 years ago

      I think freenet is very different. I know freenet from back in the 2000s. It is a platform to decentrally, anonymously store data. At some point, they did add a friend-to-friend option I think (which is what used to be called "darknet" - now that refers to sites like silk road, but back then it meant closed P2P like WASTE). The closest analogue would be Tor hidden services.

      What I envision is more like a modern version of Kazaa. The experience would be like iTunes or Spotify. Select your media folder, give it access to your contact list, and voila, "Pirate iTunes". The closest current analogue would be RetroShare.

  • sli 6 years ago

    I'm impressed that Direct Connect is still kicking while all those services slowly died off.

  • dehrmann 6 years ago

    I used to work for an on-demand content streaming company, so I'm biased, but the cataloging, searching, and quality of content are big scaling issues for the average user.

Al-Khwarizmi 6 years ago

I don't know if this has a name, but... ICQ-like "searchable" instant messaging.

Let me explain myself. In the late 90s, there was an IM client called ICQ. For starters, its UI was leaps and bounds ahead of anything to be seen in the next decade, and it had functionality, like bots and groups, that only became widespread again a few years ago.

But what I'm talking about is the fact that I could look for "female, age between 16 and 20, who likes writing, RPGs and electronic music and is available for chat" and results would come up. You could enter your data, interests, hobbies, etc. in the program and mark yourself as available (only if you wanted) and it was a really nice way to meet people. I collected stamps at the time, so I remember I would search for people who were stamp collectors in countries for which I had no stamp, leading to stamp exchanges apart from some interesting conversation (as you could also include some of your own interests in the search window).

I thought that was the future and the tech could only get better from there, but then came MSN Messenger, with really bare-bones features in general and in particular no "searchable" functionality, and displaced ICQ. And since then, nothing similar has appeared. Instant messengers are focused on talking to people you already know, and if you want to meet new people, you have to use specific-purpose communities or dating sites. But good luck finding someone who likes writing, RPGs and electronic music at the same time... That function of searching people by interests in a huge directory of people (not restricted, e.g., to a dating site) is what I thought would take the world by storm, and as far as I know it doesn't even exist anymore at enough scale to be meaningful, at least in the West (maybe in WeChat, QQ or one of those apps used in Asia they have something similar, I don't know).
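
The matching itself is trivial; what made it work was a huge directory of people who had opted in. A toy sketch of that kind of query, with made-up profile data (only opted-in users, and only the fields they chose to fill in, are searchable):

    # Toy ICQ-style directory search over made-up profiles.
    profiles = [
        {"nick": "inkwell", "age": 19, "available": True,
         "interests": {"writing", "rpgs", "electronic music"}},
        {"nick": "philately", "age": 34, "available": True,
         "interests": {"stamp collecting", "history"}},
        {"nick": "lurker", "age": 22, "available": False,
         "interests": {"rpgs"}},
    ]

    def search(min_age, max_age, wanted):
        return [p["nick"] for p in profiles
                if p["available"]                      # opted in to being found
                and min_age <= p["age"] <= max_age
                and wanted <= p["interests"]]          # has all requested interests

    print(search(16, 20, {"writing", "rpgs", "electronic music"}))  # ['inkwell']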

  • adrianmsmith 6 years ago

    Right, Skype had a similar interface in 2005-2010: you could set your status to "Skype Me" (in addition to "Available", "Away", etc.), which was the green "available" Skype icon but with a smiley overlaid. You could search for people by age, gender, location, and I think keywords, and "only Skype Me users" (also implying online at that time). I'm not sure if the interest searching was as advanced as you described, but the feature was great. You could then chat to the people you found, or call them, etc.

    After a few years, setting your status to "Skype me" meant every few minutes a chat from a new person advertising porn sites or other scams. So I stopped using it. Presumably so did everyone else; at some point they disabled the feature. And now Skype is just a tool to talk to people you already know.

  • muzani 6 years ago

    I thought the same thing too. It's quite sad that talking to people is usually limited to dating, and dating has become a domain of Tinder.

    I remember an era when chat rooms like IRC had thousands of people online at a time. It didn't scale past this, but it was very exciting.

    Maybe there is a market for something like a real-time Reddit, but it's a lot harder to pull off because there need to be many people on at the same time before it can reach critical mass.

    • wrinkl3 6 years ago

      There's definitely a huge demand for less dating-oriented ways to meet people online.

      > something like a real-time Reddit

      Reddit did that with Robin, their April Fools' Day experiment two years ago. It randomly connected small groups of participating redditors in chatrooms, and would gradually merge the chatrooms if the majority of the users voted to do so. It was a lot of fun while it lasted, but I don't think anyone has managed to successfully replicate it since.

    • owebmaster 6 years ago

      I guess it is because we stay online 24 hours a day now, but in the past connected time was a scarce commodity. I felt bad whenever I found that I was online but wasn't connected to my IRC channels, losing all the history :(

  • wils1245 6 years ago

    I think as others have mentioned, spam is a big problem, but the more fundamental problem with this idea is supply/demand. For a sufficiently large system, the number of people that are interested in talking to person X about topic Y isn't likely to be well matched with the number of people person X is interested in talking about topic Y with.

    Dating sites, while not what you want, are very illustrative. On open messaging platforms like OkCupid, in which anyone can message anyone, women are barraged with messages from men that want to talk about sex. How do you filter them out? If you do manage to filter them out, how do you surface a reasonable number of good conversation matches?

    The only solution I've seen in this thread is offering opt-out. But all that's doing is offering people a way to not use a product, it's not fixing the problem.

    Even relatively well funded and mature dating sites solve this by either offering finicky filter options to users, or really blunt filters (women have to initiate the conversation).

  • pjc50 6 years ago

    > "female, age between 16 and 20, who likes writing, RPGs and electronic music and is available for chat"

    This is probably the reason it was abandoned.

    • anon1253 6 years ago

      I don't get this. Dating apps are incredibly popular, and it would be opt-in. If it's the age bucket you're concerned with, I met some of my best friends online during that time (female and male). Adolescence is just like that. I understand that harassment, intimidation and general annoying behavior are a concern. But if we've reached the phase where "two random people can't have a normal conversation if they both wish to" is the default stance, then we all need to take a good hard look at ourselves; technology is not going to solve that one.

      • pjc50 6 years ago

        I should probably have written a lot more than one sentence there. You're absolutely right about the benefits of meeting randoms on the internet, and clearly it still happens and works for some people. But it always did have social scaling issues - as a particular system or chatroom got more popular the success rate fell. I also worry that the culture has changed. "On the internet" really isn't a place apart from everyday life now, and we have a giant all-encompassing social network. And a dating app.

        As one of the sibling comments points out, once such a system becomes popular it's also going to be a target for spammers. If there have been any advances in chatbot AI over the past couple of decades that is also likely to make the problem worse.

    • Al-Khwarizmi 6 years ago

      You only got found if you explicitly marked yourself as available for a chat, which was an opt-in choice that could be easily toggled on and off, or left off all the time if you just wanted to use ICQ to chat with your friends as current IMs are used. And you could omit information, e.g. you could fill in your hobbies but not gender if you didn't want to be found by gender. And the privacy settings were much more understandable and transparent than in e.g. Facebook.

      I guess there would be creeps, as anywhere where you can be exposed to unknown people, but I don't think the problem was worse than elsewhere. In fact, in my later experiences in chats, dating sites, etc., most women seemed to be (understandably) on the defensive due to the amount of stalkers and creeps they had to endure regularly, and in ICQ it didn't seem so much that way, I got plenty of conversations, girls (and guys) giving me their address for exchanging stamps or letters, etc., which I found much harder later in other media. Although maybe it's that both the Internet and myself were young and naive back then.

      Anyway, if stalkers did become a big problem, surely the tech could have improved to deal with it.

      There are probably several factors, but I think the main reason it was abandoned is simply network effects. MSN Messenger rose for reasons orthogonal to this, and then none of the main IMs that succeeded it implemented this. Sadly, which IMs gain more traction depends more on network effects than actual quality (see WhatsApp vs. Telegram, etc.)

  • Jaruzel 6 years ago

    I mainly blame Facebook for this, but these days it's now viewed as seedy and odd to try and be friends with strangers across the internet. Each new 'social service' that appears is not only silo-ed by its own protocol, but also promotes the continued siloing of users into their own (existing) friendship groups.

    Back in the day (yeah, I'm old), you could drop into an IRC channel of strangers and be having a good conversation within minutes[1]. ICQ, as mentioned, had a 'find friend' feature, and many other early social platforms encouraged like-minded strangers to interact.

    So we all end up just talking to the same 20 or so people we've always spoken to, reinforcing the echo-chamber effect, and never introducing new ideas and viewpoints into our lives.

    ---

    [1] IRC these days, if you are lucky enough to find an active channel, is now so cliquey that any interlopers are largely ignored.

    • okreallywtf 6 years ago

      You make a good point that I haven't really thought of (in quite that way). Anonymity used to be really fun on the internet in the early days for me, meeting strangers, playing games or whatever. Now so much is tied to us directly and to real-life that the thrill of exploration is kind of gone from the internet to me.

    • wils1245 6 years ago

      Definitely agree with your point about it being odd to get in touch with perfect strangers, but it's a great tool for promoting acquaintances to friends. I recently had a small house party and organized a ski trip, and Facebook allowed me to very easily invite people I didn't know very well. It can be a great low-pressure, non-intrusive, but genuine way to try and bring people into something.

  • herbst 6 years ago

    Facebook Graph Search was like this (and creepier, as people didn't always know _how_ they were searchable), but Facebook has heavily crippled it in the last few years.

    • nl 6 years ago

      Graph search still works surprisingly well if you know what you can search for.

      Try http://graph.tips/

      • herbst 6 years ago

        It respects privacy settings now though, right?

        • nl 6 years ago

          Yes. And it actually goes further, hiding some things which privacy settings would let people see.

          But there is a lot you can still do with it.

    • underwater 6 years ago

      Graph Search always surfaced relevant information that explained why a result matched a search. For example, “Friends who visited London” might say “Checked in at Tower Bridge” below a result.

      You’re right that people didn’t understand Facebook’s privacy model. People assume that privacy by obscurity is a guarantee. Never mind that anyone could write a bot that did what Graph Search did. Graph Search was actually more conservative than it needed to be to avoid surprises.

      In the end that all mattered less than the fact that it wasn’t solving a problem people had.

      • herbst 6 years ago

        Graph Search was a lot deeper than any bot could ever be. At some point you could match against likes that were set to private on profiles. 'Girls between 20 and 25 who like drum and bass and visit [location]' was an actual query that enabled finding pretty much any girl I saw at any party, without further info.

    • hans_mueller 6 years ago

      Graph Search would probably allow advertisers to bypass FB's ad products for identifying potential customers, and hence is not in the best financial interest of FB.

    • Al-Khwarizmi 6 years ago

      Interesting. I haven't even tried Facebook Graph search as I think it never even became available in my country. But I think the main difference is probably that in Facebook, it's generally not socially OK to contact strangers. In ICQ, you could set and communicate explicitly if you wanted to be contacted by strangers or not (and if not, you weren't searchable).

    • wrinkl3 6 years ago

      I remember when Facebook hyped Graph Search as the next big thing, a truly social search engine. Years later I only occasionally remember its existence when trying to find someone/something specific on Facebook, and even then it's at most mildly helpful.

  • timothevs 6 years ago

    Ha. I met my wife on ICQ, looking for people to practice my German with.

    • elcapitan 6 years ago

      How did you "meet" people on ICQ, was there a discovery functionality? I used it too, but only for chatting with people I knew and whose account ids were known to me..

      • vazamb 6 years ago

        The parent literally explained how you meet people

  • pryelluw 6 years ago

    I met my wife on ICQ after using the same search method you described. :)

    • edkennedy 6 years ago

      I lost my virginity through ICQ!

    • cisanti 6 years ago

      Met my first love but didn't get married. Congratulations!

      • cableshaft 6 years ago

        Same. First love and first heartbreak. She took a while to get over. I ended up sabotaging quite a few opportunities with women I had during my freshman year of college because I still wasn't really over her.

  • antjanus 6 years ago

    One thing people still don't remember from ICQ was the ability to collaborate / live message, i.e. edit text in a shared environment. It was bonkers. You could be writing and someone else could change the font options for it and type at the same time, even delete your messages.

    I loved it. And this was back in 2000 maybe.

  • hans_mueller 6 years ago

    this is how I met my until now best friend about 15 years ago.

    • dabockster 6 years ago

      > until now

      I'm sorry to hear that.

      • krsdcbl 6 years ago

        German grammar - he really means he hasn't found a better friend so far

  • Tharkun 6 years ago

    Maybe I'm just a nostalgic old geezer, but that kind of socialising was my favourite thing about the internet. It's a shame I haven't been able to experience it in over a decade.

    It was so incredibly easy to find and talk to new people. It seems like everything about the current internet makes that deliberately harder. Even sending e-mail to strangers is hard now, you'll likely end up in their spam folder.

    • brokenmachine 6 years ago

      Spammers ruined it for everyone. :-(

      • Tharkun 6 years ago

        Spammers, the lack of punishment for spammers, and the tech community's response to spammers.

  • Firegarden 6 years ago

    I am certain we could easily create this using websockets and p2p browser data streams... who's coming with me? Crowdfund me and I will release a prototype.

    • Al-Khwarizmi 6 years ago

      OK, I only ask for 10% of the profits for giving you the idea :)

      Just kidding. I would definitely support such a crowdfunding within my modest means. The problem is that I don't think the tech is the main issue (after all, ICQ did it with 90s tech). Obtaining a userbase is the main issue, and network effects go against new contenders.

      • senatorobama 6 years ago

        Not to mention... dick pics.

        • greggman 6 years ago

          That's actually a good point in the sense that when ICQ was popular most people didn't have a digital camera in their pocket with instant upload capability so sending a dick pic would have been way more trouble.

          Maybe the fact that pretty much everyone has a digital camera in their pocket now means the world has changed into a world where ICQ as the OP described it just can't exist anymore

          • Al-Khwarizmi 6 years ago

            Definitely, the context changes and brings new challenges... but I don't think that's unsolvable. I can think of low-tech solutions (allowing only text, not pictures, until the other person explicitly allows pictures) and high-tech solutions (dick recognition via neural networks).

  • azeirah 6 years ago

    I had some success with Omegle. They support a feature where you can select a topic you're interested in.

    • dabockster 6 years ago

      > Omegle

      Too many people named Dan.

  • anjc 6 years ago

    Facebook worked like this up until recently, and now sort of works like this.

    "females who are single, age between 16 and 20, who likes writing" would have returned precise results.

  • Elvewyn 6 years ago

    Hm, omegle sort of has that. Put in interests and find anonymous strangers with those interests.

    I met a guy to play Dota 2 with in 2012 and we've been friends since.

  • 6ue7nNMEEbHcM 6 years ago

    I loved the chat interface similar to the unix talk command.

grumblestumble 6 years ago

E-Ink. It is the "correct" choice for display technology, and with enough research money put into it, it could replace these abominable light-emitting displays. But what we already have is "good enough", despite all of the hidden costs, and so we're stuck with it.

  • swyx 6 years ago

    100% agree. imagine my shock when as a young technology analyst I discovered that E-Ink was a smallcap tech company in Taiwan and basically produced kindle displays, store price tags and that cool double sided phone that one time. I thought it was a huge deal when they managed to do color E-Ink. but no one cared.

    • wrinkl3 6 years ago

      Around 2010 I assumed it would be a matter of a couple of years until we were reading comic books on our Kindle Color ink tablets.

      • deusum 6 years ago

        Still waiting on that E-Ink phone that lasts a month between charges.

        • frik 6 years ago

          I bought a Motorola F3 phone like 10 years ago for relatives.

          https://en.wikipedia.org/wiki/Motorola_Fone

          E-ink display, good for elderly people; you can drop it from 100 feet, run a tank over it, or drop it in a pool. It will survive.

        • lucaspiller 6 years ago

          The Motorola F3 is that phone, if all you care about are phone calls (it can’t really do SMS). You are probably better getting a similar era Nokia though, as the battery life was about the same and they were a lot more functional.

    • leggomylibro 6 years ago

      You can buy 3-color (white, black, and read all over) ones for ... I want to say about 20 yuan which is like $3?

      I haven't looked into driving them yet; that's step 3 and I'm still on step 1 (monochrome OLEDs) but I think you might have to do some funky temperature adjustment stuff to get good results out of them...maybe that hasn't quite been integrated into the display controller chips yet?

      • em3rgent0rdr 6 years ago

        > "(white, black, and read all over)"

        Isn't that supposed to be "red all over"? Or are you making a pun?

    • spoinkarooo 6 years ago

      8069.TW. It has more than doubled YTD.

  • amerkhalid 6 years ago

    I love the E-Ink display. Fell in love with it when I bought my first Kindle. I could read all day and my eyes wouldn't be tired at all.

    Since then I have been waiting for an E-Ink based laptop for work. It doesn't need to play videos or even display color. As a backend developer, I can live with a greyscale display. It doesn't need high refresh rates either.

    But now almost 10 years later, it doesn't look like this will ever happen.

    • poutrathor 6 years ago

      I arrive a bit late to the party. I have the same wish and thought about building a startup around it, so I dug around last year.

      > But now almost 10 years later, it doesn't look like this will ever happen

      Some Chinese companies are trying to do that already: the Dasung Paperlike 13.3" e-ink monitor.

      There are reviews around the web about it and you can buy one on Amazon. I have not done so yet, as the price is still high at $1200.

      I also met with a great manager from the Taiwanese E Ink company. E Ink technologies (they have several similar ones) are mechanically based, which implies many constraints.

      As for the story, the MIT guys who developed it spent around 20 years on it before passing the baby on to the Taiwan-based company.

      Except for Amazon, most big companies are not pushing it, I guess because pictures and videos are so prevalent nowadays.

    • mycat 6 years ago

      I thought they recently made an e-ink monitor, though not a laptop... more like a giant tablet: http://www.dasung.com/english/

      • lj3 6 years ago

        Dasung is on their second version of the display. If you look up youtube reviews of the second version, I think you'll be disappointed with what you find. The refresh rate continues to be a major hurdle.

  • mseebach 6 years ago

    The Kindle and its e-reader kin are pretty successful?

    I think the problem is that a device with an e-ink display is necessarily single purpose. General purpose devices can't use e-ink, they need to support video playback and browsers with full colour.

    • Andrenid 6 years ago

      I'd love an e-ink dashboard in my car so it's not so glary, e-ink tablet that does a bit more than a Kindle but isn't a full media player (super thin super light with basic browser for wiki, news, rss, and email), ~20" e-ink screen on the inside of my apartment door with calendar and notifications and todolist, there's so many cool things we could do with the tech if we could produce it cheaper and bigger. Not everything needs to be able to play video.

      • SteveMoody73 6 years ago

        Have you seen reMarkable? Doesn't seem to have a browser from what I can see but more capable than a kindle.

        https://remarkable.com/

        • michaelmior 6 years ago

          I got mine a few weeks ago. It's true there's no browser, but the note-taking capabilities are great. It's running Linux, so I can only assume it's a matter of time before someone starts releasing some custom updates.

          • mattferderer 6 years ago

            I was looking into this a while back. Would love to hear more reviews on it. The ability to write notes with a pen is the one feature I wish my Kindle had. The Kindle is a great device for reading, bookmarking & highlighting, but you can't write in the margins... When I first saw this I was hoping the next version of the Kindle would try to accomplish this for the holiday season, but unfortunately they did not.

        • mycat 6 years ago

          How does it compare with Sony's e-ink tablet with stylus? Seems very similar, except it is made by a startup.

      • oeuviz 6 years ago

        I'd really love that too! And if it came in color, I would really appreciate a photo frame, as I just think a backlight is not needed for these things and is just annoying at night. I really do not care about video; most relevant content is pretty static and does not even require the already possible 4-5 changes per second.

    • avar 6 years ago

      That's the GP's point. Of course the update rate sucks because enough research hasn't been put into it.

      There are already E-Ink displays with a much faster update rate: https://www.youtube.com/watch?v=wsY3T1uzjAI

      Who knows how much faster this could get with enough investment? Unless there's some fundamental physical limitation to the update rate that I'm not aware of.

      • IshKebab 6 years ago

        Ha that video is not a fast update rate. It's just only updating a small bit of the screen at a time so you can't really see the slowness. Modern kindles are pretty fast but not video speed, and they still have to blank the screen after a few updates.

        • avar 6 years ago

          Yes, obviously I'm not going to be linking to a video showing a technology that doesn't exist yet.

          The video serves to demonstrate that given some technological development the entire screen could be like that, and aside from that there are E-Ink screens in common use that can't refresh even such a small area that fast.

          • mavhc 6 years ago

            A technical video about eink, reprogramming the firmware etc https://www.youtube.com/watch?v=MsbiO8EAsGw

            • j_s 6 years ago

              It's unfortunate that so much of this specific content is locked up as video. The part is $35 shipped on AliExpress, a great way to begin experimenting.

              • mavhc 6 years ago

                Unedited video is the fastest kind of content to produce, and the slowest to consume.

                Edited video is the slowest kind of content to produce.

                • j_s 6 years ago

                  Unfortunately for me even edited video is still tougher to consume than the written word, when it comes to technical content.

                  I have a much easier time digesting information at my own pace.

    • r3bl 6 years ago

      Imagine a table with an e-ink display that can show you whatever you want them to show you.

      Imagine reading news above your sink while doing the dishes.

      Imagine the entire walls constructed with e-ink displays in your home and being able to change the wallpapers in your living room depending on what's currently in your mind.

      All of those could have a "read-only" mode until you push a button, they would use pretty much no electricity, and would ideally be water-proof.

      Instead, what I have right now, is an e-book reader. Which is nice and all. I'm often sending articles to my Kindle (kudos https://p2k.co/) and all sorts of things, but I had much higher hopes from e-ink technology (and still kind of do). The hope of having my home filled with e-ink displays is much greater than having a home filled with sensors.

      • marcosdumay 6 years ago

        E-ink isn't good enough for any of those yet.

        I don't know how much research is ongoing into it, and I do think it's underutilized (I can't understand why smart watches use LCDs), but there are many breakthroughs before we can get those things you mention.

        I still expect it to spread all over the world, eventually.

        • mcphage 6 years ago

          > I can't understand why smart watches use LCDs

          Refresh rate. If all a smart watch did was be a watch, e-paper would be sufficient, but then there would be no reason for it to exist. Once you want it to do non-watch things, you need a better refresh rate.

          • marcosdumay 6 years ago

            Ok, I don't know all that a watch is expected to do because I don't own one. But AFAIK, messages, fitness tracking, and remote controls do not need anything above what some e-paper can get you.

            Now (after some time to think) I do think it's mostly a design restriction. Nobody was able to make an e-paper watch look futuristic and expensive.

  • krylon 6 years ago

    Many years ago, I heard a politician give a speech, and he used a phrase there (I am translating from German, so it might sound a little clumsy): The worst enemy of Good is Better.

    Over time, I have come to believe that the worst enemy of Better is "Good Enough".

    • Itaxpica 6 years ago

      A comparable English expression is “the perfect is the enemy of the good”, which I think every software engineer should have engraved onto their monitor as a reminder.

      • pbhjpbhj 6 years ago

        A slightly different twist, but also a popular idiom, is that "excellence is good enough".

        [I'm frantically resisting the urge to redraft that sentence]

    • abritinthebay 6 years ago

      The common English idiom I’ve heard that is similar to the second one is “‘Good Enough’ never is”, which is grammatically a bit strange but a fun one.

  • vortico 6 years ago

    I'm out of date. Does E-ink still have ~500ms latency when refreshing the screen? If so, that's the #1 reason I believed it wouldn't be successful when I used it 6 years ago.

    • olavgg 6 years ago

      There is a Norwegian startup that has decreased the latency dramatically. https://remarkable.com/ https://youtu.be/zpUPpiV7gAo

      • Terretta 6 years ago

        I have one. Really is remarkable.

        No meaningful latency while writing unless you draw fast large figures while watching for lag.

        Upsides: Feels like pencil on paper. With iPad Pro I keep going back to paper. With this... it’s paper.

        Downsides: Mobile software clunky, doesn’t connect to all WiFi, hardware paging buttons should not have been on bottom.

        • agentultra 6 years ago

          I have one too. I love it.

          I was blown away when I was able to write some technical specifications in emacs, render them to PDF and share them to the device, then go for a coffee and review my work. I can markup my documents, sync them back to my machine, etc.

          I do wish the resolutions of the documents on my desktop were better. I've drafted presentations on the device while thinking. I vastly prefer paper for thinking. This device is perfect for that.

          I just wish I could do more with it. I want to be able to hook up a cloud service to interpret my math handwriting, run it through my theorem prover or checker, and return the results, etc.... much like what people were doing at Xerox PARC ages ago.

          There's so much potential here.

        • drcongo 6 years ago

          Do you think it would work well for someone who sketches for a living? My wife is a fashion designer and her minimum setup of a MacBook Pro + Wacom is pretty unminimum.

          [edit: Reviews I read said that the drawings are scalable vectors, but two posts below talk about poor resolution, so maybe not the right device for her]

          • Terretta 6 years ago

            I'd consider the latest iPad Pro better for sketching, thanks to the high frame rate and lack of lag, pressure-sensitivity control, and pencil "angle" control; iPad-native software options have by and large surpassed the desktop.

            // Digital ink junkie since the Newton, currently own and use latest models of Wacom Cintiq, Surface, iPad, and Remarkable.

          • kej 6 years ago

            I have a few artist friends who are pleased with the Microsoft Surface. You get a pressure-sensitive stylus like on the Wacom and can run full versions of Photoshop or whatever.

            It's not for everyone, but it might be an option for your wife to consider.

        • luck_fenovo 6 years ago

          I have one too. The drawing/writing is pretty great, though once it's transferred back to a PC, the resolution seems kind of poor. Not too bad, but I'd like it to be smoother.

          The only other major issue is software stability, especially when reading. I've had it hardlock while reading something, then had to wait for the battery to drain because it wouldn't respond to any input, and once I had it back up it had forgotten where I left off, so I lost my progress. So for the time being it sits, though once a software update or two come out I'll try again.

      • Tepix 6 years ago

        They got the latency down by only updating a small area of the screen. If the next version has a backlight, I'm getting one.

      • therealdrag0 6 years ago

        Looks great. But I've been itching for a normal e-ink monitor. I just want to be able to do text editing (SublimeText) and use a terminal on it. That's it.

        Seems like in this day and age, that's not too much to ask for.

    • theshrike79 6 years ago

      But they're still amazing for status displays, e-readers and other things that don't need low latencies.

      What's keeping them from being more popular is the price: anything over a 1" e-ink display costs an arm and a leg.

      • vortico 6 years ago

        I would argue that e-readers actually do need reasonably low latencies. The reason I still read books is because I can flip through pages quickly and glance over 20 pages a second as I look for the page with a certain pattern of text or an image.

        • theshrike79 6 years ago

          IMO eInk readers are only good for "linear fiction", books that are read from start to finish.

          Anything that needs to be browsed or flipped through, nope. Very few electronic versions are usable in this case, be they normal LCD screens or eInk.

        • tajen 6 years ago

          but you don’t need that for signage, calendar on your door, paper decorations on your bedroom walls, meeting room reservation sheets...

          • vortico 6 years ago

            Are those use cases significant enough to make e-ink "take the world by storm"?

    • IshKebab 6 years ago

      They're a lot faster than they used to be. Maybe 200ms? Still not as fast as video.

      And I don't know why everyone seems to think they aren't successful. Amazon sells a ton of Kindles. They're very successful.

  • gilbetron 6 years ago

    E-Ink is a technology that couldn't be developed well enough; it just never got there. I mean, I love my Kindle Paperwhite, but without color and rapid (60Hz+) refresh, it won't ever be quite good enough. It's a fascinating case of a technology path that fizzles out.

  • swsieber 6 years ago

    E-Ink never took off because once it reached a profitable niche, it stopped being developed. And it's patent-encumbered. Given that it first appeared around 2010, I expect E-Ink development will resume around 2030 or so, once the patents expire.

  • bitwize 6 years ago

    For me it was transflective LCD. Very low power compared to IPS panels, high contrast in direct light. Not quite as low power as e-ink, but it looked like e-ink and had a high refresh rate.

  • ddlatham 6 years ago

    I wish I could get a programmable e-ink display at a reasonable price, that I could mount on the wall or desk. To show quotes, photos, weather, calendar, anything that is nice to have passively updated in the environment, using low power, and not emitting its own light.

    • nicolerenee 6 years ago

      This is something I would love to have. The possibilities are endless: status screens, quotes, project progress, server health, etc. I really wish something like this existed that I could use to show some basic info, the kind of thing that would be nice to have on a screen at home, but not so important that dedicating a full monitor to it is even a remote possibility.

  • Yizahi 6 years ago

    I think it was one of those technologies that are actually hard. They got to where we are now with low-to-medium effort and expense, but are unwilling to go further because they can't, or don't have the money. I suspect color e-ink tanked because of this.

  • Double_a_92 6 years ago

    Did you ever play a Game Boy as a kid?

    • andai 6 years ago

      Transflective LCDs are another tech that didn't go anywhere. They let you use LCDs outdoors by reflecting natural light, which solves the problems of color and video. There were even models with solar cells embedded, so not only did you get longer battery life, you could also recharge in the sun.

      • SyneRyder 6 years ago

        I think that's what the Pebble Time watches used, a transflective Sharp Memory LCD (that Pebble calls "e-paper"). In direct sunlight, the screens are actually easier to read, and the colors are really vibrant. Indoors, it's a bit dim without using a backlight, though still better than a blank screen. It seems to be the key to the Pebble's 7 - 10 day battery life even with an always-on display.

  • herbst 6 years ago

    Still going strong and evolving with smart watches and e-readers, isn't it?

turc1656 6 years ago

Paypal used to have this feature that allowed you to install a browser add-on and generate a CC number on the fly that was good for either one-time use or recurring use (for subscription services). This feature served two primary purposes: 1) to let you pay using PayPal on sites that didn't support it, and 2) to help protect against fraud, which was becoming a massive problem at the time. If the number was stolen, it immediately stopped being valid, so a hacker/thief could not use the CC number to purchase/steal anything.

It was that second aspect that I thought would totally eliminate all credit card fraud and make people comfortable with online purchases on smaller sites. I have no idea why PayPal killed the program, but even before it did, not many people used it. I was the only person I knew that was even aware it existed.

EDIT - if anyone is curious, I looked it up. Two ex-PayPal employees explain here: https://www.quora.com/Why-did-PayPal-discontinue-their-one-t...
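
A minimal sketch of the mechanics, just to make the idea concrete (this is not PayPal's actual implementation; issue_virtual_number and authorize are invented names): generate a Luhn-valid number so it passes ordinary card validation, then refuse every authorization after the first.

    import random

    def luhn_check_digit(partial: str) -> str:
        # Compute the final digit so the full number passes the standard Luhn check.
        digits = [int(d) for d in partial][::-1]
        total = 0
        for i, d in enumerate(digits):
            if i % 2 == 0:        # these positions get doubled once the check digit is appended
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return str((10 - total % 10) % 10)

    def issue_virtual_number(prefix="5", length=16):
        # 'prefix' is purely illustrative, not a real issuer BIN.
        body = prefix + "".join(random.choice("0123456789") for _ in range(length - len(prefix) - 1))
        return body + luhn_check_digit(body)

    used = set()   # numbers that have already authorized once

    def authorize(number: str) -> bool:
        # Allow exactly one successful authorization per issued number.
        if number in used:
            return False          # a stolen or replayed number is worthless after first use
        used.add(number)
        return True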

  • philipodonnell 6 years ago

    I don't think it's too tin-foily to assume that there are non-technical reasons why the credit card networks don't want one-time-use credit card numbers, and that PayPal would care more about its relationships with those networks than about a product that didn't immediately take off.

    This is how the innovator's dilemma works. Big entrenched company, too scared to make changes that would jeopardize existing partnerships and businesses, upended by a nimbler competitor that doesn't have to care about those things. It'll happen!

    • ci5er 6 years ago

      The one-time PAN patents (and the merchant or tx-bound PAN) patents of the late '90s are largely expired at this point, so it is open art now. I see more and more companies starting to implement it more broadly (Citi, BofA, CapitalOne). It's nifty because you don't have the hefty "Verified by Visa" type integration (nor any of that SET stuff also from the 90s).

      The last time I logged into my PayPal pre-paid debit card portal, they still had this functionality (sans Browser plug-in), but I don't recall seeing it on PayPal proper for a while...

      The last time I talked to the MC folks (granted, it has been a long while), they actually thought it (OTP) was a nifty client-side (plus closed-loop) technique and was a nice (and orthogonal) add-on to the types of security that they are pushing vis-a-vis tokenization on the merchant side...

    • turc1656 6 years ago

      Yeah, but in PayPal's case, they were all MasterCard numbers. And keep in mind that since one of the primary uses was to be able to secretly pay with PayPal on a site that doesn't support it, the number would have to validate through the merchant's existing CC system. MasterCard was clearly on board with the process.

      • philipodonnell 6 years ago

        > MasterCard was clearly on board with the process.

        They might have been when it started, but clearly something changed. If it was desirable, the moment PayPal decided not to continue, MasterCard would have started looking for alternatives. Since they didn't (and haven't), it's pretty reasonable to assume they intentionally decided not to pursue it.

        The two other functioning alternatives listed in this thread, getfinal.com and privacy.com, are both Visa. Kinda says it all there.

        • malyk 6 years ago

          My MasterCard had the same feature somewhere along the way, and they advertised it strongly to me on the website. I think I might have used it once, but it was too much of a hassle to log in to my account, generate the number, go back to the website that I was purchasing from, etc.

          Hell, I just looked it up and they still have it: https://www.cardbenefits.citi.com/Products/Virtual-Account-N...

          • philipodonnell 6 years ago

            Ha, I didn't know that either and I have an eligible card!

            • abakker 6 years ago

              my BofA visa has this as well.

  • arfrank 6 years ago

    I’d suggest you look at what we built at Final: getfinal.com

    We took a hard, deep look at a massive, stagnant industry (credit cards) and used experience and features as differentiators.

    • sjs382 6 years ago

      Too bad it requires an invite code. Do you have one for us? :)

      I use privacy.com for something similar, but it's a debit card (so it just connects to your bank account) rather than a credit card and doesn't offer any rewards.

      • bwanab 6 years ago

        They don't even seem to have a way to apply for an invite.

    • rholdy 6 years ago

      Been using Final for a while. Love it and love you for building it.

    • maxscam 6 years ago

      I interviewed with you guys (didn't get hired, but that's OK) and this was the first thing that came to mind. Good luck!

    • darksim905 6 years ago

      The problem is you have to tie a debit account to this, right? Why can't I tie another credit card as a source of funding, or PayPal, or some other stream? The fewer organizations that have my debit information, the better.

  • ekanes 6 years ago

    This tech is available and extremely well-implemented via Final Card. I use it and have numbers stored for probably 50 sites/services/etc. Anything where I put a card in online.

    So next time my card gets compromised because I used it in a restaurant (seems to happen regularly) then none of those have to be reset. Just get a new "physical" card and go on my merry way. Many hours saved.

    https://getfinal.com/

    • muninn_ 6 years ago

      For Final am I still able to do things like, shop through Chase for Chase Ultimate Rewards points?

    • iamwil 6 years ago

      Aside: are you a pokemon fan? your username is one letter off.

  • astura 6 years ago

    It's not just PayPal. I think Discover and American Express both had that feature but killed it off. Bank of America still has it and (IIRC) Chase does too.

    • doctorsher 6 years ago

      Citibank still has this feature as well.

      • astura 6 years ago

        Doh! When I said Chase I actually meant Citibank. Chase doesn't to my knowledge.

        In my defense both start with C....

        • doctorsher 6 years ago

          Haha no defense necessary! Carry on :]

  • pattle 6 years ago

    On the Quora page they explained that the main reason behind dropping the product was the fact that users had to install a browser extension to get the CC number, which put people off.

    Surely they could have very easily sent the CC details to users by email or SMS (security risks aside), or, better still, let users obtain the details by logging into their PayPal account.

    Seems odd that the execs axed the product over this when there were so many solutions to the problem.

    • givinguflac 6 years ago

      I suspect it had more to do with pushback from card providers as it likely made tracking users more difficult. Same reason some retailers still don’t support Apple Pay.

      • ABCLAW 6 years ago

        This is not why certain retailers aren't supporting Apple Pay.

        Interchange fees and their associated bulk volume rate discounts are.

  • hallalex831 6 years ago

    Both Bank of America and Citi credit cards do it. They have it available right on their websites, and Citi even has an optional desktop application to quickly generate temporary use cards. I use it all the time for shady subscription services, and also to sign up for "New User" promos multiple times :)

    • bhandziuk 6 years ago

      I really liked this about my BofA card. I don't use the card any more because the rewards suck, but I used to make one-time numbers if I didn't have my wallet handy.

  • mustacheemperor 6 years ago

    I think Privacy.com (no personal relation) offers a similar service now, though you've got to trust them with access to your money at some point in the process.

    Edit: Since I started writing this comment others posted the same one - and pointed out it's not really a credit card.

  • dethos 6 years ago

    In Portugal we have a company that provides a service that is very similar to what you described (creation of virtual CCs which can only be used 1 time, or X times by one retailer, for subscriptions). The service is called MBway (previously MBnet) and it works very well.

  • jklein11 6 years ago

    I don't quite see the point in this. If you are paying with a credit card you aren't on the hook for any fraudulent charges anyway.

    • therealmarv 6 years ago

      I experienced this a week ago. Imagine you used your one and only debit card for a subscription service, and the billing system of that subscription service went mad. Guess what: to stop the subscription madness, you can only block your whole card. On top of that you have the stress of getting a new one from your bank, and you are several days without a debit card. Or imagine somebody makes a fraudulent charge: when it's large and the limit is not big (or it's connected to your account, as with a debit card), the money is gone at first. You effectively lose time during which you cannot use that account/card at all.

      • jklein11 6 years ago

        Right, but this problem seems solvable by having multiple credit cards. I would only make an online purchase with a credit card, never a debit card. If there is a charge that I didn't authorize, I can just click a dispute button on my credit card's web app and never have to think about it again.

        • phillias 6 years ago

          The critical feature is that you can specify a credit limit and expiry period for each virtual number. By keeping the credit limit a bit above the purchase amount, you don't have to worry about losing a bunch of money if the number is abused, and by keeping the expiry at 2 months (12 months max) you avoid any abuse channels that would take more than 2 months to transact on the dark market.

          edit: And of course you have the credit card company on your side to assist with fraud awareness and refunds. You won't get this with PayPal, Android/Apple Pay, and especially not with any blockchain technology, where there is no intermediary working on your behalf.

          I keep a Citi MasterCard just for the Virtual Account Number feature and use it every time for online and phone purchases. The BofA feature has some severe issues.
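
          A rough sketch of the authorization logic such a feature implies (field names and the optional merchant lock are my own illustration, not Citi's or BofA's implementation):

              from datetime import date, timedelta

              def authorize(amount, merchant, card):
                  # Per-number checks: expired, over the per-number limit, or wrong merchant.
                  if date.today() > card["expires"]:
                      return False                  # the number lapses before a dark-market resale pays off
                  if card["spent"] + amount > card["limit"]:
                      return False                  # the per-number limit caps the possible loss
                  if card["merchant"] and merchant != card["merchant"]:
                      return False                  # optionally lock the number to a single merchant
                  card["spent"] += amount
                  return True

              # A virtual number issued for a ~$40 purchase, expiring in about two months:
              vcc = {"limit": 50.00, "spent": 0.0, "merchant": "example-shop",
                     "expires": date.today() + timedelta(days=60)}

              authorize(39.99, "example-shop", vcc)    # True: within limit, before expiry
              authorize(500.00, "example-shop", vcc)   # False: blows the per-number limit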

  • TheCoreh 6 years ago

    Interesting, my bank does that, and I use it a lot.

  • dm319 6 years ago

    Interesting. That's what android pay does.

  • snug 6 years ago

    Isn't this what Apple pay basically does?

    • evan_ 6 years ago

      Sort of, you just have a single number but it’s different from your regular cc#

      • abakker 6 years ago

        Doesn't apple pay change the number every transaction?

  • emodendroket 6 years ago

    One of my credit cards offers this.

MichaelGG 6 years ago

F#. Back in 2006 I first stumbled upon it and was amazed. It had so much potential. Everything C# did, and more, better. I was sure we'd see 20% of MS devs moving to it.

I underestimated the momentum of MS, the power of embarrassment of hiring a high-profile figure to be shown up by a researcher, the incredible anti-FP and even anti-generics...resentment(?) that MS kept towards them. Plus the insane comments from actual developers that literally did not understand the basics of C# ("C is a subset of C#" and "var is dynamic typing" <- comments from a public high-profile MS hire).

I've basically given up hope on programming becoming better over time. A lot of apps are boring grunt work anyway, so the edge in using better tools can be beaten just by throwing a lot of sub-par people at it.

On the plus side, for people looking to strike it rich in "tech", knowing tech isn't really a prerequisite. Persistence and the 'hacker' spirit, even if it means you spend all night writing something in PHP that would literally be one line if you knew what you were doing: hey, that's what leads to big exits.

  • RyanZAG 6 years ago

    I feel it's more a case of functional programming trying to target the wrong segment - albeit out of necessity.

    There are two kinds of programs being made. The first and most common kind is what you're talking about here: grunt work. It's not about the code, but more about having the code do a straightforward task with very flexible constraints. A web app that talks to a database and does a few simple transformations, like forum software or a todo app.

    The second type is the interesting one: actual difficult code that does something unique and difficult and can take years to write by very experienced developers. Usually the difficult part here is coming up with the correct algorithm and then applying it, often with performance considerations being extremely important as the code is doing a lot of work. An impressive new MMORPG game, a new rendering technique, complex simulations, control code for rockets or advanced batteries.

    The big problem with functional programming is that it's never been positioned as a solution to the second type. Generally, people trying the second type are told to use C, C++ or, recently, Rust. Functional programming is marketed as making the first type "better", because trying to market it as being more efficient than C/C++ has not worked, as people rely on micro benchmarks for these decisions. But this falls apart with the argument you gave: for the first type, it's better to just hire more junior programmers, as there's nothing really difficult involved. And using a functional language makes it extremely difficult to hire junior programmers, because functional languages are aimed at advanced programmers.

    It's a massive product-market fit problem.

    • lastofus 6 years ago

      I think there is a third segment where FP fits well: large difficult problems that can afford a 10-20% performance hit.

      The examples you gave of difficult problems are mostly soft realtime, which not everything needs to be.

      Granted this is a relatively small subset of problems.

      • wbl 6 years ago

        It's all the problems where you get a nice clear market if you solve it.

  • whistlerbrk 6 years ago

    Don't give up. I see FP growing every day. I saw a Haskell job listing yesterday for $180k/yr. These skills are valuable and becoming more valued. I think the Xamarin purchase bodes well for F#, which I intend to learn. My understanding is that it's more popular in Europe, btw.

  • bunderbunder 6 years ago

    I'm still holding hope that F# will experience a renaissance once the whole .NET Core thing gets sorted out.

    (If the whole .NET Core thing gets sorted out.)

  • zurn 6 years ago

    FP continues to be on the upswing, also in the JVM world and frontend (compile-to-JS) languages.

  • bpyne 6 years ago

    F# was the only reason I wanted to dip into .NET. I really expected it to take the development world by storm because it was introduced when FP was getting a lot of attention.

bsaul 6 years ago

BeOS. That was the best OS at the time by far: stellar performance, a fantastic C++ API usable by a newbie. I still haven’t found a GUI as responsive as that one.

DVD-Audio. Great multi-channel, high-rate, high-resolution audio. But it required the whole production chain to upgrade, as well as new consumer equipment, and it competed with Sony's own format (which was also a failure)... It would have given the record industry a few more years of revenue before bandwidth became sufficient to download or stream music at this resolution.

  • IshKebab 6 years ago

    I'm not surprised DVD-Audio and co failed. Their quality is indistinguishable from CDs despite what audiophiles would have you believe, and by the time they came out MP3 was clearly the future.

    • baldfat 6 years ago

      > Their quality is indistinguishable from CDs despite what audiophiles would have you believe

      Well, I blame the system. You needed upgraded headphones or speakers. I had $2000 studio monitors when I was an audio engineer, and I can tell you that in a blind test I can 100% tell the difference.

      95% of the reason is that people don't care about quality audio. It's something about human brains. MP3s sound horrible compared to FLAC on good equipment. Instead, people wear Beats Bluetooth headphones listening to streamed audio.

      I also blame myself. I use a $15 in-ear Bluetooth earpiece (it looks like a hearing aid), the Inovate G10. I listen almost exclusively to podcasts or YouTube videos. The convenience matters so much more than the sound quality. I listen to music at home (when my kids aren't around, because all they do is complain). I need to get a good pair of headphones.

      • Jaruzel 6 years ago

        > MP3 sound horrible compared to a FLAC and good equipment. Instead people wear Beats Bluetooth Headphones listening to streamed Audio.

        A 1000 times this.

        High bit rate (FLAC/DVD-Audio/etc.) sounds way better than standard CD Audio, even on mid-range ($3k) kit. Of course you DO need to listen to proper music on it, that uses real instruments[2], instead of all that bump-n-grind poplet-R&B stuff that kids like these days[1].

        Modern 'pop' music is also mastered for MP3/Streaming nowadays anyway, so high bit rates of that stuff will never sound good no matter what you play it on.

        ---

        [1] Get off my lawn.

        [2] No I don't just mean Classical, but good Rock/Metal also.

      • abainbridge 6 years ago

        > Well I blame the system. You needed upgraded headphones or speakers. I have $2000 studio monitors when I was an audio engineer and well I can 100% tell you in a blind test I can tell the difference.

        Do you have an explanation of how that is possible? On the face of it, CD seems to have more SNR and frequency response than necessary. I've got intelligent friends on both sides of the debate, but I've never heard a plausible explanation from the "I promise I can tell the difference" friends. I tend to believe Monty Montgomery (of Ogg Vorbis fame). After watching https://www.youtube.com/watch?v=2qFjdQP7Ep0 there's little obvious room for doing better than CD. The best explanation I've heard is that CD mastering is generally aimed at mainstream equipment, and therefore isn't optimal for good equipment.

        • baldfat 6 years ago

          I can explain. With stereo, most people would be hard pressed to tell. The issue is where and how everything is done, from recording all the way to mastering.

          Technically, here is how it works. CDs are at a 44.1kHz sample rate, and back in the 2000s you recorded at 96kHz, similar to how video is shot in 8K for 4K or 1080p delivery. Then when you exported your mix to be mastered, it would get knocked down to 44.1kHz; not very noticeable in the least unless you listened to this stuff all day, every day, and had trained your ear to hear everything.

          If you compare 96kHz and 44.1kHz there is a distinct difference, BUT it is not something most people would care about, because their equipment resolves less than what 44.1kHz gives you anyway. My system, from recording all the way to the audio output and DAC, was 96kHz throughout and cost me thousands and thousands of dollars.

          I do believe that you can tell when you hear the two. Kind of like how FLAC and MP3 sound different on my LG V20 phone.
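
          (For what it's worth, the Nyquist-Shannon sampling theorem puts numbers on the two rates; a quick worked comparison, assuming ideal converters:)

              f_{\max} = \frac{f_s}{2} \;\Rightarrow\; \frac{44.1\ \mathrm{kHz}}{2} = 22.05\ \mathrm{kHz}, \qquad \frac{96\ \mathrm{kHz}}{2} = 48\ \mathrm{kHz}

          Since human hearing tops out around 20 kHz, the usual argument for 96 kHz is headroom during recording and processing rather than anything directly audible on playback.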

          • dhimes 6 years ago

            But the question remains. Given Nyquist theorem and an upper limit to human ear response, what is the advantage of going over ~40kHZ? I understand you can encode more information, but decoding to higher frequencies than we can hear doesn't seem (to the novice) to be beneficial. Do we somehow 'detect' harmonics above what we can hear? Or is the extra information used to process something else differently, like the relative volumes of two slightly different instruments making a note at the same pitch? It's an interesting question IMO.

            EDIT: upper ear response is presumably limited to ~20 kHz

            • Loginid 6 years ago

              You are right about upper ear response.

              I can tell the difference with my now-old ears, if it is a good recording of real-world instruments in a common space.

              That is a lot of qualifiers, but on playback you can ‘feel’ that other space.

              I believe that it is because of the way that the higher frequency components of the sound interact with the environment and effectively down-transpose and affect the rest of the signal before they hit the listener’s ear.

      • throwanem 6 years ago

        I use a Rowkin Mini for the same sort of stuff, most often while commuting. For spoken content it renders intelligibly, and it is tiny and comfortable to use. It also lasts a few hours on a charge and lives on my keyring. Devices like this exist in a local optimum of convenience, as long as they and their audio sources both do Bluetooth right. (I tried a bunch of the cheap ones off Amazon before settling on the Rowkin. It really is a lot better!)

        For at-rest listening, God made Sennheiser.

        • j_s 6 years ago

          Woah, 60%+ off ($40) right now? https://amzn.com/dp/B01IU5ZTKC (referral: http://amzn.to/2irAi7v)

          • throwanem 6 years ago

            Yup. They have a new line out, so are probably clearing old stock with the discount. That link is the exact model I have, though, and it's been awesome; I might pick up a couple of spares just to have them against need.

    • blattimwind 6 years ago

      I own a DVD-Audio copy of Linkin Park's Reanimation, and played on a proper 5.1 system it really is something, completely beyond any simple stereo CD.

      • brokenmachine 6 years ago

        I have that as well. It's great.

        Also check out Crystal Method's Legion of Boom in DTS-ES.

        I wish there were more releases in 5.1 or better, and specifically EDM releases.

    • bsaul 6 years ago

      Even if, like me, you don’t believe in very-high-quality audio, the fact is we’re listening to stereo music but viewing movies on 5.1 systems. This would have been different with DVD-A. I would have loved to listen to a live concert CD with surround sound.

      • jasode 6 years ago

        >5.1 systems. This would have been different with dvd-a. I would have loved to listen to a live concert cd with surround sound.

        As to your specific DVD-Audio example, I think it didn't widely catch on because most music is consumed (1) with earphones (2) in the car (3) as incidental background while multitasking. The scenario of "purposeful" music listening by sitting in the center sweet spot of a 5.1 setup is very rare.

        There's also a long history of technology that's "superior" but isn't accepted by the masses. There's a "law of diminishing returns" and/or other dimensions of utility (such as convenience) have higher priority than quality. Examples:

        - quadrophonic sound[1] is more immersive than 2-channel stereo but people didn't buy it

        - satellite radio is higher quality sound than FM radio but Sirius XM remains a niche and won't penetrate the market like FM did. Yes, FM had higher fidelity than AM and listeners switched but the same didn't happen for satellite.

        - 24bit-96kHz or 192kHz audio did not take over 16bit-44kHz (1411kbps). Instead, the opposite happened and more listeners consumed lower quality 128kbps mp3 and AAC files. Lower quality was driven by Napster + iPod + Spotify. Convenience trumped highest fidelity.

        - Bluray discs : more customers pay for digital streaming (often with lower resolution than 1080p) than upgrading the old DVDs to Bluray. There was more market penetration with VHS to DVD.

        - large 3D TV screens : the explosive growth is in smaller screens on mobile phones playing Youtube rather than 3D content like Avatar. 3D is now seen as a gimmicky fad.

        History has shown that we always overestimate how technical "quality" drives consumer adoption.

        [1] https://en.wikipedia.org/wiki/Quadraphonic_sound

        • tacoman 6 years ago

          >- satellite radio is higher quality sound than FM radio

          I'm not an audiophile, but when I listen to SiriusXM I hear compression artifacts. I'd much rather listen to FM.

        • dbatten 6 years ago

          I agree with almost all of your examples... except for 3d. While almost everybody I know would say they prefer seeing a movie on a big screen (even though the phone/tablet is sometimes more convenient), I don't know _anybody_ that prefers 3d movies. They're utter garbage.

    • icebraining 6 years ago

      +1. Even if they could be distinguished, it would be only in top HiFi setups. Most people (me included) listened with shitty car stereos, portable players with terrible DACs and cheap headphones.

      • bsaul 6 years ago

        Yeap. Portable and free (because stolen) trumped quality. That's something the music industry still hasn't recovered from.

        • baldfat 6 years ago

          As someone who used to have a SMALL recording studio: we are in the best music-business times ever for small indie bands. More people can support themselves through Kickstarter-like programs. Two of my bands traveled the world and were on MTV (back when they played videos). The most one band made was $7,000 per person, and they traveled to Europe and Australia/NZ. I never made a dime. They posted on Kickstarter 6 years after they stopped touring and got $50,000 in one week, and $65,000 by the time it was done. Their album was awesome because they all just took breaks from their jobs and got it done.

          The BIG music business sucks, and we're watching the slow death of the pop music industry. http://brandsplusmusic.blogspot.com/2011/02/death-of-pop-mus...

          • bsaul 6 years ago

            I'm really glad to read that the situation is starting to get better. After the rain comes the sun, but that was a hell of a decade to wait...

            • baldfat 6 years ago

              Well, I will say that this is the first time since the traveling poets of ancient times that people can make money on their own with music. I think the issue is that people think an artist should make millions. The only people losing music jobs are label-employed musicians. With the middleman cut out, artists can make a good living. One of my friends just plays acoustic guitar, travels the world for competitions, and makes a decent living. He isn't what you would call rich, but he is paying the bills and has a family. https://www.youtube.com/watch?v=Lo9kGHYn_bI

              "...things are actually much better for independent musicians than in the past, just as we would expect. In fact, there's been an astounding 510% increase in independent musicians making their full time living from music in just the past decade."

              https://www.techdirt.com/articles/20130529/15560423243/massi...

  • bhnmmhmd 6 years ago

    Similarly, Windows 10 Mobile didn't take off either. Before that, even Windows Phone became a failure.

    I was really excited when I got my Lumia 920 [0]: The OS was just so solid, sleek, and fast. Too bad Microsoft didn't put adequate effort into it. It's as if they couldn't care less about this OS.

    [0]: Not to mention the great, solid design of the hardware by Nokia.

  • nl 6 years ago

    In an alternate universe, Apple bought Be Inc. instead of NeXT.

    It nearly happened too - they got down to talking price[1]

    [1] https://en.wikipedia.org/wiki/BeOS#History

    • seanmcdirmid 6 years ago

      Would Apple have been resurgent without Steve Jobs? That is a huge what if.

    • throwanem 6 years ago

      In that alternate universe, what happened next?

      • BjoernKW 6 years ago

        This would've been quite a different world in terms of design and technology.

        Eventually some company, Google comes to mind in particular, would have invented a device similar to, but not as well-designed as the iPhone. The development might just not have been that momentous and perhaps technology would have developed in an entirely different direction because of that.

        Perhaps - due to a lack of an App Store business model - we even would have had a continuous boom of the web and more open technologies on mobile devices rather than the lull web technologies experienced on mobile devices until fairly recently. Who knows?

        BeOS was very much a desktop OS. Like other operating systems such as AmigaOS it performed very well on some stripped-down desktop platforms such as set-top boxes, too (those were all the rage back in the 90s, at least some companies bet the whole farm on that assumption ...). I'm not sure if BeOS would've lent itself to mobile devices as well as NeXTSTEP did.

        NeXTSTEP was a forward-looking, POSIX-compatible OS that was ahead of its time.

        Most importantly though, Be Inc. didn't have Steve Jobs. They had Jean-Louis Gassée but I somehow doubt he would've had a similar influence on design, technology and the business world.

        Someone once quipped that NeXT bought Apple for -$400 million. There's something very true about that statement.

      • bunderbunder 6 years ago

        I think that Apple would not have had quite the same resurgence. POSIX compatibility is one of the unsung heroes of the post-OS X era, and BeOS wasn't quite compatible enough.

  • mycat 6 years ago

    BeOS had extreme task scheduling to keep the UI smooth; for example, if you ran too many programs at once, the BeOS network connection would stutter noticeably even while the UI still seemed slick.

  • donretag 6 years ago

    I could not get BeOS installed on my only computer at the time.

kobeya 6 years ago

The general-purpose computer. It's a dinosaur on the verge of extinction at the hands of walled-garden app stores on phones and tablets. I never, ever expected that to happen.

  • fsloth 6 years ago

    Uh oh, I think I'm about to rant. Scuse me.

    Given how cheap even powerful general-purpose computers are, I don't think they will ever stop being available. Aficionados, internet stores, ARM, Linux, market forces, etc.

    To some extent, good riddance. Software engineers and product designers are the reason the PC is effectively broken by design in UX terms: they are inexcusably slow. There is zero reason anything I want to do with e.g. Office should have any wait time, other than non-user-centric design. My friggin' circa-1984 Mac Plus felt faster on most stuff than a generic desktop PC.

    The decade when Moore's law gave freebies to software is probably one of the reasons for this astronomical sluggishness. Application feels slow? Well, just wait a year and get a new, faster CPU. No need to spend resources on optimization.

    I don't know who started the idea that it's smarter to buy more hardware than to spend resources on programming, but if I had to guess, it was the mainframe vendors. Cool for embarrassingly parallelizable batch processing jobs for massive bureaucracies, not so much for desktops.

    And don't get me started on the idiocy of thinking it's fine software developers don't need to develop domain understanding in the field they operate.

    With the general purpose computer desktop development, we have a system that's broken on so, so many levels.

    At least OS X seems to try to do the right thing in an attempt at fluency, and I can get a Linux desktop to operate smoothly. Even Windows 10 starts fast, but oh brother and sister, the clunkiness of the software.

    Despite what I said, I think the situation is improving. Immediacy in handheld devices puts pressure on the desktop to concentrate on not wasting the users time as well.

    Let's see how it goes.

    • baldfat 6 years ago

      > My friggin circa 1984 Mac Plus felt faster on most stuff than the generic desktop PC

      See how fast it felt.

      https://youtu.be/XwbrCYJcrKQ?t=4m34s

      No way and NO HOW! That statement is 100% false. The issue is that you remember it as feeling fast. My Amiga felt like lightning to me in 1985, and I still boot it up from time to time. The delays in everything are over the top, and the Mac 128k was horribly slow.

      • fsloth 6 years ago

        I meant the Macintosh Plus, not the Macintosh 128k. I was off by a few years; the Mac Plus came out in 1986.

        https://en.wikipedia.org/wiki/Macintosh_128K

        https://en.wikipedia.org/wiki/Macintosh_Plus

        I think the example is a bit too sluggish but is probably a more correct presentation of the actual facts than my nostalgic outburst.

        • baldfat 6 years ago

          The delay of just drawing a window is so bad. Macintosh Plus review: https://youtu.be/_bI0moHdjPQ?t=22m29s

          I was an Amiga kid. I had friends and co-workers who had the Macintosh Plus, and they always seemed so S-L-O-W compared to an Amiga.

          Amiga 1000 from 1986 https://youtu.be/CDWdVk-hmgA?t=51s

          The window draws were also slow, BUT ray tracing and full animation compared to that lame Macintosh is just a whole different generation. The Amiga lost, and it took until 1994 or 1995 for anything to beat the 1986 Amiga.

          • Veedrac 6 years ago

            That did not look any less responsive than doing the same on LibreOffice on my high-end gaming laptop, with an SSD, and in many cases it looked more so.

            Of course, the newer one has newer graphics, but it also has enough transistors to replace each one in the 68k with a whole new 68k.

          • fsloth 6 years ago

            Oh gods, I was a kid in the 80s and I was so envious of all my friends who had Amigas.

      • bitwize 6 years ago

        You're confusing throughput with latency. Those old systems were objectively more responsive because the keyboard and mouse were tied directly to high-priority IRQs which were serviced in the OS. Now we have USB keyboards and mice, which go on the same IRQ lines as every other USB device, and even if the OS could service them specially, it doesn't. So keyboard and mouse events go on a queue of things to do: disk accesses, network packets, etc. And the whole system feels slower because of the time lag between you sending a command to the machine via kb or mouse, and the machine actually receiving it.

        • barbs 6 years ago

          Oh come on, there's no way you can tell the difference in latency between a mouse of that era and a USB mouse today. Surely there's enough demand in the e-sports scene to lower input latency as much as possible?

          • teddyh 6 years ago

            It’s not that mouse pointers today are normally laggy; they aren’t, as you correctly say. But when a modern system is very busy, it’s normal for the keyboard and mouse input to lag too, but that wasn’t normally the case with older systems, since human input peripherals were typically given their own special higher priority interrupts or whatever.

            This was of course done because the systems back then were so slow that they would be in danger of regularly dropping key presses or clicks otherwise, which would have turned people off these newfangled electronic typewriters. But the result was that the interactive experience with respect to latency was given great importance in the design of those systems, unlike today, when it is taken for granted that people will accept whatever laggy system they are given, and will just be directed towards a CPU upgrade if they complain about it.

            • baldfat 6 years ago

              Anything above a netbook or a sub-$500 laptop doesn't lag, and today's computers and phones are spoiling us. I have an 8-year-old netbook on which I run a minimal Linux install with a tiled window manager, and it is quick. I also have an i5 Sandy Bridge desktop where I likewise work with a tiled window manager because I prefer it, but my wife has Windows 10 on the same computer and it is spot on, and that machine is 6 years old. My video workstation is blazing fast and never lags on anything except chugging through 4K/8K video renders.

          • retrogradeorbit 6 years ago

            Have you actually used a computer from that era for any significant amount of time?

            • bitwize 6 years ago

              Yes, an Amiga. And it ALWAYS responded instantly to mouse input at least.

              A Windows 3.1 PC from that era, not so much :)

    • hvidgaard 6 years ago

      > My friggin circa 1984 Mac Plus felt faster on most stuff than the generic desktop PC

      There is some merit to your argument, but your 1984 Mac didn't have half as "pretty" a UI, and that matters to some people. It didn't do dynamic language processing to figure out if you'd made a grammatical error. It did not have to continuously scan every data stream for malware. It did not have to use a significantly restricted environment to limit damage when some malware did get through. It didn't even support multitasking. My point is that computers today do so much more; a responsive UI is entirely possible for desktop applications, but the development cost is more than most companies/people are willing to pay.

      • fsloth 6 years ago

        You are repeating the same excuses the industry has peddled to us for decades.

        "It didn't do dynamic language processing "

        Which could very well do its stuff in a different thread and execution context, given that the actual payload such a system needs to transfer from a word processor is trivially small. Then notify the user. Don't rob the system of its low latency just because you want to run the analysis in the same thread because it's "simpler". It's simpler only if low latency is not a fundamental requirement.

        "It did not have to continuously scan every data stream for malware"

        Which can be considered one more defect of the system, one I forgot to lambast. Sandbox my word processor, let me operate in realtime in the unsafe environment, and once I'm moving something out of the sandbox, do the analysis then.

        "your 1984 Mac didn't have half as "pretty" UI,"

        This argument is moot. Pretty UIs are computationally a tenth or a hundredth of the complexity of an AAA title, unless one does it wrong. And if the system does not provide enough horsepower, provide a fallback! Reduce the computational load! Cut the transparencies, freeze the animations, etc.
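
        A minimal sketch of the pattern being argued for, assuming a hypothetical (and potentially slow) check_grammar function and an underline UI callback: the analysis runs on a worker thread and the editor only polls a queue, so keystrokes never wait on it.

            import queue
            import threading

            results = queue.Queue()                    # the UI thread polls this; it never blocks on analysis

            def analyze_async(text_snapshot):
                # Run the slow check off the UI thread and post the result back.
                def worker():
                    issues = check_grammar(text_snapshot)   # hypothetical, possibly slow
                    results.put(issues)
                threading.Thread(target=worker, daemon=True).start()

            # In the UI event loop, between keystrokes:
            #     try:
            #         underline(results.get_nowait())       # hypothetical UI update
            #     except queue.Empty:
            #         pass                                  # nothing ready yet; stay responsive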

        • hvidgaard 6 years ago

          > Which could very well do it's stuff in a different thread and execution context given the actual payload such a system needs to transfer from a wordprocessor is trivially small.

          It's a blocking operation, so doing it in a different thread will not magically mean the requesting application can get it faster. Besides, this is already done in a different thread, but it increases the load of updating the UI.

          > Which can be considered a one more defect of the system which I forgot to lambast. Sandbox my wordprocessor, let me operate in realtime in unsafe environment and once I'm moving something out of the sandbox, then do the analysis.

          Which is already what is happening, but all data in and out needs to be monitored. Regardless, it cannot operate in an unsafe, unrestricted environment, because that implies full access to memory.

          > This argument is moot. Pretty UI:s are computationally a tenth or hundredth of the complexity of an AAA title. Unless one does it wrong. And if the system does not provide enough horsepower, provide a backup! Reduce the computational load! Cut the transparencies, freeze the animations, etc.

          As I started out saying, there is some merit to your argument, but I am simply pointing out that modern UIs require multiple orders of magnitude more computational power to render. And providing a fallback is yet another development cost, and for the most part not something people/companies will pay for.

      • secura 6 years ago

        So why have a full pretty UI at the cost of latency?

        If the latency begins to cross a threshold and becomes inconvenient or distracting, it is time to step back and consider that maybe the pretty UI is too far ahead of its time, and we should stop making the UI prettier than it can afford to be right now.

        • platinumrad 6 years ago

          I think you greatly underestimate the number of non-technical people who care more about gee-whiz coolness than responsiveness. Back in the Aqua era of OS X, with the super-sluggish genie effects and so on, the general public loved it, and it was mostly "power users" and professionals who went into the settings to tone down the animations.

          • dkersten 6 years ago

            I dunno. Maybe initially, sure, but I've seen countless people use, for example, Internet Explorer where half the screen was taken up by malware toolbars (I wish I was exaggerating...), which is not pretty, not good UX, not gee-whiz coolness at all. And they never once complained; they just accepted it as the way it is and kept on going.

            • TheOtherHobbes 6 years ago

              And there’s the reason. Users are too forgiving and too passive, because the industry has trained everyone to have incredibly low expectations.

              Developer and corporate status games select for political skill and self-indulgent overcomplexity, with added spice from dark patterns. There’s no reward for UI/UX/internal simplicity and elegance.

              The culture at some companies would have been hugely improved with a few literal angry-users-with-pitchforks moments.

              • hvidgaard 6 years ago

                > There’s no reward for UI/UX/internal simplicity and elegance.

                Then why do it? I like a good UX just as much as you, and I value simplicity, but people just want "good enough". And that is all there is to it.

          • AnIdiotOnTheNet 6 years ago

            > I think you greatly underestimate the number of non-technical people who care more about gee-whiz coolness than responsiveness.

            I think you've built a strawman. People who use computers to get shit done might pay lip service to things looking good, but at the end of the day they want a tool that does the job without a lot of bullshit. To the extent that the kind of person you're talking about exists, they are consumers who live in tablet land and should be ignored for desktop/workstation use cases.

            • platinumrad 6 years ago

              I completely agree that they belong in tablet land, but tablets did not exist in the mid-00s. All of my older relatives who now use iPads exclusively used OS X back then, for both ease of use and aesthetic reasons.

        • kowdermeister 6 years ago

          Because some people (including me) love it. I don't care about a few extra ms for having an interface that's satisfying to watch and use for 12 hours per day.

          I have to add that most of my friends' computers that I see are extremely snappy; I don't really see what you are talking about.

    • arkh 6 years ago

      > The decade when Moores law gave freebies to software is probably one of the reasons for this astronomical sluggishness. Application feels slow? Well, just wait a year and get a new faster CPU. No need to spend resources on optimization.

      I'd like to see a Word 1.0 coded with current tech stacks. Same functionality. And see how fast it feels.

      I think this feeling is a combination of a lot of things taken for granted and nostalgia.

      • fsloth 6 years ago

        "with current tech stacks"

        Computers will behave sluggishly unless care and dedication are put into making sure they don't.

        I'd start from a 3D engine, a font layout system, and the huge collection of simple and efficient open source utility libraries that have sprung up. Or slap it on top of Qt, if that system can be made to behave non-sluggishly.

        One point is that you need to be able to access the low levels of the system: to understand its limitations for your higher-level architecture, and, on the other hand, to figure out what can be made blazing fast under which circumstances and use those as your bearings as you develop the system. With usability and an understanding of the end user in the front seat, the whole time.

        The point is, although the system does a great many things, there is no reason from the usability point of view that those things be done serially, except legacy and poor design. A system can be made to feel immediate regardless of what it's doing under the skin. Delay those actions; design the system with the user and her time as the first-class citizen, rather than some other constraint.

        Take a modern complex 3D AAA game. One of the root constraints is that it should never lag. Well, they do, obviously, but that is considered a defect more often than not.

        And... Word... would need some deeper fixing of its basic premises. As I recall, it was pretty easy to break the Word 1.0 layout system in any number of ways.

      • Cthulhu_ 6 years ago

        I think a Word 1.0 with current tech would be awesome. Unfortunately, few parties seem interested in delivering fast applications anymore, instead focusing on cross-platform availability - you can't release a desktop app anymore without a mobile counterpart.

        In theory things have gotten a lot faster, doubly so if you consider UI and logic can be distributed over multiple cores, and even moreso considering file access is a factor 10-100 faster than back then, and memory size has increased by over 9000.

        But it'd be poor on features and nobody would want to use it because of Feature X. I had been looking for a lightweight Slack client alternative, but the only thing I could find looked like shit.

    • 3chelon 6 years ago

      >I don't know who started with the idea that it's smarter to buy more hardware than to spend resources on programming

      It was the CFO's idea. Software is expensive, and hardware is _incredibly cheap_.

      Moreover, if you're in the business of selling hardware, be it PCs, mobile phones, white goods, whatever, it is in your power to throw cheap hardware at the problem, because software development will always be one of your highest costs. You will never be able to just drop readymade code into your new hardware design. And you can more easily recoup the cost of hardware development because you can mark up the price of anything with a faster CPU or more memory accordingly; you can't so easily charge for software features.

  • vortico 6 years ago

    Eventually at some point, you have to get some work done, and you just can't do that with a tablet screen or a touch interface. The mouse and keyboard will reign, and even if the concept of a non-sandboxed system goes away, the future will still have hardware that looks like desktop computers and laptops. The OS filesystem won't go away because applications on these personal computers need to be able to talk to each other somehow, and the current iOS method of "launch a file in this application from this other application" is too limiting. The only thing that will change is the application packaging, which will become optionally centralized and have permission levels just like Android apps. I think overall, future computers will be just as pleasant to use, just as programmable/configurable, and just as easy to download software onto from decentralized sources.

    Just think, Chrome OS has essentially failed to take over the OS market, so I doubt anything using this overly-sandboxed model will be successful in the future.

    • taneq 6 years ago

      > Just think, Chrome OS has essentially failed to take over the OS market, so I doubt anything using this overly-sandboxed model will be successful in the future.

      But that's the thing, there's no "os market" for appliances. ChromeOS won't 'take over' computing devices any more than toasters will take over your kitchen.

  • closeparen 6 years ago

    Did people really think the arms race of heuristic malware detection in bolt-on security software would be sustainable?

    Sandboxing applications and ending the free-for-all over the user’s entire file system were a long time coming. Then spammy software download websites and “Download Now” banner ads bigger than the official download buttons for popular software made it obvious that we needed a trustworthy distribution center. Of course someone starting an OS in the mid 2000s wasn’t going to adopt the Windows shareware distribution model.

    The Linux distribution model of a vetted, quality controlled repository of software from a trusted middleman was so obviously superior, of course it won. I guess it’s surprising that iOS got so far with no escape hatch, but Android is the far more popular platform and has always let more technical users peek over the walls if they so desire.

    I’m more surprised that so many people are content to type on glass. I was sure that the blackberry keyboard was the future.

    • abecedarius 6 years ago

      False dichotomy. A user-in-control sandboxing model is perfectly possible, e.g. https://sandstorm.io/

      I agree that mainstream desktop OSes have failed to give us this yet. Fuchsia sounds like an opportunity to do better.

      • AnIdiotOnTheNet 6 years ago

        Agreeing with you, mini rant time:

        Why in the fuck do desktop computers have user-based permission models? Don't answer that, I know why, it's because they inherited them from the server-OS ancestors, but it's retarded. Desktops are personal computers, there is no reason to stop the user from doing what they want because it is their computer. Some might say, "oh, well, we can protect the system from malicious software by limiting user access to it" and they're correct, but who gives a shit? If my OS gets destroyed I'll reinstall it. It's a bit of a pain, but whatever. What's really important to me are my documents, my work, and the user-permission model does jack and shit for protecting those as the ransomware wave proved.

        Application-oriented permissions are the obvious and correct way to do things in personal computing.

        • christophilus 6 years ago

          There are desktop computers which are in shared environments (computer labs, public libraries, kiosks, teller lines, etc). So user-based permission models make a lot of sense for desktops.

          • AnIdiotOnTheNet 6 years ago

            Not really. There are better solutions than user permissions for those situations, we used them all the time in the 90s when multi-user didn't even exist in the Windows or Mac Desktop world.

            Network resources, hosted on servers, need user permissions. Not desktops.

          • jenscow 6 years ago

            In those environments the same local user account is usually used. Or, a network account and in that case files are usually stored on a server.

      • Mayzie 6 years ago

        Thanks for sharing that link with me.

        Never heard of Sandstorm before. That's a really impressive product. I'll definitely be using it in the future.

    • throwanem 6 years ago

      I'm not super a fan of typing on glass, especially when next-keystroke prediction is as poor as iOS has been lately managing. I am super a fan of having a device with the size of display that eliminating a physical input device achieves.

      My old Palm TX did the same thing, showing the graffiti area when a text input had focus and otherwise giving applications the use of all but a status bar's worth of glorious 320x480 screen space. Granted that Palm OS did less to help applications really leverage the extra space than iOS does. But most of what I did on that device was reading anyway.

      Of course, Graffiti via stylus suffers less from a soft UI than keyboard input via touch, so it's not quite apples to apples. Still, though, as annoying as a soft keyboard so frequently can be, I wouldn't give up half my phone's display for a hard one.

    • michaelt 6 years ago

      > Did people really think the arms race of heuristic malware detection in bolt-on security software would be sustainable?

      No, people thought user education and decreasing software bug counts would be sustainable.

  • Silhouette 6 years ago

    That's profoundly disappointing, I agree, but in a sense perhaps we geeks only have ourselves to blame. We didn't solve the basic (to any normal user) problems of ease of use and security, despite having decades to do so. We also didn't start routinely educating the next generation of kids in how useful and powerful programming skills are. Consequently, for many people personal computing has been reduced to little more than a mechanism for consuming online content and for relatively simple communications using a few trusted channels.

    The one comfort is that someone still needs to be creating all that content and all those communications channels, and those people are always going to benefit from something much more capable than a small, lock-down touchscreen+WiFi device.

    • AnIdiotOnTheNet 6 years ago

      >We didn't solve the basic (to any normal user) problems of ease of use and security

      This user is a strawman. The problem we didn't solve was decoupling ourselves from corporations controlling our user experience, because we didn't create a good enough alternative for getting stuff done.

    • kobeya 6 years ago

      > The one comfort is that someone still needs to be creating all that content and all those communications channels, and those people are always going to benefit from something much more capable than a small, lock-down touchscreen+WiFi device.

      Maybe. Let's see what the next generation of visual programming and IDE UX bring.

  • pimmen 6 years ago

    This really destroyed my day. I'm going to have a real problem staying positive for at least a day, now that you've eloquently expressed something I've had a creeping feeling about but just couldn't quite put into words in any discussion.

    As computers become "locked app collections" they become appliances, appliances business and government institutions can control and regulate. "Can you make the iPad just not do that thing?"

    • swyx 6 years ago

      ok honestly i dont understand this fear. isn't the open web the antithesis of the locked appstore?

      • jochung 6 years ago

        The only way for the "open" web to evolve is for Google, Apple and Microsoft to agree on a feature, spec it out and roll it out to their different platforms. All anyone else can do is twiddle their thumbs hoping they don't fuck it up.

        We sandboxed away innovation, so now all we have is web apps that process text, video and audio all in the same way.

      • kelnos 6 years ago

        Yes, but:

        1) Some would say that the open web isn't so open anymore, and is under constant assault. One example might be the recent W3C adoption and endorsement of EME.

        2) The fact that the open web "lost" in the battle for mobile dominance is a thing. Most "serious" apps have a native version on at least iOS and Android, and the webapp version (if there even is one) is usually a worse experience.

        • jo909 6 years ago

          I would like to think the open web has not lost yet, and that progressive web apps still have a great chance of taking over. Nobody wants to develop apps for at least three platforms. Nobody wants half their user base to use old versions. Nobody wants to fear being dropped from some walled garden without any recourse.

          There are still good technical reasons for some native apps, but that will dwindle more and more.

      • pjc50 6 years ago

        Not quite: the data is then locked away with the owner of the website, who can also surveil its usage and show you ads. Or ban you.

  • swyx 6 years ago

    as a web developer i am bemused at every other comment on here that seems to agree with your sentiment that the world is walling up. yes, app stores will be a control mechanism for ever and consumers may not understand what they are opting into. but equally the open web will never go away and that's what general purpose computation has shifted to, thats basically all we've done this century. am i misunderstanding what you mean by general purpose computer?

    • chubot 6 years ago

      Can a program written for the web access all the great sensors on your Android or iPhone? Can it access new movies and TV shows?

      The web is a big success story, but it's not all there is in computing.

      • kobeya 6 years ago

        I hate the browser as a platform, but the answers to your questions are yes, and yes.

        • chubot 6 years ago

          The web can't access all the sensors on your phone -- at least not in their full glory. It might have trivial APIs like "take a picture", but it can't do anything specific to a piece of hardware, by definition. It can't control the camera's imaging pipeline (lighting, focus, preprocessing, etc.).

          As a shortcut to understanding this, consider what hardware support something like ARKit [1] needs. Or consider whether you could write a multitrack audio recorder as a web app (no, because the web audio APIs aren't multitrack).

          My point is that you need "general purpose computing" that is not the web for many things. The web is indeed open, but we also need our general purpose computers to be open. Android is better than nothing, but it's arguably not as open as a PC running Linux.

          https://developer.apple.com/documentation/arkit/about_augmen...

          The answer to the second question is also no.

      • rimliu 6 years ago

        Amusing how people fear monoculture on the one hand, but long to have everything as a web app on the other. For what it's worth, it is good that web apps cannot access everything.

      • zerostar07 6 years ago

        I think it can, but if your website needs to use these things, then it essentially locks itself into the "app garden" we're talking about escaping from.

  • zerostar07 6 years ago

    It all started with laptops. I wish we could do something in the direction of personal, component-based workstations. Gaming does that.

  • amelius 6 years ago

    Yes, the only direction seems to be that of closed systems. Smartphones will probably never be as open as desktop systems once were.

  • gilbetron 6 years ago

    Huh? I'm not following. Do you just mean desktops?

  • justhackedme 6 years ago

    This is wrong.

    If you're 10 or 65 this is correct, but for anyone else the PC has not only been increasing in use but other parts of your life are filled with other computers.

mncolinlee 6 years ago

Near-field communication (NFC).

It came out in 2010 for Android and Microsoft phones, but has been hamstrung by Apple for most of its development. I had several startup ideas which required NFC market penetration on phones, but which even today are impractical due to Apple's chokehold on NFC APIs for fear of a payments competitor.

In order to even find "tap to pair" devices, you often need to seek out the one manufacturer or option available. There should be personal phone-to-phone "tap to pay" by e-cash or cryptocurrency. "Tap to auth" using your phone. "Tap to key exchange" should be part of doing business, sending "business card" contact details as well. Samsung had S-Beam, "tap to share files" securely with Wi-Fi Direct.

But even with the addition of Core NFC, you need to find a responsible adult with an Android phone to even write a single bit as an iPhone owner.
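
For contrast, the Android side has been roughly this simple since 2010. Here is a hedged sketch from memory (not production code; the manifest/intent-filter or foreground-dispatch setup that actually delivers the tag is omitted, and the class name is made up):

    import android.app.Activity;
    import android.content.Intent;
    import android.nfc.NdefMessage;
    import android.nfc.NfcAdapter;
    import android.os.Parcelable;
    import java.nio.charset.StandardCharsets;

    public class TagReaderActivity extends Activity {
        @Override
        protected void onNewIntent(Intent intent) {
            super.onNewIntent(intent);
            // With foreground dispatch enabled, discovered NDEF tags arrive here.
            if (NfcAdapter.ACTION_NDEF_DISCOVERED.equals(intent.getAction())) {
                Parcelable[] raw = intent.getParcelableArrayExtra(NfcAdapter.EXTRA_NDEF_MESSAGES);
                if (raw == null || raw.length == 0) return;
                NdefMessage msg = (NdefMessage) raw[0];
                byte[] payload = msg.getRecords()[0].getPayload();
                // NDEF text record: the first byte holds flags plus the language-code length.
                int langLength = payload[0] & 0x3F;
                String text = new String(payload, 1 + langLength,
                        payload.length - 1 - langLength, StandardCharsets.UTF_8);
                // ... do something with the tag's text ...
            }
        }
    }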

  • allthing 6 years ago

    We use it in products at work and it always works seamlessly. It could be a quick and easy way to pair a phone to a car or connect a phone to WiFi. Just tapping is way more efficient than the current connecting options when you're near the hardware you want to connect with.

adrianmsmith 6 years ago

GWT.

To create client-side web apps, rather than writing Javascript, write Java and have it transpiled to Javascript. Java has IDEs, isn't a new language to learn (like Dart), has static type checking, support for refactoring in the IDEs, etc.

GWT could do tree shaking, obfuscation, etc.

You could write your server code in Java, and "call" remote server methods from your client code, "passing" Java objects transparently back and forth between the client and the server.

It was released around 2006. It was not without issues, but it seemed to be a much better way to develop complex client code than the Javascript options available at the time, as far as I could see.
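
To give a flavour of the RPC part, here is a rough sketch from memory (package names and details may be off, and in a real project each type would live in its own file):

    import com.google.gwt.core.client.EntryPoint;
    import com.google.gwt.core.client.GWT;
    import com.google.gwt.user.client.Window;
    import com.google.gwt.user.client.rpc.AsyncCallback;
    import com.google.gwt.user.client.rpc.RemoteService;
    import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

    // Shared service interface: the server implements it, the client "calls" it.
    @RemoteServiceRelativePath("greet")
    interface GreetingService extends RemoteService {
      String greet(String name);
    }

    // Async counterpart that client code actually uses.
    interface GreetingServiceAsync {
      void greet(String name, AsyncCallback<String> callback);
    }

    public class HelloModule implements EntryPoint {
      public void onModuleLoad() {
        GreetingServiceAsync service = GWT.create(GreetingService.class);
        // Looks like a Java method call; the compiler turns it into an XHR
        // plus serialization of the Java objects on both ends.
        service.greet("world", new AsyncCallback<String>() {
          public void onSuccess(String result) { Window.alert(result); }
          public void onFailure(Throwable caught) { Window.alert("RPC failed"); }
        });
      }
    }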

It never really took off.

Google moved their team over to Dart at some point. I guess the Oracle Java lawsuit meant Google were unenthusiastic about continuing to have anything to do with Java where they didn't need to.

The compile times were slow. Not all Java compiled, so you had to keep a mental model in your head not only of the Java but also of the Javascript that was going to be generated, which was extra mental overhead.

  • BjoernKW 6 years ago

    GWT was quite complex. You had to write a lot of code to achieve very little.

    There also was a huge impedance mismatch between Java and JavaScript at the time. Nowadays you can write decent functional code in Java but back then you needed anonymous classes for that.
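
    Just to illustrate that Java-side change (plain Java, nothing GWT-specific; it borrows a Java 8 interface for brevity, so it shows the style rather than the exact pre-8 types):

        import java.util.Arrays;
        import java.util.List;
        import java.util.function.Predicate;

        public class ThenAndNow {
            public static void main(String[] args) {
                List<String> words = Arrays.asList("gwt", "dart", "javascript");

                // Pre-Java-8 style: an anonymous class just to pass behaviour around.
                Predicate<String> longWordThen = new Predicate<String>() {
                    @Override
                    public boolean test(String w) {
                        return w.length() > 4;
                    }
                };

                // Java 8+ style: the same thing as a lambda.
                Predicate<String> longWordNow = w -> w.length() > 4;

                words.stream().filter(longWordNow).forEach(System.out::println);
            }
        }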

    • adrianmsmith 6 years ago

      That's true, but while it was difficult back then to write functional code in Java (now solved), it was difficult back then to write class-based object-oriented code in Javascript (now solved).

      • abritinthebay 6 years ago

        If you go into JS classes assuming they work at all like Java/C++ classes... you’ll have a bad time.

        They’re just sugar around prototypal object-oriented code.

  • bambax 6 years ago

    What I would really like to know is why Google didn't buy Sun. It seemed such an obvious match. Did they try and fail, or did they not try, and why???

  • nickm12 6 years ago

    In addition to the ones mentioned, another issue with GWT was that the differences in browser JS environments were much more salient in 2006 (IE6 was still a massively popular browser) and browser development tools were limited. If your script didn't work, debugging was likely difficult.

    • adrianmsmith 6 years ago

      It was a stated goal of GWT to produce different JS files, one per browser, so that it could generate the right JS dialect for each browser. (And so that the browser didn't have to download "if" statements containing code for other browsers that it was never going to run, meaning a smaller download, which was also relevant back then.) That always worked perfectly for me, but I only used it for about 6 months, on a small project.

      Also, it had a debugging version where it actually ran the client code in a JVM, and you could step through it with Eclipse etc. Of course that didn't help if the bug only happened after the Java-to-Javascript compilation process, but in my short experience with it I never had such a bug.

      Of course Javascript and the browsers have caught up, but I would have expected these features to be gladly embraced by the development community at the time, as they really did help. But they weren't.

  • le-mark 6 years ago

    This came up a few weeks ago, and someone from Google chimed in saying Google just recently released new apps using GWT, and it's still active. But I share your view that it was a failed technology. All the GWT I see in my area is legacy. I remember prototyping a project with it and being flummoxed by the UI not laying out the way it ought to have. From that experience, I never used it again.

    • adrianmsmith 6 years ago

      I haven't worked with GWT for about 5 years, but out of personal interest I went to a GWT conference in Florence, Italy a few weeks ago.

      The open source movement is still ongoing, but there is apparently no help from Google any more.

      Apparently Google do use GWT internally, but it's a fork and not the same as the open source version that we can all use. Apparently the reason is they integrate it with their own build tools (i.e. not Maven) that themselves are not open source.

      At the conference they were talking about GWT 3 which is basically a re-write, open source, but it seemed to lack a lot of the features that made GWT good. I appreciate the time they're devoting to the project, and don't want to sound ungrateful, but I do not expect a useful GWT 3 any time soon.

  • dehrmann 6 years ago

    There was a time when the consensus was that things like GWT were so far from the "bare metal" of the browser that issues in anything beyond toy projects were impossible to debug.

    Now we have React and JSX.

    Maybe the tooling just caught up.

  • j_s 6 years ago

    In C# land, it also has yet to catch on. There is commercial backing right now, though.

    https://bridge.net/

  • collinmanderson 6 years ago

    WebAssembly is probably the closest similar technology - the ability to compile almost any language to run in the browser.

  • leephillips 6 years ago

    Doesn't clojurescript use GWT? If it does, then GWT has certainly lived on. If I'm confused, sorry.

    • adrianmsmith 6 years ago

      I'm no expert but I think Clojurescript takes the Clojure language (spelt with a "j") and converts it to unoptimized Javascript, which then gets processed by the Closure compiler (spelt with an "s"). The Closure compiler takes unoptimized Javascript and emits optimized/minified Javascript.

      Closure is from Google, but is independent of GWT which is also from Google, despite the two projects having similar aims. Closure is modern, whereas GWT is not.

      The open source team who maintain GWT now (after Google abandoned it) are in the middle of a re-write, which will be called GWT 3 (current version is GWT 2). GWT 3 will convert Java to unoptimized Javascript and use the Closure compiler to optimize it etc (as Clojurescript does). Current GWT 2 does not use Closure but does its optimizations itself.

      I may have got some of that wrong, corrections welcome!

    • bpicolo 6 years ago

      They generated a Closure compiler implemented in js by running the closure compiler through GWT, but it's worse than the java equivalent (fewer features, slower)

  • tmzt 6 years ago

    Not using Java, but I'm working on non-JS isomorphic rendering of web applications.

Dowwie 6 years ago

The semantic web. It spawned an entire generation of academic study, but aside from assuring job security for researchers hasn't amounted to anything beyond their ivory towers.

  • lolive 6 years ago

    Thanks for mentioning it.

    Beyond some very nice pieces of software created by clever SW developers (let's mention DBpedia Spotlight, RelFinder, Semantic MediaWiki, N3.js), I mostly agree with your argument.

    To play devil's advocate, in my opinion data lakes are just a new buzzword that will eventually become what the Semantic Web was supposed to be. With some millions of dollars, companies will aggregate and interlink their datasets. They are then one step away from extracting their master data from their data lake and publishing it as Linked Corporate Data. From that, the public part of it will be published as Linked Open Data.

  • jacobr 6 years ago

    I remember my huge number of delicious bookmarks tagged with "folksonomy" and other buzz words.

  • michaelmior 6 years ago

    That's not entirely true. It's perhaps a far cry from the original goal but Schema.org markup has definitely caught on mostly thanks to use by Google for rich snippets.

  • perlgeek 6 years ago

    I get the impression that "dumb" crawling and machine learning have turned out to be more practicable.

    • mannykannot 6 years ago

      It is definitely more practicable in the sense that no-one has figured out how to implement the vision of the semantic web.

shove 6 years ago

WebGL. I spent ~10 years doing Flash / ActionScript and when Jobs nixed the mobile Flash plugin, I figured everything would shift to HTML5 / WebGL. Nope. The era of fancy, interactive websites faded into the rearview mirror.

I'm sure most of you don't miss it, but there was so much amazing work. Rather than the budgets shifting to Three.js etc, it went to apps or dried up completely.

  • gilbetron 6 years ago

    WebGL is fantastic, but writing compelling software with it is significantly more difficult than just using "normal" web libraries because you have to be able to think in rendering terms, use linear algebra (a bit), etc. And the thing is, it turns out it is really difficult to find something to do using WebGL that isn't easier to understand in a normal 2D visualization.

    I think there is great stuff to be done (and is done) with WebGL, but most of it is in the background and subtle. "Walking through a data world" is just a dumb idea, for lots of reasons. But using WebGL to speed up 2D visualizations works great, as is using it to make more interesting 2D visualizations. However, that is a subtle art.

  • disease 6 years ago

    There is still demand for interactive, animated and "flashy" websites in the educational sector.

    • shove 6 years ago

      Any way you slice it, demand is probably less than a tenth what it once was.

  • culot 6 years ago

    Back in like 1994 I was certain that RoboBoard BBS software was the future of the online world. But no.

    • shove 6 years ago

      That’s a pretty deep troll

  • hutzlibu 6 years ago

    Since in my experience WebGL is still not really stable on many devices (especially mobile), I think its time, combined with wasm, is about to come ...

  • kyle-rb 6 years ago

    WebGL is really cool, but in my personal experience, pretty hard to pick up, at least with no prior experience in graphics rendering type stuff. There's a lot of overhead to do relatively simple stuff. (In 3D anyway.)

    In retrospect I maybe should have just started with ThreeJS or another higher level abstraction.

  • billconan 6 years ago

    I thought about this.

    The reason the 3D web hasn't been big is that the most effective way of making and consuming content is via natural language, or text.

    Text, by its nature, is 2D. 3D won't be mainstream, because it's too demanding to make.

    • billconan 6 years ago

      Unless people invent some device, like the camera, which simplifies 3D content capture, making it as easy as taking pictures and video.

Animats 6 years ago

Virtual worlds. The "metaverse" of Snow Crash. "Snow Crash", the TV series, has been green-lighted by Amazon. That may drum up more interest.

Second Life comes close. They've had a working system for over a decade and have adequate solutions to most of the problems. But it's hard to use, and not that many people are interested.

  • mseebach 6 years ago

    I doubt that the physical manifestation of the metaverse (you "plug in" and navigate in a virtual 3D-world, especially one that requires you to transport yourself around) will happen; it's simply not very compelling. What we do have, which is very similar to the substance, if not the aesthetic manifestation, of the metaverse, are online communities. Everybody and their mother is on Facebook, specialized trade communities like HN, the "dark web" for drugs, weapons, hitmen and 0-day exploits (this, especially, sounds like that metaverse bar in Snow Crash) and encrypted chat apps for insurgents/terrorists.

    But you don't need to put on a VR helmet and you don't need a fibre and you don't need to ride a VR motorcycle to get to HN, but on the other hand, you can hang out in multiple bars simultaneously from a tiny, wireless metaverse terminal that you carry in your pocket everywhere. In many ways, we're far beyond the metaverse imagined in Snow Crash.

    • CodeCube 6 years ago

      But ... I want a VR motorcycle! :P

      • Animats 6 years ago

        You can get them in Second Life. I have one

        Remember to slow down when crossing sim boundaries. There's a big bump as the avatar is handed off from one simulator to another. Physics engine handoff is a tough problem.

  • nathan_f77 6 years ago

    I agree with that. I think it just suffers from stigma, like online dating a few years ago. It's very easy to make fun of Second Life. For example, there's a scene in The Office where Dwight has an account, and Jim signs up in order to prank him, but then he ends up taking it too seriously and gets made fun of. All the articles I've read about Second Life paint a sad picture of lonely people in their basement or bedroom.

    I would like to try a virtual world with facial feature tracking, like the new Animojis on iPhone X. It would be interesting to play a multiplayer game where you communicate with voice, body language, and facial expressions.

    A VR headset obscures the eyes, so you couldn't use a 3D camera. Maybe a combination of 3D tracking and sensors inside the headset.

    • Cthulhu_ 6 years ago

      > It would be interesting to play a multiplayer game where you communicate with voice, body language, and facial expressions.

      Simple applications already exist; look at Rec Room for example. That's got voice, limited body language (mostly hands) and I think even simple facial expressions that it infers from... idk, something. Voice chat too.

  • rebuilder 6 years ago

    I think maybe the virtual world metaphor has been taken too literally. The real point of "jacking in", to me, would be a vastly superior interface compared to the screen-and-keyboard systems we have now. Ideally, it'd be more like a subsystem for your brain, an extension of the way we think, but in fiction it's easier to depict that as a virtual space.

    Letting people use their devices without looking at or touching them would be a huge improvement over what we have now. Input especially is terrible, I'm typing this on a phone and a part of me hates every second of it. Someone, anyone, fix this crap!

    • throwanem 6 years ago

      > in fiction it's easier to depict that as a virtual space

      Not really! Watts does something a lot closer to what you describe with "ConSensus" in Blindsight, and handles it deftly enough. I think it's more just that the idea of VR as virtual space was the dominant metaphor of its time, and contemporary fiction reflects that time every bit as much as Watts's ConSensus - thought-controlled wireless information access, rendering to the optic nerves - reflects its own.

      I think the persistence of the older style has to do partly with the fact that we can't yet implement the newer one no matter how much more useful it would actually be, and partly with the fact that VR metaverses look really cool in demos.

  • dkersten 6 years ago

    > Second Life comes close.

    I never got the value proposition of Second Life. It wasn't game-y enough to be a game and, for me at least, it didn't really add much over more "traditional" collaboration/communication/socialising methods like text and video chat, social networks, twitter etc.

    • zerostar07 6 years ago

      It's a lot closer to facebook than a game. It's basically a building game.

      • TheOtherHobbes 6 years ago

        It’s basically 80% doll’s house, 20% brothel, where everyone pays rent.

        The idea isn’t terrible, but the implementation had some horrific flaws, and Linden Labs became notorious for infuriating users.

        SL might have taken off if both servers and clients were fully open-sourced, and LL had concentrated on building a global system for connecting open servers together.

        I strongly suspect someone will rediscover the idea 5-10 years from now, do it properly, and replace today’s web.

        • zerostar07 6 years ago

          There is an open source version of the server called OpenSimulator, and compatible viewers. There are tons of worlds outside SL: http://opensimworld.com/

        • Cthulhu_ 6 years ago

          > I strongly suspect someone will rediscover the idea 5-10 years from now, do it properly

          Very likely, also happening in VR.

          > and replace today’s web.

          Not likely. It's just not going to happen. Pretty sure there have been multiple attempts at a 3d internet, but, nope.

          • kazagistar 6 years ago

            I'm already annoyed by the shift of data from text to video. It's so much harder to absorb and skim quickly. A 3D internet is harder for content creators, and harder for users to access what they need.

            • dkersten 6 years ago

              I wholeheartedly agree with this. I really dislike videos for content that used to be shared in articles. I can’t always play sound, it’s hard to skim, impossible to search and more difficult to jump to specific content. I really hope that this kind of content never shifts to 3D.

          • zerostar07 6 years ago

            > also happening in VR.

            They tried, but so far it has failed to catch on, and I predict more of these projects will be abandoned, including Second Life's own experiment, Sansar. If anything, the side-detour into VR has stalled the progress of virtual worlds for a few years.

  • zerostar07 6 years ago

    I think SL is sabotaging the popularity of the metaverse with its pricing. They place the bulk of the cost on virtual land, which is super expensive, while their market is not very profitable and few people can make actual money.

    Another problem is underpowered laptops with Intel HD graphics, and tablets.

    I don't think the idea of the metaverse is going away per se; it's a freeing and creative experience. I do think the focus on VR has held it back for 2 years now.

    • Animats 6 years ago

      The land thing in SL is interesting. Eight people now own 40% of the land. Just like the real world, there are real estate magnates.

  • swyx 6 years ago

    i think it will come. ironically its a hardware problem rather than a software problem

  • abritinthebay 6 years ago

    The interface isn’t there. That’s the issue.

    Notice how all the sf implementations have a lightweight & fully immersive interface? You need that tech - and at a cheap commodity price - before you’ll see virtual worlds be a thing.

    Until then you just have MMOGs with some more creative options.

  • mirimir 6 years ago

    Yes, fully immersive VR. As in SF, using one sort of "trode net" or another. It predates cyberpunk. I vaguely remember a story from the 60s.

  • Blazespinnaker 6 years ago

    Yeah, SL is probably closest but they can't seem to get their users interested in headsets. Kinda ironic in a way.

    • Animats 6 years ago

      Who wants to wear a VR headset for hours on end?

      • Blazespinnaker 6 years ago

        Yeah, I think it makes a very poignant statement about the potential of VR headsets when VR enthusiasts don't want to use them. SLers are the real deal, people like zuck just don't get it. Too many people have dismissed SL foolishly. Glad it survived and proved them mostly wrong.

        • jabretti 6 years ago

          >Too many people have dismissed SL foolishly. Glad it survived and proved them mostly wrong.

          I dunno, ten years ago a lot of people were predicting that Second Life would be the Next Big Thing, and assuring me that I should really be buying up Second Life real estate with my hard-earned real-world money because soon it'd be worth a lot more.

          Second Life has survived, but it certainly hasn't gone mainstream. It's just another weird corner of the internet with its own community, and good luck to 'em.

  • arca_vorago 6 years ago

    I hope one day I can do a good metaverse.

johan_larson 6 years ago

I'm disappointed none of the various constructed "universal" languages (like Esperanto) ever took off. Having a single common language spoken by everyone would be enormously useful. As it is, the closest we have is English, but it's a natural language and therefore complicated and difficult to learn.

  • Swizec 6 years ago

    And yet easy enough for basically everyone in the world to learn.

    Just think, it could’ve been French. It was dominant for many many years.

    Russian was a huge candidate as well. Wanna learn that? I’m a native Slavic speaker and I wouldn’t wish a Slavic language on anyone who needs to become fluent enough to do business.

    Spanish was (is?) a big candidate too; that one is about as hard as English, I guess.

    Portuguese? I can’t tell how hard it is, but it sounds more difficult to me than Spanish.

    Arabic was a strong contender once upon a time as well. That to me seems very difficult to learn.

    Of course we could go for Chinese. I hear that’s easy to learn conceptually, but hard to learn how to speak properly.

    Ultimately we landed on English in part because of the cultural and historical dominance, and in part because it’s just easy enough for anyone to become fluent enough. The nice thing about it is that it steals language constructs and words so liberally from other languages that anyone can find something familiar to grab onto.

    • tuomosipola 6 years ago

      I have a feeling that the right timing of English-speaking political dominance in the 19th and 20th centuries, combined with the global information technology boom, has contributed a lot here. It could have been French (and in a sense was) if they had had movies and the Internet earlier.

      To me English doesn't feel logical at times (maybe a subjective observation) but at least we all can agree that the orthography with its historical baggage is not optimal.

      Personally, I think the most helpful thing has been the cultural dominance of English-language movies and TV series in my youth. Also, with much of today's computer technology coming from the US, all that material is in English. It could have been any of those languages, because I don't strongly believe that languages are intrinsically difficult to learn. They are just different. And typologically, English is quite distant from my native language.

      • lucozade 6 years ago

        I think your point about cultural dominance, and specifically youth culture, is the key one.

        I travel a fair bit and it still surprises me how often I hear English, usually lyrics, regardless of the local language.

        English isn't particularly logical. This is partly because it is a hodge-podge of Germanic and Latin roots. I think it's also because it's so widely used. It's accepted a ton of vocab from around the world, and usage seems to have a "natural selection" effect on grammar.

        I'm not sure I agree about French. It was the main diplomatic language before American hegemony but I'm not sure it ever really took off much as a second language. I actually think that German was the natural choice if things had turned out differently. The reason is that, in the 20s and 30s, German was the lingua franca of science and technology. Germany was also a pioneer in film. These were only just starting to dominate culture at that time and, without the upheaval of WWII, could easily have spread from there.

        • oblio 6 years ago

          French did take off in many places. In Romania, for example, we even borrowed something like 20% of our current vocabulary from it. However, English became popular in the age of mass literacy, mass transport, mass media and the Internet. Huge expansion vectors for a language.

          Also English had multiple support points instead of 1 (France): UK, US, British colonies.

        • tuomosipola 6 years ago

          Yes, German is perhaps a better recent example (late 19th, 20th century), having similar technology and science reasons that work for English today.

      • icebraining 6 years ago

        > It could have been French (and in a sense was) if they had had movies and Internet earlier.

        Earlier? The French invented the movie theater.

        • tuomosipola 6 years ago

          I was thinking like 17th up to 19th century earlier when French was the dominant international language in many domains :)

        • whatyoucantsay 6 years ago

          They isolated themselves right off with Minitel, though.

    • vslira 6 years ago

      I second this feeling. I'm not a native English speaker but I often find myself hoping that it becomes ever more prevalent.

      Unfortunately there's a lot of resistance to accepting it as the official universal language, in no small part due to the anti-multilateralist stance of the major anglo nations. One can only hope Brexit doesn't make German a virtual necessity for dealing with the EU in the long term.

      • blattimwind 6 years ago

        > One can only hope Brexit doesn't make German a virtual necessity to deal with the EU in the long term.

        The EU has had more than twenty official languages for many years, I don't think it is likely at all that this will suddenly change to one language. I also don't think there is any chance for English being removed, since it is the most widely understood language in the union (the UK being a member or not has no relevance to that).

        • germanier 6 years ago

          Internally the EU uses almost only French, English, and (to a much lesser degree) German. Many documents are only available in French and English.

    • blattimwind 6 years ago

      > Just think, it could’ve been French. It was dominant for many many years.

      Or German, which was also a very widely internationally used language in science before the wars.

  • hans_mueller 6 years ago

    Yes, but the English spoken by most non-natives is actually something like an Esperanto: the pronunciation, vocabulary and grammar are almost normalized, an artificial dialect. I think English is a very good compromise. The only problem is French people and other communities who disdain English for patriotic, historical, or political reasons.

    • dandare 6 years ago

      English is considered to be a very difficult language, and I find it to be a historical tragedy that English became the lingua franca of the world. I remember that as a child I could not wrap my head around the "spelling competitions" I saw in English cartoons. What is the deal with spelling words? If you can say it you just spelled it; if you can read it you just pronounced it. And that is only one of many things that make English difficult. Only Mandarin Chinese with its tonality could be worse (will be worse?).

      • eatplayrove 6 years ago

        Aside from the spelling issue (which is an issue in many languages), English is one of the simplest languages by far (among languages spoken by 100k+ people).

      • jabretti 6 years ago

        Spelling difficulties are an artefact of what makes English so powerful, though -- its ability to simply adopt foreign words any time it feels the need to do so.

        The base language was formed by crashing two major European language families into each other a thousand years ago, and we've been happily borrowing words from every other language on Earth ever since as we've needed them. If we want to make up a brand new word, as we so often do in science, we'll probably reach for a handful of Greek and Latin prefixes and suffixes and stick them together any way we like.

        After a thousand years of this, English is probably the language with the biggest and most expressive vocabulary of any language in the world.

        • ufo 6 years ago

          I think the spelling difficulties have more to do with English not having phonetic spelling than with it having many loanwords, prefixes or suffixes. I speak Portuguese and we have lots of loanwords as well but after a while they are all spelled like the rest of the words are instead of copying the original spelling. And every once in a while there is a spelling reform to replace archaic spellings with more modern forms (this never happened with English because it was politically unpopular)

          • maffydub 6 years ago

            I think the cause of a lot of the bizarre spelling in English is the "Great Vowel Shift" - basically, sometime between 1350 and 1700, most uses of most long vowels changed their pronunciation.

            Obviously (because this is English), the change didn't apply universally, but it applied widely enough to move our pronunciation out of sync with the European languages it was derived from.

            See https://en.wikipedia.org/wiki/Great_Vowel_Shift for more detail.

        • pbhjpbhj 6 years ago

          When you say a thousand years ago presumably you mean the pre-French Norman language mixed with the Old English from 1066?

          I'd go back to the Roman invasion, at least, and the mixing of Latin as the official language of Roman Britain with the Brythonic language(s) [themselves already probably mixing with other earlier Celtic tongues].

          Some of the Latin in English (and Cymraeg) was adopted via the Norman/French tongue but there's pretty clear evidence -- AFAICT, I'm not a linguist -- of Latin being retained and adopted from the occupation. That presumably gave us a Creole as a starting place to add in Germanic (Saxon, Frisian, Jutlandish), Celtic (Irish, Pictish?), and Nordic languages in the first millennium AD making the mixing of Norman French just more of the same??

          So I'd say, "after two thousand years of this" ...

          • baud147258 6 years ago

            As a French person, I've always enjoyed finding words in English coming from French, but I've never thought about Latin words coming from the Roman occupation. Thank you for sharing!

            • pbhjpbhj 6 years ago

              Words like ffenestr and ysgol from Cymraeg show the adoption of Latin that you'd recognise in French (fenetre, ecole). Suggesting these words were already around in Old English from Latin.

              It can be hard to tell the history though, eg sovereign came from French (rein) but got Latinised (regnare) in spelling reform.

              One interesting thing for me was finding English loanwords in Cymraeg (former Welsh language) that are no longer used in English. I'll bet there are some French loanwords in English that aren't in French any more?

              There are words adopted twice too, https://en.m.wikipedia.org/wiki/List_of_English_words_of_Fre... has lots of examples. Giving us chief and chef in English with different meanings but based on a single French word (I gather) adopted at different times.

              • baud147258 6 years ago

                > I'll bet there are some French loanwords in English that aren't in French any more?

                I don't know any and I don't think I'll be able to spot them since they aren't in French anymore.

                But (from your link), there are words coming from Old French in English which then went back in French, but usually keeping the English pronunciation.

    • meigwilym 6 years ago

      I think My Language is a good compromise. The only problem are not My Language speakers and other communities who disdain My Language for patriotic, historic or political reasons.

      • matt4077 6 years ago

        I'm a native speaker of German, and anytime someone wants to try their mediocre German on me, I cringe and quickly move to English to move the conversation along.

        For some reason, it's not half as horrible to listen to broken English. Unless it's by Germans, in which case I usually leave the room. Or drop the class, as happened during my studies.

      • hans_mueller 6 years ago

        as my fake username suggests - I'm a German.

    • baud147258 6 years ago

      As a French person, I do feel disdain for England, but I love the English language since it allows me to connect with the rest of the planet :).

    • cm2187 6 years ago

      And it's called "airport english"

      • matt4077 6 years ago

        I always thought Wolf Blitzer was "airport english"

    • marcosdumay 6 years ago

      If only the native speakers would understand international English is different from the language they use every day.

  • swyx 6 years ago

    even within English we have a ton of local and regional variances in everything from accent to connotation to words to the point that some variants aren't even intelligible. Check out the Ocracoke Brogue: https://www.youtube.com/watch?v=csfyrRqc5TU and that's right in the heart of North America!

    i'm no linguist but my familiarity with german/chinese and a smattering of other languages has led me to conclude that humans have an inbuilt tendency to just mess things up just because. If there's a grammar rule that can be broken or inverted, some language out there will do it. the tower of babel is encoded in our DNA. The Dyirbal language even has a gender for vegetables and fruits. (http://www.superlinguo.com/post/5746695497/lady-nouns-and-ve... and https://en.wikipedia.org/wiki/Dyirbal_language)

  • HumanDrivenDev 6 years ago

    One thing I like about Interlingua is that... I can read it with just a bit of difficulty, despite not having learned it. I'm a native English speaker with high school French. I imagine native speakers of Romance languages could read it fluently. That's a huge percentage of the world's population right there.

    It seems like a much better idea than esperanto.

  • zerostar07 6 years ago

    We have English. Although not a technically great language, it's a savior in Europe. In fact, with the UK gone, I think the EU should adopt it as the primary lingua franca.

    • bantunes 6 years ago

      Still an official language in the EU because Ireland (and Cyprus and Malta, to an extent). Right?

      • danmaz74 6 years ago

        Actually, Ireland chose Gaelic as their official language, and I think the same thing is true for Cyprus and Malta. So, when the UK leaves, there could be no member with English as its official language (the UK is still part of the EU until 2019).

        • donaltroddyn 6 years ago

          In Ireland, both English and Irish (Gaeilge in Irish - not Gaelic) are our official languages, and both are also official languages of the EU.

          English is also an official language in Malta, but not Cyprus.

      • zerostar07 6 years ago

        Yes, but with the UK gone there will be less nagging from the french :)

        • baud147258 6 years ago

          No, we'll continue to nag, but about something else (we'll find something easily).

          Still, I think the EU should keep English, since it's already in place and I don't think it would be useful to switch to French.

    • danmaz74 6 years ago

      Agreed. If only we could fix the non-phonetic spelling...

      • zerostar07 6 years ago

        There are 2 ways to do that. I would go for changing the pronunciation rather than the writing.

        • pbhjpbhj 6 years ago

          What's your linguistic background (eg mother tongue)?

          I, naytiv British, wud go for speling reform in preference to pronouncing "wo-uh-ld" or "deb-t'", et cetera.

          • zerostar07 6 years ago

            Greek.

            Oh no, don't do that! That's like British teenage slang. Even their parents cannot understand them!

  • theshrike79 6 years ago

    English is a surprisingly versatile language. You can get your point across with pidgin English just fine, and it's the most spoken language in the world.

    On the other hand you have people who master the language on a whole new level, like Douglas Adams.

    • superplussed 6 years ago

      Chinese is #1 according to Wikipedia.

      • Freak_NL 6 years ago

        That list comes with a fair warning about its reliability at the top though¹. English is indisputably the largest second language (L2) spoken in the world, and is vying for the position of most spoken language with Mandarin Chinese.

        1: https://en.wikipedia.org/wiki/List_of_languages_by_total_num...

        • bpicolo 6 years ago

          "For example, English has about 400 million native speakers but, depending on the criterion chosen, can be said to have as many as 2 billion speakers."

          So depending on how you choose your metric, English wins overall.

      • hnaccount91 6 years ago

        Yeah, but it's all inside one region, isn't it? Comprising China (obv), HK/Macau, Taiwan and parts of Vietnam maybe (not sure). Like, everyone likes citing this fact to me as though it is mind-bending, but if that language is never going to spread out, then what's the point of counting it as a "widely" spoken language?

  • gattilorenz 6 years ago

    To my understanding, the problem is that the nuances and subtleties of natural languages are not possible (or not as easy to convey) in artificial languages, for a number of reasons. This makes it impractical for real use: imagine diplomats having to speak their mind instead of beating around the bush :)

    But if you are interested in universal languages, I suggest you check out Sol-re-sol [1]. I have read the story in Paul Collins' "Banvard's Folly" and have been fascinated by it ever since.

    [1] https://en.wikipedia.org/wiki/Solresol

  • akvadrako 6 years ago

    Esperanto is still growing; I wouldn't give up on it yet. It's only recently been added to Duolingo.

    The main reason is that it's so much easier to learn than English that it can be taught "for free". Basically, instead of just teaching the target language, first teach Esperanto. At the end of a fixed time, students are better at the target language.

    https://en.wikipedia.org/wiki/Propaedeutic_value_of_Esperant...

  • dopkew 6 years ago

    For me it is Lojban. But I haven't gotten around to learning it.

    • Pranz 6 years ago

      ui. mi darsygau do nu cilre fi lo lojbo

  • lamby 6 years ago

    > Having a single common language spoken by everyone would be enormously useful

    In the same way that having a single programming language would be, right? :)

    Joking aside, I find having multiple languages part of the charm of being a homo sapiens. We would be missing some colour in our world if we all communicated in the same tongue and, indeed, I don't think it would solve many real-life issues around _miscommunication_.

  • PeachPlum 6 years ago

    Lingo ain't, like, fixed, blood. Your spitting is tripping.

marcus_holmes 6 years ago

Optical computing. I remember talking to a bloke working on it back in 1990, and he said they had a working optical transistor so all the hard bits were done. Soon we'd have computers that used multiple frequencies of light to process operations concurrently on the same hardware!

Still waiting...

  • Nanite 6 years ago

    In a sense we did get it. The research into optical computing didn't lead to photonic CPUs; any advantages of optical computing are moot when each building block can't scale below a few square micrometers due to the limits of the physics.

    However, the research did find its way into telecom: high-speed multiplexing switches with massive bandwidth in a compact form factor would not have been economically viable without photonics.

  • ktta 6 years ago

    He's right! There are transistors which have light at different frequencies on the same hardware. This paper[1] talks about how they've achieved 1.125 Tb/s throughput with various techniques.

    So he's right about optical computing, but just not in the way you anticipated :)

    Just extremely fixed functions with mind-blowing capacity.

    [1]: https://www.nature.com/articles/srep21278

  • btown 6 years ago

    These computers do exist on tabletops in labs; the problem is that it's comparatively difficult/expensive to mass-produce and miniaturize these components compared to silicon (where you can etch billions of transistors at once on a modern CPU). There are really cool things you can do, especially in terms of continuous/non-discretized-time computations, but I wouldn't hold your breath for this to become mainstream.

    • avian 6 years ago

      Are you talking about analog computing? We tried that both with electronic and mechanical computers and it obviously didn’t work out since nobody uses them anymore. What benefits do optical computers have in that regard?

      • krylon 6 years ago

        I once heard somebody talk about analog computers who is old enough to have used them.

        The impression I got was that they did work excessively well on a very narrow niche of problems (simulations, mostly). But "programming" them was a huge pain, and for the general purpose arena, they were not useful.

        • marcosdumay 6 years ago

          They would be comparing them with the digital computers of the same time.

          There's a reason people used analog computers back then, and there's a reason nobody uses them now. Most of the reason is about how noise interferes with small components, and it's a physical reality that won't go away.

delgaudm 6 years ago

The Lytro Camera[1]. The idea of refocusing a photo after the fact felt like an absolute revelation. Their initial camera was somewhat affordable, but ended up being just downright awful. The pictures were low res and low quality, and the camera was difficult to use and lacked basic, yet essential functions. I was a super-excited early adopter. After I got the camera I think I took maybe 100 pictures with it, and it's sat in a box of disappointment ever since, because you couldn't actually do anything with the crummy pictures it took. All I wanted was a reasonably good 4MP image where I could refocus a picture on my kid's face when the camera focused on the wall behind her, so I could send that picture to grandma or post it to Facebook. They clearly had something else very different in mind, and with the novelty of all the interactive-y stuff they added around the refocusing they forgot that what we (I) actually wanted was a picture.

It's just as well that I ended up not taking more pictures, because the one place you could see them is being shut down at the end of the month.

They appear to have pivoted to professional cinema.

[1] https://www.wired.com/2011/06/lytro-camera-lets-you-focus-ph...

  • greggman 6 years ago

    I wanted to like Lytro but after trying it I realized I don't want to refocus later. Just don't care. I'm not saying some Pros wouldn't want that but for the masses my guess was "no, can't be bothered"

    On the other hand, what I do expect to happen, maybe, is that my mobile camera will advance to the point where it takes 1000 images in 1/1000th of a second, so I can just wave it around and then post-process any image at nearly any resolution and any depth of field, so I don't need lenses and don't even have to aim much. Unfortunately, actually processing 1000 images in a short amount of time is 10-20 years off?

    I'm a little surprised that their pro video camera hasn't been more popular. It seems like the no-need-for-a-green-screen feature would be useful for getting better lighting in effects scenes.

  • ryanchants 6 years ago

    In the same vein, I'm excited about the Light L16[0]. But with the hefty price tag, I don't know if I can get on it this early. If it takes off, then the price is comparable to a DSLR and a few lenses, but right now I'm hesitant.

    [0] https://light.co/

grahamburger 6 years ago

3D printing. Turns out making little plastic shapes isn't all that useful. Seems obvious in hindsight.

  • GistNoesis 6 years ago

    It's already there, working 24 hours a day in many makers' homes. Why isn't it taking over the world faster? This is a classic chicken-and-egg problem which is being solved daily by the market's invisible hand.

    As a maker, what reduced my printing usage a few years back was the cost of the filament (which is now cheaper but can still be reduced by a factor of 10), and the noise of the machine (which can now be made noiseless).

    As a user, what holds me back is that the use cases aren't so numerous in a world where everything is factory made. But as soon as you decide to customize your environment, it becomes the missing component that allows you to adapt two objects to each other.

    As a 3D designer, software wasn't really made for 3D printing. But now with OpenSCAD, objects can be made from source code, making any hobby programmer vastly more productive at the most common uses of 3D printing than an expert 3D designer.

    • theklub 6 years ago

      Off-topic but, what printer do you recommend for starting out?

      • GistNoesis 6 years ago

        I use a Lulzbot Taz4 (with a rudimentary DIY enclosure), but a 3d printer is very dependent on the maker and what they intend to do with it, and on how much time, money and effort they are willing to invest, so that's quite hard to answer. If you can find a local hackerspace, they usually have people who can give you good advice after a chat.

  • IshKebab 6 years ago

    That's just because plastic trinkets aren't what they are useful for. For useful applications, like prototyping, repairs, bespoke items and so on they are going strong and still getting cheaper and better.

    I'm hoping we'll have some affordable SLA printers fairly soon.

    • blattimwind 6 years ago

      Whether or not folks in the shop find them useful seems to mainly depend on whether they want to put up with CAD packages in their hobby.

  • grmarcil 6 years ago

    In the consumer/household space, sure. 3D printing is huge in prototyping and hardware R&D though. High-end printers are even good enough for mass production runs now.

    • Fomite 6 years ago

      3D printing is absolutely revolutionizing a hobby of mine.

      • lj3 6 years ago

        > Maker of artisinal, small-batch simulation models for the discerning infectious disease consumer.

        Could you talk more about what that entails? That sounds fascinating as hell.

        • Fomite 6 years ago

          I think it is!

          Basically, I'm an epidemiologist, and I work on mathematical and computational models of disease spread, usually trying to respond to either clinical or policy questions.

          Let me know if you want more information?

  • PeachPlum 6 years ago

    Adidas breaks the mould with 3D-printed performance footwear

    https://www.adidas-group.com/en/media/news-archive/press-rel...

    And it's not all plastic

    The patents for laser sintering are running out

    https://en.wikipedia.org/wiki/Selective_laser_sintering

    Yesterday my colleague at my uni's 3D printing society showed me a fully working planetary gear unit, about 30mm x 30mm x 30mm, made from titanium and printed in one pass with zero assembly.

    It's coming

    • vlehto 6 years ago

      Metals can only be SLS printed in a vacuum or inert-gas atmosphere. That makes it cost-prohibitive for home use. It's cheaper to buy a small CNC machine, and that's good enough for 99.9% of everything.

  • milesvp 6 years ago

    My understanding is that 3d printing has greatly transformed product design work. You can do a lot of rapid prototyping that used to take a lot of manual labour, and turn designs around much faster. Then, once the design is finalized, you build your injection mold for mass production.

    But I agree that desktop 3d printing certainly didn't transform much in the home.

  • gozur88 6 years ago

    The jury is still out on this one. 3D printing is used for some pretty serious industrial items, like rocket engines. Ultimately I think it's going to replace milling in most applications because you can print shapes you can't mill.

  • sigi45 6 years ago

    That was the main reason why I never bought one. I never had ideas good enough to justify it at that price.

    • icebraining 6 years ago

      Me neither, but I have ordered a few prints for various projects involving home maintenance and theater props. Quite useful, if not life changing.

  • cm2187 6 years ago

    And that you can pretty much find anything you can think of on amazon & competitors. I really wanted to design and 3d print something, but it took me several years to find something I couldn't just order for under $5 on amazon. And even then I could have done something good enough and cheaper with lego bricks!

    • Cthulhu_ 6 years ago

      Lego bricks or if you want something a bit more permanent, a bit of wood. The plastic really isn't a good material to do much with.

      • cm2187 6 years ago

        If I had a basement where I could do some DIY, I could also have gone the metal/laser cutting and folding route (which one can order online for not much), and done a bit of welding. It seems pretty neat and easy to do and very durable.

        I made the mistake of 3D printing in white plastic, which is turning yellow very quickly; not very durable, as you say.

  • segmondy 6 years ago

    SpaceX and other companies are 3D printing metal engines and parts for rockets. 3D printing is amazing and in full effect. It just hasn't become available for the consumer to print whatever they want.

  • chx 6 years ago

    I believe the problem is the sort of plastic used. You need structural strength in most things, and filament layers melted together just aren't cutting it.

    • cheeze 6 years ago

      Absolutely. One area they have revolutionized is casting. Hell, even Jay Leno's crew used a 3D printer to create prototypes of parts that were no longer made, then used the printed plastic part as the pattern for a sand mold and cast it in aluminum.

      • vlehto 6 years ago

        The next step would be to 3D print the sand mold directly.

    • greggman 6 years ago

      I thought the point was you just use the plastic as a mold for something else

  • grahamburger 6 years ago

    After reading replies I guess I should have been more specific. Early on there was some hype around the idea that 3D printers would become as ubiquitous in homes as paper printers once were. I bought in to that a little bit, but it seems like that is (and probably always was) tremendously unlikely.

  • wrinkl3 6 years ago

    I feel like that technology needs more time. Give it a few more decades, a couple of breakthroughs and lots of small incremental advances and we might yet see something resembling the Feed from Diamond Age.

  • gilbetron 6 years ago

    I don't get it. 3D printing is massive, and is shaping our future. How did it not pan out?

    • krapp 6 years ago

      The dream was that 3D printing would replace all manufacturing and distribution. That everyone was going to have a 3D printer in their home, and they would just be able to download patterns from the internet and get whatever they wanted, like replicators from Star Trek.

      • bpicolo 6 years ago

        We're barely over 5 years into it. Scifi takes longer than that

        • gilbetron 6 years ago

          That's the theme I'm seeing in a lot of the other posts: "technology X has been out for like 3 years, and it hasn't changed the world, it's a failure!"

  • swyx 6 years ago

    little plastic shapes? no.

    but big metal industrial parts that were impossible to machine might be a thing? I confess no knowledge as to their real commercial success so far.

    • vlehto 6 years ago

      You need to beat castings. And this may surprise you, but the mechanical properties of castings are often superior to those of 3D prints.

kumarski 6 years ago

CRISPR.

My naivete surrounded me. I didn't realize that even though there was a high output of genetic marker tests, very few of them would result in viable CRISPR targets.

There's 60K genetic marker tests on the market.

8 to 10 new ones come out each day.

Humanity has little hope of figuring out which of these are "druggable targets."

Genome Wide Association Studies aren't like websites or apps waiting to be optimized with more data.

The more data you feed in the more obfuscated the truth becomes.

Drug discovery is hard, and CRISPR modification is a technology way ahead of clear problem framing.

https://omicsomics.blogspot.com/2017/03/targets-drugability-...

For 12 years, I've had a chronic autoimmune disorder that I had hoped would be solved by the likes of a massive genetic dataset of ulcerative colitis twins separated at birth, where one twin had the disease and the other didn't.

I had hoped that CRISPR would be the solution given enough problem context from a solid GWAS, but this was foolish.

Our best bets in biotech are not in the latest technologies, they are in ensuring that basic science, phase 0, and phase 1 trials get large unfettered funding.

I had hoped Liz Parrish's outfit would make leaps and bounds in genetic modification for patients like me, but that ended up being baloney. https://news.ycombinator.com/item?id=11560943

CRISPR technologies are like bullets for attacking problems, but the operator is still largely blind. Bonferroni corrections abound, and family-wise error rates are unforgiving in all things related to GWAS studies.

  • 88e282102ae2e5b 6 years ago

    It's not like CRISPR didn't pan out - it's just that biology takes a long time to understand. Cas9 was only characterized 5 years ago. It's now routinely used in all sorts of research applications and clinical trials for therapeutic uses are starting up - I can't imagine a faster timeline.

  • TrueDuality 6 years ago

    A close mentor of mine has been working on private cancer research for close to three decades and just recently retired. He was both incredibly hopeful and positively terrified of CRISPR.

    He explained CRISPR as "one of the crucial technologies we don't yet have the wisdom to wield"

    I recently learned (I believe on here, in the past week) that CRISPR was used on the first human patient to treat an inherently genetic disorder.

    I'm honestly terrified we're accelerating our technology faster than our societies can adapt to the potential threats they may pose.

    • faltad 6 years ago

      Are you speaking of the person being treated for Hunter syndrome via a gene-editing treatment (I think it was done this week)? It doesn't use CRISPR but another tool named ZFN. Pretty exciting that we're not only trying these out in vivo but also in fully grown patients with a specific disorder!

      • baldfat 6 years ago

        "Like the newer gene-editing technology CRISPR, ZFNs can cut both strands of the genome’s double DNA helix at a specific location."

        They are different technologies, but zinc finger nucleases (ZFNs) and CRISPR both do DNA editing.

  • seehafer 6 years ago

    > CRISPR technologies are like bullets to attack problems

    This is a correct and useful analogy. Nonetheless, biologists are excited about CRISPR because, to continue your analogy, we were previously working with spears. And now we have bullets. Still don't know where to quite fire them yet. But it's a nontrivial advance.

  • zerostar07 6 years ago

    If anything, CRISPR is continuously improving, it is not at fault for the other problem of "what to do with it".

  • swyx 6 years ago

    whoa this is non consensus (to the biology-uneducated me). thank you for pointing this out, the mass media is still on the CRISPR hype train. Oh dear.

antjanus 6 years ago

Adobe Air. HTML/CSS/JS in a native executable with access to native methods? It sounded like a dream. And yeah, you can do Adobe Air without Flash. You could build desktop apps and phone apps with the same tools on top of it. I really liked the desktop stuff.

I made a couple of apps and used a ton of Adobe Air apps (non-Flash) back in the day. Thought it was the future.

It failed, for some reason.

^^^^ this is the reason why the rise of Electron wasn't surprising to me. In fact, I was surprised by people being surprised. I realize that Electron is WAY more sophisticated but "desktop web apps" have been around for a while.

  • shove 6 years ago

    it was soooooo slow though. But yeah, the comparison to Electron et al is spot on.

    • antjanus 6 years ago

      not for smaller apps. I mean, can't imagine running VS Code on it.

  • ashleyn 6 years ago

    Microsoft flirted with it as far back as Windows 95, in the form of the HTA (HTML Application).

    • _wmd 6 years ago

      HTAs were awesome. I wrote a very rich-UI InnoSetup-alike entirely in one back in the late 90s. Everything was there: it exposed pretty much any system interface via COM, and it was all pristinely documented in Microsoft style.

  • j_s 6 years ago

    Perhaps V8 + baseline hardware performance pushed Electron over the threshold to usable.

  • abritinthebay 6 years ago

    It was slow as hell, that was the main issue.

    It’s still around however.

Johnny_Brahms 6 years ago

I am still waiting for all my favourite applications to be extendable in GNU Guile.

On a more general note, I am still waiting for the world to fall for lisp in general.

  • jknoepfler 6 years ago

    > On a more general note, I am still waiting for the world to fall for lisp in general.

    Gah, aren't we all? Watching John Carmack push Scheme for the Oculus Rift 2 [https://www.youtube.com/watch?v=ydyztGZnbNs] made me feel like something was right in the universe.

    • marcosdumay 6 years ago

      > Gah, aren't we all?

      Since I found Haskell, no. Not anymore.

      And I don't expect Haskell to take over the world either. I do expect a large crop of new languages that extend on the concept, and some of them to take over the world.

    • leavethebubble 6 years ago

      Why not just leave the lisp bubble and admit it's a terrible type of language that lags behind ML languages in terms of productivity, safety and performance?

      • socksy 6 years ago

        Good thing that there isn't an ML bubble then ;)

  • eadmund 6 years ago

    As someone who's spent the last week implementing two different Schemes atop two different languages (Lua & Go), I think I can explain why GNU Guile didn't take off: Scheme. The problem with Scheme is that it's an utterly lovely little language with some wonderfully neat concepts which is great for teaching students about continuations … and it happens to have some broken features, like DYNAMIC-WIND instead of UNWIND-PROTECT, and some design decisions seemed wise at the time but in retrospect were foolish (Lisp-1 seems really awesome, right until you have to use it every day; eliminating NIL and making '() truthy seems very elegant and orthogonal but is actually a right royal pain — and don't even get me started about forbidding (car '())).

    Scheme pursued a foolish simplicity, foregoing the complexity necessary to deal with the real world.

    It's a wonderful language for implementing language interpreters, but I wouldn't want to write large programs in it. For that, the only thing worth using is Common Lisp.

    • Johnny_Brahms 6 years ago

      I don't happen to agree with you on eliminating NIL. I don't understand what the royal pain is.

      And dynamic-wind is the price you have to pay for call/cc. UNWIND-PROTECT is not the same thing; multi-shot continuations are the reason. One way out would be having explicit one-shot continuations and escape continuations. Those are faster and would make UNWIND-PROTECT work.

      I am more in favour of delimited continuations though.

      Delimited continuations make that a bit simpler (since you can easily implement something like dynamic-wind using them).
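
      For anyone who hasn't met dynamic-wind: a rough analogue of the simple, no-re-entry case (just a sketch in Python, names are mine) is an ordinary try/finally, i.e. essentially UNWIND-PROTECT:

          # dynamic_wind(before, thunk, after): run `before`, run `thunk`,
          # and always run `after` on the way out -- like try/finally.
          def dynamic_wind(before, thunk, after):
              before()
              try:
                  return thunk()
              finally:
                  after()

          # usage:
          # dynamic_wind(lambda: print("enter"),
          #              lambda: 1 + 1,
          #              lambda: print("leave"))

      The difference only shows up with multi-shot continuations: if a captured continuation later jumps back into the extent of the thunk, Scheme's dynamic-wind runs the "before" thunk again (and "after" again on the next exit), which try/finally, and plain UNWIND-PROTECT, cannot express. That is the price being paid for call/cc.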

Simulacra 6 years ago

Google Glass. I thought this would become standard, miniaturized, and used in all aspects of life. Perhaps I read Daemon too much, but I was sure Glass would fly

  • Nanite 6 years ago

    Google Glass's application and marketing were hijacked when Sergey Brin was shown the prototypes and decided to go for the consumer market. It was originally intended as an enterprise product, with envisioned applications somewhat similar to what Microsoft's HoloLens is aimed at today.

    Now that the hype over the original Glass release has subsided, there are actually some promising developments for the enterprise market. Notably Augmedix, a company aimed at optimizing doctor-patient interaction, uses it as a tool allowing for remote medical assistants (https://www.augmedix.com).

  • mszcz 6 years ago

    Yeah, that would have been so awesome. I told everyone I would wear one despite the fact that I would look like a total dork. I got super excited when they said it would be available around the end of 2013...

    • Simulacra 6 years ago

      I have, regrettably, invested in just about every Kickstarter, Indiegogo, and CES vaporware attempt to bring a fully wearable lifelogging camera to fruition. ZionEyez still smarts... I desperately want to be able to record the world around me, and use AI to pick out, identify, and catalog everything. Ever since I read Daemon by Daniel Suarez, and more importantly the Halting State series by Charles Stross, I've wanted this ability. We have two pairs of Google Glass and I keep them on a bookshelf, as a reminder of how close we've come to this reality.

  • swyx 6 years ago

    it died when the word "Glasshole" was coined.

    • toyg 6 years ago

      It had everything to do with recording. People wouldn’t mind people flaunting weird gadgetry, but the feeling of being recorded was too creepy. It didn’t help that a lot of early adopters were very creepy to start with.

      It could have revolutionised journalism, and I still think we will eventually get something like it once the tech is really invisible, but the social aspect will take ages to sort out.

      • netsharc 6 years ago

        Maybe in 15 years... toddlers nowadays have so many smartphone cameras pointed at them for their mom's Instasnapbooktwit feed, they will probably grow up used to having lenses pointed at them.

        • toyg 6 years ago

          Tbh it looks like the opposite might be happening, exactly as a reaction to the invasive approach from their exhibitionist parents who grew up before the ‘00s (when exposure was valuable and scarce). Most kids I see are moving towards technologies where recording is managed very consciously (staged ala instagram or ephemeral ala snapchat). Camera awareness has gone up, hoodies have become everyday clothing... Rather than “I’ll be recorded, whatever”, the attitude I see emerging is “I need to know when I’ll be recorded because I know it can make or ruin me”. The revenge-porn laws imho are the first salvo in what will become a campaign to redraw the relationship we have with cameras, in the long run.

          • dennisgorelik 6 years ago

            The reason teenagers do not want to be recorded is that they are frequently doing (or talking about) something that is illegal at their age (sex, alcohol, drugs...).

            When they reach the age when all that is legal, the need for privacy goes down quite a bit.

      • hyperpallium 6 years ago

        A heads-up display (without a camera) makes tiny devices usable (unlike watch displays).

        Google engineering was unwilling to remove such a useful feature... but Apple has withheld seemingly crucial features... so I still hope one day there'll be iGlasses.

  • api 6 years ago

    The problem is it was a portable surveillance eye for an ad company. The idea itself is viable.

  • rajeshp1986 6 years ago

    I was hoping that Google Glass would be the next de facto device/platform that everyone would be using. It feels so sad that a great idea didn't work out. Maybe it was way ahead of its time and can be relaunched.

    • myaso 6 years ago

      Likely will happen at some point. It's the next logical extension of consumer devices after the smart phone. It will happen eventually, except there isn't a clear time table as to when. The concept is general purpose enough that interesting things can be done with it.

michaelchisari 6 years ago

The blockchain. A strange thing to say when Bitcoin is nearing $8k each, but I'm not interested in cryptocurrencies as an abstract store of value based on the fever of the market.

By now, however, I really thought someone would have found a use for the blockchain as the underpinning of some kind of new app or tech that would be able to create real value for bitcoin or whichever crypto they built it on.

However, we just haven't seen that. We have a lot of gambling, a lot of whales moving the waters, and a lot of irrational exuberance.

But no solid tech. It's still early, but I haven't even heard of anything in the works that really knocks my boots off. All in all, I'm glad people are getting rich (although my hunch is that most getting rich were so to begin with), but so far the tech part of it all has been a big disappointment.

I guess other than finding a way to make space heaters that generate money. That's pretty cool.

  • marcus_holmes 6 years ago

    yep. Haven't seen a single application that isn't primarily aimed at raising investment money. It's Ponzis all the way down as far as I can tell.

    That may change, of course.

    • intruder 6 years ago

      Golem seems interesting: rent out your CPU cycles to others. I think it currently works only with Blender, i.e. you could rent out a rendering farm.

      • anonymous5133 6 years ago

        Golem is definitely interesting considering that big cloud computing companies are now selling computing power.

    • 6nf 6 years ago

      The only application I've seen other than raising money is gambling, e.g. SatoshiDice

  • Nuzzerino 6 years ago

    Hmm, have you heard of SingularityNET? There are plenty of projects being made that have real, valuable uses for blockchain tech. Marketplaces are one of them. If you want an app store that isn't controlled by one corporation that can force you to play by their arbitrary rules, blockchain can allow this to happen. A centralized front-end is necessary, but that's all it is, a front-end. They have pretty much zero incentive to play hardball with the customers.

    You're right that most of it is over-inflated hype. But to say the technology is useless is a stretch.

    • laksjd 6 years ago

      But what percentage of systems is actually ONLY implementable using a block-chain-based cryptocurrency?

      I love decentralisation as much as the next guy but it's not a feature. Apart from the obvious authority-circumvention (both positive and negative), what killer features do these systems have?

      All the interesting projects I've seen for ethereum rely on Intel SGX to bring ground truth about the real world onto the chain.

      • Nuzzerino 6 years ago

        In the case of SingularityNET, AI agents are accessible and composable. They can interact with other agents, or act in unison to form larger, more complex agents. The market incentivizes development and maturity of both the system and the individual agents. This is a killer feature if you ask me. There will be a great number of already-useful agents available upon launch of SingularityNET, to kick-start the service's ability to provide tangible value.

        However, you're right that most of these projects fail to deliver any value. Many projects are riding the hype-train, and many more are outright scams. However, the example I named is none of those.

        Edit: I should add that a big problem of centralized markets is that nobody wants to put all of their eggs in one basket. Take Second Life for example: In SL, your digital avatar and assets are siloed into that world. You cannot transfer them to the next great virtual world. This is conceptually similar to the standardization debates. Anyone can make their own protocol, but if there are profit motives behind one, the industry will be reluctant to adopt. Standardization takes a lot of time, trust, and debate. And for good reason!

        With decentralized marketplaces, standards are not quite as important. An implementation can be as fluid as an app, and the cost of replacing one interface/implementation with another is much lower than the cost of standardization. The internal workings of the marketplace itself can be altered with a democratic vote.

        This is all theoretical, we have yet to see these ideas produce real returns. But the killer feature is that people have more incentives to invest their resources toward adopting the platform, and that itself is a tangible value, as long as the adoption of the platform itself creates value in other ways.

      • unboxed_type 6 years ago

        When you have a need to establish some business procedure between different parties that do not trust each other, smart contracts might be very useful. No other database is able to provide guarantees/features comparable to a public blockchain in that respect. The trust model of most modern databases just does not comply with the "everyone trusts no one" principle.

        About relying on Intel SGX: yes, a blockchain oracle, to be trusted, needs to run in some kind of protected environment. So what? It doesn't imply that the technology is useless. I would say we have a synergy of different security technologies producing really impressive results.

    • natch 6 years ago

      “Arbitrary” rules? That’s flamebait... it’s already been explained here ad nauseam that the rules help protect the privacy and security of end users.

      • Nuzzerino 6 years ago

        > The rules help protect the privacy and security of end users

        The rules are for one thing and one thing only: To maximize the price per share of the company. This correlates with privacy and security, but only because the market has eliminated the players that do not provide security and privacy for users. But even then, there is plenty of evidence that your data is not secure or private, in the hands of tech giants. The definition of security is itself subjective and contextual.

        What the market does not eliminate (yet), are companies that abuse their power once they have it. Google dropped "Don't Be Evil" and backed it up with actions that aren't quite so benevolent. Although other services such as DuckDuckGo have launched, to try to compete, the market ultimately did not change. If this were to happen in a decentralized blockchain service, the back-end would be fully transparent. Therefore, the end users could have easily chosen another engine which is built upon that same back-end, rather than a fully watered down version that tries to reinvent the billion-dollar wheel on a sub-million-dollar budget.

      • icebraining 6 years ago

        Arbitrary (adj):

        1. subject to individual will or judgment without restriction; contingent solely upon one's discretion.

  • jjn2009 6 years ago

    2017 was the biggest year for blockchain by a huge margin by many metrics (see https://coinmarketcap.com/charts/). There is now 200 billion dollars in this space; give it another year or two. Some things will shake out for sure, probably a lot of it, but many efforts will lead to production applications.

    • pandler 6 years ago

      That's the thing. Blockchain tech is still pretty early stage in terms of reaching the general consumer. Yes, Bitcoin has been around for 8+ years now, but it takes a damn long time for a brand new type of technology to mature and work out the kinks. Ethereum has only been around for about three years (2014).

      It's easy to look at the space right now and see that it isn't currently viable, but that sentiment completely ignores the passage of time and the fact that we are moving towards the goal of general applicability.

      Yes, there will probably be a 2000s-style shakeout of all the failed or misled or me-too attempts, but the fact that there are so many projects popping up should tell you that we are still trying to figure out what we can and should do with blockchain.

  • lj3 6 years ago

    > I guess other than finding a way to make space heaters that generate money. That's pretty cool.

    That's the fundamental flaw in the blockchain, IMHO. The amount of resources in terms of computing and electricity you need to verify transactions at scale is ridiculous. Nobody's going to bother with that unless there's a serious payoff involved.

    I find it helps to mentally replace the word 'blockchain' with 'triple entry accounting.' You don't expect much groundbreaking tech work to come from something like triple entry accounting.

  • fidrelity 6 years ago

    What do you think about the Brave Browser?

    While it's just a Chromium with an integrated ad blocker + no-track, I think their curated publisher model might change the online ad industry for the better!

    • namdnay 6 years ago

      I'm not sure the Brave concept really needs blockchain - it could be implemented far more easily with a centralised DB. To be honest it looks like yet another ICO cashgrab

  • kobeya 6 years ago

    Well, what are you looking for? There's lots of tech developments happening in the bitcoin space, from mimblewimble to lightning to confidential assets & zero knowledge proofs. But it sounds like none of this is what you had in mind?

    • michaelchisari 6 years ago

      Something consumer driven that makes a $200 billion market even begin to make sense would be a start.

      • kobeya 6 years ago

        So, payments? There's plenty of work being done on that. See: lightning.

        "Market" and "market cap" are two different things. The market cap of a currency is inversely driven by the velocity of money, which for bitcoin is quite low. The "market" (cap) of the US dollar is in the trillions even by low conservative estimates.

  • anonymous5133 6 years ago

    IMO, the killer app for cryptocurrency is going to be OpenBazaar. What is OpenBazaar? Simply put, it is a peer-to-peer decentralized marketplace. Think of something like Amazon, except with no fees to sell, no restriction on what you can sell, and powered by bitcoin. Right now the software does not work fully because the only crypto you can use is bitcoin, which currently has high fees. The devs are going to switch to using Bitcoin Cash, which has low transaction fees (pennies per transaction).

    I have bought and sold items on OpenBazaar when it was working, and it worked flawlessly. The buying and selling experience is by far the best I've had. Since there are no fees (other than transaction fees charged by BTC/BCH), you can make more profit compared to selling on eBay/Amazon.

    • m_t 6 years ago

      There's no fee (except, you know, for the fees).

  • tramGG 6 years ago

    You should check out "Synapse AI" -- it's one of the first pieces of blockchain tech that actually makes sense.

    They want to decentralize AI and facilitate democratized AI economies on the blockchain.

  • swyx 6 years ago

    have you looked into IPFS/Filecoin and still view distributed storage as overhyped? or just haven't looked that hard into the different ideas that are out there?

    • DaSilentStorm 6 years ago

      First: I love the idea behind IPFS and decentralized storage and I'm sure there will be some valid use cases.

      But: There's already tech available enabling these very use cases (see https://webtorrent.io/) and the adoption is pretty low. Plus IPFS doesn't solve the problem of disappearing peers at all. If no one's willing to mirror your awesome IPFS page, it will be gone the same way as with centralized hosting.

      Fun Fact: IPFS is not BlockChain tech.

  • guiomie 6 years ago

    Same here. Smart contracts look interesting, but I haven't found a use case that WOW'ed me.

  • dbcooper 6 years ago

    Check out FunFair - a well thought out/executed blockchain gambling system.

  • justhackedme 6 years ago

    What does "blockchain" do?

    If you can answer that question, it'll be worth it.

the_decider 6 years ago

Semantic ontologies that would totally change the nature of search.

  • narrator 6 years ago

    The problem with user supplied metadata is users lie. For example, the "keywords" meta field was mainly used for SEO spamming.

    • GistNoesis 6 years ago

      If you do the counting properly, this can be taken into account. Just like in poker, where you can bluff sometimes, but if you do it always, people will call you on it. It's called Reification Done Right (in quadstore terminology), where you store metadata about the statement. Then it's just about building a reputation score for each statement. You can use simple Bayesian updates, like it's done in spam filters. Pretty quickly there is very little incentive to lie in an open-book world.

      Even more interestingly, you can compute a score between the author of the statement and the viewer of the statement, basically telling you whether or not you can trust the information given your current belief system. For example, if you are a flat-earther, you can get SpaceX news filtered quite easily. From the computational point of view it's faster not to compute the whole matrix of scores but to use some lower-dimensional embeddings, so you do a matrix factorization, and that's how you get all the user recommendation systems a la Netflix.
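
      A minimal sketch of that "spam-filter style" reputation score (a toy example with made-up names and numbers; nothing here is from the comment itself): keep Beta pseudo-counts per author and update them as statements are confirmed or debunked.

          # Beta-Bernoulli reputation score, updated like a spam filter.
          class Reputation:
              def __init__(self, prior_true=1.0, prior_false=1.0):
                  self.a = prior_true    # pseudo-count of statements that held up
                  self.b = prior_false   # pseudo-count of statements shown false

              def update(self, verified_true):
                  if verified_true:
                      self.a += 1
                  else:
                      self.b += 1

              def trust(self):
                  # Posterior mean: probability the author's next statement is truthful.
                  return self.a / (self.a + self.b)

          # An author caught lying repeatedly loses credibility quickly:
          r = Reputation()
          for _ in range(9):
              r.update(False)
          r.update(True)
          print(round(r.trust(), 2))   # 0.17

      The author-viewer score mentioned above is the further step: factor the full author-by-viewer trust matrix into low-dimensional embeddings, much as recommender systems do.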

    • williamscales 6 years ago

      I dunno, I think as long as you do statistical analysis that's vulnerable to adversarial inputs you're going to see the same problem. Meaning that you can infer the keywords and prevent obvious lies only until the users figure out how to trick your keyword machine into inferring the wrong keywords.

  • jhanschoo 6 years ago

    As I've previously mentioned, it turns out that natural language processing on the natural-language webpage that users see is a better way to rank pages than invisible metadata.

    • icebraining 6 years ago

      It wasn't just about "pages". For example Nokia had a prototype of a PIM software that could handle very natural queries like "What are the emails of people who participated in a meeting on Monday?" not by a static planner but by dynamically mapping the terms to any semantic graph.

  • marcosdumay 6 years ago

    Ontologies don't scale.

    There were plenty of people warning about that by then (I wasn't). And time showed them right.

    Interestingly, now we have Google pushing for the opposite extreme. There is absolutely no objective reality in its searches. Even the meaning of the words is linked to your persona.

  • williamscales 6 years ago

    Preach. I'll pour one out for embedded semantic metadata that Facebook bludgeoned to death.

rticesterp 6 years ago

3D Sports. In 2005, one local theater in Austin showed the National Championship in 3D. To this day it's the best football viewing experience I've come across. During the game I could see holes open for running backs and then close again in a split second. I felt like I was actually on the field as a player as the game played out. At halftime, everyone gathered in the lobby of the theater, and talking to other people there was just this feeling that this was the future of watching sports.

I was convinced at that time that 3D would eventually be the way everyone watched sports, and that it would eventually top the in-game experience. I'm still surprised that theaters and sports networks haven't partnered to show more sporting events this way.

b0rsuk 6 years ago

Separate screen for touching on the back of the mobile device.

I forget what the device was called, but there was a BlackBerry or similar device that didn't have your oily fingers smearing the very screen you're trying to interact with. You didn't obscure the display with your fingers. Instead, the touch-sensitive rectangle was on the back of the device, and the cursor was displayed on the front, in the spot corresponding to where you touched.

  • gempir 6 years ago

    I like the trend of adding this functionality to the fingerprint sensor.

    I have a pixel 2 and I can swipe down the notifications by swiping down on the fingerprint sensor.

  • arketyp 6 years ago

    That's brilliant. With phones getting thinner this would feel even more natural. It would probably allow for better tactile feedback technologies as well, and even thinner devices.

zinckiwi 6 years ago

The Commodore Amiga. Used it from release in 1985 through to 1993, at which point I had to admit defeat and defect. Much has been written, including a great retrospective on Ars, but its failure wasn't due to engineering. Way ahead of its time.

Minidisc. This was generally popular in Hong Kong and Japan, perhaps other parts of the world, but in the US I gather it was marketed as a CD alternative instead of a mix-tape alternative. Digital copy protection hobbled the latter immediately, of course. I still used a pair of them ten years after getting my first player as a poor man's two-track recorder.

  • ekianjo 6 years ago

    The Amiga was awesome, and it did sell quite well in some areas, like Europe. Its failure in the US market is clearly what killed it, as well as the lack of innovation after the Amiga 500... It did take the world by storm with the Video Toaster, though - and stayed there even long after it was dead.

0xcde4c3db 6 years ago

In the '90s I was convinced that basically everything with a name attached to it was going to be built around PowerPC and MIPS processors. When StrongARM went to Intel, I figured ARM was basically dead outside of controller/coprocessor/offload applications and expected Pentium II/III to suffer a similar fate to Pentium Pro.

  • kobeya 6 years ago

    RISC-V is a derivative of MIPS heritage and there's still a chance it will take the world by storm...

    • 0xcde4c3db 6 years ago

      I thought RISC (Berkeley) and MIPS (Stanford) were originally rival projects, with RISC being more closely related to SPARC. I wasn't there, though.

kdoherty 6 years ago

I remember being in high school when Google Plus came out and thinking it would be incredibly popular. I totally missed the mark, and it's funny because while I thought it would take off with other people, I never really used it myself.

  • Kequc 6 years ago

    I touched on this before in my rambling. Google botched it; it wasn't the product itself, as the initial offering of G+ was actually really, really good. Far better than Facebook: a return to basics in a clean interface with Circles, a grouping feature Facebook only added later.

    When Google launched G+ it was actually one of two new products the primary one being Google Accounts. Before that point you needed to use a Gmail account to sign up on any of Google's services and it made everything a big pain when you had more than one Gmail.

    Now with a single Google Account you could sign in to Google's entire ecosystem, including all of your Gmail accounts at the same time. It also came with a free, ad-free, totally simple, you-don't-have-to-use-it social network called G+.

    Google should have marketed all of this honestly. Instead they started talking up G+ and hid Google Accounts underneath it. This was a huge mistake. The general public felt like they were being tricked, because they were, albeit for far less nefarious reasons than was assumed. Nobody wanted to give their name to sign up for G+, and therefore people rejected both G+ and the Google Accounts system.

    This compounded because Google really needed everyone to switch to the new account system. Otherwise they were doomed to continue supporting both Gmail authentication and Google Accounts authentication forever. So Google began pushing for people to create Google Accounts, as you would imagine. People then saw this as Google trying to force them to give their full name, trust was already gone.

    It's amazing to me that people trusted Facebook over Google, because at the time Google had a very good record of protecting private information. Now that's all gone, of course, but back then they had an incredible amount of goodwill which was seemingly not reciprocated, and now Google is a slimy, gross privacy succubus the same as Facebook. Probably they just threw their hands up and said 'fuck it'.

    Google tried to further reduce complexity in their myriad of different platforms. They now had two social networks, G+ and Youtube. G+ was eventually going to absorb and replace Youtube; it would have been pretty elegant, really. The first step was to have the Google Accounts system absorb Youtube accounts. People resisted this again, because by this point the goodwill was gone. Eventually Google gave up on everything.

    Now G+ is a cesspool of tacked on features, advertising, just a bad smell entirely. It did have a lively base for a short while. Things really went downhill after the Youtube consolidation because G+ suddenly had what was notoriously one of the worst communities on the internet, Youtube comments.

    Google has I believe still not successfully moved everyone over to Google Accounts. Youtube was never successfully merged into G+. G+ is pretty much dead at this point. Nothing went well at all and now Google has to support a million different things that all barely work.

    Meanwhile Youtube hasn't seen a facelift in about a decade while Google figures out what it wants to do with it.

contingencies 6 years ago

X3D - http://en.wikipedia.org/wiki/X3D

Rationale: It was the 1990s, VRML flew but didn't quite make it. 3D hardware acceleration had arrived, gaming was exploding, Descent and Quake had proven complex worlds and freedom of movement were possible on commodity hardware, and talk of buffering 3D content in video streams seemed to make perfect sense for next generation passive entertainment. Product placements, architectural documentation, mechanical design courses, etc.

What killed it (IMHO): Took too long to standardize, too much formalism, no consumer electronics manufacturers on board sending actual products to market. If they had taken an IETF approach ("rough consensus and running code") they might not have missed the window of opportunity.

  • greggman 6 years ago

    What killed it, IMO, and what will continue to kill these kinds of things for a long time, is that nearly all 3D applications need massive amounts of custom optimization. Stuff like X3D gets you a few cubes on the screen. Then you want to do GTA or Google Maps or Minecraft or pick your app, and suddenly your generic scene graph just isn't good enough.

  • vortico 6 years ago

    Wouldn't a Javascript X3D-to-WebGL wrapper solve this? That would allow time for adoption to happen during the standardization process, until browsers begin to implement it natively. If nobody is using this now with a JS wrapper, then why should browsers standardize it?

  • Animats 6 years ago

    X3D is VRML, just with XML delimiters.

bane 6 years ago

I thought (and still think) the phone as a single computing device is going to replace desktops and laptops via a dock of some sort. There have been a few attempts at it by Dell and Microsoft and a few others, but none really satisfactory.

I think the most successful to date is the Nintendo Switch.

  • robodale 6 years ago

    I thought this as well. I'd love to plug my phone into a laptop-looking-thingy...and instantly have a keyboard, screen, and maybe a battery (to keep the phone charged up).

    • dpcx 6 years ago

      There actually was an Android based device that did this years ago. 2011/2012, I think; you could connect the phone to an external device that gave you a mouse, keyboard, and screen. The only problem was that the external device was something like $300 on top of the phone.

      Edit: https://www.amazon.com/AT-Laptop-Dock-Motorola-ATRIX/dp/B004... - Motorola Atrix Lapdock

    • eatplayrove 6 years ago

      What about the Intel compute stick and similar?

      • bpicolo 6 years ago

        Can't use it as a phone. The all-purpose compute device is the shtick they want

  • ifdefdebug 6 years ago

    I don't think the desktop is going anywhere - only the use case will shift around. By the time smartphones are able to beat today's desktops in performance, desktops will have evolved as well - it's always possible to pack more performance into a desktop case than into a smartphone case with the technology available at any given moment.

    • bane 6 years ago

      Sure, but pretty much any modern phone can provide enough horsepower for most users, even games. The problem really comes down to the UI and IO modes.

      The Switch works because it limits the options to just games. But for general purpose computing, even my slightly older Android phone can open dozens of tabs, play games, video, drive a high-res display, and so on.

      If it had a dock that immediately connected it to some monitors, a mouse and a keyboard, and the UI changed a hair, I'd be a perfectly happy computing user, and I'd get to carry my primary device around with me.

      In fact, it has similar specs and a better GPU than many low-end laptops that millions of people are reasonably content to use on a daily basis.

      • hyperpallium 6 years ago

        If part of your reasoning is it follows the pattern of computers getting smaller (as in The Innovator's Dilemma), in a way, this kinda sorta has happened, in the form of Chromebooks that use smartphone hardware, firmware and software.

        Of course, it doesn't include the crucial "dock" part (except in the sense that you can plug in a full-size keyboard and monitor, as you can for a laptop).

        Not sure of the situation recently, but at one time they were topping amazon lists.

  • deegles 6 years ago

    I bought into the Kickstarter for the Ubuntu phone and was honestly kind of relieved it didn’t go through.

tlrobinson 6 years ago

The converse is possibly even more interesting: what tech were you convinced would fail but took the world by storm?

  • OkGoDoIt 6 years ago

    Facebook. I was convinced "social" was just a fad. Played around with the app platform in undergrad in 2008 but thought it would be stupid to build something completely reliant on a 3rd party social platform. I had the opportunity to join FB pre-IPO in 2010 but I still thought it was a fad that couldn't last. I just don't get social.

  • CryoLogic 6 years ago

    Snapchat.

    But then again, if we look at their finances (and stock), they have always been in the red and are resorting to selling glasses and hotdog costumes now (huge losses).

    So maybe they aren't that successful after all.

    • Al-Khwarizmi 6 years ago

      Yes, that one totally got me as well...

      An app whose only novel idea is that the messages and photos you send delete themselves after some time, except for the fact that once they're in your friend's phone, they can of course take a screencap, or just have an app installed to store them... so the whole idea is based on cluelessness about basic security... and yet, there it is.

      • iEchoic 6 years ago

        It's not based on cluelessness, people are aware that there are ways to circumvent the screenshot notification. The vast majority of people don't install third-party apps to save their friends' snaps, so content you share on Snapchat is still drastically less likely to be on someone's phone forever than if you were to send it some other way.

      • BronSteeDiam 6 years ago

        It was never about security. They use FOMO to create a daily habit. When you post, people see it and react quickly, which further strengthens the addiction.

    • thatgerhard 6 years ago

      snapchat's biggest problem is the interface; it's extremely hard to use. I think the plan was to make it complicated so that older people don't get into it. Turns out nobody wants to struggle using a selfie app.

      • nemothekid 6 years ago

        >turns out nobody wants to struggle using a selfie app

        Tons of people use the app. The interface is central to its virality. Snap's growth wasn't a problem until Facebook did everything they could to clone it. Snap is failing because everyone is starting to realize that if you aren't GOOG or FB, you aren't worth spending $$$ on. Just the other day I saw an "OfferUp" ad on Snap that looked like a 90s banner ad. I can't tell if Snap went full self-serve or if they are struggling to find people to buy on their platform. And who would, after advertisers have been spoiled by FB's demographic data and GOOG's intent data?

        Edit: I feel like I spend a lot of time defending snapchat when I hardly use the app.

        • thatgerhard 6 years ago

          IMHO even if facebook cloned all the functionality, if people found snapchat easier to use they wouldn't have jumped ship the moment it became an option.

      • Double_a_92 6 years ago

        I think that's on purpose. So friends have to show you the app first... It kinda creates a "lock-in" effect, like "I already bothered to learn this with my friends, why change now?"

  • aspett 6 years ago

    React.

    Who would've known people would come to like JSX? I used to think it was an abomination.. now it's great.

    • swyx 6 years ago

      i mean.. also Javascript. people learned to live with it, and it evolved to be more tolerable through a messy mishmash of tooling and language evolution.

  • barrkel 6 years ago

    YouTube - it didn't look profitable and I thought they'd go out of business

    • hobofan 6 years ago

      From official statements here and there, and especially the "Adpocalypse" happening this year, it doesn't look like they are profitable even now.

      Sometimes I think the only thing keeping Youtube alive as a Google product is that the internet would rip them to shreds if they were to shut it down.

      • krapp 6 years ago

        I think Google was expecting Youtube to no longer exist as an entity in its own right, but just be a part of Google+ after they integrated all of their properties into a massive Facebook killer social media app, but that was a colossal failure.

        Now, I think they want to use the infrastructure to build something that competes against Netflix and Amazon, but to do that they have to destroy the existing economic incentives for content creators and drive off as many content derivative users as they can (anyone incorporating or using licensed content, even as covered by fair use), and basically rebuild Youtube from the inside out.

    • erik_seaberg 6 years ago

      I was worried rampant infringement suits might kill Google.

    • lj3 6 years ago

      YouTube loses billions of dollars a year. If they weren't being subsidized by Google search profits, they would be out of business.

  • Al-Khwarizmi 6 years ago

    Instagram.

    Sharing pictures and commenting on them, like Fotolog.com in the early 2000s, but more closed (it didn't want you to browse photos on mobile without the app)... yeah, revolutionary.

    • zimpenfish 6 years ago

      Flickr should have owned the simple sharing of pictures in a social network because ... they already had one. I'd consider that the biggest, most damning failure of Yahoo!'s management that they didn't immediately devote huge resources to countering Instagram when it appeared.

    • ejolto 6 years ago

      Instagram has a slick, easy to use, minimalist ui. Fotolog.com looks messy in comparison.

  • joelaaronseely 6 years ago

    Google! At the time I heard about this startup in the search space, I thought these people were idiots. Didn't they know there was already Yahoo!, Ask Jeeves, HotBot, AltaVista, etc.? Another search engine was doomed to failure - especially if they didn't even know enough to spell "googol" correctly.

  • vortico 6 years ago

    Uber. It's a "taxi service except about half the price because it starts over with no unions". If it's so profitable, why didn't other taxi services do it instead? Is the whole reason because it lowers the bar for taxi drivers to be hired, thus reaching those who would be fine with the lower pay?

    • thaumasiotes 6 years ago

      > If it's so profitable, why didn't other taxi services do it instead?

      Because that was illegal. This isn't a secret - it's constantly mentioned by pro-Uber and con-Uber groups.

      Reimagine it as "it's half the price because it is not the beneficiary of a government-granted monopoly on its business".

  • ckarmann 6 years ago

    Twitter

    • kobeya 6 years ago

      It's like Facebook, but without pictures, friends, or events! Just status updates. Oh, and they're limited to 140 characters. It's going to be awesome. /s

      • sincerely 6 years ago

        turns out restrictions can breed creativity :)

    • christophilus 6 years ago

      Yeah. All of the tech listed here, but this one especially. When I first heard about it, I thought it was spectacularly stupid.

  • ridiculous_fish 6 years ago

    x86. Even Intel seemed bent on dismantling it with Itanium. And while they were distracted, AMD humiliated Goliath with the Opteron. It was EPIC - no wait, it wasn't!

    • jl6 6 years ago

      I don’t recall the Opteron stealing anybody’s lunch. It was the Athlon that outshone contemporary Pentium III chips.

  • umanwizard 6 years ago

    Bitcoin.

    • narrator 6 years ago

      I knew about Bitcoin quite early, but I was convinced the government would shut it down. I am mystified as to why they didn't.

      • tlrobinson 6 years ago

        Possibly because it's not actually as anonymous as some people believe.

      • krapp 6 years ago

        Some governments have banned it, many haven't. It's likely that Bitcoin (and cryptocurrencies in general) aren't, and never were, the existential threat to governments as a whole that anarcho-capitalists wanted it to be.

        States and law enforcement are more likely to be concerned with what Bitcoin is used for, than the mere fact that it is used.

      • umanwizard 6 years ago

        The government can't just "shut down" whatever it wants in a society like the US with (relatively) strong rule of law.

      • mslate 6 years ago

        They cannot--coordinating enforcement is impossible for nation-state actors, at least for now.

        • cheeze 6 years ago

          Shutting it completely down is impossible, but making possession illegal is not.

    • kenbaylor 6 years ago

      Same here. In the early days, the price was driven by emotion. Every time there was a problem, everyone sold and the price went to the floor.

  • em3rgent0rdr 6 years ago

    Facebook

    • erik_seaberg 6 years ago

      Myspace without stylesheets. And only your university friends even have access. Yeah, did not see that getting big.

  • thatgerhard 6 years ago

    drones

    • vortico 6 years ago

      I can definitely understand why these have grown in popularity. Why is it a surprise?

      • thatgerhard 6 years ago

        I don't really know, I just kinda got blindsided.. I saw 1 or 2 people playing with drones and I was like "this is a specific group of people, like the model aircraft folk" and then the next day it was everywhere.

        Now I can totally understand that it's an important evolution in tech and will probably solve the last mile problem (at least some of it) in the near future.

  • sjg007 6 years ago

    Alexa and amazon.com

    • cm2187 6 years ago

      Has it really taken over the world? Techies love it, but I am not convinced there is a mass market for it yet. I don't know if people really like talking to objects and using non-discoverable UIs.

indubitable 6 years ago

I thought this was going to be 'The VR Thread.'

I still don't entirely understand why virtual reality didn't really catch on. I think there were some very poor business decisions by Facebook, which treated it as a success before it had even launched -- trying to ring-fence a platform, both hardware and software, whose early demographic was always going to be high-information users seems a very questionable decision.

But beyond this, I don't really understand why it didn't catch on. The first time I tried it I was hooked simply due to the issue of presence. Though perhaps the fact I ended up never purchasing a device answers my own question -- I had no interest in a Facebook driven platform, and the price point to get involved with VR for things like the Vive did not mesh well with the availability of software, as well as the fact that hardware tends to rapidly iterate.

  • Animats 6 years ago

    Spend 10 minutes with an HTC Vive or an Oculus Rift on your head, and you'll be impressed.

    Now spend 4 hours. Most of those things end up in closets. Or on eBay.

    • vortico 6 years ago

      I like this way of thinking. It explains a load of products and technologies.

    • swyx 6 years ago

      absolutely. there are things that demo well and then there are things that just fade into daily use because they just make sense.

  • phantom_package 6 years ago

    Ditto on presence.

    I bought a Vive and it's been... interesting. I actually haven't touched any games in months, but I still break it out every few weeks to prototype something in Unity/Unreal. Also, I've been using Google Blocks a ton to create lowpoly models for use in games.

    I think the tech is way too compelling to go away anytime soon, but it's obviously not the splash hit that Facebook/etc predicted it would be.

  • greggman 6 years ago

    I'm still mixed on VR. I don't believe it will be a mass market consumer item anytime soon.

    On the other hand I expect it to revolutionize 3D design (architecture, 3D art for movies, 3D art for non-VR games). Manipulating stuff with your hands in VR is 10x-100x faster than using a mouse in Maya/3DSMax/ZBrush etc. We just need someone to bring the pro tools into VR.

    As for games I loved both Farpoint (PSVR, bad game, great experience) and Lone Echo (Oculus). Also loved Vanishing Realms, though it feels more like a prototype than a full game. To a certain degree VR has spoiled my enjoyment of certain non-VR games (not all). Something is now just missing if I can't "be there". Like many who have done VR, it's the difference between a picture of the Grand Canyon and being at the Grand Canyon. Farpoint is not that great of a game, but actually feeling like I'm on another planet with 3 mile high volcanos and 15 mile high plumes of smoke is a feeling I can't get from a non-VR game.

  • CJKerr 6 years ago

    Cost is still a big factor - you need fairly decent hardware (minimum GTX 970 / RX 480) for an acceptable experience on Vive/Rift, and we're still a generation away from that performance hitting mainstream price points (generally nVidia's x50 range is considered mainstream, and next year's, let's call it the 2050, will probably be around GTX 970 performance).

    Even once you have the hardware, you need the space, and you need to decide to spend $500USD on a less-versatile HMD rather than a high-framerate gaming monitor.

    For me, space is the hangup - my main PC has the grunt, but I share the space with another PC on another desk, and that takes up all the floorspace. The cost of getting more space makes the cost of PC hardware look trivial....

  • rytill 6 years ago

    It depends on your metric for success. VR already has research applications and artists are using it for 3D modeling. VR gaming is neat, but in my expectations, not going to be the core focus of the technology.

    Generally, VR is about natural control of a simulated environment. I suspect within a few years, when our hardware gets better/is stable and more groups are developing applications, you'll see VR become a central component of more systems.

  • sigi45 6 years ago

    I still think it will take off. The question is how.

    There is a very cool company doing VR in the real world: if you touch a wall in the 3D scene, there is a plywood wall physically there.

    Or training on machines, driving school, some games, 3D modelling, 3D kitchen design, 3D house building, journeys.

  • zerostar07 6 years ago

    Hooked for how long?

    There are lots of reasons why you can't use VR for long.

    Physical: wearing the equivalent of a helmet with short cables.

    Timing: it requires being able to switch off your real-life environment for a long time - not good for working hours.

    Controls: sometimes in my nightmares I want to speak or move but can't; that's how it feels when trying to do things in VR.

hannob 6 years ago

I would have expected robots and automation technologies that replace human workers to come much faster to everyday life.

Of course this is happening in some areas, but not nearly as much as it could happen.

Just one example: it's easy to imagine running a shop that works with few to no workers. The closest thing that exists is the self-checkout register, but even those aren't that prevalent; the norm is still a human cashier.

  • blattimwind 6 years ago

    A hint of why is that while industrial robots can do impressive work, they do it under a very specific set of constraints:

    - they are very expensive in relation to consumer income

    - they are complex to program

    - generally specialized: more or less general purpose frames (like your typical 5 axis arm) with special single-purpose tools mounted to them

    - safety architecture is generally total enclosure, with no human access/interaction during operation; if not totally enclosed, usually light curtains or very stringent requirements.

    A general-purpose helper bot looks like a completely different beast from this angle.

jawns 6 years ago

In early 2013 I read this article:

https://www.sciencedaily.com/releases/2013/02/130220084442.h...

It described what seemed like an incredible advance in capacitor design that would allow xenon flashes to be installed in even the slimmest of smartphones.

I consider xenon flashes, which are found in most cameras that aren't smartphones, to be wildly superior to LED flashes, which are used in phones, and assumed that within a couple of years, the new technology would be ubiquitous.

But it never happened. So far as I know, the partnership between Nanyang Technological University and Xenon Technologies never really went anywhere.

After a couple of years of waiting, I even went so far as to try to contact the researchers to find out what happened, but never got a response.

I hope that someday this tech does hit the mass market. Although LED flash quality has improved somewhat, xenon flashes are still so much better.

geowwy 6 years ago

I expected PCs to be in every home pretty much forever. Now a lot of non-tech people prefer phones and tablets.

  • vortico 6 years ago

    To avoid redundancy, here's why I don't think phones and tablets will take over the concept of a personal computer in the future. https://news.ycombinator.com/item?id=15719836

    I believe the situation is even better. Desktops and laptops, no longer needing to support laymen (who are now on their phones and tablets), will actually become more oriented toward the needs of power users, although their total userbase may decrease. In a decade, Macs will no longer be heading toward being "Facebook machines", because you can do that on your phone, so Apple will focus more on the target userbase of creative digital artists. Windows machines will always need to support word processors and CAD programs, but in ways that are cleaner to install and manage. Linux will keep growing to keep up with the number of programmers in the world, so more professional and amateur solutions will be made available.

    There are too many cheap, low-quality PC hardware options on the market now. Their users will switch to phones and tablets. Higher-end hardware will remain the same, as power users will keep buying it.

    • icebraining 6 years ago

      Yeah, but I'm not sure I would have become a programmer if "real" computers only targeted the professional market, while much cheaper appliances solved the needs of my parents. Will a single mother be able to afford the difference just so her child can geek out? Turning computing into a privilege of the upper middle class doesn't appeal to me.

      • vortico 6 years ago

        The "learning programming was much easier back in the day" rhetoric is exaggerated. Programming on any device is much easier than it was 10 years ago in VBScript, which was easier than 10 years before. Downloading some $1 app to write code on an iPad is way easier to dive into than learning BASIC on an IBM PC. There are even apps that actually specifically help kids learn instead of dumping a 200 page manual on you.

        Also, phones and tablets will remain as expensive as, or more expensive than, the future "working man's" desktop computer, so this isn't a matter of one wealth class vs. another.

  • Muuuchem 6 years ago

    Curious, aren't phones and tablets essentially just smaller personal computers?

    • PostOnce 6 years ago

      As kobeya points out https://news.ycombinator.com/item?id=15719171

      they are not the same thing

      they do less; the software is restricted. Especially on iOS, there are _rules_ set out by the manufacturer about what is acceptable in software on the platform, both thematically and functionally. An app generally can't interfere with other apps, and it can't be a Dope Wars-style game about drugs; there are a lot more caveats, but I am far from aware of them all. For a long time you couldn't get a compiler or interpreter on iOS, so you couldn't use your "computer" to make more software... so it wasn't general-purpose at all, since by definition you couldn't change its purpose without the ability to make new software.

      • ajdlinux 6 years ago

        The bigger difference, to me, isn't so much that the walled-garden ecosystems restrict a phone/tablet's use as a general-purpose computer, as that the form factors and interfaces of the devices suit different forms of content creation, manipulation and consumption.

        Phones and tablets are great for consuming certain types of content, and having a portable camera with networking has enabled a lot of content creation. But a phone isn't a great device for doing something like long-form writing.

        • hackits 6 years ago

          I don't know about you, but for most reviews and news articles it's easier for me to put them on in the background than to deal with advertisements and pop-up windows on web pages when reading text.

      • thaumasiotes 6 years ago

        > especially on iOS, there are _rules_ set out by the manufacturer about what is acceptable in software on the platform, both thematically and functionally.

        But iOS is not a phone. Windows could set rules about what was acceptable on the platform, but that wouldn't change what a personal computer was. There are no rules about what's acceptable on an iPhone.

        • PostOnce 6 years ago

          it's either impossible or near impossible for normal people to change the operating system on an iPhone

          so iOS essentially is the phone, since there are no iPhones without it. It's not like you can just pop in an SD card with Linux on it...

      • swyx 6 years ago

        ok i expected HN to have the pmarca thesis in mind - aren't phone browsers still open software? am i being too idealistic?

  • jenscow 6 years ago

    This should be a good thing.

    Hopefully now that the non-tech people are moving away from traditional PC/laptop, this trend of dumbing down for the "average user" (read: lowest common denominator) can be stopped.

    • abricot 6 years ago

      the "dumbing down" is what made it successful - and attainable - in the first place.

      • jenscow 6 years ago

        Exactly. It can stop now.

  • flatline 6 years ago

    I’m a tech person and hardly ever use my home computer any more. And nowadays I only have the one.

muzani 6 years ago

I expected microtransaction games to take off, but not in the form of "pay to win" as it is today.

I expected things like buying side quests or episodes for games. People could just pay for as much content as they would play. It would have been similar to how musicians could sell singles instead of albums.

Paradox games and CS:GO seem to have adopted this model very well but it never happened in mobile games, where microtransactions would have been most successful.

  • Zpalmtree 6 years ago

    Well, DLC is pretty common in PC games.

tnitsche 6 years ago

Mobile/micropayments. There is still no friction-free solution that I could be sure my mother would use.

  • vortico 6 years ago

    This would be my #1, as it would have drastically changed (for better or worse) the internet. Although it's a psychological problem instead of a technical one. You can put a PayPal button or Stripe popup on your page using technology today, which will usually take 10 seconds (given that the user is logged into those services, since we're assuming that this micropayments practice is common), but it's a wild expectation that people would actually get used to willingly giving up a nonzero amount of time to pay for something as small as an article.

    Another problem may have been this. If everyone on the internet agreed that, say, watching a video should cost $0.02/minute, then people would get accustomed to paying this amount because it's such a small cost. But it's only a $0.02-per-minute decision for a content creator to restrict their video or allow it to be watched for free, and if, say, 5% of artists just decided to "eat" the potential earnings, then you'd have 5% free videos and 95% premium videos. The 5% of free videos would prevent the 95% from being watched, because why would anyone pay to watch a video if you can watch another one for free? Basically I'm saying that this idealization of micropayment content is destroyed by free content, whether that is a good thing or not. I do this myself with respect to Android apps, since I don't take my time seriously enough to pay the occasional $1 per app.
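
    For what it's worth, the "10 seconds of technology" half really is solved; here is a rough, hypothetical sketch of the server-side charge behind such a popup, using Stripe's Python library. The API key, amount, and description are placeholders, not anyone's real integration; 'tok_visa' is Stripe's documented test card token.

        # Hypothetical sketch: charging $0.99 for a single article with the
        # `stripe` Python package. Key, token, and description are placeholders.
        import stripe

        stripe.api_key = "sk_test_..."  # placeholder test key

        charge = stripe.Charge.create(
            amount=99,                    # amount in cents
            currency="usd",
            source="tok_visa",            # token the in-page popup would produce
            description="Single article unlock",
        )
        print(charge.id, charge.status)   # e.g. 'succeeded'

    The friction is everything around this call: getting the reader to produce that token and to decide the article is worth even $0.99.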

    • alexasmyths 6 years ago

      This is because VISA/AMEX have an oligopoly, and VISA is backed by the banks, i.e. the entities that issue credit.

      So the financial system is 'behind' the crappy payment scheme.

      Something would have to come along to basically offer credit without all that.

      I'm mostly surprised that Amazon and/or Walmart have not simply done that.

      I'll bet Bezos does some day: a low-interest-rate 'Amazon Card' that other merchants can use, with credit provided by Amazon (not banks) and 0% payment fees for other merchants.

      • vortico 6 years ago

        That is a good point, but PayPal and other services don't need to use cards at all (PayPal balance, attached to a bank routing number), so I don't think it's the reason micro-payments are not the norm.

        A credit card alternative from Amazon would be a fantastic competitor to VISA / MasterCard and could break the lack of innovation in the personal finance industry.

        • alexasmyths 6 years ago

          To use PayPal - you have to have your money in PayPal, or it will use an underlying credit card. So, it's really just like using a bank account to pay directly.

          Most people don't like having to keep a separate balance in PayPal to buy stuff.

          Also - I'm not sure if the merchant fees that PayPal charges are any less, i.e. they didn't take the 'no fees' angle for the merchants.

          And of course there is the consumer-benefit problem: even if a system charged merchants 0% instead of 2-3%, consumers don't care. So getting enough consumers to sign up is hard.

          VISA / AMEX prevent merchants from advertising cheaper prices for use of other cards - so even if a merchant had a 0% processor option, they're not allowed to discount by 2-3% for that card vis-a-vis VISA / AMEX.

          It's a tightly controlled oligopoly. It's huge and entrenched; it's the entire consumer financial system. Going after it would be economic thermonuclear war.

          A disruptor will have to come from an outside angle - or - be huge and have their own base.

          So I can see Amazon just 'doing it' themselves, for their own customer base, which equals a critical mass.

          Or - someone does something 'online' that shakes things up, and that bleeds over into the brick and mortar. A crypto-currency ... I can see doing that in some way.

          • vortico 6 years ago

            You can connect PayPal to your bank without a credit or debit card, so yes, they can be out of the equation if the consumer wants to. I don't think PayPal or card fees are significant in this discussion of micropayments. The issue of micropayments is getting people to pay at all, either 0% or 100%. A 3% detail doesn't have a huge effect on why they aren't commonly used.

            • alexasmyths 6 years ago

              "A 3% detail doesn't have a huge effect on why they aren't commonly used."

              At brick and mortars, where they are operating on 5% margins in many cases, a 3% skim off the top is massive.

              A 3% arbitrary fee for every electronic payment in north America is really, really quite a huge thing.

              It's quite a massive form of rent extraction, given that there is no real IP and the actual cost is almost nothing.

              It's one of the biggest and juiciest forms of obvious oligopoly/monopoly that there is.

              • vortico 6 years ago

                Right, totally agree, but this is not the reason that micropayments aren't more commonplace. It's equally difficult to get people to log into a payment system to pay $1.40 vs $1.55.

                • alexasmyths 6 years ago

                  The micropayments are not there because of the minimum charge on VISA transactions of about 25 cents.

                  • vortico 6 years ago

                    I already mentioned two ways around this, which is why your argument is void: Holding small amounts in PayPal (or similar) accounts, and bypassing cards entirely and using bank routing transactions. Both are solved technical problems.

            • jenscow 6 years ago

              Yes. The jump from $0 to $0.05 is greater than from $0.05 to $1.

  • toyg 6 years ago

    Looking at how micropayments literally devastated the gaming experience, I’m not sure I really want this to happen. It’s become a synonym of “nickel & dime”. Imagine going to a website, reading 50% of a long article, then being told mid-way that you have to pay 0.99 to continue. Want to see pictures? Another 0.99. Post a comment? 0.99. Reply to a comment? 0.99. Then you go to a chat site, want to chat with more than one friend per day? 0.99. In the same chat? 0.99. With emojis? 0.99. With animated gifs? 0.99. And so on and so forth, at every. Single. Step.

  • PeachPlum 6 years ago

    Contactless payments with either a card or your phone are widespread in the UK, from big chain supermarkets to small local stores.

    I can buy coffee at a vending machine in my Uni computer lab without cash money by waving my NFC enabled phone at it.

  • swyx 6 years ago

    i think venmo's done a great job of it in the US, and in China micropayments have done much better. I think it's in progress, just definitely not a "revolution", maybe because it doesn't fix a real pain point for people

    • singularity2001 6 years ago

      it could solve the pain point of online advertisement for those users who are ready to pay 3 cents for a page view.

altitudinous 6 years ago

Kinect - such a human interface that doesn't require direct integration with hardware or technical knowledge or buttons, and a lot of fun in the games I played. But it failed.

  • kobeya 6 years ago

    It's built into the top of the iPhone X... I wouldn't call that a failure.

  • flockonus 6 years ago

    Maybe the product failed, but I'll argue the technology did not.

    • popcorncolonel 6 years ago

      Classic Microsoft. See the Windows Phone as well.

  • swyx 6 years ago

    i'm still shocked that Microsoft decided to discontinue that.. there was so much great research being done on the back of that little device.

api 6 years ago

The open web, open OSes, open in general. We (average HN users) use these things but most people use walled-garden platforms all the time. For non-techies the trend is that way. The computing market seems to be bifurcating into pro and consumer.

Two issues have prevented this open computing for the masses IMO:

1. Spam and malware. Any open system that gets any user base becomes spam hell. This is even happening to the more open of the walled gardens now (e.g. YouTube) so the future will likely be even more closed.

2. People like "free" and with no revenue stream you can't pay programmers to work on the parts that are not fun. This includes UX, which is a brutal grind. Walled gardens have a revenue stream from being surveillance driven advertising platforms users can't customize.

examancer 6 years ago

Flying cars. I remember seeing prototypes in late 80s and early 90s, constantly showing up on Discovery Channel programs I watched as a kid like Beyond 2000 or Next Step.

Over time it seemed increasingly far out, but, possible. Eventually I realized things like how catastrophic a break down or accident would be meant it will likely never happen. I and everyone else who thought flying cars were on the horizon were suckers.

  • anon1253 6 years ago

    We have helicopters though, even quadcopters that can carry people, not to mention airplanes. But general-purpose "flying cars" are a ridiculous idea: the failure mode is catastrophic (as you pointed out), the noise would be terrible (especially in cities), not to mention pollution (you might not care about CO2, but breathable air is still nice). And there's regulation: since there are no roads, you need centralized control (e.g. commercial/military flight control systems), which makes it almost impossible to scale and would mean sacrificing, to a great extent, the main benefit of cars: autonomy. And above all that, it solves a weird problem: congestion and capacity, which are better tackled with self-driving cars and better public transit.

  • swyx 6 years ago

    same here. It took me an embarrassingly long time to come around to Elon's view that we need to go down, not up.

akkartik 6 years ago

RSS and Google Reader. What can I say, I was young and foolish.

  • vortico 6 years ago

    The fundamental question for the users here is "should I read an article on a website, which is designed exactly the way its authors intend it to be read and explored and has 100% support for all typographical elements, or in a third-party reader which displays a processed copy of the article and requires effort to set up?"

    RSS hasn't died, but it is definitely not growing, as those who prefer to use it are already using it.

    • eadmund 6 years ago

      > The fundamental question for the users here is "should I read an article on a website, which is designed exactly the way its authors intend it to be read and explored and has 100% support for all typographical elements, or in a third-party reader which displays a processed copy of the article and requires effort to set up?"

      I'd take the plain text of an article over a JavaScript-laden monstrosity of a single-page app any day of the week, and twice on Sundays.

      I really miss being able to quickly & easily read articles, without distraction. But then, I miss lynx — and I still harbour a deep-seated hatred for those who have destroyed the Internet I once loved.

    • akkartik 6 years ago

      That's one question, sure, but there's nothing fundamental about it. Here's a second question: should I have to go to the frontpage of a website to find out what's new on it, or get notifications automatically when there's something new?

      Here's a completely unprocessed feedreader that I built as a Firefox extension until #$%# Mozilla killed it: https://github.com/akkartik/spew. It answered your question with, "why yes, read each article precisely as it was intended to be read on its own website," but still used RSS feeds for push notifications. Best of both worlds.

      ---

      Here's a third question: should I prioritize articles based on how their website shows them to me, based on when they're posted, or by some other prioritization? This may be what stunted RSS: people chose with their feet to prioritize socially (reading what their friends shared: Facebook, Twitter, etc.) or by collaborative filtering (reading what people like them read: Reddit, HN, Lobsters, etc.)

      ---

      Here's a fourth question: do I want to be notified of every single post from a source? And this may be RSS's remaining niche. I no longer use it for high-volume sites like CNN or Buzzfeed. Those need prioritization. I do use it for ~250 extremely low-volume sources (that cumulatively yield a dozen or so stories a week) from whom I want to see everything: http://akkartik.name/feeds.xml.
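
      That see-everything, low-volume use case is also about all the code a reader needs. A minimal sketch with the third-party feedparser library (the feed URLs below are placeholders, not my actual list):

          # Minimal poll-everything reader for a handful of low-volume feeds.
          # Uses the third-party `feedparser` package; the URLs are placeholders.
          import feedparser

          FEEDS = [
              "https://example.com/blog/atom.xml",
              "https://example.org/notes/rss",
          ]

          for url in FEEDS:
              feed = feedparser.parse(url)
              for entry in feed.entries:
                  # Print every post; with a dozen items a week there is
                  # nothing to rank or filter, which is rather the point.
                  print(entry.get("published", "????"), entry.title, entry.link)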

    • krylon 6 years ago

      With Firefox, you can turn an RSS feed into a virtual bookmark folder that always contains the newest items. I am not sure if Chrome can do this; with Internet Explorer I could not figure out how to get it to work.

      But it is such a cool feature. Effectively, it allows me to embed a tiny RSS reader into the bookmark toolbar.

  • tehlike 6 years ago

    Take the world by storm? Or become a utility?

    Looks like neither happened.

    • akkartik 6 years ago

      Name a utility that didn't first take the world by storm.

vssr 6 years ago

Speech to text. 20 years ago there were programs already doing a fairly good job at it. By now I'd have expected to see people dictate documents to their computer instead of typing them. Not the case.

wbl 6 years ago

Nuclear power. Carbon free. Reliable. No waste if you reprocess. Cheap fuel.

  • api 6 years ago

    Three things are killing it:

    It is exponentially harder to finance a few huge super costly projects than loads of small ones.

    It has a high "PITA factor." That stands for pain in the you know what. A high PITA factor means there is a very long tail of non obvious problems that compound and multiply. It looks much better on paper than in real life because on paper that stuff doesn't show up.

    An exponential is now in place for solar and storage. It would not surprise me if almost all energy outside aviation and a few heavy industry applications is solar and batteries by 2050.

    • wbl 6 years ago

      It would surprise me. Battery chemistry has already tried everything on the periodic table together, and storage remains expensive. Right now more solar leads to more natural gas to cover loads when the sun doesn't shine. Load shifting can only go so far.

      • Jeff_Brown 6 years ago

        > already tried everything on the periodic table together

        Shall we consider the size of the space to explore? If n is the number of elements, at first blush it's n^2. But there are compounds -- more than nine million organic compounds alone. Electrical and temperature treatments can have different effects -- and those are functional spaces: you don't just choose a current, you vary the current over time. Simple positioning can have an effect: rotate one surface relative to another and it can behave totally differently. Wafers behave differently than strings, which behave differently than spheres and crystals, which behave differently than dust...

        If there was a race between our exploring the entire chemical space and colonizing the universe, I would not know which to bet on.

ilkan 6 years ago

Google Glass. IMHO they mistakenly went after the consumer and advertising markets instead of starting with maintenance workers and hobbyists. On a related note, I also expected some mobile computers would be "cyberdecks", embedded in the keyboard. Instead, the computer is the screen.

  • SimbaOnSteroids 6 years ago

    Glass failed because it was too early for the tech: they needed the functionality of the camera they had, but in something you didn't immediately notice. The display mechanism needed to be less noticeable as well. What killed Glass was that it made people uncomfortable.

    Also, Glass relaunched in July targeting enterprise applications.

codesternews 6 years ago

Mobile augmented reality, i.e. ARKit. But it was just hype, like previous waves -- nothing mainstream came of it.

I think the main reason is the UX. It is very hard for people who do not know how it works; they expect it to work perfectly (in low light, etc.), just like the normal user interface of a smartphone. Because of smartphone limitations it was not very intuitive for the normal user.

I think it might take off in the future if they improve the accuracy of the platform. It has a lot of potential; let's see what the future brings.

  • kromem 6 years ago

    Give it time.

    Just as a frame of reference, the original iPad sold fewer units in its first year than the PlayStation VR.

    Virtual reality is still in its infancy, and honestly AR is a technology that's going to have to build on top of a lot of VR's technical underpinnings.

    Hardware growth takes a lot longer and more iterations than software. VR is going to be huge eventually, and AR is then going to dwarf VR... eventually.

    There's a lot of work in wireless signal standards, battery life, displays, and pure computational power that needs to happen before these technologies can deliver on the promise though.

    We're slightly past the Newton stage, and somewhere behind the Windows Mobile 3.1 era with these technologies. It's going to still be a while before we see the "iPhone" for VR/AR, and even then, it'll be a few years before we see mass adoption.

    But in my opinion, the end result is an inevitability.

  • swyx 6 years ago

    Same, I was very optimistic, but even in good lighting I was disappointed by ARKit's performance. Still, it is too early to judge its success.

bitwarrior 6 years ago

Nanotubes. Seemed like it was all upside, the stuff we were going to be making starships out of.

  • kobeya 6 years ago

    It will be. When we're able to manufacture them in large quantities and with few imperfections. Give it time.

    • newsbinator 6 years ago

      Also it would be nice if accidentally inhaling them (during the manufacturing process, or in the wild) didn't directly cause cancer.

    • vortico 6 years ago

      Yup, it hasn't gone away or stopped growing. It might be taking too long for journalists to find interest in it right now, but that has low correlation to the research effort.

      • swyx 6 years ago

        mind explaining a little more about how far away we might be from being able to mass-produce these things?

        • vortico 6 years ago

          I'm not in the field so I don't want to try to guess how long it will be. But of course it will be in gradual increments of cost, quality, and size. There won't be an exact point in time where we transition to being able to mass produce this material on a large scale.

eklavya 6 years ago

Memristor. I thought it would reshape the entire industry.

  • swyx 6 years ago

    these things? https://en.wikipedia.org/wiki/Memristor

    why didn't they? (at a high level... im not too smart about these EE topics)

    • henrikeh 6 years ago

      AFAIK they are very difficult to manufacture, and the effects governing their behavior are poorly understood.

      Another issue is that memristors are not part of any standard curriculum, which limits the number of people who will work on them.

j45 6 years ago

Palm/HP/LG WebOS - a JavaScript-based OS a few years ahead of its time in mobile form.

  • examancer 6 years ago

    Came here to vote for this. I even published an app to their store during the ~2 weeks it looked like HP was going to throw their weight behind the TouchPad.

    It was really ahead of its time and most of its major metaphors can be found in the surviving mobile OSes. Just wish the web-centric development model survived as well.

    I have WebOS on my LG TV now. It sucks.

    • jetti 6 years ago

      I'm curious as to why you think WebOS on your TV sucks? I too have it and am content with it. To be fair, though, I don't have any other smart TV OS experience to compare it to.

      I really enjoyed the HP TouchPad with WebOS, but there were some rendering issues with it that I noticed when going to certain websites. My father-in-law got me one during the flash sale, as he worked at OfficeMax Corporate at the time and had access. For ~$60 it wasn't too bad at all.

  • dizzystar 6 years ago

    I was rooting for them back then. Web OS was truly second to none at that time. Really too bad.

d--b 6 years ago

Theranos, hands down. I don't understand why we need more than a drop of blood for medical tests. A drop is a lot of molecules.

digitalzombie 6 years ago

Those VR glasses for augmented reality, the google glasses... but nope.

  • kobeya 6 years ago

    Is it really augmented reality if it's just a HUD overlay? I was excited for Google Glass until I found out it was more about the camera and text overlays. I wanted stereoscopic full-frame overlays and bionic camera input...

phillc73 6 years ago

UDP for fast file transfer.

I was working in the television industry at the time, and a few companies were making UDP-based tools for sending large files across public networks: Smartjog, Aspera, Signiant. The companies are still around, but only operating in the B2B space as far as I know.

I thought this technology was going to replace FTP. People wanting to punt bigger and bigger files around, they needed better tools. Smartjog even open sourced an early version of their tool.

Anyway, it didn't happen. I'm not in the televisual industry anymore and find myself entirely satisfied with the current file transfer tools commonly available (FTP!). I guess there was just never consumer demand and I was looking at things from the perspective of a very narrow niche.

  • ben1040 6 years ago

    I used to work in biotech and we were using Aspera to transfer 200-300GB genomics files up to the NIH's central repository, because FTP stopped cutting it at that scale.

    http://asperasoft.com/

  • swyx 6 years ago

    that's interesting. Isn't UDP still in use today? (honestly i'm not too clear in what context, i just know that its used in some webapps)

    • phillc73 6 years ago

      Maybe it is and I'm just not aware of it. I left TV about four years ago, so not sure if the technology has made it to consumer file transfer tools, but as far as I know clients for services like Dropbox (mentioned below) don't use UDP.

      If they do, and I just don't know about it, then the future is here!

    • inDigiNeous 6 years ago

      Many games (if not most) use UDP for multiplayer functionality, as it's more bandwidth-efficient. UDP has no error recovery, packets are sent without any idea of whether the receiver got them or not, and there is no connection handshake like with TCP, resulting in lower latencies.
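
      For anyone unfamiliar, the fire-and-forget nature is visible right in the socket API. A toy sketch in Python (the host, port, and payload are placeholders):

          # Toy UDP sender: no connection setup, no acknowledgement, no retransmit.
          # If the datagram is lost, nothing tells you; host/port are placeholders.
          import socket

          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          state_update = b"player:42;x:10.5;y:3.2"
          sock.sendto(state_update, ("game.example.com", 9999))  # returns immediately

          # With TCP, by contrast, connect() alone costs a round trip before any data moves:
          # tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          # tcp.connect(("game.example.com", 9999))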

  • nikanj 6 years ago

    The #1 most used file transfer tool seems to be Dropbox. I put stuff into mine, give you a link and everything magically works.

    • phillc73 6 years ago

      Yes, that's a nice workflow for non-technical users. However, if you're transferring 50GB files, this takes some time. UDP behind the scenes could fix that.

      I guess consumers just aren't punting 50GB of data around all that much, so there's not enough wait time pain to justify it.

      • stephen_g 6 years ago

        The biggest problem with UDP is NAT, since UDP is connectionless. Still, it is extensively used in all sorts of applications.

        The reason it's not used so much for file transfer is that you basically need to re-implement everything you get for free with TCP for reliable transfer. Secondly, one of the big things that made TCP slow for large file transfers was that the way most TCP congestion control algorithms worked meant that transfer rates would drop quickly as latency increased. Google has come up with an algorithm that drastically improves this [1] and contributed it to the Linux kernel. I expect their strategy will be adopted by most operating systems. I believe Google is already using it, and I read something on Netflix's tech blog that they are at least trialing it.

        For small files, TCP slow-start is an issue but protocols like HTTP/2 can work around this by multiplexing multiple downloads through one connection.

        So I don't think there will be much advantage to UDP based file transfer for things that you want reliability for.

        1. http://queue.acm.org/detail.cfm?id=3022184
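
        On a recent Linux kernel you can see, and even opt into per socket, which congestion control algorithm is in use. A small sketch, assuming Linux and a kernel that ships the BBR module; whether an unprivileged process may select it depends on the system's allowed list:

            # Linux-only sketch: inspect and select the TCP congestion control algorithm.
            # Assumes a kernel new enough to ship tcp_bbr (roughly 4.9+).
            import socket

            with open("/proc/sys/net/ipv4/tcp_congestion_control") as f:
                print("default:", f.read().strip())        # e.g. "cubic" or "bbr"
            with open("/proc/sys/net/ipv4/tcp_available_congestion_control") as f:
                print("available:", f.read().strip())

            # Per-socket override (the algorithm must be loaded and allowed on the system):
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")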

      • nikanj 6 years ago

        I just realized I have no idea if Dropbox uses UDP behind the scenes. And I love it.

        From a larger "tragedy of the commons" point of view, using UDP to force maximum packets down the shared pipes is not fair towards everyone doing TCP. Congestion detection/prevention tries to make traffic more fair to everyone.

mmgutz 6 years ago

MS Silverlight. I thought the world needed a better Flash, and MS was behind it while IE was still popular enough. I was a corporate C# developer, so I saw everything through Microsoft. It turns out the iPhone killed Flash, and MS could hardly get anyone to adopt Silverlight.

  • sterex 6 years ago

    This happens quite a bit even now. Many corporate Microsoft technology developers have a myopic view of the software world. This is on a downward trend, but still exists.

  • dennisgorelik 6 years ago

    Microsoft Silverlight had a good overall idea, but bad implementation:

    1) Bad API (too complex).

    2) Awkward installer.

ryandrake 6 years ago

OpenGL: For a brief, glorious period, we had the promise of a truly cross-platform general-purpose graphics API. Then Microsoft began its relentless assault on it, ran its tried-and-true 1990s playbook on it and eventually FUDded it into irrelevance on the Windows platform. For a short while, it at least held up as a Linux/Mac solution, and survived on Windows as a second-class citizen. For the last few years, however, Apple seems to have even abandoned it for their own proprietary technology, dooming developers back to the bad-old-days of needing to support N platform-specific APIs if they want to support N platforms. As Trump would say: "Sad!"

  • bhouston 6 years ago

    OpenGL is the basis of computer graphics on Android and Linux and works well on Windows, so it is fairly universal these days. It does work well. There are always new proprietary APIs to compete with OpenGL, and they usually jump ahead, but then OpenGL slowly but surely consumes them.

johnmw 6 years ago

Java applets. Back in the mid 90's Java was at the peak of its hype cycle and was the darling language. I thought the idea of writing an app in a 'modern advanced' language, compiling it down to a bytecode, and running it in your browser was the future of the web. And if someone had told me back then that Javascript would be the language of the web I would have spat my coffee out laughing.

Unfortunately speed, security, and the need to install a separate runtime/plugin layer were too much for people at the time.

But hey, now 20 years later we are finally starting to see that vision in Web Assembly. ;-)

therealmarv 6 years ago

VR (yes, the standalone/PC systems or the kind you can have with your phone). I have the feeling it's dying already because most non-geeks are not really interested in it. It's like the 3D TV of 2017.

  • zimpenfish 6 years ago

    It's not just that non-geeks aren't really interested; it's that it has a whole bunch of barriers as well for those that are interested.

    1. You need a reasonably powerful PC or PS4 ($$$)

    2. You need the VR kit itself ($$)

    3. You need space to use it

    4. You need to be physically able to use it

    5. You need a lack of motion sickness, etc.

    6. There's bugger all games at the moment

    7. Probably something else I've not thought of

    • lookACamel 6 years ago

      7. You need to find time to use it. (Pairs nicely with reason 3.) Currently, VR is a separate activity which doesn't fit into one's existing activities. It's a nice experience the first time, but what's the compelling reason to turn it into a habit?

      • zimpenfish 6 years ago

        8. Wildly unportable (pairs with 7 and 3) - you can't play it whilst commuting, travelling, etc.

  • rf15 6 years ago

    A lot of geeks are also not into it: it is advertised as giving you more degrees of freedom and immersion, but VR only delivers half of those degrees (free hand movements! looking around wherever you please!) and feels limiting because it cannot offer the other half (proper free movement; you're tied down by cables, have to set up your room for it, etc.).

    Also, after a short while, you stop seeing "3D" as it is because your brain abstracts it all away anyways and will draw a similar 3D experience out of a moving 2D image and a proper stereoscopic one.

    (not to mention that the price is pretty off the charts too)

harrisreynolds 6 years ago

Parts of the old Web Services stack.

It is obvious why SOAP died (to me at least). It was a solution in search of a problem that was already solved by REST/HTTP with generic payloads.

BUT.... things like WSDL that gave you an interface description... it would be nice if something like that had survived.

The closest thing to this that I know of is Swagger. But not having a standard machine-readable way to integrate with REST APIs/services is a missing piece of the puzzle I think.

The Web Services Description Language provided that. I'm surprised that or an equivalent never got traction.

virtualized 6 years ago

Email.

I work at a software development shop and most of our internal Email-like communication happens face-to-face or via phone.

"Will you be available for a meeting two weeks from now?"

"I just wanted to tell you that I checked in my code."

"Please answer my trivially Google-able programming question right now."

"Tell person X that I want to talk to them later."

"The boss just told me [important news that concerns several people who are not at work today]."

I have to assume that most of the world works like that and Open Source communities are the great exception to this rule.

  • adrianmsmith 6 years ago

    I have been thinking about this recently and come to the following conclusion:

    - If you receive a call, it breaks your flow, which is bad.

    - If you send an email, it breaks your flow, which is bad. (As you have to wait for an answer, and do something else in the meantime.)

    So if you have a question, the optimal strategy (for you) is to make a call. But if someone else has a question, the optimal strategy (for you) is they email you.

    There's even a further level, which is that if someone else has a question, it's optimal (for you) that they email you, and you call them to give your answer, and discuss it until it's done.

    So if it's up to you (and you're thinking only about your own productivity above all else) then tell people to email you when they need something, but call them when you need something.

jetti 6 years ago

Sega Channel[0]. My buddy had it around ~1995 and it was amazing. Sleeping over at his house was always unique because they would change up what games they offered each month. There are game streaming services offered now, such as Playstation Now and I think Gamefly has game streaming as well, but it amazes me that it didn't catch on sooner.

[0] https://en.wikipedia.org/wiki/Sega_Channel

simonsarris 6 years ago

Google Wave had so many potential routes to usability as a kind of collaborative scratch pad.

Google Keep and the now-very-good Google Docs collaborating is what we really needed all along, I guess.

  • sidcool 6 years ago

    Google Keep is still quite low on features.

peterburkimsher 6 years ago

Smalltalk. Everybody was supposed to be able to program, not just professional software developers.

  • AnimalMuppet 6 years ago

    Believe it or not, that was the original goal of COBOL.

    • eadmund 6 years ago

      And SQL!

      The problem is that it turns out most human beings don't actually think in a clear and logical manner, and thus there's a need for someone to translate unclear and illogical statements into clear and logical instructions for a computer — that someone is a programmer.

      • hyperpallium 6 years ago

        And BASIC!

        BTW I find maths, with equations that are true in both directions, much less intuitive than programming, with functions that accept input and return output.

        I suppose maths is "clear and logical statements", code is "clear and logical instructions". Does that mean maths is easier than programming for most people?

farseer 6 years ago

I would have thought that by now anti-ageing research and products would be mainstream, but most of that area is still fringe research and the products seem like snake oil.

  • swyx 6 years ago

    Anti Ageing seems too broad a category to be useful. were there any specific vectors or products you had particular hopes for?

true_religion 6 years ago

XHTML. I bought into it completely, but HTML5 is a better system.

  • nayuki 6 years ago

    XHTML5 works today, and my site is live proof. It helps catch dumb typos when typing HTML code by hand. It also means the CSS is more likely to be applied to the correct DOM tree.

    • Kiro 6 years ago

      I had a look at your site and it looks like normal HTML to me, except <?xml version="1.0" encoding="UTF-8"?> at the top. How does that help anything?

      • nayuki 6 years ago

        The biggest difference is serving with the header "Content-Type: application/xhtml+xml". This triggers strict XML parsing in the browser, and any syntax error produces a big ugly yellow screen.
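
        If anyone wants to see that strict-parsing behaviour for themselves, it only takes the right header. A tiny sketch with Python's standard http.server (the markup and port are arbitrary):

            # Serve a page as application/xhtml+xml so the browser uses its strict
            # XML parser; any well-formedness error then produces a parse-error page
            # instead of silently repaired tag soup. Markup and port are arbitrary.
            from http.server import BaseHTTPRequestHandler, HTTPServer

            PAGE = b"""<?xml version="1.0" encoding="UTF-8"?>
            <html xmlns="http://www.w3.org/1999/xhtml">
              <head><title>XHTML5 test</title></head>
              <body><p>Strictly parsed.</p></body>
            </html>"""

            class Handler(BaseHTTPRequestHandler):
                def do_GET(self):
                    self.send_response(200)
                    self.send_header("Content-Type", "application/xhtml+xml; charset=utf-8")
                    self.send_header("Content-Length", str(len(PAGE)))
                    self.end_headers()
                    self.wfile.write(PAGE)

            HTTPServer(("localhost", 8000), Handler).serve_forever()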

      • AndrewOMartin 6 years ago

        Many sites are very subtly not normal HTML.

        • Kiro 6 years ago

          Example from this site?

hashim-warren 6 years ago

I was sure the Facebook Platform would take over the world. I thought the fast success of Zynga would create a blueprint for how all web apps would be built in the future.

  • zerostar07 6 years ago

    I think it did. It's Facebook itself that killed it as soon as their ecosystem had enough content of its own.

    In fact, I believe even today there's an opportunity for a social network for silly games. It's a great way to make random new friends.

  • swyx 6 years ago

    hmm that's an interesting thought. I have no explanation for it either. My best suggestion for why this didn't happen is that the world went mobile and the Android/Apple app stores then became the platforms, because of the install base and the tighter integration. Facebook (and msft of course) really, really, really fumbled mobile. Not news to anyone but i never thought of it in this context.

zwischenzug 6 years ago

erlang.

As a programmer it was a revelatory thing of beauty. When we tried to spread it within our org we found that the 'typical' (even relatively strong) programmer resisted its concepts and found it hard to learn.

Therefore it didn't scale for us.

  • jetti 6 years ago

    To me, Erlang syntax is odd and a put off. However, I am digging Elixir and diving into Erlang when need be for OTP functionality not yet wrapped by Elixir.

  • unboxed_type 6 years ago

    Most probably because of a lack of functional programming mindset.

  • gozur88 6 years ago

    How very odd. That's one language I found intuitive.

    • zwischenzug 6 years ago

      Yeah, me too. It's an interesting thing. Letting go of the C syntax and imperative mindset is hard for a lot of people. Especially if they're not motivated to make the effort.

awinder 6 years ago

I was pretty convinced that microtransactions were going to become super ubiquitous and solve a lot of monetization problems. Particularly in terms of press / media. I think you could make arguments that this still might be true in a long-term sense, and theres some movement in this space still, but I don't think this has panned out to be the slam-dunk hit that I thought it would be.

deerpig 6 years ago

For me it was VRML. I was working with SGI (not for) in Hong Kong at the time and it was amazing what you could do with it. They were even pushing it for use in interactive 3D banner ads. I went to the Tokyo event where they streamed someone in a motion capture suit in Mountain View with the animated figure being rendered on a browser in Tokyo. Very cool stuff. I wish it had taken off.

DoubleGlazing 6 years ago

Interactive digital TV.

Back when digital TV launched in the UK in late 1998, the major platforms (Sky, ONdigital, Telewest and ntl) were all hyping up their interactive services.

Sky's service was called Open.... (https://en.wikipedia.org/wiki/Open....) and you could order pizza, email, go shopping, dating, betting etc. Sky were so convinced it was going to be a money spinner they gave away all the boxes for free.

The underlying technology was also used to add interactivity to TV shows. For example, you could choose the camera angle in some football games, and the BBC made a documentary about dinosaurs that worked like a multimedia CD-ROM.

Most of these services lasted less than five years; they were sluggish, the proprietary tech didn't help, and in any case the web did it better.

I didn't think the shopping/commercial side would succeed, but I was convinced there would be a huge market for interactive TV. Nope, there wasn't.

  • arethuza 6 years ago

    I remember looking at some of this stuff when I worked at an interactive TV company in the early 2000s - I remember being deeply troubled looking at the MHEG spec:

    https://en.wikipedia.org/wiki/MHEG-5

Blazespinnaker 6 years ago

Yahoo pipes. I am still holding out hope for this, especially AI pipelines.

  • vax 6 years ago

    I really miss pipes. Wondering how much we'd have to pay Yahoo for the source code. Think they'd take $100 for it? :)

  • swyx 6 years ago

    hmm. im sure this has been redone by dozens of startups. have you investigated modern iterations? what did you find?

too_tired 6 years ago

End-to-end email encryption.

  • vortico 6 years ago

    Meaning PGP, or services like ProtonMail?

    If PGP, it's a huge hassle to set up, and both parties have to do it, so it's no surprise it is rarely used.
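
    The "both parties" part shows up even in the smallest example. A sketch with the third-party python-gnupg wrapper (the key file and address are placeholders); nothing works until your correspondent has generated a key pair and sent you the public half:

        # Sketch with the python-gnupg wrapper: encrypting requires that the
        # recipient already created a key pair and shared the public key with you.
        # The key file and email address below are placeholders.
        import gnupg

        gpg = gnupg.GPG()
        with open("friend_public_key.asc") as f:
            gpg.import_keys(f.read())   # the setup step your correspondent had to do

        encrypted = gpg.encrypt(
            "meet at noon",
            recipients=["friend@example.com"],
            always_trust=True,          # skip the web-of-trust check for this sketch
        )
        if not encrypted.ok:
            raise SystemExit(encrypted.status)  # e.g. no usable public key
        print(str(encrypted))                   # ASCII-armoured ciphertext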

ilkan 6 years ago

I expected Remote Desktop to be on every system... it's really frustrating that I can't remotely see and configure my elderly mom's iPhone screen when she needs tech support. "Do you see an x? Or is there something that looks like a backwards arrow? No? A less-than symbol? Or the word Back? No? Cancel? Maybe at the bottom?"

Steel_Phoenix 6 years ago

Projection mapping. It seems like a natural step between standard interfaces and AR. We've had the tech for a while. I want devices that use projected touchscreens. I loved the idea of Augmented Reality Sandbox as a type of toy for my kids. It seems like the tech got skipped over in anticipation of an AR tech that has yet to hit market.

  • hyperpallium 6 years ago

    I liked this too. There have been some phones with it; problem is projectors use a lot of power.

VLM 6 years ago

Token Ring Networking as a protocol/concept (not so much physical IBM hardware, of course) rather than CSMA/CD.

It naturally makes sense that allocating bandwidth via a token would be much more power-efficient and faster than transmitting into the void and hoping for no collision. It's hard to get line-rate traffic with CSMA/CD, but the financial services co I was working at back then had very high utilization, in the upper 90%s, for hours.

I also miss ATM and its complicated and semi-obscure adaptation layers; pretty cool, although toward its death it was just a weird way to send point-to-point Ethernet frames. ISDN was like that too: a whole protocol stack with interesting, exciting ideas, but toward the end it was just a 128K point-to-point interface for internet access and, of course, a voice trunk signalling protocol replacing old E&M signalling. Both could have been interesting/cool. X.25 switching never really took off either.

  • sah2ed 6 years ago

    I did a cursory search [0] on Token Ring vs CSMA/CD and it appears that "worse is better" is why CSMA/CD won out.

    CSMA/CD was much cheaper to deploy than Token Ring.

    "Token-Ring has been a bit of a mystery for many people. This is due to the fact that Ethernet, and other Carrier Sense Multi Access - Collision Detection (CSMA/CD) networks, are the most widely installed network topology. This is because most network designers cannot look past the initial cost of Token-Ring. While Token-Ring does cost more per port to install, it offers vast benefits over Ethernet and other CSMA/CD topologies."

    [0] http://www.let.rug.nl/bosveld/algoritmiek/aptokenhc7.htm

albeebe1 6 years ago

I remember going to a cafe in Harvard Square called Cybersmiths in the mid 90s. You got a card, loaded some money on it, then you could surf the world wide web at high speeds. I thought you were going to see these types of cafes everywhere, but high speed internet to the home put them out of business.

  • ufo 6 years ago

    These still exist in many third world countries. A similar service is also available for PC gaming because even when people have a PC at home with internet, they might need a better PC with a better connection to be able to play their favourite games.

1065 6 years ago

Holidays on the moon never got the traction I was hoping for.

  • Jeff_Brown 6 years ago

    This will happen for sure, at least for princes and the like.

DannyB2 6 years ago

Ten years ago I was debating with a friend that GPUs would gradually change into massive general purpose processors. Instead they have remained special purpose and difficult to program.

My thinking was that with a large number of more general-purpose cores, you could still do great graphics, but you could do many other things as well. There are plenty of problems that are embarrassingly parallel if done right. There are also plenty of applications for adding significant amounts of computational power at the cost of a good graphics card: computer vision, speech recognition, speech synthesis, photo and video editing filters and processes. Even amazing screen savers.

  • lookACamel 6 years ago

    Well deep learning is pretty general purpose so ...

RalphJr45 6 years ago

I thought the laptop arms race would be about raw power and battery, not weight, thinness and style.

  • lowry 6 years ago

    Competition in laptops has stalled; it is probably ripe for disruption. Looking at the shameful ThinkPad 25, and the hype around it... there is a market for a developer laptop that does not suck.

    • hyperpallium 6 years ago

      Laptops were disrupted years ago by netbooks, which got killed by phones.

croisillon 6 years ago

Sony's MiniDisc music format!

  • VinzO 6 years ago

    I came here to say that. It was much more convenient than burning CDs.

qilo 6 years ago

Large OLED panels everywhere: TVs, monitors, laptops, etc.

There was a lot of excitement about how, once the blue-color longevity issues were solved, it would be easier and thus cheaper to manufacture and would soon outcompete LCD. 20 years later, still waiting.

  • d3ckard 6 years ago

    That one is just becoming true I guess.

raphinou 6 years ago

Sun-readable laptop screens. The OLPC had one, and a tablet was also produced with such a screen, which could switch from color mode to e-ink mode. It amazes me that we still don't have sun-readable screens for laptops and smartphones.

vadimberman 6 years ago

Segway and Code Morphing Software by Transmeta.

andrewstuart 6 years ago

CDROM & "interactive multimedia"

Linked Data & "Open Data"

On the other hand, I really wasn't convinced about the web until I essentially missed the giant early opportunities, which I was well placed to capitalize on.

aryehof 6 years ago

I thought that we would see a world where we would increasingly "mash-up" a solution using different services accessible through APIs. One where service and data providers would provide a programmable interface to their functionality (and data), perhaps in addition to their own user-interface.

It was conceivable for a time; however, it didn't occur to me that doing so would mean no advertising dollars for them unless they made their offerings "walled gardens".

Truly to me, it seems that "advertising, marketing and consumerism" are what makes the world go around.

no_gravity 6 years ago

VR glasses / headsets

I remember when I tried the first headset sometime between 1999 and 2002. I thought "This will be big". 15 years later it's still not big.

Tablets

I expected tablets to become omnipresent. But phones took that spot instead.

coke 6 years ago

Mach microkernel and GNU Hurd

with_a_herring 6 years ago

Plan 9 from Bell Labs

  • PeachPlum 6 years ago

    I was a user from 2000. The first question new users would ask was "where's the web browser". You basically had to run two computers, one with Linux/Windows and one Plan9.

    The next question was "what are the keyboard shortcuts" and the answer was "escape toggles text entry in the shell, research says using the mouse is quicker".

    It was a hard sell.

grandalf 6 years ago

It's fascinating to read everyone's responses. Mine are:

- mobile devices with extremely long battery life (weeks)

- mobile devices that can be put into a dishwasher and do not need to be treated gently.

- HashCash

- non-ai personal assistant/concierge services

- Theranos-like tech

- Teledildonics

  • hyperpallium 6 years ago

    > mobile devices with extremely long battery life (weeks)

    Problem is charging overnight solves this for most people, and they turn to other needs.

    • grandalf 6 years ago

      Maybe. But I see lots of cases for sale w extra batteries, public charging stations, portable batteries, etc. I have to scramble to avoid a dead phone a few times per month, usually after not charging the night before.

  • jrs95 6 years ago

    I dunno about that last one. It seems like most consumer tech is trending towards teledildonics, metaphorically speaking ;)

kapilkaisare 6 years ago

I remember holding the ideas behind Jini (now practically defunct Apache River[0]) in very high regard.

[0]: https://river.apache.org/

  • erik_seaberg 6 years ago

    Thank you, I actually forgot about how excited I had once been about untrusted mobile code.

knackers 6 years ago

Google Wave. RIP T_T

TallGuyShort 6 years ago

Adapteva's Parallella. I don't follow the semiconductor industry that closely, so perhaps very similar architectures are becoming more common, but as someone with a passing interest in HPC I was very excited about their crowdfunded board for playing with parallel programming on an architecture built just for that. Unfortunately it suffered from many problems typical of crowdfunding: it was delivered late, with a lot of business problems leading up to it. I played with it a bit and haven't heard of it otherwise since.

jFriedensreich 6 years ago

The famo.us frontend framework. The initial demos and vision seemed impressive and I fell for it. Unfortunately they got overambitious and failed to get something solid out fast enough.

c_shu 6 years ago

1. E-books and products like Evernote/Google Keep. I rarely write things on paper now because copying and searching on paper is so difficult; on a computer it's a breeze.

2. VoIP: Skype, WhatsApp, etc. Many years ago I thought they could totally replace traditional phone calls, at least for 99% of users. But that didn't happen.

3. Technologies for telecommuting. Less congestion, less pollution, and it saves both time and money. But it's still not very popular. (In Asia, it's very rare.)

jraines 6 years ago

NFC, Wave, Glass, semantic web, the DCI architecture; a few frameworks that I wouldn't want to speak ill of because they're not dead yet and their maintainers pop in here.

  • blowski 6 years ago

    NFC seems to be rather popular.

    • sigi45 6 years ago

      I bought my NFC phone 3.9 years ago and never used it. Apple is now doing something with it and it might catch on, but it should have taken off way earlier.

      Right now I already have a contactless credit card and that works like a charm.

      • blowski 6 years ago

        Correct me if I’m wrong, but don’t Apple Pay and Android Pay already use NFC?

        • sigi45 6 years ago

          That's what I meant about Apple. Android Pay exists? Is it usable? I haven't seen it anywhere.

          • blowski 6 years ago

            I live in London, and both Apple Pay and Android Pay are almost ubiquitous here. I recently hired a rowboat from a guy renting them at the side of a river in the countryside, and paid with Apple Pay.

            Admittedly, I’ve no idea how much either are used, and how widespread they are outside London.

            • sigi45 6 years ago

              Nice, still hoping for it to arrive in Germany.

              There was one supermarket where I was able to pay with their app (it used a 4-digit code). It was quite nice to receive the digital receipt by email.

              It would have been easy to run automatic analysis on them, but after just a few weeks the supermarket switched brands.

              • germanier 6 years ago

                Almost all German supermarkets actually support them. It's just the banks that don't offer it. If you make use of your EU freedoms and open an account abroad you could use Android/Apple Pay in Germany.

Tade0 6 years ago

Leap motion.

I bought one used just three months after rollout - should have seen this as a red flag back then.

The user experience was great - for the first 30 minutes. After that, the pain of holding my hand in the air became too annoying to ignore.

Switching between it and the mouse/touchpad proved to be a good compromise, though.

Anyway I successfully used it for my master's thesis to precisely place a few microphones in space, but after that I stopped using it altogether - too few useful applications to bother.

magoghm 6 years ago

4GL (Fourth Generation Languages). I never thought they would take the world by storm, but many people in the 1980's thought they were the future of (business) software.

alexee 6 years ago

Online accredited bachelor's degrees in CS. It seems there is a lot of resistance in this area too. Why is it taking so long for Coursera/Udacity to implement this?

monk_e_boy 6 years ago

API / AI government. It seems that we could replace a lot of politicians with a simple bash script.

Voting reform.

It seems that with fake news, real news labelled as fake news, stupid Facebook memes, and junk internet adverts on everything... I thought the internet would be better. It started off with so much promise. Lots of smart people connected together, all chanting "Get the masses online!"... it turns out that comes with its own set of issues.

jraby3 6 years ago

3D printing. I thought it'd put China out of business and we'd all be able to 3D print random plastic Chinese goods on demand from everyone's home.

zimpenfish 6 years ago

Fractal Image Compression. Way better than JPEG in the mid-1990s but patents + expensive restrictive licenses left it pretty much dead in the water as a result.

dquail 6 years ago

Biometric Auth. My first job out of university in 2003 was with a biometric security company. Even back then the technology was shockingly mature, and yet getting it deployed was so difficult. I also remember in 2004 going to my favorite waterslide park and being able to use my fingerprint to get in and out of my locker, rather than an awkward key or combo. But a year later those lockers were replaced by clunky combo ones.

em3rgent0rdr 6 years ago

Fuel Cells.

  • donpdonp 6 years ago

    Yes, hydrogen fuel cells! Toshiba was claiming in 2006 that they were coming for laptop batteries. I still think the future of energy is solar cells, using water electrolysis to get hydrogen for storage, and a fuel cell to convert it back to electricity.
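
As a rough illustration of the solar-to-hydrogen-to-electricity chain described in this comment, here is a back-of-envelope round-trip calculation. The component efficiencies are assumed ballpark figures, not measured values.

```python
# Back-of-envelope round-trip efficiency for the solar -> electrolysis ->
# hydrogen storage -> fuel cell chain described above. The efficiencies below
# are rough, assumed ballpark figures, not measurements.
ELECTROLYZER_EFF = 0.70   # assumed: electricity -> H2
COMPRESSION_EFF  = 0.90   # assumed: losses compressing/storing the H2
FUEL_CELL_EFF    = 0.55   # assumed: H2 -> electricity

def round_trip(kwh_in: float) -> float:
    """kWh of solar electricity in -> kWh of electricity back out."""
    return kwh_in * ELECTROLYZER_EFF * COMPRESSION_EFF * FUEL_CELL_EFF

if __name__ == "__main__":
    out = round_trip(100.0)
    print(f"100 kWh of solar electricity -> about {out:.0f} kWh back out")
    # roughly a third of the input energy survives the loop under these assumptions
```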

ricardobeat 6 years ago

Bump, a technology that allowed you to match two devices (phone and/or computer) over the internet. You did this by literally bumping them together, or bumping the phone against your spacebar on a computer. A mix of geolocation, timing, and accelerometer data.

I used it a lot for file sharing, exchanging contact cards. Worked like magic, until Google bought and murdered it.
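
A toy sketch of the matching idea described above (pairing two "bump" events by time and location). This is not Bump's actual algorithm, and the thresholds are made up.

```python
# Toy sketch: the server receives a "bump" event (timestamp + location) from
# each device and pairs up events that happened at nearly the same time and
# place. Not Bump's real algorithm; thresholds are invented for illustration.
from dataclasses import dataclass

MAX_TIME_DELTA_S = 0.5     # assumed: bumps must be within half a second
MAX_DISTANCE_DEG = 0.001   # assumed: crude lat/long proximity (~100 m)

@dataclass
class BumpEvent:
    device_id: str
    timestamp: float   # seconds since epoch
    lat: float
    lon: float

def find_match(event: BumpEvent, pending: list[BumpEvent]) -> BumpEvent | None:
    """Return the closest-in-time pending bump from another device, if any qualifies."""
    candidates = [
        other for other in pending
        if other.device_id != event.device_id
        and abs(other.timestamp - event.timestamp) <= MAX_TIME_DELTA_S
        and abs(other.lat - event.lat) <= MAX_DISTANCE_DEG
        and abs(other.lon - event.lon) <= MAX_DISTANCE_DEG
    ]
    if not candidates:
        return None
    return min(candidates, key=lambda o: abs(o.timestamp - event.timestamp))
```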

fallingmeat 6 years ago

Webvan. I could have ordered groceries to my door!

  • swyx 6 years ago

    i mean.. you were right... eventually!

magoghm 6 years ago

NeWS (Network Extensible Window System).

Although, we now have web browsers + JavaScript (not exactly the same thing, but there are some similarities).

ekianjo 6 years ago

3DO. I was not "convinced it would take the world by storm" but I really liked the idea of a console standard that anyone could manufacture, all compatible with each other. The PC equivalent for consoles. Too bad it did not work out (and there are many reasons why it did not, but that's not the place here to discuss it).

santaclaus 6 years ago

3D printing.

  • api 6 years ago

    It's huge in prototyping and is being used to build parts that can't be made any other way. Just never took off for personal use.

Iwan-Zotow 6 years ago

Plan 9 (or rather the whole stack of P9/Alef/Inferno/Limbo). It seemed so elegant and small, and could run everywhere.

roryisok 6 years ago

Windows phones! And before those, MiniDiscs.

CM30 6 years ago

From the gaming world, some I believed would be popular (but ended up failing) are:

1. Augmented reality. When I played around with it on the 3DS, I thought the idea was going to really blow up and that we'd end up with a ton of games based around it. And in the first year or two, we did get a few like that, such as Kid Icarus Uprising having said features or Spirit Camera being entirely based on the technology.

But it quickly died off afterwards, and since about 2012 I don't think I've seen a single major game on the system where AR has been a central feature. Same goes with other consoles and systems too. It's occasionally been advertised (like with Microsoft and the Hololens or whatever it is), but it's generally remained a niche idea.

2. Also, the eReader. No, not the Kindle type, that thing the Game Boy Advance had in the early 00s where you could scan cards to unlock features in games. I genuinely believed that would be a huge revolution, to the point of importing the device from America to try it out. Alas, it failed pretty damn hard, and even the titles which had support for it in the US dropped said support in the European versions.

3. For non gaming stuff, VRML was a good example as well. Again, I thought the future of the internet would be 3D worlds accessed through the browser, and looked at the sites writing about it as if they were a glimpse of a high tech future only a few years off in the distance. Nope, VRML failed, and pretty much every attempt to implement VR functionality in the browser died too.

4. Also, I'm not sure if it counts as revolutionary on a tech level, but at one point there was a lot of talk about using oauth to tie various communities together into something akin to a forum network or Reddit equivalent. I think a group called Zoints tried this in the early 00s or so, and I expected their service to do pretty well off it.

Again, didn't really happen. Shared login systems did, in the most basic sense (login with Facebook/Twitter/Google/whatever) but it seemed people preferred walled gardens run by large companies over individual communities networked together.

Finally, there were an awful lot of Google products and services I expected to take the world by storm too. Google Wave has already been mentioned, but Google Buzz was another one. I think it acted as a neat alternative to using Facebook comments or Disqus when it was active.

But yeah, my record with tech predictions is not exactly a great one.

dexterdexter 6 years ago

Segway! I'm surprised no one else mentioned this. The hype equaled the excitement it triggered in people. Personally, it signaled the arrival of the future at the time. Fast forward to today and they are just niche transportation devices used by tourists and, to a lesser degree, by some security officials.

  • krapp 6 years ago

    The hype with Segway was high as long as no one knew what it was... for a while people were seriously speculating that it was an anti-gravity hoverboard like from Back to the Future or something just as exotic - but as soon as everyone found out it was just a scooter (albeit a slightly clever one), the hype vanished.

rodolphoarruda 6 years ago

Wearables. I was expecting much better sensors and services to be consumed, especially in the healthcare field.

What I see now is extreme attention to the design and looks of smartwatches, but little talk about the kind of value they could be adding to people's lives when connected to online services, phones, or what have you.

trhway 6 years ago

Quantum computers. I thought so until I spent some time looking at superposition & entanglement and got convinced that there is no superposition & entanglement. Looking at the published Bell inequality experiments, I see exactly the opposite of what those experiments supposedly confirm :/

MarkMMullin 6 years ago

The original idea of the internet as a highly distributed fault tolerant system that could route around localized failures. I will admit that part of that philosophy emerged when leaving arpanet and trying to get stupid bang email addys and netnews flowing over dialup :-)

ezconnect 6 years ago

Google Glass. When I first saw it, I thought it would be the greatest tool every human should have.

arca_vorago 6 years ago

Wireless power. In about 2007 I was convinced it was "the future", but other than a few mobile devices with "lay it directly on top of this charging pad" setups, the safety and power-loss issues seem to be pretty large barriers to overcome.

NautilusWave 6 years ago

Mirasol display technology. I wouldn't be surprised if the display's colors simply weren't saturated enough to be marketable; but I still pine for a screen that I could actually see better under bright light instead of struggling with glare.

epx 6 years ago

Windows Phone

virtualized 6 years ago

Solid State Drives.

In 2012 I did not expect that you could still buy $2000 laptops with spinning rust in 2017.

  • dsschnau 6 years ago

    Yeah, but that's just because spinning platters are still cheaper per byte - if you don't care much about performance you buy that. But any computer worth using (IMO) has an SSD in it, and that's been the case since 2012.

vog 6 years ago

Parser generators. They get better and easier to use every year (PEGs being the newest kid on the street), but we still see loads of buggy ad-hoc parsers with loops and regexes. And many of those bugs turn into actual security issues.

  • Jeff_Brown 6 years ago

    Yes, these are great! Text.Megaparsec.Expr blew my mind. It lets you write something that can evaluate expressions like "(3 + 4/(-2)) * 7" -- a nested expression parser, with prefix, postfix and infix operators of varying associativity and precedence -- in 12 lines of code. I made a repo about it [here](https://github.com/JeffreyBenjaminBrown/megaparsec/tree/mast...).
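
For readers who don't know Haskell, here is a rough Python analogue of the idea, not the Megaparsec code the comment links to: a small recursive-descent evaluator handling precedence, parentheses, and unary minus.

```python
# A rough Python analogue of the expression-parsing idea above (not the Haskell
# Megaparsec code): a small recursive-descent evaluator for expressions like
# "(3 + 4/(-2)) * 7", with precedence, parentheses, and unary minus.
import re

TOKEN = re.compile(r"\s*(?:(\d+\.?\d*)|(.))")

def tokenize(src: str) -> list[str]:
    # Numbers or single-character operators; whitespace tokens are dropped.
    return [num or op for num, op in TOKEN.findall(src) if (num or op).strip()]

def evaluate(src: str) -> float:
    tokens = tokenize(src)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat(expected=None):
        nonlocal pos
        tok = tokens[pos]
        if expected is not None and tok != expected:
            raise SyntaxError(f"expected {expected!r}, got {tok!r}")
        pos += 1
        return tok

    def expr():                      # expr := term (('+'|'-') term)*
        value = term()
        while peek() in ("+", "-"):
            value = value + term() if eat() == "+" else value - term()
        return value

    def term():                      # term := factor (('*'|'/') factor)*
        value = factor()
        while peek() in ("*", "/"):
            value = value * factor() if eat() == "*" else value / factor()
        return value

    def factor():                    # factor := '-' factor | '(' expr ')' | number
        if peek() == "-":
            eat()
            return -factor()
        if peek() == "(":
            eat("(")
            value = expr()
            eat(")")
            return value
        return float(eat())

    return expr()

print(evaluate("(3 + 4/(-2)) * 7"))  # 7.0
```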

smt88 6 years ago

Google Glass

realrocker 6 years ago

Smartwatches. I was convinced enough to bet two years of my life working on them.

  • pimmen 6 years ago

    I have two big reasons why I didn't get a smartwatch. Number one is the price: I just don't think it's worth hundreds of dollars not to have to pick up my phone when I get a notification.

    The other is that the interface is too small to do anything but consume notifications or look at a compass. And I just don't see the problem being solved by speech recognition either; I don't want the rest of the bus to know that I have to Google "where do I buy bigger condoms?".

    • sigi45 6 years ago

      My reason: Don't like to add another thing on the power cord every night.

  • swyx 6 years ago

    i was also extremely excited about them but didn't bet a career on it. i think it's a combination of the tech (esp battery tech but also networking) not quite being there yet + a lot of people just plain don't wear watches no matter what. i wouldn't give up on them just yet (for that subset of the general population)

stfnhrrs 6 years ago

:CueCat. This device allowed you to open a link from a magazine or newspaper without typing it, truly a timesaver! I can't figure out why everyone wasn't using these. It also had a really cute form factor.

JustSomeNobody 6 years ago

Mesh networking.

I figured that by now the cost of nodes would be so low and the technology so much more advanced that everyone could just stick a few nodes around and we'd have a completely free, open, and decentralized internet.
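
A toy sketch of the core mesh idea: nodes relay messages to their neighbours and de-duplicate by message id, so traffic can route around any single failed node. Real mesh protocols (e.g. B.A.T.M.A.N. or 802.11s) are far more sophisticated; this is only an illustration.

```python
# Toy mesh: every node relays messages to its neighbours (flooding, with
# de-duplication by message id), so there is no central provider and traffic
# can route around any single failed node.
class Node:
    def __init__(self, name: str):
        self.name = name
        self.neighbours: list["Node"] = []
        self.seen: set[int] = set()      # message ids already forwarded
        self.inbox: list[str] = []

    def link(self, other: "Node") -> None:
        self.neighbours.append(other)
        other.neighbours.append(self)

    def receive(self, msg_id: int, payload: str) -> None:
        if msg_id in self.seen:          # already handled: stop the flood here
            return
        self.seen.add(msg_id)
        self.inbox.append(payload)
        for n in self.neighbours:        # relay to every directly reachable node
            n.receive(msg_id, payload)

# Tiny demo: a chain a - b - c - d; a message injected at a floods all the way to d.
a, b, c, d = Node("a"), Node("b"), Node("c"), Node("d")
a.link(b); b.link(c); c.link(d)
a.receive(1, "hello mesh")
print(d.inbox)   # ['hello mesh']
```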

swah 6 years ago

Segways - I thought everyone moving more than a block would be on one.

antfarm 6 years ago

Ubiquitous P2P networking.

  • api 6 years ago

    I see more and more of that.

  • swyx 6 years ago

    how was that supposed to work, in your view?

Raphmedia 6 years ago

Pokemon Go. It did take the world by storm but promptly died due to incompetence from Niantic. One can only hope that they didn't set enthusiasm for AR games too far back.

  • jetti 6 years ago

    I don't think Niantic was responsible for the decline but just the game itself was. It only garnered the attention it did because of the Pokemon name but at the end of the day the game was a grind. I saw a lot of people lose interest because there was nothing really unique about it. Everybody that I knew that played it turned the AR off as well, so it became just another game.

    • Raphmedia 6 years ago

      > I don't think Niantic was responsible for the decline but just the game itself was.

      They re-skinned their previous game and only added a gym fighting minigame. It's entirely their fault. Even today the game doesn't have all the features shown in the trailer. Their game has no end-game content either so even the core demographic lost any reason to play.

      • jetti 6 years ago

        That is true. I guess you can't blame the game without putting the blame on the developers. I'm honestly shocked that the game got approved by all parties involved, though I'm sure it still made a bunch of money for all parties involved.

  • CodeCube 6 years ago

    I still can't believe that trading pokemon between friends wasn't a day 1 feature.

wj 6 years ago

Palm. A computer (some with wireless Internet) in your pocket!

technofiend 6 years ago

Iridium held so much promise and was such a disappointment.

  • swyx 6 years ago

    the chemical element? what was the promise you were excited about?

    • technofiend 6 years ago

      The satellite phone provider that filed for bankruptcy.

GFischer 6 years ago

I thought videocalls and live video-shopping would be ubiquitous (and I put my money where my mouth was).

I still think we're due more interactive shopping experiences.

  • michaelmior 6 years ago

    Aren't video calls pretty ubiquitous now? (Facetime, Skype, etc.) Not in connection to shopping though.

Heraclite 6 years ago

NFC on the phone.

I thought it would revolutionise lots of things. Turns out it took 10 years longer than I expected and is just a "fun" feature for now.

  • jpatokal 6 years ago

    NFC & Android/Apple Pay is actually kinda amazing if you live in a country like Australia where tap to pay is universally supported. It's starting to get traction for transport smart cards as well.

robk 6 years ago

OS/2

  • swyx 6 years ago

    why were you convinced it would take over the world? I'm pretty ignorant on OS history, but would love to learn.

godisdad 6 years ago

Types.

  • Jeff_Brown 6 years ago

    And purity! And I'm still waiting for dependent types. Idris looks beautiful but I'm afraid to commit to such a new language.

lldata 6 years ago

Scala ... now betting on Kotlin to replace Java.

ainiriand 6 years ago

Any GNU/Linux desktop for the common user.

krapp 6 years ago

I didn't expect it to take the world by storm, but I expected Hack to be a lot more popular by now than it seems to be.

amelius 6 years ago

I was convinced at some point that CPU clocks would become much faster than the 4 GHz that seems to have become the limit.

deepnotderp 6 years ago

Silicon on insulator.

I was convinced the entire industry would rapidly adopt it and that it would quickly replace bulk CMOS.

mindcrime 6 years ago

Various approaches to authentication / identity management:

OpenID, WS-Federation, etc.

Multicast

XHTML, XQuery/XPath/XLink/XPointer/etc.

antfarm 6 years ago

Interactive computer simulations of complex adaptive systems, i.e. flight simulators for decision makers.

  • swyx 6 years ago

    i remember the Scorpion guy claimed to have built something like that and somehow applied it to war scenario modeling in Afghanistan? Smelled like a crock of b.s. but then again I don't have a 200 IQ

    for those who dont know about it: its this thing: https://scorpioncomputerservices.com/scengen

eyko 6 years ago

The semantic web, and speech to text.

susi22 6 years ago

Spaced repetition is the most efficient way to learn associations. Though it's still barely used.
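
A minimal sketch of the spacing idea, loosely inspired by SM-2-style schedulers (SuperMemo/Anki); the constants are simplified for illustration and are not the real algorithm's parameters.

```python
# Minimal spaced-repetition sketch: each successful review multiplies the
# interval before the next one, a failure resets it. Constants are simplified
# illustrations, not the parameters of any real scheduler.
from dataclasses import dataclass

@dataclass
class Card:
    interval_days: float = 1.0   # days until the next review
    ease: float = 2.5            # growth factor for the interval

def review(card: Card, remembered: bool) -> Card:
    """Return the card with its next review interval updated."""
    if remembered:
        card.interval_days *= card.ease      # space reviews further apart
    else:
        card.interval_days = 1.0             # forgot it: start over tomorrow
        card.ease = max(1.3, card.ease - 0.2)
    return card

c = Card()
for outcome in (True, True, True, False, True):
    c = review(c, outcome)
    print(f"next review in {c.interval_days:.1f} days (ease {c.ease:.2f})")
```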

apapli 6 years ago

100VG-AnyLAN. It was superior to Ethernet in performance, but its proprietary nature held it back.

firozansar 6 years ago

Personally I thought Google Glass had potential, but technically it's not dead yet.

pknerd 6 years ago

Internet Explorer... kidding :P

Lapsa 6 years ago

Microsoft LightSwitch :->

foobazzy 6 years ago

Nokia + Belle operating system. It was a beautiful future I dreamed of.

magoghm 6 years ago

Fractal image compression.

dejv 6 years ago

Smart homes, with computer-controlled switches and automation.

peterkelly 6 years ago

Linux as a desktop OS

  • k3a 6 years ago

    Unfortunately. I still use it as my primary desktop but I see why people prefer others.

    It seems like most people don't want flexibility and configurability, and they don't want to learn to manage an OS. They just want to use the PC comfortably and not have to care about anything else.

    For example, one of the few things I find annoying on Linux is the multiple GUI frameworks, each having its own open-file dialog, so I can't always easily open a recent folder or recent file. It looks totally different in Qt and in GTK.

    I think the Linux kernel is very solid and cool, but userspace is a mess; there is probably not enough standardization. For example, there's also X11 (hacky, security problems) vs. the still buggy and unfinished Wayland. :( Compare that to OSX frameworks.

    Yet I love free software, appreciate all the effort people put into it, and will continue using it. Maybe one day I will also be able to help GNU/Linux improve...

anonymous5133 6 years ago

Cell phone coverage broadcast by satellites. The FCC eventually killed off the plan. A company called LightSquared launched a satellite to make it happen.

reacweb 6 years ago

Google glasses. For me, a smartphone should not have a display (it should only be a touchpad), and the glasses should be the display + camera + headset.

saluki 6 years ago

QR Codes . . . although they are BIG IN JAPAN

  • cheeze 6 years ago

    IMO the biggest issue with them is that the average consumer doesn't know what to do with them. Neither iPhone nor Android (that I'm aware of, at least) exposes an easy way to read a QR code that would be intuitive to even a large minority of users.

  • dnh44 6 years ago

    Well you should see how common they are in China.

    • mszcz 6 years ago

      I don't know and I can't tell if that's sarcasm ;)

pers0n 6 years ago

E-ink, RSS, virtual reality, WebRTC, Firefox OS

amelius 6 years ago

Laptop battery power that lasts a month.

dogcow 6 years ago

XMPP - federated instant messaging

GnarfGnarf 6 years ago

Bubble memory. Josephson junction.

magoghm 6 years ago

The Eiffel Programming Language.

antfarm 6 years ago

Beacon technology in shops.

magoghm 6 years ago

The Connection Machine.

MBO35711 6 years ago

Space flight. Sigh!

cjsuk 6 years ago

Windows Phone 7 :(

  • jetti 6 years ago

    I really liked aspects of my Windows Phone but there were other things that just drove me crazy. It would lock up when I was on a call and I wouldn't be able to do anything. I'd have to pull the battery in order to get it back to a usable state. The speakers were terribly quiet too; I couldn't hear my phone ring if it fell under the couch. I absolutely loved the live tiles though.

    • cjsuk 6 years ago

      Yes that was always the problem. If it worked properly it would have been amazing. But it didn't. My daughter has a Lumia 650 and it's a buggy pile of crap. That's SOP from MSFT mobile products.

rssllm 6 years ago

The Physical Web.

magoghm 6 years ago

Space Colonies.

magoghm 6 years ago

Expert systems.

bstamour 6 years ago

xhtml: it's just a good idea, IMO.

sicher 6 years ago

Magic wands.

Jeff_Brown 6 years ago

Anybody who thinks reading and writing is powerful, and anybody who thinks sharing is powerful, ought to be excited about knowledge graphs.

Knowledge graphs are powerful -- they underpin Google search, Facebook, Siri, etc. And open source tools exist for keeping your own knowledge graph. (Here's my favorite: https://github.com/synchrony/smsn/wiki/. It has, to my knowledge, two ongoing users.)

I was part of a small study group once -- four to seven people, meeting daily for a couple hours to share economics notes. It was critical -- I could not have made it through the first year of grad school otherwise. Knowledge graphs can scale to far greater numbers of users.

So many people are biting their nails about the rise of AI. I believe we could leverage existing technology to bring about human superintelligence before then.

Internally, our minds work nonlinearly, but when it comes to written media, we process text linearly. That is slow, wasteful -- if someone writes a book full of gems of wisdom, but scatters redundant illustrations, obvious examples, or unnecessary motivating passages between them, you've got to wade through that chaff in order to find the gems. Knowledge graphs let us write and read nonlinearly -- faster, better targeted, hence able to cover more information, more kinds of information. It's like tables of contents all the way down.

Writing is great because it allows a reader to build on the work of earlier writers. Knowledge graphs, in addition to that, let readers build on the work of other readers. If I mark a passage as "obvious" or "critical" or "beautiful", that metadata can guide another reader. Online systems already use this idea to some extent -- facebook likes, reddit and HN votes, etc. can help inform reader choices. But knowledge graphs in principle allow for arbitrarily general metadata -- for instance, "show me every statement [group] has written about [topic] which has been marked useful for [goal] by at least [number] readers".
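
A toy sketch of the reader-metadata query just described; the data model and field names are invented for illustration only.

```python
# Toy sketch: statements are nodes, readers attach labeled annotations to them,
# and a query filters by author group, topic, and how many readers marked the
# statement useful for a given goal. Data model is invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Statement:
    text: str
    author: str
    topics: set[str]
    # goal -> set of readers who marked this statement useful for that goal
    useful_for: dict[str, set[str]] = field(default_factory=dict)

    def mark_useful(self, reader: str, goal: str) -> None:
        self.useful_for.setdefault(goal, set()).add(reader)

def query(statements, group, topic, goal, min_readers):
    """Statements by anyone in `group`, about `topic`, marked useful for `goal`
    by at least `min_readers` distinct readers."""
    return [
        s for s in statements
        if s.author in group
        and topic in s.topics
        and len(s.useful_for.get(goal, set())) >= min_readers
    ]

s = Statement("Sunk costs are irrelevant to decisions.", "alice", {"economics"})
s.mark_useful("bob", "exam-prep")
s.mark_useful("carol", "exam-prep")
print(query([s], group={"alice"}, topic="economics", goal="exam-prep", min_readers=2))
```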

Publishing even a small body of useful, organized information can be extremely valuable. Craigslist, for instance. Although there was an obvious economic argument for publishing the information on Craigslist.

In fact, economics might be making us stupid. Non-monetizable information is of enormous importance. Globally, we are experiencing an epistemic crisis. Huge swaths of the public mistrust science, journalism, history. We appear to be at least nearly as susceptible to fascism as we were in the thirties. The arguments for freedom of speech, or civil rights, or being kind to strangers, ought to be as obvious to us as which job pays how much money.

Camus talked about something he called "philosophical suicide", wherein someone stops trying to think high thoughts. They fall into an economic routine, they specialize, their awareness narrows. They might excel at what they do. But a world full of such narrow thinking is a dangerous place.

I had heart surgery, almost died, and for years after, knowledge-gardened furiously. I wrote, reviewed, organized, categorized. I pondered long passages and reduced them to a few words, which became easier to process. What is pleasure? What are the elements of consciousness? Where is my boundary of certainty in ethics? Can I enumerate the state space of a conversation? It was a transformative experience -- I changed from an awkward, selfish, angry young man to a warm, gregarious, relaxed middle-aged man.

It was also slow, because I was working alone, and I used trees, which are less expressive than graphs. I can only dream of how transformative it would be to process those ideas in a group, nonlinearly, using a knowledge graph.

beamatronic 6 years ago

WebTV

  • Double_a_92 6 years ago

    Well didn't it? Considering Youtube and Netflix...

    Or is that some specific technology that I don't know about?

    • vortico 6 years ago

      No, it hasn't. The majority of TV viewers still use TV utility providers, not ISPs. When the average couch dweller sits down, grabs a remote, and turns on Netflix instead of "channel 42" on cable TV, WebTV will be commonplace, but it won't happen until the young Netflix-watching generation completely replaces the older cable-watching generation.

      • jayflux 6 years ago

        This is moving a lot faster than you think, we may be thinking more in years than generations.

Manicdave 6 years ago

Google glasses and those VR headsets

kapauldo 6 years ago

Virtual reality. It was a solution looking for a problem last decade, and it still is.

timthelion 6 years ago

Microsoft XAML and .NET. They seem popularish, but the lack of an open ecosystem means that they didn't take the world by storm.

throw-away-8 6 years ago

Lisp machines. Expert systems. Functional languages: Scheme, Ocaml, etc. Itanium. OpenStack.

fallingmeat 6 years ago

CueCat. Coupon clipping was going to be rocked!

  • swyx 6 years ago

    were you actually convinced it would take the world by storm? is there a deeper story here?