devindotcom 6 years ago

It's amazing to me how researchers spend a day or two looking into something and come away with tens of thousands of bots, in this case some that are nearly a decade old. Seems like Twitter can't be looking that hard, though it's far from a simple task and it's probably all they can do to stem the tide of new bots day by day.

  • acdha 6 years ago

    It's really hard to escape the cynical conclusion that for years they've been trying to ignore anything which would lower engagement metrics.

  • rhema 6 years ago

    Even with 95% detection accuracy, a good algorithm can still point the "is a bot" finger wrongly at many people. If 10,000 accounts are flagged and banned, it could upset 500 real people. So Twitter has to ask whether it's easier to put up with 9,500 not-so-active bots, or to risk making 500 real people very upset.
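    The arithmetic above can be sketched directly (illustrative numbers only; the 95% figure is the commenter's hypothetical):

```python
# False-positive arithmetic for a hypothetical bot classifier.
flagged = 10_000        # accounts the classifier labels as bots
precision = 0.95        # hypothetical: 95% of flagged accounts really are bots
true_bots = int(flagged * precision)    # bots correctly banned
false_positives = flagged - true_bots   # real people wrongly banned

print(true_bots)        # 9500
print(false_positives)  # 500
```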

    • notahacker 6 years ago

      And academic and journalistic investigations into political bots have often subsequently discovered that a non-trivial proportion of apparent bot accounts are actually manned by ordinary people who genuinely felt that retweeting certain influencers for hours on end and indiscriminately replying to other political figures with memes and slogans was a good use of their time during election cycles. It's Human Turing Test Failure central.

      • DyslexicAtheist 6 years ago

        >> retweeting certain influencers for hours on end and indiscriminately replying to other political figures with memes and slogans

        My twitter feed has become much more civilized since muting retweets[1] from everyone I follow. This leaves only genuine comments & discussions. I can again focus on what people are saying, and toxic accounts that previously slipped through in the noise become visible, allowing me to weed them out.

        [1] https://medium.com/@Luca/how-to-turn-off-retweets-for-everyo...

      • bogomipz 6 years ago

        >"And academic and journalistic investigations into political bots have often subsequently discovered that a non-trivial proportion of apparent bot accounts are actually manned by ordinary people ..."

        Do you have a citation for this? Or do you have some resources you could share that support this?

      • andrewbinstock 6 years ago

        Any source for this? Not disputing it, but would like to see some evidence.

        • jandrese 6 years ago

          Wasn't there a mass bot purge a couple of weeks ago immediately followed by complaints that Twitter was silencing important conservative voices because they're part of the Deep State?

          • bdcravens 6 years ago

            I believe the complaint was that reduced follower counts hurt the credibility of some real accounts.

        • ATsch 6 years ago

          The 34C3 talk "Social Bots, Fake News and Filterbubbles" comes to mind (unfortunately the original is in German).

          The tl;dw is that when defining "bots", most researchers used very misleading metrics, e.g. ">20 tweets/day", and the worries and studies about "social bots" shaping discussion on Twitter are generally not founded in reality.

          https://media.ccc.de/v/34c3-9268-social_bots_fake_news_und_f...
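          A toy illustration of why a raw activity threshold is a misleading bot metric (the 20 tweets/day cutoff comes from the talk's critique; the example accounts are made up):

```python
# Hypothetical "bot detector" of the kind the talk criticizes:
# a single activity threshold, which mislabels active humans.
def naive_is_bot(tweets_per_day: float, threshold: float = 20.0) -> bool:
    return tweets_per_day > threshold

print(naive_is_bot(35))  # an enthusiastic human poster -> True (false positive)
print(naive_is_bot(5))   # a low-volume automated account -> False (false negative)
```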

    • TehCorwiz 6 years ago

      It feels like the solution is some version of person-hood verification.

      Back in the day my brother and I both played on various MUDs (multi-user dungeons/dimensions). At the time most people had dial-up, but thanks to my father's business we had an ISDN line and multiple computers. To prevent bots, the game assumed that multiple logins from the same IP were bots. So, to combat this without blanket bans, the mods would warp us into a private area and have us simultaneously type out different lists of things to satisfy themselves that we were actual people. It was a low-tech solution to a low-volume problem in a low-user-count game.

      However, something similar, like making the user solve a captcha on each login or post when they're suspected of being a bot, would be a low bar for a human to cross, but someone running a bot network would probably have to invest real time or resources into overcoming the captcha (Mechanical Turk, machine learning, captcha farms, etc.).
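      A minimal sketch of that idea (all names and the suspicion score are hypothetical, not Twitter's actual API): gate suspicious logins behind a captcha instead of banning outright.

```python
# Hypothetical login gate: suspicious sessions must pass a captcha,
# everyone else proceeds normally.
def handle_login(credentials_ok: bool, suspicion_score: float,
                 captcha_solved: bool, threshold: float = 0.8) -> str:
    """Return the next step for a login attempt."""
    if not credentials_ok:
        return "denied"
    # Cheap for one human, expensive across a whole bot network.
    if suspicion_score >= threshold and not captcha_solved:
        return "captcha_required"
    return "ok"

print(handle_login(True, 0.9, False))  # captcha_required
print(handle_login(True, 0.9, True))   # ok
print(handle_login(True, 0.1, False))  # ok
```

      The interesting knob is the threshold: set it high enough that ordinary users almost never see the captcha, which matters given the cost concern raised in the replies.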

      • doorbumper 6 years ago

        The idea that real users will only be slightly inconvenienced is something I see often. However, that inconvenience imposes real costs: a significant percentage of those users will not be determined enough to solve a captcha every time they log in, and their usage will drop off.

        It's the reason I will never use CloudFlare for any product I build. Their DDOS and spam protection does protect you, but it also literally drives away users.

        The question Twitter should be asking is not "Which accounts are bots?"; that would just be bad business. Instead, they should ask at what point the presence of bots hurts the user experience more than imposing barriers on usage does.

    • aylmao 6 years ago

      As much as I'd like to believe this is the reason, I can't help but think this is a very simplistic view of the situation. I'm sure they could do a better job of screening for bots if they wanted to; they don't have to ban instantly. They could instead require users to solve some captchas, submit some form of official verification, etc.

    • mschuster91 6 years ago

      > So, Twitter has to ask if it's easier to put up with 9,500 not-so-active bots, or to potentially make 500 people very upset.

      The latter wouldn't be such a problem if they had an established, WORKING process for appeals. Or to get in contact with a human at all.

      As for the spambots: ignore them, they don't do much besides luring morons to porn sites. The Nazis and other similar trolls are vastly more dangerous for Twitter and for societies.

  • wpietri 6 years ago

    I think the thing they care about is not total number of bot accounts, but total amount users are bothered by bots. I agree that they could do a lot better, but there's something to be said for letting sleeping bots lie.

    In particular, it's important not to feed an adversary's OODA loop. If you kill bot accounts immediately upon detection, you give them information about what you can detect. That encourages them to create things you can't detect. Better to wait until the network becomes a problem and then roll up the whole thing at once. That tells them a lot less about how they got caught, and burns more of their work for the information you do give up.

  • dawnerd 6 years ago

    Twitter has a hard time dealing with the fake Elon Musk bots that post ethereum scams. I've reported way more than I can count and so far only a fraction have been banned. It's amazing how bad twitter is at filtering very obvious spam.

    • B1FF_PSUVM 6 years ago

      mumble ... (Twitter's) "salary depends on not understanding it" ... smh

  • everdev 6 years ago

    If you get paid to drive user growth, are you really going to stifle it?

    Those accounts and their interactions get wrapped up into overall stats presented to investors. They're not going to clamp down too hard on bot traffic as it's inflating their actual user base and interactions.

    As a content company blinded by growth stats, I'd want bots cranking out content on my platform too, as long as it was legible.

    As someone trying to build a quality platform, I wouldn't want them on there or would at least want them marked with a robot icon.

    Which side do you think Twitter is on?

  • jellicle 6 years ago

    Hard to get a man to find a bot when his job depends on his not finding it.

johnny99 6 years ago

It would be great if Twitter had a way of accepting community contributions to addressing their bot/troll/spam problem.

I see a lot more efforts like this coming from outside Twitter than from inside, and these kinds of problems are rampant.

angelguerrero 6 years ago

I have a hunch that this is how the Russians are using Twitter and Facebook to mess with the American people.

  • rdtsc 6 years ago

    Good news! You don't need a hunch, since Twitter directly approached RT (which is effectively a government-controlled media apparatus) and pitched them access and discounts to "US voters" ahead of the 2016 election:

    https://www.buzzfeed.com/alexkantrowitz/twitter-offered-rt-1...

    Their response after that revelation was "We do not have any comment on our private conversations with any advertiser", which mostly confirms it. If it were a made-up story, they would have quickly refuted it as such ("RT made this up like they always do", etc.).

    Not only did they not care about American voters, they played an active part in allowing an external entity to manipulate them. But of course we all believe they've changed now and their hearts and minds are one with the American voters. Phew, I can finally sleep better at night.

    • John_KZ 6 years ago

      Is this really any better than "internal" entities trying to actively brain-wash the electorate right before the elections with personalized propaganda and a fake impression of consensus?

      The problem is personalized ads. We need to get rid of them, regardless of who does it.

      • rdtsc 6 years ago

        > Is this really any better than "internal" entities trying to actively brain-wash the electorate right before the elections with personalized propaganda and a fake impression of consensus?

        It's pretty much the same thing. It's not as if some Wall Street bank is going to have much more care and concern for the American voters than some country out there. The story about the Russians is only interesting because it was turned into a rather successful PR campaign, and a lot of time was spent writing, talking, investigating, and mudslinging based on it.

        > The problem is personalized ads. We need to get rid of them, regardless of who does it.

        It's going to be hard. The company that provides the most refined and exact profiles in the ad space will win. So even if some company decided not to do this individual targeting, it would lose out and go out of business, while the company with the most detailed profiles would win. In a way, I think that is why Google is afraid of FB: they realized at some point that FB holds much more nuanced and detailed profiles on people.

lmeyerov 6 years ago

If anyone is into this kind of analysis and likes jupyter notebooks, we've got a bunch of users having fun with it on Graphistry.com using our GPU viz tech. We support OSINT, so feel free to request an API key!

johnhenry 6 years ago

Unfortunately, the title of this post doesn't convey the content very well. I don't really have a better suggestion... maybe something along the lines of "Finding botnets within Twitter".

  • dang 6 years ago

    We've taken a crack at it. If someone can suggest a better title—i.e. more accurate and neutral, preferably using representative language from the article—we can change it again.

    • Bud 6 years ago

      New title seems quite descriptive to me.

  • mirimir 6 years ago

    My initial reaction to your post was "Wait, these are networks of bots, not botnets." But now I'm wondering. Do botnet slaves typically run Twitter accounts?