> Banks don’t want private account details (like the user’s current balance and credit limits etc) being seen by anybody other than the account holder.
This is exactly the kind of justification they'd use. And surprise surprise there won't be a box that says "I'd rather risk somebody seeing my account details than have a biometric model of my face stored in your database and given to whoever you give it to".
Do people really consider the ethics of introducing tech like this? I guess whether someone thinks it's a good idea just comes down to consequentialism vs. deontology. And, obviously, if they don't do it, someone else will?
The great irony is that often technological innovations are believed to be liberating and done with the motivation of improving the common lot of humanity.
Two examples:
Social networking - Enabling more meaningful and greater connection between people, leading to greater happiness and fulfillment? Or, fostering more divisions, balkanization into "like" echo chambers, promulgating fear and prejudice, leading to greater depression and mass manipulation?
Cryptocurrencies - Disintermediation! Need I say more ;) All kinds of utopian views of how this is supposed to promote freedom, security and efficiency. But top of the wish-list for the most paranoid and repressive authoritarian tyrant would have to be a magical way to fully control and monitor all economic activity: a cashless society where every single transaction is done with the government-controlled digital currency, recorded on the government-controlled blockchain.
> Social networking - Enabling more meaningful and greater connection between people, leading to greater happiness and fulfillment?
I don't believe for a second that this was a serious motivation for any of these companies. That might have been in the press release, but most of these social networks were either born from Geocities-begets-Myspace incrementalism, or "watch these idiots give away their personal data" egocentrism.
An abridged survey of recent history:
Snapchat: it'd sure be easier to sext if these pictures disappeared
Instagram: it's easier to take a pic than write something
Twitter: pivot from failing product to internal tool
Facebook: privacy invasion as a service
Myspace 2.0: maybe we can make money off napster?
Myspace: Geocities clone
Geocities: AOL clone for the World Wide Web
AOL: Prodigy clone
Prodigy: Walled garden for selling Usenet access
Napster: OK, this one was probably the only one motivated to enable more meaningful and greater connections between people, leading to greater happiness and fulfillment
I once quit a valuable job, because my desk was situated in an open office floor plan, in front of a smart TV with an attached webcam for video chat meetings.
I was a person with a job title in control of passwords and two-factor auth routines that served as a potential gateway to millions of credit card numbers.
It pissed me off that these people needed us to have our daily morning stand-up right next to my desk, but it pissed me off even more that I had a camera over my shoulder, running an operating system that I couldn't audit with an admin account.
Rather than rocking the boat, and coming across as a paranoid obsessive control freak security goon, I drafted my resume, and who knows what’s going on at that place by now.
To anybody out there dreaming up bullshit concepts and investing your youth in half-baked start-up ideas: when everything goes to shit and you're left high and dry, it will be people like me who turned our collective backs on you, because of shit decision making like this.
> Social networking - Enabling more meaningful and greater connection between people, leading to greater happiness and fulfillment?
That wasn't the motivation-- it was instead to make identities discoverable online. And-- with Facebook-- to make it easier for college students to get laid.
> Do people really consider the ethics of introducing tech like this?
But consider the fact that you are using terms out of the humanities, while most of my fellow CS students endlessly griped about how useless their humanities courses were and how they wished the school would get rid of that course requirement (six courses total over 4 years).
Until we teach the tech-priests of the 21st century the great responsibility due to the power that they hold, I doubt things will improve.
I'm not sure I agree with the implicit assumption here that humanities courses necessarily result in more, well, humanity.
In the UK, for instance, we have no requirements for a university degree other than the courses relevant to that degree. I was there to study computer science, not get a rounded education at someone else's insistence.
And yet most of the really rapacious, conscienceless, anti-human stuff seems to me to come from the libertarian and 'bro' fringes of Silicon Valley, seemingly despite the requirements to study more 'humanities'.
> In the UK, for instance, we have no requirements for a university degree other than the courses relevant to that degree.
Which depends almost entirely on the university in question. My computer science degree (Cambridge) had two separate courses focusing on ethics/humanities. Year 1 had Professional Practice and Ethics (which starts as broad as "Ethical theory. Basic questions in ethics. Survey of ethical theories: [...]. Advantages and disadvantages of the two main theories: utilitarian and deontological.", not just as it relates to CS), and Year 2 had Economics and Law (broad introduction to micro/macroeconomics, and a general overview of the law as it related to CS). The course introduction for the latter notes that you are to treat it as if reading a humanities subject:
> One word of warning: many part 1b students may never have studied a humanities subject since GCSE. It is a different task from learning a programming language; it is not sufficient to acquire proficiency at a small core of manipulative techniques, and figure out the rest when needed. Breadth matters. You should spend at least half of the study time you allocate to this subject on general reading. There are many introductory texts on economics and on law; your college library is probably a good place to start.
FWIW, I don't recall many complaints about the presence of these courses. Most seemed to find it useful to get a more rounded view, and it was a nice change of pace from tens of hours of pure computer science a week. It was also likely helpful for my later studies in Law.
> Which depends almost entirely on the university in question.
This is also true in the US. I actually chose to study religion and philosophy in addition to CS. My reasons for doing so aside, I truly benefited from it as it helps guide the type of work I will take. I'm torn on whether it should be required, mainly because I am not in a position to decide what makes a 'better' software engineer.
I didn't doubt that; I just felt that the OP made it sound like doing a UK CS degree would mean only studying pure CS, which is overstating things somewhat, so I wanted to clarify. You aren't necessarily going to be picking classes from across the university to fulfill generic requirements (if nothing else, you don't really have the time to do so), but it's not hyperfocused either.
I would certainly consider courses in professional practice, law and ethics to be far more supportive and on topic than generalised requirements to take a certain number of "humanities" courses.
I agree. It seems to come from the Silicon Valley culture more than anything. After all, Peter Thiel, one of the more extreme examples of this sort of thinking, got a bachelor's degree in philosophy, IIRC.
The ethics and actions that spring from libertarian thought certainly appear to be suited more to perfectly rational robots than to imperfect, fleshy and irrational humans, and how people actually live their lives.
There was an article about just that the other week, on here I believe. It was detailing how a lot of the controversial tech ideas coming out of places like Silicon Valley are from tech/startup moguls who are so disconnected from reality that they don't understand the concept of ethics or technological consequence. To them it doesn't matter how intrusive a design is; if it solves a problem, that justifies all the negatives.
I'm willing to consider that they're all confused Futurama fans. They heard "technically correct...the best kind of correct!" and transmogrified it to "technically a good idea, the best kind of idea!"
> Like any technology, we need to consider the ethics of its application carefully so we don’t build tools that are open to abuse, or worst case, terminators that can travel through time to kill people.
So no: they took the opportunity to clear up any fears or concerns about the project and used it instead to make a really, really scary joke about robots specifically designed to identify individuals by using cameras and kill them with guns.
Nobody at this project gave one real thought about ethics. Ethically, you can't make that joke about your software, it's nauseating.
The terrifying thing is that this type of application "demo" is being built by smart people who are already fully aware of the _theoretical_ concept of ethical violations being enabled by software.
And yet refuse to connect the obvious dots to the dystopian, anti-human capabilities enabled by the tools they're building. "Oh yes well we didn't mean it for THAT".
I'd have WAY more respect (fear) if they came out and just honestly explained all the revenue-generating capabilities this could extract from users. Better business. Less disingenuous.
I certainly feel MUCH safer knowing that my software could FORCE me to spend my attention on it for whatever reason. Bravo Machine-Box, really helping make the world a better place.
This is why software needs a professional association with some teeth, because professional ethical standards don't grow on trees. Note how a doctor doesn't just say, "Well, if I don't dose these people without their knowledge or consent, someone else will."
"When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success." - Robert Oppenheimer
Good analogy. Oppenheimer was a young, brilliant, and ambitious scientist with the money and power to pursue a huge project. The famous Oppenheimer quote upon seeing the result of his work is "now I am become Death, the destroyer of worlds." I don't think modern tech-bros have his level of insight or introspection, but fortunately the stakes are a bit lower.
It's an ethics arbitrage. This kind of business looks for unethical technologies that other companies wouldn't dare to create, and then packages it up in a form that is palatable enough to bring to market at a profit.
I both pay my mortgage and keep ethics in the house. I'm not sure this company or dev was so desperate to make money that this seemed like the last hope.
For most people, the biometric model of their face is available to nearly anyone. Most people have multiple profile photos available on Facebook, Instagram, LinkedIn etc.
> For most people, the biometric model of their face is available to nearly anyone.
I was curious about this claim, so I did some cursory research.
A Pew survey from 2013 [0]--which is perhaps a bit dated--found that 66% of respondents believed a photo of them existed online--which, notably, says nothing about access to the photo or metadata that relates the photo to an identity. Pew found in 2016 [1] that about 68%, 28%, and 25% of US adults, respectively, have a Facebook, Instagram, and LinkedIn account. These are presumably the main vectors for accessing the face:ID pair. So this gives us some ability to quantify the first part of your claim, "most people".
As for the second part of your claim--that this subgroup's face:ID is available to nearly anyone--I did not find data on who or how many people might have access to this information. The vector here is important, though. Let's consider the public UI, which is realistically the only interface most interested entities have access to.

With only a name and no other queryable bits of information, finding the matching face is unlikely because of how many identical names there are. The ability to query other bits like geolocation, work history, the social graph, and, of course, the face itself should greatly increase the chance of finding face:ID, which is minimally an account with a profile picture. The profile picture is not per se sufficient to extract a face model but also not per se necessary, as other face photos might be viewable in the profile.

At this point I can really only speculate about the intersection of privacy settings and photos, but I think it's far from clear that this information is "available", which I take to mean that the information is accessible with relatively little effort and means. And again, this is just the people who have a social media account and probably a photo tied to their account, not the population at large.
Of course, there are entities which have access to far more data than this, but that is not "nearly anyone".
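To make the "most people" part concrete: with only the per-platform shares cited above and no data on overlap, the fraction of US adults holding at least one of these accounts can only be bounded. A quick sketch of those bounds (the numbers are the 2016 Pew figures quoted above; everything else is just arithmetic):

```python
# 2016 Pew shares of US adults with an account on each platform
facebook, instagram, linkedin = 0.68, 0.28, 0.25

# Lower bound: every Instagram/LinkedIn user might also be on Facebook
lower = max(facebook, instagram, linkedin)

# Upper bound: the user bases might not overlap at all (capped at 100%)
upper = min(1.0, facebook + instagram + linkedin)

print(f"at least one account: between {lower:.0%} and {upper:.0%}")
```

So "most people" holds even at the lower bound, but account existence says nothing about whether the face:ID pair is accessible to "nearly anyone", which is the point above.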
How careful have you been about that? Do you just mean not uploading your own profile photo to sites?
What about any birthday parties or group pictures you might be in? Photos for a work event or with friends?
Unless you've been very careful not to appear in any public photos of any kind, especially ones taken by your family and friends (as it is easily traced to you), then I would expect you have just as much of an online photo-profile identity as anyone else.
If you've been that careful, then props to you. Good job.
> I’ve searched for a picture of me, and have only ever found a grainy one from two decades ago. That is it.
But that's not what I mean.
Facebook knows who you are. Maybe you can't find it yourself using search terms, but that's not the point. You have a Facebook profile with an email and a set of pictures and a social web connected, I'm sure, even if you've never signed up. There's just a db flag set that says 'waiting for this person to sign up'.
That could be, but I've never uploaded a picture of me anywhere, and I avoid having my picture taken. I've never had a social media account with my real identity either, and never had an FB profile, period.
Those are all good steps, and I do want to say, I think our society would be in a better position if more people were concerned about this as you are. I am sorry to be saying these downer things. I don't mean to be defeatist or tell you that there's no hope. It's just that the situation is really that dire.
Phone cameras are high res. If you live in a city or go out in public often in busy places, then the sheer number of people taking selfies and photos is immense and makes it likely that over time, each of us is caught repeatedly in these photos.
With Facebook et al.'s newfound face recognition and social scale, people who have never heard of a computer or phone or Facebook can now be automatically identified and tracked throughout the real world just by their face appearing in other people's social photos.
I realize this dystopia might not accurately portray your life, but it is also meant as a comment for others. Privacy is not an individual choice any more. Our social networks have forced a change in expected privacy and there's little we can do right now to change that, as the profit motives of corporations like Facebook are aligned this way.
Note: Yes, being in public has an element of privacy. It is reasonable to expect that if you buy some groceries at a store in Atlanta, and the next week walk to Central Park in NYC, that a company in San Francisco who you have no relationship with would not know about it. But that expectation is now gone, and already it seems wild that we could have ever had it. That is a kind of privacy that is lost forever.
Damn, I'd never realised things had gotten that bad. I willingly share photos of myself online but that's my choice. It's sad that even privacy nuts can't hide themselves anymore.
> there's little that we can do right now to change that
There's plenty, but I think it starts with not wasting that energy on people who think there is nothing we can do.
> already it seems wild that we could have ever had it
It seems wild to you, it's a given to me. The metrics for what is wheat and what is chaff haven't changed one bit, it's just we have a lot more chaff, to the point of chaff declaring itself edible. But individually, a piece of wheat beats a piece of chaff every single time. So I suggest thinking of it sequentially going through a mob person by person. The emperor is naked, the people who worship him are fools, and individually they can offer no resistance and no argument. All they have is "but there are so MANY who have spines like wet noodles", or even "I agree, but there are these other people who aren't even here who maybe don't". In my mind, the more the merrier. Let every last person on the planet buy into this and it will change nothing at all about my personal outlook.
"Now, the modern person is determined by data exhaust—an invisible anthropocentric ether of ones and zeros that is a product of our digitally monitored age."
I don't doubt that in 15-20 years a company could, if it bothered, deduce from your phone/laptop/browsing habits/credit card usage/address info and collate that data to get "your picture". And I mean literally zoom in a security cam to snap the photo.
You don't have a smartphone? You don't have a Google account? You don't have accounts on any platform (other than HN, obviously)? I think it's incredibly understated how easy it is to piece together who someone is from their data. You may not think you have pictures out there either but I'd venture to guess that something is out there.
My google accounts are in no way linked to a real identity. I have little doubt that my identity would be easy to deduce, but it won’t come with a picture, I hope. I’m not engaged in any secret or illegal activities, I just don’t want to give away my privacy, so I’m just cautious rather than appropriately paranoid. I do have a smartphone, and I have no illusions that law enforcement and governments can track me. The average person, and hopefully Facebook cannot.
Now that this is possible, what stops bad people/companies/governments from (ab)using this?
There's really nothing else you can do apart from wearing a mask and sunglasses, which will also be bypassed soon enough. (Not to mention that no one wants to do that.)
Unlike weapons, a lot of these ai tools have some force of good behind them as well - so good governments won't pass laws against this either.
I think that is highly unlikely. If lots of people were not okay with watching ads, there would be economic incentives for people to create ad-free alternatives. Even now we have lots of such alternatives. If someone is not able to afford that alternative, it may actually be harmful for advertisers to spend their money showing these people ads anyway.
> economic incentives for people to create ad-free alternatives
Yes, but they will have to fight a giant up-hill battle because ads exploit a quirk of human psychology, which is that most people strongly undervalue their own attention.
I think it's hard to consider at this moment, but I'm sure this is something they'll at least investigate and consider. The ad industry is cutthroat, and if this gave anyone an edge, you can bet all of them would follow suit in time.
There's something about software like this that really creeps me out, even though I realize it's not ridiculously advanced (i.e. it's way more common than I think) anymore. That might just be a personal aversion, though. I can imagine useful scenarios for this even if I get a bit of an icky feeling from it.
My co-founder and I have talked about things like this as an "anti-cheating" measure (we developed a take-home assessment platform), but it always feels way too overboard and invasive for an exaggerated problem (and I'm just against it in pretty much every way imaginable).
Interestingly this somehow feels better than overt measures like ProctorU, but that's an emotional reaction and not a logical one. In some ways it's probably much worse.
> Rather than one proctor sitting at the head of a physical classroom and roaming the aisles every once in a while, remote proctors peer into a student's home, seize control of her computer, and stare at her face for the duration of a test, reading her body language for signs of impropriety.
That article is from 2013; I wonder how much of this is now partially automated (i.e. no longer relying on human remote proctors)?
In my experience ProctorU is not automated. They use a branded version of logmeinrescue that allows ProctorU to take full control of your PC and record from your webcam. You can also see the proctor. However, the proctor's camera is usually paused while taking the exam and they proctor more than one exam at a time. ProctorU also executes custom scripts on your PC to check for suspicious software and virtualization.
Source: As a remote student, I have used ProctorU several times. Most recently within the last few months. I use a dedicated PC for this proctoring.
Online proctoring is actually a really busy space in edtech. There are numerous companies with products deployed that are fully automated. They record videos of students taking the tests through a webcam, then send the analysis back to the instructors highlighting which videos are worth watching.
I never got the point of that. You can use an HDMI/DVI splitter to allow your accomplice to see your screen, and you can use your monitor's PIP function to allow your accomplice to send messages to you, both of which are totally undetectable to the student's computer.
Had no idea about this, thanks for the link. It's kind of difficult to put into words how crazy this is to me without hyperbole or cliche. Very much a through-the-looking-glass feeling.
Yeah, this whole thing seems a bit silly. You can't trust the webcam to be real or even functional.
And even if you could trust the entire website-to-webcam path end-to-end, you can't trust the image the hardware is reading. There's a reason that other face recognition systems like Windows Hello require that you have an IR camera, so that it knows it's not just looking at a photograph of a person.
It's just a cheap/easy biometric. Retina pattern or finger print are better biometrics, but generally require special purpose hardware for ease of use.
So what stops me from creating a device that registers itself as a webcam natively, but just puts a loop of a pre-recorded video that satisfies the face recognition software?
Stop trying to find solutions to problems that aren't real.
If you can be prosecuted for storing someone's diagnostic medical images improperly under HIPAA law, this seems like a VERY risky thing for a company to implement.
The policy might have been changed? An article I just read says: "taking a photo via webcam, uploading a photo of a picture id issued by the government, and making a record of their typing pattern."
> Banks don’t want private account details (like the user’s current balance and credit limits etc) being seen by anybody other than the account holder.
Unless it's an in-person interaction, a face has little security value, because it's not a secret. Getting a photo, or even full motion video of someone often just requires finding their Instagram page.
I think this is really about having powerful machine learning tools like face recognition, image recognition, content personalization and recommendation etc. in the browser.
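Mechanically, "face recognition in the browser" usually reduces to this: a model maps each face image to a fixed-length embedding vector, and recognition is just comparing vectors. A minimal sketch of only the matching step (the embeddings and the threshold here are made up for illustration; real systems use learned embeddings of 128+ dimensions with tuned thresholds):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(emb_a, emb_b, threshold=0.8):
    # The threshold is illustrative; deployments tune it against false-match rates
    return cosine_similarity(emb_a, emb_b) >= threshold

# Made-up 4-d embeddings; real ones are typically 128-512 dimensions
enrolled = [0.1, 0.9, 0.3, 0.2]
probe = [0.12, 0.88, 0.31, 0.19]
```

The privacy-relevant point: once a service holds the embedding, it has a durable, reusable biometric identifier, regardless of what happens to the original photo.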
Technologically speaking: Yeah, that's a nice feature.
Real world: This will be awful. I really don't want any DB to have a photo of me associated with a transaction or authentication. Yes, I do have profile pictures, but allowing a service to get a "stream" of your face will be way worse, and I cannot imagine what would happen if this DB gets compromised... anyway... still a nice Black Mirror episode, though.
I can see how, from a privacy standpoint, this may cause some concern.
However, from a credit card processor's point of view, this could be an excellent tool for combating "friendly fraud".
For example, take the scenario where a transaction has been processed and 6 weeks later it is disputed because the cardholder doesn't recognise the transaction. Perhaps the wife used the husband's card whilst he was in the shower, for [insert candy crush clone].
A capture of the user's face would definitely help the merchant win the representment against Visa/Mastercard.
Or take a scenario where goods are being shipped cross-border, let's say from China to the US, and it's for a large amount. This could be an extra step where the order hasn't passed a certain risk threshold and thus further information is required: having a real-time snapshot and validation to prove the cardholder is legitimate ensures the transaction goes through.
Ultimately, I do understand it's about weighing privacy concerns. But that doesn't mean some good can't come out of this.
Credit card processors and banks could do a lot to combat fraud that they don't do. E.g., realtime 2FA per transaction, which, as a bonus, could give the user a chance to categorize the purchase in their budget.
They do exactly the amount of security that they think is most profitable - balancing losses to fraud vs abandoned sales because of inconvenience.
Jumping straight to face recognition is a bit like physical security adding strip searches when they haven't yet bothered visually scanning for weapons.
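The "realtime 2FA per transaction" idea suggested above needs nothing exotic; standard TOTP (RFC 6238) checked at authorization time would do. A minimal standard-library sketch (the secret, digit count, and step are illustrative defaults; a real deployment would also handle clock skew, rate limiting, and secret provisioning):

```python
import hmac
import hashlib
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    # HOTP (RFC 4226): HMAC-SHA1 over the counter, then dynamic truncation
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, for_time=None, step: int = 30) -> str:
    # TOTP (RFC 6238): HOTP with a time-based counter
    t = time.time() if for_time is None else for_time
    return hotp(key, int(t // step))

def authorize(key: bytes, submitted_code: str) -> bool:
    # Bank-side check at transaction time: the cardholder's app shows
    # totp(key); compare in constant time before approving
    return hmac.compare_digest(totp(key), submitted_code)
```

The code the user confirms could double as the moment they tag the purchase with a budget category, as the comment above suggests.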
Please, no. This has to be a parody.
> That wasn't the motivation-- it was instead to make identities discoverable online. And-- with Facebook-- to make it easier for college students to get laid.

Both were liberating.
But why would I want to use a government-controlled blockchain? For me the concept of decentralization is essential to cryptocurrency.
Decentralization is not essential to cryptocurrency as a technology. It's essential to a view of how it could be used for the betterment of many.
The technology can just as easily be applied for other purposes, mandated by an authoritarian government to be the only legal way to conduct commerce. You'd use it not because you want to, but because you want to avoid their ire.
With a single trusted party, a blockchain is just a wildly inefficient database.
There's literally no reason to use it at that point, just make a simple db of account numbers and balances.
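The "simple db of account numbers and balances" really is this small when there is a single trusted party; a toy sketch (account names and amounts are made up):

```python
# Central ledger: one trusted operator, so a plain mapping suffices
balances = {"acct_001": 100, "acct_002": 50}

def transfer(src: str, dst: str, amount: int) -> None:
    # The central party validates and applies each transaction itself;
    # no consensus protocol, proof-of-work, or block structure is needed
    if amount <= 0:
        raise ValueError("amount must be positive")
    if balances.get(src, 0) < amount:
        raise ValueError("insufficient funds")
    balances[src] -= amount
    balances[dst] = balances.get(dst, 0) + amount
```

Everything a fully centralized "blockchain" layers on top of this is overhead unless you actually distrust the operator, which is the parent's point.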
I think we're mixing terminology. There's a difference between distributed and decentralized. You can have a distributed blockchain, while still being controlled by a central authority (i.e. not decentralized). That's pretty much Ripple, and it still has many of the advantages of blockchain technology: distributed/quick settlement, highly secure, highly available, etc...
Actually there is. 21st century money.
It would be actually possible to use a government's money in various new ways. Think of what could happen in the economy if the government provided flexible and free payment services.
(Which is exactly why they'll never do it, but just pointing out that a government-controlled blockchain for normal people would have a LOT of applications)
"flexible and free payment services"
Actually my government already has rules on that. Using my debit card at a store, withdrawing money from an ATM, or transferring money between bank accounts is free.
It's almost as if we live in the 21st century already!
Okay, try to use your card to pay your neighbor or babysitter, and tell me again how convenient it is.
Second, go to your nearest store, bakery or whatever, and ask them just how free debit cards are.
Why do you need blockchain for that if it's centralized and controlled by the government?
Monero, ZCash :)
> Do people really consider the ethics of introducing tech like this?
But consider the fact that you are using terms out of the humanities, while most of my fellow CS students endlessly griped about how useless their humanities courses were and how they wished the school would get rid of that course requirement (six courses total over 4 years).
Until we teach the tech-priests of the 21st century the great responsibility due to the power that they hold, I doubt things will improve.
I'm not sure I agree with the implicit assumption here that humanities courses necessarily result in more, well, humanity.
In the UK, for instance, we have no requirements for a university degree other than the courses relevant to that degree. I was there to study computer science, not get a rounded education at someone else's insistence.
And yet most of the really rapacious, conscienceless, anti-human stuff seems to me to come from the libertarian and 'bro' fringes of Silicon Valley, seemingly despite the requirements to study more 'humanities'.
> In the UK, for instance, we have no requirements for a university degree other than the courses relevant to that degree.
Which depends almost entirely on the university in question. My computer science degree (Cambridge) had two separate courses focusing on ethics/humanities. Year 1 had Professional Practice and Ethics (which starts as broad as "Ethical theory. Basic questions in ethics. Survey of ethical theories: [...]. Advantages and disadvantages of the two main theories: utilitarian and deontological.", not just as it relates to CS), and Year 2 had Economics and Law (broad introduction to micro/macroeconomics, and a general overview of the law as it related to CS). The course introduction for the latter notes that you are to treat it as if reading a humanities subject:
> One word of warning: many part 1b students may never have studied a humanities subject since GCSE. It is a different task from learning a programming language; it is not sufficient to acquire proficiency at a small core of manipulative techniques, and figure out the rest when needed. Breadth matters. You should spend at least half of the study time you allocate to this subject on general reading. There are many introductory texts on economics and on law; your college library is probably a good place to start.
FWIW, I don't recall many complaints about the presence of these courses. Most seemed to find it useful to get a more rounded view, and it was a nice change of pace from tens of hours of pure computer science a week. It was also likely helpful for my later studies in Law.
https://www.cl.cam.ac.uk/teaching/0809/EconLaw/
> Which depends almost entirely on the university in question.
This is also true in the US. I actually chose to study religion and philosophy in addition to CS. My reasons for doing so aside, I truly benefited from it as it helps guide the type of work I will take. I'm torn on whether it should be required, mainly because I am not in a position to decide what makes a 'better' software engineer.
I didn't doubt that; I just felt that the OP made it sound like doing a UK CS degree would mean only studying pure CS, which is overstating things somewhat so wanted to clarify. You aren't necessarily going to be picking classes from across the university to fulfill generic requirements (if nothing else, you don't really have the time to do so), but it's not hyperfocused either.
I would certainly consider courses in professional practice, law and ethics to be far more supportive and on topic than generalised requirements to take a certain number of "humanities" courses.
Unfortunately the Econ, Law and Ethics course is still widely derided, even though some of the content is genuinely useful.
I agree. It seems to come from the Silicon Valley culture more than anything. After all, Peter Thiel, one of the more extreme examples of this sort of thinking, got a bachelor's degree in philosophy, IIRC.
Libertarians aren’t “anti-human,” they are anti-totalitarianism.
The ethics and actions that spring from libertarian thought certainly appear to be suited more to perfectly rational robots than to imperfect, fleshy and irrational humans, and how people actually live their lives.
There was an article about just that the other week, on here I believe. It was detailing how a lot of the controversial tech ideas coming out of places like Silicon Valley are from these tech/startup moguls who are so disconnected from reality that they don't understand the concept of ethics or technological consequence. To them it doesn't matter how intrusive a design is; if it solves a problem, then that justifies all negatives.
I'm willing to consider that they're all confused Futurama fans. They heard "technically correct...the best kind of correct!" and transmogrified it to "technically a good idea, the best kind of idea!"
The article says,
> Like any technology, we need to consider the ethics of its application carefully so we don’t build tools that are open to abuse, or worst case, terminators that can travel through time to kill people.
So no, they took the opportunity to clear up any fears or concerns about the project and used it to make a really, really scary joke about robots specifically designed to identify individuals by using cameras and kill them with guns.
Nobody at this project gave one real thought about ethics. Ethically, you can't make that joke about your software, it's nauseating.
Yes. This.
The terrifying thing is that this type of application "demo" is being built by smart people who are already fully aware of the _theoretical_ concept of ethical violations being enabled by software.
And yet refuse to connect the obvious dots to the dystopian, anti-human capabilities enabled by the tools they're building. "Oh yes well we didn't mean it for THAT".
They'd get WAY more respect (fear) from me if they came out and just honestly explained all the revenue-generating capabilities this could extract from users. Better business. Less disingenuous.
I certainly feel MUCH safer knowing that my software could FORCE me to spend my attention on it for whatever reason. Bravo Machine-Box, really helping make the world a better place.
Maybe they gave thought to their own ethics, but they likely didn't consider the implications of it being useful to someone without said ethics.
This is why software needs an association with some teeth, because professional ethical standards don’t grow on trees. Note how a doctor doesn’t just say, “Well if I don’t dose these people without their knowledge or consent, someone else will.”
Fuck no
"When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success." - Robert Oppenheimer
Good analogy. Oppenheimer was a young, brilliant, and ambitious scientist with the money and power to pursue a huge project. The famous Oppenheimer quote upon seeing the result of his work is "now I am become Death, the destroyer of worlds." I don't think modern tech-bros have his level of insight or introspection, but fortunately the stakes are a bit lower.
The site itself has a tiny section on ethics that might as well say "hey, use this ethically! wink, wink!"
Yes, but not for the reasons you think.
It's an ethics arbitrage. This kind of business looks for unethical technologies that other companies wouldn't dare to create, and then packages it up in a form that is palatable enough to bring to market at a profit.
> if they don't do it, I guess someone else will
In my book, the ethical viewpoint would be, “someone else might eventually do that, but it sure as hell won't be me”.
Ethics go out the window when you have a mortgage to pay. The tech industry also lives in a bubble.
I both pay my mortgage and keep ethics in the house. I'm not sure this company or dev was so desperate to make money that this seemed like the last hope.
For most people, the biometric model of their face is available to nearly anyone. Most people have multiple profile photos available on Facebook, Instagram, LinkedIn etc.
> For most people, the biometric model of their face is available to nearly anyone.
I was curious about this claim, so I did some cursory research.
A Pew survey from 2013 [0]--which is perhaps a bit dated--found that 66% of respondents believed a photo of them existed online--which, notably, says nothing about access to the photo or metadata that relates the photo to an identity. Pew found in 2016 [1] that about 68%, 28%, and 25% of US adults, respectively, have a Facebook, Instagram, and LinkedIn account. These are presumably the main vectors for accessing the face:ID pair. So this gives us some ability to quantify the first part of your claim, "most people".
As for the second part of your claim--that this subgroup's face:ID is available to nearly anyone--I did not find data on who or how many people might have access to this information. The vector here is important, though. Let's consider the public UI, which is realistically the only interface most interested entities have access to.

With only a name and no other queryable bits of information, finding the matching face is unlikely because of how many identical names there are. The ability to query other bits like geolocation, work history, the social graph, and, of course, the face itself should greatly increase the chance of finding face:ID, which is minimally an account with profile picture. The profile picture is not per se sufficient to extract a face model but also not per se necessary, as other face photos might be viewable in the profile.

At this point I can really only speculate about the intersection of privacy settings and photos, but I think it's far from clear that this information is "available", which I take to mean that the information is accessible with relatively little effort and means. And again, this is just the people who have a social media account and probably a photo tied to their account, not the population at large.
Of course, there are entities which have access to far more data than this, but that is not "nearly anyone".
[0] http://www.pewinternet.org/2013/09/05/anonymity-privacy-and-...
[1] http://www.pewinternet.org/2016/11/11/social-media-update-20...
As someone who has been careful about not doing that for a couple of decades, just fuck me, hm?
How careful have you been about that? Do you just mean not uploading your own profile photo to sites?
What about any birthday parties or group pictures you might be in? Photos for a work event or with friends?
Unless you've been very careful not to appear in any public photos of any kind, especially ones taken by your family and friends (as it is easily traced to you), then I would expect you have just as much of an online photo-profile identity as anyone else.
I’ve been that weirdly careful, yes. I’ve searched for a picture of me, and have only ever found a grainy one from two decades ago. That is it.
If you've been that careful, then props to you. Good job.
> I’ve searched for a picture of me, and have only ever found a grainy one from two decades ago. That is it.
But that's not what I mean.
Facebook knows who you are. Maybe you can't find it yourself using search terms, but that's not the point. You have a Facebook profile with an email and a set of pictures and a social web connected, I'm sure, even if you've never signed up. There's just a db flag set that says 'waiting for this person to sign up'.
Facebook knows.
That could be, but I’ve never uploaded a picture of me anywhere, and I avoid having my picture taken. I’ve never had a social media account with my real identity either, and never had an FB profile, period.
Those are all good steps, and I do want to say, I think our society would be in a better position if more people were concerned about this as you are. I am sorry to be saying these downer things. I don't mean to be defeatist or tell you that there's no hope. It's just that the situation is really that dire.
Phone cameras are high res. If you live in a city or go out in public often in busy places, then the sheer number of people taking selfies and photos is immense and makes it likely that over time, each of us is caught repeatedly in these photos.
With Facebook et al.'s newfound face recognition and social scale, people who have never heard of a computer or phone or Facebook can now be automatically identified and tracked throughout the real world just by their face in other people's social photos.
I realize this dystopia might not accurately portray your life, but it is also meant as a comment for others. Privacy is not an individual choice any more. Our social networks have forced a change in expected privacy and there's little that we can do right now to change that, as the profit motives for corporations like Facebook are aligned this way.
Note: Yes, being in public has an element of privacy. It is reasonable to expect that if you buy some groceries at a store in Atlanta, and the next week walk to Central Park in NYC, that a company in San Francisco who you have no relationship with would not know about it. But that expectation is now gone, and already it seems wild that we could have ever had it. That is a kind of privacy that is lost forever.
So maybe not "fuck IntraExon," so much as fuck everybody who isn't so vigilant?
Damn, I'd never realised things had gotten that bad. I willingly share photos of myself online but that's my choice. It's sad that even privacy nuts can't hide themselves anymore.
> there's little that we can do right now to change that
There's plenty, but I think it starts with not wasting that energy on people who think there is nothing we can do.
> already it seems wild that we could have ever had it
It seems wild to you, it's a given to me. The metrics for what is wheat and what is chaff haven't changed one bit, it's just we have a lot more chaff, to the point of chaff declaring itself edible. But individually, a piece of wheat beats a piece of chaff every single time. So I suggest thinking of it sequentially going through a mob person by person. The emperor is naked, the people who worship him are fools, and individually they can offer no resistance and no argument. All they have is "but there are so MANY who have spines like wet noodles", or even "I agree, but there are these other people who aren't even here who maybe don't". In my mind, the more the merrier. Let every last person on the planet buy into this and it will change nothing at all about my personal outlook.
I'm reading a (sometimes hyperbolic) but interesting "book" on this: Digital Stockholm Syndrome in the Post-Ontological Age:
https://www.upress.umn.edu/book-division/books/digital-stock...
"Now, the modern person is determined by data exhaust—an invisible anthropocentric ether of ones and zeros that is a product of our digitally monitored age."
I won't doubt that in 15-20 years, using deduction from your phone/laptop/browsing habits/credit card usage/address info, a company could, if it bothered, collate that data to get "your picture". And I mean literally zoom in a security cam to snap the photo.
There's the combined idea of machine recognition (facial, gait, etc) - "convenience" and overreach:
Goes a bit like:
1) a place to start, 2) a way to follow, virtually, in time and space - augmented with total surveillance.
1) "In China, KFC tests out ‘smile to pay’" https://www.techinasia.com/kfc-china-tests-facial-recognitio...
2) "China’s facial-recognition systems crunch data from cameras to monitor citizens"
https://news.ycombinator.com/item?id=14643433
Also:
https://www.theverge.com/2013/2/1/3940898/darpa-gigapixel-dr...
https://motherboard.vice.com/en_us/article/nzey3w/what-those...
You don't have a smartphone? You don't have a Google account? You don't have accounts on any platform (other than HN, obviously)? I think it's incredibly understated how easy it is to piece together who someone is from their data. You may not think you have pictures out there either but I'd venture to guess that something is out there.
My google accounts are in no way linked to a real identity. I have little doubt that my identity would be easy to deduce, but it won’t come with a picture, I hope. I’m not engaged in any secret or illegal activities, I just don’t want to give away my privacy, so I’m just cautious rather than appropriately paranoid. I do have a smartphone, and I have no illusions that law enforcement and governments can track me. The average person, and hopefully Facebook cannot.
Apple could burn me horribly though.
Now that this is possible, what stops bad people/companies/governments from (ab)using this?
There's really nothing else you can do apart from wearing a mask and sunglasses, which will also be bypassed soon enough. (Not to mention that no one wants to do that.)
Unlike weapons, a lot of these AI tools have some force of good behind them as well, so good governments won't pass laws against this either.
Disable your webcam? Webcams aren't exactly standard on desktops.
Where "this" = face recognition technology :)
It's here, and I can't think of anything you can do about it (especially in public).
Plenty of fashionable options for messing with facial recognition. https://cvdazzle.com/
Not all computers have cameras. Pray that it remains that way.
My reaction too. But then, I use no devices with cameras or microphones.
Black Mirror predicted this in a much more realistic business case - forcing users to view ads.
https://www.youtube.com/watch?v=QleMXX24v5g&t=52s
I think that is highly unlikely. If lots of people are not okay with watching ads, there would be economic incentives for people to create ad-free alternatives. Even now we have lots of such alternatives. If someone is not able to afford that alternative, it may actually be harmful to advertisers to spend their money on showing these people ads anyway.
> economic incentives for people to create ad-free alternatives
Yes, but they will have to fight a giant up-hill battle because ads exploit a quirk of human psychology, which is that most people strongly undervalue their own attention.
I think it's hard to consider at this moment, but I'm sure this is something they'll at least investigate and consider. The ad industry is cutthroat, and if this gave anyone an edge, you can bet all of them will follow suit in time.
There's something about software like this that really creeps me out, even though I realize it's not ridiculously advanced (i.e. it's way more common than I think) anymore. That might just be a personal aversion, though. I can imagine useful scenarios for this even if I get a bit of an icky feeling from it.
My co-founder and I have talked about things like this as an "anti-cheating" measure (we developed a take-home assessment platform), but it always feels way too overboard and invasive for an exaggerated problem (and I'm just against it in pretty much every way imaginable).
Interestingly this somehow feels better than overt measures like ProctorU, but that's an emotional reaction and not a logical one. In some ways it's probably much worse.
Some more info about ProctorU (for others who haven't heard about it):
https://www.chronicle.com/article/Behind-the-Webcams-Watchfu...
> Rather than one proctor sitting at the head of a physical classroom and roaming the aisles every once in a while, remote proctors peer into a student's home, seize control of her computer, and stare at her face for the duration of a test, reading her body language for signs of impropriety.
That article is from 2013; I wonder how much of this is now partially automated (i.e. no longer relying solely on human remote proctors)?
In my experience ProctorU is not automated. They use a branded version of logmeinrescue that allows ProctorU to take full control of your PC and record from your webcam. You can also see the proctor. However, the proctor's camera is usually paused while taking the exam and they proctor more than one exam at a time. ProctorU also executes custom scripts on your PC to check for suspicious software and virtualization.
Source: As a remote student, I have used ProctorU several times. Most recently within the last few months. I use a dedicated PC for this proctoring.
Edit: Clarified text and fixed some typos
Online proctoring is actually a really busy space in edtech. There are numerous companies with products deployed that are fully automated. They record videos of students taking the tests through a webcam, then send the analysis back to the instructors highlighting which videos are worth watching.
I never got the point of that. You can use an HDMI/DVI splitter to allow your accomplice to see your screen, and you can use your monitor's PIP function to allow your accomplice to send messages to you. Both are totally undetectable to the student's computer.
It’s security theater.
This system seems designed to encourage abuse by all parties involved.
Had no idea about this, thanks for the link. It's kind of difficult to put into words how crazy this is to me without hyperbole or cliche. Very much a feeling of through the looking glass.
Am I the only one who doesn't quite understand the point?
I would assume you wouldn't only use this tech to secure information. But I don't see really how this adds any security when software cams exist.
Plus you have other issues, like people like me who work in low light, or picture frames in the shot, etc.
Cool hobby project though.
Yeah, this whole thing seems a bit silly. You can't trust the webcam to be real or even functional.
And even if you could trust the entire website-to-webcam path end-to-end, you can't trust the image the hardware is reading. There's a reason that other face recognition systems like Windows Hello require that you have an IR camera, so that it knows it's not just looking at a photograph of a person.
It's just a cheap/easy biometric. Retina pattern or fingerprint are better biometrics, but generally require special-purpose hardware for ease of use.
So what stops me from creating a device that registers itself as a webcam natively, but just puts a loop of a pre-recorded video that satisfies the face recognition software?
Stop trying to find solutions to problems that aren't real.
i.e. what people were doing on ChatRoulette for a long time (and probably still are).
If you can be prosecuted for storing someone's diagnostic medical images improperly under HIPAA law, this seems like a VERY risky thing for a company to implement.
Coursera required face verification recently. Once I found out, I had my course fee refunded and have not signed up for another course.
The policy might have been changed? An article I just read says: "taking a photo via webcam, uploading a photo of a picture id issued by the government, and making a record of their typing pattern."
Sounds like spyware.
Do people really think identities are stolen by criminals peeking over shoulders?
And, if so, couldn’t they use a camera, or mirror, or periscope to bypass this software?
Or looking at the screen outside the camera's FOV, which shouldn't be hard to do considering IPS panels have up to 178-degree viewing angles.
For real, good grief. Telescopic lenses and office windows also exist.
Alright, it might be time to finally jump on getting a privacy filter.
No! No! No! How much more intrusive do we need to get, seriously? This is a naive solution to the stated problem. And a bad one at that, too.
> Banks don’t want private account details (like the user’s current balance and credit limits etc) being seen by anybody other than the account holder.
Unless it's an in-person interaction, a face has little security value, because it's not a secret. Getting a photo, or even full motion video of someone often just requires finding their Instagram page.
How secure will this be?
My concern is: what if someone shows my digital photo to the website? Will the framework detect it?
Apple said that they overcame this by using TrueDepth technology (which I guess requires specific hardware).
I like the idea though, but there is a big reason people did not implement this before.
I think this is really about having powerful machine learning tools like face recognition, image recognition, content personalization and recommendation etc. in the browser.
Technologically speaking: yeah, that's a nice feature. Real world: this will be awful. I really don't want any DB to have a photo of me associated with a transaction or authentication. Yes, I do have profile pictures, but allowing a service to get a "stream" of your face will be way worse, and I cannot imagine what would happen if this DB got compromised... anyway... still a nice Black Mirror episode though.
"We noticed that you've looked away from your screen. Please continue watching the advertisement in order to continue!"
"Eye roll detected. Would you like to send feedback to this ad partner?"
"Thanks for watching! Please help keep this channel alive and do not look away during the ads. Bye!"
What happens if I use Inspect Element and remove the styling? :D
We obviously need to make this tech mandatory. You wouldn’t want your children looking at harmful websites, would you?
I see we will soon stop putting stickers on camera holes, but will use screwdrivers to pry them out and remove them. :)
Creepy shit
Oh my god don't
I can see from a privacy standpoint, this may cause some concern.
However, from a credit card processor's point of view, this could be an excellent tool for combating "friendly fraud".
For example: the scenario where a transaction has been processed and, 6 weeks later, it is disputed because the card holder doesn't recognise the transaction. Perhaps the wife used the husband's card whilst he was in the shower, for [insert candy crush clone].
A capture of the users face would definitely help the merchant win the representment against Visa/Mastercard.
In a scenario where goods are being shipped cross-border, let's say from China to the US for a large amount, this could be an extra step where the data hasn't passed a certain threshold and thus further information is required. Having a real-time snapshot and validation to prove the card holder is legitimate ensures the transaction goes through.
Ultimately, I do understand it's about weighing privacy concerns. But that doesn't mean some good can't come out from this.
No. I'm not having a photo of my face and the inside of my home associated with every online purchase. That's ludicrous.
You would still be free to checkout without face ID.
Of course this would incur a slight fee on the final sum, and buyer protections would be void, but you would still be free to opt out :-)
I agree. From a consumer point of view, you are welcome to take that position.
However, my comment was purely from a credit card processor, acquirer or even bank.
At the end of the day, if something like this was introduced, you are free to not comply, pay with an alternative method, or shop elsewhere.
>You are free to not comply, pay with an alternative method or shop elsewhere
You are increasingly not free to choose an alternative, given the increasing centralization of the "too big to fail" finance and tech industries.
From a fraud-prevention POV this is idiotic as well, given how susceptible systems like this are to replay attacks.
Given the resistance in the US to chip-and-pin, I wouldn’t expect “anything-and-facial-rec” to catch on quickly.
Credit card processors and banks could do a lot to combat fraud that they don't do. E.g., realtime 2FA per transaction, which, as a bonus, could give the user a chance to categorize the purchase in their budget.
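For what it's worth, per-transaction confirmation doesn't require exotic tech. Here's a rough sketch of an HMAC-based confirmation code bound to the transaction details, loosely in the spirit of HOTP (RFC 4226); the key handling, message format, and function names are my own assumptions for illustration, not any bank's actual scheme:

```python
import hmac
import hashlib
import time

def transaction_code(secret: bytes, merchant: str, amount_cents: int,
                     timestamp: int, digits: int = 6) -> str:
    # Bind the code to this specific merchant, amount, and 30-second
    # window, so it can't be replayed against a different transaction.
    msg = f"{merchant}|{amount_cents}|{timestamp // 30}".encode()
    digest = hmac.new(secret, msg, hashlib.sha256).digest()
    # Dynamic truncation, as in RFC 4226: pick 4 bytes at an offset
    # taken from the digest's last nibble, mask the sign bit.
    offset = digest[-1] & 0x0F
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The bank's app and server would share `secret`; both compute the code
# and the user simply confirms the one shown at checkout matches.
secret = b"shared-device-secret"
code = transaction_code(secret, "ACME Store", 4999, int(time.time()))
```

The point of binding merchant and amount into the MAC is that, unlike a static card number, a captured code is useless for any other purchase.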
They do exactly the amount of security that they think is most profitable - balancing losses to fraud vs abandoned sales because of inconvenience.
Jumping straight to face recognition is a bit like physical security adding strip searches when they haven't yet bothered visually scanning for weapons.
N26 does real time 2FA for online card purchases. It’s super easy and painless.
Merchants ask for ID on well less than 1% of my physical purchases.