
I used to work for Uber, but not on this feature or anything related.

Apropos of the article: as a programmer on this feature, what you are actually asked to build is a greybanning engine. It takes various signals (geofence, denylist of phone numbers/emails/device identifiers/payment methods, etc.) and uses them to calculate a score that triggers a greybanning policy. The policy may be that the cars in the app are now fake, the ride never comes, your credit card is "denied", etc.

Nothing illegal or unethical about this feature, as written, but it is a "dual-use" technology.
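A minimal sketch of what such a scoring engine might look like. Every name, rule, and weight here is hypothetical, invented for illustration; nothing in it is Uber's actual code:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Account:
    device_id: str
    lat: float
    lon: float

# Hypothetical denylist of device identifiers.
DENYLISTED_DEVICES = {"dev-123"}

def on_denylist(a: Account) -> bool:
    return a.device_id in DENYLISTED_DEVICES

def in_geofence(a: Account) -> bool:
    # Toy bounding box standing in for a real geofence.
    return 40.0 <= a.lat <= 41.0 and -74.5 <= a.lon <= -73.5

# Each rule pairs a predicate with a weight; the weights are made up.
RULES: list[tuple[Callable[[Account], bool], float]] = [
    (on_denylist, 1.0),
    (in_geofence, 0.6),
]

def greyban_score(a: Account) -> float:
    # Sum the weights of every rule that fires for this account.
    return sum(weight for pred, weight in RULES if pred(a))

def decide(a: Account, threshold: float = 0.8) -> str:
    # Above the threshold, some greybanning policy kicks in
    # (fake cars, rides that never arrive, card "denied", ...).
    return "greyban" if greyban_score(a) >= threshold else "serve"
```

The key property the comment describes: the engine is policy-agnostic. The programmer writes the scoring machinery; which predicates, weights, and thresholds get loaded is decided elsewhere, by whoever administers the market.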

The feature has been used to literally save lives. There were taxi-affiliated people in South America who would call an Uber and then, at best, trash the car and beat the driver; at worst, they'd kill the driver. Those people need to be greybanned, along with scammers, criminals, and abusive people of all sorts.

The local market administrators, however, might well ban users they know to be police ticketing the drivers, ban any account signed up from the police station, ban city credit cards, etc.

You, as the programmer on this feature, can't defend against that unethical use of it.

If you work at an insurance company and are asked to write the rules engine but not the rules, the same thing applies to you.



Thanks for the info! I've always wondered about the inside dev perspective on it.



