Awful AI

Awful AI is a curated list that tracks current scary uses of AI, hoping to raise awareness of its misuses in society

Artificial intelligence in its current state is unfair and easily susceptible to attacks. Nevertheless, more and more concerning uses of AI technology are appearing in the wild. This list aims to track all of them. We hope that Awful AI can be a platform to spur discussion for the development of possible contestational technology (to fight back!).


Discrimination

HireVue - App that scans your face and tells companies whether you’re worth hiring [summary]

AI-based Gaydar - Artificial intelligence can accurately guess whether people are gay or straight based on photos of their faces, according to new research that suggests machines can have significantly better “gaydar” than humans [summary]

Racist Chat Bots - Microsoft's chatbot Tay spent a day learning from Twitter and began spouting antisemitic messages [summary]

Racist Auto Tag - A Google image recognition program labeled the faces of several black people as gorillas [summary]

PredPol - PredPol, a program for police departments that predicts hotspots where future crime might occur, could potentially get stuck in a feedback loop of over-policing majority black and brown neighborhoods [summary]

COMPAS - A risk assessment algorithm used by courts in the state of Wisconsin to predict the risk of recidivism. Its manufacturer refuses to disclose the proprietary algorithm, so only the final risk assessment score is known. The algorithm is biased against black defendants (even worse than humans) [summary][NYT opinion]

Infer Criminality From Your Face - A program that judges if you’re a criminal from your facial features [summary]

Influencing, disinformation, and fakes

Cambridge Analytica - Cambridge Analytica uses Facebook data to change audience behavior for political and commercial causes [Guardian article]

Deep Fakes - Deep Fakes is an artificial intelligence-based human image synthesis technique. It is used to combine and superimpose existing images and videos onto source images or videos. Deepfakes may be used to create fake celebrity pornographic videos or revenge porn. [AI assisted fake porn]

Fake News Bots - Automated accounts are being programmed to spread fake news. In recent times, fake news has been used to manipulate stock markets, make people choose dangerous health-care options, and manipulate elections, including last year’s presidential election in the U.S. [summary][the role of bots]

Attention Engineering - From Facebook notifications to Snapstreaks to YouTube autoplay, they are all competing for one thing: your attention. Companies prey on our psychology for their own profit [TED Talk]

Social credit systems

Social Credit System - Using a secret algorithm, Sesame Credit constantly scores people from 350 to 950, with ratings based on factors including "interpersonal relationships" and consumer habits [summary][Foreign Correspondent (video)][travel ban]

Vitality - A health insurance company that offers deals based on access to data from fitness trackers [summary]

Surveillance

Gait Analysis - Your gait is highly complex, essentially unique, and hard, if not impossible, to mask in this era of CCTV. It only needs to be recorded once and associated with your identity for you to be tracked in real time. This kind of surveillance is already deployed in China, and multiple people in the West have been convicted on their gait alone. We can no longer stay even modestly anonymous in public.

SenseTime & Megvii - Based on face recognition technology powered by deep learning, SenseFace and Megvii provide integrated solutions for intelligent video analysis, covering target surveillance, trajectory analysis, and population management [summary][forbes][The Economist (video)]

Uber God View - Uber's "God View" let Uber employees see all of the Ubers in a city and the silhouettes of waiting Uber users who had flagged cars - including their names [rides of glory]

Palantir - A billion-dollar startup that focuses on predictive policing, intelligence, and AI-powered military defense systems [summary]


Contestational research

Research to create a less awful and more privacy-preserving AI

Differential Privacy - A formal definition of privacy that allows us to make theoretical guarantees about how much information a computation can leak about any single individual. AI algorithms can be trained to be differentially private (a minimal sketch of the idea follows this list) [original paper]

Privacy-Preservation using Trusted Hardware - AI algorithms that can run inside trusted hardware enclaves (or private blockchains built on top of them) and train without any stakeholder having access to the private data

Fair Machine Learning & Algorithm Bias - A subfield of AI that investigates different fairness criteria and algorithmic bias. A recent best paper (ICLR18), for example, shows that enforcing specific fairness criteria can have a delayed impact on fairness (an example of one such criterion follows this list).
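
To make the differential privacy idea above concrete, here is a minimal sketch (not taken from any listed project) of the classic Laplace mechanism: adding noise calibrated to a query's sensitivity and a privacy budget epsilon yields an epsilon-differentially-private release of an aggregate statistic.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value with Laplace noise of scale sensitivity / epsilon,
    which satisfies epsilon-differential privacy for that query."""
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: a counting query changes by at most 1 when one person's record
# is added or removed, so its sensitivity is 1.
private_count = laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5)
print(private_count)
```

Training differentially private models follows the same principle, for example by clipping and noising per-example gradients during training (as in DP-SGD).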
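
As an illustration of one common fairness criterion mentioned in the fair machine learning entry, the sketch below (toy data, not tied to any listed system) computes the demographic parity gap: the difference in positive-prediction rates between two groups.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between group 0 and group 1."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return abs(rate_0 - rate_1)

# Toy predictions for two demographic groups: a gap of 0 would mean both
# groups receive positive decisions at the same rate.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(preds, groups))  # 0.75 - 0.25 = 0.5
```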

Contestational tech projects

These open-source projects try to spur discourse about awful AI, offer protection from it, or raise awareness of it

Data Selfie - Data Selfie is a browser extension that tracks you while you are on Facebook to show you your own data traces and reveal what machine learning algorithms could predict about your personality based on that data [code]

AdNauseam - AdNauseam is a lightweight browser extension to fight back against tracking by advertising networks. It works like an ad blocker (it is built atop uBlock Origin) but also silently simulates clicks on each blocked ad, confusing trackers as to one's real interests [code]

B.S. Detector - B.S. Detector is a browser extension that searches all links on a given webpage for references to unreliable sources, checking them against a manually compiled list of domains. It then provides visual warnings about the presence of questionable links or the browsing of questionable websites (a minimal sketch of this blocklist approach follows this list) [code]

Snopes.com - Snopes.com was founded by David Mikkelson in 1994 and has since grown into the oldest and largest fact-checking site on the Internet, widely regarded by journalists, folklorists, and laypeople alike as one of the world's essential resources

Facebook Container - Facebook Container isolates your Facebook activity from the rest of your web activity in order to prevent Facebook from tracking you outside of the Facebook website via third-party cookies [code]
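
The blocklist approach behind B.S. Detector can be sketched in a few lines. The domains below are placeholders, not the extension's actual list, and the real extension additionally annotates the page with visual warnings.

```python
from urllib.parse import urlparse

# Placeholder blocklist; the real extension ships a manually curated domain list.
UNRELIABLE_DOMAINS = {"fake-news.example", "clickbait.example"}

def flag_questionable_links(urls):
    """Return the links whose domain appears on the blocklist."""
    flagged = []
    for url in urls:
        domain = urlparse(url).netloc.lower()
        if domain.startswith("www."):
            domain = domain[4:]
        if domain in UNRELIABLE_DOMAINS:
            flagged.append(url)
    return flagged

print(flag_questionable_links([
    "https://www.fake-news.example/story",
    "https://en.wikipedia.org/wiki/Fact_checking",
]))  # ['https://www.fake-news.example/story']
```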

License

CC0

To the extent possible under law, David Dao has waived all copyright and related or neighboring rights to this work.