microsoft / presidio
Context aware, pluggable and customizable data protection and PII data anonymization service for text and images.
Presidio (origin from Latin praesidium, ‘protection, garrison’) helps to ensure sensitive text is properly managed and governed. It provides fast analytics and anonymization for sensitive text such as credit card numbers, names, locations, social security numbers, bitcoin wallets, US phone numbers and financial data. Presidio analyzes the text using predefined or custom recognizers to identify entities, patterns, formats, and checksums with relevant context. Presidio leverages Docker and Kubernetes for workloads at scale.
Presidio can be integrated into any data pipeline for intelligent PII scrubbing. It is open-source, transparent and scalable. Additionally, PII anonymization use-cases often require a different set of PII entities to be detected, some of which are domain or business specific. Presidio allows you to customize or add new PII recognizers via API or code to best fit your anonymization needs.
Try Presidio with your own data
Unstructured text anonymization
Presidio automatically detects Personally Identifiable Information (PII) in unstructured text, anonymizes it based on one or more anonymization mechanisms, and returns a string with no personally identifiable data. For example:
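An illustrative before/after, using a shortened version of the text from Sample 1 below (the placeholder labels such as <PERSON> are assumptions; the actual replacement values depend on the anonymizer template in use):

```
Input:  John Smith lives in New York. I called him on (212) 555-1234.
Output: <PERSON> lives in <LOCATION>. I called him on <PHONE_NUMBER>.
```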
For each PII entity, presidio returns a confidence score:
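A sketch of what such a response could look like, with assumed entity names and scores for illustration only (see the Presidio documentation for the exact response schema):

```json
[
  { "field": { "name": "PERSON" },       "score": 0.85 },
  { "field": { "name": "LOCATION" },     "score": 0.85 },
  { "field": { "name": "PHONE_NUMBER" }, "score": 1.0  }
]
```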
Text anonymization in images (beta)
Presidio uses OCR to detect text in images. It further allows the redaction of the text from the original image.
More information can be found in the Presidio documentation.
Presidio accepts multiple sources and targets for data anonymization. Specifically:
- Storage solutions
- Databases
- Streaming platforms
- REST requests
It then can export the results to file storage, databases or streaming platforms.
Presidio leverages:
Note: The examples below use HTTPie. Replace <api-service-address>, <my-project> and the template names with your own values.
Sample 1: Simple text analysis
echo -n '{"text":"John Smith lives in New York. We met yesterday morning in Seattle. I called him before on (212) 555-1234 to verify the appointment. He also told me that his drivers license is AC333991", "analyzeTemplate":{"allFields":true} }' | http <api-service-address>/api/v1/projects/<my-project>/analyzeSample 2: Create reusable templates
Create an analyzer template:
echo -n '{"allFields":true}' | http <api-service-address>/api/v1/templates/<my-project>/analyze/<my-template-name>Analyze text:
echo -n '{"text":"my credit card number is 2970-84746760-9907 345954225667833 4961-2765-5327-5913", "AnalyzeTemplateId":"<my-template-name>" }' | http <api-service-address>/api/v1/projects/<my-project>/analyzeSample 3: Detect specific entities
Create an analyzer template with a specific set of entities:
echo -n '{"fields":[{"name":"PHONE_NUMBER"}, {"name":"LOCATION"}, {"name":"DATE_TIME"}]}' | http <api-service-address>/api/v1/templates/<my-project>/analyze/<my-template-name>Analyze text:
echo -n '{"text":"We met yesterday morning in Seattle and his phone number is (212) 555 1234", "AnalyzeTemplateId":"<my-template-name>" }' | http <api-service-address>/api/v1/projects/<my-project>/analyzeSample 4: Custom anonymization
Sample 4: Custom anonymization

Create an anonymizer template (this template replaces values in PHONE_NUMBER and redacts CREDIT_CARD):
echo -n '{"fieldTypeTransformations":[{"fields":[{"name":"PHONE_NUMBER"}],"transformation":{"replaceValue":{"newValue":"\u003cphone-number\u003e"}}},{"fields":[{"name":"CREDIT_CARD"}],"transformation":{"redactValue":{}}}]}' | http <api-service-address>/api/v1/templates/<my-project>/anonymize/<my-anonymize-template-name>Anonymize text:
echo -n '{"text":"my phone number is 057-555-2323 and my credit card is 4961-2765-5327-5913", "AnalyzeTemplateId":"<my-analyze-template-name>", "AnonymizeTemplateId":"<my-anonymize-template-name>" }' | http <api-service-address>/api/v1/projects/<my-project>/anonymizeSample 5: Add custom PII entity recognizer
Sample 5: Add custom PII entity recognizer

This sample shows how to add a new regex recognizer via API. This simple recognizer identifies the word "rocket" in a text and tags it as a "ROCKET" entity.
Add a custom recognizer:
echo -n {"value": {"entity": "ROCKET","language": "en", "patterns": [{"name": "rocket-regex","regex": "\\W*(rocket)\\W*","score": 1}]}} | http <api-service-address>/api/v1/analyzer/recognizers/rocketAnalyze text:
echo -n '{"text":"They sent a rocket to the moon!", "analyzeTemplate":{"allFields":true} }' | http <api-service-address>/api/v1/projects/<my-project>/analyzeSample 6: Image anonymization
Sample 6: Image anonymization

Create an anonymizer image template (this template redacts values with black color):
echo -n '{"fieldTypeGraphics":[{"graphic":{"fillColorValue":{"blue":0,"red":0,"green":0}}}]}' | http <api-service-address>/api/v1/templates/<my-project>/anonymize-image/<my-anonymize-image-template-name>Anonymize image:
```sh
http -f POST <api-service-address>/api/v1/projects/<my-project>/anonymize-image detectionType='OCR' analyzeTemplateId='<my-analyze-template-name>' anonymizeImageTemplateId='<my-anonymize-image-template-name>' imageType='image/png' file@~/test-ocr.png > test-output.png
```

The deployment scripts below install Presidio on your Kubernetes cluster. Prerequisites:
- A Kubernetes cluster with RBAC enabled.
- kubectl installed. Verify you can communicate with the cluster by running `kubectl version`.
- A local helm client.
- A recent clone of the presidio repo on your local machine.
Navigate into <root>\deployment from the command line.
If you have helm installed but haven't run helm init, execute deploy-helm.sh from the command line. It will install tiller (the helm server side) on your cluster and grant it sufficient permissions.
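A minimal invocation, assuming the script is run from the deployment folder:

```sh
./deploy-helm.sh
```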
Grant the Kubernetes cluster access to the container registry
If you already have helm and tiller configured, or if you installed them in the previous step, execute deploy-presidio.sh from the command line as follows:
```sh
deploy-presidio.sh
```

The script will install Presidio on your cluster using the default values.
Note: You can edit the file to use your own container registry and image.
| Module | Feature | Status |
|---|---|---|
| API | HTTP input | |
| Scanner | MySQL | |
| Scanner | MSSQL | |
| Scanner | PostgreSQL | |
| Scanner | Oracle | |
| Scanner | Azure Blob Storage | |
| Scanner | S3 | |
| Scanner | Google Cloud Storage | |
| Streams | Kafka | |
| Streams | Azure Event Hub | |
| Datasink (output) | MySQL | |
| Datasink (output) | MSSQL | |
| Datasink (output) | Oracle | |
| Datasink (output) | PostgreSQL | |
| Datasink (output) | Kafka | |
| Datasink (output) | Azure Event Hub | |
| Datasink (output) | Azure Blob Storage | |
| Datasink (output) | S3 | |
| Datasink (output) | Google Cloud Storage | |
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.