In the context of a survey, personally identifiable information (PII) refers to variables that can, either on their own or in combination with other variables, be used to identify a single surveyed individual with reasonable certainty. During all steps of research and fieldwork, research teams must protect PII through encryption and de-identification. This page explains how to identify PII and how to calculate its disclosure risk.
Read First
- All PII must be stored in an encrypted folder.
- PII should be masked, encoded, or removed from the working dataset and from any shared or published datasets; a minimal sketch follows this list. See de-identification for details on how to de-identify data.
- No PII can ever be publicly released without explicit consent. Researchers must ensure that this data remains private and safely stored.
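As a minimal sketch of the masking step above, the Python code below (using pandas, with hypothetical column names such as respondent_name, phone_number, gps_lat, and gps_lon) drops direct identifiers from a working copy of the data and adds an anonymous respondent ID. Which variables actually need to be removed or encoded depends on your survey.

```python
# Minimal de-identification sketch (hypothetical column names; adapt to your survey).
import pandas as pd

# Columns treated as direct identifiers in this example.
PII_COLUMNS = ["respondent_name", "phone_number", "gps_lat", "gps_lon"]

def deidentify(raw: pd.DataFrame) -> pd.DataFrame:
    """Return a working copy with direct identifiers dropped
    and an anonymous respondent ID added."""
    working = raw.copy()
    # Assign an arbitrary anonymous ID; if re-identification is ever needed,
    # keep the name-to-ID key in a separate, encrypted file.
    working["respondent_id"] = range(1, len(working) + 1)
    # Drop only the PII columns that are present in this dataset.
    present = [col for col in PII_COLUMNS if col in working.columns]
    return working.drop(columns=present)

# Toy example.
raw = pd.DataFrame({
    "respondent_name": ["A. Banda", "C. Daka"],
    "phone_number": ["555-0100", "555-0101"],
    "gps_lat": [-13.13, -13.14],
    "gps_lon": [27.85, 27.86],
    "main_crop": ["maize", "cassava"],
})
print(deidentify(raw))
```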
Personally Identifiable Information
Common PII variables include:
- Names of survey respondents, household members, enumerators, and other individuals
- Names of schools, clinics, villages and/or other administrative units (depending on the survey)
- Date of birth
- GPS coordinates
- Contact information
- Record identifiers (e.g. social security number, process number, medical record number, national clinic code, license plate, IP address)
- Pictures of individuals or houses
Depending on survey context, the following variables may also be PII:
- Age
- Gender
- Ethnicity
- Grades, salary, job position
These lists aren’t exhaustive: what exactly is PII depends on the context of each survey. For example, if a survey covers a small farming community, variables such as plot size and crops cultivated could be combined to identify an individual household and, as such, would be PII. Administrative units could also be considered PII if there are few individuals in each of them.
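To make the farming-community example concrete, the sketch below (Python with pandas; the column names plot_size_ha and main_crop are hypothetical) counts how many households share each combination of candidate quasi-identifiers, so that combinations held by only a few households can be flagged as potentially identifying.

```python
# Flag quasi-identifier combinations shared by few households
# (hypothetical column names; adapt the list to your survey).
import pandas as pd

def rare_combinations(df, quasi_ids, threshold=5):
    """Return combinations of quasi-identifier values held by
    fewer than `threshold` households."""
    counts = df.groupby(quasi_ids).size().reset_index(name="n_households")
    return counts[counts["n_households"] < threshold]

# Toy example: the large coffee plot belongs to a single household,
# so that combination alone could identify it.
survey = pd.DataFrame({
    "plot_size_ha": [0.5, 0.5, 0.5, 12.0],
    "main_crop": ["maize", "maize", "maize", "coffee"],
})
print(rare_combinations(survey, ["plot_size_ha", "main_crop"], threshold=3))
```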
Disclosure Risk
To calculate disclosure risk, researchers typically define a minimum number of individuals to whom a given value of a variable must apply for that variable to be considered safe to disclose. If the threshold is not met, the variable is considered PII. For example, at a threshold of 10, if a school has fewer than 10 students of a certain age, then age is considered PII, since it could be combined with other information to identify those students. The appropriate threshold depends on the context of the survey. See publishing data for more details.
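As one way to apply that rule, the sketch below (Python with pandas; school_id and age are hypothetical column names, and 10 is the illustrative threshold from the example above) checks whether every value of a variable is shared by at least the threshold number of individuals within each group; if not, the variable should be treated as PII for that dataset.

```python
# Threshold-based disclosure check for one variable within groups
# (hypothetical column names; threshold of 10 as in the example above).
import pandas as pd

def safe_to_disclose(df, group, variable, threshold=10):
    """True if every value of `variable` is shared by at least
    `threshold` individuals within every group."""
    cell_counts = df.groupby([group, variable]).size()
    return bool((cell_counts >= threshold).all())

# Toy example: only one 15-year-old in school B, so age fails the threshold.
students = pd.DataFrame({
    "school_id": ["A"] * 12 + ["B"],
    "age":       [14] * 12 + [15],
})
print(safe_to_disclose(students, "school_id", "age"))  # False -> treat age as PII
```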
The US Census Bureau has also published this brief on disclosure avoidance, which lists the various forms of disclosure and steps to avoid them.
Related Pages
Click here for pages that link to this topic.
Additional Resources
- DIME Analytics (World Bank), Encryption 101
- DIME Analytics (World Bank), Research Ethics
- J-PAL, pii_scan: A Stata program to scan for personally identifiable information (PII)
- Matthews and Harel (University of Connecticut), Data confidentiality: A review of methods for statistical disclosure limitation and methods for assessing privacy
- Natalie Shlomo (University of Southampton), Releasing Microdata: Disclosure Risk Estimation, Data Masking and Assessing Utility