United States: Privacy

The United States has adopted myriad federal and state privacy laws that give protections to individuals regarding the collection and use of personal information by both the public and private sectors. These laws are generally targeted to address specific instances of abuse or perceived market failures, or to protect particularly sensitive information, such as health information, and groups deemed worthy of special protections, such as children. The US approach stands in sharp contrast to the approach found in over 100 other countries around the world that have adopted omnibus privacy laws.

Legal and historical reasons largely account for the differences in privacy approaches. The US approach to the regulation of information as a general matter relies on the ‘marketplace of ideas’. Reflected in the First Amendment to the US Constitution, the United States has a long-standing history of rectifying inaccurate and inappropriate speech with more speech rather than less. As a result of these legal traditions, the United States focuses on the misuse of information rather than prohibiting or strictly regulating the collection or use of personal information. In contrast to the privacy regimes in other countries, the focus of any privacy inquiry in the United States is whether an individual can be harmed by the misuse of personal information. The premise under US law is not that the mere collection of personal information is improper and must be justified; rather, under US law an organisation usually can collect any information it desires, but it may not misuse that information to commit fraud or harm an individual.

In addition, information privacy regulation in the United States often functions as a patchwork, varying across different industries and states. Historically, individuals, government and industry shared a belief that a ‘one size fits all’ legislative approach would lack the necessary precision to avoid interfering with the benefits resulting from the free flow of information. Similarly, individual states, rather than the federal government, often pass their own versions of sectoral laws and enact legislation aimed at correcting specific instances of misuse or preventing harm in limited situations.

Federal law and regulation

The United States has a federal system in which laws are enacted at the levels of national government, state government and local government (eg, cities and counties). In general, privacy and information security laws are enacted at the state and national levels of government.

The federal government, for example, has enacted detailed privacy and information security rules that apply to financial institutions regarding the use of information relating to individual consumers, even though the states are also authorised to regulate these same entities (with certain exceptions) with respect to the same information. As a result, an organisation can be subject to the laws of the state in which it is located, the laws of other states in which it conducts activity, and all of the federal laws regulating those activities. Moreover, state laws continue to be enforceable even if a national law regulates the same conduct, unless a conflict between the laws cannot be reconciled under principles of constitutional law. In that case, the national law prevails over, or pre-empts, the state or local law.

Sectoral regulations

Regulation in the United States focuses on information viewed as particularly sensitive on a sectoral basis, such as financial information, health information, consumer report information, information about children and information that can be used for identity theft or fraud.[1]

Financial information

Title V of the Gramm–Leach–Bliley Act (1999) (GLBA) imposes privacy obligations on financial institutions with respect to the privacy of non-public personal information about consumers, including limiting disclosures of information to non-affiliated third parties.[2] The GLBA also directs various financial regulators to prescribe information security standards for financial institutions with respect to the security of customer records and information.[3]

The GLBA is implemented functionally by certain financial regulators that have oversight authority over different types of financial institutions. In general, the Consumer Financial Protection Bureau (CFPB) has primary authority to implement the GLBA with respect to privacy, although this authority does not extend to certain types of financial institutions, such as broker-dealers. With respect to security, various regulators implement and enforce information security standards, including the federal banking agencies, the National Credit Union Administration, the Securities and Exchange Commission (SEC) and the Federal Trade Commission (FTC).

Financial Privacy Rule

The CFPB has issued the Financial Privacy Rule that governs the privacy practices of most financial institutions.[4]

In general, the Financial Privacy Rule requires financial institutions to give customers privacy notices and prescribes requirements relating to the content, timing and methods of delivering the notices. For example, the Financial Privacy Rule requires that a financial institution provide its ‘consumers’ and ‘customers’ with its privacy policy in various instances (eg, an annual notice to customers).[5] In addition, the Financial Privacy Rule prohibits a financial institution from disclosing ‘nonpublic personal information’ about a ‘consumer’ to a non-affiliated third party unless: the institution has provided the consumer with notice and an opportunity to opt out of the disclosure of his or her information and the consumer has not opted out; or an exception applies that permits the financial institution to disclose the information.[6] In addition, the Financial Privacy Rule places limitations on the ability of any party that receives non-public personal information from a financial institution to re-disclose and reuse the information.[7]
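
By way of illustration only, the opt-out logic described above can be sketched in Python. The field names and the single ‘exception’ flag are hypothetical simplifications; the actual rule contains numerous detailed exceptions and definitions.

```python
from dataclasses import dataclass


@dataclass
class DisclosureRequest:
    """Hypothetical record describing a proposed disclosure of non-public
    personal information about a consumer to a non-affiliated third party."""
    notice_provided: bool      # the consumer received the privacy notice
    opt_out_offered: bool      # the consumer was given an opportunity to opt out
    consumer_opted_out: bool   # the consumer exercised the opt-out
    exception_applies: bool    # simplification: any applicable rule exception


def disclosure_permitted(request: DisclosureRequest) -> bool:
    """Rough approximation of the Financial Privacy Rule's opt-out model:
    disclosure is allowed if an exception applies, or if the consumer was
    given notice and an opt-out opportunity and did not opt out."""
    if request.exception_applies:
        return True
    return (request.notice_provided
            and request.opt_out_offered
            and not request.consumer_opted_out)


# Notice and opt-out offered, consumer did not opt out -> permitted.
print(disclosure_permitted(DisclosureRequest(True, True, False, False)))  # True
# Consumer opted out and no exception applies -> not permitted.
print(disclosure_permitted(DisclosureRequest(True, True, True, False)))   # False
```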

Safeguards Rule

The GLBA also requires the FTC and other federal agencies to set standards regarding the administrative, technical and physical safeguards that financial institutions must implement to protect customer information (the Safeguards Rule).[8] Each agency is charged with issuing its own Safeguards Rule that imposes specific security requirements on institutions subject to its jurisdiction, but these rules are similar in many respects.

For example, the FTC’s Safeguards Rule directs all covered financial institutions to ‘develop, implement, and maintain a comprehensive information security program that . . . contains administrative, technical, and physical safeguards that are appropriate to [the institution’s] size and complexity, the nature and scope of [its] activities, and the sensitivity of any customer information at issue’.[9] As part of this information security programme, the Safeguards Rule requires financial institutions to, among other things, conduct a risk assessment of the reasonably foreseeable internal and external threats to the security, confidentiality and integrity of customer information to inform the programme’s creation, designate employees to coordinate the programme, train employees, contractually require service providers to safeguard personally identifiable information about customers, test and monitor the programme’s effectiveness, and update the programme in response to testing results and changed circumstances.[10]

Health information

The Health Insurance Portability and Accountability Act of 1996

The Health Insurance Portability and Accountability Act of 1996 (HIPAA),[11] as amended by the Health Information Technology for Economic and Clinical Health Act (HITECH) under the American Recovery and Reinvestment Act of 2009,[12] regulates the use and disclosure of certain types of ‘individually identifiable health information’ (ie, protected health information (PHI)).[13] Healthcare providers, health plans and healthcare clearing houses (covered entities), as well as their business associates, may use or disclose PHI only for treatment, payment or healthcare operations, unless a particular exception applies or the patient provides written authorisation. Under HIPAA, patients have a right to access and amend their PHI held by a covered entity, and to receive an accounting of past disclosures and annual privacy notices.

HIPAA requires that covered entities and their business associates implement and maintain a broad range of administrative, technical and physical safeguards to protect the confidentiality, availability and integrity of electronic PHI (ePHI).[14] Covered entities and business associates are also required to carry out a periodic risk analysis of the threats and vulnerabilities facing their ePHI.[15]

Business associates must report actual and potential breaches involving PHI to the covered entity.[16] The covered entity must determine whether a notifiable breach occurred, which involves assessing the risk that the privacy or security of the PHI was compromised.[17] Covered entities must report breaches to impacted individuals, the US Department of Health and Human Services Office for Civil Rights, and in certain cases, the media.[18]

Genetic information

The Genetic Information Nondiscrimination Act (2008) (GINA)[19] prohibits both employers and group health plans from discriminating on the basis of genetic information relating to employees (including family medical histories). GINA also places significant limits on the ability of employers and group health plans to collect genetic information or use any such information once collected. GINA adopts a broad definition of ‘genetic information’ that encompasses not only genetic test results relating to, but also any manifestation of disease or disorder in, an individual and his or her family members (the individual’s blood relatives to the fourth degree of relation and any person who is a dependent of that individual as a result of marriage, birth or adoption).[20]

Consumer report information

The Fair Credit Reporting Act (FCRA)[21] regulates the use and disclosure of consumer report information compiled and disclosed by consumer reporting agencies. For example, the FCRA and counterpart state laws restrict the ability of employers to obtain and use consumer reports in connection with pre-employment screening and internal investigations. The FCRA also requires other prospective users of consumer reports, such as lenders, insurers and landlords, to have a permissible purpose before obtaining a consumer report. In addition, the FCRA imposes obligations, focused on ensuring the accuracy of information that ultimately will be included in consumer reports, on companies that furnish information about consumers to consumer reporting agencies. Similarly, the FCRA imposes extensive obligations and limitations on consumer reporting agencies with respect to, among other things, the type of information that may be included in consumer reports.

Children’s information

The Children’s Online Privacy Protection Act of 1998[22] and the FTC’s rule promulgated pursuant to the Act[23] (together, COPPA) apply to operators of websites or other online services that are directed to children under the age of 13 or that knowingly collect personal information from children under the age of 13.[24] Such operators must provide a privacy notice that explains their practices regarding children’s personal information.[25] The collection of personal information from a child, in most instances, requires the prior opt-in consent of the child’s parent or legal guardian.[26] COPPA imposes additional use, security, access, deletion and other obligations on covered operators. Interestingly, COPPA also applies to foreign websites that collect personal information from children in the United States.[27] Violation of the COPPA rule can give rise to civil penalties of up to US$42,530 per violation.[28]

Regulatory focus on electronic information

US regulation on information misuse focuses on information stored electronically because of a perception that such information can be misused more easily and exploited on a larger scale, causing greater problems. Other countries with omnibus laws tend not to distinguish among the forms in which information is maintained.

Computer Fraud and Abuse Act

The Computer Fraud and Abuse Act (CFAA) imposes criminal and civil liability on any individual who accesses a computer without authorisation, or exceeds authorised access, and thereby obtains information from any protected computer, a broad term encompassing any computer ‘used in or affecting interstate or foreign commerce’.[29] The CFAA requires typical plaintiffs to show losses of at least US$5,000 in any one-year period.[30] This can be an aggregate figure, including lost revenue associated with a service interruption and the company’s cost of investigating the violation.[31]
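
As a rough, purely illustrative sketch (the dollar figures below are hypothetical), the US$5,000 threshold can be met by aggregating different categories of loss incurred in a one-year period:

```python
# Hypothetical loss figures used only to illustrate aggregation toward the
# CFAA's US$5,000 civil threshold for a one-year period.
CFAA_LOSS_THRESHOLD_USD = 5_000

losses_usd = {
    "forensic_investigation": 3_200,       # assumed cost of investigating the intrusion
    "lost_revenue_from_downtime": 2_500,   # assumed revenue lost during a service interruption
}

total_loss = sum(losses_usd.values())
print(f"Aggregate loss: ${total_loss:,}; threshold met: {total_loss >= CFAA_LOSS_THRESHOLD_USD}")
```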

US federal courts have developed varying approaches to analysing authorised access in the context of terms of use violations and third-party access (ie, password-sharing, in which a user shares his or her log-in credentials with a third party who then accesses the network using those credentials). In Facebook, Inc v Power Ventures, Inc, the Ninth Circuit declared that a mere ‘violation of the terms of use of a website—without more—cannot be the basis for liability under the CFAA’.[32] However, when a defendant’s ‘permission to access a computer . . . has been revoked explicitly . . . technological gamesmanship or the enlisting of a third party to aid in access will not excuse liability.’[33] Other courts have found that violating terms of use related to log-ins can run afoul of the CFAA if the defendant knew about the term in question. For instance, where a licensing agreement prohibited sharing log-in credentials, the Eleventh Circuit found that third-party access by a defendant who knew about the prohibition violated the CFAA.[34] In contrast, the Eastern District of Virginia found no CFAA violation when a third party flouted the terms of use by downloading screenshots. Because users did not need to accept the user agreement every time they logged in, there was no evidence that the third party, logging in with the user’s shared credentials, knew about the terms of use.[35]

Electronic Communications Privacy Act

The Electronic Communications Privacy Act of 1986[36] (ECPA) prohibits unauthorised interception or disclosure of wire, oral and electronic communications.[37] However, the ECPA’s prohibitions on interception do not apply if the communication was intercepted in the ordinary course of business or if one of the parties consented to the interception.[38] Under certain conditions, these exceptions may permit service providers to intercept customer communications.[39]

Regulatory focus on deception and unfairness

US laws also protect against unfair and deceptive practices. The FTC has broad regulatory authority over most companies. Relying on its powers to regulate unfair and deceptive trade practices, the FTC prosecutes businesses that, for example, fail to comply with their public statements regarding their privacy protections and practices, or fail to provide reasonable security protections for sensitive personal information. Thus, under US law, one of the key ways in which companies are held accountable is if they make a public promise that they do not in fact keep or if they fail to disclose an information practice that is material to consumers.

Federal Trade Commission Act

Section 5 of the Federal Trade Commission Act (FTC Act) prohibits ‘unfair or deceptive acts or practices in or affecting commerce’ and grants the FTC the authority to prosecute such unlawful activity against all companies except those listed as exceptions in section 5 (such as banks and telecommunication providers).[40] The FTC has relied heavily on this section, bringing actions against companies allegedly engaged in ‘deceptive’ or ‘unfair’ practices. For example, a company’s failure to abide by its own privacy policy may be deemed a deceptive practice. In addition, even if a company follows its privacy policy, its practice may be deemed ‘unfair’ if the FTC determines that the company’s actions cause substantial harm to a consumer that he or she could not reasonably have avoided, with no countervailing benefits to consumers or competition.[41] The FTC can impose injunctive relief for violations of section 5, which could include equitable monetary relief (eg, disgorgement of ill-gotten gains or consumer redress), by bringing an action in federal district court. The FTC may also seek voluntary compliance for alleged violations by entering into a consent order with a company that will impose FTC oversight and be in effect for 20 years. The FTC may enter into consent orders of its own accord, but penalties for violating such orders are assessed by a federal district court in a suit brought to enforce the FTC’s order.

Deceptive business practices: deviations from stated privacy policies

The FTC’s interpretation of ‘deceptive’ practices includes noncompliance with, and material omissions in, stated privacy policies as well as failure to obtain affected individuals’ consent when materially changing those policies and seeking to apply the change retroactively, as demonstrated by its first-of-its-kind 2004 enforcement action against Gateway Learning Corporation (Gateway).[42] The Gateway website privacy policy had stated that the company would ‘not sell, rent, or loan any personally identifiable information regarding our consumers with any third party . . . [without the] customer’s explicit consent’.[43] Gateway indicated that it would inform consumers and provide the opportunity to ‘opt out’ if the policy changed.[44] According to the FTC, however, Gateway rented consumers’ personal information to target marketers for use in mailings and telemarketing calls, without such consumers’ consent. The FTC charged Gateway with violating section 5 of the FTC Act because its practice of sharing consumers’ personal information in this way violated its privacy policy and was thus deceptive.

The FTC has also taken action against companies for falsely claiming compliance with specific privacy frameworks and using deceptive tactics to collect information. For instance, in 2019, background screening company SecurTest, Inc reached a settlement with the FTC over allegations that it falsely represented itself as a certified participant in the Swiss–US Privacy Shield and EU–US Privacy Shield agreements.[45] The FTC issued warning letters to 15 companies making similar claims, instructing them to remove these claims from their websites and privacy policies.[46] The FTC has also entered settlement agreements with companies over allegedly deceptive data harvesting tactics. For example, the FTC alleged that information analytics company Cambridge Analytica falsely claimed that it did not collect Facebook users’ personal information. Similarly, the FTC alleged that digital advertising company Turn Inc had deceived consumers about the extent of online tracking in which it engaged.[47]

Unfair business practices: inadequate information security measures and surreptitious monitoring

Under the ‘unfairness’ prong of section 5, the FTC may take action against companies for, among other things, allegedly failing to adequately safeguard consumer information and monitoring consumers without their consent. FTC guidance collates lessons learned from its enforcement actions on how to design products and services in a privacy-protective way.[48] This guidance emphasises that businesses should:

  • collect only the personal information they need;
  • retain personal information only as long as they have a legitimate business need for it;
  • restrict access to sensitive personal information and limit administrative access, which may allow a user to make system-wide changes;
  • require strong passwords and authentication procedures;
  • secure sensitive personal information during transmission and storage;
  • securely dispose of sensitive personal information;
  • segment and monitor networks;
  • verify and test privacy and security features in new products; and
  • contractually require service providers to implement appropriate security measures and verify their compliance.[49]

This guidance applies to both electronic information (proper encryption of sensitive information, securely wiping hard drives, etc) and tangible items (shredding prescriptions, not leaving laptops containing sensitive files in cars, etc).[50]
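
To make the storage-related items of this guidance concrete, the following sketch shows one way sensitive personal information might be encrypted before being written to disk or a database. It assumes the third-party `cryptography` package and omits key management (key rotation, storage in a secrets manager), which is essential in practice.

```python
# Illustrative only: symmetric encryption of a sensitive field before storage,
# using the third-party `cryptography` package (assumed to be installed).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, load the key from a secrets manager
cipher = Fernet(key)

ssn = b"123-45-6789"                 # hypothetical sensitive data element
token = cipher.encrypt(ssn)          # ciphertext is what gets written to disk or a database

assert cipher.decrypt(token) == ssn  # only holders of the key can recover the value
print(token.decode()[:24] + "...")   # stored value is unreadable without the key
```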

The FTC has considered inadequate information security a violation of section 5’s prohibition against ‘unfair or deceptive’ practices. For instance, after the 2017 Equifax breach affecting 147 million people, the FTC brought an enforcement action against the consumer credit reporting agency for its alleged failure ‘to take reasonable steps to secure its network’.[51] Although the FTC has at times taken action against companies for inadequate security measures even in the absence of express promises of protection to consumers,[52] Equifax did have a privacy policy at the time of the breach, claiming it protected consumer information with ‘reasonable physical, technical and procedural safeguards’. The FTC nevertheless alleged that the company ‘failed to implement basic security measures’. Specifically, the FTC criticised Equifax for storing sensitive information such as passwords and Social Security numbers in plain text, and for failing to patch security vulnerabilities, segment its network so that hackers who accessed one part could not access the rest, and install robust programmes to detect intrusions.[53]
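
The plain-text password storage criticised in the Equifax complaint contrasts with the common practice of storing only salted, iterated hashes. A minimal standard-library sketch follows (the parameters are hypothetical; production systems typically use a dedicated password-hashing scheme such as bcrypt or Argon2):

```python
# Illustrative only: salted, iterated password hashing using the standard
# library, in contrast to storing passwords in plain text.
import hashlib
import hmac
import os


def hash_password(password: str, salt: bytes | None = None) -> tuple[bytes, bytes]:
    salt = salt or os.urandom(16)                     # per-user random salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest                               # store both; never the password


def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    return hmac.compare_digest(hash_password(password, salt)[1], stored_digest)


salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```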

FTC information security settlement agreements under section 5 often require companies to establish, implement and maintain comprehensive security programmes that include key elements required under the GLBA’s Safeguards Rule.[54] For instance, in June 2019, the FTC entered a settlement agreement with a provider of auto dealer software that, the FTC alleged, failed to implement reasonable security measures for consumer personal information and thereby engaged in ‘unfair and deceptive’ practices, as well as violations of the GLBA’s Safeguards Rule. The settlement, which will remain in effect for 20 years, obliged the software provider to: implement a comprehensive information security programme; obtain security programme assessments from third parties approved by the FTC; and file those assessments with, and certify compliance to, the FTC biennially for 20 years.[55]

The FTC has also alleged that companies have engaged in unfair practices through surreptitious monitoring of consumers. For instance, in 2013, the FTC took action against DesignerWare, LLC, which provided software to rent-to-own companies to assist with recovering stolen computers.[56] According to the FTC, DesignerWare installed monitoring and geolocation tracking software on rented computers, then used that software to take screenshots of sensitive information, log keystrokes and take webcam photographs – all without notifying or obtaining the consent of users.[57] In the consent order, DesignerWare and its affiliates agreed to: stop using monitoring software on consumers or providing third parties with such software on computers rented to consumers; and limit its use of geolocation tracking software to users who expressly consent to the technology before renting the computers and re-notify users prior to each use.[58]

State laws

Many states in the United States have also passed privacy laws. These states often have constitutions enshrining broad privacy rights, in addition to specific laws that focus on regulating sensitive information in certain sectors, information misuse and misrepresentations about how information may be used or protected.

State constitutions

Several states guarantee the right to privacy in their state constitutions. For example, California’s state constitution includes an explicit right to privacy[59] and the New Jersey Supreme Court has recognised that ‘[w]ith its declaration of the right to life, liberty and the pursuit of happiness, Article I, Section 1 of the New Jersey Constitution encompasses the right to privacy.’[60]

Broad laws and regulations

Some states, including California and Nevada, have implemented broad privacy statutes. After California passed its landmark privacy bill in 2018, 17 other states introduced similar bills, though some have narrower scopes.[61]

California Consumer Privacy Act

With enforcement slated to begin in 2020, the California Consumer Privacy Act (CCPA) establishes a set of protections for California residents.[62] The CCPA imposes obligations on for-profit businesses that meet any one of three size thresholds: annual gross revenues greater than US$25 million; handling the personal information of 50,000 or more California residents, households or devices per year; or deriving at least half of their annual revenue from selling the personal information of California residents.[63] The CCPA also defines personal information broadly, as ‘information that identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household’.[64] At a minimum, this includes information such as names, contact information, IP addresses, search history, browsing history and location information; however, it excludes de-identified or aggregated information and certain information lawfully collected from government records.[65]
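
For illustration, the three alternative coverage thresholds described above can be expressed as a simple check (the variable names are hypothetical, and the sketch ignores the separate requirements that the business be for-profit and do business in California):

```python
def ccpa_thresholds_met(annual_gross_revenue_usd: float,
                        ca_consumers_households_or_devices_per_year: int,
                        share_of_revenue_from_selling_pi: float) -> bool:
    """A business is covered if it meets any one of the three size thresholds."""
    return (annual_gross_revenue_usd > 25_000_000
            or ca_consumers_households_or_devices_per_year >= 50_000
            or share_of_revenue_from_selling_pi >= 0.5)


print(ccpa_thresholds_met(10_000_000, 60_000, 0.1))  # True: volume threshold met
print(ccpa_thresholds_met(5_000_000, 10_000, 0.2))   # False: no threshold met
```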

The CCPA establishes five key rights for California residents[66] whose information has been collected by a covered business:

  • the right to know what personal information has been collected about the individual and the right to access ‘[t]he specific pieces of personal information’ collected in the past 12 months;
  • the right to have the business delete the personal information, subject to limitations including free speech, compliance with subpoenas, research in the public interest and internal uses a consumer would reasonably expect;
  • the right to opt out of the sale of personal information to third parties (with extra protections for children’s information);
  • the right to be free from price or service discrimination despite exercising these rights; and
  • the right to sue for at least US$100 per consumer and per incident, if the business’s failure to maintain reasonable security results in information breaches (although the consumer must first give the business targeted written notice and an opportunity to cure the violation).

The CCPA also empowers the California Attorney General to bring enforcement actions against covered businesses, with civil penalties of US$2,500 for unintentional violations and US$7,500 for intentional violations.

A business may need to take a number of steps to comply with the CCPA, such as:

  • updating or creating a privacy disclosure on its website. A business must provide detailed information on its website including what personal information it collects about consumers and the purposes for which it will use that information, how a consumer may submit requests concerning his or her personal information, and the categories of consumer personal information that were actually collected and sold or disclosed for business purposes in the preceding 12 months;[67]
  • posting a clear and conspicuous ‘Do Not Sell My Personal Information’ link on its website;[68]
  • establishing a process for responding to verifiable consumer requests within a 45-day response window;[69] and
  • ensuring that appropriate agreements with service providers are in place.

Nevada

In May 2019, Nevada expanded its online privacy statute to include a ‘do not sell’ component.[70] Under its pre-existing 2017 online privacy statute, Nevada already required online services and websites collecting specific types of personal information from Nevada consumers (first and last name, contact information, personally identifiable information, etc) to disclose what types of covered information they collected, the kinds of third parties with which they shared covered information, and how consumers could review and request updates to covered information about themselves.[71]

Nevada’s new amendment, which takes effect in October 2019, adds a requirement that covered businesses respond to requests from consumers not to sell covered personal information.[72]

The Nevada law exempts financial institutions subject to the GLBA and healthcare institutions subject to HIPAA. The Nevada law is also more narrowly tailored than the CCPA: ‘sale’ is limited to an exchange of personal information for money, rather than also covering non-monetary consideration, and the definition of ‘consumer’ does not include all residents of Nevada.[73]

Sector-specific laws and regulations

States also have implemented statutes and regulations protecting sensitive information on a sectoral basis, along similar lines as the federal government. In many cases, these state laws provide more robust and detailed regulation than corresponding federal laws.

Financial information

The privacy provisions of the GLBA do not pre-empt state laws that offer stronger consumer protections,[74] and a number of states have enacted their own financial privacy statutes. For example, the California Financial Information Privacy Act prohibits financial institutions from disclosing non-public personal information relating to consumers to nonaffiliated third parties without obtaining affirmative ‘opt-in’ consent from the consumers, rather than the ‘opt-out’ approval permitted by the GLBA.[75] Similarly, Vermont has adopted a financial privacy regulation (the Vermont Rule) that bars a financial institution from disclosing non-public personal information about a resident customer to any non-affiliated third party unless the customer has been provided with a notice and an opportunity to authorise, or ‘opt in’ to, the disclosure, and the consumer opts in.[76]

Health information

A number of states have medical privacy laws. These laws are not pre-empted by HIPAA to the extent that they provide greater protections. California’s Confidentiality of Medical Information Act,[77] for example, applies to ‘[a]ny business organized for the primary purpose of maintaining medical information . . . in order to make the information available to an individual or to a provider of healthcare’, a broader scope than HIPAA.[78] It also requires ‘each employer who receives medical information [to] establish appropriate procedures to ensure the confidentiality and protection from unauthorized disclosure and use of that information’.[79] The Texas Medical Privacy Act, similarly, applies to a broader range of entities than HIPAA,[80] does not allow a patient’s health information to be used for marketing without the patient’s consent, and prohibits re-identification of de-identified information.[81]

Certain states have privacy laws relating to HIV/AIDS status,[82] mental health records[83] and substance abuse records.[84] In certain states, recipients of such sensitive information must receive a written notice explaining how unauthorised re-disclosure is restricted by state law.[85] Some states also limit genetic testing of employees and the disclosure of genetic test results, and require ‘each employer who receives medical information [to] establish appropriate procedures to ensure the confidentiality and protection from unauthorised disclosure and use of that information’.[86]

Finally, a number of state breach laws address the unauthorised access or acquisition of treatment, diagnosis or other medical or health insurance information.[87] Some of these breach laws also state that if HIPAA applies to a business, or if the business complies with HIPAA, then that state’s breach notice laws will not apply.[88] In California, healthcare providers must report breaches of medical information to impacted patients and the California Department of Public Health within 15 days of discovery (compared to 60 days under HIPAA).[89]

Biometric information

Several states have enacted protections for biometric information, starting with Illinois’ Biometric Information Privacy Act (BIPA) of 2008, which served as a blueprint for Texas and Washington laws.[90] California’s recently passed CCPA also addresses privacy protection for biometric information.[91]

The BIPA covers fingerprints, voiceprints and facial geometry, as well as retina, iris and hand scans, but not biological samples used for scientific testing, donated organs or tissues, or information used for healthcare treatment.[92] The statute requires private entities to acquire informed consent before collecting and before disclosing biometrics, bars them from selling or profiting from biometrics, and obliges them to securely store, transmit and ultimately destroy biometrics no later than three years after the last interaction with the associated individual or after the purpose of collection has terminated, whichever occurs sooner.[93]
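
The retention limit can be reduced to a simple ‘whichever occurs sooner’ calculation, sketched below with hypothetical dates:

```python
from datetime import date


def bipa_destruction_deadline(last_interaction: date, purpose_satisfied: date) -> date:
    """Destroy biometric identifiers by the earlier of (a) three years after the
    individual's last interaction and (b) the date the purpose of collection ends."""
    try:
        three_years_after = last_interaction.replace(year=last_interaction.year + 3)
    except ValueError:  # last interaction fell on 29 February of a leap year
        three_years_after = last_interaction.replace(year=last_interaction.year + 3, day=28)
    return min(three_years_after, purpose_satisfied)


# Hypothetical example: the purpose ends before the three-year mark, so it controls.
print(bipa_destruction_deadline(date(2019, 6, 1), date(2021, 1, 15)))  # 2021-01-15
```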

Unlike the Texas and Washington biometric privacy laws, the BIPA also creates a private right of action, with minimum damages of US$1,000 per negligent violation and US$5,000 per intentional or reckless violation.[94] In February 2019, the Illinois Supreme Court decided that plaintiffs need not show ‘some actual injury or harm’ to sue for violation of their rights under the BIPA, rejecting any characterisation of an individual’s right to ‘biometric privacy vanish[ing] into thin air’ as a ‘mere “technicality”’.[95]

Consumer report information

The FCRA is not the exclusive source of restrictions on the use of information gathered by consumer reporting agencies. State laws also may apply and in some cases may be more restrictive than the FCRA. California, for example, has two applicable statutes. California’s Investigative Consumer Reporting Agencies Act (ICRAA) limits the circumstances in which a person can initiate an investigative consumer report, requires that a consumer be provided with an option to receive a copy of the report, constrains the information that can be included in the report, and makes investigative consumer reporting agencies liable for breaches.[96] California has also passed the Consumer Credit Reporting Agencies Act (CCRAA), which includes provisions for consumers to request that security alerts be placed in their credit reports to notify the report recipient that the consumer may have been the victim of identity fraud.[97]

Protection of information

State legislatures also have passed laws requiring that personal information be protected. These laws often cover a greater number of areas and contain broader requirements than comparable federal laws against misuse.

Procedures and practices

The California Security Safeguard Act[98] applies to any company that owns or licenses unencrypted ‘personal information’ about California residents. The Security Safeguard Act requires these companies to implement and maintain ‘reasonable security procedures and practices’ to protect such information. Texas and Rhode Island[99] followed with similar laws. These laws apply to businesses that maintain unencrypted personal information about their employees.

Nevada enacted an information security law that mandates encryption for the transmission of personal information.[100] Specifically, the Nevada encryption statute prohibits businesses in Nevada from transferring ‘any personal information through an electronic transmission’, except via facsimile, ‘unless the business uses encryption to ensure the security of electronic transmission’.[101] The ‘personal information’ covered by the Nevada encryption law is the same information subject to that state’s security breach notification law:

a natural person’s first name or first initial and last name in combination with [his or her] . . . . (1) Social Security number[;] (2) driver’s license number or identification card number[; or] (3) account number, credit card number or debit card number, in combination with any required security code, access code or password.[102]

The Nevada encryption law states that entities ‘doing business in th[e] [s]tate’ are subject to the law, but does not define the scope of this phrase.[103]
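
As a purely illustrative sketch, one common way to satisfy an encryption-in-transit requirement of this kind is to transmit personal information only over a TLS-protected (HTTPS) channel. The endpoint URL below is hypothetical, and the request itself is left commented out:

```python
# Illustrative only: preparing a TLS-protected (HTTPS) transmission of personal
# information using the standard library. The endpoint is hypothetical.
import json
import urllib.request

record = {"first_name": "Jane", "last_name": "Doe", "drivers_license": "X1234567"}

request = urllib.request.Request(
    "https://example.com/api/records",   # hypothetical HTTPS endpoint
    data=json.dumps(record).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# urlopen over an https:// URL negotiates TLS with certificate verification by
# default, so the payload would be encrypted in transit:
# response = urllib.request.urlopen(request)
```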

The Massachusetts Office of Consumer Affairs and Business Regulations has adopted a rule that requires ‘[e]very person that owns, licenses, stores or maintains personal information about a [Massachusetts] resident [to] develop, implement, maintain and monitor a comprehensive, written information security program applicable to any records containing such personal information’.[104] This rule requires a covered entity to implement, maintain and update a risk-based security programme tailored to the size, complexity and nature and scope of its activities. The rule also prescribes particular requirements that an organisation’s risk-based security programme must include, such as ‘[t]aking reasonable steps to select and retain third-party service providers that are capable of’ – and requiring them by contract to – implement and maintain ‘appropriate security measures to protect such personal information’.[105] In addition, the rule prescribes several specific required elements of a company’s comprehensive information security programme relating to its computer systems, including any wireless system.[106]

Security breach notification

All 50 US states have enacted security breach notification requirements.[107] These laws generally require organisations to expeditiously notify individuals when unencrypted computerised personal information has been, or is reasonably believed to have been, accessed or acquired by an unauthorised person. In addition, several of these laws also apply to security breach incidents affecting personal information stored in any medium, including paper records, rather than only computerised records.[108] While state laws vary in their nuances, personal information that commonly triggers notification includes an individual’s name in combination with one or more of the following:

  • a national or government-issued identification number, such as a Social Security number (SSN), individual taxpayer identification number, driver’s licence number or passport number;
  • a financial account number in combination with a security code, access code, password or PIN that is necessary to access the account;
  • health or medical information;
  • health insurance policy number or a subscriber identification number; or
  • online credentials, such as username and password to an online account.

Most of these laws also require notification of state regulators as well as individuals.[109] Congress has considered, but not passed, bills that would enact national requirements to notify individuals about a breach of security.[110]
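
The common trigger described above lends itself to a simple check. The sketch below is a simplification (actual statutes differ on definitions, encryption safe harbours and combinations such as account numbers plus access codes), and the field names are hypothetical:

```python
SENSITIVE_ELEMENTS = (
    "ssn", "taxpayer_id", "drivers_license", "passport_number",
    "financial_account_with_access_code", "health_information",
    "health_insurance_id", "online_credentials",
)


def notification_likely_triggered(record: dict, data_encrypted: bool) -> bool:
    """Simplified: an individual's name combined with at least one sensitive
    data element, where the compromised data was not encrypted."""
    if data_encrypted:
        return False
    has_name = bool(record.get("name"))
    return has_name and any(record.get(element) for element in SENSITIVE_ELEMENTS)


print(notification_likely_triggered({"name": "Jane Doe", "ssn": "123-45-6789"}, data_encrypted=False))  # True
print(notification_likely_triggered({"name": "Jane Doe", "ssn": "123-45-6789"}, data_encrypted=True))   # False
```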

Social Security number laws

At least 32 states, as well as Guam and Puerto Rico, have passed laws restricting the use of SSNs.[111] Many of these laws prohibit a person or entity from:

  • publicly posting or publicly displaying in any manner an individual’s SSN;
  • printing SSNs on any card required for the individual to receive products or services provided by the person or entity;
  • requiring SSNs to access an internet website, unless a password or other authentication device is also required;
  • requiring an individual to transfer his or her SSN over the internet unless the connection is secure or the number is encrypted; and
  • printing a number known to be the individual’s SSN on materials that are mailed to the individual unless required by federal or state law.[112]

Some states, such as Connecticut, Massachusetts, Michigan, New Mexico, New York and Texas, also require that an organisation adopt a policy designed to ensure that SSNs are properly safeguarded.[113] Examples of policy components include implementing information security mechanisms to protect SSNs, limiting access to SSNs within the organisation, providing for the proper disposal of materials containing SSNs, and penalising policy violations.[114] The scope of these laws varies markedly; for instance, Massachusetts imposes these obligations on any organisation handling any personal information of Massachusetts residents, while the Texas law applies only to entities that require a customer to disclose his or her SSN to complete a transaction.[115]

Monitoring

Businesses that intend to engage in surveillance of communications also must be aware of relevant state law. The federal wiretap statute discussed above, ECPA, does not pre-empt state wiretap statutes that offer privacy protections greater than those available under federal law, and many states do restrict surveillance activities. For example, 15 states (California, Connecticut, Delaware, Florida, Illinois, Maryland, Massachusetts, Michigan, Montana, Nevada, New Hampshire, Oregon, Pennsylvania, Vermont and Washington) all generally require both parties’ consent in at least some circumstances before a conversation may be overheard, intercepted, or recorded.[116] In addition to state wiretap laws, some states prohibit video surveillance in certain locations.[117] Two states, Connecticut and Delaware, have laws requiring private employers to notify employees if their email or internet access is being monitored.[118] Two others, Colorado and Tennessee, require public and state employers to adopt written policies on email monitoring of their employees.[119]

Unfair or deceptive acts or practices

Similar to the federal approach to regulating the misrepresentation of information, all states have adopted laws that regulate unfair or deceptive acts or practices (UDAP).[120] Many of these laws are modelled after section 5 of the FTC Act and are often referred to as ‘Mini-FTC Acts’. States have used these laws to bring enforcement actions against companies, alleging failures to protect the security of customer information.

For instance, Ohio’s Attorney General relied on that state’s UDAP law[121] to bring a complaint against DSW, Inc, a large shoe retailer, after it experienced a security breach affecting customers’ personal information and failed to identify and notify all affected customers.[122] The Attorney General claimed that DSW’s conduct was both deceptive and unfair, although the complaint made no allegation that DSW had made any commitments to consumers regarding the disclosure of information security incidents; at the time, Ohio did not have a security breach notification law in force.[123]


Footnotes