
Introductory Note from Acting Secretary Su

Advancements in artificial intelligence are the product of human innovation, not the inevitable outcome of some mechanical process. So, too, will human agency determine how AI is used, for whose benefit, and to what ends. We should imagine a world where the human creativity that fuels AI is applied to make life better for working people and where AI is deployed in the workplace to improve the quality of jobs so that they are safer, more fulfilling, and more rewarding. This approach could introduce AI to the workplace in ways that would combat poverty, increase support for workers and their families, and expand worker autonomy. And AI-powered tools could amplify workers' voice in the workplace, including through their right to organize. In short, we should think of AI as a potentially powerful technology for worker well-being, and we should harness our collective human talents to design and use AI with workers as its beneficiaries, not as obstacles to innovation. AI's promise of a better world cannot be fulfilled without making it a better world for workers.

As part of President Biden's Executive Order, the Department of Labor has developed principles and best practices for AI developers and employers to center the well-being of workers in the development and deployment of AI in the workplace and to value workers as the essential resources they are, even—and especially—in a moment of technological change. In practice, this approach means engaging workers in the development and deployment of AI for the workplace, creating a win-win for both employees and employers. It requires taking proactive measures to retrain and reallocate workers in order to prevent worker displacement from the outset. It calls for both private sector employers and government to play their part to train workers in the new skills needed for an AI economy. And workers should share in the benefits and rewards of AI's adoption in their workplaces.

The Department of Labor will remain vigilant in protecting workers from the potential harms of AI, while at the same time recognizing that this is a moment of tremendous opportunity. Whether AI in the workplace creates harm for workers and deepens inequality or supports workers and unleashes expansive opportunity depends, in large part, on the decisions we make. The stakes are high. But with these principles and best practices, guided by President Biden's and Vice President Harris's leadership, we can seize this moment and promote innovation and prosperity for all.

Julie A. Su, Acting Secretary, United States Department of Labor

DISCLAIMER: "Artificial Intelligence and Worker Well-being: Principles and Best Practices for Developers and Employers" is a document published by the U.S. Department of Labor. It is responsive to President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. It is intended to aid employers and Artificial Intelligence (AI) developers in mitigating AI's potential harms to workers' well-being and maximizing its potential benefits. The "Artificial Intelligence and Worker Well-being: Principles and Best Practices for Developers and Employers" is non-binding and does not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or international instrument. It does not constitute binding guidance for the public or Federal agencies and therefore does not require compliance with the principles described herein. It also is not determinative of what the U.S. government's position will be in any international negotiation. These principles are not intended to, and do not, prohibit or limit any lawful activity of a government agency, including law enforcement, national security, or intelligence activities. Any links to non-federal websites on this page provide additional information that is consistent with the intended purpose of this federal site, but linking to such sites does not constitute an endorsement by the U.S. Department of Labor of the information or organization providing such information. For more information, please visit https://www.dol.gov/general/disclaim.

Introduction

Since taking office, President Biden, Vice President Harris, and the entire Biden-Harris Administration have moved with urgency to harness AI's potential to spur innovation, advance opportunity, and transform the nature of many jobs and industries, while also protecting workers from the risk that they might not share in these gains. As part of this commitment, the AI Executive Order directed the Department of Labor to create Principles and Best Practices for developers and employers on the use of AI in the workplace. These Principles and Best Practices provide a roadmap for developers and employers on how to harness AI technologies for their businesses while ensuring workers benefit from the new opportunities created by AI and are protected from its potential harms.

The precise scope and nature of how AI will change the workplace remains uncertain. AI can positively augment work by replacing and automating repetitive tasks or assisting with routine decisions, which may reduce the burden on workers and allow them to better perform other responsibilities. Consequently, the introduction of AI-augmented work will require workers to gain new skills and training to use AI in their day-to-day work. AI will also continue creating new jobs, including those focused on the development, deployment, and human oversight of AI. But AI-augmented work also poses risks if workers no longer have autonomy and direction over their work or their job quality declines. The risks of AI for workers are greater if it undermines workers' rights, embeds bias and discrimination in decision-making processes, or makes consequential workplace decisions without transparency, human oversight, and review. There are also risks that workers will be displaced entirely from their jobs by AI.

In recent years, unions and employers have come together to collectively bargain new agreements setting sensible, worker-protective guardrails around the use of AI and automated systems in the workplace. In order to provide AI developers and employers across the country with a shared set of guidelines, the Department of Labor developed "Artificial Intelligence and Worker Well-being: Principles and Best Practices for Developers and Employers" as directed by President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, with input from workers, unions, researchers, academics, employers, and developers, among others, and through public listening sessions.

The Department's AI Principles for Developers and Employers include:

  • [North Star] Centering Worker Empowerment: Workers and their representatives, especially those from underserved communities, should be informed of and have genuine input in the design, development, testing, training, use, and oversight of AI systems for use in the workplace.
  • Ethically Developing AI: AI systems should be designed, developed, and trained in a way that protects workers.
  • Establishing AI Governance and Human Oversight: Organizations should have clear governance systems, procedures, human oversight, and evaluation processes for AI systems for use in the workplace.
  • Ensuring Transparency in AI Use: Employers should be transparent with workers and job seekers about the AI systems that are being used in the workplace.
  • Protecting Labor and Employment Rights: AI systems should not violate or undermine workers' right to organize, health and safety rights, wage and hour rights, and anti-discrimination and anti-retaliation protections.
  • Using AI to Enable Workers: AI systems should assist, complement, and enable workers, and improve job quality.
  • Supporting Workers Impacted by AI: Employers should support or upskill workers during job transitions related to AI.
  • Ensuring Responsible Use of Worker Data: Workers' data collected, used, or created by AI systems should be limited in scope and location, used only to support legitimate business aims, and protected and handled responsibly.

Applying The Principles And Best Practices 

To help employers and AI developers implement these Principles, the Department of Labor is issuing this set of Best Practices. The Principles and Best Practices apply to the development and deployment of AI systems in the workplace, and should be considered during the whole lifecycle of AI – from design to development, testing, training, deployment and use, oversight, and auditing. Tailored for the workplace, these guidelines are intended to help AI developers and employers as they establish high-road practices today and over the long term. They are not, however, intended as a substitute for existing or future federal or state laws and regulations. The Principles and Best Practices are applicable to all sectors and intended to be mutually reinforcing, though not all Principles and Best Practices will apply to the same extent in every industry or workplace. The Principles and Best Practices are not intended to be an exhaustive list but instead a guiding framework for businesses. AI developers and employers should review and customize the best practices based on their own context and with input from workers.

This document provides examples of how an employer might choose to implement the overall Principles. An employer can commit to these Principles without implementing every best practice listed here, depending on what is appropriate for its workplace. There may also be effective practices that are not enumerated in this document; the practices below are simply examples employers may choose to follow.

Definitions

Throughout the Principles and Best Practices, the following definitions apply:

Artificial Intelligence (AI): A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.

AI System: Any data system, software, hardware, application, tool, or utility that operates in whole or in part using AI.

Algorithm: A set of instructions that can be followed by a computer to accomplish some end.

Automated System: Software and algorithmic processes, including AI, that are used to automate workflows and help people complete tasks or make decisions. This could include software or a process that uses computation, in whole or in part, to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals.

Algorithmic Discrimination: Instances when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their actual or perceived race, color, ethnicity, sex (including based on pregnancy, childbirth, and related conditions; gender identity; intersex status; and sexual orientation), religion, age, national origin, limited English proficiency, disability, veteran status, genetic information, or any other classification protected by law.

Electronic Monitoring: The technologies and methods used to electronically observe and track workers, collecting data about workers' activities, location, performance, or behavior. Data created by electronic monitoring can be an important source of training data for AI systems and a key input for automated systems.

Generative AI: The class of AI models that emulate the structure and characteristics of input data in order to generate derived synthetic content. This can include images, videos, audio, text, and other digital content.

Impact Assessment: The process of assessing and evaluating an AI system's requirements for legal compliance, accountability, and combating harmful bias, and examining the system's impacts on jobs, safety and health, liability, and security, among others.

Significant Employment Decisions: Significant employment decisions include decisions related to hiring, job assignment/placement, reassignment, promotions, compensation, scheduling, performance standards, discipline, discharge, and any other decisions that affect the terms and conditions of employment.

Underserved Communities: Those populations as well as geographic communities that have been systematically denied the opportunity to participate fully in aspects of economic, social, and civic life, as defined in Executive Orders 13985 and 14020. Examples of underserved communities include people of color, Indigenous individuals, LGBTQI+ individuals, women, immigrants, veterans, individuals with disabilities, individuals in rural communities, individuals without a college degree, individuals with or recovering from a substance use disorder, justice-involved individuals, and opportunity youth.

Worker Data: Personally identifiable information (as defined in the Office of Management and Budget Circular A-130) about a particular worker or job seeker. For example, worker data includes, but is not limited to, personal identity information, biometric information, health, medical, lifestyle, and wellness information, and online information such as social media activity. Worker data also includes information related to workplace activities, regardless of how the information is collected, inferred, or obtained, such as human resources information, productivity and efficiency data, worker communications, device usage and data, audio-video data, and other information created by or associated with a worker in the workplace. The Principles and Best Practices below address how employers should collect, analyze, and store worker data when using AI systems.

Worker-Impacting AI: Worker-impacting AI includes AI that has the potential to significantly impact workers. Worker-impacting AI includes, but is not limited to, AI that may impact significant employment decisions; AI which is used in workplace monitoring; AI which may impact the health or safety of workers; AI which may violate or undermine workers' rights, including their rights to organize; or AI which significantly impacts the job functions or job conditions of workers.

Centering Worker Empowerment: Workers and their representatives, especially those from underserved communities, should be informed of and have genuine input in the design, development, testing, training, use, and oversight of AI systems in the workplace.

All of the principles and best practices detailed below aim to help employers realize the benefits of new AI technologies for their organization while also advancing workers' well-being. AI developers and employers that wish to fully embed the principles below into their actions should begin by centering workers' experiences and empowerment – especially those from underserved communities – throughout the cycle of AI design, development, testing, training, procurement, deployment, use, and oversight. Integrating early and regular input from workers into the adoption and use of AI improves workers' job quality and enables businesses to deliver on desired outcomes. When workers are represented by a union, employers should bargain in good faith on the use of AI and electronic monitoring in the workplace.

Ethically Developing AI: AI systems should be designed, developed, and trained in a way that protects workers.

Some developers of AI technologies have established and published a set of principles and standards that they commit to follow when developing new AI technologies. Businesses may use these standards and internal review processes to ensure AI and automated systems brought to market meet safety, security, and trustworthiness standards for their customers, customers' workers, and the public. The end use and impact of AI technologies can only be as good as the foundation upon which they are built. That is why clear ethical standards and review processes are crucial to enhancing workers' well-being. Below are some recommended best practices for AI developers as they create and implement ethical standards of development.

  • Developers should establish standards so that any AI products brought to market protect workers' civil rights, mitigate risks to workers' safety, and meet performance requirements. Developers may restrict or contractually prohibit uses of an AI system to limit risks to workers' rights and safety, such as high error rates in certain workplace contexts or risks of discrimination due to biased performance across race, gender, age, disability, or other protected categories. Developers should design AI systems that can enhance civil rights, equity, and safety and that are tested for accuracy, validity, and reliability.
  • Developers should conduct impact assessments and independent audits of worker-impacting AI they bring to market, and publish their results in an accessible format, enabling employers to understand the efficacy and reliability of products. Developers' impact assessments should capture the intended purpose for the AI and its expected benefit (supported by metrics or qualitative evidence), evaluate error rates, assess the risks of potential algorithmic discrimination or bias, assess accessibility, identify impacts on workers' labor and employment rights, and document negative impacts on workers' job quality and well-being. One illustrative way to record such an assessment is sketched after this list.
  • Developers should ensure that jobs (domestic or abroad) created to review and refine the data inputs used to train AI systems meet basic human rights and domestic and international labor standards, and are good-quality jobs. Those jobs should also have processes in place to enable workers to report issues, including potential violations of labor and employment rights or the unauthorized use of intellectual property under current standards, and provide feedback.
  • Developers should design worker-impacting AI systems to allow for employers' ongoing monitoring, human oversight, and retrospective review of AI-generated recommendations or data that inform decisions. This includes ensuring that the operations and outcomes of worker-impacting AI systems are understandable to non-technical users and can be examined during independent evaluations.
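For developers looking for a concrete starting point, the sketch below shows one hypothetical way to capture an impact assessment as a machine-readable record that can be published in an accessible format. The schema, field names, and example values are invented for illustration; they do not represent a required or official format.

```python
# Hypothetical sketch of a machine-readable impact assessment record.
# Every field name and example value below is invented for illustration.

from dataclasses import dataclass, asdict
import json

@dataclass
class ImpactAssessment:
    system_name: str
    intended_purpose: str
    expected_benefit: str          # supported by metrics or qualitative evidence
    error_rates: dict              # e.g., {"false_positive": 0.04}
    discrimination_risks: list     # potential algorithmic discrimination or bias
    accessibility_findings: list
    labor_rights_impacts: list
    job_quality_impacts: list

assessment = ImpactAssessment(
    system_name="resume-screener-v2",  # invented example system
    intended_purpose="Rank applications for recruiter review",
    expected_benefit="Reduced screening time in an internal pilot",
    error_rates={"false_negative": 0.07, "false_positive": 0.04},
    discrimination_risks=["Lower match rates for non-traditional career paths"],
    accessibility_findings=["Applicant portal tested with screen readers"],
    labor_rights_impacts=["Human recruiter makes the final decision"],
    job_quality_impacts=["Shifts recruiter time from triage to interviews"],
)

# Publish the record in an accessible format alongside independent audit results.
print(json.dumps(asdict(assessment), indent=2))
```

A structured record like this can be exported to JSON or posted publicly so employers can compare a product's documented purpose and error rates against their own use case.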

Establishing AI Governance And Human Oversight: Organizations should have clear governance systems, procedures, human oversight, and evaluation processes for AI systems for use in the workplace.

Across any organization, numerous teams may seek to use different AI technologies to improve performance and outcomes within the workplace. Establishing governance standards and processes will help organizations improve outcomes and mitigate risks for their business and workforce when using a new worker-impacting AI system. In particular, employers could benefit from ensuring appropriate human oversight of worker-impacting AI systems across an organization, especially those related to hiring, performance standards, compensation, scheduling, discipline, and discharge. Below are some best practices for employers on key elements of governance, oversight, and evaluation of worker-impacting AI systems.

  • Employers should establish governance structures, accountable to leadership, to produce guidance and provide coordination to ensure consistency across organizational components when adopting and implementing worker-impacting AI systems. The governance structure should be sufficiently empowered within an organization and should incorporate input from workers and their representatives into decision-making processes. Governance structures may include new roles or entities responsible for overseeing AI systems as a whole, and may issue clear procurement policies, standards for new AI technologies, or ongoing monitoring requirements for worker-impacting AI systems. Governance structures may also review impacts and evaluations of worker-impacting AI systems against their intended uses and potential risks, including potential impacts on civil rights and labor rights.
  • Employers should provide appropriate training about AI systems in use to as broad a range of employees as possible throughout the enterprise—from executives and managers to supervisors and frontline workers. Training should include, as applicable, an overview of the business purpose for the AI system, appropriate ways to interpret and act on its outputs, processes for raising concerns, and how to implement the system to enhance workers' well-being.
  • Employers should not rely solely on AI and automated systems, or information collected through electronic monitoring, to make significant employment decisions. Rather, employers should ensure meaningful human oversight of any such decisions supported by AI systems. Individuals who make or oversee employment decisions that are influenced or informed by AI outputs should be provided with appropriate training to understand how to appropriately interpret AI outputs.
  • Employers should identify and document the types of significant employment decisions informed by AI and automated systems. If AI or algorithmic recommendations are a principal basis for significant employment decisions made by managers or other parties, employers should provide job seekers and workers a meaningful and plain language explanation of the AI system's role in the decision and the data relied on to make the decision.
  • When using AI and automated systems, in whole or in part, to make significant employment decisions, employers should maintain and document procedures for appeal, human consideration, and remedy of decisions which adversely impact employees. Human consideration and fallback mechanisms to a human or previously working system should be accessible, equitable, effective, maintained, accompanied by appropriate operator training, and should not impose an unreasonable burden on employees.
  • Employers should ensure their worker-impacting AI systems are regularly independently audited and should publicly report information on rights and safety evaluations for worker-impacting AI (with appropriate safeguards for privacy and security and without disclosing proprietary business information). These evaluations will aid developers and employers in tracking the performance of the system against its use case, ensuring the system is performing safely and effectively, and correcting errors or harms against workers and job seekers.

Ensuring Transparency In AI Use: Employers should be transparent with workers and job seekers about the AI systems that are being used in the workplace.

Today, many workers are recognizing and responding to the need for greater transparency over the implementation of AI and automated systems in the workplace. Some unions are securing collective bargaining provisions that give workers a right to advance notice of the use of new technology in the workplace. This notice may include receiving information about an AI system, how it will be used, and how it will impact workers; it may also include obtaining informed consent from workers before deploying AI systems. Employers that provide greater disclosure and transparency about how workers will interact with AI and automated systems will foster greater trust and job security, prepare workers to effectively use AI, and open channels for workers to provide input to improve the technology or correct errors. Employers should consider the following best practices for transparency and disclosure about how worker-impacting AI systems are being used, what data is collected, how the data is used, and for what purpose before deployment.

  • Employers should provide workers and their representatives advance notice and appropriate disclosure if they intend to use worker-impacting AI. This disclosure should include an explanation of the purpose of the AI system; how job seekers or workers will engage with the worker-impacting AI system; and how the AI systems will be used to monitor workers, direct work, or inform significant employment decisions.
  • Developers and employers should ensure that workers and their representatives are informed – in a clear and accessible manner – about what data will be collected and stored about them and for what purpose that data will be used by AI systems. When workers are subject to electronic monitoring, they should be provided conspicuous notification when monitoring is occurring and be given prior notice of what activities will be monitored and the purpose of the monitoring.
  • Employers should ensure, where feasible, through well-documented and plain language procedures, that workers and their representatives can request, view, and submit corrections to individually identifiable data used to make significant employment decisions. The procedures should, where feasible, include the ability for workers and their representatives to dispute inaccurate data without fear of retaliation.

Protecting Labor And Employment Rights: AI systems should not violate or undermine workers' right to organize, health and safety rights, wage and hour rights, and anti-discrimination and anti-retaliation protections.

The adoption of AI in the workplace has the potential to increase worker productivity and improve workers' experience on the job. However, these advancements may also, in some instances, undermine or violate workers' labor and employment rights. For example, while some research highlights the potential of AI systems to improve the health and safety of workers in certain high-risk industries, other reports have highlighted how AI monitoring and management tools may disincentivize practices that reduce injury on the job, such as taking rest breaks or working at a safe pace and speed. Employers should be mindful of these opportunities and risks and take steps to meet, at a minimum, their obligations under the law while using AI systems. The following best practices detail how employers should uphold labor and employment rights as they implement AI systems in the workplace.

  • Employers should not use AI systems that undermine, interfere with, or have a chilling effect on labor organizing and other protected activities, such as workers speaking with one another about their wages and working conditions. Employers must comply with legal obligations under the National Labor Relations Act; they should not use automated systems to limit or detect labor organizing or workers' other protected activities aimed at improving their working conditions, and should not use electronic monitoring in nonwork areas or in a manner that limits workers from taking breaks together or otherwise engaging with one another (see more from the National Labor Relations Board General Counsel). Employers should notify workers and their representatives of current and new technologies in the workplace that can monitor or could be perceived to monitor worker organizing activities.
  • Employers should appropriately mitigate any risks that worker-impacting AI and automated systems directly or indirectly have on health and safety outcomes, including fatigue and injury rates, among others. Employers using AI and automated systems must continue to comply with the Occupational Safety and Health Act and relevant standards and regulations. Employers should regularly evaluate the effects of AI systems whose outputs have the potential to significantly impact health and safety outcomes and avoid using systems when it is not possible to sufficiently mitigate the harms they produce.
  • Employers should not use worker-impacting AI to reduce wages, break time, or benefits that workers are legally due. Employers should ensure that AI systems used to prioritize or schedule work do not discriminate against workers who are entitled to federally protected leave, break time, or accommodations. See additional guidance explaining that employers must continue to comply with the Fair Labor Standards Act and other federal labor standards as they rely on AI and other automated systems in the workplace.
  • Prior to deployment, developers and employers should audit AI systems for disparate or adverse impacts on the basis of race, color, national origin, religion, sex, disability, age, genetic information, and other protected bases, and should make the results public. Developers and employers using AI must maintain their compliance with anti-discrimination legal requirements. Developers can minimize disparate or adverse impacts in design by ensuring the data inputs used to train AI systems, and the algorithms and machine learning models, do not reproduce bias or discrimination. Employers should continue to routinely monitor and analyze whether the use of the AI system is causing a disparate impact or disadvantaging individuals with protected characteristics, and, if so, take steps to reduce the impact or use a different tool. A minimal illustration of one such adverse-impact check appears after this list. For more information, see the Office of Federal Contract Compliance Programs' AI and Equal Employment Opportunity for Federal Contractors Guide and resources by the EEOC.
  • Developers and employers should specifically assess how the design and use of AI technologies may impact workers and job seekers with disabilities. This includes considering how AI systems may assist people with disabilities in the workplace or reduce their barriers to employment. In addition, employers must meet their legal obligations to provide applicants and workers with an effective means to request reasonable accommodations in the event that AI creates barriers to participation in employment processes. For more information, see resources by the Department's Office of Disability Employment Policy and the EEOC.
  • Employers should affirmatively encourage workers to raise concerns, individually or collectively, about the use and impact of AI, algorithmic management, or electronic monitoring on their labor or employment rights, and should affirmatively advise them that they will not be retaliated against for doing so. Consistent with their legal obligations, employers must ensure there is no retaliation against applicants and workers for asserting their labor and employment rights or raising concerns about AI.
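As a deliberately simplified illustration of the kind of adverse-impact check referenced above, the sketch below applies the "four-fifths rule" from the Uniform Guidelines on Employee Selection Procedures to hypothetical selection outcomes from an AI screening tool. The group labels and outcomes are invented; a real audit requires far more than this single heuristic.

```python
# Simplified adverse-impact check based on the "four-fifths rule": a group's
# selection rate below 80% of the highest group's rate is a signal to
# investigate further, not a legal determination. All data here is invented.

from collections import Counter

# (group, selected) outcomes produced by a hypothetical AI screening tool
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

applicants = Counter(group for group, _ in outcomes)
selected = Counter(group for group, was_selected in outcomes if was_selected)

selection_rates = {g: selected[g] / applicants[g] for g in applicants}
highest_rate = max(selection_rates.values())

for group, rate in sorted(selection_rates.items()):
    impact_ratio = rate / highest_rate
    status = "review for adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, "
          f"impact ratio {impact_ratio:.2f} -> {status}")
```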

Using AI To Enable Workers: AI systems should assist, complement, and enable workers and improve job quality.

As organizations weigh whether and how to adopt AI technologies, they should consider what systems will best assist their current workforce. Employers should implement AI systems in ways that enhance job quality for all workers, as defined by the Department of Commerce and Department of Labor's Good Job Principles. This approach will minimize automation of good jobs and can prevent some of the unintended consequences certain AI systems have been shown to have on job quality. It will also maximize the potential benefits of AI to businesses. Below are some steps employers can take to ensure AI enables work and protects job quality, which in turn can help improve operations and attract and retain workers.

  • Prior to procuring AI technologies, employers should consider how AI systems would impact specific job tasks, skills needed, job opportunities, and risks for workers. Employers should consider how the use of AI technologies can assist and complement workers and improve job quality, for example by reducing the time spent on certain tasks and providing opportunities to enhance skills. Employers should continue to engage workers and their representatives to determine how AI can further support worker productivity, performance, and well-being.
  • Employers should consider piloting the use of worker-impacting AI systems before deploying them more broadly. This should include collecting workers' input and, as appropriate, providing hands-on training for workers, with processes to learn, iterate, and improve the AI system before it is implemented across a team or workplace.
  • Employers should minimize electronic monitoring and ensure that it is limited to the least invasive means necessary to accomplish legitimate and defined business purposes. In particular, employers should be cautious about the impact of electronic monitoring on people with disabilities and other workers who may rely on accommodations in the workplace, since monitoring may lead to inaccurate assessments of their performance. Employers should refrain from monitoring private areas like locker rooms or break rooms.
  • Employers should take steps to ensure that AI used to prioritize or schedule work helps implement fair and predictable scheduling practices and does not make work more erratic or unpredictable, leave insufficient rest time between shifts, or give workers less notice of scheduling changes.
  • Employers that experience productivity gains or increased profits due to the use of AI systems should consider how workers can share in those benefits, for example through increased wages, improved benefits, increased training, fair compensation for the collection and use of worker data, or reduced working hours without loss of pay.

Supporting Workers Impacted By AI: Employers should support or upskill workers during job transitions related to AI.

As with prior periods of innovation, there inevitably will be instances when organizations restructure, repurpose, or eliminate specific job functions in their operations due to the use of AI. How these decisions are made and implemented has significant, long-lasting impacts not only for workers but also for businesses, local economies, and communities. Employers that collaborate closely with their workers and worker representatives, attempt to preserve jobs, and think holistically about AI-related job transitions and reductions will be stronger in the long term. Below are some best practices for employers as they consider how to support their workers whose jobs are at risk of displacement due to AI.

  • Employers should provide workers with appropriate training opportunities to learn how to use the AI systems that complement their work, helping prevent displacement before it happens. Providing training and professional development opportunities related to AI systems can also be an effective strategy for workforce recruitment and retention.
  • Employers should prioritize retraining and reallocating workers displaced by AI to other jobs within the organization whenever feasible.
  • Employers should seek opportunities to work with their state and local workforce systems to further support education and training partnerships for upskilling, which can help their current workforce gain new skills and provide continued matching of skilled workers with employers.

Ensuring Responsible Use Of Worker Data: Workers' data collected, used, or created by AI systems should be limited in scope and location, used only to support legitimate business aims, and protected and handled responsibly.

In our data-driven world, employers are increasingly collecting data about workers through electronic monitoring and using worker data as inputs to AI or automated systems to make business and employment decisions. Employers' collection, retention, and use of worker data, especially sensitive and personally identifiable data, can increase privacy risks for workers. Below are some steps, rooted in the Fair Information Practice Principles, that developers and employers can take to protect and responsibly handle data about workers.

  • When building AI that impacts workers, developers should design and build AI systems with safeguards that secure and protect worker data by default. Developers should establish capabilities for employers to make consent, access, and control decisions in a complex data ecosystem.
  • Employers should avoid collection, retention, and other handling of worker data that is not necessary for a legitimate and defined business purpose. Employers should also work to mitigate the risks of, and comply with relevant laws related to, the collection and retention of information regarding employee disabilities or genetic information, including any employee's family medical history. Organizations may consider designating an executive-level Data Protection Officer to oversee the collection, processing, maintenance, or use of worker data. A minimal sketch of purpose-based data minimization appears after this list.
  • Employers should secure and protect any data about workers from internal and external threats. In the case of unauthorized access of worker data, employers should promptly inform workers and their representatives about what data was affected, describe the likely consequences of the breach, and describe the steps the employer is taking, and the steps workers can take, to mitigate potential harms.
  • Employers should not share workers' data outside the employer's business and employer's agents (including with consultants, AI auditors, and a merging or acquiring business) without workers' freely given, informed, and specific consent or unless required by law.
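To make the idea of limiting worker data to a legitimate and defined business purpose concrete, the sketch below shows one hypothetical purpose-based data minimization filter applied before worker records reach an AI system. The purpose registry, field names, and record contents are invented for illustration and would need to reflect an employer's actual policies.

```python
# Minimal sketch of purpose-based data minimization: only the fields needed
# for a legitimate and defined business purpose reach an AI system. The
# purposes, field names, and record contents below are invented.

ALLOWED_FIELDS = {
    "scheduling": {"worker_id", "role", "shift_preferences"},
    "safety_analytics": {"worker_id", "incident_reports"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Drop any worker data not necessary for the stated business purpose."""
    allowed = ALLOWED_FIELDS.get(purpose)
    if allowed is None:
        raise ValueError(f"No defined business purpose: {purpose!r}")
    return {field: value for field, value in record.items() if field in allowed}

worker_record = {
    "worker_id": "W-1042",
    "role": "technician",
    "shift_preferences": ["early"],
    "medical_notes": "...",         # sensitive; unnecessary for scheduling
    "social_media_handle": "...",   # unnecessary for any defined purpose here
}

# Only scheduling-relevant fields survive; sensitive data is filtered out.
print(minimize(worker_record, "scheduling"))
```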